https://en.wikipedia.org/wiki/ICON%20%28microcomputer%29
The ICON (also the CEMCorp ICON, Burroughs ICON, and Unisys ICON, and nicknamed the bionic beaver) was a networked personal computer built specifically for use in schools, to fill a standard created by the Ontario Ministry of Education. It was based on the Intel 80186 CPU and ran an early version of QNX, a Unix-like operating system. The system was packaged as an all-in-one machine similar to the Commodore PET, and included a trackball for mouse-like control. Over time, a number of GUI-like systems appeared for the platform, based on the system's NAPLPS-based graphics system. The ICON was widely used in the mid to late 1980s, but disappeared after that time with the widespread introduction of PCs and Apple Macintoshes. History Development Origin In 1981, four years after the first microcomputers for mainstream consumers appeared, the Ontario Ministry of Education sensed that microcomputers could be an important component of education. In June the Minister of Education, Bette Stephenson, announced the need for computer literacy for all students and formed the Advisory Committee on Computers in Education to guide their efforts. She stated that: It is now clear that one of the major goals that education must add to its list of purposes, is computer literacy. The world of the very near future requires that all of us have some understanding of the processes and uses of computers. According to several contemporary sources, Stephenson was the driving force behind the project; "whenever there was a problem she appears to have 'moved heaven and earth' to get it back on the tracks." The Ministry recognized that a small proportion of teachers and other school personnel were already quite involved with microcomputers and that some schools were acquiring first-generation machines. These acquisitions were uneven, varying in brand and model not just between school boards, but among schools within boards and even classroom to classroom. Among the most popular were the Commodore PET which had a strong following in the new computer programming classes due to its tough all-in-one construction and built-in support for Microsoft BASIC, and the Apple II which had a wide variety of educational software, mostly aimed at early education. The Ministry wanted to encourage uses of microcomputers that supported its curriculum guidelines and was willing to underwrite the development of software for that purpose. However, the wide variety of machines being used meant that development costs had to be spread over several platforms. Additionally, many of the curriculum topics they wanted to cover required more storage or graphics capability than at least some of the machines then in use, if not all of them. Educational software was in its infancy, and many hardware acquisitions were made without a clear provision for educational software or a plan for use. A series of Policy Memos followed outlining the Committee's views. Policy Memo 47 stated that computers are to be
https://en.wikipedia.org/wiki/Buzzkill%20%28TV%20series%29
Buzzkill is a hidden-camera reality show which started airing in 1996 on the MTV network. The show derived its name from the slang term buzzkill, meaning a sudden undesired event that causes one's "high" or "buzz" to be diminished or lost. Each new episode was set in a different location and consisted of three separate pranks. Premise A forerunner to prank reality shows, Buzzkill was essentially a series of elaborate pranks (backed by a major television network's budget) played not only on the layman but often on celebrities and major public figures. Each prank was played by three aspiring actors from the Chicago area: Dave Sheridan (creator), Frank Hudetz, and Travis Draft. The show's most memorable moment was when Hudetz disguised himself as famous designer Isaac Mizrahi. The likeness was so uncanny that he fooled superstar Whitney Houston at an awards show; when Houston discovered the error, she felt she had been made a fool of and vowed never to appear on MTV again. The show was eventually cancelled due to litigation concerns at MTV. Because of Buzzkill, more outrageous reality shows were developed for MTV, including The Tom Green Show, Jackass, and Punk'd. The show's theme song uses the same verse and chorus melody as the GG Allin classic 'Multiple Forms of Self-Satisfaction'. Episode guide References External links Jump The Shark – Buzzkill An oral history of Buzzkill
https://en.wikipedia.org/wiki/Computer%20and%20network%20surveillance
Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer or data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly and may be carried out by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agency. Computer and network surveillance programs are widespread today, and almost all Internet traffic can be monitored. Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious or abnormal activity, and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fear has led to numerous lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". Network surveillance The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet. For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) be available for unimpeded, real-time monitoring by Federal law enforcement agencies. Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network. Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets, so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important/useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that Federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. These technologies can be used both by intelligence agencies and for illegal activities. There is far too much data gathered by these packet sniffers for human inve
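As a minimal illustration of the capture stage a packet sniffer performs, the following Python sketch (my addition, not from the article; Linux-specific and requiring root privileges) opens a raw socket and prints the Ethernet header of a few captured frames:

    # Minimal packet-capture sketch (Linux only; run as root).
    # Opens a raw AF_PACKET socket bound to all protocols (ETH_P_ALL = 0x0003)
    # and prints the addresses and type of each captured Ethernet frame.
    import socket

    ETH_P_ALL = 0x0003
    sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))

    for _ in range(5):                      # capture five frames, then stop
        frame, _addr = sniffer.recvfrom(65535)
        dst, src = frame[0:6].hex(":"), frame[6:12].hex(":")
        ethertype = int.from_bytes(frame[12:14], "big")
        print(f"{src} -> {dst} ethertype=0x{ethertype:04x} len={len(frame)}")

A real capture appliance adds filtering, reassembly, and storage on top of this raw interception step.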
https://en.wikipedia.org/wiki/Webcam
A webcam is a video camera which is designed to record or stream to a computer or computer network. They are primarily used in video telephony, live streaming and social media, and security. Webcams can be built-in computer hardware or peripheral devices, and are commonly connected to a device using USB or wireless protocols. Webcams have been used on the Internet as early as 1993, and the first widespread commercial one became available in 1994. Early webcam usage on the Internet was primarily limited to stationary shots streamed to web sites. In the late 1990s and early 2000s, instant messaging clients added support for webcams, increasing their popularity in video conferencing. Computer manufacturers also started integrating webcams into laptop hardware. In 2020, the COVID-19 pandemic caused a shortage of webcams due to the increased number of people working from home. History Early development (early 1990s) First developed in 1991, a webcam was pointed at the Trojan Room coffee pot in the Cambridge University Computer Science Department (initially operating over a local network instead of the web). The camera was finally switched off on August 22, 2001. The final image captured by the camera can still be viewed at its homepage. The oldest continuously operating webcam, San Francisco State University's FogCam, has run since 1994 and is still operating; it updates every 20 seconds. The SGI Indy, released in 1993, was the first commercial computer to have a standard video camera, and the first SGI computer to have standard video inputs. The maximum supported input resolution is 640×480 for NTSC or 768×576 for PAL. A fast machine is required to capture at either of these resolutions, though; an Indy with the slower R4600PC CPU, for example, may require the input resolution to be reduced before storage or processing. However, the Vino hardware is capable of DMAing video fields directly into the frame buffer with minimal CPU overhead. The first widespread commercial webcam, the black-and-white QuickCam, entered the marketplace in 1994, created by the U.S. computer company Connectix. QuickCam was available in August 1994 for the Apple Macintosh, connecting via a serial port, at a cost of $100. Jon Garber, the designer of the device, had wanted to call it the "Mac-camera", but was overruled by Connectix's marketing department; a version with a PC-compatible parallel port and software for Microsoft Windows was launched in October 1995. The original QuickCam provided 320x240-pixel resolution with a grayscale depth of 16 shades at 60 frames per second, or 256 shades at 15 frames per second. These cameras were tested on several Delta II launches using a variety of communication protocols, including CDMA, TDMA, GSM and HF. Videoconferencing via computers already existed, and at the time client-server based videoconferencing software such as CU-SeeMe had started to become popular. The first widely known laptop with an integrated webcam option, at a price point
https://en.wikipedia.org/wiki/LiveJournal
LiveJournal (), stylised as LiVEJOURNAL, is a Russian-owned social networking service where users can keep a blog, journal, or diary. American programmer Brad Fitzpatrick started LiveJournal on April 15, 1999, as a way of keeping his high school friends updated on his activities. In January 2005, American blogging software company Six Apart purchased Danga Interactive, the company that operated LiveJournal, from Fitzpatrick. Six Apart sold LiveJournal to Russian media company SUP Media in 2007; the service continued to operate out of the U.S. via a California-based subsidiary, LiveJournal, Inc., but began moving some operations to Russian offices in 2009. In December 2016, the service relocated its servers to Russia, and in April 2017, LiveJournal changed its terms of service to conform to Russian law. As with other social networks, a wide variety of public figures use the service, as do political pundits, who use it for political commentary, particularly in Russia, where it partners with the online newspaper Gazeta.ru. Features The unit of social networking on LiveJournal is quaternary (with four possible states of connection between one user and another). Two users can have no relationship, they can list each other as friends mutually, or either can "friend" the other without reciprocation. The term "friend" on LiveJournal is mostly a technical term, but because it is emotionally loaded for many people, there have been discussions in such LiveJournal communities as lj_dev and lj_biz as well as suggestions about whether the term should be used this way. A user's list of friends (friends list, often shortened to flist) will often include several communities and RSS feeds in addition to individual users. Generally, "friending" allows a user's friends to read protected entries and causes the friends' entries to appear on the user's "friends page". Friends can also be grouped together in "friends groups", allowing for more complex behavior. Features common to all accounts Each journal entry has its own web page, which includes the comments left by other users. In addition, each user has a journal page, which shows all of their most recent journal entries, along with links to the comment pages. The most distinctive feature of LiveJournal is the "friends list", which gives the site a strong social aspect in addition to the blog services. The friends list provides various syndication and privacy services, described below. Each user has a friends page, which collects the most recent journal entries of the people on their friends list. LiveJournal allows users to customize their accounts. The S2 programming language allows journal templates to be modified by members. Users may upload graphical avatars, or "userpics", which appear next to the username in prominent areas as on an Internet forum. Paid account holders are given full access to S2 management and more userpics, as well as other features. Each user also has a "User Info" page, which c
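The quaternary relationship described above is easy to model: for any ordered pair of users there are exactly four possibilities, depending on which of the two one-way "friend" edges exist. The following Python sketch is purely illustrative (my addition, not LiveJournal's actual code or API):

    # Illustrative model of LiveJournal's four friendship states.
    # Each one-way "friend" action is a directed edge between user names.
    friends = {("alice", "bob"), ("bob", "alice"), ("carol", "alice")}

    def relationship(a, b, edges):
        a_to_b, b_to_a = (a, b) in edges, (b, a) in edges
        if a_to_b and b_to_a:
            return "mutual friends"
        if a_to_b:
            return f"{a} friends {b} (unreciprocated)"
        if b_to_a:
            return f"{b} friends {a} (unreciprocated)"
        return "no relationship"

    print(relationship("alice", "bob", friends))    # mutual friends
    print(relationship("carol", "alice", friends))  # carol friends alice (unreciprocated)
    print(relationship("bob", "carol", friends))    # no relationship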
https://en.wikipedia.org/wiki/Shinkansen
The , colloquially known in English as the bullet train, is a network of high-speed railway lines in Japan. Initially, it was built to connect distant Japanese regions with Tokyo, the capital, to aid economic growth and development. Beyond long-distance travel, some sections around the largest metropolitan areas are used as a commuter rail network. It is owned by the Japan Railway Construction, Transport and Technology Agency and operated by five Japan Railways Group companies. Starting with the Tokaido Shinkansen () in 1964, the network has expanded to currently consist of of lines with maximum speeds of , of Mini-Shinkansen lines with a maximum speed of , and of spur lines with Shinkansen services. The network presently links most major cities on the islands of Honshu and Kyushu, and Hakodate on northern island of Hokkaido, with an extension to Sapporo under construction and scheduled to commence in March 2031. The maximum operating speed is (on a section of the Tōhoku Shinkansen). Test runs have reached for conventional rail in 1996, and up to a world record for SCMaglev trains in April 2015. The original Tokaido Shinkansen, connecting Tokyo, Nagoya and Osaka, three of Japan's largest cities, is one of the world's busiest high-speed rail lines. In the one-year period preceding March 2017, it carried 159 million passengers, and since its opening more than five decades ago, it has transported more than 6.4 billion total passengers. At peak times, the line carries up to 16 trains per hour in each direction with 16 cars each (1,323-seat capacity and occasionally additional standing passengers) with a minimum headway of three minutes between trains. The Shinkansen network of Japan had the highest annual passenger ridership (a maximum of 353 million in 2007) of any high-speed rail network until 2011, when the Chinese high-speed railway network surpassed it at 370 million passengers annually, reaching over 2.3 billion annual passengers in 2019. Etymology in Japanese means 'new trunk line' or 'new main line', but this word is used to describe both the railway lines the trains run on and the trains themselves. In English, the trains are also known as the bullet train. The term originates from 1939, and was the initial name given to the Shinkansen project in its earliest planning stages. Furthermore, the name , used exclusively until 1972 for trains on the Tōkaidō Shinkansen, is used today in English-language announcements and signage. History Japan was the first country to build dedicated railway lines for high-speed travel. Because of the mountainous terrain, the existing network consisted of narrow-gauge lines, which generally took indirect routes and could not be adapted to higher speeds due to technical limitations of narrow-gauge rail. For example, if a standard-gauge rail has a curve with a maximum speed of , the same curve on narrow-gauge rail will have a maximum allowable speed of . Consequently, Japan had a greater need for
https://en.wikipedia.org/wiki/IBM%201620
The IBM 1620 was announced by IBM on October 21, 1959, and marketed as an inexpensive scientific computer. After a total production of about two thousand machines, it was withdrawn on November 19, 1970. Modified versions of the 1620 were used as the CPU of the IBM 1710 and IBM 1720 Industrial Process Control Systems (making it the first digital computer considered reliable enough for real-time process control of factory equipment). Being variable-word-length decimal, as opposed to fixed-word-length pure binary, made it an especially attractive first computer to learn on and hundreds of thousands of students had their first experiences with a computer on the IBM 1620. Core memory cycle times were 20 microseconds for the (earlier) Model I, 10 microseconds for the Model II (about a thousand times slower than typical computer main memory in 2006). The Model II was introduced in 1962. Architecture Memory The IBM 1620 Model I was a variable "word" length decimal (BCD) computer using core memory. The Model I core could hold 20,000 decimal digits with each digit stored in 6 bits. More memory could be added with the IBM 1623 Storage Unit, Model 1 which held 40,000 digits, or the 1623 Model 2 which held 60,000. The Model II deployed the IBM 1625 core-storage memory unit, whose memory cycle time was halved by using faster cores, compared to the Model I's (internal or 1623 memory unit): to 10 µs (i.e., the cycle speed was raised to 100 kHz). While the five-digit addresses of either model could have addressed 100,000 decimal digits, no machine larger than 60,000 decimal digits was ever marketed. Memory access Memory was accessed two decimal digits at the same time (even-odd digit pair for numeric data or one alphameric character for text data). Each decimal digit was six bits, composed of an odd parity Check bit, a Flag bit, and four BCD bits for the value of the digit in the following format:

C F 8 4 2 1

The Flag bit had several uses:
In the least significant digit it was set to indicate a negative number (signed magnitude).
It was set to mark the most significant digit of a number (wordmark).
In the least significant digit of five-digit addresses it was set for indirect addressing (an option on the Model I, standard on the 1620 Model II). Multi-level indirection could be used (you could even put the machine in an infinite indirect addressing loop).
In the middle three digits of five-digit addresses (on the 1620 II) they were set to select one of seven index registers.

In addition to the valid BCD digit values there were three special digit values (these could be used in calculations):

C F 8 4 2 1
    1 0 1 0   Record Mark (rightmost end of record, prints as a double dagger symbol, ‡)
    1 1 0 0   Numeric Blank (blank for punched card output formatting)
    1 1 1 1   Group Mark (rightmost end of a group of records for disk I/O)

Instructions were fixed length (12 decimal digits), consisting of a two-digit "op code", a five-dig
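The digit encoding above is small enough to illustrate directly. The following Python sketch (my addition, a hypothetical helper for illustration, not IBM code) decodes one 6-bit digit given as (C, F, 8, 4, 2, 1) bits, verifying odd parity and recognizing the three special values:

    # Illustrative decoder for the IBM 1620 digit format: C F 8 4 2 1.
    # C is an odd-parity check bit, F the flag bit, the rest BCD value bits.
    SPECIALS = {0b1010: "Record Mark", 0b1100: "Numeric Blank", 0b1111: "Group Mark"}

    def decode_digit(bits):
        """bits: 6-tuple (C, F, eight, four, two, one), each 0 or 1."""
        c, f, b8, b4, b2, b1 = bits
        if sum(bits) % 2 == 0:          # odd parity: total count of 1-bits must be odd
            raise ValueError("parity error")
        value = b8 * 8 + b4 * 4 + b2 * 2 + b1
        return SPECIALS.get(value, str(value)), bool(f)

    print(decode_digit((0, 1, 0, 1, 0, 1)))  # ('5', True): flagged digit 5
    print(decode_digit((1, 0, 1, 0, 1, 0)))  # ('Record Mark', False)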
https://en.wikipedia.org/wiki/IBM%201401
The IBM 1401 is a variable-word-length decimal computer that was announced by IBM on October 5, 1959. The first member of the highly successful IBM 1400 series, it was aimed at replacing unit record equipment for processing data stored on punched cards and at providing peripheral services for larger computers. The 1401 is considered to be the Ford Model T of the computer industry, because it was mass-produced and because of its sales volume. Over 12,000 units were produced and many were leased or resold after they were replaced with newer technology. The 1401 was withdrawn on February 8, 1971. History The 1401 project evolved from an IBM project named World Wide Accounting Machine (WWAM), which in turn was a reaction to the success of the Bull Gamma 3. The 1401 was used as an independent system in conjunction with IBM punched card equipment. It was also operated as auxiliary equipment to IBM 700 or 7000 series systems. Monthly rental for 1401 configurations started at US$2,500 (worth about $ today). Demand exceeded expectations. "IBM was pleasantly surprised (perhaps shocked) to receive 5,200 orders in just the first five weeks – more than predicted for the entire life of the machine!" By late 1961, the 2,000 systems installed in the USA amounted to about one quarter of all electronic stored-program computers by all manufacturers. The number of installed 1401s peaked above 10,000 in the mid-1960s. "In all, by the mid-1960s nearly half of all computer systems in the world were 1401-type systems." The system was marketed until February 1971. Commonly used by small businesses as their primary data processing machines, the 1401 was also frequently used as an off-line peripheral controller for mainframe computers. In such installations, with an IBM 7090 for example, the mainframe computers used only magnetic tape for input-output. It was the 1401 that transferred input data from slow peripherals (such as the IBM 1402 Card Read-Punch) to tape, and transferred output data from tape to the card punch, the IBM 1403 Printer, or other peripherals. This allowed the mainframe's throughput to not be limited by the speed of a card reader or printer. (For more information, see Spooling.) Some later installations (e.g., at NASA) included the 1401 as a front-end peripheral controller to an IBM 7094 in a Direct Coupled System (DCS). Elements within IBM, notably John Haanstra, an executive in charge of 1401 deployment, supported its continuation in larger models for evolving needs (e.g., the IBM 1410), but the 1964 decision at the top to focus resources on the System/360 ended these efforts rather suddenly. IBM was facing a competitive threat from the Honeywell 200, compounded by the 360's incompatibility with the 1401 design. IBM pioneered the use of microcode emulation, in the form of ROM, so that some System/360 models could run 1401 programs. Due to its popularity and mass-production, the IBM 1401 was often considered to be the first electronic mainframe computer to be introdu
https://en.wikipedia.org/wiki/IBM%201710
The IBM 1710 was a process control system that IBM introduced in March 1961. It used either a 1620 I or a 1620 II Computer and specialized I/O devices (e.g., IBM 1711 analog-to-digital converter and digital-to-analog converter, IBM 1712 discrete I/O and analog multiplexer, factory floor operator control panels). The IBM 1620 used in the 1710 system was modified in several ways, the most obvious was the addition of a very primitive hardware interrupt mechanism. The 1710 was used by paper mills, oil refineries and electric companies. See also IBM 1720 IBM 1800 References External links "Evolution of Small Real-Time IBM Computer Systems" (1.25 MB PDF file), from the IBM Journal of Research and Development. IBM Archives: IBM 1710 — Control system 1710 Computer-related introductions in 1961
https://en.wikipedia.org/wiki/Northern%20line
The Northern line is a London Underground line that runs from North London to South London. It is printed in black on the Tube map. The Northern line is unique on the Underground network in having two different routes through central London, two southern branches and two northern branches. Despite its name, it does not serve the northernmost stations on the Underground, though it does serve the southernmost station at , the terminus of one of the two southern branches. The line's northern termini, all in the London Borough of Barnet, are at and ; is the terminus of a single-station branch line off the High Barnet branch. The two main northern branches run south to join at where two routes, one via in the West End and the other via in the City, continue to join at in Southwark. At Kennington, the line again divides into two branches, one to each of the southern termini at , in the borough of Merton, and in Wandsworth. For most of its length it is a deep tube line. The portion between and opened in 1890 and is the oldest section of deep-level tube line on the network. About 294 million passenger journeys were recorded in 2016/17 on the Northern line, making it the busiest on the Underground. It has 18 of the system's 31 stations south of the River Thames. There are 52 stations in total on the line, of which 38 have platforms below ground. The line has a complicated history. The longtime arrangement of two main northern branches, two central branches and the southern unification reflects its genesis as three separate railways, combined in the 1920s and 1930s. An extension in the 1920s used a route originally planned by a fourth company. Abandoned plans from the 1920s to extend the line further southwards, and then northwards in the 1930s, would have incorporated parts of the routes of two further companies. From the 1930s to the 1970s, the tracks of a seventh company were also managed as a branch of the Northern line. An extension of the Charing Cross branch from Kennington to Battersea opened on 20 September 2021, giving the line a second southern branch. There are also proposals to split the line into separate lines following the opening of the new link to Battersea. History Formation See City and South London Railway and Charing Cross, Euston and Hampstead Railway for detailed histories of these companies The core of the Northern line evolved from two railway companies: the City & South London Railway (C&SLR) and the Charing Cross, Euston & Hampstead Railway (CCE&HR). The C&SLR, London's first electric hauled deep-level tube railway, was built under the supervision of James Henry Greathead, who had been responsible, with Peter W. Barlow, for the Tower Subway. It was the first of the Underground's lines to be constructed by boring deep below the surface and the first to be operated by electric traction. The railway opened in November 1890 from Stockwell to a now-disused station at King William Street. This was inconveniently place
https://en.wikipedia.org/wiki/Bourne%20shell
The Bourne shell (sh) is a shell command-line interpreter for computer operating systems. The Bourne shell was the default shell for Version 7 Unix. Unix-like systems continue to have /bin/sh—which will be the Bourne shell, or a symbolic link or hard link to a compatible shell—even when other shells are used by most users. Developed by Stephen Bourne at Bell Labs, it was a replacement for the Thompson shell, whose executable file had the same name—sh. It was released in 1979 in the Version 7 Unix release distributed to colleges and universities. Although it is used as an interactive command interpreter, it was also intended as a scripting language and contains most of the features that are commonly considered to produce structured programs. It gained popularity with the publication of The Unix Programming Environment by Brian Kernighan and Rob Pike—the first commercially published book that presented the shell as a programming language in a tutorial form. History Origins Work on the Bourne shell initially started in 1976. First appearing in Version 7 Unix, the Bourne shell superseded the Mashey shell. Some of the primary goals of the shell were: To allow shell scripts to be used as filters. To provide programmability including control flow and variables. Control over all input/output file descriptors. Control over signal handling within scripts. No limits on string lengths when interpreting shell scripts. Rationalize and generalize the string-quoting mechanism. The environment mechanism. This allowed context to be established at startup and provided a way for shell scripts to pass context to sub-scripts (processes) without having to use explicit positional parameters. Features of the original version Features of the Version 7 UNIX Bourne shell include: Scripts can be invoked as commands by using their filename May be used interactively or non-interactively Allows both synchronous and asynchronous execution of commands Supports input and output redirection and pipelines Provides a set of built-in commands Provides flow control constructs and quotation facilities. Typeless variables Provides local and global variable scope Scripts do not require compilation before execution Does not have a goto facility, so code restructuring may be necessary Command substitution using backquotes: `command`. Here documents using << to embed a block of input text within a script. for ~ do ~ done loops, in particular the use of $* to loop over arguments, as well as for ~ in ~ do ~ done loops for iterating over lists. case ~ in ~ esac selection mechanism, primarily intended to assist argument parsing. sh provided support for environment variables using keyword parameters and exportable variables. It contained strong provisions for controlling input and output and powerful expression-matching facilities. The Bourne shell was also the first to feature the convention of using file descriptor 2 (as in 2>) for error messages, allowing much greater programmatic
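Several of the constructs listed above compose naturally. The following short Bourne-compatible script is my own sketch (not from the article), showing backquote command substitution, a here document, a for loop over the positional parameters, case selection, and redirection to file descriptor 2:

    #!/bin/sh
    # Sketch of classic Bourne shell constructs; any Bourne-compatible /bin/sh runs it.

    today=`date`            # command substitution with backquotes

    cat << EOF              # here document: the block below is fed to cat
    Report generated: $today
    EOF

    for arg                 # 'for' without 'in' iterates over the positional parameters
    do
      case $arg in
        -v) echo "verbose mode" ;;
        -*) echo "unknown option: $arg" 1>&2 ;;   # errors go to file descriptor 2
        *)  echo "argument: $arg" ;;
      esac
    done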
https://en.wikipedia.org/wiki/Brewster
Brewster may refer to: People Brewster (surname) Brewster Kahle (born 1960), American computer technologist Brewster H. Shaw (born 1945), American astronaut Places Brewster Park (Enniskillen), Northern Ireland Brewster (crater), The Moon United States Brewster, Florida Brewster, Kansas Brewster, Massachusetts Brewster (CDP), Massachusetts Brewster, Minnesota Brewster, Nebraska Brewster, New York Brewster (Metro-North station) Brewster Hill, New York Brewster, Ohio Brewster, Washington Brewster County, Texas Brewster Creek, in Akron, Ohio Islands in Boston Harbor Great Brewster Island Little Brewster Island Middle Brewster Island Structures Brewster-Douglass Housing Projects, in Detroit, Michigan, USA Brewster Hospital, in Duval County, Florida, USA Schools Brewster Academy, a boarding school in New Hampshire, USA Brewster High School (Brewster, Washington), USA Brewster School District (disambiguation), several Business Brewster & Co., American coachbuilders and automobile maker and a brand of automobile Brewster Aeronautical Corporation, the aircraft manufacturing division Brewster F2A Buffalo, an American fighter aircraft which saw limited service early in World War II Brewster Jennings & Associates, a front company set up by the CIA Crocker & Brewster, publishing house in Boston, USA Science Brewster's angle, a physics concept Brewster (unit), a unit of measure named after David Brewster Edinburgh Encyclopædia, 1808–1830 publication edited by David Brewster Entertainment Brewster McCloud, a 1970 film Brewster's Millions, 1902 novel Brewster's Millions (1985 film) Brewster Rockit: Space Guy!, a comic strip Punky Brewster, a TV program Brewster the Guru, a minor character on The Muppet Show Brewster, one of the protagonists on Chuggington Other uses Brewster Chair, a style of chair formerly made in New England Brewster Color, a color film system Brewster Body Shield, a prototype World War I body armor Brewster (police dog), (2004-2017), the longest serving police dog in Britain See also Brewer (disambiguation)
https://en.wikipedia.org/wiki/Tensegrity
Tensegrity, tensional integrity or floating compression is a structural principle based on a system of isolated components under compression inside a network of continuous tension, and arranged in such a way that the compressed members (usually bars or struts) do not touch each other while the prestressed tensioned members (usually cables or tendons) delineate the system spatially. The term was coined by Buckminster Fuller in the 1960s as a portmanteau of "tensional integrity". The other denomination of tensegrity, floating compression, was used mainly by the constructivist artist Kenneth Snelson. Concept Tensegrity structures are based on the combination of a few simple design patterns:
members loaded in either pure compression or pure tension, which means that the structure will only fail if the cables yield or the rods buckle. This enables the material properties and cross-sectional geometry of each member to be optimized to the particular load it carries.
preload or tensional prestress, which allows cables to always be in tension, to maintain structural integrity.
mechanical stability, which allows the members to remain in tension/compression as stress on the structure increases. The structure also becomes stiffer as cable tension increases.
Because of these patterns, no structural member experiences a bending moment and there are no shear stresses within the system. This can produce exceptionally strong and rigid structures for their mass and for the cross section of the components. The loading of at least some tensegrity structures causes an auxetic response and a negative Poisson's ratio, e.g. the T3-prism and 6-strut tensegrity icosahedron. A conceptual building block of tensegrity is seen in the 1951 Skylon. Six cables, three at each end, hold the tower in position. The three cables connected to the bottom "define" its location. The other three cables are simply keeping it vertical. A three-rod tensegrity structure (shown above in a spinning drawing of a T3-Prism) builds on this simpler structure: the ends of each green rod look like the top and bottom of the Skylon. As long as the angle between any two cables is smaller than 180°, the position of the rod is well defined. While three cables are the minimum required for stability, additional cables can be attached to each node for aesthetic purposes or to build in additional stability. For example, Snelson's Needle Tower uses a repeated pattern built using nodes that are connected to 5 cables each. Eleanor Heartney points out visual transparency as an important aesthetic quality of these structures. Korkmaz et al. have argued that lightweight tensegrity structures are suitable for adaptive architecture. Applications Tensegrities saw increased application in architecture beginning in the 1960s, when Maciej Gintowt and Maciej Krasiński designed Spodek arena complex (in Katowice, Poland), as one of the first major structures to employ the principle of tensegrity. The roof uses an inclined
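The balance of pure tension and pure compression can be stated per node. A common way to write it, in the standard force-density method (my addition for illustration, not taken from the article), is:

    % Node-equilibrium equation of the force-density method (standard notation).
    % p_i, p_j: node positions; N(i): members meeting at node i;
    % q_ij: force density of member ij (q_ij > 0 for cables in tension,
    % q_ij < 0 for struts in compression); f_i: external load at node i.
    \sum_{j \in N(i)} q_{ij}\,(\mathbf{p}_j - \mathbf{p}_i) + \mathbf{f}_i = \mathbf{0}

In a stable tensegrity this holds at every node with all cable force densities positive and all strut force densities negative, which is why no member needs to carry bending.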
https://en.wikipedia.org/wiki/Miranda%20%28programming%20language%29
Miranda is a lazy, purely functional programming language designed by David Turner as a successor to his earlier programming languages SASL and KRC, using some concepts from ML and Hope. It was produced by Research Software Ltd. of England (which holds a trademark on the name Miranda) and was the first purely functional language to be commercially supported. Miranda was first released in 1985 as a fast interpreter in C for Unix-flavour operating systems, with subsequent releases in 1987 and 1989. It had a strong influence on the later Haskell language. Turner stated that the benefits of Miranda over Haskell are: "Smaller language, simpler type system, simpler arithmetic". In 2020 a version of Miranda was released as open source under a BSD licence. The code has been updated to conform to modern C standards (C11/C18) and to generate 64-bit binaries. This has been tested on operating systems including Debian, Ubuntu, WSL/Ubuntu, and macOS (Catalina). Overview Miranda is a lazy, purely functional programming language. That is, it lacks side effects and imperative programming features. A Miranda program (called a script) is a set of equations that define various mathematical functions and algebraic data types. The word set is important here: the order of the equations is, in general, irrelevant, and there is no need to define an entity prior to its use. Since the parsing algorithm makes intelligent use of layout (indentation, via the off-side rule), bracketing statements are rarely needed and statement terminators are unneeded. This feature, inspired by ISWIM, is also used in occam and Haskell and was later popularized by Python. Commentary is introduced into regular scripts by the characters || and continues to the end of the same line. An alternative commenting convention affects an entire source code file, known as a "literate script", in which every line is considered a comment unless it starts with a > sign. Miranda's basic data types are char, num and bool. A character string is simply a list of char, while num is silently converted between two underlying forms: arbitrary-precision integers (a.k.a. bignums) by default, and regular floating point values as required. Tuples are sequences of elements of potentially mixed types, analogous to records in Pascal-like languages, and are written delimited with parentheses:

this_employee = ("Folland, Mary", 10560, False, 35)

The list, however, is the most commonly used data structure in Miranda. It is written delimited by square brackets and with comma-separated elements, all of which must be of the same type:

week_days = ["Mon","Tue","Wed","Thur","Fri"]

List concatenation is ++, subtraction is --, construction is :, sizing is # and indexing is !, so:

days = week_days ++ ["Sat","Sun"]
days = "Nil":days
days!0 ⇒ "Nil"
days = days -- ["Nil"]
#days ⇒ 7

There are several list-building shortcuts: .. is used for lists whose elements form an arithmetic series, with the possibility for spec
https://en.wikipedia.org/wiki/Lazy%20initialization
In computer programming, lazy initialization is the tactic of delaying the creation of an object, the calculation of a value, or some other expensive process until the first time it is needed. It is a kind of lazy evaluation that refers specifically to the instantiation of objects or other resources. This is typically accomplished by augmenting an accessor method (or property getter) to check whether a private member, acting as a cache, has already been initialized. If it has, it is returned straight away. If not, a new instance is created, placed into the member variable, and returned to the caller just-in-time for its first use. If objects have properties that are rarely used, this can improve startup speed. Average program performance may be slightly worse in terms of memory (for the condition variables) and execution cycles (to check them), but the impact of object instantiation is spread in time ("amortized") rather than concentrated in the startup phase of a system, and thus median response times can be greatly improved. In multithreaded code, access to lazy-initialized objects/state must be synchronized to guard against race conditions. The "lazy factory" In a software design pattern view, lazy initialization is often used together with a factory method pattern. This combines three ideas: Using a factory method to create instances of a class (factory method pattern) Storing the instances in a map, and returning the same instance to each request for an instance with the same parameters (multiton pattern) Using lazy initialization to instantiate the object the first time it is requested (lazy initialization pattern) Examples ActionScript 3 The following is an example of a class with lazy initialization implemented in ActionScript:

package examples.lazyinstantiation
{
    import flash.utils.Dictionary; // needed for the Dictionary class used below

    public class Fruit
    {
        private var _typeName:String;
        private static var instancesByTypeName:Dictionary = new Dictionary();

        public function Fruit(typeName:String):void
        {
            this._typeName = typeName;
        }

        public function get typeName():String
        {
            return _typeName;
        }

        public static function getFruitByTypeName(typeName:String):Fruit
        {
            // lazily create and cache one instance per type name
            return instancesByTypeName[typeName] ||= new Fruit(typeName);
        }

        public static function printCurrentTypes():void
        {
            for each (var fruit:Fruit in instancesByTypeName)
            {
                // iterates through each value
                trace(fruit.typeName);
            }
        }
    }
}

Basic Usage:

package
{
    import examples.lazyinstantiation.Fruit; // import the class itself

    public class Main
    {
        public function Main():void
        {
            Fruit.getFruitByTypeName("Banana");
            Fruit.printCurrentTypes();

            Fruit.getFruitByTypeName("Apple");
            Fruit.printCurrentTypes();

            Fruit.getFruitByTypeName("Banana");
            Fruit.printCurrentTypes();
        }
    }
}

C In C, lazy evaluation would normally be implemented inside a single function, or a single source file, using static variables. In a function:

#include <string.h>
#include <stdlib.h>
#include <stddef.h>
#include <stdio.h>
https://en.wikipedia.org/wiki/Apache%20Xerces
In computing, Xerces is Apache's collection of software libraries for parsing, validating, serializing and manipulating XML. The library implements a number of standard APIs for XML parsing, including DOM, SAX and SAX2. The implementation is available in the Java, C++ and Perl programming languages. The name "Xerces" is believed to commemorate the extinct Xerces blue butterfly (Glaucopsyche xerces). Xerces language versions There are several language versions of the Xerces parser: Xerces2 Java, the Java reference implementation Xerces C++, a C++ implementation Xerces Perl, a Perl implementation. This implementation is a wrapper around the C++ API. Features The features supported by Xerces depend on the language, the Java version having the most features. See also Apache License Java XML Apache Xalan References External links Apache Xerces Project home
https://en.wikipedia.org/wiki/User%20space%20and%20kernel%20space
A modern computer operating system usually segregates virtual memory into user space and kernel space. Primarily, this separation serves to provide memory protection and hardware protection from malicious or errant software behaviour. Kernel space is strictly reserved for running a privileged operating system kernel, kernel extensions, and most device drivers. In contrast, user space is the memory area where application software and some drivers execute. Overview The term user space (or userland) refers to all code that runs outside the operating system's kernel. User space usually refers to the various programs and libraries that the operating system uses to interact with the kernel: software that performs input/output, manipulates file system objects, application software, etc. Each user space process normally runs in its own virtual memory space, and, unless explicitly allowed, cannot access the memory of other processes. This is the basis for memory protection in today's mainstream operating systems, and a building block for privilege separation. A separate user mode can also be used to build efficient virtual machines – see Popek and Goldberg virtualization requirements. With enough privileges, processes can request the kernel to map part of another process's memory space to its own, as is the case for debuggers. Programs can also request shared memory regions with other processes, although other techniques are also available to allow inter-process communication. Implementation The most common way of implementing a user mode separate from kernel mode involves operating system protection rings. Protection rings, in turn, are implemented using CPU modes. Typically, kernel space programs run in kernel mode, also called supervisor mode; normal applications in user space run in user mode. Many operating systems are single address space operating systems—they have a single address space for all user-mode code. (The kernel-mode code may be in the same address space, or it may be in a second address space). Many other operating systems have a per-process address space, a separate address space for each and every user-mode process. Another approach taken in experimental operating systems is to have a single address space for all software, and rely on a programming language's semantics to make sure that arbitrary memory cannot be accessed – applications simply cannot acquire any references to the objects that they are not allowed to access. This approach has been implemented in JXOS, Unununium as well as Microsoft's Singularity research project. See also BIOS CPU modes Early user space Memory protection OS-level virtualization Notes References External links Linux Kernel Space Definition Operating system technology Device drivers
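The user/kernel boundary described above is crossed through system calls. As a small sketch (my illustration, not from the article; assumes Linux on x86-64, where syscall number 1 is write), even a direct syscall made from an ordinary process still has the actual I/O performed by the kernel in kernel mode:

    # Linux x86-64 sketch: user-space code asks the kernel to perform I/O.
    # The process runs in user mode; the write to the terminal is carried out
    # in kernel mode on its behalf via the syscall(2) entry point in libc.
    import ctypes

    libc = ctypes.CDLL(None, use_errno=True)     # C library already loaded in this process
    SYS_write = 1                                # Linux x86-64 ABI (assumption noted above)
    msg = b"hello from user space\n"
    libc.syscall(SYS_write, 1, msg, len(msg))    # fd 1 = stdout

Everything before the syscall instruction executes in user space; the kernel validates the arguments and performs the write, then returns control to user mode.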
https://en.wikipedia.org/wiki/Web%20service
A web service (WS) is either: a service offered by an electronic device to another electronic device, communicating with each other via the Internet, or a server running on a computer device, listening for requests at a particular port over a network, serving web documents (HTML, JSON, XML, images). In a web service, a web technology such as HTTP is used for transferring machine-readable file formats such as XML and JSON. In practice, a web service commonly provides an object-oriented web-based interface to a database server, utilized for example by another web server, or by a mobile app, that provides a user interface to the end-user. Many organizations that provide data in formatted HTML pages will also provide that data on their server as XML or JSON, often through a Web service to allow syndication. Another application offered to the end-user may be a mashup, where a Web server consumes several Web services at different machines and compiles the content into one user interface. Web services (generic) Asynchronous JavaScript And XML Asynchronous JavaScript And XML (AJAX) is a dominant technology for Web services. Developing from the combination of HTTP servers, JavaScript clients and Plain Old XML (as distinct from SOAP and W3C Web Services), now it is frequently used with JSON as well as, or instead of, XML. REST Representational State Transfer (REST) is an architecture for well-behaved Web services that can function at Internet scale. In a 2004 document, the W3C sets following REST as a key distinguishing feature of Web services: Web services that use markup languages There are a number of Web services that use markup languages: JSON-RPC. JSON-WSP Representational state transfer (REST) versus remote procedure call (RPC) Web Services Conversation Language (WSCL) Web Services Description Language (WSDL), developed by the W3C Web Services Flow Language (WSFL), superseded by BPEL Web template WS-MetadataExchange XML Interface for Network Services (XINS), provides a POX-style web service specification format Web API A Web API is a development in Web services where emphasis has been moving to simpler representational state transfer (REST) based communications. Restful APIs do not require XML-based Web service protocols (SOAP and WSDL) to support their interfaces. W3C Web services In relation to W3C Web services, the W3C defined a Web service as: W3C Web Services may use SOAP over HTTP protocol, allowing less costly (more efficient) interactions over the Internet than via proprietary solutions like EDI/B2B. Besides SOAP over HTTP, Web services can also be implemented on other reliable transport mechanisms like FTP. In a 2002 document, the Web Services Architecture Working Group defined a Web services architecture, requiring a standardized implementation of a "Web service." Explanation The term "Web service" describes a standardized way of integrating Web-based applications using the XML, SOAP, WSDL and UDDI open standards
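The second definition above (a server listening for requests at a particular port and serving machine-readable documents) is easy to demonstrate. The following is an illustrative sketch using Python's standard library, not a reference to any particular product; the port number 8080 is an arbitrary choice:

    # Tiny web service: listens on a port and answers every GET with a JSON document.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"path": self.path, "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Handler).serve_forever()

A client (another program, a web server, or a mobile app) then consumes the JSON over HTTP, exactly the machine-to-machine pattern the definition describes.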
https://en.wikipedia.org/wiki/RSS
RSS (RDF Site Summary or Really Simple Syndication) is a web feed that allows users and applications to access updates to websites in a standardized, computer-readable format. Subscribing to RSS feeds can allow a user to keep track of many different websites in a single news aggregator, which constantly monitors sites for new content, removing the need for the user to manually check them. News aggregators (or "RSS readers") can be built into a browser, installed on a desktop computer, or installed on a mobile device. Websites usually use RSS feeds to publish frequently updated information, such as blog entries, news headlines, episodes of audio and video series, or for distributing podcasts. An RSS document (called "feed", "web feed", or "channel") includes full or summarized text, and metadata, like publishing date and author's name. RSS formats are specified using a generic XML file. Although RSS formats have evolved from as early as March 1999, it was between 2005 and 2006 when RSS gained widespread use, and the ("") icon was decided upon by several major web browsers. RSS feed data is presented to users using software called a news aggregator and the passing of content is called web syndication. Users subscribe to feeds either by entering a feed's URI into the reader or by clicking on the browser's feed icon. The RSS reader checks the user's feeds regularly for new information and can automatically download it, if that function is enabled. History The RSS formats were preceded by several attempts at web syndication that did not achieve widespread popularity. The basic idea of restructuring information about websites goes back to as early as 1995, when Ramanathan V. Guha and others in Apple's Advanced Technology Group developed the Meta Content Framework. RDF Site Summary, the first version of RSS, was created by Dan Libby and Ramanathan V. Guha at Netscape. It was released in March 1999 for use on the My.Netscape.Com portal. This version became known as RSS 0.9. In July 1999, Dan Libby of Netscape produced a new version, RSS 0.91, which simplified the format by removing RDF elements and incorporating elements from Dave Winer's news syndication format. Libby also renamed the format from RDF to RSS Rich Site Summary and outlined further development of the format in a "futures document". This would be Netscape's last participation in RSS development for eight years. As RSS was being embraced by web publishers who wanted their feeds to be used on My.Netscape.Com and other early RSS portals, Netscape dropped RSS support from My.Netscape.Com in April 2001 during new owner AOL's restructuring of the company, also removing documentation and tools that supported the format. Two parties emerged to fill the void, with neither Netscape's help nor approval: The RSS-DEV Working Group and Dave Winer, whose UserLand Software had published some of the first publishing tools outside Netscape that could read and write RSS. Winer published a modified ver
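Because an RSS feed is plain XML, the core of a reader is small. The following Python sketch (my addition; the sample feed is invented for illustration) extracts item titles and dates from an RSS 2.0-style document:

    # Parse a minimal RSS 2.0-style feed and list its item titles and dates.
    import xml.etree.ElementTree as ET

    FEED = """<rss version="2.0"><channel>
      <title>Example feed</title>
      <item><title>First post</title><pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate></item>
      <item><title>Second post</title><pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate></item>
    </channel></rss>"""

    root = ET.fromstring(FEED)
    for item in root.iter("item"):
        print(item.findtext("pubDate"), " ", item.findtext("title"))

A real aggregator wraps this parsing step in a loop that periodically fetches each subscribed feed URI and compares the items against what it has already shown.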
https://en.wikipedia.org/wiki/Distributed%20Component%20Object%20Model
Distributed Component Object Model (DCOM) is a proprietary Microsoft technology for communication between software components on networked computers. DCOM, which originally was called "Network OLE", extends Microsoft's COM, and provides the communication substrate under Microsoft's COM+ application server infrastructure. The extension of COM into Distributed COM was due to extensive use of DCE/RPC (Distributed Computing Environment/Remote Procedure Calls) – more specifically Microsoft's enhanced version, known as MSRPC. In terms of the extensions it added to COM, DCOM had to solve the problems of: Marshalling – serializing and deserializing the arguments and return values of method calls "over the wire". Distributed garbage collection – ensuring that references held by clients of interfaces are released when, for example, the client process crashed, or the network connection was lost. Combining significant numbers of objects in the client's browser into a single transmission in order to minimize bandwidth utilization. One of the key factors in solving these problems is the use of DCE/RPC as the underlying RPC mechanism behind DCOM. DCE/RPC has strictly defined rules regarding marshalling and who is responsible for freeing memory. DCOM was a major competitor to CORBA. Proponents of both of these technologies saw them as one day becoming the model for code and service-reuse over the Internet. However, the difficulties involved in getting either of these technologies to work over Internet firewalls, and on unknown and insecure machines, meant that normal HTTP requests in combination with web browsers won out over both of them. Microsoft, at one point, attempted to remediate these shortcomings by adding an extra http transport to DCE/RPC called ncacn_http (Network Computing Architecture connection-oriented protocol). DCOM was publicly launched as a beta for Windows 95 on September 18, 1996. DCOM is supported natively in all versions of Windows starting from Windows 95, and all versions of Windows Server since Windows NT 4.0 Security improvements As part of the Secure Development Lifecycle initiative at Microsoft to re-architect insecure code, DCOM saw some significant security-focused changes in Windows XP Service Pack 2. In response to a security vulnerability reported by Tencent Security Xuanwu Lab in June 2021, Microsoft released security updates for several versions of Windows and Windows Server, hardening access to DCOM. Alternative versions and implementations COMsource is a Unix based implementation of DCOM, allowing interoperability between different platforms. Its source code is available, along with full and complete documentation, sufficient to use and also implement an interoperable version of DCOM. COMsource comes directly from the Windows NT 4.0 source code, and includes the source code for a Windows NT Registry Service. In 1995, Digital and Microsoft announced Affinity for OpenVMS (also known as NT Aff
https://en.wikipedia.org/wiki/COM%20%28hardware%20interface%29
COM (communication port) is the original, yet still common, name of the serial port interface on PC-compatible computers. It can refer not only to physical ports, but also to emulated ports, such as ports created by Bluetooth or USB adapters. History The name for the COM port started with the original IBM PC. IBM had called the four well-defined communication RS-232 ports the "COM" ports, starting from COM1 through COM4. In BASICA and PC DOS you can open these ports as "COM1:" through "COM4:", and all PC compatibles using MS-DOS used the same denotation. Most PC-compatible computers in the 1980s and 1990s had one or two COM ports. By 2007, most computers shipped with only one or no physical COM ports. Today, few consumer-grade PC-compatible computers include COM ports, though some of them do still include a COM header on the motherboard. After the RS-232 COM port was removed from most consumer-grade computers, an external USB-to-UART serial adapter cable was used to compensate for the loss. A major supplier of these chips is FTDI. I/O addresses The COM ports are interfaced by an integrated circuit such as the 16550 UART. This IC has seven internal 8-bit registers which hold information and configuration data about which data is to be sent or was received, the baud rate, interrupt configuration and more. In the case of COM1, these registers can be accessed by writing to or reading from the I/O addresses 0x3F8 to 0x3FF. If the CPU, for example, wants to send information out on COM1, it writes to I/O port 0x3F8, as this I/O port is "connected" to the UART IC register which holds the information that is to be sent out. The COM ports in PC-compatible computers are typically defined as:

COM1: I/O port 0x3F8, IRQ 4
COM2: I/O port 0x2F8, IRQ 3
COM3: I/O port 0x3E8, IRQ 4
COM4: I/O port 0x2E8, IRQ 3

Implementations See also Device file Parallel port References Further reading Serial Port Complete: COM Ports, USB Virtual COM Ports, and Ports for Embedded Systems; 2nd Edition; Jan Axelson; Lakeview Research; 380 pages; 2007; . External links How to Interface Hardware in COM ports
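From application code, a physical or emulated COM port is opened by name rather than by I/O address. A short sketch using the third-party pyserial package (assumptions: pyserial is installed and a device is attached to COM1; the AT command is only a placeholder):

    # Open COM1 at 9600 baud using the third-party pyserial package
    # (pip install pyserial). On Linux the port name would instead be
    # something like /dev/ttyS0 or /dev/ttyUSB0.
    import serial

    with serial.Serial("COM1", baudrate=9600, timeout=1) as port:
        port.write(b"AT\r\n")       # send a command to the attached device
        print(port.readline())      # read one response line, if any

The operating system's driver translates these reads and writes into the UART register accesses described above.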
https://en.wikipedia.org/wiki/Robin%20Milner
Arthur John Robin Gorell Milner (13 January 1934 – 20 March 2010) was a British computer scientist, and a Turing Award winner. Life, education and career Milner was born in Yealmpton, near Plymouth, England, into a military family. He gained a King's Scholarship to Eton College in 1947, and was awarded the Tomline Prize (the highest prize in Mathematics at Eton) in 1952. Subsequently, he served in the Royal Engineers, attaining the rank of Second Lieutenant. He then enrolled at King's College, Cambridge, graduating in 1957. Milner first worked as a schoolteacher then as a programmer at Ferranti, before entering academia at City University, London, then Swansea University, Stanford University, and from 1973 at the University of Edinburgh, where he was a co-founder of the Laboratory for Foundations of Computer Science (LFCS). He returned to Cambridge as the head of the Computer Laboratory in 1995, a post from which he eventually stepped down, though he remained at the laboratory. From 2009, Milner was a Scottish Informatics & Computer Science Alliance Advanced Research Fellow and held (part-time) the Chair of Computer Science at the University of Edinburgh. Milner died of a heart attack on 20 March 2010 in Cambridge. His wife, Lucy, died shortly before he did. Contributions Milner is generally regarded as having made three major contributions to computer science. He developed Logic for Computable Functions (LCF), one of the first tools for automated theorem proving. The language he developed for LCF, ML, was the first language with polymorphic type inference and type-safe exception handling. In a very different area, Milner also developed a theoretical framework for analyzing concurrent systems, the calculus of communicating systems (CCS), and its successor, the π-calculus. At the time of his death, he was working on bigraphs, a formalism for ubiquitous computing subsuming CCS and the π-calculus. He is also credited for rediscovering the Hindley–Milner type system. Honors and awards He was made a Fellow of the Royal Society and a Distinguished Fellow of the British Computer Society in 1988. Milner received the ACM Turing Award in 1991. In 1994 he was inducted as a Fellow of the ACM. In 2004, the Royal Society of Edinburgh awarded Milner with a Royal Medal for his "bringing about public benefits on a global scale". In 2008, he was elected a Foreign Associate of the National Academy of Engineering for "fundamental contributions to computer science, including the development of LCF, ML, CCS, and the π-calculus." The Royal Society Milner Award and the ACM SIGPLAN Robin Milner Young Researcher Award are both named after him. Selected publications A Calculus of Communicating Systems, Robin Milner. Springer-Verlag (LNCS 92), 1980. Communication and Concurrency, Robin Milner. Prentice Hall International Series in Computer Science, 1989. The Definition of Standard ML, Robin Milner, Mads Tofte, Robert Harper, MIT Press 1990 Commentary on Standa
https://en.wikipedia.org/wiki/Edgar%20F.%20Codd
Edgar Frank "Ted" Codd (19 August 1923 – 18 April 2003) was an English computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases and relational database management systems. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most mentioned, analyzed and celebrated achievement. Biography Edgar Frank Codd was born in Fortuneswell, on the Isle of Portland in Dorset, England. After attending Poole Grammar School, he studied mathematics and chemistry at Exeter College, Oxford, before serving as a pilot in the RAF Coastal Command during the Second World War, flying Sunderlands. In 1948, he moved to New York to work for IBM as a mathematical programmer. Codd first worked for the company's Selective Sequence Electronic Calculator (SSEC) project and was later involved in the development of the IBM 701 and 702. In 1953, angered by Senator Joseph McCarthy, Codd moved to Ottawa, Ontario, Canada. In 1957, he returned to the US, working for IBM and, from 1961 to 1965, pursuing his doctorate in computer science at the University of Michigan in Ann Arbor. Two years later, he moved to San Jose, California, to work at IBM's San Jose Research Laboratory, where he continued to work until the 1980s. He was appointed IBM Fellow in 1976. During the 1990s, his health deteriorated and he ceased work. Codd received the Turing Award in 1981, and in 1994 he was inducted as a Fellow of the Association for Computing Machinery. Codd died of heart failure at his home in Williams Island, Florida, at the age of 79 on 18 April 2003. Work Codd received a PhD in 1965 from the University of Michigan, Ann Arbor, advised by John Henry Holland. His thesis was about self-replication in cellular automata, extending the work of von Neumann and showing that a set of eight states was sufficient for universal computation and construction. His design for a self-replicating computer was implemented only in 2010. In the 1960s and 1970s, he worked out his theories of data arrangement, issuing his paper "A Relational Model of Data for Large Shared Data Banks" in 1970, after an internal IBM paper one year earlier. To his disappointment, IBM proved slow to exploit his suggestions until commercial rivals started implementing them. Initially, IBM refused to implement the relational model to preserve revenue from IMS/DB, a hierarchical database the company promoted in the 1970s. Codd then showed IBM customers the potential of the implementation of its model, and they, in turn, pressured IBM. Then IBM included in its Future Systems project a System R subproject – but put in charge of it developers who were not thoroughly familiar with Codd's ideas, and isolated the team from Codd. As a result, they did not use Codd's own Alpha language but created a non-relational one, SEQUEL. Even so, SEQUEL was so superior to pre-relational
https://en.wikipedia.org/wiki/PKZIP
PKZIP is a file archiving computer program, notable for introducing the popular ZIP file format. PKZIP was first introduced for MS-DOS on the IBM-PC compatible platform in 1989. Since then versions have been released for a number of other architectures and operating systems. PKZIP was originally written by Phil Katz and marketed by his company PKWARE, Inc., founded in 1986. The company bears his initials: "PK". History By the 1970s, file archiving programs were distributed as standard utilities with operating systems. These included the Unix utilities ar, shar, and tar. These utilities were designed to gather a number of separate files into a single archive file for easier copying and distribution. These archives could optionally be passed through a stream compressor utility, such as compress and others. Other archivers also appeared during the 1980s, including ARC by System Enhancement Associates, Inc. (SEA), Rahul Dhesi's ZOO, Dean W. Cooper's DWC, LHarc by Haruhiko Okumura and Haruyasu Yoshizaki and ARJ which stands for "Archived by Robert Jung". The development of PKZIP was first announced in the file SOFTDEV.DOC from within the PKPAK 3.61 package, stating it would develop a new and yet unnamed compression program. The announcement had been made following the lawsuit between SEA and PKWARE, Inc. Although SEA won the suit, it lost the compression war, as the user base migrated to PKZIP as the compressor of choice. Led by some BBS sysops who refused to accept or offer files compressed as .ARC files, users began recompressing any old archives that were currently stored in .ARC format into .ZIP files. The first version was released in 1989, as a DOS command-line tool, distributed under a shareware model with a US$25 registration fee (US$47 with manual). .ZIP file format To help ensure the interoperability of the ZIP format, Phil Katz published the original .ZIP File Format Specification in the APPNOTE.TXT documentation file. PKWARE continued to maintain this document and periodically published updates. Originally only bundled with registered versions of PKZIP, it was later available on the PKWARE site. The specification has its own version number, which does not necessarily correspond to the PKZIP version numbers, especially with PKZIP 6 or later. At various times, PKWARE has added preliminary features to the specification that allow PKZIP products to extract archives using advanced features, even though PKZIP products that create such archives are not available until the next major release. Compatibility Although popular at the time, ZIP archives using PKZIP 1.0 compression methods are now rare, and many unzip tools such as 7-Zip are able to read and write several other archive formats. Patents Shrinking uses dynamic LZW, on which Unisys held patents. A patent for the Reduce Algorithm had also been filed on June 19, 1984, long before PKZIP was produced. See also Comparison of file archivers List of archive formats PKLite References External links
https://en.wikipedia.org/wiki/Brent%20Spiner
Brent Jay Spiner (born February 2, 1949) is an American actor. He is best known for his role as the android Data on the television series Star Trek: The Next Generation (1987–1994), four subsequent films (1994–2002), and Star Trek: Picard (2020–2023). In 1997, he won the Saturn Award for Best Supporting Actor for his portrayal of Data in Star Trek: First Contact, and was nominated in the same category for portraying Dr. Brackish Okun in Independence Day, a role he reprised in Independence Day: Resurgence. Spiner has also enjoyed a career in the theater and as a musician. Early life Brent Jay Spiner was born on February 2, 1949, in Houston, to Sylvia (née Schwartz) and Jack Spiner, who owned a furniture store. Jack Spiner died of kidney failure at age 29, when his son was ten months old. After his father's death, Spiner was adopted by his mother's second husband, Sol Mintz, whose surname he used between 1955 and 1975. Spiner attended Bellaire High School in Bellaire, Texas. He became active on the Bellaire speech team, winning the national championship in dramatic interpretation. He attended the University of Houston, where he performed in local theater. In 1968, Spiner worked as a performer at Six Flags Astroworld, first as a gunfighter and later in Dr. Featherflowers Medicine Show with his friend Trey Wilson. Both performers alternated as Dr. Featherflowers. Spiner also performed the role in the 1968 TV special The Pied Piper of Astroworld. Career Early work Spiner moved to New York City in the early 1970s, where he became a stage actor, performing in several Broadway and off-Broadway plays, including The Three Musketeers and Stephen Sondheim's Sunday in the Park with George. Spiner (as Brent Mintz) appeared as an imposter on a 1972 episode of To Tell the Truth. He had a brief non-speaking role in the film Stardust Memories, credited as "Fan in Lobby", the one with a Polaroid. He can also be seen as a passenger on the train full of misfits that the Allen character is trapped on in one of the films-within-the-film. Spiner appeared as a media technician in "The Advocates", a second-season episode of the Showtime cable series The Paper Chase. In 1984, he moved to Los Angeles, where he appeared in several pilots and made-for-TV movies. He played a recurring character on Night Court, Bob Wheeler, patriarch of a rural family. In 1986, he played a condemned soul in "Dead Run", an episode of the revival of Rod Serling's series The Twilight Zone on CBS. He made two appearances in season three (1986) of the situation comedy Mama's Family, playing two different characters. Spiner's first and only starring film role was in Rent Control (1984). In the Cheers episode "Never Love a Goalie, Part II", he played acquitted murder suspect Bill Grand. Spiner also appeared in the Tales from the Darkside episode, "A Case of the Stubborns", as a preacher. He portrayed Jim Stevens in the made-for-TV movie Manhunt for Claude Dallas. Spiner guest-starred in Friend
https://en.wikipedia.org/wiki/Data%20type
In computer science and computer programming, a data type (or simply type) is a collection or grouping of data values, usually specified by a set of possible values, a set of allowed operations on these values, and/or a representation of these values as machine types. A data type specification in a program constrains the possible values that an expression, such as a variable or a function call, might take. On literal data, it tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support basic data types of integer numbers (of varying sizes), floating-point numbers (which approximate real numbers), characters and Booleans. Concept A data type may be specified for many reasons: similarity, convenience, or to focus attention. It is frequently a matter of good organization that aids the understanding of complex definitions. Almost all programming languages explicitly include the notion of data type, though the possible data types are often restricted by considerations of simplicity, computability, or regularity. An explicit data type declaration typically allows the compiler to choose an efficient machine representation, but the conceptual organization offered by data types should not be discounted. Different languages may use different data types or similar types with different semantics. For example, in the Python programming language, int represents an arbitrary-precision integer which has the traditional numeric operations such as addition, subtraction, and multiplication. However, in the Java programming language, the type int represents the set of 32-bit integers ranging in value from −2,147,483,648 to 2,147,483,647, with arithmetic operations that wrap on overflow. In Rust this 32-bit integer type is denoted i32 and panics on overflow in debug mode. Most programming languages also allow the programmer to define additional data types, usually by combining multiple elements of other types and defining the valid operations of the new data type. For example, a programmer might create a new data type named "complex number" that would include real and imaginary parts, or a color data type represented by three bytes denoting the amounts each of red, green, and blue, and a string representing the color's name. Data types are used within type systems, which offer various ways of defining, implementing, and using them. In a type system, a data type represents a constraint placed upon the interpretation of data, describing representation, interpretation and structure of values or objects stored in computer memory. The type system uses data type information to check correctness of computer programs that access or manipulate the data. A compiler may use the static type of a value to optimize the storage it needs and the choice of algorithms for operations on the value. In many C compilers the float data type, for example, is represented in 32 bits, in accord with the IEEE specification for single-pre
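To make the preceding paragraph concrete, here is a hedged C sketch of such user-defined types: a "complex number" combining two floating-point parts, and a color type of three bytes plus a name, with an ordinary function standing in for one of the new type's valid operations. All identifiers are illustrative.

#include <stdint.h>
#include <stdio.h>

/* A user-defined "complex number" type built from two machine types. */
typedef struct {
    double re;  /* real part */
    double im;  /* imaginary part */
} Complex;

/* A color type: three bytes for red, green, blue, plus the color's name. */
typedef struct {
    uint8_t r, g, b;
    const char *name;
} Color;

/* One of the "valid operations" defined for the new type. */
static Complex complex_add(Complex a, Complex b) {
    return (Complex){ a.re + b.re, a.im + b.im };
}

int main(void) {
    Complex z = complex_add((Complex){1.0, 2.0}, (Complex){3.0, -1.0});
    Color teal = { 0, 128, 128, "teal" };
    printf("z = %.1f%+.1fi, color = %s\n", z.re, z.im, teal.name);
    return 0;
}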
https://en.wikipedia.org/wiki/COWSEL
COWSEL (COntrolled Working SpacE Language) is a programming language designed between 1964 and 1966 by Robin Popplestone. It was based on a reverse Polish notation (RPN) form of the language Lisp, combined with some ideas from Combined Programming Language (CPL). COWSEL was initially implemented on a Ferranti Pegasus computer at the University of Leeds and on a Stantec Zebra at the Bradford Institute of Technology. Later, Rod Burstall implemented it on an Elliot 4120 at the University of Edinburgh. COWSEL was renamed POP-1 in the summer of 1966, and development continued under that name from then on. Example code
function member lambda x y
comment Is x a member of list y;
define
    y atom then *0 end
    y hd x equal then *1 end
    y tl -> y
repeat
up
Reserved words (keywords) were also underlined in the original printouts. Popplestone performed syntax highlighting by using underscoring on a Friden Flexowriter. See also POP-2 programming language POP-11 programming language Poplog programming environment References Technical report: EPU-R-12, U Edinburgh (Apr 1966) External links "The Early Development of POP" on The Encyclopedia of Computer Languages Functional languages History of computing in the United Kingdom Programming languages created in 1964 Programming languages
https://en.wikipedia.org/wiki/Associative%20array
In computer science, an associative array, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with finite domain. It supports 'lookup', 'remove', and 'insert' operations. The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays. The two major solutions to the dictionary problem are hash tables and search trees. It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures. Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays. Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern. The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors. Operations In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association. The operations that are usually defined for an associative array are: Insert or put: add a new pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value. Remove or delete: remove a pair from the collection, unmapping a given key from its value. The argument to this operation is the key. Lookup, find, or get: find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation. If no value is found, some lookup functions raise an exception, while others return a default value (such as zero, null, or a specific value passed to the constructor). Associative arrays may also include other operations such as determining the number of mappings or constructing an iterator to loop over all the mappings. For such operations, the order in which the mappings are returned is usually implementation-defined. A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the mappings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as an argument and looks up the key associated with that value. Properties The operations of the associative array should satisfy various properties: lookup(k, insert(j, v, D)) = if k == j then v else lookup(k, D) lookup(k, new())
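The operations just listed are easy to see in miniature. Below is a hedged C sketch of the simplest associative-array implementation, an association list: a linked list of (key, value) pairs with O(n) lookup. It is meant only to make the insert/lookup/remove contract concrete; as noted above, practical implementations use hash tables or search trees instead. All names are illustrative, and strdup is assumed available (it is standard in POSIX and C23).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One (key, value) pair in a singly linked list. */
typedef struct Node {
    char *key;
    int value;
    struct Node *next;
} Node;

/* Insert or put: any existing mapping for the key is overwritten. */
static void put(Node **map, const char *key, int value) {
    for (Node *n = *map; n; n = n->next)
        if (strcmp(n->key, key) == 0) { n->value = value; return; }
    Node *n = malloc(sizeof *n);
    n->key = strdup(key);
    n->value = value;
    n->next = *map;
    *map = n;
}

/* Lookup: return 1 and store the value if the key is bound, else 0. */
static int lookup(Node *map, const char *key, int *out) {
    for (; map; map = map->next)
        if (strcmp(map->key, key) == 0) { *out = map->value; return 1; }
    return 0;
}

/* Remove: unmap the given key if present. */
static void del(Node **map, const char *key) {
    for (; *map; map = &(*map)->next)
        if (strcmp((*map)->key, key) == 0) {
            Node *dead = *map;
            *map = dead->next;
            free(dead->key);
            free(dead);
            return;
        }
}

int main(void) {
    Node *map = NULL;
    put(&map, "one", 1);
    put(&map, "two", 2);
    put(&map, "one", 11);            /* overwrites the earlier mapping */
    int v;
    if (lookup(map, "one", &v)) printf("one -> %d\n", v);  /* one -> 11 */
    del(&map, "two");
    printf("two bound: %d\n", lookup(map, "two", &v));     /* 0 */
    return 0;
}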
https://en.wikipedia.org/wiki/John%20Vincent%20Atanasoff
John Vincent Atanasoff (October 4, 1903 – June 15, 1995) was an American physicist and inventor credited with inventing the first electronic digital computer. Atanasoff invented the first electronic digital computer in the 1930s at Iowa State College (now known as Iowa State University). Challenges to his claim were resolved in 1973 when the Honeywell v. Sperry Rand lawsuit ruled that Atanasoff was the inventor of the computer. His special-purpose machine has come to be called the Atanasoff–Berry Computer. Early life and education Atanasoff was born on October 4, 1903, in Hamilton, New York to an electrical engineer and a school teacher. Atanasoff's father, Ivan Atanasov, was of Bulgarian origin, born in 1876 in the village of Boyadzhik, close to Yambol, then in the Ottoman Empire. While Ivan Atanasov was still an infant, his own father was killed by Ottoman soldiers after the Bulgarian April Uprising. In 1889, Ivan immigrated to the United States with his uncle. John's father later became an electrical engineer, whereas his mother, Iva Lucena Purdy (of mixed French and Irish ancestry), was a teacher of mathematics. Atanasoff was raised in Brewster, Florida. Young Atanasoff's ambitions and intellectual pursuits were in part influenced by his parents, whose interests in the natural and applied sciences cultivated in him a sense of critical curiosity and confidence. At the age of nine, he learned to use a slide rule, followed shortly by the study of logarithms, and subsequently completed high school at Mulberry High School in two years. In 1925, Atanasoff received his Bachelor of Science degree in electrical engineering from the University of Florida. He continued his education at Iowa State College and in 1926 earned a master's degree in mathematics. He completed his formal education in 1930 by earning a PhD in theoretical physics from the University of Wisconsin–Madison with his thesis, The Dielectric Constant of Helium. Upon completion of his doctorate, Atanasoff accepted an assistant professorship at Iowa State College in mathematics and physics. Computer development Partly due to the drudgery of using the mechanical Monroe calculator, which was the best tool available to him while he was writing his doctoral thesis, Atanasoff began to search for faster methods of computation. At Iowa State, Atanasoff researched the use of slaved Monroe calculators and IBM tabulators for scientific problems, with which he controlled the Monroe using the output of an IBM tabulator. In 1936 he invented an analog calculator for analyzing surface geometry. At this point, he was pushing the boundaries of what gears could do and the fine mechanical tolerance required for good accuracy pushed him to consider digital solutions. With a grant of $650 received in September 1939 and the assistance of his graduate student Clifford Berry, the Atanasoff–Berry Computer (ABC) was prototyped by November of that year. According to Atanasoff, several operative principles of the ABC w
https://en.wikipedia.org/wiki/Nascom
The Nascom 1 and 2 were single-board computer kits issued in the United Kingdom in 1977 and 1979, respectively, based on the Zilog Z80 and including a keyboard and video interface, a serial port that could be used to store data on a tape cassette using the Kansas City standard, and two 8-bit parallel ports. At that time, including a full keyboard and video display interface was uncommon, as most microcomputer kits were then delivered with only a hexadecimal keypad and seven-segment display. To minimize cost, the buyer had to assemble a Nascom by hand-soldering about 3,000 joints on the single circuit board. Later on, a pre-built, cased machine named Nascom 3 was available; this used the Nascom 2 board. History The history of Nascom starts with the history of John A. Marshall. Marshall was the "& Son" of "A Marshall & Son (London) Ltd", an electronic component retailer whose adverts were a regular feature in hobby electronics magazines from as early as 1967. Marshall was a director of a company called Nasco Sales Ltd; a UK distributor of US semiconductors. He was also connected with a company called Lynx Electronics (London) Ltd. which had been a regular advertiser in the hobby electronics press since 1976. During a business trip to California in the Autumn of 1976, Marshall attended an amateur computer club meeting at Stanford University. On the flight home, he started to wonder whether there was a market in the UK for a kit computer. Marshall used the price of an SLR camera (about £200) as a reference point for the amount someone might be prepared to spend on a "hobby" purchase. At the end of 1976, Marshall attended a microprocessor seminar at Imperial College and met Phil Pitman. Pitman was the marketing manager for Mostek, which had recently become a second source for Zilog's Z80 processor. Pitman put Marshall in touch with a design consultant named Chris Shelton and, in the Spring of 1977, Marshall commissioned Shelton Instruments to design the Nascom 1. Most of the details of the Nascom design were described in a series of articles by Pitman that appeared in Wireless World between November 1977 and January 1979. By July 1977, monthly magazine adverts by Lynx Electronics were starting to hint about a microprocessor seminar in the autumn and a forthcoming computer product. On Saturday, 26 November 1977, Lynx Electronics launched the Nascom 1 at their "Home Microcomputer Symposium" at Wembley Conference Centre, London. Tickets cost £3.50 and hosting the event on a Saturday pitched it at an amateur/hobbyist rather than a professional audience. The event included a raffle for a Nascom 1 computer kit. About 550 people attended the symposium and over 300 kits were sold in the two weeks following the launch. The symposium was covered in detail in Issue 1 of PCW magazine and the Nascom 1 was the cover photograph for that issue (though not with the final keyboard). An article in that issue by K. S. Borland (another director of Nasco Sales Lt
https://en.wikipedia.org/wiki/Exokernel
Exokernel is an operating system kernel developed by the MIT Parallel and Distributed Operating Systems group, and also a class of similar operating systems. Operating systems generally present hardware resources to applications through high-level abstractions such as (virtual) file systems. The idea behind exokernels is to force as few abstractions as possible on application developers, enabling them to make as many decisions as possible about hardware abstractions. Exokernels are tiny, since functionality is limited to ensuring protection and multiplexing of resources, which is considerably simpler than conventional microkernels' implementation of message passing and monolithic kernels' implementation of high-level abstractions. Implemented abstractions are called library operating systems; they may request specific memory addresses, disk blocks, etc. The kernel only ensures that the requested resource is free, and the application is allowed to access it. This low-level hardware access allows the programmer to implement custom abstractions, and omit unnecessary ones, most commonly to improve a program's performance. It also allows programmers to choose what level of abstraction they want, high or low. Exokernels can be seen as an application of the end-to-end principle to operating systems, in that they do not force an application program to layer its abstractions on top of other abstractions that were designed with different requirements in mind. For example, in the MIT Exokernel project, the Cheetah web server stores preformatted Internet Protocol packets on the disk; the kernel provides safe access to the disk by preventing unauthorized reading and writing, but how the disk is abstracted is up to the application or the libraries the application uses. Motivation Traditionally kernel designers have sought to make individual hardware resources invisible to application programs by requiring the programs to interact with the hardware via some abstraction model. These models include file systems for disk storage, virtual address spaces for memory, schedulers for task management, and sockets for network communication. These abstractions of the hardware make it easier to write programs in general, but limit performance and stifle experimentation in new abstractions. A security-oriented application might need a file system that does not leave old data on the disk, while a reliability-oriented application might need a file system that keeps such data for failure recovery. One option is to remove the kernel completely and program directly to the hardware, but then the entire machine would be dedicated to the application being written (and, conversely, the entire application codebase would be dedicated to that machine). The exokernel concept is a compromise: let the kernel allocate the basic physical resources of the machine (e.g. disk blocks, memory pages, and processor time) to multiple application programs, and let each program decide what to d
https://en.wikipedia.org/wiki/L4%20microkernel%20family
L4 is a family of second-generation microkernels, used to implement a variety of types of operating systems (OS), though mostly for Unix-like, Portable Operating System Interface (POSIX) compliant types. L4, like its predecessor microkernel L3, was created by German computer scientist Jochen Liedtke as a response to the poor performance of earlier microkernel-based OSes. Liedtke felt that a system designed from the start for high performance, rather than other goals, could produce a microkernel of practical use. His original implementation in hand-coded Intel i386-specific assembly language code in 1993 sparked intense interest in the computer industry. Since its introduction, L4 has been developed to be cross-platform and to improve security, isolation, and robustness. There have been various re-implementations of the original binary L4 kernel application binary interface (ABI) and its successors, including L4Ka::Pistachio (implemented by Liedtke and his students at Karlsruhe Institute of Technology), L4/MIPS (University of New South Wales (UNSW)), Fiasco (Dresden University of Technology (TU Dresden)). For this reason, the name L4 has been generalized and no longer refers to only Liedtke's original implementation. It now applies to the whole microkernel family including the L4 kernel interface and its different versions. L4 is widely deployed. One variant, OKL4 from Open Kernel Labs, shipped in billions of mobile devices. Design paradigm Specifying the general idea of a microkernel, Liedtke states: A concept is tolerated inside the microkernel only if moving it outside the kernel, i.e., permitting competing implementations, would prevent the implementation of the system's required functionality. In this spirit, the L4 microkernel provides few basic mechanisms: address spaces (abstracting page tables and providing memory protection), threads and scheduling (abstracting execution and providing temporal protection), and inter-process communication (for controlled communication across isolation boundaries). An operating system based on a microkernel like L4 provides services as servers in user space that monolithic kernels like Linux or older generation microkernels include internally. For example, to implement a secure Unix-like system, servers must provide the rights management that Mach included inside the kernel. History The poor performance of first-generation microkernels, such as Mach, led a number of developers to re-examine the entire microkernel concept in the mid-1990s. The asynchronous in-kernel-buffering process communication concept used in Mach turned out to be one of the main reasons for its poor performance. This induced developers of Mach-based operating systems to move some time-critical components, like file systems or drivers, back inside the kernel. While this somewhat ameliorated the performance issues, it plainly violates the minimality concept of a true microkernel (and squanders their major advantages). Detailed
https://en.wikipedia.org/wiki/Maxima%20%28software%29
Maxima () is a computer algebra system (CAS) based on a 1982 version of Macsyma. It is written in Common Lisp and runs on all POSIX platforms such as macOS, Unix, BSD, and Linux, as well as under Microsoft Windows and Android. It is free software released under the terms of the GNU General Public License (GPL). History Maxima is based on a 1982 version of Macsyma, which was developed at MIT with funding from the United States Department of Energy and other government agencies. A version of Macsyma was maintained by Bill Schelter from 1982 until his death in 2001. In 1998, Schelter obtained permission from the Department of Energy to release his version under the GPL. That version, now called Maxima, is maintained by an independent group of users and developers. Maxima does not include any of the many modifications and enhancements made to the commercial version of Macsyma during 1982–1999. Though the core functionality remains similar, code depending on these enhancements may not work on Maxima, and bugs which were fixed in Macsyma may still be present in Maxima, and vice versa. Maxima participated in Google Summer of Code in 2019 under International Neuroinformatics Coordinating Facility. Symbolic calculations Like most computer algebra systems, Maxima supports a variety of ways of reorganizing symbolic algebraic expressions, such as polynomial factorization, polynomial greatest common divisor calculation, expansion, separation into real and imaginary parts, and transformation of trigonometric functions to exponential and vice versa. It has a variety of techniques for simplifying algebraic expressions involving trigonometric functions, roots, and exponential functions. It can calculate symbolic antiderivatives ("indefinite integrals"), definite integrals, and limits. It can derive closed-form series expansions as well as terms of Taylor-Maclaurin-Laurent series. It can perform matrix manipulations with symbolic entries. Maxima is a general-purpose system, and special-case calculations such as factorization of large numbers, manipulation of extremely large polynomials, etc. are sometimes better done in specialized systems. Numeric calculations Maxima specializes in symbolic operations, but it also offers numerical capabilities such as arbitrary-precision integer, rational number, and floating-point numbers, limited only by space and time constraints. Programming Maxima includes a complete programming language with ALGOL-like syntax but Lisp-like semantics. It is written in Common Lisp and can be accessed programmatically and extended, as the underlying Lisp can be called from Maxima. It uses gnuplot for drawing. For calculations using floating point and arrays heavily, Maxima has translators from the Maxima language to other programming languages (notably Fortran), which may execute more efficiently. Interfaces Various graphical user interfaces (GUIs) are available for Maxima: wxMaxima is a graphical front-end using wxWidgets. There is
https://en.wikipedia.org/wiki/Gerald%20Jay%20Sussman
Gerald Jay Sussman (born February 8, 1947) is the Panasonic Professor of Electrical Engineering at the Massachusetts Institute of Technology (MIT). He has been involved in artificial intelligence (AI) research at MIT since 1964. His research has centered on understanding the problem-solving strategies used by scientists and engineers, with the goals of automating parts of the process and formalizing it to provide more effective methods of science and engineering education. Sussman has also worked in computer languages, in computer architecture, and in Very Large Scale Integration (VLSI) design. Education Sussman attended the Massachusetts Institute of Technology as an undergraduate and received his SB in mathematics in 1968. He continued his studies at MIT and obtained a PhD in 1973, also in mathematics, under the supervision of Seymour Papert. His doctoral thesis was titled "A Computational Model of Skill Acquisition", focusing on artificial intelligence and machine learning, using a computational performance model named HACKER. Academic work Sussman is a coauthor (with Hal Abelson and Julie Sussman) of the introductory computer science textbook Structure and Interpretation of Computer Programs. It was used at MIT for several decades, and has been translated into several languages. Sussman's contributions to artificial intelligence include problem solving by debugging almost-right plans, propagation of constraints applied to electrical circuit analysis and synthesis, dependency-based explanation and dependency-based backtracking, and various language structures for expressing problem-solving strategies. Sussman and his former student, Guy L. Steele Jr., invented the programming language Scheme in 1975. Sussman saw that artificial intelligence ideas could be applied to computer-aided design (CAD). Sussman developed, with his graduate students, sophisticated computer-aided design tools for Very Large Scale Integration (VLSI). Steele made the first Scheme chips in 1978. These ideas and the AI-based CAD technology to support them were further developed in the Scheme chips of 1979 and 1981. The technique and experience developed were then used to design other special-purpose computers. Sussman was the principal designer of the Digital Orrery, a machine designed to do high-precision integrations for orbital mechanics experiments. The Orrery hardware was designed and built by a few people in a few months, using AI-based simulation and compiling tools. Using the Digital Orrery, Sussman has worked with Jack Wisdom to discover numerical evidence for chaotic motions in the outer planets. The Digital Orrery machine is now retired at the Smithsonian Institution in Washington, DC. Sussman was also the lead designer of the Supercomputer Toolkit, another multiprocessor computer optimized for evolving systems of ordinary differential equations. The Supercomputer Toolkit was used by Sussman and Wisdom to confirm and extend the discoveries made with the Digital Orrery t
https://en.wikipedia.org/wiki/Blow%20%28film%29
Blow is a 2001 American biographical crime drama film directed by Ted Demme, about an American cocaine kingpin and his international network. David McKenna and Nick Cassavetes adapted Bruce Porter's 1993 book Blow: How a Small Town Boy Made $100 Million with the Medellín Cocaine Cartel and Lost It All for the screenplay. It is based on the real-life stories of U.S. drug trafficker George Jung (played by Johnny Depp) and his connections including narcotics kings Pablo Escobar and Carlos Lehder Rivas (portrayed in the film as Diego Delgado), and the Medellín Cartel. Plot George Jung and his parents Fred and Ermine live in Weymouth, Massachusetts. When George is 10 years old, Fred files for bankruptcy, but tries to make George realize that money is not important. In the late 1960s, an adult George moves to Los Angeles with his friend "Tuna"; they meet Barbara, a flight attendant, who introduces them to Derek Foreal, a marijuana dealer. With Derek's help, George and Tuna make a lot of money. Kevin Dulli, a visiting college student from Boston, tells them of the demand for marijuana back home. They start selling marijuana there, buying marijuana directly from Mexico with the help of Santiago Sanchez, a Mexican drug lord. Two years later, George is caught in Chicago trying to import a shipment of marijuana and is sentenced to two years' imprisonment. After unsuccessfully trying to plead his innocence, George skips bail to take care of Barbara, who dies from cancer. Her death marks the disbanding of the group of friends. While hiding from the authorities, George visits his parents. George's mother calls the police, who arrest him. He is sentenced to 26 months in a federal prison in Danbury, Connecticut. His cellmate Diego Delgado has contacts in the Medellín cartel and convinces George to help him go into the cocaine business. Upon his release from prison, George violates his parole conditions and heads down to Cartagena, Colombia to meet with Diego. They meet with cartel officer Cesar Rosa to negotiate the terms for smuggling cocaine to establish "good faith". As the smuggling operation grows, Diego is arrested, leaving George to find a way to sell the cocaine. George reconnects with Derek in California, and the two sell all the cocaine. George then goes to Medellín, Colombia and meets Pablo Escobar, who agrees to go into business with them. With the help of Derek, the pair become Escobar's top U.S. importer. At Diego's wedding, George meets Cesar's fiancée Mirtha and later marries her. However, Diego resents George for keeping Derek's identity secret and pressures George to reveal his connection. George eventually discovers that Diego has betrayed him by cutting him out of the connection with Derek. Inspired by the birth of his daughter and a drug-related heart attack, George severs his relationship with the cartel. All goes well with George's newfound civilian life for five years, until Mirtha organizes a 38th birthday party for him. Many of his former drug associates atte
https://en.wikipedia.org/wiki/Jochen%20Hippel
Andreas Jochen Hippel (born October 14, 1971) is a musician from Kirchheimbolanden in southwest Germany. He played one of the most prominent roles in computer music during the 16-bit microcomputer era, composing hundreds of tunes for games and demos. He was also an experienced Amiga programmer and ported many of Thalion Software's Atari ST titles. He no longer composes music for a living and in 2006 he was working in logistics. Jochen's first computer music was a set of Christmas songs that he arranged in a rock style on his school's Commodore 64. As a member of The Exceptions under the handle Mad Max, he wrote most of the music for their demos including the B.I.G. Demo (Best in Galaxy). The demo was essentially a large collection of C64 tunes that was ported across to the Atari ST's Yamaha YM2149 sound chip using Jochen's own driver to get the most out of it. Jochen then had to fix all the music in order to get it to sound correct on the ST, as the YM2149 has no resonance filter, no oscillator sync, no combined waveforms, no ADSR enveloping and no ring modulation, and it has fewer waveforms than the MOS Technology 6581. Composers (such as Rob Hubbard) used a lot of special effects in their music which were difficult to replicate on the ST's sound chip. Another note of interest is that the B.I.G. demo contained an additional demo screen entitled "The Digital Department" containing 6 digital versions of C64 tunes. The sound routine used each channel of the YM2149 as a 4-bit DAC and played samples for each instrument. This was the first time music using PCM sample instruments was heard on the Atari ST; unfortunately, only one more piece of music was ever written using this routine: the 16-minute-long Knuckle Busters tune by Rob Hubbard. This appears as a guest screen in the Cuddly Demos (written by The Carebears) and was used to torment Richard Karsmakers of ST News, who was promptly chained to a chair as the disk was formatted before his very eyes! He worked as a freelance musician, doing music for many 16-bit games. He eventually joined Thalion Software as a programmer and musician. His musical track for the game Amberstar is considered among his best works, and the game and Hippel's music acquired a cult following. For in-game music on the Amiga, Jochen often stuck to a chiptune-like sound – which became his trademark – instead of using the more "realistic" instrument sounds that the machine's support for digitized sound made possible. The title tunes for Wings of Death and Lethal Xcess are exceptions. He has released an album called Give it a Try and has composed music for other albums, including tracks on the Amiga Immortal albums Immortal 2 and Immortal 3. Hippel was also a programmer; he created all of his own music tools and also ported most of Thalion's early Atari ST titles to the Amiga. Hippel also created the Amiga 7 voice replay routine which was used in several Thalion and Eclipse titles and later used by Chris Hülsbeck in his TFMX replay routine for the t
https://en.wikipedia.org/wiki/Direct3D
Direct3D is a graphics application programming interface (API) for Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional graphics in applications where performance is important, such as games. Direct3D uses hardware acceleration if it is available on the graphics card, allowing for hardware acceleration of the entire 3D rendering pipeline or even only partial acceleration. Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color blending, mipmapping, texture blending, clipping, culling, atmospheric effects, perspective-correct texture mapping, programmable HLSL shaders and effects. Integration with other DirectX technologies enables Direct3D to deliver such features as video mapping, hardware 3D rendering in 2D overlay planes, and even sprites, providing the use of 2D and 3D graphics in interactive media titles. Direct3D contains many commands for 3D computer graphics rendering; however, since version 8, Direct3D has superseded the DirectDraw framework and also taken responsibility for the rendering of 2D graphics. Microsoft strives to continually update Direct3D to support the latest technology available on 3D graphics cards. Direct3D offers full vertex software emulation but no pixel software emulation for features not available in hardware. For example, if software programmed using Direct3D requires pixel shaders and the video card on the user's computer does not support that feature, Direct3D will not emulate it, although it will compute and render the polygons and textures of the 3D models, albeit at a usually degraded quality and performance compared to the hardware equivalent. The API does include a Reference Rasterizer (or REF device), which emulates a generic graphics card in software, although it is too slow for most real-time 3D applications and is typically only used for debugging. A new real-time software rasterizer, WARP, designed to emulate the complete feature set of Direct3D 10.1, is included with Windows 7 and Windows Vista Service Pack 2 with the Platform Update; its performance is said to be on par with lower-end 3D cards on multi-core CPUs. As part of DirectX, Direct3D is available for Windows 95 and above, and is the base for the vector graphics API on the different versions of Xbox console systems. The Wine compatibility layer, a free software reimplementation of several Windows APIs, includes an implementation of Direct3D. Direct3D's main competitor is Khronos' OpenGL and its follow-on Vulkan. Fahrenheit was an attempt by Microsoft and SGI to unify OpenGL and Direct3D in the 1990s, but was eventually canceled. Overview Direct3D 6.0 – Multitexturing Direct3D 7.0 – Hardware Transformation, Clipping and Lighting (TCL/T&L) Direct3D 8.0 – Pixel Shader 1.0/1.1 & Vertex Shader 1.0/1.1 Direct3D 8.1 – Pixel Shader 1.2/1.3/1.4 Direct3D 9.0 – Shader Model 2.0 (Pixel Shader 2.0 & Ve
https://en.wikipedia.org/wiki/Breadth-first%20search
Breadth-first search (BFS) is an algorithm for searching a tree data structure for a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. Extra memory, usually a queue, is needed to keep track of the child nodes that were encountered but not yet explored. For example, in a chess endgame, a chess engine may build the game tree from the current position by applying all possible moves and use breadth-first search to find a win position for White. Implicit trees (such as game trees or other problem-solving trees) may be of infinite size; breadth-first search is guaranteed to find a solution node if one exists. In contrast, (plain) depth-first search, which explores the node branch as far as possible before backtracking and expanding other nodes, may get lost in an infinite branch and never make it to the solution node. Iterative deepening depth-first search avoids the latter drawback at the price of exploring the tree's top parts over and over again. On the other hand, both depth-first algorithms get along without extra memory. Breadth-first search can be generalized to both undirected graphs and directed graphs with a given start node (sometimes referred to as a 'search key'). In state space search in artificial intelligence, repeated searches of vertices are often allowed, while in theoretical analysis of algorithms based on breadth-first search, precautions are typically taken to prevent repetitions. BFS and its application in finding connected components of graphs were invented in 1945 by Konrad Zuse, in his (rejected) Ph.D. thesis on the Plankalkül programming language, but this was not published until 1972. It was reinvented in 1959 by Edward F. Moore, who used it to find the shortest path out of a maze, and later developed by C. Y. Lee into a wire routing algorithm (published in 1961). Pseudocode Input: A graph G and a starting vertex root of G. Output: Goal state. The parent links trace the shortest path back to root.
 1  procedure BFS(G, root) is
 2      let Q be a queue
 3      label root as explored
 4      Q.enqueue(root)
 5      while Q is not empty do
 6          v := Q.dequeue()
 7          if v is the goal then
 8              return v
 9          for all edges from v to w in G.adjacentEdges(v) do
10              if w is not labeled as explored then
11                  label w as explored
12                  w.parent := v
13                  Q.enqueue(w)
More details This non-recursive implementation is similar to the non-recursive implementation of depth-first search, but differs from it in two ways: it uses a queue (First In First Out) instead of a stack and it checks whether a vertex has been explored before enqueueing the vertex rather than delaying this check until the vertex is dequeued from the queue. If G is a tree, replacing the queue of this breadth-first search algorithm with a stack will yield a
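For concreteness, here is one possible C rendering of the pseudocode above, using adjacency lists and an array-backed FIFO queue; it records only the parent links rather than testing for a goal. The Graph structure and all names are illustrative, not part of any standard API.

#include <stdio.h>
#include <stdlib.h>

/* Adjacency lists for a graph with n vertices numbered 0..n-1. */
typedef struct { int n; int **adj; int *deg; } Graph;

/* BFS from root: labels vertices explored and records parent links,
   mirroring lines 1-13 of the pseudocode (without a goal test). */
static void bfs(const Graph *g, int root, int *parent) {
    char *explored = calloc(g->n, 1);
    int *queue = malloc(g->n * sizeof *queue);   /* simple FIFO */
    int head = 0, tail = 0;
    explored[root] = 1;
    parent[root] = -1;
    queue[tail++] = root;                        /* enqueue root */
    while (head < tail) {
        int v = queue[head++];                   /* dequeue */
        for (int i = 0; i < g->deg[v]; ++i) {
            int w = g->adj[v][i];
            if (!explored[w]) {
                explored[w] = 1;
                parent[w] = v;   /* parent links trace shortest paths */
                queue[tail++] = w;
            }
        }
    }
    free(explored); free(queue);
}

int main(void) {
    /* Path graph 0-1-2-3 as a tiny demonstration. */
    int a0[] = {1}, a1[] = {0, 2}, a2[] = {1, 3}, a3[] = {2};
    int *adj[] = {a0, a1, a2, a3};
    int deg[] = {1, 2, 2, 1};
    Graph g = {4, adj, deg};
    int parent[4];
    bfs(&g, 0, parent);
    for (int v = 0; v < 4; ++v) printf("parent[%d] = %d\n", v, parent[v]);
    return 0;
}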
https://en.wikipedia.org/wiki/Depth-first%20search
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. Extra memory, usually a stack, is needed to keep track of the nodes discovered so far along a specified branch, which helps in backtracking through the graph. A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux as a strategy for solving mazes. Properties The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time O(|V| + |E|), where |V| is the number of vertices and |E| the number of edges. This is linear in the size of the graph. In these applications it also uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search and the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce. For applications of DFS in relation to specific domains, such as searching for solutions in artificial intelligence or web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from non-termination). In such cases, search is only performed to a limited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. For such applications, DFS also lends itself much better to heuristic methods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori, iterative deepening depth-first search applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level. DFS may also be used to collect a sample of graph nodes. However, incomplete DFS, similarly to incomplete BFS, is biased towards nodes of high degree. Example For the following gr
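As a companion to the discussion above, the following is a minimal recursive DFS sketch in C over an adjacency matrix; the call stack plays the role of the explicit stack, and the visited array is the set of already-visited vertices mentioned in the analysis. Names and the tiny example graph are illustrative.

#include <stdio.h>

enum { N = 4 };

/* Recursive depth-first search: explore as far as possible along
   each branch before backtracking. */
static void dfs(const int adj[N][N], int v, int visited[N]) {
    visited[v] = 1;
    printf("visit %d\n", v);
    for (int w = 0; w < N; ++w)
        if (adj[v][w] && !visited[w])
            dfs(adj, w, visited);  /* recursion = implicit stack */
}

int main(void) {
    /* Path graph 0-1-2-3. */
    int adj[N][N] = {
        {0,1,0,0},
        {1,0,1,0},
        {0,1,0,1},
        {0,0,1,0},
    };
    int visited[N] = {0};
    dfs(adj, 0, visited);
    return 0;
}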
https://en.wikipedia.org/wiki/Bucket%20sort
Bucket sort, or bin sort, is a sorting algorithm that works by distributing the elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. It is a distribution sort, a generalization of pigeonhole sort that allows multiple keys per bucket, and is a cousin of radix sort in the most-to-least significant digit flavor. Bucket sort can be implemented with comparisons and therefore can also be considered a comparison sort algorithm. The computational complexity depends on the algorithm used to sort each bucket, the number of buckets to use, and whether the input is uniformly distributed. Bucket sort works as follows: Set up an array of initially empty "buckets". Scatter: Go over the original array, putting each object in its bucket. Sort each non-empty bucket. Gather: Visit the buckets in order and put all elements back into the original array. Pseudocode function bucketSort(array, k) is
    buckets ← new array of k empty lists
    M ← 1 + the maximum key value in the array
    for i = 0 to length(array) - 1 do
        insert array[i] into buckets[floor(k × array[i] / M)]
    for i = 0 to k - 1 do
        nextSort(buckets[i])
    return the concatenation of buckets[0], ...., buckets[k - 1]
Let array denote the array to be sorted and k denote the number of buckets to use. One can compute the maximum key value in linear time by iterating over all the keys once. The floor function must be used to convert a floating number to an integer (and possibly casting of data types too). The function nextSort is a sorting function used to sort each bucket. Conventionally, insertion sort is used, but other algorithms could be used as well, such as selection sort or merge sort. Using bucketSort itself as nextSort produces a relative of radix sort; in particular, the case n = 2 corresponds to quicksort (although potentially with poor pivot choices). Analysis Worst-case analysis When the input contains several keys that are close to each other (clustering), those elements are likely to be placed in the same bucket, which results in some buckets containing more elements than average. The worst-case scenario occurs when all the elements are placed in a single bucket. The overall performance would then be dominated by the algorithm used to sort each bucket, for example insertion sort or comparison sort algorithms, such as merge sort. Average-case analysis Consider the case that the input is uniformly distributed. The first step, which is to initialize the buckets and find the maximum key value in the array, can be done in O(n) time. If division and multiplication can be done in constant time, then scattering each element to its bucket also costs O(n). Assume insertion sort is used to sort each bucket, then the third step costs O(∑ n_i²), where n_i is the length of the bucket indexed i. Since we are concerned with the average time, the expectation E(n_i²) has to be e
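The pseudocode can be made concrete as follows; this is a hedged C sketch for keys uniformly drawn from [0, 1) (so M = 1), using insertion sort as nextSort and a deliberately simple worst-case allocation of n slots per bucket. Function names are illustrative.

#include <stdio.h>
#include <stdlib.h>

/* Sort a bucket in place with insertion sort, the conventional nextSort. */
static void insertion_sort(double *a, size_t n) {
    for (size_t i = 1; i < n; ++i) {
        double key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}

/* Bucket sort for keys in [0, 1): scatter into k buckets, sort each,
   then gather back into the original array. */
static void bucket_sort(double *a, size_t n, size_t k) {
    double **bucket = calloc(k, sizeof *bucket);
    size_t *len = calloc(k, sizeof *len);
    for (size_t i = 0; i < k; ++i)
        bucket[i] = malloc(n * sizeof **bucket);  /* simple worst-case sizing */
    for (size_t i = 0; i < n; ++i) {              /* scatter */
        size_t b = (size_t)(k * a[i]);            /* floor(k * key / M), M = 1 */
        bucket[b][len[b]++] = a[i];
    }
    size_t out = 0;
    for (size_t b = 0; b < k; ++b) {              /* sort and gather */
        insertion_sort(bucket[b], len[b]);
        for (size_t i = 0; i < len[b]; ++i) a[out++] = bucket[b][i];
        free(bucket[b]);
    }
    free(bucket); free(len);
}

int main(void) {
    double a[] = {0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68};
    bucket_sort(a, 10, 10);
    for (int i = 0; i < 10; ++i) printf("%.2f ", a[i]);
    printf("\n");
    return 0;
}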
https://en.wikipedia.org/wiki/Digital%20image%20processing
Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. The generation and development of digital image processing are mainly affected by three factors: first, the development of computers; second, the development of mathematics (especially the creation and improvement of discrete mathematics theory); third, the demand for a wide range of applications in environment, agriculture, military, industry and medical science has increased. History Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s, at Bell Laboratories, the Jet Propulsion Laboratory, Massachusetts Institute of Technology, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement. The purpose of early image processing was to improve the quality of the image; it was aimed at improving images' visual effect for human viewers. In image processing, the input is a low-quality image, and the output is an image with improved quality. Common image processing tasks include image enhancement, restoration, encoding, and compression. The first successful application was at the American Jet Propulsion Laboratory (JPL), which used image processing techniques such as geometric correction, gradation transformation, noise removal, etc. on the thousands of lunar photos sent back by the Space Detector Ranger 7 in 1964, taking into account the position of the Sun and the environment of the Moon. The successful mapping of the Moon's surface by computer was a great success. Later, more complex image processing was performed on the nearly 100,000 photos sent back by the spacecraft, so that the topographic map, color map and panoramic mosaic of the Moon were obtained, which achieved extraordinary results and laid a solid foundation for human landing on the Moon. The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. This led to images being processed in real-time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers a
https://en.wikipedia.org/wiki/Z88DK
Z88DK is a Small-C-derived cross compiler for a long list of Z80 based computers. The name derives from the fact that it was originally developed to target the Cambridge Z88. Z88DK is much developed from Small-C and it accepts many features of ANSI C with the notable exception of multi-dimensional arrays and prototyped function pointers. Later versions also support SDCC as a compiler. It has been used for many software and hardware projects, notably by the REX DK (targeted to the REX 6000 platform) and the S1 SDK (targeted to the S1 MP3 Player) teams. The compiler is highly portable, and is known to run on AmigaOS, BeOS, HP-UX 9, Linux, BSD, Mac OS X, Solaris, Win64, Win32, Win16 and MS-DOS. Supported target platforms Z88DK supports the following target platforms: Amstrad CPC Amstrad NC100 Amstrad NC200 Cambridge Z88 Camputers Lynx Canon X-07 Casio PV-1000 Casio PV-2000 CCE MC-1000 Commodore 128 (in Z80 mode) CP/M based machines EACA Colour Genie EG2000 Enterprise 64 and 128 Epson PX-4 Epson PX-8 Exidy Sorcerer Galaksija Grundy NewBrain Jupiter Ace Lambda 8300 Luxor ABC 80 Luxor ABC 800 Master System Mattel Aquarius Memotech MTX MSX Nascom 1 and 2 NEC PC-6001 NEC PC-8801 Pac-Man arcade hardware Philips P2000 Philips VG5000 C7420 module for the Philips Videopac + G7400 Rabbit 2000/3000/4000 platform SAM Coupé SG-1000 Sharp MZ series Sharp OZ/QZ 700 family palmtop organizers Sharp X1 Sord M5 S-OS Spectravideo SVI Peters Plus Sprinter Tatung Einstein TI calculators (TI-82, TI-83 series, TI-84 Plus series, TI-85, TI-86) Timex Sinclair 2068 Toshiba Pasopia 7 TRS 80 VTech VZ200/300 (also known as Laser 200) Xircom REX 6000 (also known as DataSlim) ZX Spectrum ZX80 ZX81 See also Retargetable compiler Microcontroller SDCC References External links Z88DK Main website Z88DK Documentation C (programming language) compilers Cross-compilers Z80
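By way of illustration, a minimal program for a Z88DK target looks like ordinary C:

/* hello.c - minimal program for a Z88DK target */
#include <stdio.h>

int main(void)
{
    printf("Hello from the Z80!\n");
    return 0;
}

Assuming a standard z88dk installation, it can then be cross-compiled for, say, the ZX Spectrum with a command along the lines of zcc +zx -create-app -o hello hello.c, where +zx selects the target and -create-app requests a loadable tape image. Exact flags and library options vary between z88dk releases, so treat this invocation as indicative rather than definitive.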
https://en.wikipedia.org/wiki/TMS
TMS may refer to:

Broadcasting
TMS (entertainment data), data provider
Test Match Special, BBC cricket coverage
This Movie Sucks!, a Canadian TV show on bad movies
That Metal Show, a US TV talk show

Organizations
The Minerals, Metals & Materials Society, a professional organization
Texas Memory Systems, a manufacturer of solid-state drives
TMS Entertainment (formerly Tokyo Movie Shinsha Co., Ltd.), a Japanese animation studio
Toronto Montessori Schools, Richmond Hill, Ontario, Canada
Toyota Motor Sales, U.S.A., Inc.
Trinity Mathematical Society, UK
TMS (production team), an English songwriting and record team
The Micropalaeontological Society

Schools
Tech Music School, London, UK
Temasek Secondary School, Bedok, Singapore
The Master's Seminary, Sun Valley, California, US
Tyrrell Middle School, Wolcott, Connecticut, US

People
T. M. Soundararajan (1922–2013), an Indian singer

Places
São Tomé International Airport, IATA code

Science and medicine

Chemistry
Tetramethylsilane, an organosilicon chemical compound
Trimethylsilanol, an organosilicon chemical compound
Trimethylsilyl, a functional group in chemistry
Tricaine methanesulfonate or tricaine mesylate, an anesthetic for fish

Medicine and psychology
Tension myositis syndrome, a medical condition causing pain
Total motile spermatozoa, in semen analysis
Transcranial magnetic stimulation, in neuroscience
Tandem mass spectrometry, used to analyse biomolecules

Sports
Texas Motor Speedway, Fort Worth, Texas, US
TMS Ringsted, a Danish handball club

Technology
SAP Transport management system, managing software updates
Tape management system, for computer backups
Tile Map Service, a standard for downloading maps
Translation management system, software
Transportation management system
Traffic management system
Training Management System

Other uses
Tesla Model S, a battery electric luxury executive car
Truth maintenance system, a knowledge representation method
Tetsudō Mokei Shumi (Hobby of Model Railroading), a Japanese magazine
Tuen Mun South station, Hong Kong; MTR station code
The Most Mysterious Song on the Internet, commonly known by the abbreviation TMS
https://en.wikipedia.org/wiki/Small-C
Small-C is both a subset of the C programming language, suitable for resource-limited microcomputers and embedded systems, and an implementation of that subset. Originally valuable as an early compiler for microcomputer systems available during the late 1970s and early 1980s, the implementation has also been useful as an example simple enough for teaching purposes.

The original compiler, written in Small-C for the Intel 8080 by Ron Cain, appeared in the May 1980 issue of Dr. Dobb's Journal of Computer Calisthenics & Orthodontia. James E. Hendrix improved and extended the original compiler, and wrote The Small-C Handbook. Cain bootstrapped Small-C on the SRI International PDP-11/45 Unix system with an account provided by John Bass for Small-C development. The provided source code was released, with management permission, into the public domain.

Small-C was important for tiny computers in a manner somewhat analogous to the importance of GCC for larger computers. Just like its Unix counterparts, the compiler generates assembler code, which must then be translated to machine code by an available assembler.

Small-C is a retargetable compiler. Porting Small-C requires only that the back-end code generator and the library's operating-system interface calls be rewritten for the target processor.

Language subset

"In May of 1980 Dr. Dobb's Journal ran an article entitled "A Small C Compiler for the 8080s" in which Ron Cain presented a small compiler for a subset of the C language. The most interesting feature of the compiler besides its small size was the language in which it was written—the one it compiled. It was a self-compiler! (Although this is commonplace today, it was a fairly novel idea at the time.) With a simple, one-pass algorithm, his compiler generated assembly language for the 8080 processor. Being small, however, it had its limitations. It recognized only characters, integers, and single dimension arrays of either type. The only loop controlling device was the while statement. There were no Boolean operators, so the bitwise logical operators & (AND) and | (OR) were used instead. But even with these limitations, it was a very capable language and a delight to use, especially compared to assembly language. Recognizing the need for improvements, Ron encouraged me to produce a second version, and in December 1982 it also appeared in Dr. Dobb's Journal. The new compiler augmented Small C with (1) code optimizing, (2) data initializing, (3) conditional compiling, (4) the extern storage class, (5) the for, do/while, switch, and goto statements, (6) combination assignment operators, (7) Boolean operators, (8) the one's complement operator, (9) block local variables, and (10) various other features. Then in 1984 Ernest Payne and I developed and published a CP/M compatible run-time library for the compiler. It consisted of over 80 functions and included most of those in the UNIX C Standard I/O Library—the ones that pertained to the CP/M environment."
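To give a flavor of the 1980 subset described in the quotation (characters, integers, single-dimension arrays, while as the only loop, and bitwise operators standing in for Boolean ones), here is a small illustrative fragment written within those limits; it is our own example, not code from the original article.

    /* Count the blanks in a NUL-terminated line using only constructs
       the original 1980 Small-C accepted: char, int, single-dimension
       arrays, and while loops. */
    char line[81];

    int count_blanks()
    {
        int i, n;
        i = 0;
        n = 0;
        while (line[i] != 0) {       /* 'for' only arrived in the 1982 version */
            if (line[i] == ' ')
                n = n + 1;           /* no '+=' in the first Small-C */
            i = i + 1;
        }
        return n;
    }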
https://en.wikipedia.org/wiki/KIM-1
The KIM-1, short for Keyboard Input Monitor, is a small 6502-based single-board computer developed and produced by MOS Technology, Inc. and launched in 1976. It was very successful in that period, due to its low price (thanks to the inexpensive 6502 microprocessor) and easy-access expandability.

History

MOS Technology's first processor, the 6501, could be plugged into existing motherboards that used the Motorola 6800, allowing potential users (i.e. engineers and hobbyists) to get a development system up and running very easily using existing hardware. Motorola immediately sued, forcing MOS to pull the 6501 from the market. Changing the pin layout produced the "lawsuit-friendly" 6502. Otherwise identical to the 6501, it nevertheless had the disadvantage of having no machine in which new users could quickly start playing with the CPU. Chuck Peddle, leader of the 650x group at MOS (and former member of Motorola's 6800 team), designed the KIM-1 in order to fill this need.

The KIM-1 came to market in 1976. While the machine was originally intended to be used by engineers, it quickly found a large audience with hobbyists. The computer itself was inexpensive, and a complete system could be assembled cheaply by adding a power supply, a secondhand terminal and a cassette tape drive. Many books were available demonstrating small assembly language programs for the KIM, including The First Book of KIM by Jim Butterfield et al. One demo program converted the KIM into a music box by toggling a software-controllable output bit connected to a small loudspeaker. Canadian programmer Peter R. Jennings produced what was probably the first game for microcomputers to be sold commercially, Microchess, originally for the KIM-1.

As the system became more popular, one of the common additions was the Tiny BASIC programming language. This required an easy memory expansion; "all of the decoding for the first 4 K is provided right on the KIM board. All you need to provide is 4 K more of RAM chips and some buffers." The hard part was loading the BASIC from cassette tape—a 15-minute, error-prone ordeal.

Rockwell International—who second-sourced the 6502, along with Synertek—released their own single-board microcomputer in 1978, the AIM-65. The AIM included a full ASCII keyboard, a 20-character 14-segment alphanumeric LED display, and a small cash register-like printer. A debug monitor was provided as standard firmware for the AIM, and users could also purchase optional ROM chips containing an assembler and a Microsoft BASIC interpreter. Finally, there was the Synertek SYM-1 variant, which could be said to be a machine halfway between the KIM and the AIM; it had the KIM's small display, and a simple membrane keyboard of 29 keys (hex digits and control keys only), but provided AIM-standard expansion interfaces and true RS-232 (voltage level as well as current loop mode supported).

Description

The KIM-1 consisted of a single printed circuit board.
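The music-box demo described above worked by toggling a software-controlled output bit fast enough to make a loudspeaker produce a square wave. The C sketch below conveys the idea only: the real demo was a few lines of 6502 assembly, and the port address, bit mask and timing constants here are illustrative stand-ins rather than documented KIM-1 values.

    #include <stdint.h>

    /* Hypothetical memory-mapped output port; on a real KIM-1 the
       equivalent would be a bit of a 6530 I/O register driven from
       6502 assembly. */
    #define OUTPUT_PORT ((volatile uint8_t *)0x1700)
    #define SPEAKER_BIT 0x01

    /* Produce a square wave by flipping one output bit: half_period
       busy-wait iterations set the pitch, cycles sets the duration. */
    void play_tone(unsigned half_period, unsigned cycles)
    {
        while (cycles--) {
            *OUTPUT_PORT ^= SPEAKER_BIT;          /* toggle the speaker bit */
            for (volatile unsigned i = 0; i < half_period; i++)
                ;                                 /* crude delay loop */
        }
    }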
https://en.wikipedia.org/wiki/Nunet
nunet was a global provider of mobile and IPTV video management technologies for broadcasters, media brands and Mobile Network Operators. Its Mobile TV CMS product provided broadcasters with the ability to aggregate, encode, optimise and schedule live, looped and VOD channels for broadcast to mobile devices and IPTV platforms.

The company's Mobile TV CMS was in use by a wide range of Mobile Network Operators globally, including the Vodafone Group, Vodacom, SFR, Proximus and Mobilkom A1, distributing content from a wide range of broadcasters and producers including Eurosport, MTV, Discovery and Fashion TV.

Founded in 1997, nunet AG was acquired by IMG in 2006 and later, in 2009, by KIT Digital, Inc. It was based in Cologne, Germany and London, UK.

External links
Nunet AG Official site
Reallife Official site
MobiTV Official site
IMG

See also
GoTV Networks
Mobile telephone broadcasting
Streaming television
https://en.wikipedia.org/wiki/Surveyor%201
Surveyor 1 was the first lunar soft-lander in the uncrewed Surveyor program of the National Aeronautics and Space Administration (NASA, United States). This lunar soft-lander gathered data about the lunar surface that would be needed for the crewed Apollo Moon landings that began in 1969. The successful soft landing of Surveyor 1 on the Ocean of Storms was the first by an American space probe on any extraterrestrial body, occurring on the first attempt and just four months after the first soft Moon landing by the Soviet Union's Luna 9 probe.

Surveyor 1 was launched May 30, 1966, from the Cape Canaveral Air Force Station at Cape Canaveral, Florida, and it landed on the Moon on June 2, 1966. Surveyor 1 transmitted 11,237 still photos of the lunar surface to the Earth by using a television camera and a sophisticated radio-telemetry system.

The Surveyor program was managed by the Jet Propulsion Laboratory, in Los Angeles County, California, and the Surveyor space probe was built by the Hughes Aircraft Company in El Segundo, California.

Mission description

The Surveyor series of space probes was designed to carry out the first soft landings on the Moon by any American spacecraft. No instrumentation was carried specifically for scientific experiments by Surveyor 1, but considerable scientific data were collected by its television camera and then returned to Earth via the Deep Space Network from 1966 to 1967. These spacecraft carried two television cameras — one for its approach, which was not used in this case, and one for taking still pictures of the lunar surface. Over 100 engineering sensors were on board each Surveyor. Their television systems transmitted pictures of the spacecraft footpad and surrounding lunar terrain and surface materials. These spacecraft also acquired data on the radar reflectivity of the lunar surface, the load-bearing strength of the lunar surface, and temperatures, for use in the analysis of the lunar surface. (Later Surveyor space probes, beginning with Surveyor 3, carried scientific instruments to measure the composition and mechanical properties of the lunar "soil".)

Surveyor 1 was launched May 30, 1966 and sent directly into a trajectory to the Moon without any parking orbit. Its retrorockets were turned off at a height of about 3.4 meters above the lunar surface. Surveyor 1 fell freely to the surface from this height, and it landed on the lunar surface on June 2, 1966, on the Oceanus Procellarum. The landing site lies within the northeast portion of the large crater called Flamsteed P (or the Flamsteed Ring). Flamsteed itself lies within Flamsteed P on the south side. The duration of the spaceflight of Surveyor 1 was about 63 hours, 30 minutes.

Surveyor 1 was substantially lighter at landing than at launch, having expended its maneuvering propellant and jettisoned its solid-fueled retrorocket and radar altimeter system. Surveyor 1 transmitted video data
https://en.wikipedia.org/wiki/Demographics%20of%20the%20Soviet%20Union
According to data from the 1989 Soviet census, the population of the USSR was made up of 70% East Slavs, 17% Turkic peoples, and less than 2% other ethnic groups. Alongside the atheist majority of 60%, there were sizable minorities of Russian Orthodox Christians (approximately 20%) and Muslims (approximately 15%).

History

Revolution and Civil War, 1917–1923

During the Russian Revolution and Civil War period, Russia lost territories of the former Russian Empire whose populations totalled about 30 million people (Poland: 18 million; Finland: 3 million; Romania: 3 million; the Baltic states: 5 million; Kars: 400 thousand). At least 2 million citizens of the former Russian Empire died during the Russian Civil War of 1917–1923, and a further 1 to 2 million emigrated.

Interwar period, 1924–1940

Great Patriotic War, 1941–1945

During the Second World War on the Eastern Front, the Soviet Union lost approximately 26.6 million people.

Rejuvenation of the population, 1946–1960s

After the Second World War, the population of the Soviet Union began to gradually recover to pre-war levels. By 1959 the registered population was 209,035,000, exceeding the 1941 count of 196,716,000. In 1958–59, Soviet fertility stood at around 2.8 children per woman.

Population dynamics in the 1970s–1980s

The crude birth rate in the Soviet Union had been decreasing throughout its history, from 44.0 per thousand in 1926 to 18.0 in 1974, mostly due to urbanization and a rising average age of marriage. The total fertility rate fell from 2.4 in 1969–70 to 2.3 in 1978–79. The crude death rate had been gradually decreasing as well, from 23.7 per thousand in 1926 to 8.7 in 1974. While death rates did not differ greatly across regions of the Soviet Union through much of Soviet history, birth rates in the southern republics of Transcaucasia and Central Asia were much higher than those in the northern parts of the Soviet Union, and in some cases even increased in the post-World War II period. This was partly due to slower rates of urbanization and traditionally early marriages in the southern republics.

Mainly as a result of differential birthrates, with most of the European nationalities moving toward sub-replacement fertility and the Central Asian and other nationalities of the southern republics having well-above-replacement fertility, the percentage of the population who were ethnic Russians was gradually being reduced. According to some Western predictions made in the 1990s, if the Soviet Union had stayed together, it is likely that ethnic Russians would have lost their majority status in the 2000s. This differential could not be offset by assimilation of non-Russians by Russians, in part because the nationalities of the southern republics maintained a distinct ethnic consciousness and were not easily assimilated.

The late 1960s and the 1970s witnessed a dramatic reversal of the path of declining mortality in the Soviet Union, and was especially
https://en.wikipedia.org/wiki/Magma%20%28computer%20algebra%20system%29
Magma is a computer algebra system designed to solve problems in algebra, number theory, geometry and combinatorics. It is named after the algebraic structure magma. It runs on Unix-like operating systems, as well as Windows.

Introduction

Magma is produced and distributed by the Computational Algebra Group within the School of Mathematics and Statistics at the University of Sydney. In late 2006, the book Discovering Mathematics with Magma was published by Springer as volume 19 of the Algorithms and Computations in Mathematics series. The Magma system is used extensively within pure mathematics. The Computational Algebra Group maintains a list of publications that cite Magma; as of 2010 there were about 2,600 citations, mostly in pure mathematics, but also including papers from areas as diverse as economics and geophysics.

History

The predecessor of the Magma system was named Cayley (1982–1993), after Arthur Cayley. Magma was officially released in August 1993 (version 1.0). Version 2.0 of Magma was released in June 1996, and subsequent versions of 2.X have been released approximately once per year. In 2013, the Computational Algebra Group finalized an agreement with the Simons Foundation, whereby the Simons Foundation would underwrite all costs of providing Magma to all U.S. nonprofit, non-governmental scientific research or educational institutions. All students, researchers and faculty associated with a participating institution are able to access Magma for free, through that institution.

Mathematical areas covered by the system

Group theory
Magma includes permutation, matrix, finitely presented, soluble, abelian (finite or infinite), polycyclic, braid and straight-line program groups. Several databases of groups are also included.

Number theory
Magma contains asymptotically fast algorithms for all fundamental integer and polynomial operations, such as the Schönhage–Strassen algorithm for fast multiplication of integers and polynomials. Integer factorization algorithms include the elliptic curve method, the quadratic sieve and the number field sieve.

Algebraic number theory
Magma includes the KANT computer algebra system for comprehensive computations in algebraic number fields. A special type also allows one to compute in the algebraic closure of a field.

Module theory and linear algebra
Magma contains asymptotically fast algorithms for all fundamental dense matrix operations, such as Strassen multiplication.

Sparse matrices
Magma contains the structured Gaussian elimination and Lanczos algorithms for reducing sparse systems which arise in index calculus methods, while Magma uses Markowitz pivoting for several other sparse linear algebra problems.

Lattices and the LLL algorithm
Magma has a provable implementation of fpLLL, which is an LLL algorithm for integer matrices which uses floating point numbers for the Gram–Schmidt coefficients, but such that the result is rigorously proven to be LLL-reduced.

Commutative algebra
https://en.wikipedia.org/wiki/James%20H.%20Clark
James Henry Clark (born March 23, 1944) is an American entrepreneur and computer scientist. He founded several notable Silicon Valley technology companies, including Silicon Graphics, Netscape, myCFO, and Healtheon. His research work in computer graphics led to the development of systems for the fast rendering of three-dimensional computer images. In 1998, Clark was elected a member of the National Academy of Engineering for the development of computer graphics and for technical leadership in the computer industry. Early life and education Clark was born in Plainview, Texas, on March 23, 1944. He dropped out of high school at 16 and spent four years in the Navy, where he was introduced to electronics. Clark began taking night courses at Tulane University's University College where, despite his lack of a high school diploma, he was able to earn enough credits to be admitted to the University of New Orleans. There, Clark earned his bachelor's and a master's degrees in physics, followed by a Ph.D. in computer science from the University of Utah in 1974. Career Academia After completing his doctorate, Clark briefly worked at the New York Institute of Technology's Computer Graphics Lab. He served as an assistant professor at the University of California, Santa Cruz (1974-1978) before moving to Stanford University as an associate professor of electrical engineering (1979-1982). Clark's research work concerned geometry pipelines, specialized software or hardware that accelerates the display of three dimensional images. The peak of his group's advancements was the Geometry Engine, an early hardware accelerator for rendering computer images based on geometric models which he developed in 1979 with his students at Stanford. Silicon Graphics In 1982, Clark along with several Stanford graduate students founded Silicon Graphics (SGI). The earliest Silicon Graphics graphical workstations were mainly terminals, but they were soon followed by stand-alone graphical Unix workstations with very fast graphics rendering hardware. In the mid-1980s, Silicon Graphics began to use the MIPS CPU as the foundation of their newest workstations, replacing the Motorola 68000. By 1991, Silicon Graphics had become the world leader in the production of Hollywood movie visual effects and 3-D imaging. Silicon Graphics focused on the high-end market where they could charge a premium for their special hardware and graphics software. Clark had differences of opinion with Silicon Graphics management regarding the future direction of the company, and departed in late January 1994. Netscape In February 1994, Clark sought out Marc Andreessen who had led the development of Mosaic, the first widely distributed and easy-to-use software for browsing the World Wide Web, while employed at the National Center for Supercomputing Applications (NCSA). Clark and Andreessen founded Netscape, and developed the Netscape Navigator web browser. The founding of Netscape and its IPO in August 1995
https://en.wikipedia.org/wiki/Spanning%20Tree%20Protocol
The Spanning Tree Protocol (STP) is a network protocol that builds a loop-free logical topology for Ethernet networks. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. Spanning tree also allows a network design to include backup links providing fault tolerance if an active link fails.

As the name suggests, STP creates a spanning tree that characterizes the relationship of nodes within a network of connected layer-2 bridges, and disables those links that are not part of the spanning tree, leaving a single active path between any two network nodes. STP is based on an algorithm that was invented by Radia Perlman while she was working for Digital Equipment Corporation.

In 2001, the IEEE introduced Rapid Spanning Tree Protocol (RSTP) as 802.1w. RSTP provides significantly faster recovery in response to network changes or failures, introducing new convergence behaviors and bridge port roles to do this. RSTP was designed to be backwards-compatible with standard STP. STP was originally standardized as IEEE 802.1D, but the functionality of spanning tree (802.1D), rapid spanning tree (802.1w), and multiple spanning tree (802.1s) has since been incorporated into IEEE 802.1Q-2014.

While STP is still in use today, in most modern networks its primary use is as a loop-protection mechanism rather than a fault-tolerance mechanism. Link aggregation protocols such as LACP will bond two or more links to provide fault tolerance while simultaneously increasing overall link capacity.

Protocol operation

The need for the Spanning Tree Protocol (STP) arose because switches in local area networks (LANs) are often interconnected using redundant links to improve resilience should one connection fail. However, this connection configuration creates a switching loop, resulting in broadcast radiation and MAC table instability. If redundant links are used to connect switches, then switching loops need to be avoided.

To avoid the problems associated with redundant links in a switched LAN, STP is implemented on switches to monitor the network topology. Every link between switches, and in particular redundant links, is catalogued. The spanning-tree algorithm then blocks forwarding on redundant links by setting up one preferred link between switches in the LAN. This preferred link is used for all Ethernet frames unless it fails, in which case a non-preferred redundant link is enabled.

When implemented in a network, STP designates one layer-2 switch as root bridge. All switches then select their best connection towards the root bridge for forwarding and block other redundant links. All switches constantly communicate with their neighbors in the LAN using bridge protocol data units (BPDUs). Provided there is more than one link between two switches, the STP root bridge calculates the cost of each path based on bandwidth. STP will select the path with the lowest cost, that is the highest bandwidth, as the preferred link.
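The path-selection logic described above can be made concrete with a small sketch. The record and comparison below are simplified (real BPDUs carry more fields and different encodings), but the tie-breaking order, lowest root bridge ID, then lowest accumulated path cost, then lowest sender bridge ID, then lowest sender port ID, follows the protocol's documented behavior.

    #include <stdint.h>

    /* A reduced bridge-protocol-data-unit record: just the fields needed
       to rank paths the way STP does. Field widths are simplified. */
    struct bpdu {
        uint64_t root_id;      /* bridge priority + MAC of claimed root */
        uint32_t path_cost;    /* accumulated cost to reach that root   */
        uint64_t sender_id;    /* bridge ID of the sending switch       */
        uint16_t sender_port;  /* port ID on the sender                 */
    };

    /* Return negative if a is the better (preferred) BPDU, positive if
       b is, zero if they rank equally. Lower always wins at each step. */
    int bpdu_compare(const struct bpdu *a, const struct bpdu *b)
    {
        if (a->root_id != b->root_id)
            return a->root_id < b->root_id ? -1 : 1;
        if (a->path_cost != b->path_cost)
            return a->path_cost < b->path_cost ? -1 : 1;
        if (a->sender_id != b->sender_id)
            return a->sender_id < b->sender_id ? -1 : 1;
        if (a->sender_port != b->sender_port)
            return a->sender_port < b->sender_port ? -1 : 1;
        return 0;
    }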
https://en.wikipedia.org/wiki/Natural-language%20understanding
Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem. There is considerable commercial interest in the field because of its application to automated reasoning, machine translation, question answering, news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis.

History

The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at natural-language understanding by a computer. Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled Natural Language Input for a Computer Problem Solving System) showed how a computer could understand simple natural language input to solve algebra word problems.

A year later, in 1965, Joseph Weizenbaum at MIT wrote ELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases, and Weizenbaum sidestepped the problem of giving the program a database of real-world knowledge or a rich lexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used by Ask.com.

In 1969, Roger Schank at Stanford University introduced the conceptual dependency theory for natural-language understanding. This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input. Instead of phrase structure rules, ATNs used an equivalent set of finite state automata that were called recursively. ATNs and their more general format called "generalized ATNs" continued to be used for a number of years.

In 1971, Terry Winograd finished writing SHRDLU for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field. Winograd continued to be a major influence in the field with the publication of his book Language as a Cognitive Process. At Stanford, Winograd would later advise Larry Page, who co-founded Google.

In the 1970s and 1980s, the natural language processing group at SRI International continued research and development in the field. A number of commercial efforts based on the research were undertaken, e.g., in 1982 Gary Hendrix formed Symantec Corporation, originally as a company for developing a natural language interface for database queries on personal computers.
https://en.wikipedia.org/wiki/SHRDLU
SHRDLU is an early natural-language understanding computer program that was developed by Terry Winograd at MIT in 1968–1970. In the program, the user carries on a conversation with the computer, moving objects, naming collections and querying the state of a simplified "blocks world", essentially a virtual box filled with different blocks.

SHRDLU was written in the Micro Planner and Lisp programming languages on the DEC PDP-6 computer and a DEC graphics terminal. Later additions were made at the computer graphics labs at the University of Utah, adding a full 3D rendering of SHRDLU's "world".

The name SHRDLU was derived from ETAOIN SHRDLU, the arrangement of the letter keys on a Linotype machine, arranged in descending order of usage frequency in English.

Functionality

SHRDLU is primarily a language parser that allows user interaction using English terms. The user instructs SHRDLU to move various objects around in the "blocks world" containing various basic objects: blocks, cones, balls, etc. What made SHRDLU unique was the combination of four simple ideas that added up to make the simulation of "understanding" far more convincing.

One was that SHRDLU's world is so simple that the entire set of objects and locations could be described by including as few as perhaps 50 words: nouns like "block" and "cone", verbs like "place on" and "move to", and adjectives like "big" and "blue". The possible combinations of these basic language building blocks are quite simple, and the program is fairly adept at figuring out what the user means.

SHRDLU also includes a basic memory to supply context. One could ask SHRDLU to "put the green cone on the red block" and then "take the cone off"; "the cone" would be taken to mean the green cone one had just talked about. SHRDLU can search back further through the interactions to find the proper context in most cases when additional adjectives were supplied. One could also ask questions about the history, for instance one could ask "did you pick up anything before the cone?"

A side effect of this memory, and the original rules SHRDLU was supplied with, is that the program can answer questions about what was possible in the world and what was not. For instance, SHRDLU can deduce that blocks can be stacked by looking for examples, but also realize that triangles cannot be stacked, after having tried it. The "world" contains basic physics to make blocks fall over, independent of the language parser.

Finally, SHRDLU can also remember names given to objects, or arrangements of them. For instance one could say "a steeple is a small triangle on top of a tall rectangle"; SHRDLU can then answer questions about steeples in the blocks world, and build new ones.

Here is a famous demonstration of SHRDLU:

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
https://en.wikipedia.org/wiki/AIM%20alliance
The AIM alliance, also known as the PowerPC alliance, was formed on October 2, 1991, between Apple, IBM, and Motorola. Its goal was to create an industry-wide open-standard computing platform based on the POWER instruction set architecture. It was intended to solve legacy problems, future-proof the industry, and compete with Microsoft's monopoly and the Wintel duopoly. The alliance yielded the launch of Taligent, Kaleida Labs, the PowerPC CPU family, the Common Hardware Reference Platform (CHRP) hardware platform standard, and Apple's Power Macintosh computer line. History Development From the 1980s into the 1990s, the computer industry was moving from a model of just individual personal computers toward an interconnected world, where no single company could afford to be vertically isolated anymore. Infinite Loop says "most people at Apple knew the company would have to enter into ventures with some of its erstwhile enemies, license its technology, or get bought". Furthermore, Microsoft's monopoly and the Wintel duopoly threatened competition industrywide, and the Advanced Computing Environment (ACE) consortium was underway. Phil Hester, a designer of the IBM RS/6000, convinced IBM's president Jack Kuehler of the necessity of a business alliance. Kuehler called Apple President Michael Spindler, who bought into the approach for a design that could challenge the Wintel-based PC. Apple CEO John Sculley was even more enthusiastic. On July 3, 1991, Apple and IBM signed a non-contractual letter of intent, proposing an alliance and outlining its long-term strategic technology goals. Its main goal was creating a single unifying open-standard computing platform for the whole industry, made of a new hardware design and a next-generation operating system. IBM intended to bring the Macintosh operating system into the enterprise and Apple intended to become a prime customer for the new POWER hardware platform. Considering it to be critically poorly communicated and confusing to the outside world at this point, industry commentators nonetheless saw this partnership as an overall competitive force against Microsoft's monopoly and Intel's and Microsoft's duopoly. IBM and Motorola would have 300 engineers to codevelop chips at a joint manufacturing facility in Austin, Texas. Motorola would sell the chips to Apple or anyone else. Between the three companies, more than 400 people had been involved to define a more unified corporate culture with less top-down executive decree. They collaborated as peers and future coworkers in creating the alliance and the basis of its ongoing future dialog which promised to "change the landscape of computing in the 90s". Launch On October 2, 1991, the historic AIM alliance was officially formed with a contract between Apple CEO John Sculley, IBM Research and Development Chief Jack Kuehler, and IBM Vice President James Cannavino. Kuehler said "Together we announce the second decade of personal computing, and it begins today"
https://en.wikipedia.org/wiki/IBM%20370%20printer
The IBM 370 printer was used on the IBM 305 RAMAC computer system, introduced by IBM on September 14, 1956. The 370 was connected to the 305 by a serial data line from the S track of the computer's drum memory (the printer and punch both obtain information from a single output track; the control of what information to print or punch, and how, resides within the print and punch units), and it printed 80 columns with a punched-tape-controlled carriage. Line formatting was programmed by inserting wire jumpers into a plugboard control panel.

The printer mechanism used an eight-sided, seven-position (56-character) print slug in a horizontal orientation. The X, O, and 2 bits of the character code rotate the slug, and the 1, 4, and 8 bits select the position. The platen hammer then struck the paper from behind, causing the selected character to print. Of the 56 characters on the print slug, only 47 were printable with the standard valid character set of the IBM 305 computer—the complete alphabet, the numbers 0–9, and eleven special characters (48 characters, including the blank character).

The printer could print 50 columns per second, processing 30 cards per minute; at two seconds per line, this amounts to 1,800 lines per hour.

References

370
IBM 0370
Computer-related introductions in 1956
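A short sketch may help make the slug-positioning scheme concrete. The bit masks below are assumptions made for the example, not values from IBM documentation; the point is only the division of labor: three bits choose one of the eight faces by rotation, and three bits choose a vertical position.

    /* Illustrative decoding of a 305-style character code into print-slug
       motion: the X, O and 2 bits select one of the slug's eight faces,
       and the 1, 4 and 8 bits select a vertical position. The bit
       positions within the code are assumptions for this sketch. */
    #define BIT_1 0x01
    #define BIT_2 0x02
    #define BIT_4 0x04
    #define BIT_8 0x08
    #define BIT_O 0x10
    #define BIT_X 0x20

    void decode_slug(unsigned code, unsigned *face, unsigned *position)
    {
        /* Three bits give one of eight rotational faces. */
        *face = ((code & BIT_X) ? 4 : 0)
              + ((code & BIT_O) ? 2 : 0)
              + ((code & BIT_2) ? 1 : 0);

        /* 1-4-8 weighted value gives the vertical position; valid
           character codes yield one of the seven usable positions. */
        *position = ((code & BIT_1) ? 1 : 0)
                  + ((code & BIT_4) ? 4 : 0)
                  + ((code & BIT_8) ? 8 : 0);
    }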
https://en.wikipedia.org/wiki/Linear%20classifier
In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to. A linear classifier achieves this by making a classification decision based on the value of a linear combination of the characteristics. An object's characteristics are also known as feature values and are typically presented to the machine in a vector called a feature vector. Such classifiers work well for practical problems such as document classification, and more generally for problems with many variables (features), reaching accuracy levels comparable to non-linear classifiers while taking less time to train and use.

Definition

If the input feature vector to the classifier is a real vector $\vec{x}$, then the output score is

$y = f(\vec{w} \cdot \vec{x}) = f\left(\sum_j w_j x_j\right),$

where $\vec{w}$ is a real vector of weights and $f$ is a function that converts the dot product of the two vectors into the desired output. (In other words, $\vec{w}$ is a one-form or linear functional mapping $\vec{x}$ onto R.) The weight vector $\vec{w}$ is learned from a set of labeled training samples. Often $f$ is a threshold function, which maps all values of $\vec{w} \cdot \vec{x}$ above a certain threshold to the first class and all other values to the second class; e.g.,

$f(\vec{x}) = \begin{cases} 1 & \text{if } \vec{w}^T \vec{x} > \theta, \\ 0 & \text{otherwise.} \end{cases}$

The superscript T indicates the transpose and $\theta$ is a scalar threshold. A more complex $f$ might give the probability that an item belongs to a certain class.

For a two-class classification problem, one can visualize the operation of a linear classifier as splitting a high-dimensional input space with a hyperplane: all points on one side of the hyperplane are classified as "yes", while the others are classified as "no".

A linear classifier is often used in situations where the speed of classification is an issue, since it is often the fastest classifier, especially when $\vec{x}$ is sparse. Also, linear classifiers often work very well when the number of dimensions in $\vec{x}$ is large, as in document classification, where each element in $\vec{x}$ is typically the number of occurrences of a word in a document (see document-term matrix). In such cases, the classifier should be well-regularized.

Generative models vs. discriminative models

There are two broad classes of methods for determining the parameters of a linear classifier $\vec{w}$: generative and discriminative models. Methods of the former model the joint probability distribution $P(\text{class}, \vec{x})$, whereas methods of the latter model the conditional density functions $P(\text{class} \mid \vec{x})$. Examples of such algorithms include:

Linear Discriminant Analysis (LDA)—assumes Gaussian conditional density models
Naive Bayes classifier with multinomial or multivariate Bernoulli event models

The second set of methods includes discriminative models, which attempt to maximize the quality of the output on a training set. Additional terms in the training cost function can easily perform regularization of the final model. Examples of discriminative training of linear classifiers include:

Logistic regression—maximum likelihood estimation of $\vec{w}$ assuming that the observed training set was generated by a binomial model that depends on the output of the classifier
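For concreteness, here is a minimal C rendering of the scoring rule above, with f taken to be a threshold function. The weights and threshold would come from training, which is outside the scope of this fragment.

    #include <stddef.h>

    /* Linear classifier: compute the dot product of the weight vector w
       with the feature vector x and threshold it. Returns 1 for the
       first class and 0 for the second. */
    int linear_classify(const double *w, const double *x, size_t d,
                        double threshold)
    {
        double score = 0.0;
        for (size_t j = 0; j < d; j++)
            score += w[j] * x[j];       /* w . x */
        return score > threshold;       /* f as a threshold function */
    }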
https://en.wikipedia.org/wiki/FreeDOS
FreeDOS (formerly Free-DOS and PD-DOS) is a free software operating system for IBM PC compatible computers. It intends to provide a complete MS-DOS-compatible environment for running legacy software and supporting embedded systems. FreeDOS can be booted from a floppy disk or USB flash drive. It is designed to run well under virtualization or x86 emulation. Unlike most versions of MS-DOS, FreeDOS is composed of free software, licensed under the terms of the GNU General Public License. However, other packages that form part of the FreeDOS project include non-GPL software considered worthy of preservation, such as 4DOS, which is distributed under a modified MIT License.

History

The FreeDOS project began on 29 June 1994, after Microsoft announced it would no longer sell or support MS-DOS. Jim Hall – who at the time was a student – posted a manifesto proposing the development of PD-DOS, a public domain version of DOS. Within a few weeks, other programmers including Pat Villani and Tim Norman joined the project. Between them, a kernel (by Villani), the COMMAND.COM command line interpreter (by Villani and Norman), and core utilities (by Hall) were created by pooling code they had written or found available. For some time, the project was maintained by Morgan "Hannibal" Toal. There have been many official pre-release distributions of FreeDOS before the final FreeDOS 1.0 distribution. GNU/DOS, an unofficial distribution of FreeDOS, was discontinued after version 1.0 was released. Blinky the Fish is the mascot of FreeDOS. He was designed by Bas Snabilie.

Distribution

FreeDOS 1.1, released on 2 January 2012, was made available for download as a CD-ROM image: a limited install disc that contains only the kernel and basic applications, and a full disc that contains many more applications (games, networking, development, etc.). These images are no longer available, having been superseded by the newer, fuller 1.2 release. The legacy version 1.0 (2006) consisted of two CDs, one of which was an 8 MB install CD targeted at regular users and the other of which was a larger 49 MB live CD that also held the source code of the project.

Commercial uses

FreeDOS is used by several companies:

Dell preloaded FreeDOS with their n-series desktops to reduce their cost. The firm has been criticized for making these machines no cheaper, and harder to buy, than identical systems with Windows.
HP provided FreeDOS as an option in its dc5750 desktops, Mini 5101 netbooks and ProBook laptops. FreeDOS is also used as bootable media for updating the BIOS firmware in HP systems.
FreeDOS is included with Steve Gibson's hard drive maintenance and recovery program, SpinRite.
Intel's Solid-State Drive Firmware Update Tool loaded the FreeDOS kernel.

Non-commercial uses

FreeDOS is also used in multiple independent projects:

FED-UP is the Floppy Enhanced DivX Universal Player.
FUZOMA is a FreeDOS-based distribution that can boot from a floppy disk and converts older computers into educational tools for children.
XFDOS is a FreeDOS-bas
https://en.wikipedia.org/wiki/GEOS
GEOS may refer to:

Computer software
GEOS (8-bit operating system), an operating system originally designed for the Commodore 64
GEOS (16-bit operating system), a DOS-based graphical user interface and x86 operating system
GEOS (securities processing software), an integrated online system for the management and processing of securities
GEOS (software library), an open-source geometry engine
Goddard Earth Observing System, an Earth system model

Other
GEOS (eikaiwa), an English conversation teaching company based in Japan
GEOS circle, an intersection of four lines that are associated with a generalized triangle
GEOS (satellite), a research satellite from ESRO (1978–1982)
GEOS (satellite series), three research satellites from NASA
Groupe GEOS, a French business consultancy
https://en.wikipedia.org/wiki/Richard%20Hamming
Richard Wesley Hamming (February 11, 1915 – January 7, 1998) was an American mathematician whose work had many implications for computer engineering and telecommunications. His contributions include the Hamming code (which makes use of a Hamming matrix), the Hamming window, Hamming numbers, sphere-packing (or Hamming bound), Hamming graph concepts, and the Hamming distance. Born in Chicago, Hamming attended University of Chicago, University of Nebraska and the University of Illinois at Urbana–Champaign, where he wrote his doctoral thesis in mathematics under the supervision of Waldemar Trjitzinsky (1901–1973). In April 1945, he joined the Manhattan Project at the Los Alamos Laboratory, where he programmed the IBM calculating machines that computed the solution to equations provided by the project's physicists. He left to join the Bell Telephone Laboratories in 1946. Over the next fifteen years, he was involved in nearly all of the laboratories' most prominent achievements. For his work, he received the Turing Award in 1968, being its third recipient. After retiring from the Bell Labs in 1976, Hamming took a position at the Naval Postgraduate School in Monterey, California, where he worked as an adjunct professor and senior lecturer in computer science, and devoted himself to teaching and writing books. He delivered his last lecture in December 1997, just a few weeks before he died from a heart attack on January 7, 1998. Early life Hamming was born in Chicago, Illinois, on February 11, 1915, the son of Richard J. Hamming, a credit manager, and Mabel G. Redfield. He grew up in Chicago, where he attended Crane Technical High School and Crane Junior College. Hamming initially wanted to study engineering, but money was scarce during the Great Depression, and the only scholarship offer he received came from the University of Chicago, which had no engineering school. Instead, he became a science student, majoring in mathematics, and received his Bachelor of Science degree in 1937. He later considered this a fortunate turn of events. "As an engineer," he said, "I would have been the guy going down manholes instead of having the excitement of frontier research work." He went on to earn a Master of Arts degree from the University of Nebraska in 1939, and then entered the University of Illinois at Urbana–Champaign, where he wrote his doctoral thesis on Some Problems in the Boundary Value Theory of Linear Differential Equations under the supervision of Waldemar Trjitzinsky. His thesis was an extension of Trjitzinsky's work in that area. He looked at Green's function and further developed Jacob Tamarkin's methods for obtaining characteristic solutions. While he was a graduate student, he discovered and read George Boole's The Laws of Thought. The University of Illinois at Urbana–Champaign awarded Hamming his Doctor of Philosophy in 1942, and he became an instructor in mathematics there. He married Wanda Little, a fellow student, on September 5, 1942,
https://en.wikipedia.org/wiki/Weak%20entity
In a relational database, a weak entity is an entity that cannot be uniquely identified by its attributes alone; therefore, it must use a foreign key in conjunction with its attributes to create a primary key. The foreign key is typically a primary key of an entity it is related to. The foreign key is an attribute of the identifying (or owner, parent, or dominant) entity set. Each element in the weak entity set must have a relationship with exactly one element in the owner entity set, and therefore, the relationship cannot be a many-to-many relationship.

In entity relationship diagrams (ER diagrams), a weak entity set is indicated by a bold (or double-lined) rectangle (the entity) connected by a bold (or double-lined) type arrow to a bold (or double-lined) diamond (the relationship). This type of relationship is called an identifying relationship, and in IDEF1X notation it is represented by an oval entity rather than a square entity for base tables. An identifying relationship is one where the primary key of the parent is propagated to the child weak entity as part of that entity's own primary key. In general (though not necessarily) a weak entity does not have any items in its primary key other than its inherited primary key and a sequence number.

There are two types of weak entities: associative entities and subtype entities. The latter represents a crucial type of normalization, in which the subtype entities inherit the attributes of the super-type entity based on the value of the discriminator.

In IDEF1X, a government standard for capturing requirements, possible sub-type relationships are:

Complete subtype relationship, when all categories are known.
Incomplete subtype relationship, when all categories may not be known.

A classic example of a weak entity without a sub-type relationship would be the "header/detail" records in many real-world situations such as claims, orders and invoices, where the header captures information common across all forms and the detail captures information specific to individual items.

The standard example of a complete subtype relationship is the party entity. Given the discriminator PARTY TYPE (which could be individual, partnership, C Corporation, Sub Chapter S Association, Association, Governmental Unit, Quasi-governmental agency), the two subtype entities are PERSON, which contains individual-specific information such as first and last name and date of birth, and ORGANIZATION, which would contain such attributes as the legal name, and organizational hierarchies such as cost centers.

When sub-type relationships are rendered in a database, the super-type becomes what is referred to as a base table. The sub-types are considered derived tables, which correspond to weak entities. Referential integrity is enforced via cascading updates and deletes.

Example

Consider a database that records customer orders, where an order is for one or more of the items that the enterprise sells. The database would contain a table identifying customers
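As a sketch of the point about key structure (an inherited primary key plus a sequence number), consider an order line item, which cannot be identified without its owning order. The C struct below is purely illustrative and its field names are invented for the example.

    /* Key of a weak entity: an order line item is identified only by the
       primary key inherited from its owning Order plus a local sequence
       number. Neither field alone identifies the item. */
    struct order_item_key {
        long order_no;   /* inherited: primary key of the owning Order */
        int  line_no;    /* sequence number within that order          */
    };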
https://en.wikipedia.org/wiki/Extended%20Euclidean%20algorithm
In arithmetic and computer programming, the extended Euclidean algorithm is an extension to the Euclidean algorithm, and computes, in addition to the greatest common divisor (gcd) of integers a and b, also the coefficients of Bézout's identity, which are integers x and y such that

$ax + by = \gcd(a, b).$

This is a certifying algorithm, because the gcd is the only number that can simultaneously satisfy this equation and divide the inputs. It allows one to compute also, with almost no extra cost, the quotients of a and b by their greatest common divisor.

The term extended Euclidean algorithm also refers to a very similar algorithm for computing the polynomial greatest common divisor and the coefficients of Bézout's identity of two univariate polynomials.

The extended Euclidean algorithm is particularly useful when a and b are coprime. With that provision, x is the modular multiplicative inverse of a modulo b, and y is the modular multiplicative inverse of b modulo a. Similarly, the polynomial extended Euclidean algorithm allows one to compute the multiplicative inverse in algebraic field extensions and, in particular, in finite fields of non-prime order. It follows that both extended Euclidean algorithms are widely used in cryptography. In particular, the computation of the modular multiplicative inverse is an essential step in the derivation of key-pairs in the RSA public-key encryption method.

Description

The standard Euclidean algorithm proceeds by a succession of Euclidean divisions whose quotients are not used. Only the remainders are kept. For the extended algorithm, the successive quotients are used. More precisely, the standard Euclidean algorithm with a and b as input consists of computing a sequence $q_1, \ldots, q_k$ of quotients and a sequence $r_0, \ldots, r_{k+1}$ of remainders such that

$r_0 = a,$
$r_1 = b,$
$r_{i+1} = r_{i-1} - q_i r_i \quad \text{and} \quad 0 \le r_{i+1} < |r_i|.$

It is the main property of Euclidean division that the inequalities on the right define $q_i$ and $r_{i+1}$ uniquely from $r_{i-1}$ and $r_i$. The computation stops when one reaches a remainder $r_{k+1}$ which is zero; the greatest common divisor is then the last non-zero remainder $r_k$.

The extended Euclidean algorithm proceeds similarly, but adds two other sequences, as follows:

$s_0 = 1, \quad s_1 = 0,$
$t_0 = 0, \quad t_1 = 1,$
$s_{i+1} = s_{i-1} - q_i s_i,$
$t_{i+1} = t_{i-1} - q_i t_i.$

The computation also stops when $r_{k+1} = 0$ and gives:

$r_k$ is the greatest common divisor of the input $a = r_0$ and $b = r_1$.
The Bézout coefficients are $s_k$ and $t_k$, that is $\gcd(a, b) = r_k = a s_k + b t_k$.
The quotients of a and b by their greatest common divisor are given, up to sign, by $t_{k+1}$ and $s_{k+1}$.

Moreover, if a and b are both positive and $\gcd(a, b) \neq \min(a, b)$, then

$|s_i| \le \left\lfloor \frac{b}{2 \gcd(a, b)} \right\rfloor \quad \text{and} \quad |t_i| \le \left\lfloor \frac{a}{2 \gcd(a, b)} \right\rfloor$

for $0 \le i \le k$, where $\lfloor x \rfloor$ denotes the integral part of x, that is the greatest integer not greater than x. This implies that the pair of Bézout's coefficients provided by the extended Euclidean algorithm is the minimal pair of Bézout coefficients, as being the unique pair satisfying both above inequalities. Also it means that the algorithm can be done without integer overflow by a computer program using integers of a fixed size that is larger than that of a and b.

Example

The following table shows how the extended Euclidean algorithm proceeds with input 240 and 46.

    index i   quotient q_(i-1)   remainder r_i   s_i   t_i
    0                            240             1     0
    1                            46              0     1
    2         5                  10              1     -5
    3         4                  6               -4    21
    4         1                  4               5     -26
    5         1                  2               -9    47
    6         2                  0               23    -120

The greatest common divisor is the last non-zero entry, 2, in the column "remainder". The computation stops at row 6, because the remainder in it is 0. The Bézout coefficients appear in the last two entries of the second-to-last row: indeed, (−9) × 240 + 47 × 46 = 2.
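The following C implementation follows the iterative scheme above; the variable names and the demonstration values in main are our own, but on the example inputs 240 and 46 it reproduces the table's results.

    #include <stdio.h>

    /* Extended Euclidean algorithm: computes g = gcd(a, b) and Bezout
       coefficients x, y with a*x + b*y = g, by carrying the sequences
       (s_i) and (t_i) alongside the remainders, as described above. */
    long extended_gcd(long a, long b, long *x, long *y)
    {
        long old_r = a, r = b;
        long old_s = 1, s = 0;   /* coefficients of a */
        long old_t = 0, t = 1;   /* coefficients of b */

        while (r != 0) {
            long q = old_r / r, tmp;
            tmp = old_r - q * r; old_r = r; r = tmp;
            tmp = old_s - q * s; old_s = s; s = tmp;
            tmp = old_t - q * t; old_t = t; t = tmp;
        }
        *x = old_s;
        *y = old_t;
        return old_r;            /* last nonzero remainder = gcd(a, b) */
    }

    int main(void)
    {
        long x, y;
        long g = extended_gcd(240, 46, &x, &y);
        printf("gcd = %ld, x = %ld, y = %ld\n", g, x, y);
        /* prints: gcd = 2, x = -9, y = 47 */
        return 0;
    }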
https://en.wikipedia.org/wiki/BowLingual
BowLingual, or "Bow-Lingual" as the North American version is spelled, is a computer-based dog-language-to-human-language translation device developed by Japanese toy company Takara and first sold in Japan in 2002. Versions for South Korea and the United States were launched in 2003. The device was named by Time magazine as one of the "Best Inventions of 2002." The inventors of BowLingual, Keita Satoh, Matsumi Suzuki and Norio Kogure, were awarded the 2002 Ig Nobel Prize for "promoting peace and harmony between the species."

The device is presented as a "translator" but has been called an "emotion analyzer". It is said to categorize dog barks into one of six standardized emotional categories. BowLingual also provides a phrase which is representative of that emotion. The product instructions state that these phrases "are for entertainment purposes only" and are not meant to be accurate translations of each bark.

Features

BowLingual has bonus functions which include dog training tips, a "Bow Wow Diary," tips on understanding a dog's body language, a medical checklist and a home-alone bark-recording function.

The device consists of a hand-held receiver, which contains an LCD information screen and functions as the controller, and a wireless microphone-transmitter which is attached to the dog's collar. When a dog barks, the microphone records and transmits the sound to the hand-held unit for computer analysis by a database with thousands of dog barks pre-recorded into it. The unit then categorizes the bark into one of six distinct dog emotions (happy, sad, frustrated, on-guard, assertive, needy) and displays the corresponding emotion on the screen. After displaying the emotion, BowLingual then displays a phrase which has been categorized to fit within the range of each emotion.

Versions

Regional versions of BowLingual have been released in Japan, South Korea, the US and Canada. The versions for South Korea, the US and Canada have different modifications in comparison to the Japanese version. In May 2003, at the request of the Japan Foreign Ministry, Takara provided Japanese Prime Minister Junichiro Koizumi with two prototypes of the English version of BowLingual several months before it had been released in North America. Koizumi then presented these to Russian President Vladimir Putin, for each of his dogs (Tosca, a standard Poodle, and Connie, a Labrador Retriever), at ceremonies celebrating the 300th anniversary of St. Petersburg.

Effectiveness

BowLingual uses customized voice-print analysis technology which has been adapted for dog barks. The accuracy of this product can be affected by varying conditions and situations. Sound interference can occur when the wireless collar-microphone picks up noises made by chain collars and collars with dog tags attached. As a result, the dog owner may believe that the device is malfunctioning and not registering the dog bark correctly. In windy conditions, the microphone will sometimes interpret a gust of wind as a bark.
https://en.wikipedia.org/wiki/ARCNET
Attached Resource Computer NETwork (ARCNET or ARCnet) is a communications protocol for local area networks. ARCNET was the first widely available networking system for microcomputers; it became popular in the 1980s for office automation tasks. It was later applied to embedded systems where certain features of the protocol are especially useful. History Development ARCNET was developed by principal development engineer John Murphy at Datapoint Corporation in 1976 under Victor Poor, and announced in 1977. It was originally developed to connect groups of their Datapoint 2200 terminals to talk to a shared 8" floppy disk system. It was the first loosely coupled LAN-based clustering system, making no assumptions about the type of computers that would be connected. This was in contrast to contemporary larger and more expensive computer systems such as DECnet or SNA, where a homogeneous group of similar or proprietary computers were connected as a cluster. The token-passing bus protocol of that I/O device-sharing network was subsequently applied to allowing processing nodes to communicate with each other for file-serving and computing scalability purposes. An application could be developed in DATABUS, Datapoint's proprietary COBOL-like language and deployed on a single computer with dumb terminals. When the number of users outgrew the capacity of the original computer, additional 'compute' resource computers could be attached via ARCNET, running the same applications and accessing the same data. If more storage was needed, additional disk resource computers could also be attached. This incremental approach broke new ground and by the end of the 1970s (before the first IBM PC was announced in 1981) over ten thousand ARCNET LAN installations were in commercial use around the world, and Datapoint had become a Fortune 500 company. As microcomputers took over the industry, well-proven and reliable ARCNET was also offered as an inexpensive LAN for these machines. Market ARCNET remained proprietary until the early-to-mid 1980s. This did not cause concern at the time, as most network architectures were proprietary. The move to non-proprietary, open systems began as a response to the dominance of International Business Machines (IBM) and its Systems Network Architecture (SNA). In 1979, the Open Systems Interconnection Reference Model (OSI model) was published. Then, in 1980, Digital, Intel and Xerox (the DIX consortium) published an open standard for Ethernet that was soon adopted as the basis of standardization by the IEEE and the ISO. IBM responded by proposing Token Ring as an alternative to Ethernet but kept such tight control over standardization that competitors were wary of using it. ARCNET was less expensive than either, more reliable, more flexible, and by the late 1980s it had a market share about equal to that of Ethernet. Tandy/Radio Shack offered ARCNET as an application and file sharing medium for their TRS-80 Model II, Model 12, Model 16, Ta
https://en.wikipedia.org/wiki/Dexter%27s%20Laboratory
Dexter's Laboratory is an American animated television series created by Genndy Tartakovsky for Cartoon Network as the first Cartoon Cartoon. The series follows Dexter, an enthusiastic boy-genius with a hidden science laboratory in his room full of inventions, which he keeps secret from his clueless parents, who are only referred to as "Mom" and "Dad". Dexter is at constant odds with his older and more extroverted sister Dee Dee, who always gains access to the lab and inadvertently foils his experiments. Dexter has a bitter rivalry with his neighbor and classmate Mandark, a nefarious boy-genius who attempts to undermine Dexter at every opportunity. Prominently featured in the first and second seasons are other segments focusing on superhero-based characters Monkey, Dexter's pet lab-monkey/superhero, and the Justice Friends, a trio of superheroes who share an apartment. Tartakovsky pitched the series to Fred Seibert's first animated shorts showcase What a Cartoon! at Hanna-Barbera, basing it on student films he produced at the California Institute of the Arts. Four pilots aired on Cartoon Network and TNT from 1995 to 1996. Viewer approval ratings led to a half-hour series, which consisted of two seasons totaling 52 episodes, airing from April 27, 1996, to June 15, 1998. On December 10, 1999, a television film titled Dexter's Laboratory: Ego Trip aired as the intended series finale, and Tartakovsky left to begin working on Samurai Jack. In November 2000, the series was renewed for two seasons containing 26 total episodes, which began airing on November 18, 2001, and concluded on November 20, 2003. Due to Tartakovsky's departure, the last two seasons featured Chris Savino as showrunner along with a new production team at Cartoon Network Studios with changes made to the visual art style and character designs. Dexter's Laboratory won three Annie Awards, with nominations for four Primetime Emmy Awards, four Golden Reel Awards, and nine other Annie Awards. The series is notable for helping launch the careers of animators Craig McCracken, Seth MacFarlane, Butch Hartman, Paul Rudish, and Rob Renzetti. Spin-off media include children's books, comic books, DVD and VHS releases, music albums, toys, and video games. Premise Dexter (voiced by Christine Cavanaugh in seasons 1–3; Candi Milo in seasons 3–4) is a bespectacled boy-genius who, behind a bookcase in his bedroom, conceals a secret laboratory, which can be accessed by spoken passwords or hidden switches on his bookshelf. Though highly intelligent, Dexter often fails to achieve his goals when he becomes overexcited and careless. Although he comes from a typical American family, Dexter speaks with an accent of indeterminate origin. Christine Cavanaugh described it as "an affectation, [a] kind of accent, we're not quite sure. A small Peter Lorre, but not. Perhaps he's Latino, perhaps he's French. He's a scientist; he knows he needs [a] kind of accent." Genndy Tartakovsky explained, "he's a scientist.
https://en.wikipedia.org/wiki/Algorithmics
Algorithmics is the systematic study of the design and analysis of algorithms. It is fundamental and one of the oldest fields of computer science. It includes algorithm design, the art of building a procedure which can efficiently solve a specific problem or a class of problems; algorithmic complexity theory, the study of estimating the hardness of problems by studying the properties of the algorithms that solve them; and algorithm analysis, the science of studying the properties of an algorithm, such as quantifying the resources in time and memory space it needs to solve a given problem. The term algorithmics is rarely used in the English-speaking world, where it is synonymous with algorithms and data structures. The term gained wider popularity after the publication of the book Algorithmics: The Spirit of Computing by David Harel. See also Divide-and-conquer algorithm Heuristic Akra–Bazzi method Notes
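As a minimal illustration of the concerns just listed, two designs for the same search problem with different time costs, consider the following Python sketch (the function names and test values are ours, chosen only for illustration):

from bisect import bisect_left

def linear_search(xs, target):
    """Sequential scan: in the worst case examines all n elements, O(n)."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """On sorted input, halves the interval each step: about log2(n) probes."""
    i = bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

data = list(range(1_000_000))
assert linear_search(data, 765_432) == binary_search(data, 765_432) == 765_432

Algorithm design produced the two procedures; algorithm analysis is what tells us the second needs roughly 20 probes where the first may need a million.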
https://en.wikipedia.org/wiki/Counting%20sort
In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small positive integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that possess distinct key values, and applying prefix sum on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum key value and the minimum key value, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. It is often used as a subroutine in radix sort, another sorting algorithm, which can handle larger keys more efficiently. Counting sort is not a comparison sort; it uses key values as indexes into an array and the Ω(n log n) lower bound for comparison sorting will not apply. Bucket sort may be used in lieu of counting sort, and entails a similar time analysis. However, compared to counting sort, bucket sort requires linked lists, dynamic arrays, or a large amount of pre-allocated memory to hold the sets of items within each bucket, whereas counting sort stores a single number (the count of items) per bucket. Input and output assumptions In the most general case, the input to counting sort consists of a collection of items, each of which has a non-negative integer key whose maximum value is at most k. In some descriptions of counting sort, the input to be sorted is assumed to be more simply a sequence of integers itself, but this simplification does not accommodate many applications of counting sort. For instance, when used as a subroutine in radix sort, the keys for each call to counting sort are individual digits of larger item keys; it would not suffice to return only a sorted list of the key digits, separated from the items. In applications such as in radix sort, a bound on the maximum key value will be known in advance, and can be assumed to be part of the input to the algorithm. However, if the value of k is not already known then it may be computed, as a first step, by an additional loop over the data to determine the maximum key value. The output is an array of the elements ordered by their keys. Because of its application to radix sorting, counting sort must be a stable sort; that is, if two elements share the same key, their relative order in the output array and their relative order in the input array should match. Pseudocode In pseudocode, the algorithm may be expressed as:

function CountingSort(input, k)
    count ← array of k + 1 zeros
    output ← array of same length as input

    for i = 0 to length(input) - 1 do
        j = key(input[i])
        count[j] = count[j] + 1

    for i = 1 to k do
        count[i] = count[i] + count[i - 1]

    for i = length(input) - 1 down to 0 do
        j = key(input[i])
        count[j] = count[j] - 1
        output[count[j]] = input[i]

    return output
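For readers who want to run the algorithm, here is a direct Python transcription of the pseudocode above. It is a sketch, not authoritative library code; the key parameter stands in for the article's key(·) accessor and the tuple test data are our own.

def counting_sort(items, k, key=lambda x: x):
    """Stable counting sort of items whose keys lie in 0..k."""
    count = [0] * (k + 1)
    for item in items:                 # histogram of key values
        count[key(item)] += 1
    for i in range(1, k + 1):          # prefix sums: count[j] becomes the
        count[i] += count[i - 1]       # end position of key j in the output
    output = [None] * len(items)
    for item in reversed(items):       # reverse scan keeps the sort stable
        count[key(item)] -= 1
        output[count[key(item)]] = item
    return output

print(counting_sort([(3, "a"), (1, "b"), (3, "c")], k=3, key=lambda p: p[0]))
# [(1, 'b'), (3, 'a'), (3, 'c')] -- equal keys keep their input order

The reverse scan in the final loop is what makes the sort stable, which, as noted above, is required when counting sort is used inside radix sort.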
https://en.wikipedia.org/wiki/Bogosort
In computer science, bogosort (also known as permutation sort and stupid sort) is a sorting algorithm based on the generate and test paradigm. The function successively generates permutations of its input until it finds one that is sorted. It is not considered useful for sorting, but may be used for educational purposes, to contrast it with more efficient algorithms. Two versions of this algorithm exist: a deterministic version that enumerates all permutations until it hits a sorted one, and a randomized version that randomly permutes its input. An analogy for the working of the latter version is to sort a deck of cards by throwing the deck into the air, picking the cards up at random, and repeating the process until the deck is sorted. Its name is a portmanteau of the words bogus and sort. Description of the algorithm The following is a description of the randomized algorithm in pseudocode:

while not sorted(deck):
    shuffle(deck)

Here is the above pseudocode rewritten in Python 3:

from random import shuffle

def is_sorted(data) -> bool:
    """Determine whether the data is sorted."""
    return all(a <= b for a, b in zip(data, data[1:]))

def bogosort(data) -> list:
    """Shuffle data until sorted."""
    while not is_sorted(data):
        shuffle(data)
    return data

This code assumes that data is a simple, mutable, array-like data structure—like Python's built-in list—whose elements can be compared without issue. Running time and termination If all elements to be sorted are distinct, the expected number of comparisons performed in the average case by randomized bogosort is asymptotically equivalent to (e − 1)·n!, and the expected number of swaps in the average case equals (n − 1)·n!. The expected number of swaps grows faster than the expected number of comparisons, because if the elements are not in order, this will usually be discovered after only a few comparisons, no matter how many elements there are; but the work of shuffling the collection is proportional to its size. In the worst case, the number of comparisons and swaps are both unbounded, for the same reason that a tossed coin might turn up heads any number of times in a row. The best case occurs if the list as given is already sorted; in this case the expected number of comparisons is n − 1, and no swaps at all are carried out. For any collection of fixed size, the expected running time of the algorithm is finite for much the same reason that the infinite monkey theorem holds: there is some probability of getting the right permutation, so given an unbounded number of tries it will almost surely eventually be chosen. Related algorithms Gorosort: A sorting algorithm introduced in the 2011 Google Code Jam. As long as the list is not in order, a subset of all elements is randomly permuted. If this subset is optimally chosen each time this is performed, the expected value of the total number of times this operation needs to be done is equal to the number of misplaced elements. Bogobogosort: An algorithm that recursively calls
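The deterministic version mentioned above admits an equally short sketch. This variant is our illustration rather than code from the article: it walks candidates in the lexicographic order produced by itertools, so it terminates after at most n! permutations.

from itertools import permutations

def deterministic_bogosort(data):
    """Enumerate permutations until a sorted one is found."""
    for candidate in permutations(data):
        if all(a <= b for a, b in zip(candidate, candidate[1:])):
            return list(candidate)
    return list(data)  # unreachable for finite input: some permutation is sorted

print(deterministic_bogosort([3, 1, 2]))  # [1, 2, 3]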
https://en.wikipedia.org/wiki/The%20Tonight%20Show
The Tonight Show is an American late-night talk show that has been broadcast on the NBC Television Network since 1954. The program has been hosted by six comedians: Steve Allen (1954–1957), Jack Paar (1957–1962), Johnny Carson (1962–1992), Jay Leno (1992–2009 and 2010–2014), Conan O'Brien (2009–2010), and Jimmy Fallon (2014–present). Besides the main hosts, a number of regular "guest hosts" have been used, notably Ernie Kovacs, who hosted two nights per week during 1956–1957, and a number of guests used by Carson, who curtailed his own hosting duties back to three nights per week by the 1980s. Among Carson's regular guest hosts were Joey Bishop, McLean Stevenson, David Letterman, David Brenner, Joan Rivers and Jay Leno, although the practice has been mostly abandoned since hosts currently prefer reruns to showcasing potential rivals. Fallon has used guest hosts rarely, co-hosting the May 24, 2021 broadcast with Dave Grohl, Jimmy Kimmel hosting the April 1, 2022 broadcast (with Fallon swapping duties to guest host Jimmy Kimmel Live!), Shawn Mendes co-hosting the April 29, 2022 broadcast, Megan Thee Stallion co-hosting the August 11, 2022 broadcast, Demi Lovato co-hosting the August 17, 2022 broadcast and Jack Harlow co-hosting the October 6, 2022 broadcast. The Tonight Show is the world's longest-running talk show and the longest-running regularly scheduled entertainment program in the United States. It is the third-longest-running show on NBC, after the news-and-talk shows Today and Meet the Press. The current incarnation is taped from Studio 6B at NBC Studios in Rockefeller Center in New York, the same studio used during the later Jack Paar era and the first 10 years of Carson. During its initial run under Steve Allen, it originated from the Hudson Theatre on Broadway. From 1973 to 2009, and from 2010 to 2014 the show was taped at one of three different studios at NBC's Burbank, California Studios. During Conan O'Brien's brief tenure, the show was taped at an opulently reworked studio on Stage 1 of Universal Studios Hollywood. Over the course of more than 60 years, The Tonight Show has undergone only minor title changes. It aired under the name Tonight for several of its early years, as well as Tonight Starring Jack Paar and The Jack Paar Show due to the runaway popularity of its host, eventually settling permanently on The Tonight Show after Carson began his tenure in 1962, albeit with the host's name always included in the title. Beginning with Carson's debut episode, network programmers, advertisers, and the show's announcers would refer to the show by including the name of the host: The Tonight Show Starring Johnny Carson, The Tonight Show with Jay Leno, The Tonight Show with Conan O'Brien, and, currently, The Tonight Show Starring Jimmy Fallon. In 1957, the show briefly tried a more news-style format. It has otherwise adhered to the talk show format introduced by Allen and honed further by Paar. Carson is the longest-serving host to d
https://en.wikipedia.org/wiki/PostGIS
PostGIS is an open source software program that adds support for geographic objects to the PostgreSQL object-relational database. PostGIS follows the Simple Features for SQL specification from the Open Geospatial Consortium (OGC). Technically, PostGIS is implemented as a PostgreSQL extension. Features Geometry types for Points, LineStrings, Polygons, MultiPoints, MultiLineStrings, MultiPolygons, GeometryCollections, 3D types TINs and polyhedral surfaces, including solids. Spheroidal types under the geography datatype Points, LineStrings, Polygons, MultiPoints, MultiLineStrings, MultiPolygons and GeometryCollections. raster type - supports various pixel types and more than 1000 bands per raster. Since PostGIS 3, it is a separate PostgreSQL extension called postgis_raster. SQL/MM Topology support - via PostgreSQL extension postgis_topology. Spatial predicates for determining the interactions of geometries using the 3x3 DE-9IM (provided by the GEOS software library). Spatial operators for determining geospatial measurements like area, distance, length and perimeter. Spatial operators for determining geospatial set operations, like union, difference, symmetric difference and buffers (provided by GEOS). R-tree-over-GiST (Generalized Search Tree) spatial indexes for high-speed spatial querying. Index selectivity support, to provide high-performance query plans for mixed spatial/non-spatial queries. The PostGIS implementation is based on "light-weight" geometries and indexes optimized to reduce disk and memory footprint. Using light-weight geometries helps servers increase the amount of data migrated up from physical disk storage into RAM, improving query performance substantially. PostGIS is registered as "implement[ing] the specified standard" for "Simple Features for SQL" by the OGC. PostGIS has not been certified as compliant by the OGC. History Refractions Research released the first version of PostGIS in 2001 under the GNU General Public License. After six release candidates, a stable "1.0" version followed on April 19, 2005. In 2006 the OGC registered PostGIS as "implement[ing] the specified standard" for "Simple Features for SQL". Users Many software products can use PostGIS as a database backend, including: ArcGIS (via GISquirrel, ST-Links SpatialKit, ZigGIS, ArcSDE and other third-party connectors) Cadcorp SIS CartoDB CockroachDB GeoMedia (via third-party connectors) GeoServer (GPL) GeoNetwork (GPL) GRASS GIS (GPL) gvSIG (GPL) Kosmo (GPL) Manifold System MapInfo Professional Mapnik (LGPL) MapServer (BSD) Maptitude MapGuide (LGPL) OpenJUMP (GPL) OpenStreetMap QGIS (GPL) SAGA GIS (GPL) TerraLib (LGPL) TerraView (GPL) uDig (LGPL) See also Well-known text and binary, descriptions of geospatial objects used within PostGIS DE-9IM, the Dimensionally Extended nine-Intersection Model used by PostGIS References External links Free GIS software PostgreSQL Spatial database management systems
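As a hedged illustration of how an application might query PostGIS from Python: the database name, the "parks" table and its "geom" column below are hypothetical, while ST_SetSRID, ST_MakePoint, ST_DWithin and ST_AsText are standard PostGIS functions.

import psycopg2  # assumes a PostgreSQL server with the PostGIS extension enabled

# Connection parameters and the "parks" table (geometry column "geom"
# in EPSG:4326) are illustrative assumptions only.
conn = psycopg2.connect("dbname=gisdb user=gis")
with conn, conn.cursor() as cur:
    # ST_DWithin can use the spatial index; geography casts measure in meters.
    cur.execute(
        """
        SELECT name, ST_AsText(geom)
        FROM parks
        WHERE ST_DWithin(geom::geography,
                         ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                         5000)
        """,
        (-123.36, 48.43),
    )
    for name, wkt in cur.fetchall():
        print(name, wkt)

The index-backed ST_DWithin predicate is where the R-tree-over-GiST indexing described above pays off: only candidate rows near the query point are examined.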
https://en.wikipedia.org/wiki/Index%20of%20computing%20articles
Originally, the word computing was synonymous with counting and calculating, and the science and technology of mathematical calculations. Today, "computing" means using computers and other computing machines. It includes their operation and usage, the electrical processes carried out within the computing hardware itself, and the theoretical concepts governing them (computer science). See also: List of programmers, List of computing people, List of computer scientists, List of basic computer science topics, List of terms relating to algorithms and data structures. Topics on computing include: 0–9 1.TR.6 – 100BASE-FX – 100BASE-TX – 100BaseVG – 100VG-AnyLAN – 10BASE-2 – 10BASE-5 – 10BASE-T – 120 reset – 1-bit computing – 16-bit computing – 16-bit application – 16550 UART – 1NF – 1TBS – 20-GATE – 20-GATE – 28-bit – 2B1D – 2B1Q – 2D – 2NF – 3-tier (computing) – 32-bit application – 32-bit computing – 320xx microprocessor – 320xx – 386BSD – 386SPART.PAR – 3Com Corporation – 3DO – 3D computer graphics – 3GL – 3NF – 3Station – 4.2BSD – 4-bit computing – 404 error – 431A – 473L system – 486SX – 4GL – 4NF – 51-FORTH – 56 kbit/s line – 5ESS switch – 5NF – 5th Glove – 6.001 – 64-bit computing – 680x0 – 6x86 – 8-bit clean – 8-bit computing – 8.3 filename – 80x86 – 82430FX – 82430HX – 82430MX – 82430VX – 8514 (display standard) – 8514-A – 88open – 8N1 – 8x86 – 90–90 rule – 9PAC A ABC ALGOL – ABLE – ABSET – ABSYS – Accent – Acceptance, Test Or Launch Language – Accessible Computing – Ada – Addressing mode – AIM alliance – AirPort – AIX – Algocracy – ALGOL – Algorithm – AltiVec – Amdahl's law – America Online – Amiga – AmigaE – Analysis of algorithms – AOL – APL – Apple Computer – Apple II – AppleScript – Array programming – Arithmetic and logical unit – ASCII – Active Server Pages – ASP.NET – Assembly language – Atari – Atlas Autocode – AutoLISP – Automaton – AWK B B – Backus–Naur form – Basic Rate Interface (2B+D) – BASIC – Batch job – BCPL – Befunge – BeOS – Berkeley Software Distribution – BETA – Big Mac – Big O notation – Binary symmetric channel – Binary Synchronous Transmission – Binary numeral system – Bit – BLISS – Blue – Blu-ray Disc – Blue screen of death – Bourne shell (sh) Bourne-Again shell (bash) – Brainfuck – Btrieve – Burrows-Abadi-Needham logic – Business computing C C++ – C# – C – Cache – Canonical LR parser – Cat (Unix) – CD-ROM – Central processing unit – Chimera – Chomsky normal form – CIH virus – Classic Mac OS – COBOL – Cocoa (software) – Code and fix – Code Red worm – ColdFusion – Colouring algorithm – COMAL – Comm (Unix) – Command line interface – Command line interpreter – COMMAND.COM – Commercial at (computing) – Commodore 1541 – Commodore 1581 – Commodore 64 – Common logarithm – Common Unix Printing System – Compact disc – Compiler – Computability theory – Computational complexity theory – Computation – Computer-aided design – Computer-aided manufacturing – Computer architecture – Computer cluster – Computer hardware – C
https://en.wikipedia.org/wiki/Douglas%20Lenat
Douglas Bruce Lenat (September 13, 1950 – August 31, 2023) was an American computer scientist and researcher in artificial intelligence who was the founder and CEO of Cycorp, Inc. in Austin, Texas. Lenat was awarded the biennial IJCAI Computers and Thought Award in 1976 for creating the machine-learning program AM. He worked on (symbolic, not statistical) machine learning (with his AM and Eurisko programs), knowledge representation, "cognitive economy", blackboard systems, and what he dubbed in 1984 "ontological engineering" (with his Cyc program at MCC and, since 1994, at Cycorp). He also worked on military simulations and numerous projects for the US government, military, intelligence, and scientific organizations. In 1980, he published a critique of conventional random-mutation Darwinism. He authored a series of articles in the Journal of Artificial Intelligence exploring the nature of heuristic rules. Lenat was one of the original Fellows of the AAAI, and was the only individual to have served on the Scientific Advisory Boards of both Microsoft and Apple. He was a Fellow of the AAAS, AAAI, and Cognitive Science Society, and an editor of the J. Automated Reasoning, J. Learning Sciences, and J. Applied Ontology. He was one of the founders of TTI/Vanguard in 1991 and a member of its advisory board. He was named one of the Wired 25. Background and education Lenat was born in Philadelphia, United States, on September 13, 1950. When he was 5, the family moved to Wilmington, Delaware, where his father, Nathan Lenat, owned a bottling plant. His father died when he was 13 and the family then returned to Pennsylvania, where he attended Cheltenham High School. His after-school job was cleaning rat cages and goose pens at Beaver College, which motivated him to learn programming as a better occupation. He attended the University of Pennsylvania, supporting himself by programming, including the design and development of a natural language interface for a United States Navy online operations manual. He graduated with bachelor's degrees in Mathematics and Physics, and a master's degree in Applied Mathematics, all in 1972. His senior thesis, advised in part by Dennis Gabor, was to bounce acoustic waves in the 40 MHz range off real-world objects, record their interference patterns on a 2-meter square plot, photo-reduce those to a 10-mm square film image, shine a laser through the film, and thus project the three-dimensional imaged object—i.e., the first known acoustic hologram. To settle an argument with Dr. Gabor, Lenat computer-generated a five-dimensional hologram, by photo-reducing computer printout of the interference pattern of a globe rotating and expanding over time, reducing the large two-dimensional paper printout to a moderately large 5-cm square film surface through which a conventional laser beam was then able to project a three-dimensional image, which changed in two independent ways (rotating and changing in size) as the
https://en.wikipedia.org/wiki/Sokolsky%20Opening
The Sokolsky Opening, also known as the Orangutan and the Polish Opening, is an uncommon chess opening that begins with the move: 1. b4 According to various databases, out of the twenty possible first moves from White, the move 1.b4 ranks ninth in popularity. It is considered an irregular opening, so it is classified under the A00 code in the Encyclopaedia of Chess Openings. Origins One of the earliest recorded games with 1.b4 was played by Bernhard Fleissig against Carl Schlechter in 1893, although Fleissig was handily defeated in just 18 moves. Nikolai Bugaev defeated former world champion Wilhelm Steinitz with it in a simultaneous exhibition game, and later published an analysis of the opening in 1903 in a Russian magazine article. Savielly Tartakower defeated Richard Réti using b4 in a match in 1919 when both were top-level players, and Réti himself defeated Abraham Speijer in Scheveningen 1923 using the opening. The most famous use came in a game between Tartakower and Géza Maróczy at the New York 1924 chess tournament on March 21, 1924. The name "The Orangutan" originates from that game: the players visited the Bronx Zoo the previous day, where Tartakower consulted an orangutan named Susan. She somehow indicated, Tartakower insisted, that he should open with b4. Also, Tartakower was impressed with the climbing skills of the orangutan, and thought that the "climb" of the b-pawn was similar. In that particular game, Tartakower came out of the opening with a decent position, but the game was ultimately drawn. The opening received sporadic play in the decades that followed. Tartakower had more success in 1926 when he used it against Edgard Colle for a victory. One of its greatest proponents was the Soviet player Alexei Pavlovich Sokolsky (1908–1969), who often used it in high-level play. Sokolsky wrote a monograph on the opening in 1963, Debyut 1 b2–b4, which would lead to the opening being called the "Sokolsky Opening". Sokolsky's work defended the viability of the opening even at the highest levels of professional play. The final term, and the one used in contemporary books and chess websites such as Chess.com and Lichess, is the Polish Opening. This is by analogy to the Polish Defense (1. d4 b5), where it is Black who advances their queen's knight pawn two squares. Notable later usage In general, the opening is not popular at the top level. Alexander Alekhine, who played in the same 1924 New York tournament as Tartakower and the Orangutan game, wrote that the problem is that it reveals White's intentions before White knows what Black's intentions are. That said, it still sees sporadic use among top-level grandmasters. Boris Spassky used it against Vasily Smyslov in a 1960 match, albeit having to settle for a draw. In May 2021, world champion Magnus Carlsen essayed the opening against GMs Hikaru Nakamura and Wesley So in the online FTX Crypto Cup rapid tournament. Details The opening is largely based upon tactics on the or the f6- an
https://en.wikipedia.org/wiki/List%20of%20protected%20areas%20of%20Western%20Australia
Western Australia is the second largest country subdivision in the world. As of 2022, based on the latest Collaborative Australian Protected Areas Database report, it contains separate land-based protected areas with a total area of , accounting for just over 30 percent of the state's land mass. By area, Indigenous Protected Areas account for the largest part of this, almost 67 percent while, by number, nature reserves hold the majority with two-third of all land-based protected areas being nature reserves. Marine-based protected areas in Western Australia, as of 2022, covered or 41.05 percent of the state's waters. 41 individual Marine Protected Areas existed in the state of which the largest amount, 20, were Marine Parks, followed by Marine Reserves with 15. Marine Parks accounted for 92.25 percent of all Marine Protected Areas in the state. Protected areas of Western Australia Conservation Parks As of 2022, the following 72 conservation parks exist in Western Australia, covering or 0.5 percent of Western Australia's land mass, and accounting for 1.66 percent of all protected areas in the state. Barnabinmah Blackbutt Boyagarring Brooking Gorge Burra Camp Creek Cane River Cape Range Coalseam Dardanup Devonian Reef Geikie Gorge Goldfields Woodlands Gooralong Hester Jinmarnkur Kerr Korijekup Kujungurru Warrarn Lakeside Lane Poole Reserve Laterite Len Howard Leschenault Peninsula Leschenaultia Lupton Montebello Islands Mount Manning Range Muja Penguin Island R 46235 Rapids Rock Gully Rowles Lagoon Shell Beach Totadgin Unnamed WA01333 Unnamed WA17804 Unnamed WA23088 Unnamed WA23920 Unnamed WA24657 Unnamed WA29901 Unnamed WA33448 Unnamed WA34213 Unnamed WA38749 Unnamed WA39584 Unnamed WA39752 Unnamed WA41986 Unnamed WA43290 Unnamed WA46756 Unnamed WA47100 Unnamed WA48291 Unnamed WA48436 Unnamed WA48717 Unnamed WA49144 Unnamed WA49220 Unnamed WA49363 Unnamed WA49561 Unnamed WA49742 Unnamed WA49994 Unnamed WA51272 Unnamed WA51376 Unnamed WA51963 Unnamed WA52103 Unnamed WA53269 Unnamed WA53313 Unnamed WA53632 Unnamed WA53971 Wallaroo Rock Walyarta Westralia Marine Nature Reserves As of 2022, just Marine Nature Reserves exists in Western Australia. Hamelin Pool Marine Parks As of 2022, 20 Marine parks exist in Western Australia, covering or 37.88 percent of Western Australia's waters, and accounting for 92.25 percent of all marine protected areas in the state. Barrow Island Eighty Mile Beach Jurien Bay Lalang-garram / Camden Sound Lalang-garram / Horizontal Falls Marmion Montebello Islands Ngari Capes Ningaloo North Kimberley North Lalang-garram Rowley Shoals Shark Bay Shoalwater Islands Swan Estuary Swan Estuary - Alfred Cove Swan Estuary - Milyu Swan Estuary - Pelican Point Walpole And Nornalup Inlets Yawuru Nagulagun / Roebuck Bay National Parks Overview Western Australia has had national parks or protected areas under legislation since the ear
https://en.wikipedia.org/wiki/Skinny%20Client%20Control%20Protocol
The Skinny Client Control Protocol (SCCP) is a proprietary network terminal control protocol originally developed by Selsius Systems, which was acquired by Cisco Systems in 1998. SCCP is a lightweight IP-based protocol for session signaling with Cisco Unified Communications Manager, formerly named CallManager. The protocol architecture is similar to the media gateway control protocol architecture, in that it decomposes the function of media conversion in telecommunication for transmission via an Internet Protocol network into a relatively low-intelligence customer-premises device and a call agent implementation that controls the CPE via signaling commands. The call agent product is Cisco CallManager, which also performs as a signaling proxy for call events initiated over other common protocols such as H.323 and Session Initiation Protocol (SIP) for voice over IP, or ISDN for the public switched telephone network. Protocol components An SCCP client uses TCP/IP to communicate with one or more Call Manager applications in a cluster. It uses the Real-time Transport Protocol (RTP) over UDP-transport for the bearer traffic (real-time audio stream) with other Skinny clients or an H.323 terminal. SCCP is a stimulus-based protocol and is designed as a communications protocol for hardware endpoints and other embedded systems, with significant CPU and memory constraints. Some Cisco analog media gateways, such as the VG248 gateway, register and communicate with Cisco Unified Communications Manager using SCCP. Origin Cisco acquired SCCP technology when it acquired Selsius Corporation in 1998. For this reason the protocol is also referred to in Cisco documentation as the Selsius Skinny Station Protocol. Another remnant of the origin of the Cisco IP phones is the default device name format for registered Cisco phones with CallManager. It is SEP, as in Selsius Ethernet Phone, followed by the MAC address. Cisco also has marketed a Skinny-based softphone called Cisco IP Communicator. Client examples Examples of SCCP client devices include the Cisco 7900 series of IP phones, the Cisco IP Communicator softphone, and the 802.11b Wireless IP Phone 7920, along with the Cisco Unity voicemail server. Other implementations Other companies, such as Symbol Technologies, SocketIP, and Digium, have implemented the protocol in VoIP terminals and IP phones, media gateway controllers, and softswitches. An open source implementation of a call agent is available in the Asterisk and FreeSWITCH systems. IPBlue provides a soft phone that emulates a Cisco 7960 telephone. Twinlights Software distributes a soft phone implementation for Android-based devices. The Cisco Unified Application Environment, the product acquired by Cisco when they purchased Metreos, supports using SCCP to emulate Cisco 7960 phones allowing applications to access all Cisco line-side features. See also Media Gateway Control Protocol References VoIP protocols Cisco protocols Application layer proto
https://en.wikipedia.org/wiki/Symmetric%20digital%20subscriber%20line
A symmetric digital subscriber line (SDSL) is a digital subscriber line (DSL) that transmits digital data over the copper wires of the telephone network, where the bandwidth in the downstream direction, from the network to the subscriber, is identical to the bandwidth in the upstream direction, from the subscriber to the network. This symmetric bandwidth can be considered to be the opposite of the asymmetric bandwidth offered by asymmetric digital subscriber line (ADSL) technologies, where the upstream bandwidth is lower than the downstream bandwidth. SDSL is generally marketed at business customers, while ADSL is marketed at private as well as business customers. More specifically, SDSL can be understood as:

In the wider sense, an umbrella term for all DSL variants which offer symmetric bandwidth, including IDSL, which offers 144 kbit/s, HDSL, HDSL2, and G.SHDSL, which offers up to 22.784 Mbit/s over four pairs of copper wires, as well as the SDSL variant below
In the narrow sense, a particular proprietary and non-standardized DSL variant for operation at 1.544 Mbit/s or 2.048 Mbit/s over a single pair of copper wires, without support for analog calls on the same line
A term used by ETSI to refer to G.SHDSL

Proprietary SDSL technology SDSL is a rate-adaptive digital subscriber line (DSL) variant with T1/E1-like data rates (T1: 1.544 Mbit/s, E1: 2.048 Mbit/s). It runs over one pair of copper wires, with a maximum range of . It cannot co-exist with a conventional voice service on the same pair as it takes over the entire bandwidth. Standardization efforts SDSL is a proprietary technology that was never standardized. As such it usually only interoperates with devices from the same vendor. It is the predecessor of G.SHDSL, which was standardized in February 2001 by ITU-T with recommendation G.991.2. SDSL is often confused with G.SHDSL and HDSL; in Europe, G.SHDSL was standardized by ETSI using the name 'SDSL'. This ETSI variant is compatible with the ITU-T G.SHDSL standardized regional variant for Europe. As there is a standardized successor available, SDSL installations today are considered legacy. Most new installations use G.SHDSL equipment instead of SDSL. Target audience SDSL typically falls between ADSL and T1/E1 in price and was mainly targeted at small and medium businesses which do not need the service guarantees of Frame Relay or the higher performance of a leased line. See also ISDN List of interface bit rates References Digital subscriber line ITU-T recommendations
https://en.wikipedia.org/wiki/Secret%20service
A secret service is a government agency, intelligence agency, or the activities of a government agency, concerned with the gathering of intelligence data. The tasks and powers of a secret service can vary greatly from one country to another. For instance, a country may establish a secret service which has some policing powers (such as surveillance) but not others. The powers and duties of a government organization may be partly secret and partly not. The organization may be said to operate openly at home and secretly abroad, or vice versa. Secret police and intelligence agencies can usually be considered secret services. Various states and regimes, at different times and places, established bodies that could be described as a secret service or secret police – for example, the agentes in rebus of the late Roman Empire were sometimes defined as such. In modern times, the French police officer Joseph Fouché is sometimes regarded as the primary pioneer within secret intelligence. Among other things, he is alleged to have prevented several murder attempts on Napoleon during his time as First Consul (1799–1804) through a large and tight net of various informers. References External links National security Government agencies by type
https://en.wikipedia.org/wiki/National%20Science%20Foundation%20Network
The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the National Science Foundation (NSF) from 1985 to 1995 to promote advanced research and education networking in the United States. The program created several nationwide backbone computer networks in support of these initiatives. Initially created to link researchers to the NSF-funded supercomputing centers, through further public funding and private industry partnerships it developed into a major part of the Internet backbone. The National Science Foundation permitted only government agencies and universities to use the network until 1989 when the first commercial Internet service provider emerged. By 1991, the NSF removed access restrictions and the commercial ISP business grew rapidly. History Following the deployment of the Computer Science Network (CSNET), a network that provided Internet services to academic computer science departments, in 1981, the U.S. National Science Foundation (NSF) aimed to create an academic research network facilitating access by researchers to the supercomputing centers funded by NSF in the United States. In 1985, NSF began funding the creation of five new supercomputing centers: John von Neumann Center at Princeton University Cornell Theory Center at Cornell University Pittsburgh Supercomputing Center (PSC), a joint effort of Carnegie Mellon University, the University of Pittsburgh, and Westinghouse National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana–Champaign San Diego Supercomputer Center (SDSC) on the campus of the University of California, San Diego (UCSD) Also in 1985, under the leadership of Dennis Jennings, the NSF established the National Science Foundation Network (NSFNET). NSFNET was to be a general-purpose research network, a hub to connect the five supercomputing centers along with the NSF-funded National Center for Atmospheric Research (NCAR) to each other and to the regional research and education networks that would in turn connect campus networks. Using this three tier network architecture NSFNET would provide access between the supercomputer centers and other sites over the backbone network at no cost to the centers or to the regional networks using the open TCP/IP protocols initially deployed successfully on the ARPANET. 56kbit/s backbone The NSFNET initiated operations in 1986 using TCP/IP. Its six backbone sites were interconnected with leased 56-kbit/s links, built by a group including the University of Illinois National Center for Supercomputing Applications (NCSA), Cornell University Theory Center, University of Delaware, and Merit Network. PDP-11/73 minicomputers with routing and management software, called Fuzzballs, served as the network routers since they already implemented the TCP/IP standard. This original 56kbit/s backbone was overseen by the supercomputer centers themselves with the lead taken by Ed Krol at the Univers
https://en.wikipedia.org/wiki/Standard%20ML
Standard ML (SML) is a general-purpose, modular, functional programming language with compile-time type checking and type inference. It is popular among compiler writers and programming language researchers, as well as in the development of theorem provers. Standard ML is a modern dialect of ML, the language used in the Logic for Computable Functions (LCF) theorem-proving project. It is distinctive among widely used languages in that it has a formal specification, given as typing rules and operational semantics in The Definition of Standard ML. Language Standard ML is a functional programming language with some impure features. Programs written in Standard ML consist of expressions as opposed to statements or commands, although some expressions of type unit are only evaluated for their side-effects. Functions Like all functional languages, a key feature of Standard ML is the function, which is used for abstraction. The factorial function can be expressed as follows:

fun factorial n = if n = 0 then 1 else n * factorial (n - 1)

Type inference An SML compiler must infer the static type int -> int of this function without user-supplied type annotations. It has to deduce that n is only used with integer expressions, and must therefore itself be an integer, and that all terminal expressions are integer expressions. Declarative definitions The same function can be expressed with clausal function definitions where the if-then-else conditional is replaced with templates of the factorial function evaluated for specific values:

fun factorial 0 = 1
  | factorial n = n * factorial (n - 1)

Imperative definitions or iteratively:

fun factorial n = let
    val i = ref n and acc = ref 1
in
    while !i > 0 do (acc := !acc * !i; i := !i - 1);
    !acc
end

Lambda functions or as a lambda function:

val rec factorial = fn 0 => 1 | n => n * factorial (n - 1)

Here, the keyword val introduces a binding of an identifier to a value, fn introduces an anonymous function, and rec allows the definition to be self-referential. Local definitions The encapsulation of an invariant-preserving tail-recursive tight loop with one or more accumulator parameters within an invariant-free outer function, as seen here, is a common idiom in Standard ML. Using a local function, it can be rewritten in a more efficient tail-recursive style:

local
    fun loop (0, acc) = acc
      | loop (m, acc) = loop (m - 1, m * acc)
in
    fun factorial n = loop (n, 1)
end

Type synonyms A type synonym is defined with the keyword type. Here is a type synonym for points on a plane, and functions computing the distances between two points, and the area of a triangle with the given corners as per Heron's formula. (These definitions will be used in subsequent examples).

type loc = real * real

fun square (x : real) = x * x

fun dist (x, y) (x', y') = Math.sqrt (square (x' - x) + square (y' - y))

fun heron (a, b, c) = let
    val x = dist a b
    val y = dist b c
    val z = dist a c
    val s = (x + y + z) / 2.0
in Math
https://en.wikipedia.org/wiki/Venevisi%C3%B3n
Venevisión is a Venezuelan free-to-air television channel and one of Venezuela's largest television networks, owned by the Cisneros Media division of Grupo Cisneros. History The company's roots date back to June 1, 1952, with the establishment of Televisora Independiente S.A. (TeleVisa), which operated channel 4 in Caracas and channel 5 in Maracaibo. When TeleVisa went bankrupt in 1951, Diego Cisneros purchased the remaining assets of the company. On February 27, 1960, Venevisión (a portmanteau based on the words Venezuela and Televisión) was officially inaugurated, with a special inaugural show on March 1, 1960, which thousands of people attended and which took place in the station's parking lot. Venevisión began with a capital of 5,500,000 bolívares and 150 employees including artists, administrators, and technical personnel. Venevisión's original administrators were Diego Cisneros (president), Alfredo Torres (transmission manager), Héctor Beltrán (production manager), and Orlando Cuevas (general manager). Initially, Venevisión broadcast live because it had not yet installed the videotape system. Except for the news, its programs were produced using the technical formats used in movies at that time. In a short period of time, Venevisión greatly expanded nationally, and was seen in most of Venezuela on many VHF and UHF channels. In March 1960, the newly created Venevisión and the American television network ABC signed two agreements: one for technical support and the other for the rights to broadcast each other's programs. Because of these agreements, Venevisión later began using the videotape system. In its first year of existence, Venevisión made approximately 800,000 bolívares a month in advertising. By 1971, it began to bring its then black-and-white programs to viewers internationally via videotape, with the drama program Esmeralda as the first to do so. The next year, the network officially took over the broadcasts of the Miss Venezuela beauty pageant, and it has been its home ever since. In 1976, Venevisión moved its transmitters, which were located on the top of a building in La Colina, a neighborhood in Caracas where Venevisión's studios can be found, to Los Mecedores, near Venezolana de Televisión's studios and CANTV's installations. In Los Mecedores, a 100-meter tower was erected and a powerful new antenna was installed. With this new antenna, Venevisión's signal was able to reach Petare, Caricuao, and Guarenas with better quality. In the 1970s, like other television stations in Venezuela, Venevisión began experimenting with color broadcasts. In 1978, the Ministry of Transport and Communications fined Venevisión 4,000 bolívares on two occasions in one week for violating the regulations for color broadcasting. Color broadcasts only commenced the next year, with full-color transmissions beginning on June 1, 1980. The very first programme by Venevisión shown in color was t
https://en.wikipedia.org/wiki/Natura%202000
Natura 2000 is a network of nature protection areas in the territory of the European Union. It is made up of Special Areas of Conservation and Special Protection Areas designated under the Habitats Directive and the Birds Directive, respectively. The network includes both terrestrial and Marine Protected Areas. The Natura 2000 network covered more than 18% of the European Union's land area and more than 7% of its marine area in 2022. History In May 1992, the governments of the European Communities adopted legislation designed to protect the most seriously threatened habitats and species across Europe. The Habitats Directive complements the Birds Directive adopted in 1979, and together they make up the Natura 2000 network of protected areas. The Birds Directive requires the establishment of Special Protection Areas for birds. The Habitats Directive similarly requires Sites of Community Importance which upon the agreement of the European Commission become Special Areas of Conservation to be designated for species other than birds, and for habitat types (e.g. particular types of forest, grasslands, wetlands, etc.). Together, Special Protection Areas and Special Areas of Conservation form the Natura 2000 network of protected areas. The Natura 2000 network is the EU contribution to the "Emerald network" of Areas of Special Conservation Interest set up under the Bern Convention on the conservation of European wildlife and natural habitats. Natura 2000 is also a key contribution to the Program of Work of Protected Areas of the Convention on Biological Diversity. As a prerequisite for joining the EU, accession states have to submit proposals for Natura 2000 sites meeting the same criteria as EU member states. Some new member states have large areas which qualify to be protected under the directives, and implementation has not always been simple. The Natura 2000 sites are selected by member states and the European Commission following strictly scientific criteria according to the two directives mentioned above. The Special Protection Areas are designated directly by each EU member state, while the Special Areas of Conservation follow a more elaborate process: each EU member state must compile a list of the best wildlife areas containing the habitats and species listed in the Habitats Directive; this list must then be submitted to the European Commission, after which an evaluation and selection process on European level will take place in order to become a Natura 2000 site. The Habitats Directive divides the EU territory into nine biogeographic regions, each with its own ecological coherence. Natura 2000 sites are selected according to the conditions in each biogeographical region; thus selected sites represent species and habitat types under similar natural conditions across a suite of countries. Each Natura 2000 site has a unique identification form called a standard data form. This form is used as a legal reference when assessing the management
https://en.wikipedia.org/wiki/T2
T2, T-2, T2, T2 may refer to: Computing Apple T2, a SoC from Apple for security and hardware management within Intel based Macs Palm Tungsten T2, a Palm OS-based personal digital assistant T2 or T-2, a 6.312 Mbit/s T-carrier in telecommunications T2, a German keyboard layout T2 Temporal Prover, an automated program analyzer by Microsoft Research UltraSPARC T2, a Sun Microsystem microprocessor T2 SDE, a Linux distribution kit T2 (social), a social platform Medicine and physiology T-2 mycotoxin, a type of trichothecene mycotoxin, a naturally occurring mold byproduct of certain species of Fusarium fungi T2 or diiodothyronine, a metabolite of thyroid hormone T2 hyperintensity, an area of high intensity on types of MRI scans of the brain that reflect lesions produced largely by demyelination and axonal loss T2, the second highest (closest to the neck) of the thoracic vertebrae, the bones of the spine of the upper back T2, the thoracic spinal nerve 2 T2, a small-to-moderate size cancerous tumor in the TNM staging system Entertainment Books T2 (novel series), a literary trilogy that continues after Terminator 2: Judgment Day Films Terminator 2: Judgment Day, a 1991 science fiction film T2 (2009 film), a 2009 Filipino film T2 Trainspotting, a 2017 sequel to Trainspotting Games Tak 2: The Staff of Dreams, a 2004 video game Take-Two Interactive, American multinational publisher distributor of video games Tekken 2, a 1995 fighting game Thief II: The Metal Age, a 2000 video game Tribes 2, a 2001 video game Tsquared (born 1987), American e-sports gamer Turok 2: Seeds of Evil, a 1998 video game Music T2 (band), a British progressive rock band T2 (producer), British producer most notable for the track "Heartbroken", which reached No. 2 in the UK Singles Chart Sport Portland Timbers 2, an American soccer club in the United Soccer League T2 (classification), a para-cycling classification Terrell Thomas (born 1985), American football player who plays for the New York Giants Science and specifications T-mount, a Tamron lens thread specification T-2, the second Jupiter Trojan survey, a subproject of the Palomar–Leiden survey T2, a fluorescent tube diameter designation T2, a mathematical concept used in topology, see Hausdorff space T2 or spin-spin relaxation time, a time constant in radiology T2, a temperature classification, also referred to as a T-code, on electrical equipment labeled for hazardous locations T2 (Torx size), a six-sided screw, bolt or driver size T2 or Treadmill 2, on board the International Space Station Transport Air ACAZ T.2, a Belgian monoplane designed in 1924 Antonov T-2M Maverick, ultralight trike aircraft Fokker F.IV, aircraft designated T-2 when used by the United States Army Air Service Perry Beadle T.2, a 1914 British biplane Mitsubishi T-2, a 1971 Japanese jet trainer aircraft T-2 Buckeye (aircraft) Thai Air Cargo, IATA airline designator Rail Inner West & Leppington Line, a rail serv
https://en.wikipedia.org/wiki/Smoothsort
In computer science, smoothsort is a comparison-based sorting algorithm. A variant of heapsort, it was invented and published by Edsger Dijkstra in 1981. Like heapsort, smoothsort is an in-place algorithm with an upper bound of O(n log n), but it is not a stable sort. The advantage of smoothsort is that it comes closer to O(n) time if the input is already sorted to some degree, whereas heapsort averages O(n log n) regardless of the initial sorted state. Overview Like heapsort, smoothsort organizes the input into a priority queue and then repeatedly extracts the maximum. Also like heapsort, the priority queue is an implicit heap data structure (a heap-ordered implicit binary tree), which occupies a prefix of the array. Each extraction shrinks the prefix and adds the extracted element to a growing sorted suffix. When the prefix has shrunk to nothing, the array is completely sorted. Heapsort maps the binary tree to the array using a top-down breadth-first traversal of the tree; the array begins with the root of the tree, then its two children, then four grandchildren, and so on. Every element has a well-defined depth below the root of the tree, and every element except the root has its parent earlier in the array. Its height above the leaves, however, depends on the size of the array. This has the disadvantage that every element must be moved as part of the sorting process: it must pass through the root before being moved to its final location. Smoothsort uses a different mapping, a bottom-up depth-first post-order traversal. A left child is followed by the subtree rooted at its sibling, and a right child is followed by its parent. Every element has a well-defined height above the leaves, and every non-leaf element has its children earlier in the array. Its depth below the root, however, depends on the size of the array. The algorithm is organized so the root is at the end of the heap, and at the moment that an element is extracted from the heap it is already in its final location and does not need to be moved. Also, a sorted array is already a valid heap, and many sorted intervals are valid heap-ordered subtrees. More formally, every position i is the root of a unique subtree, whose nodes occupy a contiguous interval that ends at i. An initial prefix of the array (including the whole array) might be such an interval corresponding to a subtree, but in general decomposes as a union of a number of successive such subtree intervals, which Dijkstra calls "stretches". Any subtree without a parent (i.e. rooted at a position whose parent lies beyond the prefix under consideration) gives a stretch in the decomposition of that interval, which decomposition is therefore unique. When a new node is appended to the prefix, one of two cases occurs: either the position is a leaf and adds a stretch of length 1 to the decomposition, or it combines with the last two stretches, becoming the parent of their respective roots, thus replacing the two stretches by a new stretch co
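The stretch sizes are the Leonardo numbers, and the append rule just described can be simulated in a few lines. The Python sketch below is our illustration of the bookkeeping only, with no element comparisons or sifting, under the rule quoted above; the function names are ours.

def leonardo(k):
    """Leonardo numbers: L(0) = L(1) = 1 and L(k) = L(k-2) + L(k-1) + 1."""
    a, b = 1, 1  # L(0), L(1)
    for _ in range(k):
        a, b = b, a + b + 1
    return a

def stretch_orders(n):
    """Leonardo orders of the stretches covering a prefix of length n:
    merge the last two stretches under the new node when their orders are
    consecutive, else start a singleton stretch (order 1, or order 0 if an
    order-1 stretch is already last)."""
    orders = []
    for _ in range(n):
        if len(orders) >= 2 and orders[-2] == orders[-1] + 1:
            orders[-2:] = [orders[-2] + 1]   # two stretches + new root merge
        elif orders and orders[-1] == 1:
            orders.append(0)                 # L(0) = 1, a second singleton
        else:
            orders.append(1)                 # L(1) = 1, a new singleton stretch
    return orders

for n in (1, 5, 10, 31):
    orders = stretch_orders(n)
    assert sum(leonardo(k) for k in orders) == n  # the stretches tile the prefix
    print(n, [leonardo(k) for k in orders])

Since each step either appends one node or merges under one node, the decomposition always sums to the prefix length, and it contains only O(log n) stretches.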
https://en.wikipedia.org/wiki/Matrox
Matrox Graphics, Inc. is a producer of video card components and equipment for personal computers and workstations. Based in Dorval, Quebec, Canada, it was founded in 1976 by Lorne Trottier and Branko Matić. The name is derived from "Ma" in Matić and "Tro" in Trottier. Company Matrox Graphics, Inc., the entity most recognized by the public which has been making graphics cards since 1978. Matrox Video Products Group, which produces video-editing products for professional video production and broadcast markets. A division of Matrox Graphics, Inc. Former divisions Matrox Electronic Systems Ltd., the former parent company. Sold to Zebra Technologies as part of the divestiture of Matrox Imaging on June 6, 2022 and succeeded by Matrox Graphics, Inc. Matrox Imaging, which produces frame grabbers, smart cameras and image processing/analysis software. Matrox Networks, which produced corporate-grade networking equipment. Date of closure unknown. History Matrox's first graphics card product was the ALT-256 for S-100 bus computers, released in 1978. The ALT-256 produced a 256 by 256 pixel monochrome display using an 8 kilobyte (64 kilobit) frame buffer consisting of 16 TMS4027 DRAM chips (4 kilobits each). An expanded version followed, the ALT-512, both available for Intel SBC bus machines as well. Through the 1980s, Matrox's cards followed changes in the hardware side of the market, to Multibus and then the variety of PC standards. During the 1990s, the Matrox Millennium series of cards attracted buyers willing to pay for a higher quality and sharper display. In 1994, Matrox introduced the Matrox Impression, an add-on card that worked in conjunction with a Millennium card to provide 3D acceleration. The Impression was aimed primarily at the CAD market. A later version of the Millennium included features similar to the Impression but by this time the series was lagging behind emerging vendors like 3dfx Interactive. Matrox made several attempts to increase its share of the market for 3D-capable cards. The Matrox Mystique, released in 1996, was the company's first attempt to make a card with good gaming performance and with pricing suitable for that market. The product had good 2D and 3D performance but produced poor 3D images with the result that it was derided in reviews, being compared unfavorably with the Voodoo1 and even being nicknamed the "Matrox Mystake". Another attempt was the Matrox G100 and G200. The G200 was sold as two models, the Millennium G200 was a higher-end version typically equipped with 8 MB SGRAM memory, while the Mystique G200 used slower SDRAM memory but added a TV-out port. The G200 offered competent 3D performance for the first time, but was released shortly before a new generation of cards from Nvidia and ATI which completely outperformed it. Later versions in the Matrox G400 series were never able to regain the crown, and despite huge claims for the Matrox Parhelia, their performance continued to be quickly outpaced by
https://en.wikipedia.org/wiki/A%2A%20search%20algorithm
A* (pronounced "A-star") is a graph traversal and path search algorithm, which is used in many fields of computer science due to its completeness, optimality, and optimal efficiency. One major practical drawback is its space complexity, as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance, as well as memory-bounded approaches; however, A* is still the best solution in many cases. Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968. It can be seen as an extension of Dijkstra's algorithm. A* achieves better performance by using heuristics to guide its search. Compared to Dijkstra's algorithm, the A* algorithm only finds the shortest path from a specified source to a specified goal, and not the shortest-path tree from a specified source to all possible goals. This is a necessary trade-off for using a specific-goal-directed heuristic. For Dijkstra's algorithm, since the entire shortest-path tree is generated, every node is a goal, and there can be no specific-goal-directed heuristic. History A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm for Shakey's path planning. Graph Traverser is guided by a heuristic function , the estimated distance from node to the goal node: it entirely ignores , the distance from the start node to . Bertram Raphael suggested using the sum, . Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions. A* was originally designed for finding least-cost paths when the cost of a path is the sum of its costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra. The original 1968 A* paper contained a theorem stating that no A*-like algorithm could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later claiming that consistency was not required, but this was shown to be false in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm. Description A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until its termi
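A compact Python sketch of the search loop described above follows. The graph encoding, the zero-heuristic demonstration, and all names are our illustrative assumptions rather than a canonical implementation; with an admissible heuristic h, the returned cost is optimal.

import heapq
from itertools import count

def a_star(graph, start, goal, h):
    """graph: dict mapping node -> iterable of (neighbor, edge_cost).
    h(node): admissible estimate of remaining cost to the goal.
    Returns (cost, path) or None if the goal is unreachable."""
    tie = count()  # tiebreaker so the heap never compares nodes or paths
    open_heap = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        _, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found already
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(open_heap,
                               (g2 + h(neighbor), next(tie), g2, neighbor,
                                path + [neighbor]))
    return None

# Toy example; the zero heuristic degenerates to Dijkstra's algorithm.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(a_star(graph, "A", "D", h=lambda n: 0))  # (4, ['A', 'B', 'C', 'D'])

The priority g2 + h(neighbor) is the f = g + h combination attributed to Raphael above; setting h to zero recovers Dijkstra's uninformed behavior.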
https://en.wikipedia.org/wiki/System%20on%20a%20chip
A system on a chip or system-on-chip (SoC ; pl. SoCs ) is an integrated circuit that integrates most or all components of a computer or other electronic system. These components almost always include on-chip central processing unit (CPU), memory interfaces, input/output devices and interfaces, and secondary storage interfaces, often alongside other components such as radio modems and a graphics processing unit (GPU) – all on a single substrate or microchip. SoCs may contain digital and also analog, mixed-signal and often radio frequency signal processing functions (otherwise it may be considered on a discrete application processor). Higher-performance SoCs are often paired with dedicated and physically separate memory and secondary storage (such as LPDDR and eUFS or eMMC, respectively) chips, that may be layered on top of the SoC in what's known as a package on package (PoP) configuration, or be placed close to the SoC. Additionally, SoCs may use separate wireless modems. SoCs are in contrast to the common traditional PC architecture, which separates hardware components based on function and connects them through a central interfacing circuit board called the motherboard. Whereas a motherboard houses and connects detachable or replaceable components, SoCs integrate all of these components into a single integral circuit. An SoC will typically integrate a CPU, graphics and memory interfaces, secondary storage and USB connectivity, I/O interfaces on a single chip, whereas a motherboard would connect these modules as discrete components or expansion cards. An SoC integrates a microcontroller, microprocessor or perhaps several processor cores with peripherals like a GPU, Wi-Fi and cellular network radio modems, and/or one or more coprocessors. Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with even more advanced peripherals. Compared to a multi-chip architecture, an SoC with equivalent functionality will have increased performance and reduced power consumption as well as a smaller semiconductor die area. This comes at the cost of reduced replaceability of components. By definition, SoC designs are fully or nearly fully integrated across different component modules. For these reasons, there has been a general trend towards tighter integration of components in the computer hardware industry, in part due to the influence of SoCs and lessons learned from the mobile and embedded computing markets. SoCs can be viewed as part of a larger trend towards embedded computing and hardware acceleration. SoCs are very common in the mobile computing (as in smart devices such as smartphones and tablet computers) and edge computing markets. They are also commonly used in embedded systems such as WiFi routers and the Internet of things. Types In general, there are three distinguishable types of SoCs: SoCs built around a microcontroller, SoCs built around a
https://en.wikipedia.org/wiki/Bhrigu
Bhrigu was a rishi in Hinduism. He was one of the seven great sages, the Saptarshis, and one of the many Prajapatis (the facilitators of creation) created by Brahma. The first compiler of predictive astrology and the author of the Bhrigu Samhita, the astrological (jyotisha) classic, Bhrigu is considered a manasaputra ("mind-born son") of Brahma. The adjectival form of the name, Bhargava, is used to refer to the descendants and the school of Bhrigu. According to Manusmriti, Bhrigu was a compatriot of Manu, the progenitor of humanity, and lived during his time. Along with Manu, Bhrigu made important contributions to the Manusmriti, which was constituted out of a sermon to a congregation of saints in the state of Brahmavarta, after the great floods in this area. As per the Skanda Purana, Bhrigu migrated to Bhrigukaccha, modern Bharuch, on the banks of the Narmada river in Gujarat, leaving his son Chyavana at Dhosi Hill. He was married to Khyati, one of the nine daughters of Prajapati Kardama; in some accounts, she is instead described as a daughter of Prajapati Daksha. She was the mother of Lakshmi as Bhargavi. They also had two sons named Dhata and Vidhata. He had one more son with Kavyamata (Usana), who is better known than Bhrigu himself – Shukra, a learned sage and guru of the asuras. The sage Chyavana is also said to be his son with Puloma, as is the folk hero Mrikanda. [Maha:1.5] One of his descendants was the sage Jamadagni, who in turn was the father of the sage Parashurama, considered an avatar of Vishnu. Legends Bhrigu is mentioned in the Shiva Purana and the Vayu Purana, where he is shown present during the great yajna of Daksha (his father-in-law). He supports the continuation of the Daksha yajna even after being warned that without an offering for Shiva, it was asking for a catastrophe for everyone present there. In the Tattiriya Upanishad, he is described to have had a conversation with his father Varuni on Brahman. In the Bhagavad Gita, Krishna says that among sages, Bhrigu is representative of the opulence of God. Testing the Trimurti The Bhagavata Purana describes a legend in which a group of sages gathered at the bank of the river Sarasvati to participate in a great yajna. The gathered sages could not decide who among the Trimurti (supreme trinity) of Brahma, Vishnu, and Shiva was pre-eminent and should be the recipient of the yajna. They deputed Bhrigu to determine this answer. Upon being entrusted with the task, Bhrigu decided to test each of the Trimurti. He first visited Brahma at Satyaloka, and to test his patience, he refused to sing in his praise or prostrate before him. Brahma grew angry, but realised that his son was testing him and allowed him to pass. Bhrigu then left for Kailasha, the abode of Shiva. Upon seeing the sage, Shiva rose to his feet and moved forward with great joy to embrace the sage. Bhrigu, however, refused the embrace, and tested him by calling the deity a maligner of social conventions and rituals. Shiva was infuriated and prepared to strike the
https://en.wikipedia.org/wiki/Parallel%20port
In computing, a parallel port is a type of interface found on early computers (personal and otherwise) for connecting peripherals. The name refers to the way the data is sent; parallel ports send multiple bits of data at once (parallel communication), as opposed to serial communication, in which bits are sent one at a time. To do this, parallel ports require multiple data lines in their cables and port connectors and tend to be larger than contemporary serial ports, which only require one data line. There are many types of parallel ports, but the term has become most closely associated with the printer port or Centronics port found on most personal computers from the 1970s through the 2000s. It was an industry de facto standard for many years, and was finally standardized as IEEE 1284 in the late 1990s, which defined the Enhanced Parallel Port (EPP) and Extended Capability Port (ECP) bi-directional versions. Today, the parallel port interface is virtually non-existent in new computers because of the rise of Universal Serial Bus (USB) devices, along with network printing using Ethernet and Wi-Fi connected printers. The parallel port interface was originally known as the Parallel Printer Adapter on IBM PC-compatible computers. It was primarily designed to operate printers that used IBM's eight-bit extended ASCII character set to print text, but could also be used to adapt other peripherals. Graphical printers, along with a host of other devices, have been designed to communicate with the system. History Centronics An Wang, Robert Howard and Prentice Robinson began development of a low-cost printer at Centronics, a subsidiary of Wang Laboratories that produced specialty computer terminals. The printer used the dot matrix printing principle, with a print head consisting of a vertical row of seven metal pins connected to solenoids. When power was applied to the solenoids, the pin was pushed forward to strike the paper and leave a dot. To make a complete character glyph, the print head would receive power to specified pins to create a single vertical pattern, then the print head would move to the right by a small amount, and the process repeated. On their original design, a typical glyph was printed as a matrix seven high and five wide, while the "A" models used a print head with 9 pins and formed glyphs that were 9 by 7. This left the problem of sending the ASCII data to the printer. While a serial port does so with the minimum of pins and wires, it requires the device to buffer up the data as it arrives bit by bit and turn it back into multi-bit values. A parallel port makes this simpler; the entire ASCII value is presented on the pins in complete form. In addition to the eight data pins, the system also needed various control pins as well as electrical grounds. Wang happened to have a surplus stock of 20,000 Amphenol 36-pin micro ribbon connectors that were originally used for one of their early calculators. The interface only required 21 of t
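To make the byte-at-once handshake concrete, here is a hedged Python sketch of how a program on a legacy PC might present a byte on the eight data pins and pulse the strobe line. The I/O base address 0x378, the /dev/port interface, and the control-register bit values are assumptions drawn from the classic PC "LPT1" convention, not from the article; a real driver would also poll the printer's BUSY status line before each byte.

```python
import os, time

BASE = 0x378               # classic LPT1 I/O base on PCs (assumption)
DATA, CONTROL = BASE, BASE + 2

def outb(fd, port, value):
    os.lseek(fd, port, os.SEEK_SET)   # /dev/port exposes x86 I/O port space on Linux
    os.write(fd, bytes([value]))

def send_byte(fd, value):
    outb(fd, DATA, value)      # present all 8 bits at once on the data pins
    outb(fd, CONTROL, 0x0D)    # assert STROBE (bit 0, inverted on the wire),
                               # keeping INIT and SELECT-IN in their idle state
    time.sleep(1e-6)           # hold briefly so the printer can latch the byte
    outb(fd, CONTROL, 0x0C)    # release STROBE, back to the idle control value

fd = os.open("/dev/port", os.O_WRONLY)  # requires root privileges
for ch in b"HELLO\n":
    send_byte(fd, ch)
os.close(fd)
```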
https://en.wikipedia.org/wiki/Jef%20Raskin
Jef Raskin (born Jeff Raskin; March 9, 1943 – February 26, 2005) was an American human–computer interface expert who conceived and initiated the Macintosh project at Apple in the late 1970s. Early life and education Jef Raskin was born in New York City to a secular Jewish family, whose surname is a matronymic from "Raske", a Yiddish nickname for Rachel. He received a BA in mathematics and a BS in physics with minors in philosophy and music from Stony Brook University. In 1967, he received a master's degree in computer science from Pennsylvania State University, after having switched from mathematical logic due to differences of opinion with his advisor. Even though he had completed work typical for a PhD, the university was not accredited for a PhD in computer science. The first original computer application he wrote was a music application as part of his master's thesis. Raskin later enrolled in a graduate music program at the University of California, San Diego (UCSD), but quit to teach art, photography, and computer science there. He worked as an assistant professor in the Visual Arts department from 1968 until 1974. There, he presented shows about toys as works of art. Raskin announced his resignation from the assistant professorship by flying over the Chancellor's house in a hot air balloon. He was awarded a National Science Foundation grant to establish a Computer and Humanities center, which used several 16-bit Data General Nova computers and CRTs rather than the teletypes which were more common at that time. Along with his undergraduate student Jonathan (Jon) Collins, Raskin developed the FLOW programming language for use in teaching programming to the art and humanities students. The language was first used at the Humanities Summer Training Institute held in 1970 at the University of Kansas in Lawrence, Kansas. The language has only seven statements (among them GET IT, PRINT IT, PRINT "text", JUMP TO, IF IT IS " " JUMP TO, and STOP) and cannot manipulate numbers. The language was first implemented in Fortran by Collins in under a week. Later versions of the language utilized "typing amplification", in which only the first letter is typed and the computer provides the balance of the instruction, eliminating typing errors. It was also the basis for programming classes taught by Raskin and Collins in the UCSD Visual Arts Department. Raskin curated several art shows, including one featuring his collection of unusual toys and presenting toys as works of art. During this period, he changed the spelling of his name from "Jeff" to "Jef" after having met Jon Collins and liking the lack of extraneous letters. Raskin occasionally wrote for computer publications, such as Dr. Dobb's Journal. He formed a company named Bannister and Crun, which was named for two characters in the BBC radio comedy The Goon Show. Career history Apple Contractor writer Raskin first met Apple Computer co-founders Steve Jobs and Steve Wozniak in their garage workshop fo
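As a rough illustration of how small FLOW was, here is a toy interpreter in Python for a FLOW-like language. The semantics (an implicit "it" holding one input character at a time) are one plausible reading of the statements listed above, not a faithful reconstruction of the original Fortran implementation.

```python
def run_flow(program, source):
    """Interpret a tiny FLOW-like program given as {line_number: statement}.
    Semantics are a guess from the statement list above, for illustration only."""
    lines = sorted(program)
    pos, it, src = 0, "", iter(source)
    while pos < len(lines):
        stmt = program[lines[pos]]
        pos += 1
        if stmt == "GET IT":
            it = next(src, "")            # read one input character into "it"
        elif stmt == "PRINT IT":
            print(it, end="")
        elif stmt.startswith('PRINT "'):
            print(stmt[7:-1], end="")     # literal text between the quotes
        elif stmt == "STOP":
            return
        elif stmt.startswith("JUMP TO"):
            pos = lines.index(int(stmt.split()[-1]))
        elif stmt.startswith("IF IT IS"):
            _, rest = stmt.split('"', 1)
            ch, target = rest.split('"')
            if it == ch:
                pos = lines.index(int(target.split()[-1]))

# Echo input until a period is read; prints "hi there" and stops:
run_flow({10: "GET IT", 20: 'IF IT IS "." JUMP TO 50',
          30: "PRINT IT", 40: "JUMP TO 10", 50: "STOP"}, "hi there.")
```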
https://en.wikipedia.org/wiki/Variable%20bitrate
Variable bitrate (VBR) is a term used in telecommunications and computing that relates to the bitrate used in sound or video encoding. As opposed to constant bitrate (CBR), VBR files vary the amount of output data per time segment. VBR allows a higher bitrate (and therefore more storage space) to be allocated to the more complex segments of media files while less space is allocated to less complex segments. The average of these rates can be calculated to produce an average bitrate for the file. MP3, WMA and AAC audio files can optionally be encoded in VBR, while Opus and Vorbis are encoded in VBR by default. Variable bit rate encoding is also commonly used on MPEG-2 video, MPEG-4 Part 2 video (Xvid, DivX, etc.), MPEG-4 Part 10/H.264 video, Theora, Dirac and other video compression formats. Additionally, variable rate encoding is inherent in lossless compression schemes such as FLAC and Apple Lossless. Advantages and disadvantages of VBR The advantages of VBR are that it produces a better quality-to-space ratio compared to a CBR file of the same data. The bits available are used more flexibly to encode the sound or video data more accurately, with fewer bits used in less demanding passages and more bits used in difficult-to-encode passages. The disadvantages are that it may take more time to encode, as the process is more complex, and that some hardware might not be compatible with VBR files. Methods of VBR encoding Multi-pass encoding and single-pass encoding VBR is created using so-called single-pass encoding or multi-pass encoding. Single-pass encoding analyzes and encodes the data "on the fly" and it is also used in constant bitrate encoding. Single-pass encoding is used when the encoding speed is most important — e.g. for real-time encoding. Single-pass VBR encoding is usually controlled by the fixed quality setting or by the bitrate range (minimum and maximum allowed bitrate) or by the average bitrate setting. Multi-pass encoding is used when the encoding quality is most important. Multi-pass encoding cannot be used in real-time encoding, live broadcast or live streaming. Multi-pass encoding takes much longer than single-pass encoding, because every pass means one pass through the input data (usually through the whole input file). Multi-pass encoding is used only for VBR encoding, because CBR encoding doesn't offer any flexibility to change the bitrate. The most common multi-pass encoding is two-pass encoding. In the first pass of two-pass encoding, the input data is being analyzed and the result is stored in a log file. In the second pass, the collected data from the first pass is used to achieve the best encoding quality. In a video encoding, two-pass encoding is usually controlled by the average bitrate setting or by the bitrate range setting (minimal and maximal allowed bitrate) or by the target video file size setting. Bitrate range This VBR encoding method allows the user to specify a bitrate range — a minimum and/or maximu
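The two-pass idea can be sketched in a few lines: the first pass produces a per-segment complexity measure, and the second pass spends bits in proportion to it while holding the requested average. The Python below is an illustrative toy model, not any real encoder's rate-control algorithm; the names and units are assumptions.

```python
def two_pass_vbr(segment_complexity, avg_bitrate, min_rate=None, max_rate=None):
    """Toy two-pass VBR: pass 1 supplies one complexity score per time segment;
    pass 2 allocates bitrate proportional to complexity so the mean stays at
    avg_bitrate (before any clamping to an optional bitrate range)."""
    mean_c = sum(segment_complexity) / len(segment_complexity)
    rates = [avg_bitrate * c / mean_c for c in segment_complexity]
    if min_rate or max_rate:                 # optional bitrate-range constraint
        lo = min_rate or 0.0
        hi = max_rate or float("inf")
        # clamping perturbs the average; real encoders re-normalize iteratively
        rates = [min(max(r, lo), hi) for r in rates]
    return rates

# Pass 1 (the analysis pass) would have produced these scores from the input:
complexity = [0.5, 0.9, 2.4, 1.1, 0.6]             # e.g. motion/detail per segment
print(two_pass_vbr(complexity, avg_bitrate=1000))  # kbit/s per segment, mean 1000
```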
https://en.wikipedia.org/wiki/VBR
VBR may refer to: Computing Variable bitrate, in telecommunications and computing, a non-constant sound or video encoding bitrate Volume boot record, in computer disks, a type of boot sector that contains code for bootstrapping programs Vouch by Reference, a standard way for email certification providers to vouch for outbound email sent by others Other Reverse breakdown voltage, a diode characteristic in electronics
https://en.wikipedia.org/wiki/VLAN
A virtual local area network (VLAN) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer (OSI layer 2). In this context, virtual refers to a physical object recreated and altered by additional logic within the local area network. VLANs work by applying tags to network frames and handling these tags in networking systems – creating the appearance and functionality of network traffic that is physically on a single network but acts as if it were split between separate networks. In this way, VLANs can keep network applications separate despite being connected to the same physical network, and without requiring multiple sets of cabling and networking devices to be deployed. VLANs allow network administrators to group hosts together even if the hosts are not directly connected to the same network switch. Because VLAN membership can be configured through software, this can greatly simplify network design and deployment. Without VLANs, grouping hosts according to their resource needs requires the labor of relocating nodes or rewiring data links. VLANs allow devices that must be kept separate to share the cabling of a physical network and yet be prevented from directly interacting with one another. This managed sharing yields gains in simplicity, security, traffic management, and economy. For example, a VLAN can be used to separate traffic within a business based on individual users or groups of users or their roles (e.g. network administrators), or based on traffic characteristics (e.g. low-priority traffic prevented from impinging on the rest of the network's functioning). Many Internet hosting services use VLANs to separate customers' private zones from one another, allowing each customer's servers to be grouped in a single network segment no matter where the individual servers are located in the data center. Some precautions are needed to prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping. To subdivide a network into VLANs, one configures network equipment. Simpler equipment might partition only each physical port (if even that), in which case each VLAN runs over a dedicated network cable. More sophisticated devices can mark frames through VLAN tagging, so that a single interconnect (trunk) may be used to transport data for multiple VLANs. Since VLANs share bandwidth, a VLAN trunk can use link aggregation, quality-of-service prioritization, or both to route data efficiently. Uses VLANs address issues such as scalability, security, and network management. Network architects set up VLANs to provide network segmentation. Routers between VLANs filter broadcast traffic, enhance network security, perform address summarization, and mitigate network congestion. In a network utilizing broadcasts for service discovery, address assignment and resolution, and other services, as the number of peers on a network grows, the frequency of broadcasts also increases. VLANs can help manage broadcast tra
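Concretely, tagging a frame means splicing a 4-byte IEEE 802.1Q header (the 0x8100 TPID followed by a 16-bit tag control field holding priority, drop-eligible and VLAN-ID bits) between the source MAC address and the original EtherType. A minimal Python sketch, with an illustrative frame:

```python
import struct

def add_vlan_tag(frame: bytes, vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag into an untagged Ethernet frame: a 4-byte
    tag (TPID 0x8100 plus a 16-bit TCI) placed after the two MAC addresses."""
    assert 0 <= vid < 4096 and 0 <= pcp < 8
    tci = (pcp << 13) | (dei << 12) | vid      # 3-bit PCP, 1-bit DEI, 12-bit VID
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]       # after dst MAC (6) + src MAC (6)

# Toy untagged frame: broadcast dst MAC, made-up src MAC, EtherType 0x0800, payload
frame = (bytes.fromhex("ffffffffffff" "020000000001")
         + struct.pack("!H", 0x0800) + b"payload")
tagged = add_vlan_tag(frame, vid=100, pcp=5)
print(tagged.hex())   # the bytes 8100 a064 now sit between src MAC and EtherType
```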
https://en.wikipedia.org/wiki/Serial%20Line%20Internet%20Protocol
The Serial Line Internet Protocol (SLIP) is an encapsulation of the Internet Protocol designed to work over serial ports and router connections. It is documented in RFC 1055. On personal computers, SLIP has largely been replaced by the Point-to-Point Protocol (PPP), which is better engineered, has more features, and does not require its IP address configuration to be set before it is established. On microcontrollers, however, SLIP is still the preferred way of encapsulating IP packets, due to its very small overhead. Some people refer to the successful and widely used RFC 1055 Serial Line Internet Protocol as "Rick Adams' SLIP", to avoid confusion with other proposed protocols named "SLIP". Those other protocols include the much more complicated appendix D Serial Line Interface Protocol. Description SLIP modifies a standard TCP/IP datagram by: appending a special "END" byte to it, which distinguishes datagram boundaries in the byte stream; if the END byte occurs in the data to be sent, sending the two-byte sequence ESC, ESC_END instead; if the ESC byte occurs in the data, sending the two-byte sequence ESC, ESC_ESC. Variants of the protocol may begin, as well as end, packets with END. SLIP requires a serial port configuration of 8 data bits, no parity, and either EIA hardware flow control or CLOCAL mode (3-wire null-modem) UART operation settings. SLIP does not provide error detection, being reliant on upper-layer protocols for this. Therefore, SLIP on its own is not satisfactory over an error-prone dial-up connection. It is, however, still useful for testing operating systems' response capabilities under load (by looking at flood-ping statistics). SLIP escape characters were also required on some modem connections to escape the Hayes command set, thereby allowing binary data to pass through modems that would otherwise recognize some characters as commands. CSLIP A version of SLIP with header compression is called Compressed SLIP (CSLIP). The compression algorithm used in CSLIP is known as Van Jacobson TCP/IP Header Compression. CSLIP has no effect on the data payload of a packet and is independent of any compression by the serial line modem used for transmission. It reduces the Transmission Control Protocol (TCP) header from twenty bytes to seven bytes. CSLIP has no effect on User Datagram Protocol (UDP) datagrams. History RFC 1055, a "non-standard" for SLIP, traces its origins to the 3COM UNET TCP/IP implementation from the 1980s. Rick Adams added SLIP to the popular 4.2BSD in 1984 and it "quickly caught on". By the time of the RFC (1988), it is described as "commonly used on dedicated serial links and sometimes for dialup purposes". The last version of FreeBSD to include "slattach" (a command for connecting via SLIP) in the manual database is FreeBSD 7.4, released in 2011. The manual claims that auto-negotiation exists for CSLIP. The FreeBSD version is inherited from 4.3BSD. Linux formerly used the same code base for SLIP and KISS (TNC). The spl
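The framing rules above translate almost line-for-line into code. The following Python sketch uses the byte values defined in RFC 1055 (END = 0xC0, ESC = 0xDB, ESC_END = 0xDC, ESC_ESC = 0xDD) and the variant that begins as well as ends packets with END:

```python
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD   # RFC 1055 byte values

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])            # variant that also begins packets with END
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(stream: bytes) -> list:
    """Split a received byte stream back into packets."""
    packets, cur, esc = [], bytearray(), False
    for b in stream:
        if esc:                       # previous byte was ESC: unescape this one
            cur.append(END if b == ESC_END else ESC if b == ESC_ESC else b)
            esc = False
        elif b == ESC:
            esc = True
        elif b == END:
            if cur:                   # back-to-back ENDs are empty delimiters
                packets.append(bytes(cur))
                cur = bytearray()
        else:
            cur.append(b)
    return packets

pkt = bytes([0x45, END, 0x01, ESC, 0x02])
assert slip_decode(slip_encode(pkt)) == [pkt]   # round-trips correctly
```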
https://en.wikipedia.org/wiki/ALOHAnet
ALOHAnet, also known as the ALOHA System, or simply ALOHA, was a pioneering computer networking system developed at the University of Hawaii. The ALOHAnet used a new method of medium access, called ALOHA random access, and experimental ultra high frequency (UHF) for its operation. In its simplest form, later known as Pure ALOHA, remote units communicated with a base station (Menehune) over two separate radio frequencies (for inbound and outbound respectively). Nodes did not wait for the channel to be clear before sending, but instead waited for acknowledgement of successful receipt of a message, and re-sent it if this was not received. Nodes would also stop and re-transmit data if they detected any other messages while transmitting. While simple to implement, this results in an efficiency of only 18.4%. A later advancement, Slotted ALOHA, improved the efficiency of the protocol by reducing the chance of collision, improving throughput to 36.8%. ALOHAnet became operational in June 1971, providing the first public demonstration of a wireless packet data network. ALOHA was subsequently employed in the Ethernet cable based network in the 1970s, and following regulatory developments in the early 1980s it became possible to use the ALOHA random-access techniques in both Wi-Fi and in mobile telephone networks. ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling and control purposes. In the late 1980s, the European standardization group GSM who worked on the Pan-European Digital mobile communication system GSM greatly expanded the use of ALOHA channels for access to radio channels in mobile telephony. In the early 2000s additional ALOHA channels were added to 2.5G and 3G mobile phones with the widespread introduction of General Packet Radio Service (GPRS), using a slotted-ALOHA random-access channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at BBN Technologies. History One of the early computer networking designs, development of the ALOHA network was begun in September 1968 at the University of Hawaii under the leadership of Norman Abramson and Franklin Kuo, along with Thomas Gaarder, Shu Lin, Wesley Peterson and Edward ("Ned") Weldon. The goal was to use low-cost commercial radio equipment to connect users on Oahu and the other Hawaiian islands with a central time-sharing computer on the main Oahu campus. The first packet broadcasting unit went into operation in June 1971. Terminals were connected to a special purpose terminal connection unit using RS-232 at 9600 bit/s. ALOHA was originally a contrived acronym standing for Additive Links On-line Hawaii Area. The original version of ALOHA used two distinct frequencies in a hub configuration, with the hub machine broadcasting packets to everyone on the outbound channel, and the various client machines sending data packets to the hub on the inbound channel. If data was received correctly at the hub, a short acknowledgment
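The 18.4% and 36.8% figures come from the standard Poisson traffic model, in which pure ALOHA achieves throughput S = G·e^(−2G) at offered load G (a frame collides with anything sent within one frame time on either side of it), while slotted ALOHA halves the vulnerable window, giving S = G·e^(−G). The maxima are 1/(2e) and 1/e, which a few lines of Python confirm:

```python
from math import e, exp

pure    = lambda G: G * exp(-2 * G)   # pure ALOHA throughput at offered load G
slotted = lambda G: G * exp(-G)       # slotted ALOHA throughput

print(pure(0.5), 1 / (2 * e))    # both ~0.1839: the 18.4% figure (max at G = 0.5)
print(slotted(1.0), 1 / e)       # both ~0.3679: the 36.8% figure (max at G = 1.0)
```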
https://en.wikipedia.org/wiki/Hilbert%27s%20tenth%20problem
Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm that, for any given Diophantine equation (a polynomial equation with integer coefficients and a finite number of unknowns), can decide whether the equation has a solution with all unknowns taking integer values. For example, the Diophantine equation 3x² − 2xy − y²z − 7 = 0 has an integer solution: x = 1, y = 2, z = −2. By contrast, the Diophantine equation x² + y² + 1 = 0 has no such solution. Hilbert's tenth problem has been solved, and it has a negative answer: such a general algorithm cannot exist. This is the result of the combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam and Julia Robinson that spans 21 years, with Matiyasevich completing the theorem in 1970. The theorem is now known as Matiyasevich's theorem or the MRDP theorem (an initialism for the surnames of the four principal contributors to its solution). When all coefficients and variables are restricted to be positive integers, the related problem of polynomial identity testing becomes a decidable (exponentiation-free) variation of Tarski's high school algebra problem. Background Original formulation Hilbert formulated the problem as follows: Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers. The words "process" and "finite number of operations" have been taken to mean that Hilbert was asking for an algorithm. The term "rational integral" simply refers to the integers, positive, negative or zero: 0, ±1, ±2, ... . So Hilbert was asking for a general algorithm to decide whether a given polynomial Diophantine equation with integer coefficients has a solution in integers. Hilbert's problem is not concerned with finding the solutions. It only asks whether, in general, we can decide whether one or more solutions exist. The answer to this question is negative, in the sense that no "process can be devised" for answering that question. In modern terms, Hilbert's 10th problem is an undecidable problem. Although it is unlikely that Hilbert had conceived of such a possibility, before going on to list the problems, he did presciently remark: Occasionally it happens that we seek the solution under insufficient hypotheses or in an incorrect sense, and for this reason do not succeed. The problem then arises: to show the impossibility of the solution under the given hypotheses or in the sense contemplated. Proving the 10th problem undecidable is then a valid answer even in Hilbert's terms, since it is a proof about "the impossibility of the solution". Diophantine sets In a Diophantine equation, there are two kinds of variables: the parameters and the unknowns. The Diophantine set consists of the parameter assignments for w
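The asymmetry in Hilbert's problem is easy to demonstrate: searching for solutions is straightforward, but no bounded search can certify that none exist. The Python sketch below brute-forces the two example equations over a finite box (the box sizes are arbitrary choices; for x² + y² + 1 = 0 non-existence happens to be obvious by inspection, but in general no algorithm can decide it):

```python
from itertools import product

def search(f, n_vars, bound):
    """Enumerate integer tuples in [-bound, bound]^n_vars and return the first
    zero of f, if any. Finding a solution is semidecidable; failing to find
    one within a bound proves nothing, which is the heart of the negative answer."""
    for xs in product(range(-bound, bound + 1), repeat=n_vars):
        if f(*xs) == 0:
            return xs
    return None

# Finds an integer solution, e.g. (-3, -2, 2); (1, 2, -2) from the text also works:
print(search(lambda x, y, z: 3*x**2 - 2*x*y - y**2*z - 7, 3, 3))
# Returns None, since x^2 + y^2 + 1 >= 1 for all integers x, y:
print(search(lambda x, y: x**2 + y**2 + 1, 2, 100))
```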
https://en.wikipedia.org/wiki/International%20Union%20for%20Conservation%20of%20Nature
The International Union for Conservation of Nature (IUCN) is an international organization working in the field of nature conservation and sustainable use of natural resources. It is involved in data gathering and analysis, research, field projects, advocacy, and education. IUCN's mission is to "influence, encourage and assist societies throughout the world to conserve nature and to ensure that any use of natural resources is equitable and ecologically sustainable". Over the past decades, IUCN has widened its focus beyond conservation ecology and now incorporates issues related to sustainable development in its projects. IUCN does not itself aim to mobilize the public in support of nature conservation. It tries to influence the actions of governments, business and other stakeholders by providing information and advice and through building partnerships. The organization is best known to the wider public for compiling and publishing the IUCN Red List of Threatened Species, which assesses the conservation status of species worldwide. IUCN has a membership of over 1,400 governmental and non-governmental organizations. Some 16,000 scientists and experts participate in the work of IUCN commissions on a voluntary basis. It employs over 900 full-time staff in more than 50 countries. Its headquarters are in Gland, Switzerland. IUCN has observer and consultative status at the United Nations and plays a role in the implementation of several international conventions on nature conservation and biodiversity. It was involved in establishing the World Wide Fund for Nature and the World Conservation Monitoring Centre. In the past, IUCN has been criticized for placing the interests of nature over those of indigenous peoples. In recent years, its closer relations with the business sector have caused controversy. IUCN was established in 1948. It was initially called the International Union for the Protection of Nature (1948–1956) and was also formerly known as the World Conservation Union (1990–2008). History Establishment IUCN was established on 5 October 1948, in Fontainebleau, France, when representatives of governments and conservation organizations, spurred by UNESCO, signed a formal act constituting the International Union for the Protection of Nature (IUPN). The initiative to set up the new organisation came from UNESCO and especially from its first Director General, the British biologist Julian Huxley. At the time of its founding, IUCN was the only international organisation focusing on the entire spectrum of nature conservation (an international organisation for the protection of birds, now BirdLife International, had been established in 1922). Early years: 1948–1956 IUCN (International Union for Conservation of Nature) started out with 65 members in Brussels and was closely associated with UNESCO. They jointly organized the 1949 Conference on the Protection of Nature at Lake Success, US, and drafted the first list of gravely endangered species. In the
https://en.wikipedia.org/wiki/List%20of%20fictional%20computers
Computers have often been used as fictional objects in literature, movies and in other forms of media. Fictional computers may be depicted as considerably more sophisticated than anything yet devised in the real world. Fictional computers may be referred to with a made-up manufacturer's brand name and model number or a nickname. This is a list of computers that have appeared in notable works of fiction. The work may be about the computer, or the computer may be an important element of the story. Only static computers are included. Robots and other fictional computers that are described as existing in a mobile or humanlike form are discussed in a separate list of fictional robots and androids. Literature Before 1950 The Engine, a kind of mechanical information generator featured in Jonathan Swift's Gulliver's Travels. This is considered to be the first description of a fictional device that in any way resembles a computer. (1726) The Machine from E. M. Forster's short story "The Machine Stops" (1909) The Brain from Lionel Britton’s Brain: A Play of the Whole Earth (1930). The Government Machine from Miles J. Breuer's short story "Mechanocracy" (1932). The Brain from Laurence Manning's novel The Man Who Awoke (1933). The Machine City from John W. Campbell's short story "Twilight" (1934). The ship's navigation computer in "Misfit", a short story by Robert A. Heinlein (1939) The Games Machine, a vastly powerful computer that plays a major role in A. E. van Vogt's The World of Null-A (serialized in Astounding Science Fiction in 1945) The Brain, a supercomputer with a childish, human-like personality appearing in the short story "Escape!" by Isaac Asimov (1945) Joe, a "logic" (that is to say, a personal computer) in Murray Leinster's short story "A Logic Named Joe" (1946) 1950s The Machines, positronic supercomputers that manage the world in Isaac Asimov's short story "The Evitable Conflict" (1950) MARAX (MAchina RAtiocinatriX), the spaceship Kosmokrators AI in Stanisław Lem's novel The Astronauts (1951) EPICAC, in Kurt Vonnegut's Player Piano and other of his writings, EPICAC coordinates the United States economy. Named similarly to ENIAC, its name also resembles that of 'ipecac', a plant-based preparation that was used in over-the-counter poison-antidote syrups for its emetic (vomiting-inducing) properties. (1952) EMSIAC, in Bernard Wolfe's Limbo, the war computer in World War III. (1952) Vast anonymous computing machinery possessed by the Overlords, an alien race who administer Earth while the human population merges with the Overmind. Described in Arthur C. Clarke's novel Childhood's End. (1953) The Prime Radiant, Hari Seldon's desktop on Trantor in Second Foundation by Isaac Asimov (1953) Mark V, a computer used by monks at a Tibetan lamasery to encode all the possible names of God which resulted in the end of the universe in Arthur C. Clarke's short story "The Nine Billion Names of God" (1953) Karl, a computer (named for
https://en.wikipedia.org/wiki/Network%20%281976%20film%29
Network is a 1976 American satirical drama film produced by Metro-Goldwyn-Mayer, released by United Artists, written by Paddy Chayefsky and directed by Sidney Lumet. It is about a fictional television network, the Union Broadcasting System (UBS; sometimes referred to as "UBS-TV"), and its struggle with poor ratings. The film stars Faye Dunaway, William Holden, Peter Finch (in his final film role), Robert Duvall, Wesley Addy, Ned Beatty, and Beatrice Straight. Network received widespread critical acclaim, with particular praise for the screenplay and performances. The film was a commercial success, with nine Oscar nominations at the 49th Academy Awards, including Best Picture, that led to four wins: Best Actor (Finch), Best Actress (Dunaway), Best Supporting Actress (Straight), and Best Original Screenplay. In 2000, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant". In 2002, it was inducted into the Producers Guild of America Hall of Fame as a film that has "set an enduring standard for American entertainment". Widely considered one of the greatest screenplays of all time, the two Writers Guilds of America in 2005 voted Chayefsky's script one of the 10 greatest screenplays in the history of cinema. Dunaway (Chinatown, 1974), Duvall (The Godfather, 1972) and Holden (Sunset Boulevard, 1950) appear in one other film each from the same WGA top ten list. In 2007, the film was 64th among the 100 greatest American films as chosen by the American Film Institute, a ranking slightly higher than the one AFI had given it ten years earlier. Plot Howard Beale, the longtime anchor for the UBS Evening News, learns from friend and news division president Max Schumacher that he has just two more weeks on the air because of declining ratings. The following night, Beale announces to his audience that he will commit suicide on next Tuesday's newscast. UBS tries to immediately fire Beale, but Schumacher intervenes so that he can have a dignified farewell. Beale promises to apologize for his outburst, but once on the air, he launches into a rant about life being "bullshit." Beale's outburst causes ratings to spike, and much to Schumacher's dismay, UBS executives decide to exploit the situation. When Beale's ratings soon top out, programming chief Diana Christensen reaches out to Schumacher with an offer to help "develop" Beale's show. He declines the professional proposal but accepts her more personal pitch; the two begin an affair. When Schumacher decides to end Beale's "angry man" format, Christensen persuades her boss, Frank Hackett, to slot the evening news show under the entertainment division so she can develop it. Hackett bullies UBS executives to consent and fire Schumacher. In one impassioned diatribe, Beale galvanizes the nation, persuading viewers to shout, "I'm as mad as hell, and I'm not going to take this anymore!" from their
https://en.wikipedia.org/wiki/Presentation%20layer
In the seven-layer OSI model of computer networking, the presentation layer is layer 6 and serves as the data translator for the network. It is sometimes called the syntax layer. Description Within the service layering semantics of the OSI network architecture, the presentation layer responds to service requests from the application layer and issues service requests to the session layer through a unique presentation service access point (PSAP). The presentation layer ensures the information that the application layer of one system sends out is readable by the application layer of another system. On the sending system it is responsible for conversion to standard, transmittable formats. On the receiving system it is responsible for the translation, formatting, and delivery of information for processing or display. In theory, it relieves application layer protocols of concern regarding syntactical differences in data representation within the end-user systems. An example of a presentation service would be the conversion of an extended binary coded decimal interchange code (EBCDIC-coded) text computer file to an ASCII-coded file. If necessary, the presentation layer might be able to translate between multiple data formats using a common format. In many widely used applications and protocols, no distinction is actually made between the presentation and application layers. For example, HyperText Transfer Protocol (HTTP), generally regarded as an application-layer protocol, has presentation-layer aspects such as the ability to identify character encoding for proper conversion, which is then done in the application layer. The presentation layer is the lowest layer at which application programmers consider data structure and presentation, instead of simply sending data in the form of datagrams or packets between hosts. This layer deals with issues of string representation: whether strings use the Pascal method (an integer length field followed by the specified amount of bytes) or the C/C++ method (null-terminated strings, e.g. "thisisastring\0"). The idea is that the application layer should be able to point at the data to be moved, and the presentation layer will translate this to commands able to be understood by other applications and processes. Serialization of complex data structures into flat byte-strings (using mechanisms such as TLV, XML or JSON) can be thought of as the key functionality of the presentation layer. Structure representation is normally standardized at this level, often by using XML or JSON. As well as simple pieces of data, like strings, more complicated things are standardized in this layer. Two common examples are 'objects' in object-oriented programming, and the exact way that streaming video is transmitted. Encryption and decryption are typically done at this level too, although they can be done on the application, session, transport, or network layers, each having its own advantages and disadvantages. For example, when loggi
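A small Python sketch makes the representation issues above concrete: the same byte string rendered Pascal-style (length prefix), C-style (null terminator), and as a TLV record of the kind a presentation layer might serialize. The 16-bit length fields and the tag value are arbitrary illustrative choices, not any particular protocol's wire format:

```python
import struct

def pascal_string(s: bytes) -> bytes:
    return struct.pack("!H", len(s)) + s     # length prefix, then the bytes

def c_string(s: bytes) -> bytes:
    assert 0 not in s                        # NUL cannot appear in the data itself
    return s + b"\x00"                       # null terminator marks the end

def tlv(tag: int, value: bytes) -> bytes:
    """Type-Length-Value record: 1-byte tag, 2-byte length, then the value."""
    return struct.pack("!BH", tag, len(value)) + value

msg = b"thisisastring"
print(pascal_string(msg).hex())   # 000d 7468...
print(c_string(msg).hex())        # 7468...00
print(tlv(0x01, msg).hex())       # 01 000d 7468...
```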
https://en.wikipedia.org/wiki/Bo%20Jangeborg
Bo Jangeborg is a Swedish computer programmer. He made several programs for the ZX Spectrum, the best known being the game Fairlight (1985), its sequel Fairlight II (1986), and the graphic tool The Artist. He also wrote Flash!, the art package provided with every SAM Coupé. Today he runs the small software company Softwave. References External links Softography Bo Jangeborg interview from issue 6 of The ZX Files, 1998. Bo Jangeborg interview from a Swedish newspaper, 2015.
https://en.wikipedia.org/wiki/SuperH
SuperH (or SH) is a 32-bit reduced instruction set computing (RISC) instruction set architecture (ISA) developed by Hitachi and currently produced by Renesas. It is implemented by microcontrollers and microprocessors for embedded systems. At the time of introduction, SuperH was notable for having fixed-length 16-bit instructions in spite of its 32-bit architecture. Using smaller instructions had consequences: the register file was smaller and instructions generally used a two-operand format. However, for the market the SuperH was aimed at, this was a small price to pay for the improved memory and processor cache efficiency. Later versions of the design, starting with SH-5, included both 16- and 32-bit instructions, with the 16-bit versions mapping onto the 32-bit version inside the CPU. This allowed the machine code to continue using the shorter instructions to save memory, while not demanding the amount of instruction decoding logic needed if they were completely separate instructions. This concept is now known as a compressed instruction set and is also used by other companies, the most notable example being ARM for its Thumb instruction set. In 2015, many of the original patents for the SuperH architecture expired and the SH-2 CPU was reimplemented as open source hardware under the name J2. History SH-1 and SH-2 The SuperH processor core family was first developed by Hitachi in the early 1990s. The design concept was for a single instruction set (ISA) that would be upward compatible across a series of CPU cores. In the past, this sort of design problem would have been solved using microcode, with the low-end models in the series performing non-implemented instructions as a series of more basic instructions. For instance, a "long multiply" (multiplying two 32-bit registers to produce a 64-bit product) might be implemented in hardware on high-end models but instead be performed as a series of additions on low-end models. One of the key realizations during the development of the RISC concept was that the microcode had a finite decoding time, and as processors became faster, this represented an unacceptable performance overhead. To address this, Hitachi instead developed a single ISA for the entire line, with unsupported instructions causing traps on those implementations that didn't include hardware support. For instance, the initial models in the line, the SH-1 and SH-2, differed only in their support for 64-bit multiplication; the SH-2 supported the 64-bit multiply instructions in hardware, whereas the SH-1 would cause a trap if they were encountered. The SH-1 was the basic model, supporting a total of 56 instructions. The SH-2 added 64-bit multiplication and a few additional commands for branching and other duties, bringing the total to 62 supported instructions. The SH-1 and the SH-2 were used in the Sega Saturn, Sega 32X and Capcom CPS-3. The ISA uses 16-bit instructions for better code density than 32-bit instructions, which was important at the time due to the high co
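The fixed 16-bit width explains the two-operand format: once a major and a minor opcode nibble are spent, there is room for only two 4-bit register fields, so the destination register must double as a source. A hedged Python sketch of this field split follows; the nibble layout reflects the common SH pattern, but treat any concrete encoding as illustrative rather than authoritative.

```python
def decode_two_operand(word: int):
    """Split a 16-bit instruction word into the nibble fields used by many
    SH-style two-operand instructions: 4-bit major opcode, two 4-bit register
    numbers, 4-bit minor opcode. Layout is illustrative, not a reference decoder."""
    major = (word >> 12) & 0xF
    rn    = (word >> 8) & 0xF    # destination register (also a source operand)
    rm    = (word >> 4) & 0xF    # source register
    minor = word & 0xF
    return major, rn, rm, minor

# Fields of the hypothetical 16-bit word 0x310C: major=3, Rn=1, Rm=0, minor=0xC,
# i.e. some operation of the form "op R0, R1" with R1 as both source and destination.
print(decode_two_operand(0x310C))
```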