https://en.wikipedia.org/wiki/PC%20Card
PC Card is a parallel peripheral interface for laptop computers and PDAs. The Personal Computer Memory Card International Association (PCMCIA) originally introduced the 16-bit ISA-based PCMCIA Card in 1990, but renamed it PC Card in March 1995 to avoid confusion with the name of the organization. The CardBus PC Card was introduced as a 32-bit version of the original PC Card, based on the PCI specification. CardBus slots are backwards compatible with the original 16-bit cards, but older slots are not forward compatible with the newer CardBus cards. Although originally designed as a standard for memory-expansion cards for computer storage, the existence of a usable general standard for notebook peripherals led to the development of many kinds of devices including network cards, modems, and hard disks. The PC Card port has been superseded by the ExpressCard interface, introduced in 2003, which was also initially developed by the PCMCIA. The organization dissolved in 2009, with its assets merged into the USB Implementers Forum.
History
The PCMCIA 1.0 card standard was published by the Personal Computer Memory Card International Association in November 1990 and was soon adopted by more than eighty vendors. It corresponds with the Japanese JEIDA memory card 4.0 standard. SanDisk (operating at the time as "SunDisk") launched its PCMCIA card in October 1992. The company was the first to introduce a writeable Flash RAM card for the HP 95LX (an early MS-DOS pocket computer). These cards conformed to a supplemental PCMCIA-ATA standard that allowed them to appear as more conventional IDE hard drives to the 95LX or a PC. This had the advantage of raising the upper limit on capacity to the full 32 MB available under DOS 3.22 on the 95LX. New Media Corporation was one of the first companies established for the express purpose of manufacturing PC Cards; it became a major OEM for laptop manufacturers such as Toshiba and Compaq for PC Card products. It soon became clear that the PCMCIA card standard needed expansion to support "smart" I/O cards, to address the emerging need for fax, modem, LAN, hard disk and floppy disk cards. It also needed interrupt facilities and hot plugging, which required the definition of new BIOS and operating system interfaces. This led to the introduction of release 2.0 of the PCMCIA standard and JEIDA 4.1 in September 1991, which saw corrections and expansion with Card Services (CS) in the PCMCIA 2.1 standard in November 1992. To recognize the increased scope beyond memory, and to aid in marketing, the association acquired the rights to the simpler term "PC Card" from IBM. This was the name of the standard from version 2 of the specification onwards. These cards were used for wireless networks, modems, and other functions in notebook PCs. After the release of the PCIe-based ExpressCard in 2003, laptop manufacturers started to fit ExpressCard slots to new laptops instead of PC Card slots.
Form factors
All PC Card devices use a similar sized package whic
https://en.wikipedia.org/wiki/Genera%20%28operating%20system%29
Genera is a commercial operating system and integrated development environment for Lisp machines created by Symbolics. It is essentially a fork of an earlier operating system originating on the Massachusetts Institute of Technology (MIT) AI Lab's Lisp machines, which Symbolics had used in common with Lisp Machines, Inc. (LMI), and Texas Instruments (TI). Genera was also sold by Symbolics as Open Genera, which runs Genera on computers based on a Digital Equipment Corporation (DEC) Alpha processor using Tru64 UNIX. In 2021 a new version was released as Portable Genera, which runs on DEC Alpha Tru64 UNIX, x86-64 and Arm64 Linux, and x86-64 and Apple Silicon M Series macOS. It is released and licensed as proprietary software. Genera is an example of an object-oriented operating system based on the programming language Lisp. Genera supports incremental and interactive development of complex software using a mix of programming styles, with extensive support for object-oriented programming.
MIT's Lisp machine operating system
The Lisp Machine operating system was written in Lisp Machine Lisp. It ran on a one-user workstation initially targeted at software developers for artificial intelligence (AI) projects. The system had a large bitmap screen, a mouse, a keyboard, a network interface, a disk drive, and slots for expansion. The operating system supported this hardware and provided (among other things):
code for a frontend processor
means to boot the operating system
virtual memory management
garbage collection
interfaces to various hardware: mouse, keyboard, bitmap frame buffer, disk, printer, network interface
an interpreter and a native code compiler for Lisp Machine Lisp
an object system: Flavors
a graphical user interface (GUI) window system and window manager
a local file system
support for the Chaosnet (CHAOS) network
an Emacs-like editor named Zmacs
a mail program named Zmail
a Lisp listener
a debugger
This was already a complete one-user Lisp-based operating system and development environment. The MIT Lisp machine operating system was developed from the middle 1970s to the early 1980s. In 2006, the source code for this Lisp machine operating system from MIT was released as free and open-source software.
Genera operating system
Symbolics developed new Lisp machines and published the operating system under the name Genera. The latest version is 8.5. Symbolics Genera was developed from the early 1980s to the early 1990s. In the final years, development entailed mostly patches, with very little new functionality. Symbolics developed Genera based on this foundation of the MIT Lisp machine operating system. It sold the operating system and layered software. Some of the layered software has been integrated into Genera in later releases. Symbolics improved the operating system software from the original MIT Lisp machine and expanded it. The Genera operating system was only available for Symbolics Lisp machines and the Open Genera virtual machi
https://en.wikipedia.org/wiki/Param
Param may refer to:
PARAM, a series of Indian supercomputers
Param (company), a video game developer
Param, Iran, a village in East Azerbaijan Province, Iran
Param, Mazandaran, a village in Mazandaran Province, Iran
Param, Federated States of Micronesia, a municipality
Param, Rampur, India, a village
an abbreviation for parameter
See also
Para (disambiguation)
https://en.wikipedia.org/wiki/American%20Broadcasting%20Company
The American Broadcasting Company (ABC) is an American commercial broadcast television network. It is the flagship property of the Disney Entertainment division of The Walt Disney Company. The network is headquartered in Burbank, California, on Riverside Drive, directly across the street from Walt Disney Studios and adjacent to the Roy E. Disney Animation Building. The network's secondary offices, and headquarters of its news division, are in New York City, at its broadcast center at 77 West 66th Street on the Upper West Side of Manhattan. Since 2007, when ABC Radio (also known as Cumulus Media Networks) was sold to Citadel Broadcasting, ABC has reduced its broadcasting operations almost exclusively to television. The youngest of the "Big Three" U.S. television networks, the network is sometimes referred to as the Alphabet Network, as its initialism also represents the first three letters of the English alphabet in order. ABC launched as a radio network in 1943, as the successor to the NBC Blue Network, which had been purchased by Edward J. Noble. It extended its operations to television in 1948, following in the footsteps of established broadcast networks CBS and NBC, as well as the lesser-known DuMont. In the mid-1950s, ABC merged with United Paramount Theatres (UPT), a chain of movie theaters that formerly operated as a subsidiary of Paramount Pictures. Leonard Goldenson, who had been the head of UPT, made the new television network profitable by helping develop and greenlighting many successful series. In the 1980s, after purchasing an 80 percent interest in cable sports channel ESPN, the network's corporate parent, American Broadcasting Companies, Inc., merged with Capital Cities Communications, owner of several print publications, and television and radio stations. Most of Capital Cities/ABC's assets were purchased by Disney in 1996. ABC has eight owned-and-operated and more than 230 affiliated television stations throughout the United States and its territories. Some ABC-affiliated stations can also be seen in Canada via pay-television providers, and certain other affiliates can also be received over-the-air in areas near the Canada–United States border, although most of its prime time programming is subject to simultaneous substitution regulations for pay television providers imposed by the Canadian Radio-television and Telecommunications Commission (CRTC) to protect rights held by domestically based networks. ABC News provides news and features content for select radio stations owned by Cumulus Media, as these stations are former ABC Radio properties. History In 1927, NBC operated a radio network called the NBC Blue Network. It would later become an independent radio (and, eventually, television) network known as the American Broadcasting Company (ABC) in 1943. ABC later joined United Paramount Theatres forming American Broadcasting-Paramount Theatres (later American Broadcasting Companies, Inc.). After its venture into radio and t
https://en.wikipedia.org/wiki/CNN
The Cable News Network (CNN) is a multinational news channel and website headquartered in Atlanta, Georgia, U.S. Founded in 1980 by American media proprietor Ted Turner and Reese Schonfeld as a 24-hour cable news channel, and presently owned by the Manhattan-based media conglomerate Warner Bros. Discovery (WBD), CNN was the first television channel to provide 24-hour news coverage and the first all-news television channel in the United States. As of February 2023, CNN had 80 million television households as subscribers in the US. According to Nielsen, in June 2021 CNN ranked third in viewership among cable news networks, behind Fox News and MSNBC, averaging 580,000 viewers throughout the day, down 49% from a year earlier, amid sharp declines in viewers across all cable news networks. While CNN ranked 14th among all basic cable networks in 2019, then jumped to 7th during a major surge for the three largest cable news networks (completing a rankings streak of Fox News at number 5 and MSNBC at number 6 for that year), it settled back to number 11 in 2021 and had further declined to number 21 in 2022. Globally, CNN programming has aired through CNN International, seen by viewers in over 212 countries and territories; since May 2019, however, the US domestic version has absorbed international news coverage in order to reduce programming costs. The American version, sometimes referred to as CNN (US), is also available in Canada, and some islands in the Caribbean. CNN also broadcasts in India where it is called CNN-News18, and in Japan, where it was first broadcast on CNNj in 2003, with simultaneous translation in Japanese. History The Cable News Network launched at 5:00 p.m. Eastern Time on June 1, 1980. After an introduction by Ted Turner, the husband and wife team of David Walker and Lois Hart anchored the channel's first newscast. Burt Reinhardt, the executive vice president of CNN, hired most of the channel's first 200 employees, including the network's first news anchor, Bernard Shaw. Since its debut, CNN has expanded its reach to several cable and satellite television providers, websites, and specialized closed-circuit channels (such as CNN Airport). The company has 42 bureaus (12 domestic, 31 international), more than 900 affiliated local stations (which also receive news and features content via the video newswire service CNN Newsource), and several regional and foreign-language networks around the world. The channel's success made a bona-fide mogul of founder Ted Turner and set the stage for conglomerate Time Warner's (later WarnerMedia which merged with Discovery Inc. forming Warner Bros. Discovery) eventual acquisition of the Turner Broadcasting System in 1996. Programming CNN's current weekday schedule consists mostly of rolling news programming during daytime hours, followed by in-depth news and information programs during the evening and prime time hours. The network's morning programming consists of Early Start, an early-morning
https://en.wikipedia.org/wiki/Scope%20%28computer%20science%29
In computer programming, the scope of a name binding (an association of a name to an entity, such as a variable) is the part of a program where the name binding is valid; that is, where the name can be used to refer to the entity. In other parts of the program, the name may refer to a different entity (it may have a different binding), or to nothing at all (it may be unbound). Scope helps prevent name collisions by allowing the same name to refer to different objects – as long as the names have separate scopes. The scope of a name binding is also known as the visibility of an entity, particularly in older or more technical literature—this is from the perspective of the referenced entity, not the referencing name. The term "scope" is also used to refer to the set of all name bindings that are valid within a part of a program or at a given point in a program, which is more correctly referred to as context or environment. Strictly speaking and in practice for most programming languages, "part of a program" refers to a portion of source code (area of text), and is known as lexical scope. In some languages, however, "part of a program" refers to a portion of run time (time period during execution), and is known as dynamic scope. Both of these terms are somewhat misleading—they misuse technical terms, as discussed in the definition—but the distinction itself is accurate and precise, and these are the standard respective terms. Lexical scope is the main focus of this article, with dynamic scope understood by contrast with lexical scope. In most cases, name resolution based on lexical scope is relatively straightforward to use and to implement, as, in use, one can read backwards in the source code to determine to which entity a name refers, and, in implementation, one can maintain a list of names and contexts when compiling or interpreting a program. Difficulties arise in name masking, forward declarations, and hoisting, while considerably subtler ones arise with non-local variables, particularly in closures.
Definition
The strict definition of the (lexical) "scope" of a name (identifier) is unambiguous: lexical scope is "the portion of source code in which a binding of a name with an entity applies". This is virtually unchanged from its 1960 definition in the specification of ALGOL 60. Representative language specifications follow:
ALGOL 60 (1960)
The following kinds of quantities are distinguished: simple variables, arrays, labels, switches, and procedures. The scope of a quantity is the set of statements and expressions in which the declaration of the identifier associated with that quantity is valid.
C (2007)
An identifier can denote an object; a function; a tag or a member of a structure, union, or enumeration; a typedef name; a label name; a macro name; or a macro parameter. The same identifier can denote different entities at different points in the program. [...] For each different entity that an identifier designates, the identifier is vis
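As a minimal illustration of lexical scope and name masking, consider the following Python sketch (the variable and function names here are arbitrary): an inner binding of x masks the outer one, and a nested function resolves x by its position in the source text, not by who calls it.

x = "global"          # binding of x in the module (global) scope

def outer():
    x = "outer"       # a new binding of x; masks the global x inside outer

    def inner():
        # Lexical scope: this x resolves to the binding in outer, the
        # nearest enclosing block in the source text, regardless of
        # where inner is eventually called from.
        return x

    return inner

f = outer()
print(f())    # prints "outer", not "global"
print(x)      # prints "global"; the binding made in outer is out of scope here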
https://en.wikipedia.org/wiki/Klez
Klez is a computer worm that propagates via e-mail. It first appeared in October 2001. A number of variants of the worm exist. Klez infects Microsoft Windows systems, exploiting a vulnerability in Internet Explorer's Trident layout engine, used by both Microsoft Outlook and Outlook Express to render HTML mail. The e-mail through which the worm spreads always includes a text portion and one or more attachments. The text portion consists of either an HTML internal frame tag which causes buggy e-mail clients to automatically execute the worm, or a few lines of text that attempt to induce the recipient to execute the worm by opening the attachment (sometimes by claiming that the attachment is a patch from Microsoft; sometimes by claiming that the attachment is an antidote for the Klez worm). The first attachment is always the worm, whose internals vary. Once the worm is executed, either automatically by the buggy HTML engine or manually by a user, it searches for addresses to send itself to. When it sends itself out, it may attach a file from the infected machine, leading to possible privacy breaches. Later variants of the worm would use a false From address, picking an e-mail address at random from the infected machine's Outlook or Outlook Express address book, making it impossible for casual observers to determine which machine is infected, and making it difficult for experts to determine anything more than the infected machine's Internet Service Provider.
See also
Timeline of computer viruses and worms
Comparison of computer viruses
Computer viruses
External links
Anti-virus provider F-Secure Klez information
Anti-virus provider Trend Micro Klez information
Anti-virus provider Symantec Klez information
AUSCERT External Security Bulletin, ESB-2001.456, "Malicious software report W32/KLEZ", 29 October 2001.
https://en.wikipedia.org/wiki/Division
Division or divider may refer to:
Mathematics
Division (mathematics), the inverse of multiplication
Division algorithm, a method for computing the result of mathematical division
Military
Division (military), a formation typically consisting of 10,000 to 25,000 troops
Divizion, a subunit in some militaries
Division (naval), a collection of warships
Science
Cell division, the process in which biological cells multiply
Continental divide, the geographical term for separation between watersheds
Division (biology), used differently in botany and zoology
Division (botany), a taxonomic rank for plants or fungi, equivalent to phylum in zoology
Division (horticulture), a method of vegetative plant propagation, or the plants created by using this method
Division, a medical/surgical operation involving cutting and separation, see ICD-10 Procedure Coding System
Technology
Beam compass, a compass with a beam and sliding sockets for drawing and dividing circles larger than those made by a regular pair of compasses
Divider caliper or compass, a caliper
Frequency divider, a circuit that divides the frequency of a clock signal
Society
Administrative division, territory into which a country is divided
Census division, an official term in Canada and the United States
Diairesis, Plato's method of definition by division
Division (business), a distinct part of a business entity; the primary business remains legally responsible for all of the obligations and debts of the division
Division (political geography), a name for a subsidiary state or prefecture of a country
Division (sport), a group of teams in organised sport who compete for a divisional title
In parliamentary procedure:
Division of the assembly, a type of formally recorded vote by assembly members
Division of a question, to split a question into two or more questions
Partition (politics), the process of changing national borders or separating political entities
Police division, a large territorial unit of the British police
Places
Division station (CTA North Side Main Line), a station on the Chicago Transit Authority's North Side Main Line
Division station (CTA Blue Line), a station on the Chicago Transit Authority's 'L' system, serving the Blue Line
Division Mountain, on the Continental Divide along the Alberta - British Columbia border of Canada
Division Range, Humboldt County, Nevada
Music
Division (10 Years album), 2008
Division (The Gazette album), 2012
Divisions (album), by Starset, 2019
Division (music), a type of ornamentation or variation found in early music
Divider, as in Schenkerian music analysis, a consonant subdivision of a consonant interval
"Division", a song by Aly & AJ from Insomniatic, 2007
"Divider", a song by Scott Weiland from the album 12 Bar Blues, 1998
Other uses
Divider, a central reservation in Bangladesh
Division of the field, a concept in heraldry
Division (logical fallacy), when one reasons logically that something true of a thing must also
https://en.wikipedia.org/wiki/Backus%E2%80%93Naur%20form
In computer science, Backus–Naur form or Backus normal form (BNF) is a metasyntax notation for context-free grammars, often used to describe the syntax of languages used in computing, such as computer programming languages, document formats, instruction sets and communication protocols. It is applied wherever exact descriptions of languages are needed: for instance, in official language specifications, in manuals, and in textbooks on programming language theory. Many extensions and variants of the original Backus–Naur notation are used; some are exactly defined, including extended Backus–Naur form (EBNF) and augmented Backus–Naur form (ABNF).
Overview
A BNF specification is a set of derivation rules, written as
<symbol> ::= __expression__
where:
<symbol> is a nonterminal variable that is always enclosed between the pair <>.
::= means that the symbol on the left must be replaced with the expression on the right.
__expression__ consists of one or more sequences of either terminal or nonterminal symbols, where each sequence is separated by a vertical bar "|" indicating a choice, the whole being a possible substitution for the symbol on the left.
Example
As an example, consider this possible BNF for a U.S. postal address:
<postal-address> ::= <name-part> <street-address> <zip-part>
<name-part> ::= <personal-part> <last-name> <opt-suffix-part> <EOL> | <personal-part> <name-part>
<personal-part> ::= <initial> "." | <first-name>
<street-address> ::= <house-num> <street-name> <opt-apt-num> <EOL>
<zip-part> ::= <town-name> "," <state-code> <ZIP-code> <EOL>
<opt-suffix-part> ::= "Sr." | "Jr." | <roman-numeral> | ""
<opt-apt-num> ::= <apt-num> | ""
This translates into English as:
A postal address consists of a name-part, followed by a street-address part, followed by a zip-code part.
A name-part consists of either: a personal-part followed by a last name followed by an optional suffix (Jr., Sr., or dynastic number) and end-of-line, or a personal part followed by a name part (this rule illustrates the use of recursion in BNFs, covering the case of people who use multiple first and middle names and initials).
A personal-part consists of either a first name or an initial followed by a dot.
A street address consists of a house number, followed by a street name, followed by an optional apartment specifier, followed by an end-of-line.
A zip-part consists of a town-name, followed by a comma, followed by a state code, followed by a ZIP-code followed by an end-of-line.
An opt-suffix-part consists of a suffix, such as "Sr.", "Jr." or a roman-numeral, or an empty string (i.e. nothing).
An opt-apt-num consists of an apartment number or an empty string (i.e. nothing).
Note that many things (such as the format of a first-name, apartment number, ZIP-code, and Roman numeral) are left unspecified here. If necessary, they may be described using additional BNF rules.
History
The idea of describing the structure of language using r
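To make the notion of a derivation rule concrete, here is a small Python sketch using a simplified fragment of the postal-address grammar above. Representing the grammar as a dictionary of alternatives is an assumption of this sketch, not part of BNF itself; it simply makes the "replace the symbol on the left with one alternative from the right" step mechanical.

# A fragment of the postal-address grammar, as data. Each nonterminal
# maps to a list of alternatives (the "|" choices); each alternative is
# a sequence of terminals (plain strings) and nonterminals (<...>).
grammar = {
    "<personal-part>": [["<initial>", "."], ["<first-name>"]],
    "<initial>": [["J"]],
    "<first-name>": [["John"]],
}

def derive(symbol, choose=lambda alts: alts[0]):
    # Expand a symbol to a terminal string by repeated substitution.
    if symbol not in grammar:       # terminal: already a literal string
        return symbol
    alternative = choose(grammar[symbol])   # pick one alternative
    return " ".join(derive(s, choose) for s in alternative)

print(derive("<personal-part>"))                   # "J ."
print(derive("<personal-part>", lambda a: a[-1]))  # "John"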
https://en.wikipedia.org/wiki/Sircam
Sircam is a computer worm that first propagated in 2001 by e-mail on Microsoft Windows systems. It affected computers running Windows 95, Windows 98, and Windows Me (Millennium). Each e-mail began with one of the following lines of text and had an attachment consisting of the worm's executable with some file from the infected computer appended:
I send you this file in order to have your advice
I hope you like the file that I sent you
I hope you can help me with this file that I send
This is the file with the information you ask for
Te mando este archivo para que me des tu punto de vista
Espero te guste este archivo que te mando
Espero me puedas ayudar con el archivo que te mando
Este es el archivo con la informacion que me pediste
Due to an error in the worm, the message was rarely sent in any form other than "I send you this file in order to have your advice." This subsequently became an in-joke among those who were using the Internet at the time, and were spammed with e-mails containing this string sent by the worm. Sircam was notable during its outbreak for the way it distributed itself. Document files (usually .doc or .xls) on the infected computer were chosen at random, infected with the virus and emailed out to email addresses in the host's address book. Opening the infected file resulted in infection of the target computer. During the outbreak, many personal or private files were emailed to people who otherwise should not have received them. It could also spread via open shares on a network. Sircam scanned the network for computers with shared drives and copied itself to a machine with an open (non-password protected) drive or directory. A simple RPC (Remote Procedure Call) was then executed to start the process on the target machine, usually unknown to the owner of the now-compromised computer. Over a year after the initial 2001 outbreak, Sircam was still in the top 10 on virus charts.
See also
Timeline of computer viruses and worms
https://en.wikipedia.org/wiki/Closure%20%28computer%20programming%29
In programming languages, a closure, also lexical closure or function closure, is a technique for implementing lexically scoped name binding in a language with first-class functions. Operationally, a closure is a record storing a function together with an environment. The environment is a mapping associating each free variable of the function (variables that are used locally, but defined in an enclosing scope) with the value or reference to which the name was bound when the closure was created. Unlike a plain function, a closure allows the function to access those captured variables through the closure's copies of their values or references, even when the function is invoked outside their scope.
History and etymology
The concept of closures was developed in the 1960s for the mechanical evaluation of expressions in the λ-calculus and was first fully implemented in 1970 as a language feature in the PAL programming language to support lexically scoped first-class functions. Peter Landin defined the term closure in 1964 as having an environment part and a control part as used by his SECD machine for evaluating expressions. Joel Moses credits Landin with introducing the term closure to refer to a lambda expression with open bindings (free variables) that have been closed by (or bound in) the lexical environment, resulting in a closed expression, or closure. This use was subsequently adopted by Sussman and Steele when they defined Scheme in 1975, a lexically scoped variant of Lisp, and became widespread. Sussman and Abelson also use the term closure in the 1980s with a second, unrelated meaning: the property of an operator that adds data to a data structure to also be able to add nested data structures. This use of the term comes from mathematics use, rather than the prior use in computer science. The authors consider this overlap in terminology to be "unfortunate."
Anonymous functions
The term closure is often used as a synonym for anonymous function, though strictly, an anonymous function is a function literal without a name, while a closure is an instance of a function, a value, whose non-local variables have been bound either to values or to storage locations (depending on the language; see the lexical environment section below). For example, in the following Python code:

def f(x):
    def g(y):
        return x + y
    return g  # Return a closure.

def h(x):
    return lambda y: x + y  # Return a closure.

# Assigning specific closures to variables.
a = f(1)
b = h(1)

# Using the closures stored in variables.
assert a(5) == 6
assert b(5) == 6

# Using closures without binding them to variables first.
assert f(1)(5) == 6  # f(1) is the closure.
assert h(1)(5) == 6  # h(1) is the closure.

the values of a and b are closures, in both cases produced by returning a nested function with a free variable from the enclosing function, so that the free variable binds to the value of parameter x of the enclosing function. The closures in a and b are f
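Because a closure can capture a storage location rather than a copy of a value, it can also retain and mutate state across calls. A small sketch in the same spirit as the code above, using a hypothetical make_counter helper (a common idiom, shown here only as an illustration):

def make_counter():
    count = 0            # free variable captured by the closure below
    def increment():
        nonlocal count   # rebind the captured storage location, not a copy
        count += 1
        return count
    return increment

counter = make_counter()
assert counter() == 1
assert counter() == 2    # state persists in the closure's environment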
https://en.wikipedia.org/wiki/RSX-11
RSX-11 is a discontinued family of multi-user real-time operating systems for PDP-11 computers created by Digital Equipment Corporation. In widespread use through the late 1970s and early 1980s, RSX-11 was influential in the development of later operating systems such as VMS and Windows NT. As the original Real-Time System Executive name suggests, RSX was designed (and commonly used) for real-time applications, with process control a major use. It was also popular for program development and general computing.
History
Name and origins
RSX-11 began as a port to the PDP-11 architecture of the earlier RSX-15 operating system for the PDP-15 minicomputer, first released in 1971. The main architect for RSX-15 (later renamed XVM/RSX) was Dennis "Dan" Brevik. Commenting on the RSX acronym, Brevik says:
RSX-11D and IAS
The porting effort first produced small paper-tape-based real-time executives (RSX-11A, RSX-11C) which later gained limited support for disks (RSX-11B). RSX-11B then evolved into the fully fledged RSX-11D disk-based operating system, which first appeared on the PDP-11/40 and PDP-11/45 in early 1973. The project leader for RSX-11D up to version 4 was Henry Krejci. While RSX-11D was being completed, Digital set out to adapt it for a small memory footprint, giving birth to RSX-11M, first released in 1973. From 1971 to 1976, the RSX-11M project was spearheaded by noted operating system designer Dave Cutler, then on his first project. Principles first tried in RSX-11M appear also in later designs led by Cutler: DEC's VMS and Microsoft's Windows NT. Under the direction of Ron McLean a derivative of RSX-11M, called RSX-20F, was developed to run on the PDP-11/40 front-end processor for the KL10 PDP-10 CPU. Meanwhile, RSX-11D saw further developments: under the direction of Garth Wolfendale (project leader 1972–1976) the system was redesigned and saw its first commercial release. Support for the 22-bit PDP-11/70 system was added. Wolfendale, originally from the UK, also set up the team that designed and prototyped the Interactive Application System (IAS) operating system in the UK; IAS was a variant of RSX-11D more suitable for time sharing. Later development and release of IAS was led by Andy Wilson, in Digital's UK facilities.
Release dates
Below are estimated release dates for RSX-11 and IAS. Data is taken from the printing date of the associated documentation; the general-availability date is expected to follow closely after. When manuals have different printing dates, the latest date is used. RSX-11S is a proper subset of RSX-11M, so its release dates are assumed to be the same as those of the corresponding version of RSX-11M. On the other hand, RSX-11M Plus is an enhanced version of RSX-11M, so each release is expected to be later than the corresponding version of RSX-11M.
Legal ownership, development model and availability
RSX-11 is proprietary software. Copyright is asserted in binary files, source code and documentation alike. It was entirely develo
https://en.wikipedia.org/wiki/Location-based%20service
Location-based service (LBS) is a general term denoting software services which use geographic data and information to provide services or information to users. LBS can be used in a variety of contexts, such as health, indoor object search, entertainment, work, personal life, etc. Commonly used examples of location-based services include navigation software, social networking services, location-based advertising, and tracking systems. LBS can also include mobile commerce when taking the form of coupons or advertising directed at customers based on their current location. LBS also includes personalized weather services and even location-based games. LBS is critical to many businesses as well as government organizations for driving real insight from data tied to a specific location where activities take place. The spatial patterns that location-related data and services can reveal are among their most powerful and useful aspects: location is a common denominator in all of these activities and can be leveraged to better understand patterns and relationships. Banking, surveillance, online commerce, and many weapon systems are dependent on LBS. Access policies are controlled by location data or time-of-day constraints, or a combination thereof. As such, an LBS is an information service with a number of uses in social networking today, in entertainment or security, which is accessible with mobile devices through the mobile network and which uses information on the geographical position of the mobile device. This concept of location-based systems is not compliant with the standardized concept of real-time locating systems (RTLS) and related local services, as noted in ISO/IEC 19762-5 and ISO/IEC 24730-1. While networked computing devices generally do well at informing consumers of days-old data, the computing devices themselves can also be tracked, even in real time. LBS privacy issues arise in that context, and are documented below.
History
Location-based services (LBSs) are widely used in many computer systems and applications. Modern location-based services are made possible by technological developments such as the World Wide Web, satellite navigation systems, and the widespread use of mobile phones. Location-based services were developed by integrating data from satellite navigation systems, cellular networks, and mobile computing, to provide services based on the geographical locations of users. Over their history, location-based services have evolved from simple synchronization-based service models to authenticated and complex tools for implementing virtually any location-based service model or facility. There are currently no agreed-upon criteria for defining the market size of location-based services, but the European GNSS Agency estimated that 40% of all computer applications used location-based software as of 2013, and 30% of all Internet searches were for locations. LBS is the ability to open and close specific dat
https://en.wikipedia.org/wiki/Global%20Crossing
Global Crossing Limited was a telecommunications company that provided computer networking services and operated a Tier 1 carrier. It maintained a large backbone network and offered peering, virtual private networks, leased lines, audio and video conferencing, long-distance telephone, managed services, dialup, colocation centres and VoIP. Its customer base ranged from individuals to large enterprises and other carriers, with emphasis on higher-margin layered services such as managed services and VoIP with leased lines. Its core network delivered services to more than 700 cities in more than 70 countries. Global Crossing was the first global communications provider with IPv6 natively deployed in both its private and public networks. It was legally domiciled in Bermuda and had its administrative headquarters in New Jersey. In 1999, during the dot-com bubble, the company was valued at $47 billion, but it never had a profitable year. In 2002, the company filed for one of the largest bankruptcies in history and its executives were accused of covering up an accounting scandal. On October 3, 2011, Global Crossing was acquired by Level 3 Communications for $3 billion, including the assumption of $1.1 billion in debt.
History
Founding and early growth
In March 1997, Global Crossing was founded by Gary Winnick, the former manager of the bond desk of Drexel Burnham Lambert, and his Drexel colleagues who moved on to work at Canadian Imperial Bank of Commerce (CIBC): Abbott L. Brown, David L. Lee, and Barry Porter. In 1997, the company raised $35 million, including investments by Winnick and the CIBC Argosy Merchant Funds (later Trimaran Capital Partners). Winnick was chairman of the company from 1997 until 2002. In 1998, he hired Lodwrick Cook, former CEO of Atlantic Richfield Company, as co-chairman. John Scanlon became the first CEO of the company in the same year, but was replaced in March 1999 by Robert Annunziata, who had resigned as president of AT&T Corporation's Business Services group to "build a company from start to finish". In May 1999, Global Crossing made an offer to acquire US West, but was outbid by Qwest. In July 1999, the company acquired Global Marine Systems, the undersea cable maintenance arm of Cable & Wireless, for $885 million. Later that year, in September 1999, the company acquired Frontier Communications, the former Rochester Telephone Corporation, for $9.9 billion and renamed it Global Crossing North America. That same month, Global Crossing acquired 49% of SB Submarine Systems, and formed Asia Global Crossing, a $1.3 billion joint venture with SoftBank Group and Microsoft to build a fiber-optic network in Asia linking Japan, China, Singapore, Hong Kong, Taiwan, South Korea, Malaysia and the Philippines. In November 1999, Global Crossing acquired Racal Telecom for $1.65 billion. In January 2000, the company formed a 50/50 joint venture with Hutchison Whampoa, valued at $1.2 billion, for a fiber-optic network in Hong Kong.
https://en.wikipedia.org/wiki/Tier%201%20network
A Tier 1 network is an Internet Protocol (IP) network that can reach every other network on the Internet solely via settlement-free interconnection (also known as settlement-free peering). Tier 1 networks can exchange traffic with other Tier 1 networks without paying any fees for the exchange of traffic in either direction. In contrast, some Tier 2 networks and all Tier 3 networks must pay to transmit traffic on other networks. There is no authority that defines tiers of networks participating in the Internet. The most common and well-accepted definition of a Tier 1 network is a network that can reach every other network on the Internet without purchasing IP transit or paying for peering. By this definition, a Tier 1 network must be a transit-free network (purchases no transit) that peers for free with every other Tier 1 network and can reach all major networks on the Internet. Not all transit-free networks are Tier 1 networks, as it is possible to become transit-free by paying for peering, and it is also possible to be transit-free without being able to reach all major networks on the Internet. The most widely quoted source for identifying Tier 1 networks is published by Renesys Corporation, but the base information to prove the claim is publicly accessible from many locations, such as the RIPE RIS database, the Oregon Route Views servers, Packet Clearing House, and others. It can be difficult to determine whether a network is paying for peering or transit, as these business agreements are rarely public information, or are covered under a non-disclosure agreement. The Internet peering community is roughly the set of peering coordinators present at the Internet exchange points on more than one continent. The subset representing Tier 1 networks is collectively understood in a loose sense, but not published as such. Common definitions of Tier 2 and Tier 3 networks: Tier 2 network: A network that peers for free with some networks, but still purchases IP transit or pays for peering to reach at least some portion of the Internet. Tier 3 network: A network that solely purchases transit/peering from other networks to participate in the Internet. History The original Internet backbone was the ARPANET when it provided the routing between most participating networks. The development of the British JANET (1984) and U.S. NSFNET (1985) infrastructure programs to serve their nations' higher education communities, regardless of discipline, resulted in 1989 with the NSFNet backbone. The Internet could be defined as the collection of all networks connected and able to interchange Internet Protocol datagrams with this backbone. Such was the weight of the NSFNET program and its funding ($200 million from 1986 to 1995)—and the quality of the protocols themselves—that by 1990 when the ARPANET itself was finally decommissioned, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide. When the Internet was opened to the co
https://en.wikipedia.org/wiki/Partial%20evaluation
In computing, partial evaluation is a technique for several different types of program optimization by specialization. The most straightforward application is to produce new programs that run faster than the originals while being guaranteed to behave in the same way. A computer program prog is seen as a mapping of input data into output data:
prog: Istatic × Idynamic → O
where Istatic, the static data, is the part of the input data known at compile time. The partial evaluator transforms ⟨prog, Istatic⟩ into prog*: Idynamic → O by precomputing all static input at compile time. prog* is called the "residual program" and should run more efficiently than the original program. The act of partial evaluation is said to "residualize" prog to prog*.
Futamura projections
A particularly interesting example of the use of partial evaluation, first described in the 1970s by Yoshihiko Futamura, is when prog is an interpreter for a programming language. If Istatic is source code designed to run inside that interpreter, then partial evaluation of the interpreter with respect to this data/program produces prog*, a version of the interpreter that only runs that source code, is written in the implementation language of the interpreter, does not require the source code to be resupplied, and runs faster than the original combination of the interpreter and the source. In this case prog* is effectively a compiled version of Istatic. This technique is known as the first Futamura projection, of which there are three:
1. Specializing an interpreter for given source code, yielding an executable.
2. Specializing the specializer for the interpreter (as applied in #1), yielding a compiler.
3. Specializing the specializer for itself (as applied in #2), yielding a tool that can convert any interpreter to an equivalent compiler.
They were described by Futamura in Japanese in 1971 and in English in 1983.
See also
Compile-time function execution
Memoization
Partial application
Run-time algorithm specialisation
smn theorem
Strength reduction
Template metaprogramming
External links
Applying Dynamic Partial Evaluation to dynamic, reflective programming languages
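As a toy illustration of specialization, the following Python sketch specializes a general power function for the static input n = 3, producing a residual function of the dynamic input alone. The specialize_power "partial evaluator" is a hand-written stand-in for a real one, and its names are hypothetical.

def power(base, n):
    # General program: both inputs are dynamic.
    result = 1
    for _ in range(n):
        result *= base
    return result

def specialize_power(n):
    # Toy partial evaluator for power: n is static, base stays dynamic.
    # The loop is unrolled at specialization time, and the residual
    # program is emitted as source text.
    body = " * ".join(["base"] * n) if n > 0 else "1"
    return eval(f"lambda base: {body}")

power3 = specialize_power(3)   # residual program: lambda base: base * base * base
assert power3(5) == power(5, 3) == 125

The residual power3 does no looping or counting at run time; all work that depended only on the static input has already been done.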
https://en.wikipedia.org/wiki/SCADA
Supervisory control and data acquisition (SCADA) is a control system architecture comprising computers, networked data communications and graphical user interfaces for high-level supervision of machines and processes. It also covers sensors and other devices, such as programmable logic controllers, which interface with process plant or machinery.
Explanation
The operator interfaces which enable monitoring and the issuing of process commands, like controller set point changes, are handled through the SCADA computer system. The subordinated operations, e.g. the real-time control logic or controller calculations, are performed by networked modules connected to the field sensors and actuators. The SCADA concept was developed to be a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, while using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances as well as small distances. It is one of the most commonly used types of industrial control systems, in spite of concerns about SCADA systems being vulnerable to cyberwarfare/cyberterrorism attacks.
Control operations
The key attribute of a SCADA system is its ability to perform a supervisory operation over a variety of other proprietary devices. The accompanying diagram is a general model which shows functional manufacturing levels using computerised control. Referring to the diagram:
Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves.
Level 1 contains the industrialised input/output (I/O) modules, and their associated distributed electronic processors.
Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens.
Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and targets.
Level 4 is the production scheduling level.
Level 1 contains the programmable logic controllers (PLCs) or remote terminal units (RTUs). Level 2 contains the SCADA software; data acquisition begins at the RTU or PLC level, with readings and equipment status reports that are communicated to level 2 SCADA as required. Data is then compiled and formatted in such a way that a control room operator using the HMI (Human Machine Interface) can make supervisory decisions to adjust or override normal RTU (PLC) controls. Data may also be fed to a historian, often built on a commodity database management system, to allow trending and other analytical auditing. SCADA systems typically use a tag database, which contains data elements called tags or points, which relate to specific instrumentation or actuators within the process system.
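The tag database can be pictured with a minimal Python sketch; this is a deliberately simplified illustration whose field and tag names are assumptions, and real SCADA historians and tag databases are far richer. Each tag ties one instrument or actuator to its latest value and status.

from dataclasses import dataclass
import time

@dataclass
class Tag:
    # One data element ("point") tied to a specific instrument or actuator.
    name: str
    value: float
    status: str        # e.g. "OK" or "ALARM", for the operator's HMI
    timestamp: float

# A toy tag database: tag name -> latest reading.
tag_db: dict[str, Tag] = {}

def update_tag(name: str, value: float, alarm_above: float) -> None:
    # Record a new reading, flagging an alarm condition for the HMI.
    status = "ALARM" if value > alarm_above else "OK"
    tag_db[name] = Tag(name, value, status, time.time())

update_tag("FT-101.flow", 12.7, alarm_above=20.0)   # flow sensor, in range
update_tag("TT-205.temp", 98.4, alarm_above=90.0)   # temperature, over limit
print(tag_db["TT-205.temp"].status)                 # prints "ALARM"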
https://en.wikipedia.org/wiki/V5%20interface
V5 is a family of telephone network protocols defined by ETSI which allow communications between the telephone exchange, also known in the specifications as the local exchange (LE), and the local loop. With potentially thousands of subscribers connected to the LE there is the problem of physically managing thousands of wires out to the local subscribers (and the costs associated with that). Prior to the specification of V5 the manufacturers of exchange equipment had proprietary solutions to the problem. These solutions did not inter-operate and meant being tied into a single manufacturer's method at each exchange. V5 provided a standard set of protocols from the subscriber to the LE. The AN (or Access Network) was defined as a reference point. Signalling between this point and the LE was standardised and therefore allowed a multiple-vendor solution, provided the specifications were followed. This resulted in a single link (or, in the case of V5.2, multiple links) from the AN to the LE, reducing the need for many lines along this point (or more likely no need for a proprietary solution to manage the single link). The final link to the local loop remained the same, with digital signalling (ISDN) and analogue signalling for basic telephony (also known as POTS in the industry). The protocols are based on the principle of common-channel signaling, where message-based signalling for all subscribers uses the same signalling channel(s) rather than separate channels existing for different subscribers. V5 comes in two forms:
V5.1 (ETS 300 324-1), in which there is a 1 to 1 correspondence between subscriber lines and bearer channels in the aggregate link to the exchange. A V5.1 interface relates to a single aggregate E1 (2 Mbit/s) link between a multiplexer and an exchange.
V5.2 (ETS 300 347-1), which provides for concentration where there are not enough bearer channels in the aggregate link(s) to accommodate all subscribers at the same time. A single V5.2 interface can control up to 16 E1 links at once and can include protection of the signalling channels.
The layer 3 protocols
Control protocol - This controls the setup and basic management of the V5 link from the Access Network (AN) to the Local Exchange (LE).
PSTN protocol - Translation of the analogue signals for POTS into a digital form for transfer from AN to LE (i.e. off-hook, digit dialling, on-hook etc.).
BCC protocol - In V5.2, since any channel could be allocated to a call, this protocol manages the assignment of channels to a call. (Only in V5.2.)
Link control protocol - For managing up to 16 E1 links. It controls the status of the links (i.e. in service/out of service).
Protection protocol - Used in V5.2; this protocol is duplicated on two or more channels on two or more links and provides instant failover in the event of one failing.
V5.1 only supports the Control, PSTN and ISDN protocols. V5.2 also supports the BCC, Link Control and Protection protocols. V5 lay
https://en.wikipedia.org/wiki/Shannon%E2%80%93Fano%20coding
In the field of data compression, Shannon–Fano coding, named after Claude Shannon and Robert Fano, is a name given to two different but related techniques for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured). Shannon's method chooses a prefix code where a source symbol i is given the codeword length l_i = ⌈−log2 p_i⌉. One common way of choosing the codewords uses the binary expansion of the cumulative probabilities. This method was proposed in Shannon's "A Mathematical Theory of Communication" (1948), his article introducing the field of information theory. Fano's method divides the source symbols into two sets ("0" and "1") with probabilities as close to 1/2 as possible. Then those sets are themselves divided in two, and so on, until each set contains only one symbol. The codeword for that symbol is the string of "0"s and "1"s that records which half of the divides it fell on. This method was proposed in a later (in print) technical report by Fano (1949). Shannon–Fano codes are suboptimal in the sense that they do not always achieve the lowest possible expected codeword length, as Huffman coding does. However, Shannon–Fano codes have an expected codeword length within 1 bit of optimal. Fano's method usually produces encoding with shorter expected lengths than Shannon's method. However, Shannon's method is easier to analyse theoretically. Shannon–Fano coding should not be confused with Shannon–Fano–Elias coding (also known as Elias coding), the precursor to arithmetic coding.
Naming
Regarding the confusion in the two different codes being referred to by the same name, Krajči et al. write:
Around 1948, both Claude E. Shannon (1948) and Robert M. Fano (1949) independently proposed two different source coding algorithms for an efficient description of a discrete memoryless source. Unfortunately, in spite of being different, both schemes became known under the same name Shannon–Fano coding. There are several reasons for this mixup. For one thing, in the discussion of his coding scheme, Shannon mentions Fano's scheme and calls it "substantially the same" (Shannon, 1948, p. 17 [reprint]). For another, both Shannon's and Fano's coding schemes are similar in the sense that they both are efficient, but suboptimal prefix-free coding schemes with a similar performance.
Shannon's (1948) method, using predefined word lengths, is called Shannon–Fano coding by Cover and Thomas, Goldie and Pinch, Jones and Jones, and Han and Kobayashi. It is called Shannon coding by Yeung. Fano's (1949) method, using binary division of probabilities, is called Shannon–Fano coding by Salomon and Gupta. It is called Fano coding by Krajči et al.
Shannon's code: predefined word lengths
Shannon's algorithm
Shannon's method starts by deciding on the lengths of all the codewords, then picks a prefix code with those word lengths. Given a source with probabilities p_1, p_2, …, p_n, the desired codeword lengths are l_i = ⌈log2 (1/p_i)⌉. Here, ⌈x⌉ is the ceiling function, meaning
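Fano's divide-in-half procedure, as described above, can be sketched in a few lines of Python; the probabilities below are an arbitrary worked example chosen for illustration, not taken from this text.

def fano(symbols):
    # symbols: list of (symbol, probability) pairs, sorted by descending probability
    if len(symbols) <= 1:
        return {symbols[0][0]: ""} if symbols else {}
    total = sum(p for _, p in symbols)
    # Choose the split that makes the two groups' probabilities closest to equal.
    best_split, best_diff, running = 1, float("inf"), 0.0
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        diff = abs(total - 2 * running)
        if diff < best_diff:
            best_split, best_diff = i, diff
    left, right = symbols[:best_split], symbols[best_split:]
    # Prepend "0" to codes in the first group and "1" to codes in the second.
    codes = {s: "0" + c for s, c in fano(left).items()}
    codes.update({s: "1" + c for s, c in fano(right).items()})
    return codes

probabilities = [("A", 0.385), ("B", 0.179), ("C", 0.154), ("D", 0.154), ("E", 0.128)]
print(fano(probabilities))
# {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}

Note that more probable symbols end up with shorter codewords, and no codeword is a prefix of another.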
https://en.wikipedia.org/wiki/Arithmetic%20coding
Arithmetic coding (AC) is a form of entropy encoding used in lossless data compression. Normally, a string of characters is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters will be stored with fewer bits and not-so-frequently occurring characters will be stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, an arbitrary-precision fraction q, where 0.0 ≤ q < 1.0. It represents the current information as a range, defined by two numbers. A recent family of entropy coders called asymmetric numeral systems allows for faster implementations thanks to directly operating on a single natural number representing the current information.
Implementation details and examples
Equal probabilities
In the simplest case, the probability of each symbol occurring is equal. For example, consider a set of three symbols, A, B, and C, each equally likely to occur. Simple block encoding would require 2 bits per symbol, which is wasteful: one of the bit variations is never used. That is to say, symbols A, B and C might be encoded respectively as 00, 01 and 10, with 11 unused. A more efficient solution is to represent a sequence of these three symbols as a rational number in base 3 where each digit represents a symbol. For example, the sequence "ABBCAB" could become 0.011201 in base 3, in arithmetic coding as a value in the interval [0, 1). The next step is to encode this ternary number using a fixed-point binary number of sufficient precision to recover it, such as 0.0010110001 in base 2 – this is only 10 bits; 2 bits are saved in comparison with naïve block encoding. This is feasible for long sequences because there are efficient, in-place algorithms for converting the base of arbitrarily precise numbers. To decode the value, knowing the original string had length 6, one can simply convert back to base 3, round to 6 digits, and recover the string.
Defining a model
In general, arithmetic coders can produce near-optimal output for any given set of symbols and probabilities. (The optimal value is −log2P bits for each symbol of probability P; see Source coding theorem.) Compression algorithms that use arithmetic coding start by determining a model of the data – basically a prediction of what patterns will be found in the symbols of the message. The more accurate this prediction is, the closer to optimal the output will be. Example: a simple, static model for describing the output of a particular monitoring instrument over time might be:
60% chance of symbol NEUTRAL
20% chance of symbol POSITIVE
10% chance of symbol NEGATIVE
10% chance of symbol END-OF-DATA. (The presence of this symbol means that the stream will be 'internally terminated',
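The equal-probability example can be reproduced with a short Python sketch. This illustrates only the base-conversion idea, not a full arithmetic coder, and rounding the fraction up to 10 bits happens to preserve enough precision to decode this particular message.

import math
from fractions import Fraction

SYMBOLS = "ABC"   # map A->0, B->1, C->2, treated as base-3 digits

def encode(message):
    # Interpret the message as a base-3 fraction q in [0, 1).
    q = Fraction(0)
    for i, ch in enumerate(message, start=1):
        q += Fraction(SYMBOLS.index(ch), 3 ** i)
    return q

def to_codeword(q, nbits):
    # Round q up to nbits binary digits of precision.
    return math.ceil(q * 2 ** nbits)

def decode(codeword, nbits, length):
    # Convert the binary fraction back to base 3 and read off the digits.
    q = Fraction(codeword, 2 ** nbits)
    out = []
    for _ in range(length):
        q *= 3
        digit = int(q)          # integer part is the next base-3 digit
        out.append(SYMBOLS[digit])
        q -= digit
    return "".join(out)

q = encode("ABBCAB")            # 127/729, i.e. 0.011201 in base 3
codeword = to_codeword(q, 10)   # a 10-bit binary codeword
assert decode(codeword, 10, 6) == "ABBCAB"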
https://en.wikipedia.org/wiki/Event-driven%20programming
In computer programming, event-driven programming is a programming paradigm in which the flow of the program is determined by events such as user actions from mice, keyboards, touchpads and touchscreens. Non-user-initiated events can involve sensor inputs, or be programmatically generated (message passing) from other programs or threads. Event-driven programming is the dominant paradigm used in graphical user interfaces and other applications (e.g., JavaScript web applications) that are centered on performing certain actions in response to user input. This is also true of programming for device drivers (e.g., P in USB device driver stacks). In an event-driven application, there is generally a main loop that listens for events and then triggers a callback function when one of those events is detected. In embedded systems, the same may be achieved using hardware interrupts instead of a constantly running main loop. Event-driven programs can be written in any programming language, although the task is easier in languages that provide high-level abstractions, such as await and closures.
Event handlers
A trivial event handler
Because the code for checking of events and the main loop are common amongst applications, many programming frameworks take care of their implementation and expect the user to provide only the code for the event handlers. In this simple example, there may be a call to an event handler called OnKeyEnter that includes an argument with a string of characters, corresponding to what the user typed before hitting the ENTER key. To add two numbers, storage outside the event handler must be used. The implementation might look like below.
globally declare the counter K and the integer T.
OnKeyEnter(character C) {
    convert C to a number N
    if K is zero, store N in T and increment K
    otherwise, add N to T, print the result and reset K to zero
}
While keeping track of history is normally trivial in a sequential program, because event handlers execute in response to external events, correctly structuring the handlers to work when called in any order can require special attention and planning in an event-driven program.
Creating event handlers
The first step in developing an event-driven program is to write a series of subroutines, or methods, called event-handler routines. These routines handle the events to which the main program will respond. For example, a single left-button mouse-click on a command button in a GUI program may trigger a routine that will open another window, save data to a database or exit the application. Many modern-day programming environments provide the programmer with event templates, allowing the programmer to focus on writing the event code. The second step is to bind event handlers to events so that the correct function is called when the event takes place. Graphical editors combine the first two steps: double-click on a button, and the editor creates an (empty) event handler associated with the user clicking th
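A runnable Python rendering of the trivial OnKeyEnter handler above might look as follows. This is a sketch that assumes a console program in which each ENTER key press delivers a line of input; the while loop at the bottom stands in for the main loop a framework would normally provide.

# Global state shared across handler invocations.
k = 0      # counts how many numbers have been entered so far (0 or 1)
t = 0      # holds the first number while waiting for the second

def on_key_enter(line: str) -> None:
    # Handler invoked each time the user presses ENTER.
    global k, t
    n = int(line)          # convert the typed characters to a number
    if k == 0:
        t, k = n, 1        # store the first operand
    else:
        print(t + n)       # second operand arrived: print the sum
        k = 0              # reset for the next pair

# A minimal main loop that listens for "events" and dispatches the handler.
if __name__ == "__main__":
    while True:
        try:
            on_key_enter(input("number> "))
        except (EOFError, KeyboardInterrupt):
            break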
https://en.wikipedia.org/wiki/Jasper%20%28disambiguation%29
Jasper is an opaque mineral. Jasper or Jaspers may also refer to: Computing JasPer, a project to create an open-source implementation of the JPEG-2000 codec Tomcat Jasper, a software engine used by Apache Tomcat for JavaServer Pages Jasper, a hardware revision of Microsoft's Xbox 360 video game console Jasper Technologies, Inc., an American corporation that provides a cloud-based software platform for the Internet of Things Jasper Design Automation, an electronic design automation company, now part of Cadence Design Systems JasperReports, an open source Java reporting library Music "Jasper", a 1976 Jim Stafford song "Jasper" (Kaela Kimura song), a 2008 song People and fictional characters Jasper (given name) Jasper (surname) Places Australia Lake Jasper, a permanent freshwater lake in Western Australia Canada Jasper National Park Jasper, Alberta, a specialized municipality Jasper station, a Canadian National Railway station United States Jasper, Alabama, a city Jasper, Arkansas, a city Jasper, Florida, a city Jasper, Georgia, a city Jasper, Indiana, a city Jasper, Minnesota, a city Jasper, Missouri, a city Jasper, New York, a town Jasper, Ohio, an unincorporated community Jasper, Oregon, an unincorporated community Jasper, Tennessee, a town Jasper, Texas, a city Jasper, Virginia, an unincorporated community Jasper County, includes a list of U.S. counties with that name Jasper Creek (disambiguation) Jasper Township (disambiguation) Mount Jasper, a mountain in Colorado Other uses , six ships of the Royal Navy Jasper High School (disambiguation), various American schools Jasper (San Francisco), a residential skyscraper Jasper ware, a type of fine pottery developed by Josiah Wedgwood Joint Actinide Shock Physics Experimental Research Jasper United, a Nigerian former football club based in the city of Onitsha A colloquial name for the common wasp in southern England and the English Midlands Jasper Ocean Terminal, a planned deepwater container port in South Carolina Manhattan Jaspers and Lady Jaspers, the nicknames of the Manhattan College sports teams Jasper (film), a 2022 Indian Tamil-language drama film See also Jaspers (disambiguation)
https://en.wikipedia.org/wiki/PDF%20%28disambiguation%29
PDF often refers to the Portable Document Format in computing. PDF, pdf, Pdf, PdF or similar may also refer to: Computing and telecommunications Pop Directional Formatting (Unicode character U+202C), a formatting character in bi-directional formatting Printer description file, describing capabilities of PostScript printers Profile-directed feedback, a compiler optimization better known as Profile-Guided Optimization (PGO) Program Development Facility, in the IBM z/OS operating system Pair distribution function Powder Diffraction File Probability density function Mathematics Probability density function Probability distribution function (disambiguation) Organisations Parkinson's Disease Foundation, a medical foundation PDF Solutions, a company based in San Jose, California Peace Development Fund, a non-profit public foundation based in Amherst, Massachusetts Military Panama Defense Forces People's Defence Force (Myanmar), armed wing of the Burmese government-in-exile since 2021 People's Defence Force (Singapore) Permanent Defence Forces, the standing branches of Ireland's military Politics Parti de la France People's Democratic Front (Hyderabad), a political party that existed in India during the early 1950s Other uses PDF (gene), a gene that in humans encodes the enzyme peptide deformylase Palladium fluoride (PdF), a series of chemical compounds Parton distribution function, in particle physics Peak draw force, in a compound bow in archery Percival David Foundation of Chinese Art Pigment dispersing factor, in biology Planar deformation features, in geology Playa del Fuego, a Delaware art festival Post-Doctoral Fellowship See also KPDF-CD, a television station in Phoenix, Arizona PDF417, or "portable data file 417", a two-dimensional barcode
https://en.wikipedia.org/wiki/Asure%20Software
Asure Software is a software company. Prior to September 13, 2007, the company was known as Forgent Networks. After rebranding as Asure Software, the company expanded into offering human capital management (HCM) solutions, including payroll, time & attendance, talent management, human resource management, benefits administration and insurance services. It also had a software division, NetSimplicity, which specialized in room scheduling and fixed asset management software and was spun off in 2019. Patents and litigation JPEG In 2002, while known as Forgent, the company claimed that through its subsidiary, Compression Labs, it owned the patent rights on the JPEG image compression standard, which is widely used on the World Wide Web. Its claim arose from a patent that had been filed on October 27, 1986, and granted on October 6, 1987, by Wen-Hsiung Chen and Daniel J. Klenke. While Forgent did not own Compression Labs at the time, Chen later sold the company to Forgent before joining Cisco. Critics claim that the legal principle of laches, i.e., not asserting one's rights in a timely manner, invalidates Forgent's claims on the patent. They also noted the similarity to Unisys' attempts to assert rights over the GIF image compression standard via LZW patent enforcement. The JPEG committee responded to Forgent's claims, stating that it believes prior art exists that would invalidate Forgent's claims, and launched a search for prior art evidence. The 1992 JPEG specification cited two earlier research papers written by Wen-Hsiung Chen, published in 1977 and 1984. JPEG representative Richard Clark also claimed that Chen sat in one of the JPEG committees, but Forgent denied this claim. In April 2004, Forgent stated that 30 companies had already paid US$90 million in royalties. On April 23, lawsuits were filed against 31 companies, including Adobe Systems, Apple Computer and IBM, for infringement of their patent. On September 26, 2005, Axis Communications, one of the defendants, announced a settlement with Compression Labs Inc.; the terms were not disclosed. As of late October 2005, six companies were known to have licensed the patent from Forgent, including Adobe, Macromedia, Axis, Color Dreams, and Research In Motion. On May 25, 2006, the United States Patent and Trademark Office rejected the broadest part of Forgent's claims, stating prior art submitted by the Public Patent Foundation invalidated those claims. PubPat's Executive Director, Dan Ravicher, said that the submitters knew about the prior art but failed to tell the USPTO about it. On August 11, 2006, Forgent received notice from the NASDAQ stock market regarding non-compliance with the minimum bid price rule, which can lead to delisting, before coming back into compliance in January 2007. The company issued a press release on November 1, 2006, stating that they settled their remaining claims against roughly 60 companies for a total of $8 million which was paid by, among other compan
https://en.wikipedia.org/wiki/Risc%20PC
Risc PC was a range of personal computers launched in 1994 by Acorn and replaced the preceding Archimedes series. The machines had a unique architecture unrelated to IBM PC clones and were notable for using the Acorn-developed ARM CPU, which is now widely used in mobile devices. At launch, the original Risc PC 600 model was fitted as standard with an ARM 610, a 32-bit RISC CPU with 4 KB of cache, clocked at 30 MHz. CPU technology advanced rapidly in this period, though, and within only two years a DEC StrongARM running at 233 MHz could be installed, which was around 8 times faster. The machines ran the RISC OS operating system, which has a windowed cooperative multi-tasking design. Unusually for a PC of the period, the OS was stored in ROM, which enabled a relatively fast boot time. In contrast to most contemporary IBM clones, the machines supported multiple processors as a standard feature. Secondary (or "guest") CPUs did not need to be ARM-based and could be an entirely different architecture. It was possible to add an x86 CPU, which enabled use of operating systems including DOS and Windows 95. Cards could often be added to other machines of the era to run DOS software, but these would more usually implement the majority of an IBM PC clone on the card. The Risc PC required only the addition of the relevant CPU with some interface logic. Alternative operating systems ran concurrently with RISC OS in a window. Applications from both operating systems could run at the same time, in a similar fashion to a virtual machine, with data shared between them. While this is now a ubiquitous technology, it was a less common feature in 1994, when more usually only one operating system would run at once on a single PC. The Risc PC had a novel case design where additional chassis, known as "slices", could be stacked on top of each other, expanding the height of the machine. Up to six additional slices could be stacked, each containing additional drives or expansion cards (known as "podules"). At the time the IBM clone industry was standardised around the PCI bus, but Acorn used its own bus implementation that was not compatible and required its own unique expansion cards. The machines did, however, use the then-industry-standard IDE or SCSI drives found in contemporary PCs. Acorn discontinued production of the Risc PC in 1998 after a corporate reorganisation, but Castle Technology continued manufacturing the machines until 2003 and subsequently produced its own similar designs. RISC OS is still available after becoming an open source product. Technical specifications Use The Risc PC was used by music composers and scorewriters to run the Sibelius scorewriting software. Between 1994 and 2008, the Risc PC and A7000+ were used in television for broadcast automation, programmed by the UK company OmniBus Systems: once considered "the world leader in television station automation" and at one point automating "every national news programme on terrestrial television in the United King
https://en.wikipedia.org/wiki/Defensive%20programming
Defensive programming is a form of defensive design intended to develop programs that are capable of detecting potential security abnormalities and making predetermined responses. It ensures the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety, or security is needed. Defensive programming is an approach to improve software and source code, in terms of: General quality – reducing the number of software bugs and problems. Making the source code comprehensible – the source code should be readable and understandable so it is approved in a code audit. Making the software behave in a predictable manner despite unexpected inputs or user actions. Overly defensive programming, however, may safeguard against errors that will never be encountered, thus incurring run-time and maintenance costs. There is also a risk that code traps prevent too many exceptions, potentially resulting in unnoticed, incorrect results. Secure programming Secure programming is the subset of defensive programming concerned with computer security. Security is the concern, not necessarily safety or availability (the software may be allowed to fail in certain ways). As with all kinds of defensive programming, avoiding bugs is a primary objective; however, the motivation is not as much to reduce the likelihood of failure in normal operation (as if safety were the concern), but to reduce the attack surface – the programmer must assume that the software might be misused actively to reveal bugs, and that bugs could be exploited maliciously.

int risky_programming(char *input)
{
    char str[1000];
    // ...
    strcpy(str, input);  // Copy input.
    // ...
}

The function will result in undefined behavior when the input is over 1000 characters. Some programmers may not feel that this is a problem, supposing that no user will enter such a long input. This particular bug demonstrates a vulnerability which enables buffer overflow exploits. Here is a solution to this example:

int secure_programming(char *input)
{
    char str[1000 + 1];  // One more for the null character.
    // ...
    // Copy input without exceeding the length of the destination.
    strncpy(str, input, sizeof(str));
    // If strlen(input) >= sizeof(str) then strncpy won't null terminate.
    // We counter this by always setting the last character in the buffer to NUL,
    // effectively cropping the string to the maximum length we can handle.
    // One can also decide to explicitly abort the program if strlen(input) is
    // too long.
    str[sizeof(str) - 1] = '\0';
    // ...
}

Offensive programming Offensive programming is a category of defensive programming, with the added emphasis that certain errors should not be handled defensively. In this practice, only errors from outside the program's control are to be handled (such as user input); the software itself, as well as data from within the program's line of defense,
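As an aside, where C99 is available, an alternative sketch of the fix is to use snprintf, which always NUL-terminates and truncates over-long input safely, making the manual termination step above unnecessary. (The function name here is illustrative.)

int secure_programming_snprintf(const char *input)
{
    char str[1000 + 1];
    // snprintf writes at most sizeof(str) - 1 characters and always
    // NUL-terminates (C99), so over-long input is cropped safely.
    snprintf(str, sizeof(str), "%s", input);
    // ...
    return 0;
}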
https://en.wikipedia.org/wiki/Reverse%20Address%20Resolution%20Protocol
The Reverse Address Resolution Protocol (RARP) is an obsolete computer communication protocol used by a client computer to request its Internet Protocol (IPv4) address from a computer network, when all it has available is its link layer or hardware address, such as a MAC address. The client broadcasts the request and does not need prior knowledge of the network topology or the identities of servers capable of fulfilling its request. RARP is described in Internet Engineering Task Force (IETF) publication RFC 903. It has been rendered obsolete by the Bootstrap Protocol (BOOTP) and the modern Dynamic Host Configuration Protocol (DHCP), which both support a much greater feature set than RARP. RARP requires one or more server hosts to maintain a database of mappings of Link Layer addresses to their respective protocol addresses. MAC addresses need to be individually configured on the servers by an administrator. RARP is limited to serving only IP addresses. Reverse ARP differs from the Inverse Address Resolution Protocol (InARP) described in RFC 2390, which is designed to obtain the IP address associated with a local Frame Relay data link connection identifier. InARP is not used in Ethernet. Modern Day Uses Although the original uses for RARP have been superseded by different protocols, some modern day protocols use RARP to handle MAC migration, particularly in virtual machines, using a technique originating in QEMU. Examples are: Cisco's Overlay Transport Virtualization (OTV). RARP is used to update the layer 2 forwarding tables when a MAC address moves between data centers. VMware vSphere's vMotion. RARP is used when a VM MAC moves between hosts. See also Maintenance Operations Protocol (MOP) References Internet protocols Internet Standards Link protocols
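Because RARP reuses the ARP frame layout (carried in Ethernet frames of type 0x8035, with operation codes 3 for "request reverse" and 4 for "reply reverse" per RFC 903), the on-the-wire packet can be sketched as a C structure. This is a sketch for illustration; a robust implementation would serialise the fields explicitly, and all multi-byte fields are big-endian on the wire.

#include <stdint.h>

struct rarp_packet {
    uint16_t htype;    /* hardware type: 1 = Ethernet */
    uint16_t ptype;    /* protocol type: 0x0800 = IPv4 */
    uint8_t  hlen;     /* hardware address length: 6 */
    uint8_t  plen;     /* protocol address length: 4 */
    uint16_t oper;     /* 3 = request reverse, 4 = reply reverse */
    uint8_t  sha[6];   /* sender hardware address */
    uint8_t  spa[4];   /* sender protocol address */
    uint8_t  tha[6];   /* target hardware address: the MAC being looked up */
    uint8_t  tpa[4];   /* target protocol address: filled in by the server */
};

In a typical exchange the client broadcasts a request with its own MAC address in tha, and a RARP server answering from its static MAC-to-IP database writes the result into tpa of the reply.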
https://en.wikipedia.org/wiki/Finger%20%28protocol%29
In computer networking, the Name/Finger protocol and the Finger user information protocol are simple network protocols for the exchange of human-oriented status and user information. Name/Finger protocol The Name/Finger protocol is based on Request for Comments document RFC 742 (December 1977) as an interface to the name and finger programs that provide status reports on a particular computer system or a particular person at network sites. The finger program was written in 1971 by Les Earnest who created the program to solve the need of users who wanted information on other users of the network. Information on who is logged in was useful to check the availability of a person to meet. This was probably the earliest form of presence information for remote network users. Prior to the finger program, the only way to get this information on WAITS was with a WHO program that showed IDs and terminal line numbers (the server's internal number of the communication line, over which the user's terminal is connected) for logged-in users. In reference to the name FINGER, Les Earnest, wrote that he saw users of the WAITS time-sharing system run their fingers down the output of the WHO command. Finger user information protocol The finger daemon runs on TCP port 79. The client will (in the case of remote hosts) open a connection to port 79. An RUIP (Remote User Information Program) is started on the remote end of the connection to process the request. The local host sends the RUIP one line query based upon the Finger query specification, and waits for the RUIP to respond. The RUIP receives and processes the query, returns an answer, then initiates the close of the connection. The local host receives the answer and the close signal, then proceeds to close its end of the connection. The Finger user information protocol is based on RFC 1288 (The Finger User Information Protocol, December 1991). Typically the server side of the protocol is implemented by a program fingerd or in.fingerd (for finger daemon), while the client side is implemented by the name and finger programs which are supposed to return a friendly, human-oriented status report on either the system at the moment or a particular person in depth. There is no required format, and the protocol consists mostly of specifying a single command line. The program would supply information such as whether a user is currently logged-on, e-mail address, full name etc. As well as standard user information, finger displays the contents of the .project and .plan files in the user's home directory. Often this file (maintained by the user) contains either useful information about the user's current activities, similar to micro-blogging, or alternatively all manner of humor. Security concerns Supplying such detailed information as e-mail addresses and full names was considered acceptable and convenient in the early days of networking, but later was considered questionable for privacy and security reasons. Finger
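A minimal client for the user information protocol can be written in a few lines of POSIX C, as a sketch: open a TCP connection to port 79, send a single query line terminated by CRLF as RFC 1288 specifies, and print whatever the RUIP returns. Error handling and cleanup are pared down for brevity.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

int finger_query(const char *host, const char *user)
{
    struct addrinfo hints = {0}, *res;
    char query[256], buf[512];
    ssize_t n;
    int fd;

    hints.ai_socktype = SOCK_STREAM;                /* TCP */
    if (getaddrinfo(host, "79", &hints, &res) != 0) /* finger = TCP port 79 */
        return -1;
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    /* RFC 1288: one query line, CRLF-terminated; an empty line asks
       for a listing of all logged-in users. */
    snprintf(query, sizeof query, "%s\r\n", user);
    write(fd, query, strlen(query));

    while ((n = read(fd, buf, sizeof buf)) > 0)     /* print the RUIP's reply */
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd);
    return 0;
}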
https://en.wikipedia.org/wiki/Acorn%20Archimedes
Acorn Archimedes is a family of personal computers designed by Acorn Computers of Cambridge, England. The systems are based on Acorn's own ARM architecture processors and the proprietary operating systems Arthur and RISC OS. The first models were introduced in 1987, and systems in the Archimedes family were sold until the mid-1990s. ARM's RISC design, a 32-bit CPU (using 26-bit addressing), running at 8 MHz, was stated as achieving 4 MIPS, which provided a significant upgrade from 8-bit home computers, such as Acorn's previous machines. Claims of being the fastest micro in the world and running at 18 MIPS were also made during tests. Two of the first models—the A305 and A310—were given the BBC branding, with BBC Enterprises regarding the machines as "a continuing part of the original computer literacy project". Dissatisfaction with the branding arrangement was voiced by competitor Research Machines and an industry group led by a Microsoft representative, the British Micro Federation, who advocated the use of "business standard" operating systems such as MS-DOS. Responding to claims that the BBC branding was "unethical" and "damaging", a BBC Enterprises representative claimed that, with regard to the BBC's ongoing computer literacy initiatives, bringing in "something totally new would be irresponsible". The name "Acorn Archimedes" is commonly used to describe any of Acorn's contemporary designs based on the same architecture. This architecture can be broadly characterised as involving the ARM CPU and the first generation chipset consisting of MEMC (MEMory Controller), VIDC (VIDeo and sound Controller) and IOC (Input Output Controller). History Having introduced the BBC Micro in 1981, Acorn established itself as a major supplier to primary and secondary education in the United Kingdom. However, attempts to replicate this dominance in other sectors, such as home computing with the BBC Micro and Acorn Electron, and in other markets, including the United States and West Germany, were less successful. As microprocessor and computing technology advanced in the early 1980s, microcomputer manufacturers had to consider evolving their product lines to offer increased capabilities and performance. Acorn's strategy for business computing and the introduction of more capable machines involved a range of "second processor" expansions, including a Z80 second processor running the CP/M operating system, a commitment made by Acorn when securing the BBC Micro contract. Meanwhile, established platforms like CP/M running on Z80 processors faced competition from the IBM PC running PC DOS and computers with a variety of operating systems on Intel processors such as the 8088 and 8086. Systems using the Motorola 68000 and other processors running the Unix operating system also became available. Apple launched the Lisa and Macintosh computers, and Digital Research introduced its own GEM graphical user interface software, building on previous work by Xerox. Acorn's
https://en.wikipedia.org/wiki/Norbert%20Wiener
Norbert Wiener (November 26, 1894 – March 18, 1964) was an American mathematician, computer scientist and philosopher. He became a professor of mathematics at the Massachusetts Institute of Technology (MIT). A child prodigy, Wiener later became an early researcher in stochastic and mathematical noise processes, contributing work relevant to electronic engineering, electronic communication, and control systems. Wiener is considered the originator of cybernetics, the science of communication as it relates to living things and machines, with implications for engineering, systems control, computer science, biology, neuroscience, philosophy, and the organization of society. His work heavily influenced computer pioneer John von Neumann, information theorist Claude Shannon, anthropologists Margaret Mead and Gregory Bateson, and others. Norbert Wiener is credited as being one of the first to theorize that all intelligent behavior was the result of feedback mechanisms, that could possibly be simulated by machines and was an important early step towards the development of modern artificial intelligence. Biography Youth Wiener was born in Columbia, Missouri, the first child of Leo Wiener and Bertha Kahn, Jewish immigrants from Lithuania and Germany, respectively. Through his father, he was related to Maimonides, the famous rabbi, philosopher and physician from Al Andalus, as well as to Akiva Eger, chief rabbi of Posen from 1815 to 1837. Leo had educated Norbert at home until 1903, employing teaching methods of his own invention, except for a brief interlude when Norbert was 7 years of age. Earning his living teaching German and Slavic languages, Leo read widely and accumulated a personal library from which the young Norbert benefited greatly. Leo also had ample ability in mathematics and tutored his son in the subject until he left home. In his autobiography, Norbert described his father as calm and patient, unless he (Norbert) failed to give a correct answer, at which his father would lose his temper. In “The Theory of Ignorance”, a paper he wrote at the age of 10, he disputed “man’s presumption in declaring that his knowledge has no limits”, arguing that all human knowledge “is based on an approximation”, and acknowledging “the impossibility of being certain of anything.” He graduated from Ayer High School in 1906 at 11 years of age, and Wiener then entered Tufts College. He was awarded a BA in mathematics in 1909 at the age of 14, whereupon he began graduate studies of zoology at Harvard. In 1910 he transferred to Cornell to study philosophy. He graduated in 1911 at 17 years of age. Harvard and World War I The next year he returned to Harvard, while still continuing his philosophical studies. Back at Harvard, Wiener became influenced by Edward Vermilye Huntington, whose mathematical interests ranged from axiomatic foundations to engineering problems. Harvard awarded Wiener a PhD in June 1913, when he was only 19 years old, for a dissertation o
https://en.wikipedia.org/wiki/SGP
SGP may refer to: Events Secret Garden Party, a UK music festival Speedway Grand Prix, a series of motorcycling contests Symposium on Geometry Processing, of European Association For Computer Graphics Organisations Businesses Simmering-Graz-Pauker, an Austrian machine/vehicle manufacturer Stockland Corporation Limited, an Australian property developer (ASX ticker: SGP) Simply Good Production, a Russian video production agency Political parties Reformed Political Party (Staatkundig Gereformeerde Partij), the Netherlands Socialist Equality Party (Sozialistische Gleichheitspartei), Germany Scottish Green Party, Scotland Professional associations Sociedad de Gestión de Productores Fonográficos del Paraguay, for Paraguayan record producers Society of General Physiologists, for biomedical scientists Science Simplified General Perturbations model, for orbital calculations Social Golfer Problem, a problem in discrete mathematics Transport Shay Gap Airport, IATA airport code "SGP" Schweizer SGP 1-1, an American glider Subaru Global Platform, unibody automobile platform Other uses SGP, the ISO 3166-1 alpha-3 country code for Singapore sgp, the ISO 639-3 code for the Singpho dialect Stability and Growth Pact, the main EU fiscal agreement SpaceGhostPurrp, American rapper and record producer
https://en.wikipedia.org/wiki/Microsoft%20Developer%20Network
Microsoft Developer Network (MSDN) was the division of Microsoft responsible for managing the firm's relationship with developers and testers, such as hardware developers interested in the operating system (OS), and software developers developing on the various OS platforms or using the API or scripting languages of Microsoft's applications. The relationship management is situated in assorted media: web sites, newsletters, developer conferences, trade media, blogs and DVD distribution. Starting in January 2020, the website is fully integrated with Microsoft Docs. Websites MSDN's primary web presence at msdn.microsoft.com is a collection of sites for the developer community that provide information, documentation, and discussion that is authored both by Microsoft and by the community at large. Recently, Microsoft has placed emphasis on incorporation of forums, blogs, library annotations and social bookmarking to make MSDN an open dialog with the developer community rather than a one-way service. The main website, and most of its constituent applications below are available in 56 or more languages. Library MSDN Library is a library of official technical documentation intended for independent developers of software for Microsoft Windows. MSDN Library documents the APIs that ship with Microsoft products and also includes sample code, technical articles, and other programming information. The library was freely available on the web, with CDs and DVDs of the most recent materials initially issued quarterly as part of an MSDN subscription. However, since 2006, they can be freely downloaded from Microsoft Download Center in the form of ISO images. Visual Studio Express edition integrates only with MSDN Express Library, which is a subset of the full MSDN Library, although either edition of the MSDN Library can be freely downloaded and installed standalone. In Visual Studio 2010 MSDN Library is replaced with the new Help System, which is installed as a part of Visual Studio 2010 installation. Help Library Manager is used to install Help Content books covering selected topics. In 2016, Microsoft introduced the new technical documentation platform, Microsoft Docs, intended as a replacement of TechNet and MSDN libraries. Over the next two years, the content of MSDN Library was gradually migrated into Microsoft Docs. Now most of MSDN Library pages redirect to the corresponding Microsoft Docs pages. Integration with Visual Studio Each edition of MSDN Library can only be accessed with one help viewer (Microsoft Document Explorer or other help viewer), which is integrated with the then current single version or sometimes two versions of Visual Studio. In addition, each new version of Visual Studio does not integrate with an earlier version of MSDN. A compatible MSDN Library is released with each new version of Visual Studio and included on Visual Studio DVD. As newer versions of Visual Studio are released, newer editions of MSDN Library do not integra
https://en.wikipedia.org/wiki/Atari%208-bit%20family
The Atari 8-bit family is a series of 8-bit home computers introduced by Atari, Inc. in 1979 with the Atari 400 and Atari 800. As the first home computer architecture with coprocessors, it has graphics and sound more advanced than most of its contemporaries. Video games were a major appeal, and first-person space combat simulator Star Raiders is considered the platform's killer app. The "Atari 8-bit family" label was not contemporaneous. Atari, Inc., used the term "Atari 800 [or 400] home computer system", often combining the model names into "Atari 400/800" or "Atari home computers". The Atari 800 was packaged as a high-end model, and the 400 was more affordable. The 400 has a pressure-sensitive, spillproof membrane keyboard and initially shipped with 8 KB of RAM. The 800 has a conventional keyboard, a second (rarely used) cartridge slot, and allows easy RAM upgrades to 48K. Both use identical technology: the MOS Technology 6502 CPU at 1.79 MHz (1.77 MHz for PAL versions) and the same custom coprocessor chips. The plug-and-play peripherals use the Atari SIO serial bus, and one of the SIO developers eventually went on to co-patent USB (Universal Serial Bus). The core architecture of the Atari 8-bit family was reused in the 1982 Atari 5200 game console, but games for the two systems are incompatible. The 400 and 800 were replaced by multiple computers with the same technology and different presentation. The three models of the XL series were released in 1983: the 1200XL, 600XL, and 800XL. After the company was sold and reestablished, Atari Corporation released the XE series in 1985: the 65XE, also sold as the 800XE, and 130XE. The XL and XE are lighter in construction, have two joystick ports instead of four, and Atari BASIC is built-in. The 130XE has 128 KB of bank-switched RAM. In 1987, Atari Corporation repackaged the 65XE as a console, with an optional keyboard, as the Atari XEGS. It is backward compatible with computer software. Two million Atari 8-bit computers were sold during its major production run between late 1979 and mid-1985. In 1984, Atari reported 4 million owners of its computers and its 5200 game console combined. The 8-bit family was sold both in computer stores and department stores such as Sears using an in-store demo to attract customers. The primary global competition came when the similarly equipped Commodore 64 was introduced in 1982. In 1992, Atari Corporation officially dropped all remaining support for the 8-bit line. History Design of the 8-bit family started at Atari as soon as the Atari Video Computer System was released in late 1977. While designing the VCS in 1976, the engineering team from Atari Grass Valley Research Center (originally Cyan Engineering) said the system would have a three-year lifespan before becoming obsolete. They started blue sky designs for a new console that would be ready to replace it around 1979. They developed essentially a greatly updated version of the VCS, fixing its major limitations but sharing a
https://en.wikipedia.org/wiki/List%20of%20roads%20and%20highways
List of articles related to roads and highways around the world. International/World Asian Highway Network Arab Mashreq International Road Network Alaska Highway International E-road network Pan-American Highway Trans-African Highway network Interoceanic Highway Africa Botswana Madagascar Morocco South Africa Numbered routes in South Africa List of national routes in South Africa List of provincial routes in South Africa List of regional routes in South Africa List of metropolitan routes in South Africa Ring Roads in South Africa Zambia Roads in Zambia Asia Bangladesh List of roads in Bangladesh Cambodia Ancient Khmer Highway China National Trunk Highway System Ring roads of Beijing Expressways of Beijing China National Highways Expressways of China List of roads and streets in Hong Kong Daxue Road Central Beijing Road India Roads in India National Highways of India List of National Highways in India State highways in India Transport in India Indonesia List of toll roads in Indonesia Iran Iraq List of Highways in Iraq Israel Japan Road transport in Japan Korea, South Expressways in South Korea Kuwait Transport in Kuwait Malaysia Malaysian Federal Roads System Malaysian State Roads system Pakistan Motorways of Pakistan National Highways of Pakistan List of expressways of Pakistan Philippines Expressways of the Philippines Highways of the Philippines Singapore Sri Lanka Taiwan Highway System in Taiwan Thailand Thai highway network Turkey Vietnam Expressways of Vietnam Europe Belgium List of motorways in Belgium Czech Republic Highways in the Czech Republic Cyprus Motorways and roads in Cyprus Finland Highways in Finland France Route nationale Germany German autobahns List of federal highways in Germany Greece Highways in Greece Iceland List of roads in Iceland Ireland List of streets and squares in Dublin Italy State highway (Italy) Regional road (Italy) Provincial road (Italy) Malta Transport in Malta List of streets and piazzas in Valletta, Malta The Netherlands Poland Amber Road Voivodeship roads Portugal Roads in Portugal Romania Roads in Romania Russia Georgian Military Road Amur Cart Road (historical) Spain Sweden List of motorways in Sweden Route 136 (Öland, Sweden) Switzerland cantonal roads municipal roads List of highest roads in Switzerland United Kingdom Prehistoric roads Sweet track Icknield Way Roman roads Ermine Street Fosse Way Watling Street North America Canada Trans-Canada Highway United States Numbered highways in the United States List of Interstate Highways List of United States Numbered Highways Further information: Interstate Highway System United States Numbered Highway System Historic trails and roads in the United States South America Argentina List of highways in Argentina Bolivia Yungas Road Brazil List of highways in Brazil Rodovia Presidente Dutra Interoceanic Highway (unde
https://en.wikipedia.org/wiki/Helix%20%28multimedia%20project%29
Helix DNA was a project to produce computer software that can play audio and video media in various formats and aid in creating such media. It is intended as a largely free and open-source digital media framework that runs on numerous operating systems and processors (including mobile phones) and it was started by RealNetworks, which contributed much of the code. The Helix Community was an open collaborative effort to develop and extend the Helix DNA platform. The Helix Project has been discontinued. Helix DNA Client is a software package for multi-platform, multi-format media playback. Helix Player is a media player that runs on Linux, Solaris, Symbian, and FreeBSD and uses the Helix DNA Client. The Helix DNA Producer application aids in producing media files, and Helix DNA Server can stream media files over a network. Licenses The code is released in binary and source code form under various licenses, notably the proprietary RealNetworks Community Source License and the free and open source software RealNetworks Public Source License. Additionally, the Helix DNA Client and the Helix Player are licensed under the popular GNU General Public License (GPL) free and open source license. Use of the RDT, the default proprietary Real data transport, and of the RealVideo and RealAudio codecs requires binary components distributed under the Helix DNA Technology Binary Research Use License. Helix DNA Client Helix DNA Client powers many digital media applications, including RealPlayer for MS Windows, Mac OS and Linux (since version 10), RealPlayer Mobile, and Helix Player. It is used on Nokia, Motorola, Samsung and Sony Ericsson mobile phones. 800 million mobile phones with the Helix client have been shipped since 2004. It is also being used in embedded devices like the Internet Tablet OS from Nokia, which is found on the Nokia 770, N800 and N810 Internet Tablets. Cingular Video is also based on the framework. Other projects that use the Helix framework include RealNetwork's Rhapsody online music service, the Banshee and Amarok music players, and MediaReady 4000. Helix DNA also manifests itself as the RealPlayer on Mobile Internet Devices (MID) and on Netbooks. Developers from the Open Source Lab announced in 2007 they would use Helix technologies for content creation applications and collaboration in the One Laptop Per Child project. Helix DNA client contains support for the following media formats: Audio formats: Vorbis, AAC, AAC+, M4A, MP3, AMR-NB, AMR-WB, RealAudio, WMA, a-law, u-law and audio containers AIFF, WAV, AU Video formats: Theora, RealVideo, WMV, H.263, H.264, VC-1, H.261, MJPEG and container formats RealMedia file format, 3GP, 3G2, AVI, ASF Description formats: SMIL, SDP Image formats: JPEG, GIF, PNG Protocols: RTSP, RTP, HTTP, Multicast, RDT Helix DNA Client for Android Helix DNA Client for Android provides an HLS, MPEG-DASH, Verimatrix DRM and Microsoft PlayReady DRM media player for Android 2.2 to latest devices. Supporting
https://en.wikipedia.org/wiki/Discrete%20element%20method
A discrete element method (DEM), also called a distinct element method, is any of a family of numerical methods for computing the motion and effect of a large number of small particles. Though DEM is very closely related to molecular dynamics, the method is generally distinguished by its inclusion of rotational degrees-of-freedom as well as stateful contact and often complicated geometries (including polyhedra). With advances in computing power and numerical algorithms for nearest neighbor sorting, it has become possible to numerically simulate millions of particles on a single processor. Today DEM is becoming widely accepted as an effective method of addressing engineering problems in granular and discontinuous materials, especially in granular flows, powder mechanics, and rock mechanics. DEM has been extended into the Extended Discrete Element Method, taking heat transfer, chemical reaction and coupling to CFD and FEM into account. Discrete element methods are relatively computationally intensive, which limits either the length of a simulation or the number of particles. Several DEM codes, like molecular dynamics codes, take advantage of parallel processing capabilities (shared or distributed systems) to scale up the number of particles or length of the simulation. An alternative to treating all particles separately is to average the physics across many particles and thereby treat the material as a continuum. In the case of solid-like granular behavior as in soil mechanics, the continuum approach usually treats the material as elastic or elasto-plastic and models it with the finite element method or a mesh free method. In the case of liquid-like or gas-like granular flow, the continuum approach may treat the material as a fluid and use computational fluid dynamics. Drawbacks to homogenization of the granular scale physics, however, are well-documented and should be considered carefully before attempting to use a continuum approach. The DEM family The various branches of the DEM family are the distinct element method proposed by Peter A. Cundall and Otto D. L. Strack in 1979, the generalized discrete element method, the discontinuous deformation analysis (DDA) and the finite-discrete element method concurrently developed by several groups (e.g., Munjiza and Owen). The general method was originally developed by Cundall in 1971 to address problems in rock mechanics. It was later shown that DEM could be viewed as a generalized finite element method. Its application to geomechanics problems is described in the book Numerical Methods in Rock Mechanics. The 1st, 2nd and 3rd International Conferences on Discrete Element Methods have been a common point for researchers to publish advances in the method and its applications. Journal articles reviewing the state of the art have been published by Williams, Bicanic, and Bobet et al. (see below). A comprehensive treatment of the combined Finite Element-Discrete Element Method is contained in the book The Combined Finit
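The core of a soft-sphere DEM code is just a contact-force evaluation followed by explicit time integration. The one-dimensional C sketch below uses a linear spring-dashpot contact law; all parameter values are illustrative, and a real code would add neighbor sorting, rotational degrees of freedom, and a more careful integrator.

#include <math.h>
#include <stdio.h>

#define N 3
static double x[N] = {0.0, 0.021, 0.055};  /* particle centre positions (m) */
static double v[N] = {0.5, 0.0, -0.5};     /* velocities (m/s) */
static const double r = 0.01, m = 1e-3;    /* radius (m), mass (kg) */
static const double k = 1e4, c = 0.05;     /* contact stiffness, damping */

static void dem_step(double dt)
{
    double f[N] = {0};
    /* Contact detection and force evaluation: a linear spring-dashpot
       acts wherever two particles overlap. */
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++) {
            double overlap = 2 * r - fabs(x[j] - x[i]);
            if (overlap > 0) {
                double dir = (x[j] > x[i]) ? 1.0 : -1.0;  /* from i towards j */
                double fn = k * overlap - c * (v[j] - v[i]) * dir;
                f[i] -= fn * dir;
                f[j] += fn * dir;
            }
        }
    /* Explicit (forward Euler) time integration. */
    for (int i = 0; i < N; i++) {
        v[i] += f[i] / m * dt;
        x[i] += v[i] * dt;
    }
}

int main(void)
{
    for (int s = 0; s < 2000; s++)
        dem_step(1e-5);  /* dt well below the contact oscillation period */
    for (int i = 0; i < N; i++)
        printf("x[%d] = %.4f m, v[%d] = %.3f m/s\n", i, x[i], i, v[i]);
    return 0;
}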
https://en.wikipedia.org/wiki/Don%20Lancaster
Donald E. Lancaster was an American author, inventor, and microcomputer pioneer. Background Don graduated from North Allegheny High School in Wexford, Pennsylvania. He received a BSEE degree from Lafayette College in 1961, and a MSEE from Arizona State University in 1967. Lancaster was a writer and engineer, who wrote multiple articles for computer and electronics magazines of the 1970s, including Popular Electronics, Radio-Electronics, Dr. Dobb's Journal, 73 Magazine, and Byte. He has written books on electronics, computers, and entrepreneurship, both commercially published and self-published. One of his early projects was "TV Typewriter" serial terminal. The design was accepted by early microcomputer users as it used an ordinary television set for the display and could be built for around USD$200 in parts, at a time when commercial terminals were selling for over $1,000. Lancaster was an early advocate and developer of what is now known as print-on-demand technology. Lancaster produced his self-published books by re-purposing the game port of an Apple II to transfer PostScript code directly to a laser printer, rather than using a Macintosh running PageMaker. This enabled continuous book production using an inexpensive Apple II, rather than tying up an expensive Macintosh until the print run was complete. He formerly held a ham radio license (K3BYG). Publications IC books RTL Cookbook (1ed, 1969) (3ed, 2010, , archive) TTL Cookbook (1ed, 1974, , archive) CMOS Cookbook (1ed, 1977) (4ed, 2019, , archive) Active Filter Cookbook (1ed, 1975) (2ed, 1995, , archive) Project books TV Typewriter Cookbook (1ed, 1976) (3ed, 2010, , archive) Cheap Video Cookbook (1ed, 1978, , archive) Son of Cheap Video (1ed, 1980, , archive) Apple books Assembly Cookbook for Apple II/IIe (1ed, 1984) (3ed, 2011, ) Enhancing Your Apple II - Volume 1 (1ed, 1985, ) Enhancing Your Apple II and IIe - Volume 2 (1ed, 1985, ) Applewriter Cookbook (1ed, 1986, ) Programming books The Hexadecimal Chronicles (1981) Don Lancaster's Micro Cookbook (Sams, 1982) Other The Incredible Secret Money Machine (1978) The Incredible Secret Money Machine II Book-On-Demand Resource Kit The Case Against Patents: Selected Reprints from "Midnight Engineering" & "Nuts & Volts" Magazines (Synergetics Press, January 1996). Paperback References External links Don Lancaster's Guru's Lair (official site) List of Don's magazine articles American technology writers Amateur radio people
https://en.wikipedia.org/wiki/QuickBASIC
Microsoft QuickBASIC (also QB) is an Integrated Development Environment (or IDE) and compiler for the BASIC programming language that was developed by Microsoft. QuickBASIC runs mainly on DOS, though there was also a short-lived version for the classic Mac OS. It is loosely based on GW-BASIC but adds user-defined types, improved programming structures, better graphics and disk support and a compiler in addition to the interpreter. Microsoft marketed QuickBASIC as the introductory level for their BASIC Professional Development System. Microsoft marketed two other similar IDEs for C and Pascal, viz QuickC and QuickPascal. History Microsoft released the first version of QuickBASIC on August 18, 1985 on a single 5.25-inch 360 KB floppy disk. QuickBASIC version 2.0 and later contained an Integrated Development Environment (IDE), allowing users to edit directly in its on-screen text editor. Although still supported in QuickBASIC, line numbers became optional. Program jumps also worked with named labels. Later versions also added control structures, such as multiline conditional statements and loop blocks. Microsoft's "PC BASIC Compiler" was included for compiling programs into DOS executables. Beginning with version 4.0, the editor included an interpreter that allowed the programmer to run the program without leaving the editor. The interpreter was used to debug a program before creating an executable file. Unfortunately, there were some subtle differences between the interpreter and the compiler, which meant that large programs that ran correctly in the interpreter might fail after compilation, or not compile at all because of differences in the memory management routines. The last version of QuickBASIC was version 4.5 (1988), although development of the Microsoft BASIC Professional Development System (PDS) continued until its last release of version 7.1 in October 1990. At the same time, the QuickBASIC packaging was silently changed so that the disks used the same compression used for BASIC PDS 7.1. The Basic PDS 7.x version of the IDE was called QuickBASIC Extended (QBX), and it only ran on DOS, unlike the rest of Basic PDS 7.x, which also ran on OS/2. The successor to QuickBASIC and Basic PDS was Visual Basic version 1.0 for MS-DOS, shipped in Standard and Professional versions. Later versions of Visual Basic did not include DOS versions, as Microsoft concentrated on Windows applications. A subset of QuickBASIC 4.5, named QBasic, was included with MS-DOS 5 and later versions, replacing the GW-BASIC included with previous versions of MS-DOS. Compared to QuickBASIC, QBasic is limited to an interpreter only, lacks a few functions, can only handle programs of a limited size, and lacks support for separate program modules. Since it lacks a compiler, it cannot be used to produce executable files, although its program source code can still be compiled by a QuickBASIC 4.5, PDS 7.x or VBDOS 1.0 compiler, if available. QuickBASIC 1.00 for t
https://en.wikipedia.org/wiki/Zinf
Zinf is a free audio player for Unix-like and Windows operating systems. Zinf is released under the GNU General Public License. Zinf is a continuation of the FreeAmp project and uses the same source code. Technical features Zinf can play sound files in MP3, Vorbis, and WAV formats, among others. It supports skins and is part of the MusicBrainz network. The player features an optimized version of the Xing MPEG decoder, a powerful music browser and playlist editor, and a built-in download manager which supports downloading files from sites using the RMP (RealJukebox) download process. Zinf was also notable for handling all audio files based on their metadata (Author, Album, Song Title), and hiding more-technical details like actual locations and file names (but these features are now standard in many players). Naming Zinf is a recursive acronym that stands for "Zinf Is Not FreeAmp!" Use of the name FreeAmp had to be discontinued due to trademark issues, as "AMP" is a trademark of PlayMedia Systems, Inc. History/Funding The FreeAmp project was originally funded by EMusic, who paid the salaries of 3 developers working on the player. Later, Relatable joined EMusic to help support continued development. In January 2001, after two years of funding the project, EMusic pulled its support and subsequently fired the developers. The Zinf project was unable to find another sponsor, and development slowed greatly. The most recent release was made in early 2004. As of 2008, nearly all development of Zinf has ceased. Adoption Once a popular open-source Linux audio player, it has now been largely surpassed by newcomers such as Audacious, Amarok, Exaile, Banshee and (more recently) Songbird. This is largely because Zinf has not seen an official new release since early 2004, and many new features that are now standard in rival players have not been implemented, such as cover art and lyric support. In 2010 the zinf.com website was bought by a domain squatter in order to capitalize on the site's traffic. New links called "QnA" and "Ads", which redirect to the squatter's site, are now visible on the zinf.com website. References External links Audio player software that uses GTK Free software programmed in C++ Cross-platform software Free audio software Free media players Linux media players Windows media players 2002 software
https://en.wikipedia.org/wiki/Shorten%20%28codec%29
Shorten (SHN) is a file format used for compressing audio data. It is a form of data compression of files and is used to losslessly compress CD-quality audio files (44.1 kHz 16-bit stereo PCM). Shorten is no longer developed and other lossless audio codecs such as FLAC, Monkey's Audio (APE), TTA, and WavPack (WV) have become more popular. It is still in use to trade concert recordings that are already encoded as Shorten files. Shorten files use the .shn file extension. Handling Shorten files Since few players or media writers attempt to decompress Shorten files, a standalone decompression program is usually required to convert to a different file format that those applications can handle. Some Rockbox applications can play Shorten files without decompression, and third-party Shorten plug-ins exist for Nero Burning ROM, Foobar2000, and Winamp. All libavcodec based players and converters support the Shorten codec. Converting on Linux Current versions of ffmpeg or avconv support the Shorten format. To convert all .shn files in the current directory to FLAC on Linux:

for f in *.shn; do ffmpeg -i "$f" "${f/%.shn/.flac}"; done

There are also various GUI programs which can be used, like SoundConverter. Converting on Windows A similar command using the freely available ffmpeg for the Microsoft Windows command line, recursing into subdirectories:

for /r %i in (*.shn) do ffmpeg -i "%i" "%~dpni.flac"

For a GUI-based solution, dBpoweramp can be used; however, on a 64-bit version of Windows, the 32-bit version of the app must be installed, as the Shorten codec does not come in a 64-bit variant. To install the 32-bit version on a 64-bit system, hold down the right shift key and double-click the installer; keep it held down until the installer is on-screen. Converting on macOS: X Lossless Decoder (XLD), an open source graphical and command line application powered by the libsndfile and SoX libraries, supports transcoding Shorten files to a variety of lossless and lossy formats. ffmpeg is also available and can be interfaced with through the terminal identically to how it is used on Linux. See also FLAC MPEG-4 ALS Meridian Lossless Packing Monkey's Audio (APE) TTA WavPack References External links Shorten Research Paper, written by the author of Shorten and detailing how it works. Trader's Little Helper Download page. Trader's Little Helper converts shn to wav among other things etree.org Wiki article. etree.org is a trading site for authorized recordings of live performances; etree formerly used Shorten exclusively but is increasingly using FLAC. Shorten FAQ (note: If looking for software to play .shn files, you will probably be better served by the etree software page, as the Shorten FAQ has many broken and outdated links.) Lossless audio formats, a performance comparison of lossless audio formats, including Shorten. A Small SHN and MD5 FAQ Includes a decent list of programs to handle Shorten files. Lossless audio codecs Cross-platform software
https://en.wikipedia.org/wiki/DICOM
Digital Imaging and Communications in Medicine (DICOM) is the standard for the communication and management of medical imaging information and related data. DICOM is most commonly used for storing and transmitting medical images enabling the integration of medical imaging devices such as scanners, servers, workstations, printers, network hardware, and picture archiving and communication systems (PACS) from multiple manufacturers. It has been widely adopted by hospitals and is making inroads into smaller applications such as dentists' and doctors' offices. DICOM files can be exchanged between two entities that are capable of receiving image and patient data in DICOM format. The different devices come with DICOM Conformance Statements which state which DICOM classes they support. The standard includes a file format definition and a network communications protocol that uses TCP/IP to communicate between systems. The National Electrical Manufacturers Association (NEMA) holds the copyright to the published standard which was developed by the DICOM Standards Committee, whose members are also partly members of NEMA. It is also known as NEMA standard PS3, and as ISO standard 12052:2017 "Health informatics – Digital imaging and communication in medicine (DICOM) including workflow and data management". Applications DICOM is used worldwide to store, exchange, and transmit medical images. DICOM has been central to the development of modern radiological imaging: DICOM incorporates standards for imaging modalities such as radiography, ultrasonography, computed tomography (CT), magnetic resonance imaging (MRI), and radiation therapy. DICOM includes protocols for image exchange (e.g., via portable media such as DVDs), image compression, 3-D visualization, image presentation, and results reporting. Parts of the standard The DICOM standard is divided into related but independent parts. History DICOM is a standard developed by American College of Radiology (ACR) and National Electrical Manufacturers Association (NEMA). In the beginning of the 1980s, it was very difficult for anyone other than manufacturers of computed tomography or magnetic resonance imaging devices to decode the images that the machines generated. Radiologists and medical physicists wanted to use the images for dose-planning for radiation therapy. ACR and NEMA collaborated and formed a standard committee in 1983. Their first standard, ACR/NEMA 300, entitled "Digital Imaging and Communications", was released in 1985. Very soon after its release, it became clear that improvements were needed. The text was vague and had internal contradictions. In 1988 the second version was released. This version gained more acceptance among vendors. The image transmission was specified as over a dedicated 2 pair cable (EIA-485). The first demonstration of ACR/NEMA V2.0 interconnectivity technology was held at Georgetown University, May 21–23, 1990. Six companies participated in this event, DeJarnette R
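At the file level, conformance is easy to probe: Part 10 of the standard defines a 128-byte preamble followed by the magic bytes "DICM". A small C check might look like the sketch below; note that files predating the Part 10 file format may lack the preamble, so a negative result is not conclusive.

#include <stdio.h>
#include <string.h>

/* Return nonzero if the file carries the DICOM Part 10 file meta header:
   a 128-byte preamble followed by the four characters "DICM". */
int looks_like_dicom(const char *path)
{
    unsigned char hdr[132];
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    size_t n = fread(hdr, 1, sizeof hdr, f);
    fclose(f);
    return n == sizeof hdr && memcmp(hdr + 128, "DICM", 4) == 0;
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s: %s\n", argv[1],
               looks_like_dicom(argv[1]) ? "DICOM Part 10 header found"
                                         : "no DICM marker");
    return 0;
}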
https://en.wikipedia.org/wiki/Palermo%20Technical%20Impact%20Hazard%20Scale
The Palermo Technical Impact Hazard Scale is a logarithmic scale used by astronomers to rate the potential hazard of impact of a near-Earth object (NEO). It combines two types of data—probability of impact and estimated kinetic yield—into a single "hazard" value. A rating of 0 means the hazard is equivalent to the background hazard (defined as the average risk posed by objects of the same size or larger over the years until the date of the potential impact). A rating of +2 would indicate the hazard is 100 times as great as a random background event. Scale values less than −2 reflect events for which there are no likely consequences, while Palermo Scale values between −2 and 0 indicate situations that merit careful monitoring. A similar but less complex scale is the Torino Scale, which is used for simpler descriptions in the non-scientific media. As of June 2023, one asteroid has a cumulative Palermo Scale value above −2: 101955 Bennu (−1.41). Seven have cumulative Palermo Scale values between −2 and −3: (29075) 1950 DA (−2.05), 1979 XB (−2.72), 2021 EU (−2.74), (−2.79), (−2.83), (−2.98), and (−2.98). Of those that have a cumulative Palermo Scale value between −3 and −4, one was discovered in 2023: 2023 DO (−3.60). Scale The scale compares the likelihood of the detected potential impact with the average risk posed by objects of the same size or larger over the years until the date of the potential impact. This average risk from random impacts is known as the background risk. The Palermo Scale value, P, is defined by the equation: P = log₁₀(pᵢ / (fB · T)) where pᵢ is the impact probability, T is the time interval over which pᵢ is considered, and fB is the background impact frequency. The background impact frequency is defined for this purpose as: fB = 0.03 × E^(−4/5) yr⁻¹ where the energy threshold E is measured in megatons, and yr is the unit of T divided by one year. Positive rating In 2002 the near-Earth object reached a positive rating on the scale of 0.18, indicating a higher-than-background threat. The value was subsequently lowered after more measurements were taken. is no longer considered to pose any risk and was removed from the Sentry Risk Table on 1 August 2002. In September 2002, the highest Palermo rating was that of asteroid (29075) 1950 DA, with a value of 0.17 for a possible collision in the year 2880. By March 2022, the rating had been reduced to −2.0. For a brief period in late December 2004, with an observation arc of 190 days, asteroid (then known only by its provisional designation ) held the record for the highest Palermo scale value, with a value of 1.10 for a possible collision in the year 2029. The 1.10 value indicated that a collision with this object was considered to be almost 12.6 times as likely as a random background event: 1 in 37 instead of 1 in 472. With further observation through 2021 there is no risk from Apophis for the next 100+ years. See also Asteroid impact avoidance Asteroid impact prediction Earth-grazing fireball Impact event List of asteroid
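Putting the two definitions together, the scale value is straightforward to compute. The C helper below mirrors the formulas above, and the check in main reproduces the "almost 12.6 times background" figure quoted for the +1.10 rating; the function name is illustrative.

#include <math.h>
#include <stdio.h>

/* Palermo scale: P = log10(pi / (fB * T)),
   with background frequency fB = 0.03 * E^(-4/5) per year
   (E = impact energy in megatons, T = time to impact in years). */
double palermo_scale(double impact_prob, double years, double energy_mt)
{
    double fB = 0.03 * pow(energy_mt, -0.8);
    return log10(impact_prob / (fB * years));
}

int main(void)
{
    /* A rating of P corresponds to 10^P times the background risk,
       so P = +1.10 means about 12.6 times background. */
    printf("10^1.10 = %.1f\n", pow(10.0, 1.10));
    return 0;
}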
https://en.wikipedia.org/wiki/Static%20random-access%20memory
Static random-access memory (static RAM or SRAM) is a type of random-access memory (RAM) that uses latching circuitry (flip-flop) to store each bit. SRAM is volatile memory; data is lost when power is removed. The term static differentiates SRAM from DRAM (dynamic random-access memory): SRAM will hold its data permanently in the presence of power, while data in DRAM decays in seconds and thus must be periodically refreshed. SRAM is faster than DRAM but it is more expensive in terms of silicon area and cost. SRAM is typically used for the cache and internal registers of a CPU while DRAM is used for a computer's main memory. History Semiconductor bipolar SRAM was invented in 1963 by Robert Norman at Fairchild Semiconductor. MOS SRAM was invented in 1964 by John Schmidt at Fairchild Semiconductor. It was a 64-bit MOS p-channel SRAM. SRAM has been a main driver behind each new CMOS-based fabrication process since CMOS was invented in 1963. In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell, using a transistor gate and tunnel diode latch. They replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. That year they submitted an invention closure, but it was initially rejected. In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, with 80 transistors, 64 resistors, and 4 diodes. In April 1969, Intel introduced its first product, the Intel 3101, an SRAM chip intended to replace bulky magnetic-core memory modules. Its capacity was 64 bits (only 63 bits were usable due to a bug), it was based on bipolar junction transistors, and its layout was designed using rubylith. Characteristics Though it can be characterized as volatile memory, SRAM exhibits data remanence. SRAM offers a simple data access model and does not require a refresh circuit. Performance and reliability are good and power consumption is low when idle. Since SRAM requires more transistors per bit to implement, it is less dense and more expensive than DRAM and also has a higher power consumption during read or write access. The power consumption of SRAM varies widely depending on how frequently it is accessed. Applications Embedded use Many categories of industrial and scientific subsystems, automotive electronics, and similar embedded systems, contain SRAM which, in this context, may be referred to as ESRAM. Some amount (kilobytes or less) is also embedded in practically all modern appliances, toys, etc. that implement an electronic user interface. SRAM in its dual-ported form is sometimes used for real-time digital signal processing circuits. In computers SRAM is also used in personal computers, workstations, routers and peripheral equipment: CPU register files, internal CPU caches, internal GPU caches and external burst mode SRAM caches, hard disk buffers, router buffers, etc. LCD screens and printers a
https://en.wikipedia.org/wiki/Warchalking
Warchalking is the drawing of symbols in public places to advertise an open Wi-Fi network. Inspired by hobo symbols, the warchalking marks were conceived by a group of friends in June 2002 and publicised by Matt Jones, who designed the set of icons and produced a downloadable document containing them. Within days of Jones publishing a blog entry about warchalking, articles appeared in dozens of publications and stories appeared on several major television news programs around the world. The word is formed by analogy to wardriving, the practice of driving around an area in a car to detect open Wi-Fi nodes. That term in turn is based on wardialing, the practice of dialing many phone numbers hoping to find a modem. Having found a Wi-Fi node, the warchalker draws a special symbol on a nearby object, such as a wall, the pavement, or a lamp post. Those offering Wi-Fi service might also draw such a symbol to advertise the availability of their Wi-Fi location, whether commercial or personal. See also Hotspot (Wi-Fi) Mesh networking SSID Wifi analyzer NetStumbler Wardriving Hobo code
https://en.wikipedia.org/wiki/Wi-Fi
Wi-Fi () is a family of wireless network protocols based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks, used globally in home and small office networks to link devices together and to provide Internet access with wireless routers and wireless access points, and in public places such as coffee shops, hotels, libraries, and airports to provide visitors with Internet access. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term "Wi-Fi Certified" to products that successfully complete interoperability certification testing. The Wi-Fi Alliance consists of more than 800 companies from around the world, and over 3.05 billion Wi-Fi-enabled devices are shipped globally each year. Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to work seamlessly with its wired sibling, Ethernet. Compatible devices can network through wireless access points with each other as well as with wired devices and the Internet. Different versions of Wi-Fi are specified by various IEEE 802.11 protocol standards, with different radio technologies determining radio bands, maximum ranges, and speeds that may be achieved. Wi-Fi most commonly uses the UHF and SHF radio bands; these bands are subdivided into multiple channels. Channels can be shared between networks, but, within range, only one transmitter can transmit on a channel at a time. Wi-Fi's radio bands work best for line-of-sight use. Many common obstructions such as walls, pillars, home appliances, etc. may greatly reduce range, but this also helps minimize interference between different networks in crowded environments. The range of an access point is about 20 metres (66 feet) indoors, while some access points claim a range of up to 150 metres (490 feet) outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves or as large as many square kilometres using many overlapping access points with roaming permitted between them. Over time, the speed and spectral efficiency of Wi-Fi have increased. Some versions of Wi-Fi, running on suitable hardware at close range, can achieve speeds of 9.6 Gbit/s (gigabit per second). History A 1985 ruling by the U.S. Federal Communications Commission released parts of the ISM bands for unlicensed use for communications. These frequency bands include the same 2.4 GHz bands used by equipment such as microwave ovens, and are thus subject to interference. In 1991 in the Netherlands, the NCR Corporation and AT&T invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. NCR's Vic Hayes, who held the chair of IEEE 802.11 for ten years, along with Bell Labs engineer Bruce Tuch, approached the Institute of Electrical and Electronics Engineers (IEEE) to create a standard and were involved in designing the initial 802.11b and 802.11a specifications within the IEEE. T
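As a concrete example of the channelization mentioned above, channels in the 2.4 GHz band lie 5 MHz apart starting from 2412 MHz; the helper below is our own illustrative sketch (it covers only channels 1 through 13, omitting the special-case channel 14):

```python
# 2.4 GHz Wi-Fi channels are spaced 5 MHz apart starting at 2412 MHz for
# channel 1, even though an 802.11 channel is roughly 20 MHz wide, which
# is why neighbouring channels overlap and can interfere.
def channel_center_mhz(channel: int) -> int:
    if not 1 <= channel <= 13:
        raise ValueError("expected a 2.4 GHz channel number between 1 and 13")
    return 2412 + 5 * (channel - 1)

# The classic non-overlapping trio in the 2.4 GHz band:
print([channel_center_mhz(c) for c in (1, 6, 11)])   # [2412, 2437, 2462]
```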
https://en.wikipedia.org/wiki/Bresenham%27s%20line%20algorithm
Bresenham's line algorithm is a line drawing algorithm that determines the points of an n-dimensional raster that should be selected in order to form a close approximation to a straight line between two points. It is commonly used to draw line primitives in a bitmap image (e.g. on a computer screen), as it uses only integer addition, subtraction, and bit shifting, all of which are very cheap operations in historically common computer architectures. It is an incremental error algorithm, and one of the earliest algorithms developed in the field of computer graphics. An extension to the original algorithm called the midpoint circle algorithm may be used for drawing circles. While algorithms such as Wu's algorithm are also frequently used in modern computer graphics because they can support antialiasing, Bresenham's line algorithm is still important because of its speed and simplicity. The algorithm is used in hardware such as plotters and in the graphics chips of modern graphics cards. It can also be found in many software graphics libraries. Because the algorithm is very simple, it is often implemented in either the firmware or the graphics hardware of modern graphics cards. The label "Bresenham" is used today for a family of algorithms extending or modifying Bresenham's original algorithm. History Bresenham's line algorithm is named after Jack Elton Bresenham who developed it in 1962 at IBM. In 2001 Bresenham wrote: I was working in the computation lab at IBM's San Jose development lab. A Calcomp plotter had been attached to an IBM 1401 via the 1407 typewriter console. [The algorithm] was in production use by summer 1962, possibly a month or so earlier. Programs in those days were freely exchanged among corporations so Calcomp (Jim Newland and Calvin Hefte) had copies. When I returned to Stanford in Fall 1962, I put a copy in the Stanford comp center library. A description of the line drawing routine was accepted for presentation at the 1963 ACM national convention in Denver, Colorado. It was a year in which no proceedings were published, only the agenda of speakers and topics in an issue of Communications of the ACM. A person from the IBM Systems Journal asked me after I made my presentation if they could publish the paper. I happily agreed, and they printed it in 1965. Bresenham's algorithm has been extended to produce circles, ellipses, cubic and quadratic Bézier curves, as well as native anti-aliased versions of those. Method The following conventions will be used: the top-left is (0,0) such that pixel coordinates increase in the right and down directions (e.g. the pixel at (7,4) is directly above the pixel at (7,5)), and the pixel centers have integer coordinates. The endpoints of the line are the pixels at (x0, y0) and (x1, y1), where the first coordinate of the pair is the column and the second is the row. The algorithm will be initially presented only for the octant in which the segment goes down and to the right (x0 ≤ x1 and y0 ≤ y1), and its horiz
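A direct transcription of the integer arithmetic for the octant described above, with the coordinate conventions just given; this is a sketch of the standard formulation, not the exact code from Bresenham's paper:

```python
def bresenham_line(x0: int, y0: int, x1: int, y1: int) -> list[tuple[int, int]]:
    """Integer-only Bresenham for the octant described above:
    x0 <= x1, y0 <= y1, and the line more horizontal than vertical."""
    dx = x1 - x0
    dy = y1 - y0
    points = []
    d = 2 * dy - dx          # decision variable (accumulated error term)
    y = y0
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if d > 0:            # the ideal line has passed the pixel midpoint:
            y += 1           # step to the next row (rows grow downward here)
            d -= 2 * dx
        d += 2 * dy
    return points

# (0,1) to (6,4): a gentle slope drawn with additions and comparisons only.
print(bresenham_line(0, 1, 6, 4))
# [(0, 1), (1, 1), (2, 2), (3, 2), (4, 3), (5, 3), (6, 4)]
```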
https://en.wikipedia.org/wiki/Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.). According to some on-line dictionaries, a multiprocessor is a computer system having two or more processing units (multiple processors) each sharing main memory and peripherals, in order to simultaneously process programs. A 2009 textbook defined a multiprocessor system similarly, noting that the processors may share "some or all of the system’s memory and I/O facilities"; it also gave tightly coupled system as a synonymous term. At the operating system level, multiprocessing is sometimes used to refer to the execution of multiple concurrent processes in a system, with each process running on a separate CPU or core, as opposed to a single process at any one instant. When used with this definition, multiprocessing is sometimes contrasted with multitasking, which may use just a single processor but switch it in time slices between tasks (i.e. a time-sharing system). Multiprocessing, however, means true parallel execution of multiple processes using more than one processor. Multiprocessing does not necessarily mean that a single process or task uses more than one processor simultaneously; the term parallel processing is generally used to denote that scenario. Other authors prefer to refer to the operating system techniques as multiprogramming and reserve the term multiprocessing for the hardware aspect of having more than one processor. The remainder of this article discusses multiprocessing only in this hardware sense. In Flynn's taxonomy, multiprocessors as defined above are MIMD machines. As the term "multiprocessor" normally refers to tightly coupled systems in which all processors share memory, multiprocessors are not the entire class of MIMD machines, which also contains message passing multicomputer systems. Pre-history Possibly the first expression of the idea of multiprocessing was written by Luigi Federico Menabrea in 1842, about Charles Babbage's analytical engine (as translated by Ada Lovelace): "the machine can be brought into play so as to give several results at the same time, which will greatly abridge the whole amount of the processes." Key topics Processor symmetry In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determine the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one particular CPU respond to all hardware interrupts, whereas all other w
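As a concrete illustration of the process-level parallelism contrasted with time-sliced multitasking above, the sketch below fans CPU-bound work out to one operating-system process per core; it is a minimal example, not a definitive pattern:

```python
# Each worker runs as a separate OS process that the scheduler can place on
# its own CPU or core, unlike time-sliced multitasking within one processor.
from multiprocessing import Pool
import os

def count_primes(limit: int) -> int:
    """Deliberately CPU-bound work, so running it in parallel is meaningful."""
    return sum(n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
               for n in range(limit))

if __name__ == "__main__":
    print(f"{os.cpu_count()} CPUs available")
    with Pool() as pool:                    # one worker process per CPU by default
        results = pool.map(count_primes, [50_000] * 4)
    print(results)
```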
https://en.wikipedia.org/wiki/Locality%20of%20reference
In computer science, locality of reference, also known as the principle of locality, is the tendency of a processor to access the same set of memory locations repetitively over a short period of time. There are two basic types of reference locality: temporal and spatial locality. Temporal locality refers to the reuse of specific data and/or resources within a relatively small time duration. Spatial locality (also termed data locality) refers to the use of data elements within relatively close storage locations. Sequential locality, a special case of spatial locality, occurs when data elements are arranged and accessed linearly, such as traversing the elements in a one-dimensional array. Locality is a type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are good candidates for performance optimization through the use of techniques such as caching, prefetching for memory, and advanced branch predictors at the pipelining stage of a processor core. Types of locality There are several different types of locality of reference: Temporal locality: If at one point a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future. There is temporal proximity between adjacent references to the same memory location. In this case it is common to make efforts to store a copy of the referenced data in faster memory storage, to reduce the latency of subsequent references. Temporal locality is a special case of spatial locality (see below), namely when the prospective location is identical to the present location. Spatial locality: If a particular storage location is referenced at a particular time, then it is likely that nearby memory locations will be referenced in the near future. In this case it is common to attempt to guess the size and shape of the area around the current reference for which it is worthwhile to prepare faster access for subsequent reference. Memory locality (or data locality): Spatial locality explicitly relating to memory. Branch locality: If there are only a few possible alternatives for the prospective part of the path in the spatial-temporal coordinate space. This is the case when an instruction loop has a simple structure, or the possible outcome of a small system of conditional branching instructions is restricted to a small set of possibilities. Branch locality is typically not spatial locality since the few possibilities can be located far away from each other. Equidistant locality: Halfway between spatial locality and branch locality. Consider a loop accessing locations in an equidistant pattern, i.e., the path in the spatial-temporal coordinate space is a dotted line. In this case, a simple linear function can predict which location will be accessed in the near future. In order to benefit from temporal and spatial locality, which occur frequently, most of the information storage systems are hie
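A minimal sketch of spatial locality in action, assuming NumPy's default row-major memory layout; absolute timings vary by machine, but the sequential row-wise walk is typically several times faster than the strided column-wise one:

```python
import time
import numpy as np

a = np.random.rand(4000, 4000)      # row-major (C order) by default

def sum_rows() -> float:
    # Sequential memory walk: consecutive elements share cache lines.
    return sum(a[i, :].sum() for i in range(a.shape[0]))

def sum_cols() -> float:
    # Strided walk: each step jumps a whole row (4000 floats) in memory.
    return sum(a[:, j].sum() for j in range(a.shape[1]))

for fn in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    fn()
    print(fn.__name__, round(time.perf_counter() - t0, 3), "s")
```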
https://en.wikipedia.org/wiki/Norwegian%20Computing%20Center
Norwegian Computing Center (NR, in Norwegian: Norsk Regnesentral) is a private, independent, non-profit research foundation. NR carries out contract research and development in the areas of computing and quantitative methods for a broad range of industrial, commercial and public service organizations in Norway and internationally. NR is one of Europe's largest research environments in applied statistics and its projects cover a large variety of applied and academic problems. NR's offices are located near the university campus Blindern in Oslo, and adjacent to Oslo Science Park (Forskningsparken). History NR was established in 1952. Until 1970 an important part of the activity was to perform mathematical computations for other organizations. NR has worked with data communication since 1963. The Simula programming language was designed and built by Ole-Johan Dahl and Kristen Nygaard and their research group at the Norwegian Computing Center in Oslo between 1962 and 1967. After 1970 NR has been a methodological research institute. In 1985, NR became an independent institute and moved to its present location in 1988. It has worked with the Internet since 1973, ICT security since 1988, multimedia since 1994, e-Inclusion since 2005. It started working with remote sensing in 1982, geostatistics and petroleum in 1983, marine resources in 1988 and electricity prices and finance in 1994. NR employees Ole-Johan Dahl and Kristen Nygaard received the Turing Award in 2001 and the 2002 IEEE John von Neumann Medal for the introduction of the concepts underlying object-oriented programming through the design and implementation of Simula 67. A book about the history of NR, Norsk Regnesentrals historie 1952 - 2002 was published in 2002. Scientific departments The Department of Applied Research in Information Technology (DART) works with project-oriented applied research within multimedia, information security, information privacy and risks, universal design, and e-inclusion. In addition to research, DART's work covers concept studies, analysis, consultancy, prototyping, training, development, and evaluation. The Department of Statistical Analysis, Image Analysis, and Pattern Recognition (SAMBA) works with project-oriented applied research in all areas of mathematical statistics. The main application areas are Statistics for Climate, Environment, Marine Resources and Health, Statistics for Finance, Insurance and Commodity Markets, Statistics for Technology, Industry and Administration, Earth Observation, and Image Analysis and Pattern Recognition. The Department of Statistical Analysis of Natural Resource Data (SAND) works with project-oriented applied research statistics related to the petroleum industry. The group is a significant international contributor to research and services within reservoir description, stochastic modeling and geostatistics for the petroleum industry. The primary goal is to use statistical methods to reduce and quantify risk and
https://en.wikipedia.org/wiki/Computer%20Olympiad
The Computer Olympiad is a multi-games event in which computer programs compete against each other. For many games, the Computer Olympiads are an opportunity to claim the "world's best computer player" title. First contested in 1989, the majority of the games are board games but other games such as bridge take place as well. In 2010, several puzzles were included in the competition. History Developed in the 1980s by David Levy, the first Computer Olympiad took place in 1989 at the Park Lane Hotel in London. The games ran on a yearly basis until after the 1992 games, when the Olympiad's ruling committee was unable to find a new organiser. This resulted in the games being suspended until 2000 when the Mind Sports Olympiad resurrected them. Recently, the International Computer Games Association (ICGA) has adopted the Computer Olympiad and tries to organise the event on an annual basis. Games contested The games which have been contested at each Olympiad are: 1st–4th Olympiads (1989–1992) 5th–9th Olympiads (2000–2004) After an eight-year hiatus, the Computer Olympiad was revived by bringing it into the Mind Sports Olympiad. The chess competition was a special event, since it was adopted by the International Computer Chess Association (ICCA) as the 17th World Microcomputer Chess Championship (WMCC 2000). The 5th Olympiad was in 2000 at London's Alexandra Palace; the 6th, in 2001 at Ad Fundunm at Maastricht University; the 7th, in 2002 in Maastricht; the 8th, in 2003 in Graz; and the 9th, in 2004 in Ramat Gan. The 7th Olympiad was adopted by the ICCA as the 10th World Computer Chess Championship (WCCC), and the 8th was held in conjunction with both the 11th WCCC and the 10th Advances in Computer Games Conference. Because of this, no medals were awarded for the two chess events. The 9th was held in conjunction with the WCCC and the Computers and Games 2004 Conference; no medals were awarded to the two chess events. Jonathan Schaeffer and J. W. H. M. Uiterwijk were the tournament directors. 10th–14th Olympiads (2005–2009) The 10th Olympiad was in 2005 in Taipei; the 11th, in 2006 in Turin; the 12th, in 2007 at the Amsterdam Science Park; the 13th, in 2008 at the Beijing Golden Century Golf Club; and the 14th, in 2009 in Pamplona. The 10th Olympiad was held at the same time and location as the 11th Advances in Computer Games and its organizing committee was made up of J. W. Hellemons (chair), H. H. L. M. Donkers, M. Greenspan, T-s Hsu, H. J. van den Herik, and M. Tiessen. Hand Talk, which won the gold medal in Computer Go, was originally written in assembly language by a retired chemistry professor of Sun Yat-sen University, China. The 11th Olympiad was held in conjunction with the 14th World Computer Chess Championship and the 5th Computer and Games Conference. The human 37th Chess Olympiad (FIDE) was co-hosted with this event; the 12th, with the 15th World Computer Chess Championship and the Computer Games Workshop; the 13th, with the International Computer Ga
https://en.wikipedia.org/wiki/Ole-Johan%20Dahl
Ole-Johan Dahl (12 October 1931 – 29 June 2002) was a Norwegian computer scientist. Dahl was a professor of computer science at the University of Oslo and is considered to be one of the fathers of Simula and object-oriented programming along with Kristen Nygaard. Career Dahl was born in Mandal, Norway. He was the son of Finn Dahl (1898–1962) and Ingrid Othilie Kathinka Pedersen (1905–80). When he was seven, his family moved to Drammen. When he was thirteen, the whole family fled to Sweden during the German occupation of Norway in World War II. After the war's end, Dahl studied numerical mathematics at the University of Oslo. Dahl became a full professor at the University of Oslo in 1968 and was a gifted teacher as well as a researcher. Here he worked on Hierarchical Program Structures, probably his most influential publication, which appeared, co-authored with C.A.R. Hoare, in the book Structured Programming (1972) by Dahl, Edsger Dijkstra, and Hoare, perhaps the best-known academic book concerning software in the 1970s. As his career advanced, Dahl grew increasingly interested in the use of formal methods, to rigorously reason about object-orientation for example. His expertise ranged from the practical application of ideas to their formal mathematical underpinning to ensure the validity of the approach. Dahl is widely accepted as Norway's foremost computer scientist. With Kristen Nygaard, he produced the initial ideas for object-oriented (OO) programming in the 1960s at the Norwegian Computing Center (Norsk Regnesentral (NR)) as part of the Simula I (1961–1965) and Simula 67 (1965–1968) simulation programming languages, which began as an extended variant and superset of ALGOL 60. Dahl and Nygaard were the first to develop the concepts of class, subclass (allowing implicit information hiding), inheritance, dynamic object creation, etc., all important aspects of the OO paradigm. An object is a self-contained component (with a data structure and associated procedures or methods) in a software system. These are combined to form a complete system. The object-oriented approach is now pervasive in modern software development, including widely used imperative programming languages such as C++ and Java. He received the Turing Award for his work in 2001 (with Kristen Nygaard). He received the 2002 Institute of Electrical and Electronics Engineers (IEEE) John von Neumann Medal (with Kristen Nygaard) and was named Commander of the Royal Norwegian Order of St. Olav in 2000. The Association Internationale pour les Technologies Objets named the Dahl-Nygaard Prize after Dahl. See also List of pioneers in computer science
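The concepts credited to Dahl and Nygaard above translate directly into modern object-oriented languages; the toy hierarchy below is invented for illustration (it is not drawn from Simula sources) and shows class, subclass, inheritance, and dynamic object creation:

```python
class Vehicle:                       # a class bundles data with its procedures
    def __init__(self, speed: float):
        self.speed = speed           # data structure ("attributes")
    def describe(self) -> str:       # associated procedure ("method")
        return f"moves at {self.speed} km/h"

class Bus(Vehicle):                  # a subclass inherits from and extends Vehicle
    def __init__(self, speed: float, seats: int):
        super().__init__(speed)
        self.seats = seats
    def describe(self) -> str:
        return super().describe() + f" with {self.seats} seats"

fleet = [Vehicle(90.0), Bus(70.0, 40)]   # objects created dynamically at run time
print([v.describe() for v in fleet])
```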
https://en.wikipedia.org/wiki/PowerBASIC
PowerBASIC, formerly Turbo Basic, is the brand of several commercial compilers by PowerBASIC Inc. that compile a dialect of the BASIC programming language. There are both MS-DOS and Windows versions, and two kinds of the latter: Console and Windows. The MS-DOS version has a syntax similar to that of QBasic and QuickBASIC. The Windows versions use a BASIC syntax expanded to include many Windows functions, and the statements can be combined with calls to the Windows API. History The first version of the DOS compiler was published as BASIC/Z, the very first interactive compiler for CP/M and MDOS. Later it was extended to MS-DOS/PC DOS and in 1987 Borland distributed it as Turbo Basic. Turbo Basic was originally created by Robert "Bob" Zale (1945–2012) and bought from him by Borland. When Borland decided to stop publishing it (1989), Zale bought it back from them, renamed it PowerBASIC and set up PowerBASIC Inc. to continue support and development of it; it was later called PBDOS. PowerBASIC went on to develop BASIC compilers for Windows, first PBWIN — their flagship product — and then PBCC, described below. On November 6, 2012, Robert Zale, the creator of PowerBASIC, died. For a time, it was assumed that the company might cease operations. His wife, Vivian Zale, posted on 8 March 2014 to the PowerBASIC forums a statement that the company would continue in operation. On May 10, 2015, Vivian Zale announced that work was continuing on new versions of PowerBASIC compilers. On November 2, 2016, Vivian Zale announced her intention to seek a buyer for the company. On January 31, 2017, Adam Drake announced Drake Software had acquired the PowerBASIC source code from PowerBASIC, Inc., with the intention of updating and improving the functionality of the product. This was later confirmed by Vivian Zale with a forum post thanking the members for their support. When Bob Zale died, PBWin11 and PBCC7 were in beta testing, and 64-bit compilers and PB/Pro (PBWin and CC in one compiler) were in the alpha stages. However, development of PowerBASIC products has stopped. No new version has been released since v10.03 (11 years ago as of May 2022). No 64-bit version or beta release has been announced. No development activity has been reported. No corrections (such as adding the correct DPI settings for the IDE) have been released. PowerBASIC Tools LLC still sells new licenses for the 32-bit Windows compilers. Compilers PowerBASIC programs are self-contained and use no runtime file to execute. In all versions of the compiler, the applications compile without external libraries, though it can use such libraries if needed. PBDOS creates 16-bit DOS MZ executable files, while PBWIN and PBCC create 32-bit Portable Executable (PE) files. Turbo Basic Borland's Turbo Basic contains extensions to classic BASIC (without breaking compatibility), such as a drawing API and mouse access. Unlike most BASIC implementations of its time, Turbo Basic was a full compiler which gener
https://en.wikipedia.org/wiki/Reed%27s%20law
Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network. The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either the number of participants, N, or the number of possible pair connections, N(N − 1)/2 (which follows Metcalfe's law), so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system. Derivation Given a set A of N people, it has 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing for each element of A one of two possibilities: whether to include that element, or not. However, this includes the (one) empty set, and N singletons, which are not properly subgroups. So 2^N − N − 1 subsets remain, which is exponential, like 2^N. Quote From David P. Reed's "The Law of the Pack" (Harvard Business Review, February 2001, pp 23–4): "[E]ven Metcalfe's law understates the value created by a group-forming network [GFN] as it grows. Let's say you have a GFN with n members. If you add up all the potential two-person groups, three-person groups, and so on that those members could form, the number of possible groups equals 2^n. So the value of a GFN increases exponentially, in proportion to 2^n. I call that Reed's Law. And its implications are profound." Business implications Reed's Law is often mentioned when explaining the competitive dynamics of internet platforms. The law states that a network becomes more valuable when people can easily form subgroups to collaborate, and since this value increases exponentially with the number of connections, a business platform that reaches a sufficient number of members can generate network effects that dominate the overall economics of the system. Criticism Other analysts of network value functions, including Andrew Odlyzko, have argued that both Reed's Law and Metcalfe's Law overstate network value because they fail to account for the restrictive impact of human cognitive limits on network formation. According to this argument, the research around Dunbar's number implies a limit on the number of inbound and outbound connections a human in a group-forming network can manage, so that the actual maximum-value structure is much sparser than the set-of-subsets measured by Reed's law or the complete graph measured by Metcalfe's law. See also Andrew Odlyzko's "Content is Not King" Beckstrom's law Coase's penguin List of eponymous laws Metcalfe's law Six Degrees of Kevin Bacon Sarnoff's law Social capital References External links That Sneaky Exponential—Beyond Metcalfe's Law to the Power of Community Building Weapon of Math Destruction: A simple formula explains why the Intern
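A quick numerical comparison of the growth functions mentioned here (Sarnoff's N, Metcalfe's pair count, and Reed's subgroup count) makes the dominance of the subgroup term concrete:

```python
def sarnoff(n: int) -> int:
    return n                     # value proportional to audience size

def metcalfe(n: int) -> int:
    return n * (n - 1) // 2      # number of possible pair connections

def reed(n: int) -> int:
    # All subsets, minus the empty set and the N singletons (the derivation above).
    return 2 ** n - n - 1

for n in (5, 10, 20, 30):
    print(f"N={n:>2}: Sarnoff={sarnoff(n):>3}  "
          f"Metcalfe={metcalfe(n):>4}  Reed={reed(n):,}")
# At N=30 the subgroup count already exceeds one billion.
```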
https://en.wikipedia.org/wiki/Concatenation
In formal language theory and computer programming, string concatenation is the operation of joining character strings end-to-end. For example, the concatenation of "snow" and "ball" is "snowball". In certain formalisations of concatenation theory, also called string theory, string concatenation is a primitive notion. Syntax In many programming languages, string concatenation is a binary infix operator, and in some it is written without an operator. This is implemented in different ways: Overloading the plus sign + Example from C#: "Hello, " + "World" has the value "Hello, World". Dedicated operator, such as . in PHP, & in Visual Basic and || in SQL. This has the advantage over reusing + that it allows implicit type conversion to string. String literal concatenation, which means that adjacent strings are concatenated without any operator. Example from C: "Hello, " "World" has the value "Hello, World". Implementation In programming, string concatenation generally occurs at run time, as string values are typically not known until run time. However, in the case of string literals, the values are known at compile time, and thus string concatenation can be done at compile time, either via string literal concatenation or via constant folding. Concatenation of sets of strings In formal language theory and pattern matching (including regular expressions), the concatenation operation on strings is generalised to an operation on sets of strings as follows: For two sets of strings S1 and S2, the concatenation S1S2 consists of all strings of the form vw where v is a string from S1 and w is a string from S2, or formally S1S2 = {vw : v ∈ S1, w ∈ S2}. Many authors also use concatenation of a string set and a single string, and vice versa, which are defined similarly by S1w = {vw : v ∈ S1} and vS2 = {vw : w ∈ S2}. In these definitions, the string vw is the ordinary concatenation of strings v and w as defined in the introductory section. For example, if F = {a, b, c, d, e, f, g, h} and R = {1, 2, 3, 4, 5, 6, 7, 8}, then FR denotes the set of all chess board coordinates in algebraic notation, while eR denotes the set of all coordinates of the kings' file. In this context, sets of strings are often referred to as formal languages. The concatenation operator is usually expressed as simple juxtaposition (as with multiplication). Algebraic properties The strings over an alphabet, with the concatenation operation, form an associative algebraic structure with identity element the null string—a free monoid. Sets of strings with concatenation and alternation form a semiring, with concatenation (*) distributing over alternation (+); 0 is the empty set and 1 the set consisting of just the null string. Applications Audio/telephony In programming for telephony, concatenation is used to provide dynamic audio feedback to a user. For example, in a "time of day" speaking clock, concatenation is used to give the correct time by playing the appropriate recordings concatenated together. For example: "At the tone the time will be" "Eight" "Thirty" "Five" "and" "Twenty" "Five"
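The set-level definition above can be written out in a few lines; this sketch reproduces the chessboard example (the helper name is ours):

```python
def concat_sets(s1: set[str], s2: set[str]) -> set[str]:
    # Every string v from S1 joined end-to-end with every string w from S2.
    return {v + w for v in s1 for w in s2}

F = set("abcdefgh")                   # the eight files
R = {str(i) for i in range(1, 9)}     # the eight ranks

FR = concat_sets(F, R)
print(len(FR))                        # 64 squares in algebraic notation

print(sorted(concat_sets({"e"}, R)))  # the kings' file: e1 ... e8
```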
https://en.wikipedia.org/wiki/%21%20%28disambiguation%29
! is a punctuation mark, called an exclamation mark (33 in ASCII), exclamation point, ecphoneme, or bang. ! or exclamation point may also refer to: Mathematics and computers Factorial, a mathematical function Derangement, a related mathematical function Negation, in logic and some programming languages Uniqueness quantification, in mathematics and logic ! (CONFIG.SYS directive), usage for unconditional execution of directives in FreeDOS configuration files Music ! (The Dismemberment Plan album), released in 1995 ! (Donnie Vie album), released in 2016 "!" (The Song Formerly Known As), a single on the 1997 album Unit by Regurgitator Exclamation Mark (album), a 2011 album by Jay Chou Exclamation Point, a 2010 LP by DA! ! (Trippie Redd album), released in 2019 ! (Trippie Redd song), that album's title track ! (Cláudia Pascoal album), released in 2020 Other ǃ, the IPA symbol for postalveolar click in speech An indicator of a good chess move in punctuation A dereference operator in BCPL See also !! (disambiguation) !!! (disambiguation) Interrobang, the nonstandard mix of a question mark and an exclamation mark ḷ, not the exclamation mark, but a lower-case letter Ḷ used in Asturian
https://en.wikipedia.org/wiki/1-2-3
1-2-3; 1, 2, 3; or One, Two, Three may refer to: Brands 1-2-3 (fuel station), in Norway Lotus 1-2-3, a computer spreadsheet program .123, a file extension used by Lotus 1-2-3 Jell-O 1-2-3, a dessert Film, TV and books One, Two, Three, a 1961 film by Billy Wilder One Two Three, a 2008 comedy film 123 (film), a 2002 Tamil romantic comedy One, Two, Three and Away!, a set of children's stories by Sheila K. McCullagh Music 1,2,3, a band from Pittsburgh later reformed as Animal Scream 1-2-3, a band from Edinburgh later known as Clouds One, Two, Three, a 1980s electronic disco group produced by Bobby Orlando Albums 1-2-3 (APO Hiking Society album) 1-2-3 (Howling Hex album) I-II-III (Icon of Coil albums), a set of three albums released in 2006 Uno Dos Tres 1•2•3, a 1966 album by Willie Bobo Songs "1-2-3" (Len Barry song), 1965 "1, 2, 3" (Sofía Reyes song), 2018 "1-2-3" (The Chimes song), 1990 "1-2-3" (Gloria Estefan and Miami Sound Machine song), 1988 "1, 2, 3!" (Seungri song), 2018 "1. 2. 3. ...", a 2006 song by Bela B. and Charlotte Roche from the album Bingo "123" (Nikki Laoye song), 2012 "1 2 3" (Moneybagg Yo song), 2020 "One, Two, Three" (Ch!pz song), 2005 "One Two Three / The Matenrō Show", a 2012 song by Morning Musume "One, Two, Three, Go!", a 2008 song by Belanova "One Two Three", a 2012 song by E-girls "1-2-3! (Train with Me)", a song by Playahitty Other uses A 1-2-3 inning, in baseball See also 123 (disambiguation) 1 + 2 + 3 + 4 + ⋯ Raz, Dwa, Trzy, Polish music band Raz, dwa, trzy (newspaper), Polish sports weekly "(Un, Dos, Tres) María", a 1995 song by Ricky Martin Un, dos, tres... responda otra vez, a Spanish game show
https://en.wikipedia.org/wiki/Fast%20Ethernet
In computer networking, Fast Ethernet physical layers carry traffic at the nominal rate of 100 Mbit/s. The prior Ethernet speed was 10 Mbit/s. Of the Fast Ethernet physical layers, 100BASE-TX is by far the most common. Fast Ethernet was introduced in 1995 as the IEEE 802.3u standard and remained the fastest version of Ethernet for three years before the introduction of Gigabit Ethernet. The acronym GE/FE is sometimes used for devices supporting both standards. Nomenclature The 100 in the media type designation refers to the transmission speed of 100 Mbit/s, while the BASE refers to baseband signaling. The letter following the dash (T or F) refers to the physical medium that carries the signal (twisted pair or fiber, respectively), while the last character (X, 4, etc.) refers to the line code method used. Fast Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and TX variants. General design Fast Ethernet is an extension of the 10-megabit Ethernet standard. It runs on twisted pair or optical fiber cable in a star wired bus topology, similar to the IEEE standard 802.3i called 10BASE-T, itself an evolution of 10BASE5 (802.3) and 10BASE2 (802.3a). Fast Ethernet devices are generally backward compatible with existing 10BASE-T systems, enabling plug-and-play upgrades from 10BASE-T. Most switches and other networking devices with ports capable of Fast Ethernet can perform autonegotiation, sensing a piece of 10BASE-T equipment and setting the port to 10BASE-T half duplex if the 10BASE-T equipment cannot perform auto negotiation itself. The standard specifies the use of CSMA/CD for media access control. A full-duplex mode is also specified and in practice, all modern networks use Ethernet switches and operate in full-duplex mode, even as legacy devices that use half duplex still exist. A Fast Ethernet adapter can be logically divided into a media access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC is typically linked to the PHY by a four-bit 25 MHz synchronous parallel interface known as a media-independent interface (MII), or by a two-bit 50 MHz variant called reduced media independent interface (RMII). In rare cases, the MII may be an external connection but is usually a connection between ICs in a network adapter or even two sections within a single IC. The specs are written based on the assumption that the interface between MAC and PHY will be an MII but they do not require it. Fast Ethernet or Ethernet hubs may use the MII to connect to multiple PHYs for their different interfaces. The MII fixes the theoretical maximum data bit rate for all versions of Fast Ethernet to 100 Mbit/s. The information rate actually observed on real networks is less than the theoretical maximum, due to the necessary header and trailer (addressing and error-detection bits) on every Ethernet frame, and the required interpacket gap between transmissions.
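A back-of-the-envelope sketch of the framing overhead mentioned in the last sentence above, using the standard Ethernet constants (8 bytes of preamble and start-frame delimiter, a 12-byte-time interpacket gap, 1518-byte maximum untagged frame); the figures are illustrative of why observed rates fall below the 100 Mbit/s maximum:

```python
PREAMBLE_SFD = 8      # bytes sent on the wire before each frame
IPG = 12              # enforced idle time between frames, in byte times
FRAME = 1518          # maximum untagged frame: header + payload + CRC
PAYLOAD = 1500        # maximum payload carried inside such a frame

wire_bytes = PREAMBLE_SFD + FRAME + IPG          # cost of one maximum frame
frames_per_s = 100e6 / (wire_bytes * 8)
print(f"{frames_per_s:.0f} frames/s")                                  # ~8127
print(f"payload rate ~ {frames_per_s * PAYLOAD * 8 / 1e6:.1f} Mbit/s") # ~97.5
```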
https://en.wikipedia.org/wiki/Basic%20Rate%20Interface
Basic Rate Interface (BRI, 2B+D, 2B1D) or Basic Rate Access is an Integrated Services Digital Network (ISDN) configuration intended primarily for use in subscriber lines similar to those that have long been used for voice-grade telephone service. As such, an ISDN BRI connection can use the existing telephone infrastructure at a business. The BRI configuration provides 2 data (bearer) channels (B channels) at 64 kbit/s each and 1 control (delta) channel (D channel) at 16 kbit/s. The B channels are used for voice or user data, and the D channel is used for any combination of data, control/signaling, and X.25 packet networking. The 2 B channels can be aggregated by channel bonding providing a total data rate of 128 kbit/s. The BRI ISDN service is commonly installed for residential or small business service (ISDN PABX) in many countries. In contrast to the BRI, the Primary Rate Interface (PRI) configuration provides more B channels and operates at a higher bit rate. Physical interfaces The BRI is split into two sections: a) in-house cabling (S/T reference point or S-bus) from the ISDN terminal up to the network termination 1 (NT1) and b) transmission from the NT1 to the central office (U reference point). The in-house part is defined in I.430 produced by the International Telecommunication Union (ITU). The S/T-interface (S0) uses four wires; one pair for the uplink and another pair for the downlink. It offers a full-duplex mode of operation. The I.430 protocol defines 48-bit packets comprising 16 bits from the B1 channel, 16 bits from the B2 channel, 4 bits from the D channel, and 12 bits used for synchronization purposes. These packets are sent at a rate of 4,000 packets per second, resulting in a gross bit rate of 192 kbit/s and, given the channel rates listed above, a maximum possible throughput of 144 kbit/s. The S0 offers point-to-point or point-to-multipoint operation; maximum length: 900 m (point-to-point), 300 m (point-to-multipoint). The Up Interface uses two wires. The gross bit rate is 160 kbit/s; 144 kbit/s throughput, 12 kbit/s sync and 4 kbit/s maintenance. The signals on the U reference point are encoded by two modulation techniques: 2B1Q in North America, Italy and Switzerland, and 4B3T elsewhere. Depending on the applicable cable length, two varieties are implemented, UpN and Up0. The Uk0 interface uses one wire pair with echo cancellation for the long last mile cable between the telephone exchange and the network terminator. The maximum length of this BRI section is between 4 and 8 km.
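The arithmetic is easy to check; the sketch below simply restates the I.430 frame structure given above (48-bit frames at 4,000 frames per second, of which only the B and D channel bits carry user traffic):

```python
FRAME_BITS = 48
B1, B2, D, OVERHEAD = 16, 16, 4, 12     # bits per frame, as listed above
FRAMES_PER_SECOND = 4_000

assert B1 + B2 + D + OVERHEAD == FRAME_BITS

gross = FRAME_BITS * FRAMES_PER_SECOND          # 192,000 bit/s gross rate
useful = (B1 + B2 + D) * FRAMES_PER_SECOND      # 144,000 bit/s = 2x64 + 16 kbit/s
print(gross, useful)
```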
https://en.wikipedia.org/wiki/2B1Q
Two-binary, one-quaternary (2B1Q) is a line code used in the U interface of the Integrated Services Digital Network (ISDN) Basic Rate Interface (BRI) and the high-bit-rate digital subscriber line (HDSL). 2B1Q is a four-level pulse-amplitude modulation (PAM-4) scheme without redundancy, mapping two bits (2B) into one quaternary symbol (1Q). A competing encoding technique in the ISDN basic rate U interface, mainly used in Europe, is 4B3T. To minimize error propagation, bit pairs (dibits) are assigned to voltage levels according to a Gray code, as follows: 00 → −3, 01 → −1, 11 → +1, 10 → +3, so that adjacent voltage levels differ in exactly one bit. If the voltage is misread as an adjacent level, this causes only a 1-bit error in the decoded data. 2B1Q code is not DC-balanced. The symbol rate is half the data rate.
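A minimal encoder sketch using the Gray-coded mapping above (the table and function names are invented for illustration; real transceivers scale these symbol levels to line voltages):

```python
LEVELS = {"00": -3, "01": -1, "11": +1, "10": +3}   # Gray-coded dibit mapping

def encode_2b1q(bits: str) -> list[int]:
    if len(bits) % 2:
        raise ValueError("2B1Q consumes bits two at a time")
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

symbols = encode_2b1q("00011110")
print(symbols)                          # [-3, -1, 1, 3]
print(len(symbols) / len("00011110"))   # symbol rate is half the bit rate: 0.5
```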
https://en.wikipedia.org/wiki/Gigabit%20Ethernet
In computer networking, Gigabit Ethernet (GbE or 1 GigE) is the term applied to transmitting Ethernet frames at a rate of a gigabit per second. The most popular variant, 1000BASE-T, is defined by the IEEE 802.3ab standard. It came into use in 1999, and has replaced Fast Ethernet in wired local networks due to its considerable speed improvement over Fast Ethernet, as well as its use of cables and equipment that are widely available, economical, and similar to previous standards. The first standard for faster 10 Gigabit Ethernet was approved in 2002. History Ethernet was the result of research conducted at Xerox PARC in the early 1970s, and later evolved into a widely implemented physical and link layer protocol. Fast Ethernet increased the speed from 10 to 100 megabits per second (Mbit/s). Gigabit Ethernet was the next iteration, increasing the speed to 1000 Mbit/s. The initial standard for Gigabit Ethernet was produced by the IEEE in June 1998 as IEEE 802.3z, and required optical fiber. 802.3z is commonly referred to as 1000BASE-X, where -X refers to either -CX, -SX, -LX, or (non-standard) -ZX. (For the history behind the "X" see .) IEEE 802.3ab, ratified in 1999, defines Gigabit Ethernet transmission over unshielded twisted pair (UTP) category 5, 5e or 6 cabling, and became known as 1000BASE-T. With the ratification of 802.3ab, Gigabit Ethernet became a desktop technology as organizations could use their existing copper cabling infrastructure. IEEE 802.3ah, ratified in 2004, added two more gigabit fiber standards: 1000BASE-LX10 (which was already widely implemented as vendor-specific extension) and 1000BASE-BX10. This was part of a larger group of protocols known as Ethernet in the First Mile. Initially, Gigabit Ethernet was deployed in high-capacity backbone network links (for instance, on a high-capacity campus network). In 2000, Apple's Power Mac G4 and PowerBook G4 were the first mass-produced personal computers to feature the 1000BASE-T connection. It quickly became a built-in feature in many other computers. Half-duplex gigabit links connected through repeater hubs were part of the IEEE specification, but the specification is not updated anymore and full-duplex operation with switches is used exclusively. Varieties There are five physical layer standards for Gigabit Ethernet using optical fiber (1000BASE-X), twisted pair cable (1000BASE-T), or shielded balanced copper cable (1000BASE-CX). The IEEE 802.3z standard includes 1000BASE-SX for transmission over multi-mode fiber, 1000BASE-LX for transmission over single-mode fiber, and the nearly obsolete 1000BASE-CX for transmission over shielded balanced copper cabling. These standards use 8b/10b encoding, which inflates the line rate by 25%, from 1000 Mbit/s to 1250 Mbit/s, to ensure a DC balanced signal, and allow for clock recovery. The symbols are then sent using NRZ. Optical fiber transceivers are most often implemented as user-swappable modules in SFP form or GBIC on older de
https://en.wikipedia.org/wiki/Computer%20language
A computer language is a formal language used to communicate with a computer. Types of computer languages include: Construction language – all forms of communication by which a human can specify an executable problem solution to a computer Command language – a language used to control the tasks of the computer itself, such as starting programs Configuration language – a language used to write configuration files Programming language – a formal language designed to communicate instructions to a machine, particularly a computer Query language – a language used to make queries in databases and information systems Transformation language – designed to transform some input text in a certain formal language into a modified output text that meets some specific goal Data exchange language – a language that is domain-independent and can be used for data from any kind of discipline; examples: JSON, XML Markup language – a grammar for annotating a document in a way that is syntactically distinguishable from the text, such as HTML Modeling language – an artificial language used to express information or knowledge, often for use in computer system design Architecture description language – used as a language (or a conceptual model) to describe and represent system architectures Hardware description language – used to model integrated circuits Page description language – describes the appearance of a printed page in a higher level than an actual output bitmap Simulation language – a language used to describe simulations Specification language – a language used to describe what a system should do Style sheet language – a computer language that expresses the presentation of structured documents, such as CSS See also Serialization Domain-specific language – a language specialized to a particular application domain General-purpose language – a language that is broadly applicable across application domains, and lacks specialized features for a particular domain Lists of programming languages Natural language processing – the use of computers to process text or speech in human language External links
https://en.wikipedia.org/wiki/Motorola%2068000%20series
The Motorola 68000 series (also known as 680x0, m68000, m68k, or 68k) is a family of 32-bit complex instruction set computer (CISC) microprocessors. During the 1980s and early 1990s, they were popular in personal computers and workstations and were the primary competitors of Intel's x86 microprocessors. They were best known as the processors used in the early Apple Macintosh, the Sharp X68000, the Commodore Amiga, the Sinclair QL, the Atari ST, the Sega Genesis (Mega Drive), the Capcom System I (Arcade), the AT&T UNIX PC, the Tandy Model 16/16B/6000, the Sun Microsystems Sun-1, Sun-2 and Sun-3, the NeXT Computer, NeXTcube, NeXTstation, and NeXTcube Turbo, computers from MASSCOMP, the Texas Instruments TI-89/TI-92 calculators, the Palm Pilot (all models running Palm OS 4.x or earlier), the Control Data Corporation CDCNET Device Interface, and the Space Shuttle. Although no modern desktop computers are based on processors in the 680x0 series, derivative processors are still widely used in embedded systems. Motorola ceased development of the 680x0 series architecture in 1994, replacing it with the PowerPC RISC architecture, which was developed in conjunction with IBM and Apple Computer as part of the AIM alliance. Family members Generation one (internally 16/32-bit, and produced with 8-, 16-, and 32-bit interfaces) Motorola 68000 Motorola 68EC000 Motorola 68SEC000 Motorola 68HC000 Motorola 68008 Motorola 68010 Motorola 68012 Generation two (internally fully 32-bit) Motorola 68020 Motorola 68EC020 Motorola 68030 Motorola 68EC030 Generation three (pipelined) Motorola 68040 Motorola 68EC040 Motorola 68LC040 Generation four (superscalar) Motorola 68060 Motorola 68EC060 Motorola 68LC060 Others Freescale 683XX (CPU32 aka 68330, 68360 aka QUICC) Freescale ColdFire Freescale DragonBall Philips 68070 Improvement history 68010: Virtual memory support (restartable instructions) 'Loop mode' for faster string and memory library primitives Multiply instruction uses 14 clock ticks less 68020: 32-bit address & arithmetic logic unit (ALU) Three stage pipeline Instruction cache of 256 bytes Unrestricted word and longword data access (see alignment) 8× multiprocessing ability Larger multiply (32×32 -> 64 bits) and divide (64÷32 -> 32 bits quotient and 32 bits remainder) instructions, and bit field manipulations Addressing modes added scaled indexing and another level of indirection Low cost, EC = 24-bit address 68030: Split instruction and data cache of 256 bytes each On-chip memory management unit (MMU) (68851) Low cost EC = No MMU Burst Memory Interface 68040: Instruction and data caches of 4 KB each Six stage pipeline On-chip floating-point unit (FPU) FPU lacks IEEE transcendental function ability FPU emulation works with 2E71M and later chip revisions Low cost LC = No FPU Low cost EC = No FPU or MMU 68060: Instruction and data caches of 8 KB each 10 stage pipeline Two cycle integer multiplication unit Bran
https://en.wikipedia.org/wiki/Amiga%20600
The Amiga 600, also known as the A600, is a home computer introduced in March 1992. It is the final Amiga model based on the Motorola 68000 and the 1990 Amiga Enhanced Chip Set. A redesign of the Amiga 500 Plus, it adds the option of an internal hard disk drive and a PCMCIA port. Lacking a numeric keypad, the A600 is only slightly larger than an IBM PC keyboard, weighing approximately 6 pounds. It shipped with AmigaOS 2.0, which was considered more user-friendly than earlier versions of the operating system. Like the A500, the A600 was aimed at the lower end of the market. Commodore intended it to revitalize sales of the A500-related line before the introduction of the 32-bit Amiga 1200. According to Dave Haynie, the A600 "was supposed to be cheaper than the A500, but it came in at about that much more expensive." The A600 was originally to have been numbered the A300, positioning it as a lower-budget version of the Amiga 500 Plus. An A600HD model was sold with an internal 2.5" ATA hard disk drive of either 20 or 40 MB. Amiga 600's compatibility with earlier Amiga models is rather poor. Roughly one third of games and demos made for A1000 or A500 do not work on A600. Development and release Commodore Business Machines began the process of drastically changing its management in late 1990, when Irving Gould, its CEO and chairman, laid off six of its high-level executives. In the spring of 1991, Mehdi Ali, a former investment banker at Prudential Investments, was promoted to president of Commodore, and continued a program he started in 1989 involving cuts to the budget and staff, mostly from the sales and manufacturing divisions. After the release of the Amiga 3000T, Commodore's next project was a next-generation Amiga chipset, which became the Advanced Graphics Architecture (AGA). Concurrently, engineers Dave Haynie, Jeff Porter, and Eric Lavitsky began work on Amiga 3000+, which would have been the first computer to use the AGA chipset, and Joe Augenbraun was behind the Amiga 1000+, which also would have used the chipset. Meanwhile, George Robbins designed another low-end project called the Amiga 300. The computer, codenamed June Bug, had a floppy drive built in and was roughly the same size and weight as a Commodore 64. Development took a turn on all three projects when Ali dismissed the engineering management team and appointed former IBM executive Bill Sydnes as the company's engineering manager. Sydnes canceled the A1000+ and A3000+ models and delayed the AGA chipset, but simply changed the A300's design goals. The model was launched in mid-March 1992 as the Amiga 600, superseding the A500. Units were manufactured in Commodore's production plants in Irvine, Scotland; Braunschweig, Germany; Kwai Chung, Hong Kong; and the Philippines. In the United States, it and its hard disk drive variant, the Amiga 600HD, sold for and , respectively, the former of which was about $50 more than an A500 while the two systems were on sale, although the A6
https://en.wikipedia.org/wiki/Adapter%20pattern
In software engineering, the adapter pattern is a software design pattern (also known as wrapper, an alternative naming shared with the decorator pattern) that allows the interface of an existing class to be used as another interface. It is often used to make existing classes work with others without modifying their source code. An example is an adapter that converts the interface of a Document Object Model of an XML document into a tree structure that can be displayed. Overview The adapter design pattern is one of the twenty-three well-known Gang of Four design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse. The adapter design pattern solves problems like: How can a class be reused that does not have an interface that a client requires? How can classes that have incompatible interfaces work together? How can an alternative interface be provided for a class? Often an (already existing) class can not be reused only because its interface does not conform to the interface clients require. The adapter design pattern describes how to solve such problems: Define a separate adapter class that converts the (incompatible) interface of a class (adaptee) into another interface (target) clients require. Work through an adapter to work with (reuse) classes that do not have the required interface. The key idea in this pattern is to work through a separate adapter that adapts the interface of an (already existing) class without changing it. Clients don't know whether they work with a target class directly or through an adapter with a class that does not have the target interface. See also the UML class diagram below. Definition An adapter allows two incompatible interfaces to work together. This is the real-world definition for an adapter. Interfaces may be incompatible, but the inner functionality should suit the need. The adapter design pattern allows otherwise incompatible classes to work together by converting the interface of one class into an interface expected by the clients. Usage An adapter can be used when the wrapper must respect a particular interface and must support polymorphic behavior. Alternatively, a decorator makes it possible to add or alter behavior of an interface at run-time, and a facade is used when an easier or simpler interface to an underlying object is desired. Structure UML class diagram In the above UML class diagram, the client class that requires a target interface cannot reuse the adaptee class directly because its interface doesn't conform to the target interface. Instead, the client works through an adapter class that implements the target interface in terms of adaptee: The object adapter way implements the target interface by delegating to an adaptee object at run-time (adaptee.specificOperation()). The class adapter way implements the target interface by inh
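A minimal object-adapter sketch under invented class names; it mirrors the structure described above, with the adapter implementing the target interface by delegating to a wrapped adaptee at run time:

```python
class LegacyPrinter:                  # the adaptee: an incompatible interface
    def print_document(self, text: str, uppercase: bool) -> None:
        print(text.upper() if uppercase else text)

class Target:                         # the interface the client requires
    def operation(self, text: str) -> None:
        raise NotImplementedError

class PrinterAdapter(Target):         # object adapter: wraps an adaptee instance
    def __init__(self, adaptee: LegacyPrinter):
        self.adaptee = adaptee
    def operation(self, text: str) -> None:
        # Delegate to the adaptee, translating the call as needed.
        self.adaptee.print_document(text, uppercase=False)

def client(target: Target) -> None:   # works with any Target, adapter included
    target.operation("hello, adapter")

client(PrinterAdapter(LegacyPrinter()))
```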
https://en.wikipedia.org/wiki/DLL
DLL may refer to:

Baraboo–Wisconsin Dells Airport (FAA ID), an airport near Baraboo, Wisconsin, U.S.
Data link layer, a layer in the OSI network architecture model
Davis–Putnam–Logemann–Loveland algorithm, an algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form
Delay-locked loop, a device to reduce clock skew in digital circuits
Dillon County Airport (IATA code), an airport near Dillon, South Carolina, U.S.
Distal-less (Dll) gene that controls development of limbs or other appendages in many animals
DLL Group, a global financial solutions company
Doubly linked list, a data structure in computer programming
Dynamic-link library, or a DLL file, as implemented in Microsoft Windows and OS/2
https://en.wikipedia.org/wiki/Facade%20pattern
The facade pattern (also spelled façade) is a software-design pattern commonly used in object-oriented programming. Analogous to a facade in architecture, a facade is an object that serves as a front-facing interface masking more complex underlying or structural code. A facade can:

improve the readability and usability of a software library by masking interaction with more complex components behind a single (and often simplified) API
provide a context-specific interface to more generic functionality (complete with context-specific input validation)
serve as a launching point for a broader refactor of monolithic or tightly-coupled systems in favor of more loosely-coupled code

Developers often use the facade design pattern when a system is very complex or difficult to understand, because the system has many interdependent classes or because its source code is unavailable. This pattern hides the complexities of the larger system and provides a simpler interface to the client. It typically involves a single wrapper class that contains a set of members required by the client. These members access the system on behalf of the facade client and hide the implementation details.

Overview

The Facade design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.

What problems can the Facade design pattern solve?

To make a complex subsystem easier to use, a simple interface should be provided for a set of interfaces in the subsystem.
The dependencies on a subsystem should be minimized.

Clients that access a complex subsystem directly refer to (depend on) many different objects having different interfaces (tight coupling), which makes the clients hard to implement, change, test, and reuse.

What solution does the Facade design pattern describe?

Define a Facade object that implements a simple interface in terms of (by delegating to) the interfaces in the subsystem, and that may perform additional functionality before/after forwarding a request.

This enables clients to work through a Facade object and so minimizes their dependencies on the subsystem. See also the UML class and sequence diagram below.

Usage

A Facade is used when an easier or simpler interface to an underlying object is desired. Alternatively, an adapter can be used when the wrapper must respect a particular interface and must support polymorphic behavior. A decorator makes it possible to add or alter behavior of an interface at run-time. The facade pattern is typically used when a simple interface is required to access a complex system, a system is very complex or difficult to understand, an entry point is needed to each level of layered software, or the abstractions and implementations of a subsystem are tightly coupled.

Structure

UML class and sequence diagram

In this UML class diagram, the Clien
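As a concrete illustration of the structure just described, here is a minimal C++ sketch. The CPU and Memory subsystem classes and the ComputerFacade name are invented purely for the example; the point is only that the client makes one start() call instead of four subsystem calls.

#include <iostream>

// Subsystem classes with interfaces that are awkward to use directly.
// (CPU and Memory are invented for this illustration.)
class CPU {
public:
    void freeze()            { std::cout << "CPU: freeze\n"; }
    void jump(long position) { std::cout << "CPU: jump to " << position << '\n'; }
    void execute()           { std::cout << "CPU: execute\n"; }
};

class Memory {
public:
    void load(long position, const char* data) {
        std::cout << "Memory: load \"" << data << "\" at " << position << '\n';
    }
};

// Facade: a single, simple entry point that delegates to the subsystem.
class ComputerFacade {
public:
    void start() {
        cpu_.freeze();
        memory_.load(kBootAddress, "boot image");
        cpu_.jump(kBootAddress);
        cpu_.execute();
    }
private:
    static constexpr long kBootAddress = 0x0;
    CPU cpu_;
    Memory memory_;
};

int main() {
    ComputerFacade computer;
    computer.start();  // the client depends only on the facade
}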
https://en.wikipedia.org/wiki/Bridge%20pattern
The bridge pattern is a design pattern used in software engineering that is meant to "decouple an abstraction from its implementation so that the two can vary independently", introduced by the Gang of Four. The bridge uses encapsulation, aggregation, and can use inheritance to separate responsibilities into different classes.

When a class varies often, the features of object-oriented programming become very useful because changes to a program's code can be made easily with minimal prior knowledge about the program. The bridge pattern is useful when both the class and what it does vary often. The class itself can be thought of as the abstraction and what the class can do as the implementation. The bridge pattern can also be thought of as two layers of abstraction.

When there is only one fixed implementation, this pattern is known as the Pimpl idiom in the C++ world.

The bridge pattern is often confused with the adapter pattern, and is often implemented using the object adapter pattern; e.g., in the Java code below.

Variant: The implementation can be decoupled even more by deferring the presence of the implementation to the point where the abstraction is utilized.

Overview

The Bridge design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.

What problems can the Bridge design pattern solve?

An abstraction and its implementation should be defined and extended independently from each other.
A compile-time binding between an abstraction and its implementation should be avoided so that an implementation can be selected at run-time.

When using subclassing, different subclasses implement an abstract class in different ways. But an implementation is bound to the abstraction at compile-time and cannot be changed at run-time.

What solution does the Bridge design pattern describe?

Separate an abstraction (Abstraction) from its implementation (Implementor) by putting them in separate class hierarchies.
Implement the Abstraction in terms of (by delegating to) an Implementor object.

This enables an Abstraction to be configured with an Implementor object at run-time. See also the Unified Modeling Language class and sequence diagram below.

Structure

UML class and sequence diagram

In the above Unified Modeling Language class diagram, an abstraction (Abstraction) is not implemented as usual in a single inheritance hierarchy. Instead, there is one hierarchy for an abstraction (Abstraction) and a separate hierarchy for its implementation (Implementor), which makes the two independent from each other. The Abstraction interface (operation()) is implemented in terms of (by delegating to) the Implementor interface (imp.operationImp()).

The UML sequence diagram shows the run-time interactions: The Abstraction1 object delegates implementation to the Implementor1
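The Java listing the article refers to is not included in this excerpt; the following C++ sketch (class names taken from the UML description above, everything else hypothetical) shows the same shape: Abstraction::operation() delegates to Implementor::operationImp(), and the implementation is chosen at run-time.

#include <iostream>
#include <memory>

// Implementor hierarchy: *how* something is done.
class Implementor {
public:
    virtual ~Implementor() = default;
    virtual void operationImp() const = 0;
};

class Implementor1 : public Implementor {
public:
    void operationImp() const override { std::cout << "Implementor1::operationImp\n"; }
};

class Implementor2 : public Implementor {
public:
    void operationImp() const override { std::cout << "Implementor2::operationImp\n"; }
};

// Abstraction hierarchy: *what* is done; delegates to an Implementor.
class Abstraction {
public:
    explicit Abstraction(std::shared_ptr<Implementor> imp) : imp_(std::move(imp)) {}
    virtual ~Abstraction() = default;
    virtual void operation() const { imp_->operationImp(); }
private:
    std::shared_ptr<Implementor> imp_;
};

int main() {
    // The implementation is selected at run-time, not bound at compile-time.
    Abstraction a1(std::make_shared<Implementor1>());
    Abstraction a2(std::make_shared<Implementor2>());
    a1.operation();
    a2.operation();
}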
https://en.wikipedia.org/wiki/Singleton%20pattern
In software engineering, the singleton pattern is a software design pattern that restricts the instantiation of a class to a singular instance. One of the well-known "Gang of Four" design patterns, which describe how to solve recurring problems in object-oriented software, the pattern is useful when exactly one object is needed to coordinate actions across a system.

More specifically, the singleton pattern allows objects to:

Ensure they only have one instance
Provide easy access to that instance
Control their instantiation (for example, hiding the constructors of a class)

The term comes from the mathematical concept of a singleton.

Common uses

Singletons are often preferred to global variables because they do not pollute the global namespace (or their containing namespace). Additionally, they permit lazy allocation and initialization, whereas global variables in many languages will always consume resources.

The singleton pattern can also be used as a basis for other design patterns, such as the abstract factory, factory method, builder and prototype patterns. Facade objects are also often singletons because only one facade object is required.

Logging is a common real-world use case for singletons, because all objects that wish to log messages require a uniform point of access and conceptually write to a single source.

Implementations

Implementations of the singleton pattern ensure that only one instance of the singleton class ever exists and typically provide global access to that instance. Typically, this is accomplished by:

Declaring all constructors of the class to be private, which prevents it from being instantiated by other objects
Providing a static method that returns a reference to the instance

The instance is usually stored as a private static variable; the instance is created when the variable is initialized, at some point before the static method is first called. This C++11 implementation is based on the pre-C++98 implementation in the book Design Patterns.

#include <iostream>

class Singleton {
public:
    // Defines a class operation that lets clients access the unique instance.
    static Singleton& get() {
        // Lazily creates the unique instance on first use.
        if (nullptr == instance) instance = new Singleton;
        return *instance;
    }
    Singleton(const Singleton&) = delete;            // rule of three: no copying
    Singleton& operator=(const Singleton&) = delete; // and no copy assignment
    static void destruct() {
        delete instance;
        instance = nullptr;
    }
    // The existing interface goes here.
    int getValue() { return value; }
    void setValue(int value_) { value = value_; }
private:
    Singleton() = default;      // no public constructor
    ~Singleton() = default;     // no public destructor
    static Singleton* instance; // declaration of the class variable
    int value = 0;              // initialized to avoid reading an indeterminate value
};
Singleton* Singleton::instance = nullptr; // definition of the class variable

int main() {
    Singleton::get().setValue(42);
    std::cout << "value=" << Singleton::get().getValue() << '\n';
    Singleton::destruct();
}
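A commonly used C++11 alternative to the pointer-based listing above — not part of the article's example — holds the instance in a function-local static variable, whose one-time initialization the C++11 standard guarantees to be thread-safe (often called the Meyers singleton). A minimal sketch:

#include <iostream>

class Singleton {
public:
    // Thread-safe in C++11 and later: initialization of a function-local
    // static is guaranteed to happen exactly once.
    static Singleton& get() {
        static Singleton instance;
        return instance;
    }
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;

    int getValue() const { return value; }
    void setValue(int v) { value = v; }

private:
    Singleton() = default;   // no public constructor
    ~Singleton() = default;  // destroyed automatically at program exit
    int value = 0;
};

int main() {
    Singleton::get().setValue(42);
    std::cout << "value=" << Singleton::get().getValue() << '\n';
}

This variant needs no explicit destruct() call, at the cost of giving up control over the destruction order relative to other static objects.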
https://en.wikipedia.org/wiki/Portable%20Executable
The Portable Executable (PE) format is a file format for executables, object code, DLLs and others used in 32-bit and 64-bit versions of Windows operating systems. The PE format is a data structure that encapsulates the information necessary for the Windows OS loader to manage the wrapped executable code. This includes dynamic library references for linking, API export and import tables, resource management data and thread-local storage (TLS) data. On NT operating systems, the PE format is used for EXE, DLL, SYS (device driver), MUI and other file types. The Unified Extensible Firmware Interface (UEFI) specification states that PE is the standard executable format in EFI environments. On Windows NT operating systems, PE currently supports the x86-32, x86-64 (AMD64/Intel 64), IA-64, ARM and ARM64 instruction set architectures (ISAs). Prior to Windows 2000, Windows NT (and thus PE) supported the MIPS, Alpha, and PowerPC ISAs. Because PE is used on Windows CE, it continues to support several variants of the MIPS, ARM (including Thumb), and SuperH ISAs. Analogous formats to PE are ELF (used in Linux and most other versions of Unix) and Mach-O (used in macOS and iOS). History Microsoft migrated to the PE format from the 16-bit NE formats with the introduction of the Windows NT 3.1 operating system. All later versions of Windows, including Windows 95/98/ME and the Win32s addition to Windows 3.1x, support the file structure. The format has retained limited legacy support to bridge the gap between DOS-based and NT systems. For example, PE/COFF headers still include a DOS executable program, which is by default a DOS stub that displays a message like "This program cannot be run in DOS mode" (or similar), though it can be a full-fledged DOS version of the program (a later notable case being the Windows 98 SE installer). This constitutes a form of fat binary. PE also continues to serve the changing Windows platform. Some extensions include the .NET PE format (see below), a version with 64-bit address space support called PE32+, and a specification for Windows CE. Technical details Layout A PE file consists of a number of headers and sections that tell the dynamic linker how to map the file into memory. An executable image consists of several different regions, each of which require different memory protection; so the start of each section must be aligned to a page boundary. For instance, typically the .text section (which holds program code) is mapped as execute/read-only, and the .data section (holding global variables) is mapped as no-execute/read write. However, to avoid wasting space, the different sections are not page aligned on disk. Part of the job of the dynamic linker is to map each section to memory individually and assign the correct permissions to the resulting regions, according to the instructions found in the headers. Import table One section of note is the import address table (IAT), which is used as a lookup table when the applic
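The layout described above is easy to probe by hand: a PE file begins with a DOS header whose first two bytes are 'MZ', and the 32-bit little-endian field at offset 0x3C (e_lfanew) gives the file offset of the 'PE\0\0' signature that precedes the COFF header. The following C++ sketch reads the raw bytes rather than using the Windows SDK structures; the function name and command-line handling are invented for the example, and it assumes a little-endian host.

#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Validates the DOS 'MZ' magic, then follows e_lfanew (the 32-bit
// little-endian offset stored at 0x3C) to the 'PE\0\0' signature.
bool looksLikePE(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> buf((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());
    if (buf.size() < 0x40 || buf[0] != 'M' || buf[1] != 'Z')
        return false;  // no DOS header
    std::uint32_t e_lfanew;
    std::memcpy(&e_lfanew, &buf[0x3C], sizeof e_lfanew);  // little-endian host assumed
    if (buf.size() < static_cast<std::size_t>(e_lfanew) + 4)
        return false;  // e_lfanew points past end of file
    return std::memcmp(&buf[e_lfanew], "PE\0\0", 4) == 0;
}

int main(int argc, char** argv) {
    if (argc != 2) { std::cerr << "usage: pecheck <file>\n"; return 1; }
    std::cout << (looksLikePE(argv[1]) ? "PE file\n" : "not a PE file\n");
}

Walking further — to the section table, and from there to the import address table — follows the same pattern of fixed offsets and counts taken from the COFF and optional headers.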
https://en.wikipedia.org/wiki/Poplog
Poplog is an open source, reflective, incrementally compiled software development environment for the programming languages POP-11, Common Lisp, Prolog, and Standard ML, originally created in the UK for teaching and research in Artificial Intelligence at the University of Sussex, and later marketed as a commercial package for software development as well as for teaching and research. It was one of the initiatives supported for a while by the UK government-funded Alvey Programme. History After an incremental compiler for Prolog had been added to an implementation of POP-11, the name POPLOG was adopted, to reflect the fact that the expanded system supported programming in both languages. The name was retained, as a trade mark of the University of Sussex, when the system was later (mid 1980s) extended with incremental compilers for Common Lisp and Standard ML based on a set of tools for implementing new languages in the Poplog Virtual Machine. The user-accessible incremental-compiler tools that allow compilers for all these languages to be added also allow extensions to be made within a language to provide new powers that cannot be added using standard macros that merely allow new text to be equivalent to a longer portion of old text. For some time after 1983, Poplog was sold and supported internationally as a commercial product, on behalf of the University of Sussex by Systems Designers Ltd (SDL), whose name changed as ownership changed. The main development work continued to be done by a small team at Sussex University until 1998, while marketing, sales, and support (except for UK academic users, who dealt directly with the Sussex team) was done by SDL and its successors (SD, then SD-Scicon then EDS) until 1991. At that time a management buy-out produced a spin-off company Integral Solutions Ltd (ISL), to sell and support Poplog in collaboration with Sussex University, who retained the rights to the name 'Poplog' and were responsible for the core software development while it was a commercial product. In 1992 ISL and Sussex University won a "Smart Award" in recognition of Poplog sales worth $5M. ISL and its clients used Poplog for a number of development projects, especially ISL's data-mining system Clementine, mostly implemented in POP-11, using powerful graphical tools implemented also in POP-11 running on the X Window System. Clementine was so successful that in 1998 ISL was bought by SPSS Inc who had been selling the statistics and data-mining package SPSS for which they needed a better graphical interface suited to expert and non-expert users. SPSS did not wish to sell and support Poplog as such, so Poplog then became available as a free open source software package, hosted at the University of Birmingham, which had also been involved in development after 1991. Later IBM bought SPSS and Clementine is now marketed and supported as SPSS Modeler. Supported languages Poplog's core language is POP-11. It is used to implement the other languages
https://en.wikipedia.org/wiki/QNX
QNX ( or ) is a commercial Unix-like real-time operating system, aimed primarily at the embedded systems market. The product was originally developed in the early 1980s by Canadian company Quantum Software Systems, later renamed QNX Software Systems. , it is used in a variety of devices including cars, medical devices, program logic controllers, robots, trains, and more. History Gordon Bell and Dan Dodge, both students at the University of Waterloo in 1980, took a course in real-time operating systems, in which the students constructed a basic real-time microkernel and user programs. Both were convinced there was a commercial need for such a system, and moved to the high-tech planned community Kanata, Ontario, to start Quantum Software Systems that year. In 1982, the first version of QUNIX was released for the Intel 8088 CPU. In 1984, Quantum Software Systems renamed QUNIX to QNX in an effort to avoid any trademark infringement challenges. One of the first widespread uses of the QNX real-time OS (RTOS) was in the nonembedded world when it was selected as the operating system for the Ontario education system's own computer design, the Unisys ICON. Over the years QNX was used mostly for larger projects, as its 44k kernel was too large to fit inside the one-chip computers of the era. The system garnered a reputation for reliability and became used in running machinery in many industrial applications. In the late-1980s, Quantum realized that the market was rapidly moving towards the Portable Operating System Interface (POSIX) model and decided to rewrite the kernel to be much more compatible at a low level. The result was QNX 4. During this time Patrick Hayden, while working as an intern, along with Robin Burgener (a full-time employee at the time), developed a new windowing system. This patented concept was developed into the embeddable graphical user interface (GUI) named the QNX Photon microGUI. QNX also provided a version of the X Window System. To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk for the 386 PC. Toward the end of the 1990s, the company, then named QNX Software Systems, began work on a new version of QNX, designed from the ground up to be symmetric multiprocessing (SMP) capable, and to support all current POSIX application programming interfaces (APIs) and any new POSIX APIs that could be anticipated while still retaining the microkernel architecture. This resulted in QNX Neutrino, released in 2001. Along with the Neutrino kernel, QNX Software Systems became a founding member of the Eclipse (integrated development environment) consortium. The company released a suite of Eclipse plug-ins packaged with the Eclipse workbench in 2002, and named QNX Momentics Tool Suite. In 2004, th
https://en.wikipedia.org/wiki/Support%20vector%20machine
In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik with colleagues (Boser et al., 1992, Guyon et al., 1993, Cortes and Vapnik, 1995, Vapnik et al., 1997), SVMs are one of the most robust prediction methods, being based on statistical learning frameworks or VC theory proposed by Vapnik (1982, 1995) and Chervonenkis (1974). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.

In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.

The support vector clustering algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data. These data sets require unsupervised learning approaches, which attempt to find natural clustering of the data into groups and then to map new data according to these clusters.

Motivation

Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum-margin classifier; or equivalently, the perceptron of optimal stability.

More formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks like outlier detection. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training
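Stated in conventional notation (the symbols w, b, x_i and y_i are standard in the SVM literature, not taken from this excerpt), the maximum-margin hyperplane just described is the solution of a quadratic program; the margin width works out to 2/‖w‖, which is why minimizing ‖w‖ maximizes the separation:

\text{Given training data } (\mathbf{x}_1, y_1), \dots, (\mathbf{x}_n, y_n) \text{ with } y_i \in \{-1, +1\}:
\begin{aligned}
&\text{separating hyperplane:} && \mathbf{w}^{\top}\mathbf{x} - b = 0,\\
&\text{hard-margin SVM:} && \min_{\mathbf{w},\, b}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}
\quad \text{subject to} \quad y_i\,(\mathbf{w}^{\top}\mathbf{x}_i - b) \ge 1,\quad i = 1, \dots, n.
\end{aligned}

The training points for which the constraint holds with equality, y_i(w·x_i − b) = 1, lie on the margin boundary and are the support vectors that give the method its name.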
https://en.wikipedia.org/wiki/Video%20Electronics%20Standards%20Association
VESA (), formally known as Video Electronics Standards Association, is an American technical standards organization for computer display standards. The organization was incorporated in California in July 1989 and has its office in San Jose. It claims a membership of over 300 companies. In November 1988, NEC Home Electronics announced its creation of the association to develop and promote a Super VGA computer display standard as a successor to IBM's proprietary Video Graphics Array (VGA) display standard. Super VGA enabled graphics display resolutions up to 800×600 pixels, compared to VGA's maximum resolution of 640×480 pixels—a 56% increase. The organization has since issued several additional standards related to computer video displays. Widely used VESA standards include DisplayHDR, DisplayPort, and Flat Display Mounting Interface. Standards Feature connector (VFC), obsolete connector that was often present on older videocards, used as an 8-bit video bus to other devices VESA Advanced Feature Connector (VAFC), newer version of the VFC that widens the bus to either a 16-bit or 32-bit bus VESA Local Bus (VLB), once used as a fast video bus (akin to the more recent Accelerated Graphics Port (AGP)) VESA BIOS Extensions (VBE), used for enabling standard support for advanced video modes Display Data Channel (DDC), a data link protocol which allows a host device to control an attached display and communicate EDID, DPMS, MCCS and similar messages Extended Display Identification Data (E-EDID), a data format for display identification data Monitor Control Command Set (MCCS), a message protocol for controlling display parameters such as brightness, contrast, display orientation from the host device DisplayID, display identification data format, which is a replacement for E-EDID VESA Display Power Management Signaling (DPMS), which allows monitors to be queried on the types of power saving modes they support Digital Packet Video Link (DPVL), a display link standard that allows to update only portions of the screen VESA Stereo, a standard 3-pin connector for synchronization of stereoscopic images with LC shutter glasses Flat Display Mounting Interface (FDMI) Generalized Timing Formula (GTF), video timing standard Coordinated Video Timings (CVT), a replacement for GTF VESA Video Interface Port (VIP), a digital video interface standard DisplayPort (DP), a digital display interface standard VESA Enhanced Video Connector, an obsolete standard for reducing the number of cables around computers DisplayHDR, a standard to simplify HDR specifications for the display industry and consumers Company membership The following major companies are members of VESA. AMD Apple Inc. Canon Inc. Casio Dell Dolby Laboratories Foxconn Fujitsu Gigabyte Technology Google HP HTC Huawei Ikegami Tsushinki Intel Corporation JVC Kenwood Lenovo LG Electronics Maxell Microsoft NEC Nvidia Panasonic Parade Technologies Samsung Electronics Seik
https://en.wikipedia.org/wiki/WABC-TV
WABC-TV (channel 7) is a television station in New York City, serving as the flagship of the ABC network. Owned and operated by the network's ABC Owned Television Stations division, the station maintains studios in the Lincoln Square neighborhood of Manhattan, adjacent to ABC's corporate headquarters; its transmitter is located at the Empire State Building. WABC-TV is best known in broadcasting circles for its version of the Eyewitness News format and for its morning show, syndicated nationally by corporate cousin Disney General Entertainment Content. History As WJZ-TV (1948–1953) The station signed on August 10, 1948, as WJZ-TV, the first of three television stations signed on by ABC during that same year, with WENR-TV in Chicago and WXYZ-TV in Detroit being the other two. Channel 7's call letters came from its then-sister radio station, WJZ. In its early years, WJZ-TV was programmed much like an independent station, as the ABC television network was still, for the most part, in its very early stages of development; the ABC-owned stations did air some common programming during this period, especially after the 1949 fall season when the network's prime time schedule began to expand. The station's original transmitter site was located at The Pierre Hotel at 2 East 61st Street, before moving to the Empire State Building a few years later. The station's original studios were located at 77 West 66th Street, with additional studios at 7 West 66th Street. A tunnel linked ABC studios at 7 West 66th Street to the lobby of the Hotel des Artistes, a block north on West 67th Street. Another studio inside the Hotel des Artistes was used for Eyewitness News Conference. As WABC-TV (1953–present) The station's call letters were changed to WABC-TV on March 1, 1953, after ABC merged its operations with United Paramount Theatres, a firm which was broken off from former parent company Paramount Pictures by decree of the U.S. government. The WJZ-TV callsign was later reassigned to Westinghouse Broadcasting (the original owners of WJZ radio in New York) as an historical nod in 1957 for their newly acquired television station in Baltimore – a station that was, by coincidence, an ABC affiliate until 1995. As part of ABC's expansion program, initiated in 1977, ABC built 7 Lincoln Square on the southeast corner of West 67th Street and Columbus Avenue, on the site of an abandoned moving and storage warehouse. At about the same time, construction was started at 30 West 67th Street on the site of a former parking lot. Both buildings were completed in June 1979 and WABC-TV moved its offices from 77 West 66th Street to 7 Lincoln Square. On September 11, 2001, the transmitter facilities of WABC-TV, as well as eight other local television stations and several radio stations, were destroyed when two hijacked airplanes crashed into and destroyed the north and south towers of the World Trade Center. WABC-TV's transmitter maintenance engineer Donald DiFranco died in the atta
https://en.wikipedia.org/wiki/Metcalfe%27s%20law
Metcalfe's law states that the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system (n²). The law is named for Robert Metcalfe and was first proposed in 1980, albeit not in terms of users, but rather of "compatible communicating devices" (e.g., fax machines, telephones). It later became associated with users on the Ethernet after a 13 September 1993 Forbes article by George Gilder.

Network effects

Metcalfe's law characterizes many of the network effects of communication technologies and networks such as the Internet, social networking and the World Wide Web. Former Chairman of the U.S. Federal Communications Commission Reed Hundt said that this law gives the most understanding to the workings of the Internet. Metcalfe's Law is related to the fact that the number of unique possible connections in a network of n nodes can be expressed mathematically as the triangular number n(n − 1)/2, which is asymptotically proportional to n². The law has often been illustrated using the example of fax machines: a single fax machine is useless, but the value of every fax machine increases with the total number of fax machines in the network, because the total number of people with whom each user may send and receive documents increases. Likewise, in social networks, the greater the number of users with the service, the more valuable the service becomes to the community.

History and derivation

Metcalfe's law was conceived in 1983 in a presentation to the 3Com sales force. It stated that the value V would be proportional to the total number of possible connections, or approximately n squared. The original incarnation was careful to delineate between a linear cost (C·n), non-linear growth (n²), and a non-constant proportionality factor A, affinity. The break-even point where costs are recouped is given by:

C·n = A·n(n − 1)/2

At some size, the right-hand side of the equation, value, exceeds the cost, and A describes the relationship between size and net value added. For large n, net network value is then:

NV ≈ A·n²/2 − C·n

Metcalfe properly dimensioned A as "value per user". Affinity is also a function of network size, and Metcalfe correctly asserted that A must decline as n grows large. In a 2006 interview, Metcalfe stated:

Growth of n

Network size, and hence value, does not grow unbounded but is constrained by practical limitations such as infrastructure, access to technology, and bounded rationality such as Dunbar's number. It is almost always the case that user growth reaches a saturation point. With technologies, substitutes, competitors and technical obsolescence constrain growth of n. Growth of n is typically assumed to follow a sigmoid function such as a logistic curve or Gompertz curve.

Density

A is also governed by the connectivity, or density, of the network topology. In an undirected network, every edge connects two nodes, so a network with m edges has 2m node-edge endpoints. The proportion of nodes in actual contact is given by c = 2m/(n(n − 1)). The maximum possible number
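Solving the break-even condition for n makes the threshold explicit; this one-step rearrangement is added here for clarity and is not part of the original text:

C\,n = A\,\frac{n\,(n-1)}{2}
\quad\Longrightarrow\quad
n - 1 = \frac{2C}{A}
\quad\Longrightarrow\quad
n_{\text{break-even}} = \frac{2C}{A} + 1,

so the network yields positive net value once n exceeds 2C/A + 1 users.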
https://en.wikipedia.org/wiki/Eliezer%20Yudkowsky
Eliezer S. Yudkowsky ( ; born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies. Work in artificial intelligence safety Goal learning and incentives in software systems Yudkowsky's views on the safety challenges future generations of AI systems pose are discussed in Stuart Russell's and Peter Norvig's undergraduate textbook Artificial Intelligence: A Modern Approach. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time: In response to the instrumental convergence concern, that autonomous decision-making systems with poorly designed goals would have default incentives to mistreat humans, Yudkowsky and other MIRI researchers have recommended that work be done to specify software agents that converge on safe default behaviors even when their goals are misspecified. Capabilities forecasting In the intelligence explosion scenario hypothesized by I. J. Good, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligent. Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in detail, while citing Yudkowsky on the risk that anthropomorphizing advanced AI systems will cause people to misunderstand the nature of an intelligence explosion. "AI might make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of 'village idiot' and 'Einstein' as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general." In Artificial Intelligence: A Modern Approach, Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various computer science tasks, then intelligence explosion may not be possible. Time op-ed In a 2023 op-ed in Time, Yudkowsky discussed the risk of artificial intelligence and proposed action that could be taken to limit it, including a total halt on the development of AI or even "destroy[ing] a rogue datacenter by airstrike". The article helped introduce the debate about AI alignment to the mainstream, leading a reporter to ask President Joe Biden a question about AI safety at a press briefing. Rationality writing Between 2006 and 2009
https://en.wikipedia.org/wiki/Plankalk%C3%BCl
Plankalkül () is a programming language designed for engineering purposes by Konrad Zuse between 1942 and 1945. It was the first high-level programming language to be designed for a computer. Kalkül is the German term for a formal system—as in Hilbert-Kalkül, the original name for the Hilbert-style deduction system—so Plankalkül refers to a formal system for planning. History of programming In the domain of creating computing machines, Zuse was self-taught, and developed them without knowledge about other mechanical computing machines that existed already – although later on (building the Z3) being inspired by Hilbert's and Ackermann's book on elementary mathematical logic (see Principles of Mathematical Logic). To describe logical circuits, Zuse invented his own diagram and notation system, which he called "combinatorics of conditionals" (). After finishing the Z1 in 1938, Zuse discovered that the calculus he had independently devised already existed and was known as propositional calculus. What Zuse had in mind, however, needed to be much more powerful (propositional calculus is not Turing-complete and is not able to describe even simple arithmetic calculations). In May 1939, he described his plans for the development of what would become Plankalkül. He wrote the following in his notebook: While working on his doctoral dissertation, Zuse developed the first known formal system of algorithm notation capable of handling branches and loops. In 1942 he began writing a chess program in Plankalkül. In 1944, Zuse met with the German logician and philosopher Heinrich Scholz, who expressed appreciation for Zuse's utilization of logical calculus. In 1945, Zuse described Plankalkül in an unpublished book. The collapse of Nazi Germany, however, prevented him from submitting his manuscript. At that time the only two working computers in the world were ENIAC and Harvard Mark I, neither of which used a compiler, and ENIAC needed to be reprogrammed for each task by changing how the wires were connected. Although most of his computers were destroyed by Allied bombs, Zuse was able to rescue one machine, the Z4, and move it to the Alpine village of Hinterstein (part of Bad Hindelang). Unable to continue building computers – which was also forbidden by the Allied Powers – Zuse devoted his time to the development of a higher-level programming model and language. In 1948 he published a paper in the Archiv der Mathematik and presented at the Annual Meeting of the GAMM. His work failed to attract much attention. In a 1957 lecture, Zuse expressed his hope that Plankalkül, "after some time as a Sleeping Beauty, will yet come to life." He expressed disappointment that the designers of ALGOL 58 never acknowledged the influence of Plankalkül on their own work. Plankalkül was more comprehensively published in 1972. The first compiler was implemented by Joachim Hohmann in his 1975 dissertation. Other independent implementations followed in 1998 and 2000 at the Free
https://en.wikipedia.org/wiki/Intel%208008
The Intel 8008 ("eight-thousand-eight" or "eighty-oh-eight") is an early byte-oriented microprocessor designed by Computer Terminal Corporation (CTC), implemented and manufactured by Intel, and introduced in April 1972. It is an 8-bit CPU with an external 14-bit address bus that could address 16 KB of memory. Originally known as the 1201, the chip was commissioned by Computer Terminal Corporation (CTC) to implement an instruction set of their design for their Datapoint 2200 programmable terminal. As the chip was delayed and did not meet CTC's performance goals, the 2200 ended up using CTC's own TTL-based CPU instead. An agreement permitted Intel to market the chip to other customers after Seiko expressed an interest in using it for a calculator. History CTC formed in San Antonio in 1968 under the direction of Austin O. "Gus" Roche and Phil Ray, both NASA engineers. Roche, in particular, was primarily interested in producing a desktop computer. However, given the immaturity of the market, the company's business plan mentioned only a Teletype Model 33 ASR replacement, which shipped as the Datapoint 3300. The case was deliberately designed to fit in the same space as an IBM Selectric typewriter and used a video screen shaped to have the same aspect ratio as an IBM punched card. Although commercially successful, the 3300 had ongoing heat problems due to the amount of circuitry packed into such a small space. In order to address the heating and other issues, a re-design started that featured the CPU part of the internal circuitry re-implemented on a single chip. Looking for a company able to produce their chip design, Roche turned to Intel, then primarily a vendor of memory chips. Roche met with Bob Noyce, who expressed concern with the concept; John Frassanito recalls that "Noyce said it was an intriguing idea, and that Intel could do it, but it would be a dumb move. He said that if you have a computer chip, you can only sell one chip per computer, while with memory, you can sell hundreds of chips per computer." Another major concern was that Intel's existing customer base purchased their memory chips for use with their own processor designs; if Intel introduced their own processor, they might be seen as a competitor, and their customers might look elsewhere for memory. Nevertheless, Noyce agreed to a $50,000 development contract in early 1970. Texas Instruments (TI) was also brought in as a second supplier. TI was able to make samples of the 1201 based on Intel drawings, but these proved to be buggy and were rejected. Intel's own versions were delayed. CTC decided to re-implement the new version of the terminal using discrete TTL instead of waiting for a single-chip CPU. The new system was released as the Datapoint 2200 in the spring 1970, with their first sale to General Mills on May 25, 1970. CTC paused development of the 1201 after the 2200 was released, as it was no longer needed. Six months later, Seiko approached Intel, expressing an intere
https://en.wikipedia.org/wiki/DuMont%20Television%20Network
The DuMont Television Network (also known as the DuMont Network, DuMont Television, simply DuMont/Du Mont, or (incorrectly) Dumont ) was one of America's pioneer commercial television networks, rivaling NBC and CBS for the distinction of being first overall in the United States. It was owned by Allen B. DuMont Laboratories, a television equipment and set manufacturer, and began operation on April 13, 1940. The network was hindered by the prohibitive cost of broadcasting, a freeze on new television stations in 1948 by the Federal Communications Commission (FCC) that restricted the network's growth, and even the company's partner, Paramount Pictures. Despite several innovations in broadcasting and the creation of one of television's biggest stars of the 1950s—Jackie Gleason—the network never found itself on solid financial ground. Forced to expand on UHF channels during an era when UHF tuning was not yet a standard feature on television sets, DuMont fought an uphill battle for program clearances outside its three owned-and-operated stations in New York City, Washington, D.C., and Pittsburgh, ultimately ending network operations on August 6, 1956. DuMont's latter-day obscurity, caused mainly by the destruction of its extensive program archive by the 1970s, has prompted TV historian David Weinstein to refer to it as the "forgotten network". A few popular DuMont programs, such as Cavalcade of Stars and Emmy Award winner Life Is Worth Living, appear in television retrospectives or are mentioned briefly in books about U.S. television history. History Origins Allen B. DuMont Laboratories was founded in 1931 by Allen B. DuMont with only $1,000, and a laboratory in his basement. He and his staff were responsible for many early technical innovations, including the first consumer all-electronic television receiver in 1938. Their most revolutionary contribution came when the team successfully extended the life of a cathode ray tube from 24 to 1000 hours, making television sets a practical product for consumers. The company's television receivers soon became the standard of the industry. In 1942, DuMont worked with the US Army in developing radar technology during World War II. This brought in $5 million for the company. Early sales of television receivers were hampered by the lack of regularly scheduled programming being broadcast. A few months after selling his first set in 1938, DuMont opened his own New York-area experimental television station (W2XVT) in Passaic, New Jersey. In 1940, the station moved to Manhattan as W2XWV on channel 4 and commenced broadcasting on April 13, 1940. Unlike CBS and NBC, which reduced their hours of television broadcasting during World War II, DuMont continued full-scale experimental and commercial broadcasts throughout the war. In 1944, W2XWV received a commercial license, the third in New York, under the call letters WABD (derived from DuMont's initials). In 1945, it moved to channel 5. On May 19, 1945, DuMont opened
https://en.wikipedia.org/wiki/Farscape
Farscape is an Australian-American science fiction television series, produced originally for the Nine Network. It premiered in the US on Sci-Fi Channel's SciFi Friday, 19 March 1999, at 8:00 pm EST as their anchor series. The series was conceived by Rockne S. O'Bannon and produced by The Jim Henson Company and Hallmark Entertainment. The Jim Henson Company was responsible for the various alien make-up and prosthetics, and two regular characters (the animatronic puppets Rygel and Pilot) are entirely Creature Shop creations.

Although the series was planned for five seasons, it was abruptly cancelled after production had ended on its fourth season, ending the series on a cliffhanger. Co-producer Brian Henson later secured the rights to Farscape, paving the way for a three-hour miniseries to wrap up the cliffhanger, titled Farscape: The Peacekeeper Wars, which Henson directed. In 2007, it was announced that the creator was returning for a web-series, but production has been repeatedly delayed. A comic book miniseries was released in December 2008 that was in continuity with both the series and the hoped-for webisodes. In 2019, Amazon Prime Video released a remaster of Farscape: The Peacekeeper Wars.

Overview

Farscape features a diverse ensemble of characters who are initially escaping from corrupt authorities in the form of a militaristic organization called the Peacekeepers. The protagonists live inside a large bio-mechanical ship called Moya, which is a living entity. In the first episode, they are joined by the main character, John Crichton (Ben Browder), a modern-day American astronaut who accidentally flies into a wormhole near Earth during an experimental space flight. On the same day, another stranger is picked up by Moya: a Peacekeeper named Aeryn Sun (Claudia Black). Despite his best intentions, Crichton makes enemies, the primary one being Scorpius. There are a few standalone plots, but the show gradually unfolds progressive story arcs, beginning with their recapture by the Peacekeepers, followed by Crichton's search to find another wormhole back to Earth, and an eventual arms race for wormhole technology weapons. Secondary arcs explore the way in which the characters change due to their influences and adventures together, most notably Crichton and his obsession with wormhole technology, his relationship with Aeryn, and the neural clone of Scorpius in his brain that haunts him.

Production and broadcast

The series was originally conceived in the early 1990s by Rockne S. O'Bannon and Brian Henson under the title Space Chase. The series is told in a serialized format, with each episode involving a self-contained story while contributing to a larger storyline. Nearly the entire cast originates from Australia and New Zealand, with the exception of Ben Browder, who is an American actor. Farscape's characters frequently make use of slang such as "frell", "dren" and "hezmana" as a substitute for English expletives. Farscape first ran on the Au
https://en.wikipedia.org/wiki/DR-DOS
DR-DOS (written as DR DOS, without a hyphen, in versions up to and including 6.0) is a disk operating system for IBM PC compatibles. Upon its introduction in 1988, it was the first DOS attempting to be compatible with IBM PC DOS and MS-DOS (which were the same product sold under different names ). DR-DOS was developed by Gary A. Kildall's Digital Research and derived from Concurrent PC DOS 6.0, which was an advanced successor of CP/M-86. As ownership changed, various later versions were produced with names including Novell DOS and Caldera OpenDOS. History Origins in CP/M Digital Research's original CP/M for the 8-bit Intel 8080- and Z-80-based systems spawned numerous spin-off versions, most notably CP/M-86 for the Intel 8086/8088 family of processors. Although CP/M had dominated the market since the mid-1970s, and was shipped with the vast majority of non-proprietary-architecture personal computers, the IBM PC in 1981 brought the beginning of what was eventually to be a massive change. IBM originally approached Digital Research in 1980, seeking an x86 version of CP/M. However, there were disagreements over the contract, and IBM withdrew. Instead, a deal was struck with Microsoft, who purchased another operating system, 86-DOS, from Seattle Computer Products (SCP). This became Microsoft MS-DOS and IBM PC DOS. 86-DOS's command structure and application programming interface imitated that of CP/M 2.2 (with BDOS 2.2). Digital Research threatened legal action, claiming PC DOS/MS-DOS to be too similar to CP/M. In early 1982, IBM settled by agreeing to sell Digital Research's x86 version of CP/M, CP/M-86, alongside PC DOS. However, PC DOS sold for while CP/M-86 had a $240 price tag. The proportion of PC buyers prepared to spend six times as much to buy CP/M-86 was very small, and the limited availability of compatible application software, at first in Digital Research's favor, was only temporary. Digital Research fought a long losing battle to promote CP/M-86 and its multi-tasking multi-user successors MP/M-86 and Concurrent CP/M-86, and eventually decided that they could not beat the Microsoft-IBM lead in application software availability, so they modified Concurrent CP/M-86 to allow it to run the same applications as MS-DOS and PC DOS. This was shown publicly in December 1983 and shipped in March 1984 as Concurrent DOS 3.1 (a.k.a. CDOS with BDOS 3.1) to hardware vendors. While Concurrent DOS continued to evolve in various flavours over the years to eventually become Multiuser DOS and REAL/32, it was not specifically tailored for the desktop market and too expensive for single-user applications. Therefore, over time two attempts were made to sideline the product: In 1985, Digital Research developed DOS Plus 1.0 to 2.1, a stripped-down and modified single-user derivative of Concurrent DOS 4.1 and 5.0, which ran applications for both platforms, and allowed switching between several tasks as did the original CP/M-86. Its DOS compatibility was lim
https://en.wikipedia.org/wiki/Tate
Tate is an institution that houses, in a network of four art galleries, the United Kingdom's national collection of British art, and international modern and contemporary art. It is not a government institution, but its main sponsor is the UK Department for Culture, Media and Sport. The name "Tate" is used also as the operating name for the corporate body, which was established by the Museums and Galleries Act 1992 as "The Board of Trustees of the Tate Gallery". The gallery was founded in 1897 as the National Gallery of British Art. When its role was changed to include the national collection of modern art as well as the national collection of British art, in 1932, it was renamed the Tate Gallery after sugar magnate Henry Tate of Tate & Lyle, who had laid the foundations for the collection. The Tate Gallery was housed in the current building occupied by Tate Britain, which is situated in Millbank, London. In 2000, the Tate Gallery transformed itself into the current-day Tate, consisting of a network of four museums: Tate Britain, which displays the collection of British art from 1500 to the present day; Tate Modern, also in London, which houses the Tate's collection of British and international modern and contemporary art from 1900 to the present day; Tate Liverpool (founded in 1988), which has the same purpose as Tate Modern but on a smaller scale; and Tate St Ives in Cornwall (founded in 1993), which displays modern and contemporary art by artists who have connections with the area. All four museums share the Tate Collection. One of the Tate's most publicised art events is the awarding of the annual Turner Prize, which takes place at Tate Britain every other year (taking place at venues outside of London in alternate years). History and development The original Tate was called the National Gallery of British Art, situated on Millbank, Pimlico, London at the site of the former Millbank Prison. The idea of a National Gallery of British Art was first proposed in the 1820s by Sir John Leicester, Baron de Tabley. It took a step nearer when Robert Vernon gave his collection to the National Gallery in 1847. A decade later John Sheepshanks gave his collection to the South Kensington Museum (later the Victoria & Albert Museum), known for years as the National Gallery of Art (the same title as the Tate Gallery had). Forty years later Sir Henry Tate who was a sugar magnate and a major collector of Victorian art, offered to fund the building of the gallery to house British Art on the condition that the State pay for the site and revenue costs. Henry Tate also donated his own collection to the gallery. It was initially a collection solely of modern British art, concentrating on the works of modern—that is Victorian era—painters. It was controlled by the National Gallery until 1954. Following the death of Sir Hugh Lane in the sinking of the RMS Lusitania in 1915, an oversight in his will meant that the collection of European modern art he had intended
https://en.wikipedia.org/wiki/Kent%20Pitman
Kent M. Pitman (KMP) is a programmer who has been involved for many years in the design, implementation, and use of systems based on the programming languages Lisp and Scheme. , he has been President of HyperMeta, Inc. Pitman was chair of the ad hoc group (part of X3J13) that designed the Common Lisp Error and Condition System and is author of the proposal document that was ultimately adopted, and many papers on Lisp programming and computer programming in general. While in high school, he saw output from one of the guess the animal pseudo-artificial intelligence (AI) games then popular. He considered implementing a version of the program in BASIC, but once at the Massachusetts Institute of Technology (MIT), instead he implemented it in several dialects of Lisp, including Maclisp. He was a technical contributor to X3J13, the American National Standards Institute (ANSI) subcommittee that standardized Common Lisp and contributed to the design of the programming language. He prepared the document that became ANSI Common Lisp, the Common Lisp HyperSpec (a hypertext conversion of the standard), and the document that became International Organization for Standardization (ISO) ISLISP. He can often be found on the Usenet newsgroup comp.lang.lisp, where he is involved in discussions about Lisp and computer programming, and insider perspectives on Lisp evolution and Common Lisp standardization. In some posts there, he has expressed his opinion on open-source software, including open source implementations of Lisp and Scheme, as something that should be judged individually on its essential merits, rather than automatically considered good merely by being free or open. References External links Lisp (programming language) people Living people Year of birth missing (living people) Massachusetts Institute of Technology alumni
https://en.wikipedia.org/wiki/Role-based%20access%20control
In computer systems security, role-based access control (RBAC) or role-based security is an approach to restricting system access to authorized users, and to implementing mandatory access control (MAC) or discretionary access control (DAC).

Role-based access control is a policy-neutral access control mechanism defined around roles and privileges. The components of RBAC such as role-permissions, user-role and role-role relationships make it simple to perform user assignments. A study by NIST has demonstrated that RBAC addresses many needs of commercial and government organizations. RBAC can be used to facilitate administration of security in large organizations with hundreds of users and thousands of permissions. Although RBAC is different from MAC and DAC access control frameworks, it can enforce these policies without any complication.

Design

Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user, or changing a user's department.

Three primary rules are defined for RBAC:

Role assignment: A subject can exercise a permission only if the subject has selected or been assigned a role.
Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.
Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject's active role. With rules 1 and 2, this rule ensures that users can exercise only permissions for which they are authorized.

Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles.

With the concepts of role hierarchy and constraints, one can control RBAC to create or simulate lattice-based access control (LBAC). Thus RBAC can be considered to be a superset of LBAC.

When defining an RBAC model, the following conventions are useful:

S = Subject = A person or automated agent
R = Role = Job function or title which defines an authority level
P = Permissions = An approval of a mode of access to a resource
SE = Session = A mapping involving S, R and/or P
SA = Subject Assignment
PA = Permission Assignment
RH = Partially ordered Role Hierarchy. RH can also be written: ≥ (The notation: x ≥ y means that x inherits the permissions of y.)

A subject can have multiple roles.
A role can have multiple subjects.
A role can have many permissions.
A permission can be assigned to many roles.
An operation can be assigned to many permissions.
A permission can be assigned to many operations.

A constraint places a restrict
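The rules above are straightforward to model in code. This C++ sketch is a toy model with hypothetical names (Rbac, check, and the example roles); it implements only the flat subject → role → permission lookup (the SA and PA relations) and omits sessions, active roles, and the role hierarchy RH:

#include <iostream>
#include <map>
#include <set>
#include <string>

using Role = std::string;
using Permission = std::string;

// A toy RBAC model: permissions are reached only through roles,
// never assigned to subjects directly.
class Rbac {
public:
    void assignPermission(const Role& r, const Permission& p) { pa_[r].insert(p); }         // PA
    void assignRole(const std::string& subject, const Role& r) { sa_[subject].insert(r); }  // SA

    // Permission authorization: a subject may exercise a permission only
    // if some role assigned to that subject carries the permission.
    bool check(const std::string& subject, const Permission& p) const {
        auto it = sa_.find(subject);
        if (it == sa_.end()) return false;
        for (const Role& r : it->second) {
            auto pr = pa_.find(r);
            if (pr != pa_.end() && pr->second.count(p)) return true;
        }
        return false;
    }
private:
    std::map<std::string, std::set<Role>> sa_;  // subject assignment
    std::map<Role, std::set<Permission>> pa_;   // permission assignment
};

int main() {
    Rbac rbac;
    rbac.assignPermission("engineer", "repo:push");
    rbac.assignRole("alice", "engineer");
    std::cout << std::boolalpha
              << rbac.check("alice", "repo:push") << '\n'   // true: via the engineer role
              << rbac.check("bob",   "repo:push") << '\n';  // false: no role assigned
}

A fuller implementation would add a session object holding the subject's currently active roles (rules 1 and 2) and a partial order over roles so that higher-level roles inherit the permissions of sub-roles.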
https://en.wikipedia.org/wiki/Political%20corruption
Political corruption is the use of powers by government officials or their network contacts for illegitimate private gain. Forms of corruption vary, but can include bribery, lobbying, extortion, cronyism, nepotism, parochialism, patronage, influence peddling, graft, and embezzlement. Corruption may facilitate criminal enterprise such as drug trafficking, money laundering, and human trafficking, though it is not restricted to these activities. Misuse of government power for other purposes, such as repression of political opponents and general police brutality, is also considered political corruption. Over time, corruption has been defined differently. For example, in a simple context, while performing work for a government or as a representative, it is unethical to accept a gift. Any free gift could be construed as a scheme to bias the recipient toward the giver's interests. In most cases, the gift is seen as an intention to seek certain favors, such as a promotion, a tip in order to win a contract or job, or an exemption from certain tasks, as when a junior worker hands the gift to a senior employee who can be key in winning the favor. Some forms of corruption – now called "institutional corruption" – are distinguished from bribery and other kinds of obvious personal gain. For example, certain state institutions may consistently act against the interests of the public, such as by misusing public funds for their own interest, or by engaging in illegal or immoral behavior with impunity. Bribery and overt criminal acts by individuals may not necessarily be evident, but the institution nonetheless acts immorally as a whole. The mafia state phenomenon is an example of institutional corruption. An illegal act by an officeholder constitutes political corruption only if the act is directly related to their official duties, is done under color of law or involves trading in influence. The activities that constitute illegal corruption differ depending on the country or jurisdiction. For instance, some political funding practices that are legal in one place may be illegal in another. In some cases, government officials have broad or ill-defined powers, which make it difficult to distinguish between legal and illegal actions. Worldwide, bribery alone is estimated to involve over 1 trillion US dollars annually. A state of unrestrained political corruption is known as a kleptocracy, literally meaning "rule by thieves". Definition Corruption is a difficult concept to define. A proper definition of corruption requires a multi-dimensional approach. Machiavelli popularized the oldest dimension of corruption as the decline of virtue among political officials and the citizenry. The psychologist Horst-Eberhard Richter's modernized version defines corruption as the undermining of political values. Corruption as the decline of virtue has been criticized as too broad and far too subjective to be universalized. The second dimension of corruption is corruption as devia
https://en.wikipedia.org/wiki/Bzip2
bzip2 is a free and open-source file compression program that uses the Burrows–Wheeler algorithm. It only compresses single files and is not a file archiver. It relies on separate external utilities for tasks such as handling multiple files, encryption, and archive-splitting. bzip2 was initially released in 1996 by Julian Seward. It compresses most files more effectively than older LZW and Deflate compression algorithms but is slower. bzip2 is particularly efficient for text data, and decompression is relatively fast. The algorithm uses several layers of compression techniques, such as run-length encoding (RLE), Burrows–Wheeler transform (BWT), move-to-front transform (MTF), and Huffman coding. bzip2 compresses data in blocks between 100 and 900 kB and uses the Burrows–Wheeler transform to convert frequently recurring character sequences into strings of identical letters. The move-to-front transform and Huffman coding are then applied. The compression performance is asymmetric, with decompression being faster than compression. The algorithm has gone through multiple maintainers since its initial release, with Micah Snyder being the maintainer since June 2021. There have been some modifications to the algorithm, such as pbzip2, which uses multi-threading to improve compression speed on multi-CPU and multi-core computers. bzip2 is suitable for use in big data applications with cluster computing frameworks like Hadoop and Apache Spark, as the compressed blocks can be independently decompressed. History Seward made the first public release of bzip2, version 0.15, in July 1996. The compressor's stability and popularity grew over the next several years, and Seward released version 1.0 in late 2000. Following a nine-year hiatus of updates for the project since 2010, on 4 June 2019 Federico Mena accepted maintainership of the bzip2 project. Since June 2021, the maintainer is Micah Snyder. Implementation bzip2 uses several layers of compression techniques stacked on top of each other, which occur in the following order during compression and the reverse order during decompression: Run-length encoding (RLE) of initial data. Burrows–Wheeler transform (BWT), or block sorting. Move-to-front (MTF) transform. Run-length encoding (RLE) of MTF result. Huffman coding. Selection between multiple Huffman tables. Unary base-1 encoding of Huffman table selection. Delta encoding (Δ) of Huffman-code bit lengths. Sparse bit array showing which symbols are used. Any sequence of 4 to 255 consecutive duplicate symbols is replaced by the first 4 symbols and a repeat length between 0 and 251. Thus the sequence AAAAAAABBBBCCCD is replaced with AAAA\3BBBB\0CCCD, where \3 and \0 represent byte values 3 and 0 respectively. Runs of symbols are always transformed after 4 consecutive symbols, even if the run-length is set to zero, to keep the transformation reversible. In the worst case, it can cause an expansion of 1.25, and in the best case, a reduction to <0
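The initial run-length-encoding step can be sketched in a few lines of Python, written from the description above rather than taken from the bzip2 reference implementation:

    # Sketch of bzip2's initial run-length encoding (illustrative only).
    def rle_encode(data: bytes) -> bytes:
        out, i = bytearray(), 0
        while i < len(data):
            run = 1
            while i + run < len(data) and run < 255 and data[i + run] == data[i]:
                run += 1
            if run >= 4:
                extra = min(run - 4, 251)
                out += data[i:i + 4]       # first four symbols verbatim
                out.append(extra)          # repeat length 0..251, written even when 0
                i += 4 + extra
            else:
                out += data[i:i + run]     # short runs are copied unchanged
                i += run
        return bytes(out)

    assert rle_encode(b"AAAAAAABBBBCCCD") == b"AAAA\x03BBBB\x00CCCD"

The assertion reproduces the example above: seven As become AAAA followed by byte value 3, and exactly four Bs still receive a length byte (zero), which keeps the transformation reversible.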
https://en.wikipedia.org/wiki/Reinforcement%20learning
Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process and they target large Markov decision processes where exact methods become infeasible. Introduction Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality. Basic reinforcement learning is modeled as a Markov decision process consisting of: a set of environment and agent states, S; a set of actions, A, of the agent; P_a(s, s'), the probability of transition (at time t) from state s to state s' under action a; and R_a(s, s'), the immediate reward after transition from s to s' with action a. The purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals can learn to engage in behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning. A basic reinforcement learning agent AI interacts w
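As a concrete instance of this formulation, the following tabular Q-learning sketch learns action values for a made-up two-state, two-action MDP; the environment, reward, and constants are hypothetical and chosen only to show the exploration/exploitation balance and the reward-driven update:

    # Minimal tabular Q-learning on a toy MDP (illustrative only).
    import random

    states, actions = (0, 1), (0, 1)
    Q = {(s, a): 0.0 for s in states for a in actions}
    alpha, gamma, epsilon = 0.1, 0.9, 0.1   # step size, discount, exploration rate

    def step(s, a):
        """Toy stand-ins for P_a(s, s') and R_a(s, s')."""
        s_next = a                          # the action deterministically picks the next state
        reward = 1.0 if (s == 0 and a == 1) else 0.0
        return s_next, reward

    s = 0
    for _ in range(5000):
        if random.random() < epsilon:       # explore uncharted actions...
            a = random.choice(actions)
        else:                               # ...or exploit current knowledge
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])   # Q-learning update
        s = s_next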
https://en.wikipedia.org/wiki/TOPS-20
The TOPS-20 operating system by Digital Equipment Corporation (DEC) is a proprietary OS used on some of DEC's 36-bit mainframe computers. The Hardware Reference Manual was described as for the "DECsystem-10/DECSYSTEM-20 Processor" (meaning the DEC PDP-10 and the DECSYSTEM-20). TOPS-20 began in 1969 as the TENEX operating system of Bolt, Beranek and Newman (BBN) and shipped as a product by DEC starting in 1976. TOPS-20 is almost entirely unrelated to the similarly named TOPS-10, but it was shipped with the PA1050 TOPS-10 Monitor Calls emulation facility which allowed most, but not all, TOPS-10 executables to run unchanged. As a matter of policy, DEC did not update PA1050 to support later TOPS-10 additions except where required by DEC software. TOPS-20 competed with TOPS-10, ITS and WAITS—all of which were notable time-sharing systems for the PDP-10 during this timeframe. TENEX TOPS-20 was based upon the TENEX operating system, which had been created by Bolt Beranek and Newman for Digital's PDP-10 computer. After Digital started development of the KI-10 version of the PDP-10, an issue arose: by this point TENEX was the most popular customer-written PDP-10 operating system, but it would not run on the new, faster KI-10s. To correct this problem, the DEC PDP-10 sales manager purchased the rights to TENEX from BBN and set up a project to port it to the new machine. In the end, very little of the original TENEX code remained, and Digital ultimately named the resulting operating system TOPS-20. PA1050 Some of what came with TOPS-20 was merely an emulation of the TOPS-10 operating system's calls. These were known as UUOs, standing for Unimplemented User Operation, and were needed both to run compilers, which were not 20-specific, and to run user programs written in the languages those compilers supported. The package that was mapped into a user's address space was named PA1050: PA as in PAT as in compatibility; 10 as in DEC or PDP 10; 50 as in a PDP 10 Model 50, 10/50, 1050. Sometimes PA1050 was referred to as PAT, a name that fit well given that PA1050 "was simply unprivileged user-mode code" that "performed the requested action, using JSYS calls where necessary." TOPS-20 capabilities The major ways to get at TOPS-20 capabilities, and what made TOPS-20 important, were: commands entered via the command processor, EXEC.EXE, and JSYS (Jump to System) calls from MACro-language (.MAC) programs. The "EXEC" accomplished its work primarily using internal code, including calls via JSYS requesting services from "GALAXY" components (e.g. spoolers). Command processor Rather advanced for its day were some TOPS-20-specific features: command completion, and dynamic help in the form of noise words - typing DIR and then pressing the ESCape key resulted in DIRectory (of files); typing I and pressing the ESCape key resulted in Information (about). One could then type ? to find out what operands were permitted/required. Pressing Control-T displays status information. Commands The following
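The flavor of this completion-with-noise-words behavior can be suggested with a short Python sketch; the command table is hypothetical, and this is of course not how the EXEC itself was implemented:

    # Toy ESC-style completion with noise words (not actual EXEC code).
    COMMANDS = {"DIRECTORY": "(of files)", "INFORMATION": "(about)"}

    def complete(prefix: str) -> str:
        matches = [c for c in COMMANDS if c.startswith(prefix.upper())]
        if len(matches) == 1:               # unambiguous: fill in the rest
            return f"{matches[0]} {COMMANDS[matches[0]]}"
        return prefix                       # ambiguous or unknown: leave input unchanged

    print(complete("DIR"))   # -> DIRECTORY (of files)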
https://en.wikipedia.org/wiki/Final%20Fantasy%20III
Final Fantasy III is a role-playing video game developed and published by Square for the Family Computer. The third installment in the Final Fantasy series, it is the first numbered Final Fantasy game to feature the job-change system. The story revolves around four orphaned youths drawn to a crystal of light. The crystal grants them some of its power and instructs them to go forth and restore balance to the world. Not knowing what to make of the crystal's pronouncements, but nonetheless recognizing the importance of its words, the four inform their adoptive families of their mission and set out to explore and bring back balance to the world. The game was originally released in Japan on April 27, 1990. The original Famicom version sold 1.4 million copies in Japan. It was not released outside Japan until a remake, also called Final Fantasy III, developed by Matrix Software, was released for the Nintendo DS on August 24, 2006. At that time, it was the only Final Fantasy game not previously released in North America or Europe. There had been earlier plans to remake the game for Bandai's WonderSwan Color handheld, as had been done with the first, second, and fourth installments of the series, but the game faced several delays and was eventually canceled after the premature cancellation of the platform. The Nintendo DS version of the game was positively received, selling nearly 2 million copies worldwide. It was also released for many other systems: the Japanese Famicom version via the Virtual Console on July 21, 2009 (Wii) and January 8, 2014 (Wii U), an iOS port of the Nintendo DS remake on March 24, 2011, an Android port on March 12, 2012, a PlayStation Portable port in late September 2012 (downloadable-only format outside Japan via PlayStation Network) and a Windows port via Steam in 2014. An updated release based on the Famicom version of Final Fantasy III was released as part of the Final Fantasy Pixel Remaster collection, marking the first time the original version of Final Fantasy III was released outside of Japan. This version was released in July 2021 for Windows, Android and iOS, and in April 2023 for PlayStation 4 and Nintendo Switch. Gameplay The gameplay of Final Fantasy III combines elements of the first two Final Fantasy games with new features. The turn-based combat system remains in place from the first two games, but hit points are now shown above the target following attacks or healing actions, rather than captioned as in the previous two games. Auto-targeting for physical attacks after a friendly or enemy unit is killed is also featured for the first time. Unlike subsequent games in the series, magical attacks are not auto-targeted in the same fashion. The experience point system featured in Final Fantasy makes a return following its absence from the second game. The character class system featured in the first game also reappears, with some modifications. Whereas in the original game the player chooses each character's class alignment at the start
https://en.wikipedia.org/wiki/Beowulf%20cluster
A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster built from inexpensive personal computer hardware. The name Beowulf originally referred to a specific computer built in 1994 by Thomas Sterling and Donald Becker at NASA. The name "Beowulf" comes from the Old English epic poem of the same name. No particular piece of software defines a cluster as a Beowulf. Typically only free and open source software is used, both to save cost and to allow customization. Most Beowulf clusters run a Unix-like operating system, such as BSD, Linux, or Solaris. Commonly used parallel processing libraries include Message Passing Interface (MPI) and Parallel Virtual Machine (PVM). Both of these permit the programmer to divide a task among a group of networked computers and collect the results of processing, as the sketch below illustrates. Examples of MPI implementations include Open MPI and MPICH, among others. Beowulf systems operate worldwide, chiefly in support of scientific computing. Since 2017, every system on the Top500 list of the world's fastest supercomputers has used Beowulf software methods and a Linux operating system. At this level, however, most are by no means just assemblages of commodity hardware; custom design work is often required for the nodes (often blade servers), the networking and the cooling systems. Development The original "how-to" description of a Beowulf cluster was published by Jacek Radajewski and Douglas Eadline under the Linux Documentation Project in 1998. Operating systems A number of Linux distributions, and at least one BSD, are designed for building Beowulf clusters. These include: MOSIX, geared toward computationally intensive, IO-low applications Rocks Cluster Distribution, latest 2017 DragonFly BSD OS, latest 2022 Quantian OS, latest 2006, a live DVD with scientific applications, remastered from Knoppix Kentucky Linux Athlon Testbed, physical installation at University of Kentucky The following are no longer maintained: Kerrighed (EOL: 2013) OpenMosix (EOL: 2008), forked from MOSIX ClusterKnoppix OS, forked from Knoppix OS, forked from OpenMosix PelicanHPC OS, latest 2016, based on Debian Live A cluster can be set up by using Knoppix bootable CDs in combination with OpenMosix. The computers will automatically link together, without need for complex configurations, to form a Beowulf cluster using all CPUs and RAM in the cluster. A Beowulf cluster is scalable to a nearly unlimited number of computers, limited only by the overhead of the network. Provisioning of operating systems and other software for a Beowulf Cluster can be automated using software, such as Open Source Cluster Application Resources. OSCAR installs on top of a standard installation of a supported
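As an example of dividing work among nodes and collecting the results over MPI, here is a sketch using the mpi4py Python bindings (the file name and node count are arbitrary); it would be launched with something like mpirun -n 4 python sum.py:

    # Each MPI process sums its own slice; rank 0 collects the grand total.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    partial = sum(range(rank * 25, (rank + 1) * 25))      # this node's share of the work
    total = comm.reduce(partial, op=MPI.SUM, root=0)      # combine results on rank 0
    if rank == 0:
        print("total:", total)    # sum of 0 .. 25*size - 1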
https://en.wikipedia.org/wiki/ENIAC
ENIAC (Electronic Numerical Integrator and Computer) was the first programmable, electronic, general-purpose digital computer, completed in 1945. There were other computers that had combinations of these features, but ENIAC had all of them in one machine. It was Turing-complete and able to solve "a large class of numerical problems" through reprogramming. ENIAC was designed by John Mauchly and J. Presper Eckert to calculate artillery firing tables for the United States Army's Ballistic Research Laboratory (which later became a part of the Army Research Laboratory). However, its first program was a study of the feasibility of the thermonuclear weapon. ENIAC was completed in 1945 and first put to work for practical purposes on December 10, 1945. ENIAC was formally dedicated at the University of Pennsylvania on February 15, 1946, having cost $487,000, and was called a "Giant Brain" by the press. It had a speed on the order of one thousand times faster than that of electro-mechanical machines; this computational power, coupled with general-purpose programmability, excited scientists and industrialists alike. The combination of speed and programmability allowed thousands more calculations to be performed on a problem than had previously been practical. ENIAC was formally accepted by the U.S. Army Ordnance Corps in July 1946. It was transferred to Aberdeen Proving Ground in Aberdeen, Maryland in 1947, where it was in continuous operation until 1955. Development and design ENIAC's design and construction were financed by the United States Army, Ordnance Corps, Research and Development Command, led by Major General Gladeon M. Barnes. The total cost was about $487,000. The construction contract was signed on June 5, 1943; work on the computer began in secret at the University of Pennsylvania's Moore School of Electrical Engineering the following month, under the code name "Project PX", with John Grist Brainerd as principal investigator. Herman H. Goldstine persuaded the Army to fund the project, which put him in charge of overseeing it for the Army. ENIAC was designed by Ursinus College physics professor John Mauchly and J. Presper Eckert of the University of Pennsylvania. The team of design engineers assisting the development included Robert F. Shaw (function tables), Jeffrey Chuan Chu (divider/square-rooter), Thomas Kite Sharpless (master programmer), Frank Mural (master programmer), Arthur Burks (multiplier), Harry Huskey (reader/printer) and Jack Davis (accumulators). Significant development work was undertaken by the female mathematicians who handled the bulk of the ENIAC programming: Jean Jennings, Marlyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, and Kay McNulty. In 1946, the researchers resigned from the University of Pennsylvania and formed the Eckert–Mauchly Computer Corporation. ENIAC was a large, modular computer, composed of individual panels to perform different functions. Twenty of these modules were accumulators that could not only add and subtract, but hold a t
https://en.wikipedia.org/wiki/The%20Limits%20to%20Growth
The Limits to Growth (LTG) is a 1972 report that discussed the possibility of exponential economic and population growth with a finite supply of resources, studied by computer simulation. The study used the World3 computer model to simulate the consequence of interactions between the Earth and human systems. The model was based on the work of Jay Forrester of MIT, as described in his book World Dynamics. Commissioned by the Club of Rome, the findings of the study were first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971. The report's authors are Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III, representing a team of 17 researchers. The report's findings suggest that in the absence of significant alterations in resource utilization, it is highly likely that there would be an abrupt and unmanageable decrease in both population and industrial capacity. Although the report faced severe criticism and scrutiny upon its initial release, subsequent research aimed at verifying its predictions has consistently supported the notion that too little has changed since 1972 to substantially alter its essence. Since its publication, some 30 million copies of the book in 30 languages have been purchased. It continues to generate debate and has been the subject of several subsequent publications. Beyond the Limits and The Limits to Growth: The 30-Year Update were published in 1992 and 2004 respectively; in 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years; and in 2022 two of the original Limits to Growth authors, Dennis Meadows and Jørgen Randers, joined 19 other contributors to produce Limits and Beyond. Purpose In commissioning the MIT team to undertake the project that resulted in LTG, the Club of Rome had three objectives: Gain insights into the limits of our world system and the constraints it puts on human numbers and activity. Identify and study the dominant elements, and their interactions, that influence the long-term behavior of world systems. To warn of the likely outcome of contemporary economic and industrial policies, with a view to influencing changes to a sustainable lifestyle. Method The World3 model is based on five variables: "population, food production, industrialization, pollution, and consumption of nonrenewable natural resources". At the time of the study, all these variables were increasing and were assumed to continue to grow exponentially, while the ability of technology to increase resources grew only linearly. The authors intended to explore the possibility of a sustainable feedback pattern that would be achieved by altering growth trends among the five variables under three scenarios. They noted that their projections for the values of the variables in each scenario were predictions "only in the most limited sense of the word", and were only indicati
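The core dynamic the model probes, exponential growth running against linearly growing capacity, can be illustrated in a few lines of Python; this toy loop is purely illustrative and is not the World3 model:

    # Exponential demand vs. linear capacity (illustrative only, not World3).
    demand, capacity = 1.0, 10.0
    for decade in range(10):
        print(decade * 10, round(demand, 1), round(capacity, 1))
        demand *= 1.07 ** 10      # demand grows exponentially (~7% per year)
        capacity += 5.0           # capacity grows only linearly

    # However generous the linear term, exponential demand eventually overtakes it.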
https://en.wikipedia.org/wiki/CTV%20Television%20Network
The CTV Television Network, commonly known as CTV, is a Canadian English-language terrestrial television network. Launched in 1961 and acquired by BCE Inc. in 2000, CTV is Canada's largest privately owned television network and is now a division of the Bell Media subsidiary of BCE. It is Canada's largest privately or commercially owned network consisting of 22 owned-and-operated stations nationwide and two privately owned affiliates, and has consistently been placed as Canada's top-rated network in total viewers and in key demographics since 2002, after several years trailing the rival Global Television Network in key markets. Bell Media also operates additional CTV-branded properties, including the 24-hour national cable news network CTV News Channel and the secondary CTV 2 television system. There has never been an official full name corresponding to the initials "CTV"; prior to CTV's launch in 1961, it was given the proposed branding of "Canadian Television Network" (CTN). But that branding was dropped before the network's launch when the Canadian Broadcasting Corporation (CBC) objected to it, claiming exclusive rights to the term "Canadian". History Formation In 1958, Prime Minister John Diefenbaker's government passed the Broadcasting Act, which established the Board of Broadcast Governors (BBG), a forerunner to the Canadian Radio-television and Telecommunications Commission (CRTC), as the governing body of Canadian broadcasting, effectively ending the Canadian Broadcasting Corporation's (CBC) dual role as regulator and broadcaster. The new board's first act was to take applications for "second" television stations in Halifax, Montreal (in both English and French), Ottawa, Toronto, Winnipeg, Calgary, Edmonton, and Vancouver in response to an outcry for an alternative to the CBC's television service. Calgary and Edmonton were served by privately owned CBC affiliates; the other six markets by CBC owned-and-operated stations (O&Os). The nine winners, in order of their first sign-on, were: CFCN-TV Calgary (September 9, 1960) CHAN-TV Vancouver (October 31, 1960) CJAY-TV Winnipeg (November 12, 1960) CFTO-TV Toronto (December 31, 1960) CJCH-TV Halifax (January 1, 1961) CFCF-TV Montreal (English; January 20, 1961) CFTM-TV Montreal (French; February 19, 1961) CJOH-TV Ottawa (March 12, 1961) CBXT Edmonton (October 1, 1961) The first eight stations were privately owned; the Edmonton station was a CBC O&O, thus CFRN-TV, the existing local station, would lose its CBC affiliation once CBXT signed on. Even before his station was licensed, John Bassett, the chief executive of the ultimately successful Toronto applicant Baton Aldred Rogers Broadcasting, had expressed interest in participating in the creation of a second television network, "of which we see the Toronto station as anchor". Indeed, Baton had already begun quietly contacting the successful applicants in other cities to gauge their interest in forming a cooperative group to share Canadian
https://en.wikipedia.org/wiki/Interchange%20File%20Format
Interchange File Format (IFF) is a generic digital container file format originally introduced by Electronic Arts (in cooperation with Commodore) in 1985 to facilitate transfer of data between software produced by different companies. IFF files do not have any standard filename extension. On many systems that generate IFF files, file extensions are not important because the operating system stores file format metadata separately from the file name. The .iff filename extension is commonly used for the ILBM image file format, which uses the IFF container format. Resource Interchange File Format is a format developed by Microsoft and IBM in 1991 that is based on IFF, except the byte order has been changed to little-endian to match the x86 microprocessor architecture. Apple's Audio Interchange File Format (AIFF) is a big-endian audio file format developed from IFF. The TIFF image file format is not related to IFF. Structure An IFF file is built up from chunks. Each chunk begins with what the specification calls a "Type ID" (what the Macintosh called an OSType, and Windows developers might call a FourCC). This is followed by a 32-bit signed integer (all integers in IFF file structure are big-endian) specifying the size of the following data (the chunk content) in bytes. Because the specification includes explicit lengths for each chunk, it is possible for a parser to skip over chunks that it either can't or doesn't care to process. This structure is closely related to the type–length–value (TLV) representation. There are predefined group chunks, with type IDs FORM, LIST and CAT . A FORM chunk is like a record structure, containing a type ID (indicating the record type) followed by nested chunks specifying the record fields. A LIST is a factoring structure containing a series of PROP (property) chunks plus nested group chunks to which those properties apply. A CAT  is just a collection of nested chunks with no special semantics. Group chunks can contain other group chunks, depending on the needs of the application. Group chunks, like their simpler counterparts, contain a length element. Skipping over a group can thus be done with a simple relative seek operation. Chunks must begin on even file offsets, as befits the origins of IFF on the Motorola 68000 processor, which couldn't address quantities larger than a byte on odd addresses. Thus chunks with odd lengths will be "padded" to an even byte boundary by adding a so-called "pad byte" after their regular end. The top-level structure of an IFF file consists of exactly one of the group chunks: FORM, LIST or CAT , where FORM is by far the most common one. Each type of chunk typically has a different internal structure, which could be numerical data, text, or raw data. It is also possible to include other IFF files as if they are chunks (note that they have the same structure: four letters followed with length), and some formats use this. There are standard chunks that could be present in any IFF f
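A minimal reader for this chunk structure can be written directly from the description above; the file name is hypothetical and error handling is omitted:

    # Walk IFF chunks: 4-byte type ID, big-endian 32-bit signed length,
    # contents, then a pad byte after odd-length chunks.
    import struct

    def iter_chunks(data: bytes, offset: int = 0):
        while offset + 8 <= len(data):
            type_id = data[offset:offset + 4]
            (length,) = struct.unpack(">i", data[offset + 4:offset + 8])
            yield type_id, data[offset + 8:offset + 8 + length]
            offset += 8 + length + (length & 1)   # skip the pad byte if length is odd

    data = open("example.iff", "rb").read()
    for type_id, body in iter_chunks(data):
        if type_id == b"FORM":                    # group chunk: form type + nested chunks
            print("FORM of type", body[:4])
            for inner_id, inner in iter_chunks(body, 4):
                print("  chunk", inner_id, "length", len(inner))

Because every chunk carries its own length, the loop can skip any chunk type it does not recognize, which is exactly what makes the format extensible.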
https://en.wikipedia.org/wiki/Audio%20Interchange%20File%20Format
Audio Interchange File Format (AIFF) is an audio file format standard used for storing sound data for personal computers and other electronic audio devices. The format was developed by Apple Inc. in 1988 based on Electronic Arts' Interchange File Format (IFF, widely used on Amiga systems) and is most commonly used on Apple Macintosh computer systems. The audio data in most AIFF files is uncompressed pulse-code modulation (PCM). This type of AIFF file uses much more disk space than lossy formats like MP3—about 10 MB for one minute of stereo audio at a sample rate of 44.1 kHz and a bit depth of 16 bits. There is also a compressed variant of AIFF known as AIFF-C or AIFC, with various defined compression codecs. In addition to audio data, AIFF can include loop point data and the musical note of a sample, for use by hardware samplers and musical applications. The file extension for the standard AIFF format is .aiff or .aif. For the compressed variants it is supposed to be .aifc, but .aiff or .aif are accepted as well by audio applications supporting the format. AIFF on macOS With the development of the OS X operating system now known as macOS, Apple created a new type of AIFF which is, in effect, an alternative little-endian byte order format. Because the AIFF architecture has no provision for alternative byte order, Apple used the existing AIFF-C compression architecture, and created a "pseudo-compressed" codec called sowt (twos spelled backwards). The only difference between a standard AIFF file and an AIFF-C/sowt file is the byte order; there is no compression involved at all. Apple uses this new little-endian AIFF type as its standard on macOS. When a file is imported to or exported from iTunes in "AIFF" format, it is actually AIFF-C/sowt that is being used. When audio from an audio CD is imported by dragging to the macOS Desktop, the resulting file is also an AIFF-C/sowt. In all cases, Apple refers to the files simply as "AIFF", and uses the ".aiff" extension. For the vast majority of users this technical situation is completely unnoticeable and irrelevant. The sound quality of standard AIFF and AIFF-C/sowt are identical, and the data can be converted back and forth without loss. Users of older audio applications, however, may find that an AIFF-C/sowt file will not play, or will prompt the user to convert the format on opening, or will play as static. All traditional AIFF and AIFF-C files continue to work normally on macOS, and many third-party audio applications as well as hardware continue to use the standard AIFF big-endian byte order. AIFF Apple Loops Apple has also created another recent extension to the AIFF format in the form of Apple Loops used by GarageBand and Logic Pro, which allows the inclusion of data for pitch and tempo shifting by an application in the more common variety, and MIDI-sequence data and references to GarageBand playback instruments in another variety. Apple Loops use either the .aiff (or .aif) or .caf exten
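The "about 10 MB for one minute" figure follows directly from the stated PCM parameters, as a quick check shows:

    # Uncompressed PCM size = sample rate x bytes per sample x channels x seconds.
    sample_rate = 44_100            # Hz
    bytes_per_sample = 16 // 8      # 16-bit samples
    channels = 2                    # stereo
    size = sample_rate * bytes_per_sample * channels * 60
    print(size)                     # 10584000 bytes, i.e. roughly 10 MB per minute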
https://en.wikipedia.org/wiki/Global%20Television%20Network
The Global Television Network (more commonly called Global, or occasionally Global TV) is a Canadian English-language terrestrial television network. It is currently Canada's second most-watched private terrestrial television network after CTV, and has fifteen owned-and-operated stations throughout the country. Global is owned by Corus Entertainment — the media holdings of JR Shaw and other members of his family. Global has its origins in a regional television station of the same name, serving Southern Ontario, which launched in 1974. The Ontario station was soon purchased by the now-defunct CanWest Global Communications, and that company gradually expanded its national reach in the subsequent decades through both acquisitions and new station launches, building up a quasi-network of independent stations, known as the CanWest Global System, until the stations were unified under the Ontario station's branding in 1997. History NTV The network has its origins in NTV, a new network first proposed in 1966 by Hamilton media proprietor Ken Soble, the co-founder and owner of independent station CHCH-TV through his Niagara Television company. Financially backed by Power Corporation of Canada, Soble submitted a brief to the Board of Broadcast Governors in 1966 proposing a national satellite-fed network. Under the plan, Soble's company would launch Canada's first broadcast satellite, and would use it to relay the programming of CHCH to 96 new transmitters across Canada. Soble died in December of that year; his widow Frances took over as president of Niagara Television, while former CTV executive Michael Hind-Smith and Niagara Television vice-president Al Bruner handled the network application. Soble had originally formulated the plan after failing in a bid to acquire CTV. The original proposal was widely criticized on various grounds, including claims that it exceeded the board's concentration of media ownership limits and that it was overly ambitious and financially unsustainable. As well, it failed to include any plan for local news content on any of its individual stations beyond possibly the metropolitan Toronto, Montreal, and Vancouver markets. By 1968, NTV put forward its first official licence application, under which the original 96 transmitters would be supplemented by 43 more transmitters to distribute a separate French language service, along with provisions for the free distribution of CBC Television, Radio-Canada and a new noncommercial educational television service on the network's satellite. Transponder space would also be leased to CTV and Télé-Métropole, but as competing commercial services they would not have been granted the free distribution rights the plan offered to the public television services. However, after federal communications minister Paul Hellyer announced plans to move forward with the publicly owned Anik series of broadcast satellites through Telesat Canada instead of leaving the rollout of satellite technology in the
https://en.wikipedia.org/wiki/Memory%20management
Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time. Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance. In some operating systems, e.g. OS/360 and successors, memory is managed by the operating system. In other operating systems, e.g. Unix-like operating systems, memory is managed at the application level. Memory management within an address space is generally categorized as either manual memory management or automatic memory management. Manual memory management The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations. In the C language, the function which allocates memory from the heap is called malloc and the function which takes previously allocated memory and marks it as "free" (to be used by future allocations) is called free. Several issues complicate the implementation, such as external fragmentation, which arises when there are many small gaps between allocated memory blocks, leaving them individually too small to satisfy an allocation request. The allocator's metadata can also inflate the size of (individually) small allocations. This is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" (i.e. that there are no "memory leaks"). Efficiency The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction level profiler on a variety of software). Implementations Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:
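A toy first-fit allocator over a free list illustrates the basic task described above under manual memory management, locating a block of unused memory of sufficient size, and shows how external fragmentation arises; this is purely illustrative and not how any production allocator works:

    # Toy first-fit allocation from a free list of (offset, size) blocks.
    free_list = [(0, 1024)]                 # one free block: the whole "heap"

    def allocate(size):
        for i, (off, blk) in enumerate(free_list):
            if blk >= size:                 # first block big enough wins
                if blk > size:
                    free_list[i] = (off + size, blk - size)   # split the block
                else:
                    free_list.pop(i)
                return off
        raise MemoryError("no single free block is large enough")

    def release(off, size):
        # A real allocator would also coalesce adjacent free blocks here.
        free_list.append((off, size))

    a = allocate(100)
    b = allocate(200)
    release(a, 100)     # leaves a 100-byte gap, usable only by small requests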
https://en.wikipedia.org/wiki/Demodulation
Demodulation is extracting the original information-bearing signal from a carrier wave. A demodulator is an electronic circuit (or computer program in a software-defined radio) that is used to recover the information content from the modulated carrier wave. There are many types of modulation, so there are many types of demodulators. The signal output from a demodulator may represent sound (an analog audio signal), images (an analog video signal) or binary data (a digital signal). These terms are traditionally used in connection with radio receivers, but many other systems also use demodulators of various kinds. For example, in a modem, which is a contraction of the terms modulator/demodulator, a demodulator is used to extract a serial digital data stream from a carrier signal which is used to carry it through a telephone line, coaxial cable, or optical fiber. History Demodulation was first used in radio receivers. In the wireless telegraphy radio systems used during the first three decades of radio (1884–1914) the transmitter did not communicate audio (sound) but transmitted information in the form of pulses of radio waves that represented text messages in Morse code. Therefore, the receiver merely had to detect the presence or absence of the radio signal, and produce a click sound. The device that did this was called a detector. The first detectors were coherers, simple devices that acted as a switch. The term detector stuck; it was used for other types of demodulators and continues to be used to the present day for a demodulator in a radio receiver. The first type of modulation used to transmit sound over radio waves was amplitude modulation (AM), invented by Reginald Fessenden around 1900. An AM radio signal can be demodulated by rectifying it to remove one side of the carrier, and then filtering to remove the radio-frequency component, leaving only the modulating audio component. This is equivalent to peak detection with a suitably long time constant. The amplitude of the recovered audio frequency varies with the modulating audio signal, so it can drive an earphone or an audio amplifier. Fessenden invented the first AM demodulator in 1904, the electrolytic detector, consisting of a short needle dipping into a cup of dilute acid. The same year John Ambrose Fleming invented the Fleming valve or thermionic diode which could also rectify an AM signal. Techniques There are several ways of demodulating a signal, depending on how parameters of the base-band signal such as amplitude, frequency or phase are transmitted in the carrier signal. For example, for a signal modulated with a linear modulation like AM (amplitude modulation), we can use a synchronous detector. On the other hand, for a signal modulated with an angular modulation, we must use an FM (frequency modulation) demodulator or a PM (phase modulation) demodulator. Different kinds of circuits perform these functions. Many techniques such as carrier recovery, clock recovery, bit slip, frame syn
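The rectify-and-filter procedure described above can be sketched numerically with NumPy; the parameter values are arbitrary, and a simple moving average stands in for a proper low-pass filter:

    # AM envelope detection: rectify, then low-pass filter (illustrative).
    import numpy as np

    fs, fc, fm = 100_000, 10_000, 200        # sample, carrier, audio rates (Hz)
    t = np.arange(0, 0.05, 1 / fs)
    audio = 0.5 * np.sin(2 * np.pi * fm * t)
    am = (1 + audio) * np.sin(2 * np.pi * fc * t)   # amplitude-modulated carrier

    rectified = np.maximum(am, 0)            # keep one side of the carrier
    kernel = np.ones(4 * fs // fc)           # average over a few carrier cycles
    recovered = np.convolve(rectified, kernel / kernel.size, mode="same")
    # "recovered" now tracks a scaled, DC-offset copy of the original audio.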
https://en.wikipedia.org/wiki/Word-sense%20disambiguation
Word-sense disambiguation (WSD) is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious/automatic but can often come to conscious attention when ambiguity impairs clarity of communication, given the pervasive polysemy in natural language. In computational linguistics, it is an open problem that affects other computer-related writing, such as discourse, improving relevance of search engines, anaphora resolution, coherence, and inference. Given that natural language requires reflection of neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has had a long-term challenge in developing the ability in computers to do natural language processing and machine learning. Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date. Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively. Variants Disambiguation requires two strict inputs: a dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). WSD task has two variants: "lexical sample" (disambiguating the occurrences of a small sample of target words which were previously selected) and "all words" task (disambiguation of all the words in a running text). "All words" task is generally considered a more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word. History WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation. Later, Bar-Hillel (1960) argued that WSD could not be solved by "electronic com
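A dictionary-based method can be illustrated with a simplified Lesk-style overlap count; the two-sense toy dictionary below is made up, whereas real systems draw their sense inventories from resources such as WordNet:

    # Simplified Lesk: choose the sense whose gloss shares the most words
    # with the surrounding context (toy dictionary, illustrative only).
    SENSES = {
        "bank": {
            "bank(finance)": "a financial institution that accepts deposits",
            "bank(river)":   "sloping land beside a body of water",
        }
    }

    def disambiguate(word: str, context: str) -> str:
        ctx = set(context.lower().split())
        glosses = SENSES[word]
        return max(glosses, key=lambda s: len(ctx & set(glosses[s].split())))

    print(disambiguate("bank", "he sat on the sloping bank of the river"))
    # -> bank(river)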
https://en.wikipedia.org/wiki/Neuro-linguistic%20programming
Neuro-linguistic programming (NLP) is a pseudoscientific approach to communication, personal development and psychotherapy that first appeared in Richard Bandler and John Grinder's 1975 book The Structure of Magic I. NLP asserts that there is a connection between neurological processes, language and acquired behavioral patterns, and that these can be changed to achieve specific goals in life. According to Bandler and Grinder, NLP can treat problems such as phobias, depression, tic disorders, psychosomatic illnesses, near-sightedness, allergy, the common cold, and learning disorders, often in a single session. They also claim that NLP can "model" the skills of exceptional people, allowing anyone to acquire them. NLP has been adopted by some hypnotherapists as well as by companies that run seminars marketed as leadership training to businesses and government agencies. There is no scientific evidence supporting the claims made by NLP advocates, and it has been called a pseudoscience. Scientific reviews have shown that NLP is based on outdated metaphors of the brain's inner workings that are inconsistent with current neurological theory, and that NLP contains numerous factual errors. Reviews also found that research that favored NLP contained significant methodological flaws, and that there were three times as many studies of a much higher quality that failed to reproduce the claims made by Bandler, Grinder, and other NLP practitioners. Early development According to Bandler and Grinder, NLP consists of a methodology termed modeling, plus a set of techniques that they derived from its initial applications. They derived many of the fundamental techniques from the work of Virginia Satir, Milton Erickson and Fritz Perls. Bandler and Grinder also drew upon the theories of Gregory Bateson, Alfred Korzybski and Noam Chomsky (particularly transformational grammar), as well as ideas and techniques from Carlos Castaneda. Bandler and Grinder claim that their methodology can codify the structure inherent to the therapeutic "magic" as performed in therapy by Perls, Satir and Erickson, and indeed inherent to any complex human activity. From that codification, they claim, the structure and its activity can be learned by others. Their 1975 book, The Structure of Magic I: A Book about Language and Therapy, is intended to be a codification of the therapeutic techniques of Perls and Satir. Bandler and Grinder say that they used their own process of modeling to model Virginia Satir so they could produce what they termed the Meta-Model, a model for gathering information and challenging a client's language and underlying thinking. They claim that by challenging linguistic distortions, specifying generalizations, and recovering deleted information in the client's statements, the transformational grammar concept of surface structure yields a more complete representation of the underlying deep structure and therefore has therapeutic benefit. Also derived from Satir we