https://en.wikipedia.org/wiki/ICL%20VME
VME (Virtual Machine Environment) is a mainframe operating system developed by the UK company International Computers Limited (ICL, now part of the Fujitsu group). Originally developed in the 1970s (as VME/B, later VME 2900) to drive ICL's then new 2900 Series mainframes, the operating system is now known as OpenVME incorporating a Unix subsystem, and runs on ICL Series 39 and Trimetra mainframe computers, as well as industry-standard x64 servers. Origins The development program for the New Range system started on the merger of International Computers and Tabulators (ICT) and English Electric Computers in 1968. One of the fundamental decisions was that it would feature a new operating system. A number of different feasibility and design studies were carried out within ICL, the three most notable being: VME/B (originally System B), targeted at large processors such as the 2970/2980 and developed in Kidsgrove, Staffordshire and West Gorton, Manchester VME/K (originally System T), targeted at the mid-range systems such as the 2960 and developed at Bracknell after the original design for these small processors, System D, was dropped. VME/K was developed and introduced to the market but was eventually replaced by VME/B VME/T, which was never actually launched, but warrants a mention as it was conceived to support "fault tolerance", and predated the efforts of the successful American startup company Tandem Computers in this area. The chief architect of VME/B was Brian Warboys, who subsequently became professor of software engineering at the University of Manchester. A number of influences can be seen in its design, for example Multics and ICL's earlier George 3 operating system; however it was essentially designed from scratch. VME/B was viewed as primarily competing with the System/370 IBM mainframe as a commercial operating system, and adopted the EBCDIC character encoding. 
History When New Range was first launched in October 1974, its operating system was referred to as "System B". By the time it was first delivered it had become "VME/B". VME/K was developed independently (according to Campbell-Kelly, "on a whim of Ed Mack"), and was delivered later with the smaller mainframes such as the 2960. At the time VME/B was still plagued with performance and reliability problems, and the mainly American management team had misgivings about it. ICL had sold a large system to the European Space Agency to process data from Meteosat at its operation centre in Darmstadt. A bespoke variant of VME/K, known as VME/ESA was developed on-site to meet the customer's requirements. Following a financial crisis in 1980, new management was brought into ICL (Christopher Laidlaw as chairman, and Robb Wilmot as managing director). An early decision of the new management was to drop VME/K. Thus in July 1981 "VME2900" was launched: although presented to the customer base as a merger of VME/B and VME/K, it was in reality the VME/B base with a few selected feature
https://en.wikipedia.org/wiki/Etch
Etch may refer to: to carry out an etching process Etch (protocol), a network protocol Etch (Toy Story), a character from the film Toy Story Etch, the codename of version 4.0 of the Debian Linux operating system East Tennessee Children's Hospital
https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement and has the potential for very high throughput in hardware implementations. It is the algorithm of the Unix file compression utility compress and is used in the GIF image format. Algorithm The scenario described by Welch's 1984 paper encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 represent 1-character sequences consisting of the corresponding 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded. At each stage in compression, input bytes are gathered into a sequence until the next character would make a sequence with no code yet in the dictionary. The code for the sequence (without that character) is added to the output, and a new code (for the sequence with that character) is added to the dictionary. The idea was quickly adapted to other situations. In an image based on a color table, for example, the natural character alphabet is the set of color table indexes, and in the 1980s, many images had small color tables (on the order of 16 colors). For such a reduced alphabet, the full 12-bit codes yielded poor compression unless the image was large, so the idea of a variable-width code was introduced: codes typically start one bit wider than the symbols being encoded, and as each code size is used up, the code width increases by 1 bit, up to some prescribed maximum (typically 12 bits). When the maximum code value is reached, encoding proceeds using the existing table, but new codes are not generated for addition to the table. 
Further refinements include reserving a code to indicate that the code table should be cleared and restored to its initial state (a "clear code", typically the first value immediately after the values for the individual alphabet characters), and a code to indicate the end of data (a "stop code", typically one greater than the clear code). The clear code lets the table be reinitialized after it fills up, which lets the encoding adapt to changing patterns in the input data. Smart encoders can monitor the compression efficiency and clear the table whenever the existing table no longer matches the input well. Since codes are added in a manner determined by the data, the decoder mimics building the table as it sees the resulting codes. It is critical that the encoder and decoder agree on the variety of LZW used: the size of the alphabet, the maximum table size (and code width), whether variable-width encoding is used, initial code size, and whether to use the clear and stop codes (and what values they have). Most formats that employ LZW build this information into the format specification or provide explicit fields for t
https://en.wikipedia.org/wiki/LZ77%20and%20LZ78
LZ77 and LZ78 are the two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978. They are also known as LZ1 and LZ2 respectively. These two algorithms form the basis for many variations including LZW, LZSS, LZMA and others. Besides their academic influence, these algorithms formed the basis of several ubiquitous compression schemes, including GIF and the DEFLATE algorithm used in PNG and ZIP. They are both theoretically dictionary coders. LZ77 maintains a sliding window during compression. This was later shown to be equivalent to the explicit dictionary constructed by LZ78—however, they are only equivalent when the entire data is intended to be decompressed. Since LZ77 encodes and decodes from a sliding window over previously seen characters, decompression must always start at the beginning of the input. Conceptually, LZ78 decompression could allow random access to the input if the entire dictionary were known in advance. However, in practice the dictionary is created during encoding and decoding by creating a new phrase whenever a token is output. The algorithms were named an IEEE Milestone in 2004. In 2021 Jacob Ziv was awarded the IEEE Medal of Honor for his involvement in their development. Theoretical efficiency In the second of the two papers that introduced these algorithms they are analyzed as encoders defined by finite-state machines. A measure analogous to information entropy is developed for individual sequences (as opposed to probabilistic ensembles). This measure gives a bound on the data compression ratio that can be achieved. It is then shown that there exist finite lossless encoders for every sequence that achieve this bound as the length of the sequence grows to infinity. In this sense an algorithm based on this scheme produces asymptotically optimal encodings. This result can be proven more directly, as for example in notes by Peter Shor. 
LZ77 LZ77 algorithms achieve compression by replacing repeated occurrences of data with references to a single copy of that data existing earlier in the uncompressed data stream. A match is encoded by a pair of numbers called a length-distance pair, which is equivalent to the statement "each of the next length characters is equal to the characters exactly distance characters behind it in the uncompressed stream". (The distance is sometimes called the offset instead.) To spot matches, the encoder must keep track of some amount of the most recent data, such as the last 2 KB, 4 KB, or 32 KB. The structure in which this data is held is called a sliding window, which is why LZ77 is sometimes called sliding-window compression. The encoder needs to keep this data to look for matches, and the decoder needs to keep this data to interpret the matches the encoder refers to. The larger the sliding window is, the further back the encoder may search for matches. It is not only acceptable but frequently useful to allow length-distance pa
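The length-distance matching described above can be sketched with a deliberately naive Python tokenizer. The brute-force window search below is far slower than real implementations (which use hash chains or trees), and the token format is illustrative only:

```python
def lz77_compress(data: bytes, window: int = 4096, min_len: int = 3):
    """Naive LZ77 sketch: emit ("match", distance, length) for repeats
    found in the sliding window, and ("lit", byte) otherwise."""
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        start = max(0, i - window)
        # Brute-force scan of the window for the longest match.
        # Note j + k may run past i: matches are allowed to overlap
        # the current position, encoding runs efficiently.
        for j in range(start, i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_dist = k, i - j
        if best_len >= min_len:
            out.append(("match", best_dist, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out
```

For `b"abcabcabc"` this yields three literals followed by a single match of length 6 at distance 3, illustrating that the length may exceed the distance.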
https://en.wikipedia.org/wiki/Deflate
In computing, Deflate (stylized as DEFLATE, and also called Flate) is a lossless data compression file format that uses a combination of LZ77 and Huffman coding. It was designed by Phil Katz, for version 2 of his PKZIP archiving tool. Deflate was later specified in RFC 1951 (1996). Katz also designed the original algorithm used to construct Deflate streams. This algorithm was patented and assigned to PKWARE, Inc. As stated in the RFC document, an algorithm producing Deflate files was widely thought to be implementable in a manner not covered by patents. This led to its widespread use – for example, in gzip compressed files and PNG image files, in addition to the ZIP file format for which Katz originally designed it. The patent has since expired. Stream format A Deflate stream consists of a series of blocks. Each block is preceded by a 3-bit header: First bit: Last-block-in-stream marker: 1: This is the last block in the stream. 0: There are more blocks to process after this one. Second and third bits: Encoding method used for this block type: 00: A stored (a.k.a. raw or literal) section, between 0 and 65,535 bytes in length 01: A static Huffman compressed block, using a pre-agreed Huffman tree defined in the RFC 10: A dynamic Huffman compressed block, complete with the Huffman table supplied 11: Reserved—don't use. The stored block option adds minimal overhead and is used for data that is incompressible. Most compressible data will end up being encoded using method 10, the dynamic Huffman encoding, which produces an optimized Huffman tree customized for each block of data individually. Instructions to generate the necessary Huffman tree immediately follow the block header. The static Huffman option is used for short messages, where the fixed saving gained by omitting the tree outweighs the percentage compression loss due to using a non-optimal (thus, not technically Huffman) code. 
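Python's standard-library zlib module produces and consumes Deflate streams (wrapped in a small zlib header and checksum), which makes the format easy to experiment with. A quick sketch of the compression behavior on repetitive input:

```python
import zlib

raw = b"abc" * 1000                  # 3000 bytes of highly repetitive data
comp = zlib.compress(raw, level=9)   # level 9: strongest match search

# LZ77 back-references collapse the repetition to a handful of bytes,
# and decompression reproduces the input exactly (lossless).
assert zlib.decompress(comp) == raw
assert len(comp) < len(raw)
```

The compressed output here is a few dozen bytes: essentially the three literals `a`, `b`, `c` plus a chain of long back-references, demonstrating why Deflate excels on redundant data.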
Compression is achieved in two steps: first, matching and replacing duplicate strings with pointers; second, replacing symbols with new, weighted symbols based on their frequency of use. Duplicate string elimination Within compressed blocks, if a duplicate series of bytes is spotted (a repeated string), then a back-reference is inserted, linking to the previous location of that identical string instead. An encoded match to an earlier string consists of an 8-bit length (3–258 bytes) and a 15-bit distance (1–32,768 bytes) to the beginning of the duplicate. Relative back-references can be made across any number of blocks, as long as the distance appears within the last 32 KiB of uncompressed data decoded (termed the sliding window). If the distance is less than the length, the duplicate overlaps itself, indicating repetition. For example, a run of 10 identical bytes can be encoded as one byte, followed by a duplicate of length 9, beginning with the previous byte. Searching the preceding text for duplicate substrings is the most computational
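The overlapping back-reference trick mentioned above (distance less than length) falls out naturally on the decoder side if bytes are copied one at a time. A minimal illustrative sketch:

```python
def copy_back_reference(out: bytearray, distance: int, length: int) -> None:
    """Decoder-side copy for a length-distance pair. Copying byte by
    byte makes overlapping references (distance < length) work: each
    copied byte becomes available as a source for the next one."""
    for _ in range(length):
        out.append(out[-distance])

# The run-of-10 example from the text: one literal byte, then a
# length-9, distance-1 match expands into ten identical bytes.
buf = bytearray(b"A")
copy_back_reference(buf, distance=1, length=9)
```

A block copy (e.g. `memcpy`) would read stale data here; real Deflate decoders must preserve this byte-at-a-time semantics for overlapping matches.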
https://en.wikipedia.org/wiki/Content%20management%20system
A content management system (CMS) is computer software used to manage the creation and modification of digital content (content management). A CMS is typically used for enterprise content management (ECM) and web content management (WCM). ECM typically supports multiple users in a collaborative environment by integrating document management, digital asset management, and record retention. WCM, by contrast, supports collaborative authoring for websites and may include text and embedded graphics, photos, video, audio, maps, and program code that display content and interact with the user. ECM typically includes a WCM function. Structure A CMS typically has two major components: a content management application (CMA), the front-end user interface that allows a user, even one with limited expertise, to add, modify, and remove content from a website without the intervention of a webmaster; and a content delivery application (CDA), which compiles the content and updates the website. Installation type There are two types of CMS installation: on-premises and cloud-based. On-premises installation means that the CMS software is installed on the organization's own server. This approach is usually taken by businesses that want flexibility in their setup. Notable CMSs that can be installed on-premises include Wordpress.org, Drupal, Joomla, Grav, ModX and others. A cloud-based CMS is hosted in the vendor's environment. Examples of notable cloud-based CMSs are SquareSpace, Contentful, Wordpress.com, Webflow, Ghost and WIX. Common features The core CMS features are: indexing, search and retrieval, format management, revision control, and management. Features may vary depending on the system application but will typically include: Intuitive indexing, search, and retrieval features index all data for easy access through search functions and allow users to search by attributes such as publication dates, keywords or author. 
Format management facilitates turning scanned paper documents and legacy electronic documents into HTML or PDF documents. Revision features allow content to be updated and edited after initial publication. Revision control also tracks any changes made to files by individuals. Publishing functionality allows individuals to use a template or a set of templates approved by the organization, as well as wizards and other tools to create or modify content. Popular additional features may include: SEO-friendly URLs Integrated and online help, including discussion boards Group-based permission systems Full template support and customizable templates Easy wizard-based install and versioning procedures Admin panel with multiple language support Content hierarchy with unlimited depth and size Minimal server requirements Integrated file managers Integrated audit logs Support AMP page for Google Support schema markup Designed as per Google quality guidelines for website architecture Availability of plug-ins for additional functionalities. Securit
https://en.wikipedia.org/wiki/KPMG
KPMG International Limited (or simply KPMG) is a multinational professional services network, and one of the Big Four accounting organizations, along with Ernst & Young (EY), Deloitte, and PwC. The name "KPMG" stands for "Klynveld Peat Marwick Goerdeler". The initialism was chosen when KMG (Klynveld Main Goerdeler) merged with Peat Marwick in 1987. Headquartered in Amstelveen, Netherlands, although incorporated in London, England, KPMG is a network of firms in 145 countries with over 265,000 employees. It has three lines of services: financial audit, tax, and advisory. Its tax and advisory services are further divided into various service groups. Over the past decade, various parts of the firm's global network of affiliates have been involved in regulatory actions as well as lawsuits. History Early years and mergers William Barclay Peat joined Robert Fletcher & Co. in London in 1870 at 17 and became head of the firm in 1891; by then it had been renamed William Barclay Peat & Co. In 1877, Thomson McLintock founded Thomson McLintock & Co in Glasgow. In 1897, Marwick Mitchell & Co. was founded by James Marwick and Roger Mitchell in New York City. In 1899, Ferdinand William LaFrentz founded the American Audit Co. in New York. In 1923, The American Audit Company was renamed FW LaFrentz & Co. In about 1913, Frank Wilber Main founded Main & Co. in Pittsburgh. In March 1917, Piet Klijnveld and Jaap Kraayenhof opened an accounting firm called Klynveld Kraayenhof & Co. in Amsterdam. In 1925, William Barclay Peat & Co. and Marwick Mitchell & Co. merged to form Peat Marwick Mitchell & Co. In 1963, Main LaFrentz & Co was formed by the merger of Main & Co and FW LaFrentz & Co. In 1969, Thomson McLintock and Main LaFrentz merged, forming McLintock Main LaFrentz International, which absorbed the general practice of Grace, Ryland & Co. In 1979, Klynveld Kraayenhof & Co. 
(Netherlands), McLintock Main LaFrentz (United Kingdom / United States), and Deutsche Treuhand-Gesellschaft (Germany) formed KMG (Klynveld Main Goerdeler) as a grouping of independent national practices to create a strong European-based international firm. Deutsche Treuhand-Gesellschaft CEO Reinhard Goerdeler (son of leading anti-Nazi activist Carl Goerdeler, who would have become Chancellor if Operation Valkyrie had succeeded) became the first CEO of KMG. In the United States, Main Lafrentz & Co. merged with Hurdman and Cranstoun to form Main Hurdman & Cranstoun. In 1987, KMG and Peat Marwick joined forces in the first mega-merger of large accounting firms and formed a firm called KPMG in the United States and most of the rest of the world and Peat Marwick McLintock in the United Kingdom. In the Netherlands, due to the merger between PMI and KMG in 1988, PMI tax advisors joined Meijburg & Co. (The tax advisory agency Meijburg & Co. was founded by Willem Meijburg, Inspector of National Taxes, in 1939). Today, the Netherlands is the only country with two members o
https://en.wikipedia.org/wiki/List%20of%20Interstate%20Highways
There are 70 primary Interstate Highways in the Interstate Highway System, a network of freeways in the United States. These primary highways are assigned one- or two-digit route numbers, whereas their associated auxiliary Interstate Highways receive three-digit route numbers. Typically, odd-numbered Interstates run south-north, with lower numbers in the west and higher numbers in the east; even-numbered Interstates run west-east, with lower numbers in the south and higher numbers in the north. Route numbers divisible by 5 usually represent major coast-to-coast or border-to-border routes (e.g., I-10 connects Santa Monica, California to Jacksonville, Florida, extending between the Pacific and Atlantic oceans). Auxiliary highways have an added digit prefixing the number of the parent highway. Five route numbers are duplicated in the system; the corresponding highways are in different regions, reducing potential confusion. In addition to primary highways in the contiguous United States, there are signed Interstates in Hawaii and unsigned Interstates in Alaska and Puerto Rico. Contiguous United States There are 70 primary Interstate Highways in the 48 contiguous United States, as well as five former and one future primary Interstate Highways. This number does not include auxiliary Interstate Highways. Other regions In addition to the 48 contiguous states, Interstate Highways are found in Hawaii, Alaska, and Puerto Rico. The Federal Highway Administration funds four routes in Alaska and three routes in Puerto Rico under the same program as the rest of the Interstate Highway System. However, these routes are not required to meet the same standards as the mainland routes: Hawaii The Interstate Highways on the island of Oahu, Hawaii are signed with the standard Interstate Highway shield, with the letter "H-" prefixed before the number. They are fully controlled-access routes built to the same standards as the mainland Interstate Highways. 
Alaska Alaska's Interstate Highways are unsigned as such, although they all have state highway numbers that do not match the Interstate Highway numbers. Puerto Rico Puerto Rico signs its Interstate Highways as territorial routes, as the numbers do not match their official Interstate Highway designations. Many of the territory's routes are freeway-standard toll roads.
https://en.wikipedia.org/wiki/Software%20patent
A software patent is a patent on a piece of software, such as a computer program, libraries, user interface, or algorithm. Background A patent is a set of exclusionary rights granted by a state to a patent holder for a limited period of time, usually 20 years. These rights are granted to patent applicants in exchange for their disclosure of the inventions. Once a patent is granted in a given country, no person may make, use, sell or import/export the claimed invention in that country without the permission of the patent holder. Permission, where granted, is typically in the form of a license whose conditions are set by the patent owner: it may be free or in return for a royalty payment or lump sum fee. Patents are territorial in nature. To obtain a patent, inventors must file patent applications in each and every country in which they want a patent. For example, separate applications must be filed in Japan, China, the United States and India if the applicant wishes to obtain patents in those countries. However, some regional offices exist, such as the European Patent Office (EPO), which act as supranational bodies with the power to grant patents which can then be brought into effect in the member states, and an international procedure also exists for filing a single international application under the Patent Cooperation Treaty (PCT), which can then give rise to patent protection in most countries. These different countries and regional offices have different standards for granting patents. This is particularly true of software or computer-implemented inventions, especially where the software is implementing a business method. Early example of a software patent On 21 May 1962, a British patent application entitled "A Computer Arranged for the Automatic Solution of Linear Programming Problems" was filed. The invention was concerned with efficient memory management for the simplex algorithm, and could be implemented by purely software means. 
The patent struggled to establish that it represented a 'vendible product'. "The focus of attention shifted to look at the relationship between the [unpatentable] computer program and the [potentially patentable] programmed computer". The patent was granted on August 17, 1966, and seems to be one of the first software patents, establishing the principle that the computer program itself was unpatentable and therefore covered by copyright law, while the computer program embedded in hardware was potentially patentable. Jurisdictions Most countries place some limits on the patenting of inventions involving software, but there is no one legal definition of a software patent. For example, U.S. patent law excludes "abstract ideas", and this has been used to refuse some patents involving software. In Europe, "computer programs as such" are excluded from patentability, and European Patent Office policy is consequently that a program for a computer is not patentable if it does not have the potential to cause a "tech
https://en.wikipedia.org/wiki/Principal%20component%20analysis
Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidimensional data. Formally, PCA is a statistical technique for reducing the dimensionality of a dataset. This is accomplished by linearly transforming the data into a new coordinate system where (most of) the variation in the data can be described with fewer dimensions than the initial data. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points. Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science. The principal components of a collection of points in a real coordinate space are a sequence of unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i − 1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Principal component analysis is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest. In data analysis, the first principal component of a set of variables, presumed to be jointly normally distributed, is the derived variable formed as a linear combination of the original variables that explains the most variance. 
The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set. PCA is used in exploratory data analysis and for making predictive models. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible. The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i − 1 principal components that maximizes the variance of the projected data. For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of
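The eigendecomposition route described above can be sketched in a few lines of NumPy. The dataset below is illustrative (nearly collinear points), and the function is a minimal sketch rather than a production PCA (no scaling, no handling of degenerate covariance):

```python
import numpy as np

def pca(X: np.ndarray, k: int):
    """Minimal PCA sketch via eigendecomposition of the covariance
    matrix. X holds one observation per row, one variable per column.
    Returns the projected data and the k principal directions."""
    Xc = X - X.mean(axis=0)              # center each variable
    cov = np.cov(Xc, rowvar=False)       # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigh: for symmetric matrices
    order = np.argsort(vals)[::-1]       # sort by descending variance
    components = vecs[:, order[:k]]      # first k principal directions
    return Xc @ components, components

# Nearly collinear points: the first principal direction should lie
# close to [1, 1] / sqrt(2), the axis of greatest spread.
X = np.array([[1.0, 1.0], [2.0, 2.1], [3.0, 2.9], [4.0, 4.05]])
scores, comps = pca(X, 1)
```

Note that `eigh` returns eigenvalues in ascending order, hence the explicit descending sort; also, eigenvector signs are arbitrary, so comparisons should be made up to sign.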
https://en.wikipedia.org/wiki/IBM%20704
The IBM 704 is a large digital mainframe computer introduced by IBM in 1954. It was the first mass-produced computer with hardware for floating-point arithmetic. The IBM 704 Manual of operation states: The type 704 Electronic Data-Processing Machine is a large-scale, high-speed electronic calculator controlled by an internally stored program of the single address type. The 704 at that time was thus regarded as "pretty much the only computer that could handle complex math". The 704 was a significant improvement over the earlier IBM 701 in terms of architecture and implementation. Like the 701, the 704 uses vacuum-tube logic circuitry, but increased the instruction size from 18 bits to 36 bits, the same as the memory's word size. Changes from the 701 include the use of magnetic-core memory instead of Williams tubes, floating-point arithmetic instructions, 15-bit addressing and the addition of three index registers. To support these new features, the instructions were expanded to use the full 36-bit word. The new instruction set, which is not compatible with the 701, became the base for the "scientific architecture" subclass of the IBM 700/7000 series computers. The 704 can execute up to 12,000 floating-point additions per second. IBM produced 123 type 704 systems between 1955 and 1960. Landmarks The programming languages FORTRAN and LISP were first developed for the 704, as was the SAP assembler—Symbolic Assembly Program, later distributed by SHARE as SHARE Assembly Program. MUSIC, the first computer music program, was developed on the IBM 704 by Max Mathews. In 1962, physicist John Larry Kelly, Jr. created one of the most famous moments in the history of Bell Labs by using an IBM 704 computer to synthesize speech. Kelly's voice recorder synthesizer vocoder recreated the song Daisy Bell, with musical accompaniment from Max Mathews. Arthur C. 
Clarke was coincidentally visiting friend and colleague John Pierce at the Bell Labs Murray Hill facility at the time of this speech synthesis demonstration, and Clarke was so impressed that six years later he used it in the climactic scene of his novel and screenplay for 2001: A Space Odyssey, where the HAL 9000 computer sings the same song. (Bell Laboratories later released a recording, on ten inch 78-RPM records, of speech and music created this way. It was apparently made with an IBM 7090, the solid-state successor to the 704.) Edward O. Thorp, a math instructor at MIT, used the IBM 704 as a research tool to investigate the probabilities of winning while developing his blackjack gaming theory. He used FORTRAN to formulate the equations of his research model. The IBM 704 at the MIT Computation Center was used as the official tracker for the Smithsonian Astrophysical Observatory Operation Moonwatch in the fall of 1957. IBM provided four staff scientists to aid Smithsonian Astrophysical Observatory scientists and mathematicians in the calculation of satellite orbits: Dr. Giampiero Rossoni, Dr. John Gr
https://en.wikipedia.org/wiki/Francisco%20Varela
Francisco Javier Varela García (September 7, 1946 – May 28, 2001) was a Chilean biologist, philosopher, cybernetician, and neuroscientist who, together with his mentor Humberto Maturana, is best known for introducing the concept of autopoiesis to biology, and for co-founding the Mind and Life Institute to promote dialog between science and Buddhism. Life and career Varela was born in 1946 in Talcahuano in Chile, the son of Corina María Elena García Tapia and Raúl Andrés Varela Rodríguez. After completing secondary school at the Liceo Alemán del Verbo Divino in Santiago (1951–1963), like his mentor Humberto Maturana, Varela temporarily studied medicine at the Pontifical Catholic University of Chile and graduated with a degree in biology from the University of Chile. He later obtained a Ph.D. in biology at Harvard University. His thesis, defended in 1970 and supervised by Torsten Wiesel, was titled Insect Retinas: Information processing in the compound eye. After the 1973 military coup led by Augusto Pinochet, Varela and his family spent seven years in exile in the United States before he returned to Chile to become a professor of biology at the Universidad de Chile. Varela became familiar, through practice, with Tibetan Buddhism in the 1970s, initially studying, together with Keun-Tshen Goba (né Ezequiel Hernandez Urdaneta), with the meditation master Chögyam Trungpa Rinpoche, founder of Vajradhatu and Shambhala Training, and later with Tulku Urgyen Rinpoche, a Tibetan meditation master of higher tantras. In 1986, he settled in France, where he first taught cognitive science and epistemology at the École Polytechnique, and later neuroscience at the University of Paris. From 1988 until his death, he led a research group, as Director of Research at the CNRS (Centre National de la Recherche Scientifique). In 1987, Varela, along with R. 
Adam Engle, founded the Mind and Life Institute, initially to sponsor a series of dialogues between scientists and the Dalai Lama about the relationship between modern science and Buddhism. The Institute continues today as a major nexus for such dialog, as well as promoting and supporting multidisciplinary scientific investigation in the mind sciences, contemplative scholarship and practice, and related areas at the interface of science with meditation and other contemplative practices, especially Buddhist practices. Varela died in 2001 in Paris of hepatitis C, after having written an account of his 1998 liver transplant. Varela had four children, including the actress, environmental spokesperson, and model Leonor Varela. Work and legacy Varela was trained as a biologist, mathematician, and philosopher under the influence of different teachers, among them Humberto Maturana and Torsten Wiesel. He wrote and edited a number of books and numerous journal articles in biology, neurology, cognitive science, mathematics, and philosophy. He founded, with others, the Integral Institute, a think tank dedicated to the cross-fertilization of ideas and d
https://en.wikipedia.org/wiki/Self-organizing%20map
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with many variables measured across a set of observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze. An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network. The Kohonen map or network is a computationally convenient abstraction building on biological models of neural systems from the 1970s and morphogenesis models dating back to Alan Turing in the 1950s. SOMs create internal representations reminiscent of the cortical homunculus: a distorted representation of the human body based on a neurological "map" of the areas and proportions of the brain dedicated to processing sensory functions for different parts of the body. Overview Self-organizing maps, like most artificial neural networks, operate in two modes: training and mapping. First, training uses an input data set (the "input space") to generate a lower-dimensional representation of the input data (the "map space"). Second, mapping classifies additional input data using the generated map. In most cases, the goal of training is to represent an input space with p dimensions as a map space with two dimensions. 
Specifically, an input space with p variables is said to have p dimensions. A map space consists of components called "nodes" or "neurons", which are arranged as a hexagonal or rectangular grid with two dimensions. The number of nodes and their arrangement are specified beforehand based on the larger goals of the analysis and exploration of the data. Each node in the map space is associated with a "weight" vector, which is the position of the node in the input space. While nodes in the map space stay fixed, training consists in moving weight vectors toward the input data (reducing a distance metric such as Euclidean distance) without spoiling the topology induced from the map space. After training, the map can be used to classify additional observations for the input space by finding the node with the closest weight vector (smallest distance metric) to the input space vector. Learning algorithm The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certai
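The training and mapping procedure described above (find the node with the closest weight vector, then pull the weight vectors of nearby grid nodes toward the input) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not Kohonen's exact formulation: the grid size, linear decay schedules, and Gaussian neighborhood function used here are assumed choices, and the function names are hypothetical.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small SOM: weight vectors move toward inputs, with each
    update spread over a Gaussian neighborhood on the fixed map grid."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # One weight vector per grid node, initialized randomly in the input space.
    weights = rng.random((grid_h, grid_w, n_features))
    # Fixed (row, col) coordinates of each node in the map space.
    grid = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                indexing="ij"), axis=-1)
    for epoch in range(epochs):
        # Learning rate and neighborhood radius shrink as training proceeds.
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1e-3
        for x in rng.permutation(data):
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Neighborhood is measured in *map* coordinates, which is what
            # preserves the topology induced from the grid.
            grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            neigh = np.exp(-grid_dist2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * neigh * (x - weights)
    return weights

def map_observation(weights, x):
    """Classify a new observation by the node with the closest weight vector."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)
```

After training on data drawn from two well-separated clusters, observations from different clusters map to different nodes, while the nodes between them interpolate smoothly because of the neighborhood function.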
https://en.wikipedia.org/wiki/Apple%20Lisa
The Lisa is a desktop computer developed by Apple, released on January 19, 1983. It is generally considered the first mass-market personal computer operable through a graphical user interface (GUI). In 1983, a machine like the Lisa was still so expensive that it was primarily marketed to individuals and small and medium-sized businesses, as a groundbreaking new alternative to much bigger, much more expensive mainframe or minicomputers from firms such as IBM, which required additional, expensive consultancy from the supplier, specially trained personnel, or at least a much steeper learning curve to maintain and operate. Earlier GUI-controlled personal computers, such as the Xerox Alto, although manufactured in the thousands, were only made for Xerox, the University of California, Berkeley, and select partners in Xerox PARC's developments, from the early to mid-1970s. Development of project "LISA" began in 1978. It underwent many changes and shipped with a five-megabyte hard drive. It was hampered by its high price, insufficient software, unreliable Apple FileWare floppy disks, and the imminent release of the cheaper and faster Macintosh. Only 10,000 Lisa units were sold in two years. Considered a commercial failure (albeit one with technical acclaim), the Lisa introduced a number of advanced features that reappeared on the Macintosh and eventually on IBM PC compatibles. Among them are an operating system with protected memory and a document-oriented workflow. The hardware was more advanced overall than that of the forthcoming Macintosh 128K; the Lisa included hard disk drive support, capacity for up to 2 megabytes (MB) of random-access memory (RAM), expansion slots, and a larger, higher-resolution display. 
The complexity of the Lisa operating system and its associated programs (most notably its office suite), as well as the ad hoc protected memory implementation (due to the lack of a Motorola MMU), placed a high demand on the CPU and, to some extent, the storage system. As a result of cost-cutting measures designed to bring it more into the consumer market, advanced software, and factors such as the delayed availability of the 68000 and its impact on the design process, many felt that the Lisa's user experience was sluggish overall. The workstation-tier price (albeit at the low end of the spectrum at the time) and lack of a technical software application library made it a difficult sell for much of the technical workstation market. Compounding matters, the runaway success of the IBM PC and Apple's decision to essentially compete with itself (via the lower-cost Macintosh) were further impediments to the Lisa's acceptance. In 1982, after Steve Jobs was forced out of the Lisa project by Apple's board of directors, he appropriated the Macintosh project from Jef Raskin, who had originally conceived of a sub-$1,000 text-based appliance computer in 1979. Jobs immediately redefined Macintosh as a less expensive and more focused version of the gra
https://en.wikipedia.org/wiki/NLS
NLS may refer to: Computing NLS (computer system) or oN-Line System, a pioneering computer system by Douglas Engelbart National Language Support or Native Language Support, in software Organisations National Language Services, South Africa National Longitudinal Surveys, U.S. National Lifeguard Service, Canada National Life Stories, U.K. Business Non-Linear Systems, electronics manufacturer Nautilus, Inc., stock symbol NLS, fitness products manufacturer Education North Leamington School, England Nottingham Law School, a law school in Nottingham, England National Law School of India University, Bangalore Libraries National Library Service for the Blind and Print Disabled, U.S. mail circulation service National Library of Scotland Science and technology Nuclear localization signal, in biology Nonlinear Schrödinger equation, in physics Non-linear least squares, in statistics, a method used in regression analysis Spaceflight Nanosatellite Launch System, based at the University of Toronto National Launch System, a 1991 Space Shuttle alternative study Transport Nailsea and Backwell railway station station code NLS, England Niles (Amtrak station) station code NLS, Michigan, United States Other Nimrod Line Squadron, former military service unit at RAF Kinloss National League System, in English football Nürburgring Langstrecken Serie, a motorsport endurance series held at the Nürburgring. Formerly known as VLN (Veranstaltergemeinschaft Langstreckenpokal Nürburgring) See also
https://en.wikipedia.org/wiki/Apple%20IIe%20Card
The Apple IIe Card is a hardware emulation board, also referred to as a compatibility card, which allows compatible Macintosh computers to run software designed for the Apple II series of computers (with the exception of the IIGS). Released in March 1991 for use with the LC family, the card was targeted at the educational market Apple widely dominated, to ease the transition from Apple II-based classrooms, with thousands of entrenched educational software titles, to Macintosh-based classrooms. Overview Well into the 1990s, most schools still had a substantial investment in Apple II computers and software in their classrooms and labs. However, by that period Apple was looking to phase out the Apple II line, and so introduced the Apple IIe Card as a means to transition Apple II educators (and, to a lesser degree, home and small-business users) over to the Macintosh. By adding the optional PDS card to a low-cost Macintosh computer, it provided backwards compatibility with the vast Apple II software library of over 10,000 titles. Software could even be run directly from an Apple II floppy diskette, the same way as on an Apple IIe (made possible via the card's cable adapter, which connected a standard Apple 5.25 drive). A similar "Apple IIGS Card" was planned for running 16-bit Apple IIGS software, but it was canceled after being deemed too costly, leaving no migration path for that segment of the Apple II line. Apple asked the media to call the peripheral the "Apple IIe option board", as earlier "emulator" cards had not been successful. The Apple IIe Card worked in the Macintosh LC series (I, II, III, III+, 475, 520, 550, 575), as well as the LC-slot-compatible Color Classic. When running in Apple II emulation mode, certain Macintosh peripherals and hardware could be "borrowed" and used as Apple II devices. 
For example, the mouse, keyboard, internal speaker, clock, serial ports (printer, modem, networking), extra RAM (up to 1024 KB), internal 3.5 floppy drive and hard disk all functioned as Apple II devices. Furthermore, with the included Y-cable, Apple II specific peripherals could be used as well: The Apple 5.25, Apple UniDisk 3.5, and an Apple II joystick or paddles. The host Macintosh required special emulation software (a boot disk) launched from System 6.0.8 to 7.5.5 in order to activate the IIe Card. Apple II mode runs only in full-screen (a windowed mode is not possible) and all Macintosh operations are suspended while running, as emulation takes over the host computer. Technical aspects Like the Apple IIe itself, the Apple IIe Card uses an onboard 65C02 CPU. The CPU is software-configurable to run at the Apple IIe's native 1.0 MHz speed or at an accelerated 1.9 MHz. Video emulation (text and graphics) is handled through software using native Macintosh QuickDraw routines, which often results in operations being slower than a real Apple IIe except on higher-end machines. Any Macintosh that supports the card can be switched i
https://en.wikipedia.org/wiki/Jeff%20Rulifson
Johns Frederick (Jeff) Rulifson (born August 20, 1941) is an American computer scientist. Early life and education Johns Frederick Rulifson was born August 20, 1941, in Bellefontaine, Ohio. His father was Erwin Charles Rulifson and his mother was Virginia Helen Johns. Rulifson married Janet Irving on June 8, 1963, and had two children. He received a B.S. in mathematics from the University of Washington in 1966. Rulifson earned a Ph.D. in computer science from Stanford University in 1973. Career Rulifson joined the Augmentation Research Center at the Stanford Research Institute (now SRI International) in 1966, working on timesharing software. He led the software team that implemented the oN-Line System (NLS), a system that foreshadowed many future developments in modern computing and networking. Specifically, Rulifson developed the command language for the NLS, among other features. His first job was to create the first display-based version of the system on the CDC 3100, and the programs he wrote included the first online editor. He also redesigned its file structure. Rulifson was also lead programmer and wrote the program and demonstration files for the first public demonstration of the computer mouse in 1968. He was also the chief programmer of the first use of hypertext. Although Douglas Engelbart was the founder and leader of ARC, Rulifson's innovative programming was essential to the realization of Engelbart's vision. Rulifson was also involved in the development of NIL. Rulifson was SRI's representative to the "network working group" in 1968, which led to the first connection on the ARPANET. He described the Decode-Encode Language (DEL), which was designed to allow remote use of NLS over the ARPANET. Although never used, the idea was that small "programs" would be downloaded to enhance user interaction. This concept was fully developed in Sun Microsystems's Java programming language almost 30 years later, as applets. 
Simultaneously, he was involved in the development of the AI programming language QA4. This system was used for the planning done by Shakey, one of the first robots. He left SRI to join the System Sciences Laboratory (SSL) within Xerox PARC in 1973. Here he began work on personal computing and the creation of local networks. One of his first actions was to develop the concept for the desktop icon. By 1978 he was the manager of the center's Office Research Group, where he introduced the use of interdisciplinary scholars into the group's work. Specifically, he was the first computer scientist to begin working alongside anthropologists, hiring several at Xerox to improve their use of field research and enter the field of social science research. At PARC, he worked on implementing distributed office systems. In 1980, he worked for ROLM as an engineering manager and joined Syntelligence, an artificial intelligence applications vendor in Sunnyvale, California, in 1985. He began working for Sun Microsystems Laboratories in 1987, and held
https://en.wikipedia.org/wiki/Butler%20Lampson
Butler W. Lampson, ForMemRS, (born December 23, 1943) is an American computer scientist best known for his contributions to the development and implementation of distributed personal computing. Education and early life After graduating from the Lawrenceville School (where in 2009 he was awarded the Aldo Leopold Award, also known as the Lawrenceville Medal, Lawrenceville's highest award to alumni), Lampson received an A.B. in physics (magna cum laude with highest honors in the discipline) from Harvard University in 1964 and a PhD in electrical engineering and computer science from the University of California, Berkeley in 1967. Career and research During the 1960s, Lampson and others were part of Project GENIE at UC Berkeley. In 1965, several Project GENIE members, specifically Lampson and Peter Deutsch, developed the Berkeley Timesharing System for Scientific Data Systems' SDS 940 computer. After completing his doctorate, Lampson stayed on at UC Berkeley as an assistant professor (1967-1970) and associate professor (1970-1971) of computer science. For a period of time, he concurrently served as director of system development for the Berkeley Computer Corporation (1969-1971). In 1971, Lampson became one of the founding members of Xerox PARC, where he worked in the Computer Science Laboratory (CSL) as a principal scientist (1971-1975) and senior research fellow (1975-1983). His now-famous vision of a personal computer was captured in the 1972 memo entitled "Why Alto?". In 1973, the Xerox Alto, with its three-button mouse and full-page-sized monitor, was born. It is now considered to be the first actual personal computer in terms of what has become the "canonical" GUI mode of operation. 
All the subsequent computers built at Xerox PARC except for the "Dolphin" (used in the Xerox 1100 LISP machine) and the "Dorado" (used in the Xerox 1132 LISP machine) followed a general blueprint called "Wildflower", written by Lampson, and this included the D-Series Machines: the "Dandelion" (used in the Xerox Star and Xerox 1108 LISP machine), "Dandetiger" (used in the Xerox 1109 LISP machine), "Daybreak" (Xerox 6085), and "Dicentra" (used internally to control various specialized hardware devices). At PARC, Lampson helped work on many other revolutionary technologies, such as laser printer design; two-phase commit protocols; Bravo, the first WYSIWYG text formatting program; and Ethernet, the first high-speed local area network (LAN). He designed several influential programming languages such as Euclid. Following the acrimonious resignation of Xerox PARC CSL manager Bob Taylor in 1983, Lampson and Chuck Thacker followed their longtime colleague to Digital Equipment Corporation's Systems Research Center. There, he was a senior consulting engineer (1984-1986), corporate consulting engineer (1986-1993) and senior corporate consulting engineer (1993-1995). Shortly before Taylor's retirement, Lampson left to work for Microsoft Research as an architect (1995-199
https://en.wikipedia.org/wiki/Xerox%20Alto
The Xerox Alto is a computer that was designed from its inception to support an operating system based on a graphical user interface (GUI), later using the desktop metaphor. The first machines were introduced on 1 March 1973, a decade before mass-market GUI machines became available. The Alto is contained in a relatively small cabinet and uses a custom central processing unit (CPU) built from multiple SSI and MSI integrated circuits. Each machine cost tens of thousands of dollars despite its status as a personal computer. Only small numbers were built initially, but by the late 1970s, about 1,000 were in use at various Xerox laboratories, and about another 500 in several universities. Total production was about 2,000 systems. The Alto became well known in Silicon Valley and its GUI was increasingly seen as the future of computing. In 1979, Steve Jobs arranged a visit to Xerox PARC, during which Apple Computer personnel would receive demonstrations of Xerox technology in exchange for Xerox being able to purchase stock options in Apple. After two visits to see the Alto, Apple engineers used the concepts in developing the Apple Lisa and Macintosh systems. Xerox eventually commercialized a heavily modified version of the Alto concepts as the Xerox Star, first introduced in 1981. A complete office system including several workstations, storage and a laser printer cost as much as $100,000, and like the Alto, the Star had little direct impact on the market. History The first computer with a graphical operating system, the Alto built on earlier graphical interface designs. It was conceived in 1972 in a memo written by Butler Lampson, inspired by the oN-Line System (NLS) developed by Douglas Engelbart and Dustin Lindberg at SRI International (SRI). Of further influence was the PLATO education system developed at the Computer-based Education Research Laboratory at the University of Illinois. The Alto was designed mostly by Charles P. Thacker. 
Industrial design and manufacturing were subcontracted to Xerox, whose Special Programs Group team included Doug Stewart as program manager, Abbey Silverstone (operations), and Bob Nishimura (industrial designer). An initial run of 30 units was produced by Xerox El Segundo (Special Programs Group), working with John Ellenby at PARC and Doug Stewart and Abbey Silverstone at El Segundo, who were responsible for redesigning the Alto's electronics. Due to the success of the pilot run, the team went on to produce approximately 2,000 units over the next ten years. Several Xerox Alto chassis are now on display at the Computer History Museum in Mountain View, California; one is on display at the Computer Museum of America in Roswell, Georgia; and several are in private hands. Running systems are on display at the System Source Computer Museum in Hunt Valley, Maryland. Charles P. Thacker was awarded the 2009 Turing Award of the Association for Computing Machinery on March 9, 2010, for his pioneering design and realization of t
https://en.wikipedia.org/wiki/Celeron
Celeron is a discontinued series of low-end IA-32 and x86-64 computer microprocessor models targeted at low-cost personal computers, manufactured by Intel. The first Celeron-branded CPU was introduced on April 15, 1998, and was based on the Pentium II. Celeron-branded processors released from 2009 to 2023 are compatible with IA-32 software. They typically offer less performance per clock speed compared to flagship Intel CPU lines, such as the Pentium or Core brands. They often have less cache or intentionally disabled advanced features, with variable impact on performance. While some Celeron designs have achieved strong performance for their segment, the majority of the Celeron line has exhibited noticeably degraded performance. This has been the primary justification for the higher cost of other Intel CPU brands versus the Celeron range. In September 2022, Intel announced that the Celeron brand, along with Pentium, would be replaced with the new "Intel Processor" branding for low-end processors in desktops and laptops from 2023 onwards. Background As a product concept, the Celeron was introduced in response to Intel's loss of the low-end market, in particular to the Cyrix 6x86, the AMD K6, and the IDT Winchip. Intel's existing low-end product, the Pentium MMX, was no longer performance-competitive at 233 MHz. Although a faster Pentium MMX would have been a lower-risk strategy, the industry-standard Socket 7 platform hosted a market of competitor CPUs that could be drop-in replacements for the Pentium MMX. Instead, Intel pursued a budget part that was to be pin-compatible with their high-end Pentium II product, using the Pentium II's proprietary Slot 1 interface. The Celeron also effectively killed off the nine-year-old 80486 chip, which had been the low-end processor brand for entry-level desktops and laptops until 1998. Intel hired marketing firm Lexicon Branding, which had originally come up with the name "Pentium", to devise a name for the new product as well. 
The San Jose Mercury News described Lexicon's reasoning behind the name they chose: "Celer is Latin for swift, as in the word 'accelerate', and 'on' as in 'turned on'. Celeron is seven letters and three syllables, like Pentium. The 'Cel' of Celeron rhymes with 'tel' of Intel." Desktop Celerons P6-based Celerons Covington Launched in April 1998, the first Covington Celeron was essentially a 266 MHz Pentium II manufactured without any secondary cache at all. Covington also shared the 80523 product code of Deschutes. Although clocked at 266 or 300 MHz (frequencies 33 or 66 MHz higher than the desktop version of the Pentium w/MMX), the cacheless Celerons had trouble outcompeting the parts they were designed to replace. Substantial numbers were sold on first release, largely on the strength of the Intel name, but the Celeron quickly achieved a poor reputation both in the trade press and among computer professionals. The initial market interest faded rapidly in the face of its poor p
https://en.wikipedia.org/wiki/Modifier%20key
In computing, a modifier key is a special key (or combination) on a computer keyboard that temporarily modifies the normal action of another key when pressed together. By themselves, modifier keys usually do nothing; that is, pressing any of the Shift, Ctrl, or Alt keys alone does not (generally) trigger any action from the computer. For example, in most keyboard layouts the Shift+A key combination will produce a capital letter "A" instead of the default lower-case letter "a" (unless in Caps Lock or Shift lock mode). The combination Alt+F4 in Microsoft Windows will close the active window; in this instance, Alt is the modifier key. In contrast, pressing just Alt or F4 will probably do nothing unless assigned a specific function in a particular program (for example, activating input aids or the toolbar of the active window in Windows). User interface expert Jef Raskin coined the term "quasimode" to describe the state a computer enters when a modifier key is pressed. Modifier keys on personal computers The most common are: Ctrl (Control); Alt (Alternate), also labelled Option on Apple keyboards; AltGr (Alternate Graphic); Meta, found on MIT, Symbolics, and Sun Microsystems keyboards; Hyper, found on the Space-cadet keyboard; Super, found on MIT, Symbolics, Linux, and BSD keyboards; Win (Windows logo), found on Windows keyboards; Cmd (Command), found on Apple keyboards, labelled with an Apple logo on older keyboards; and Fn (Function), often present on small-layout keyboards, or keyboards where the top row of function keys has multimedia functions like controlling volume attached. The (Sun) Meta key, Windows key, (Apple) Cmd key, and the analogous "Amiga key" on Amiga computers, are usually handled equivalently. Under the Linux operating system, the desktop environment KDE calls this key Meta, while GNOME calls this key, neutrally, Super. This could be considered confusing, since the original space-cadet keyboard and the X Window System recognize a "Meta" modifier distinct from "Super". 
The ZX Spectrum has a Symbol Shift key in addition to Caps Shift. This was used to access additional punctuation and keywords. The MSX computer keyboard, besides Shift and Control, also included two special modifier keys, Code and Graph. In some models, such as the Brazilian Gradiente Expert, the Code and Graph keys are labelled as Left and Right Graphics keys. They are used to select special graphic symbols and extended characters. Likewise, the Commodore 64 and other Commodore computers had the Commodore key at the bottom left of the keyboard. Compact keyboards, such as those used in laptops, often have an Fn key to save space by combining two functions that are normally on separate keys. On laptops, pressing Fn plus one of the function keys, e.g., F2, often controls hardware functions. Keyboards that lack a dedicated numeric keypad may mimic its functionality by combining the Fn key with other keys. The MIT space-cadet keyboard had additional Top and Front modifier keys. Combine
https://en.wikipedia.org/wiki/Serial%20port
On computers, a serial port is a serial communication interface through which information transfers in or out sequentially one bit at a time. This is in contrast to a parallel port, which communicates multiple bits simultaneously in parallel. Throughout most of the history of personal computers, data has been transferred through serial ports to devices such as modems, terminals, various peripherals, and directly between computers. While interfaces such as Ethernet, FireWire, and USB also send data as a serial stream, the term serial port usually denotes hardware compliant with RS-232 or a related standard, such as RS-485 or RS-422. Modern consumer personal computers (PCs) have largely replaced serial ports with higher-speed standards, primarily USB. However, serial ports are still frequently used in applications demanding simple, low-speed interfaces, such as industrial automation systems, scientific instruments, point of sale systems and some industrial and consumer products. Server computers may use a serial port as a control console for diagnostics, while networking hardware (such as routers and switches) commonly use serial console ports for configuration, diagnostics, and emergency maintenance access. To interface with these and other devices, USB-to-serial converters can quickly and easily add a serial port to a modern PC. Hardware Modern devices use an integrated circuit called a UART to implement a serial port. This IC converts characters to and from asynchronous serial form, implementing the timing and framing of data specified by the serial protocol in hardware. The IBM PC implements its serial ports, when present, with one or more UARTs. Very low-cost systems, such as some early home computers, would instead use the CPU to send the data through an output pin, using the bit banging technique. These early home computers often had proprietary serial ports with pinouts and voltage levels incompatible with RS-232. 
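The "timing and framing" that a UART implements can be made concrete with a small sketch of the common 8N1 format: one start bit (0), eight data bits sent least-significant-bit first, no parity, and one stop bit (1). Real UARTs do this in hardware with precise bit timing; the function names below are hypothetical and the sketch models only the bit-level framing, not voltage levels or baud rate.

```python
def uart_frame(byte):
    """Frame one byte as 8N1: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    assert 0 <= byte <= 0xFF
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

def uart_deframe(bits):
    """Recover the byte from a 10-bit 8N1 frame, checking the start/stop bits."""
    if len(bits) != 10 or bits[0] != 0 or bits[9] != 1:
        raise ValueError("framing error")
    return sum(bit << i for i, bit in enumerate(bits[1:9]))
```

For example, `uart_frame(0x41)` (ASCII "A", binary 01000001) yields `[0, 1, 0, 0, 0, 0, 0, 1, 0, 1]`: the start bit, the data bits with the low-order 1 first, and the stop bit. The bit-banging technique mentioned above amounts to a CPU emitting such a sequence on an output pin at the right intervals.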
Before large-scale integration (LSI) made UARTs common, serial ports were commonly used in mainframes and minicomputers, which would have multiple small-scale integrated circuits to implement shift registers, logic gates, counters, and all the other logic needed. As PCs evolved, serial ports were included in the Super I/O chip and then in the chipset. DTE and DCE The individual signals on a serial port are unidirectional and when connecting two devices, the outputs of one device must be connected to the inputs of the other. Devices are divided into two categories: data terminal equipment (DTE) and data circuit-terminating equipment (DCE). A line that is an output on a DTE device is an input on a DCE device and vice versa, so a DCE device can be connected to a DTE device with a straight wired cable, in which each pin on one end goes to the same numbered pin on the other end. Conventionally, computers and terminals are DTE, while peripherals such as modems are DCE. If it is necessary to connect two DTE (or DCE) devices toge
https://en.wikipedia.org/wiki/PDS
PDS can refer to:

Entertainment
- Partially Deceased Syndrome, a fictional condition in the BBC show In the Flesh
- Panzer Dragoon Saga, a 1998 video game

Computing
- Partitioned data set, a file organization on IBM mainframes
- Passive data structure, another term for record
- Processor Direct Slot, in some old Macintosh computers
- Personal data service, a service that lets individuals manage their own personal data
- Protective distribution system, a safeguarded telecommunication system
- Microsoft BASIC Professional Development System (PDS), a superset of QuickBASIC
- Paradox Development Studios, a game developer subsidiary of Paradox Interactive

Medical
- Paroxysmal depolarizing shift, a pre-seizure interictal neuronal EEG spike in epilepsy
- Pigment dispersion syndrome, an affliction of the eye
- Polydioxanone, a type of absorbable suture
- Pleomorphic dermal sarcoma, a form of histiocytoma

Political parties
- Party of Democratic Socialism (Partei des Demokratischen Sozialismus), a defunct political party of Germany
- Social Democracy Party of Albania
- Party of Democratic Socialism (India)
- Party of the Sicilians
- Progressive Democrats, a defunct political party of Ireland
- Democratic Party of the Left (Partito Democratico della Sinistra), a defunct political party of Italy
- Senegalese Democratic Party (Parti Démocratique Sénégalais), a political party of Senegal
- Sudanese Democratic Party (Parti Démocratique Soudanais), a defunct political party of Sudan
- Democratic Social Party (Partido Democrático Social), a defunct political party of Brazil
- Prosperous Peace Party (Partai Damai Sejahtera), a political party of Indonesia
- Parti de la Démocratie Socialiste, a defunct provincial political party of Quebec, Canada

Schools
- Princeton Day School, New Jersey, US
- Peabody Demonstration School, a former school in Nashville, Tennessee, US
- Presbyterian Day School, Memphis, Tennessee, US
- Providence Day School, Charlotte, North Carolina, US
- Patumwan Demonstration School, Srinakharinwirot University, Bangkok, Thailand

Scientific
- Photothermal deflection spectroscopy
- Planetary Data System, a NASA data archive system
- Polydioxanone, a synthetic polymer
- 15-cis-phytoene desaturase, an enzyme
- Poincaré dodecahedral space

Other
- Particularly dangerous situation, an emphasis phrase in National Weather Service weather bulletins
- Piedras Negras International Airport, IATA code PDS
- Precision Drill Squad, a skilled military performance unit of Singapore
- Product design specification
- Public distribution system, India's food distribution system
- The Philip DeFranco Show, on YouTube
- Permanent duty station, in the United States military forces (see temporary duty assignment)
https://en.wikipedia.org/wiki/Dish%20Network
DISH Network Corporation (commonly shortened to Dish, an acronym for "Digital Sky Highway") is an American television provider and the owner of the direct-broadcast satellite provider Dish, commonly known as Dish Network, and the over-the-top IPTV service Sling TV. Additionally, Dish offers mobile wireless service through Dish Wireless. On July 1, 2020, Dish acquired the prepaid service Boost Mobile, and it added the postpaid service Boost Infinite on December 7, 2022. Based in unincorporated Douglas County, Colorado, the company has approximately 16,000 employees. History In January 2008, EchoStar Communications Corporation, which was founded by Charlie Ergen as a satellite television equipment distributor in 1980, changed its name to DISH Network Corporation and spun off its technology arm as a new company named EchoStar Corporation. The company had begun using DISH Network as its consumer brand in 1996, after the launch of its first satellite, EchoStar I, in December 1995. That launch marked the beginning of its subscription television services. Joseph Clayton became president and chief executive officer of the company in June 2011, while Charlie Ergen remained chairman. Clayton remained in the position until March 31, 2015, when he retired, leaving Ergen to resume the post. Ergen has said diversifying and updating technology for the company will be a high priority, with an expectation that, over the coming decade, the company will provide internet, video, and telephone service for both home and mobile applications. In December 2017, DISH Network announced that Ergen would step down and be replaced by Erik Carlson. The company provided services to 13.7 million television and 580,000 broadband subscribers. Founding and early growth Dish Network began operations on March 4, 1996, as a service of EchoStar.
EchoStar was formed in 1980 by its chairman and chief executive officer, Charlie Ergen, along with colleagues Candy Ergen and Jim DeFranco, as a distributor of C-band satellite television systems. In 1987, EchoStar applied for a direct-broadcast satellite broadcast license with the FCC and was granted access to orbital slot 119° west longitude in 1992. On December 7, 2007, EchoStar announced it would spin off its technology and infrastructure assets into a separate company under the EchoStar name, after which the remainder of the company would be renamed DISH Network Corporation. The spun-off EchoStar began trading on January 3, 2008. Acquisitions and expansion In 2011, Dish Network spent over $3 billion in acquisitions of companies in bankruptcy, which The Motley Fool's Anders Bylund described as "a veritable buying rampage in the bargain bin." This includes the April 6, 2011, purchase of Blockbuster Inc. in a bankruptcy auction in New York, agreeing to pay $322 million in cash and assume $87 million in liabilities and other obligations for the nationwide video-rental company. DISH Network also acquired the defunct companies DBSD and Te
https://en.wikipedia.org/wiki/C-SPAN
Cable-Satellite Public Affairs Network (C-SPAN) is an American cable and satellite television network, created in 1979 by the cable television industry as a nonprofit public service. It televises proceedings of the United States federal government and other public affairs programming. C-SPAN is a private, nonprofit organization funded by its cable and satellite affiliates. It does not have advertisements on any of its television networks or radio stations, nor does it solicit donations or pledges. The network operates independently; the cable industry and the U.S. Congress have no control over its programming content. The C-SPAN network includes the television channels C-SPAN, focusing on the U.S. House of Representatives; C-SPAN2, focusing on the U.S. Senate; and C-SPAN3, airing other government hearings and related programming; the radio station WCSP-FM; and a group of websites which provide streaming media and program archives. C-SPAN's television channels are available to approximately 100 million cable and satellite households within the United States. WCSP-FM is broadcast on FM radio in Washington, D.C., and is available throughout the U.S. on SiriusXM, via Internet streaming, and globally through iOS and Android apps. The network televises U.S. political events, particularly live and "gavel-to-gavel" coverage of the U.S. Congress, as well as other major events worldwide. Coverage of political and policy events is unmoderated, providing the audience with unfiltered information about politics and government. Non-political coverage includes historical programming, programs dedicated to non-fiction books, and interview programs with noteworthy individuals associated with public policy. History Development Brian Lamb, C-SPAN's chairman and former chief executive officer, conceived C-SPAN in 1975 while working as the Washington, D.C., bureau chief of Cablevision.
Cable television was a rapidly growing industry, and Lamb envisioned a non-profit network, financed by the cable industry, that televised Congressional sessions, public affairs events, and policy discussions. Bob Rosencrans, providing $25,000 of initial funding in 1979, and John D. Evans, providing wiring and access to the headend needed for the distribution of the C-SPAN signal, were among those who helped Lamb launch the network. At meetings with House of Representatives leadership, Lamb and Rosencrans promised that the network would be non-political, which helped override broadcast and local network resistance. C-SPAN launched on March 19, 1979, for the first televised session made available by the House of Representatives, beginning with a speech by then-Tennessee representative Al Gore. Upon its debut, only 3.5 million homes were wired for C-SPAN, and the network had just three employees. For the first few years C-SPAN leased satellite time from the USA Network and had approximately 9 hours of daily programming. On February 1, 1982, C-SPAN launched its own transponder an
https://en.wikipedia.org/wiki/Fox%20Sports%20Networks
Fox Sports Networks (FSN), formerly known as Fox Sports Net, was the collective name for a group of regional sports channels in the United States. Formed in 1996 by News Corporation, the networks were acquired by The Walt Disney Company on March 20, 2019, following its acquisition of 21st Century Fox. A condition of that acquisition imposed by the U.S. Department of Justice required Disney to sell the regional networks by June 18, 2019, 90 days after the completion of its acquisition. Disney subsequently agreed to sell the networks (excluding the YES Network, being reacquired by Yankee Global Enterprises) to Sinclair; the transaction was completed on August 22, 2019. The networks continued to use the Fox Sports name only under a transitional license agreement while rebranding options were explored. A rebranding cross-partnership with Bally's Corporation took effect on March 31, 2021, and the networks were rebranded as Bally Sports, ending the Fox Sports Networks branding after 25 years. Each of the channels in the group carried regional broadcasts of sporting events from various professional, collegiate and high school sports teams (with broadcasts typically exclusive to each individual channel, although some were shown on multiple FSN channels or syndicated to a local broadcast station within a particular team's designated market area), along with regional and national sports discussion, documentary and analysis programs. Depending on their individual team rights, some Fox Sports Networks maintained overflow feeds available via subscription television providers in their home markets, which provided alternate programming when not used to carry game broadcasts that the main feed could not carry due to scheduling conflicts. 
Fox Sports Networks was headquartered in Houston, Texas, with master control facilities based in both Houston and Los Angeles; FSN also maintained production facilities at Stage 19 at Universal Studios Florida (which formerly served as home of Nickelodeon Studios until its closure in 2005). History Beginnings At the dawn of the cable television era, many regional sports networks (RSNs) vied to compete with the largest national sports network, ESPN. The most notable were the SportsChannel network, which first began operating in 1976 with the launch of the original SportsChannel (now MSG Sportsnet) in the New York City area and later branched out into channels serving Chicago and Florida; Prime Network, which launched in 1983 with Home Sports Entertainment (now Bally Sports Southwest) as its charter member network and later branched out onto the West Coast as "Prime Sports"; and SportSouth, an RSN operated by the Turner Broadcasting System. On October 31, 1995, News Corporation, which ten years earlier launched the Fox Broadcasting Company, a general entertainment broadcast network that formed its own sports division in 1994 with the acquisition of the television rights to the National Football Conference of the National Fo
https://en.wikipedia.org/wiki/TNT%20%28American%20TV%20network%29
TNT (originally an abbreviation for Turner Network Television) is an American basic cable television channel owned by the Warner Bros. Discovery Networks unit of Warner Bros. Discovery (WBD) that launched on October 3, 1988. TNT's original purpose was to air classic films and television series to which Turner Broadcasting maintained spillover rights through its sister station TBS. Since June 2001, the network has shifted its focus to dramatic television series and feature films, along with some sporting events (including NBA, NHL, U.S. Soccer, the NCAA Division I men's basketball tournament and professional wrestling shows AEW Rampage and AEW Collision), as TBS shifted its focus to comedic programming. TNT was received by approximately 89.573 million households that subscribe to a subscription television service throughout the United States. History Beginnings Prior to the launch of the channel in 1988, the Turner Network Television name had been utilized by the Turner Broadcasting System for an ad hoc syndication service which produced and distributed various sporting events for carriage on Turner's Atlanta, Georgia superstation WTBS (channel 17, now WPCH-TV, which was separated from its national cable feed, TBS, in October 2007) as well as broadcast television stations throughout the United States. The Turner Network Television syndication service launched in 1982 to produce two exhibition games organized by the NFL Players Association (NFLPA) during the NFL strike, which were broadcast on WTBS and its national superstation feed. (The agreement with the NFLPA originally called for 18 games to be broadcast by WTBS on Sunday afternoons and Monday nights during the originally proposed strike season, but was reduced to the exhibition games amid lawsuits filed by the National Football League against Turner Broadcasting and the NFLPA union.)
The TNT syndication service also produced and distributed the first Goodwill Games—organized by Ted Turner himself in response to the American and Soviet boycotts of the 1980 and 1984 Summer Olympics, respectively—in 1986. On October 6, 1987, Ted Turner announced the launch of Turner Network Television (TNT)—his fifth basic cable network venture, following SuperStation TBS, CNN, Headline News (now HLN) and the short-lived Cable Music Channel—in a keynote address at the opening day of the Atlantic Cable Show in Atlantic City, New Jersey, stating that the channel would center around major television events. Turner originally estimated that TNT would be offered to cable systems at a monthly rate of 10¢ per subscriber at launch (increasing to 20¢ per subscriber per month by March 1989), with 10 minutes of advertising being carried each hour (three to four minutes of which would be given to prospective cable systems for local advertising). Turner Broadcasting struggled to obtain carriage commitments from various cable providers to commence with the proposed service's launch plans, putti
https://en.wikipedia.org/wiki/USA%20Network
USA Network (simply USA) is an American basic cable television channel owned by the NBCUniversal Media Group division of Comcast's NBCUniversal. It was originally launched in 1977 as Madison Square Garden Sports Network, one of the first national sports cable television channels, before being relaunched under its current name on April 9, 1980. Since then, USA has steadily gained popularity through its original programming, a long-established partnership with WWF/WWE and, for many years, limited sports programming that increased significantly in 2022 after the shutdown of NBCSN. USA Network is commercially available to about 90.4 million households (98% of households with pay television) in the US. History Madison Square Garden Sports Network (1977–1980) USA Network originally launched on September 22, 1977, as the Madison Square Garden Sports Network (not to be confused with the New York City-area regional sports network of the same name, now simply known as MSG Network). The network was founded by cable provider UA-Columbia Cablevision and the Madison Square Garden Corp. From its beginning (and for the next two decades), the network was run by chairwoman and CEO Kay Koplovitz. The channel was one of the first national cable television channels, utilizing satellite delivery as opposed to the then-industry-standard microwave relay to distribute its programming to cable systems. It was also the first cable network to rely heavily on advertising revenue. At launch, the network mostly broadcast sporting events from Madison Square Garden to a national audience (sharing programming with the aforementioned MSG Network). The network quickly added a mix of college and less well-known professional sports held at other venues, similar to those found during the early years of ESPN. In 1978, children's programming was also added to the lineup.
MCA/Paramount ownership (1981–1994) and Time ownership (1981–1987) On April 9, 1980, the channel changed its name to USA Network. It also added a children's program called Calliope to its schedule and some talk shows in an effort to appeal to women. The new network also offered a programming block from Black Entertainment Television (which would eventually launch as its own network) and carried C-SPAN during the day. In 1981, ownership of the network changed. First, Time Inc. agreed to buy UA-Columbia's share of the network, contingent upon Madison Square Garden owner Gulf + Western transferring its share of the network to its Paramount Pictures division. Shortly thereafter, MCA Inc. also bought into the network, with the three companies all owning equal shares. The three partners had a non-compete clause that would prevent them from owning other basic cable networks independently from the USA joint venture; however, it was acknowledged that Time also owned powerful USA Network rival Home Box Office. This clause would cause Time Inc. to drop out of the venture in 1987, as the company atte
https://en.wikipedia.org/wiki/Showtime%20%28TV%20network%29
Showtime is an American premium television network and the flagship property of Showtime Networks, a sub-division of the Paramount Media Networks division of Paramount Global. Showtime's programming includes theatrically released motion pictures, original television series, boxing and mixed martial arts matches, occasional stand-up comedy specials, and made-for-TV movies. Headquartered at Paramount Plaza, which is in the northern part of New York City's Broadway district, Showtime operates eight 24-hour, linear multiplex channels; a traditional subscription video on demand service; and two proprietary streaming platforms (the TV Everywhere offering Showtime Anytime, which is included as part of a subscription to the linear Showtime television service, and a namesake over-the-top service) sold directly to streaming-only consumers. In addition, the Showtime brand has been licensed for use by a number of channels and platforms worldwide, including Showtime Arabia (since merged into OSN) in the Middle East and North Africa, and the now-defunct Showtime Movie Channels in Australia. Showtime is also sold independently of the traditional and over-the-top multichannel video programming distributors a la carte through Apple TV Channels and Amazon Channels, which feature VOD library content and live feeds of Showtime's linear television services (consisting of the primary channel's East and West Coast feeds; for Amazon Video customers the East Coast feeds show its seven multiplex channels). Showtime's programming was available to approximately 28.567 million U.S. households which subscribed to a multichannel television provider (28.318 million of which receive Showtime's primary channel at a minimum).
On January 30, 2023, Paramount announced plans to fully integrate the Showtime direct-to-consumer service with the premium tier of the Paramount+ streaming service; the combined service will be branded as Paramount+ with Showtime, replacing a streaming bundle of the same name that launched in mid-2022. The merger was completed on June 27, 2023. The standalone Showtime and cable-specific Showtime Anytime apps will be shut down on December 31, 2023. History Early years (1976–1982) Showtime was launched on July 1, 1976, on Times-Mirror Cable systems in Escondido, Long Beach, and Palos Verdes, California, through the conversion of 10,000 subscribers of the previous Channel One franchise. Exactly a week later, Showtime launched on Viacom Cablevision's system in Dublin, California; the channel was originally owned by Viacom. The first program to be broadcast on Showtime was Celebration, a concert special featuring performances by Rod Stewart, Pink Floyd, and ABBA. By the end of its first year on the air, Showtime had 55,000 subscribers nationwide. On March 7, 1978, Showtime became a nationally distributed service when it was uplinked to satellite, becoming a competitor with HBO and other pay cable networks. In 1979, Viacom sold a 50% stake in Showtime to the TelePrompTer Cor
https://en.wikipedia.org/wiki/Ion%20Television
Ion Television is an American broadcast television network owned by the Katz Broadcasting subsidiary of the E. W. Scripps Company. The network first began broadcasting on August 31, 1998, as Pax TV, focusing primarily on family-oriented entertainment programming. It rebranded as i: Independent Television (commonly referred to as "i") on July 1, 2005, converting into a general entertainment network featuring recent and older acquired programs. The network adopted its identity as Ion Television on January 29, 2007, and airs programming in daily binge blocks of one program, usually acquired procedural dramas. The network also carries some holiday specials and films before Christmas. Ion is available throughout most of the United States through its group of 44 owned-and-operated stations and 20 network affiliates, as well as through distribution on pay-TV providers and streaming services. Since 2014, the network has also increased affiliate distribution through the digital subchannels of local television stations owned by companies such as Gray Television and Nexstar Media Group, in markets where it is unable to maintain a main-channel affiliation or own a standalone station; these subchannel affiliations serve the same purpose as the distribution of Ion's main network feed via pay-TV providers and streaming services. The network's stations cover all of the top 20 U.S. markets and 37 of the top 50 markets. Ion's owned-and-operated stations cover 64.8% of the United States population, by far the most of any U.S. station ownership group; it is able to circumvent the legal limit of covering 39% of the population because all of its stations operate on the UHF television band, which is subject to a discount in regard to that limit.
The restoration of the UHF discount in the digital age has proven controversial among other broadcast groups, with FCC rulings shifting between presidential administrations; however, because the network's parent company mainly acquired low-performing stations and stations on the fringes of markets, which targeted lower-profile cities in the analog age, the discount has not been an issue for Ion Media itself. History PAX (1998–2005) The network was launched by Bud Paxson, co-founder of the Home Shopping Network and chairman of parent company Paxson Communications (the forerunner to Ion Media). It was originally to be called Pax Net, but was renamed Pax TV (often referred to as simply "Pax"; stylized as "PAX") – a dual reference to its founder and corporate parent, and the Latin word for "peace" – shortly before its launch. Paxson, who felt that television programs aired by other broadcast networks were too raunchy and not family-friendly enough, had decided to create a network that he perceived as an alternative. Since the new network would focus on programming tailored to family audiences, PAX maintained a considerably more conservative programming content policy than the major commercial television networks, restricting profanity, violence and sexual content;
https://en.wikipedia.org/wiki/China%20Central%20Television
China Central Television (CCTV) is a national television broadcaster of China, established in 1958 as a propaganda outlet. Its 50 channels broadcast a variety of programming to more than one billion viewers in six languages. CCTV is operated by the National Radio and Television Administration, which reports directly to the Chinese Communist Party (CCP)'s Central Propaganda Department. CCTV has a variety of functions, such as news communication, social education, culture, and entertainment information services. As a state television station, it is responsible to both the Central Committee of the Chinese Communist Party and the State Council. It is a key player in the Chinese government's propaganda network. According to Freedom House and other media commentators, CCTV's reporting on topics sensitive to the Chinese government and CCP is distorted and often used as a weapon against the party's perceived enemies. History In 1954, CCP chairman Mao Zedong proposed that China establish its own TV station. On 5 February 1955, the central broadcasting bureau reported to the State Council, proposing the establishment of a medium-sized television station; later, premier Zhou Enlai included the planned introduction of television broadcasts in China's first five-year plan. In December 1957, the central broadcasting bureau sent Luo Donghe and Meng Qiyu to the Soviet Union and the German Democratic Republic to inspect their TV stations (see Television in the Soviet Union and Deutscher Fernsehfunk); the duo then returned to Beijing to prepare for the establishment of the TV station. In time for its 20th anniversary, Beijing Television was formally renamed China Central Television on May 1, 1978, and a new logo was unveiled. Until the late 1970s, CCTV held only evening broadcasts, usually closing down at midnight.
During the summer and winter academic vacations, it occasionally transmitted daytime programming for students, while special daytime programs were aired during national holidays. In 1980, CCTV experimented with news relays from local and central television studios via microwave. In 1984, CCTV established the wholly owned subsidiary CITVC. By 1985, CCTV had already become a leading television network in China. In 1987, CCTV's profile grew due to its adaptation and presentation of Dream of the Red Chamber, the first Chinese television drama to enter the global market. In the same year, CCTV exported 10,216 shows to 77 foreign television stations. Initially, the CCP's Central Propaganda Department issued directives censoring programs. During reforms in the 1990s, it adopted new standards for CCTV, "affordability" and "acceptability", loosening the previous government control. Affordability refers to the ability of programs to attract purchasers, while acceptability requires that a program has acceptable content, preventing the broadcast of material that contains inappropriate content or expresses views against the CCP. In March 2018, as
https://en.wikipedia.org/wiki/Engineering%20Research%20Associates
Engineering Research Associates, commonly known as ERA, was a pioneering computer firm from the 1950s. ERA became famous for its numerical computers, but as the market expanded it became better known for its drum memory systems. The company was eventually purchased by Remington Rand and merged into its UNIVAC department. Many of the company founders later left to form Control Data Corporation. Wartime origins of ERA The ERA team started as a group of scientists and engineers working for the US Navy during WWII on code-breaking, in a division known as the Communications Supplementary Activity - Washington (CSAW). After the war, budgets were cut for most military projects, including CSAW. Joseph Wenger of the Navy's cryptanalytic group was particularly worried that the CSAW team would scatter to various companies and the Navy would lose its ability to quickly design new machines. Post-war organization Wenger and two members of the CSAW team, William Norris and Howard Engstrom, started looking for investors interested in supporting the development of a new computer company. Their only real lead, at Kuhn, Loeb & Co., eventually fell through. They then met John Parker, an investment banker who had run Northwest Aeronautical Corporation (NAC), a glider subsidiary of Chase Aircraft, in St. Paul, Minnesota. NAC was in the process of shutting down as the end of the war terminated most of its contracts, and Parker was looking for new projects to keep the factory running. He was told nothing about the work the team would do, but after being visited by a series of increasingly high-ranking naval officers culminating with James Forrestal, he knew "something" was up and decided to give it a try. Norris, Engstrom, and their group incorporated ERA in January 1946, hired forty of their codebreaking colleagues, and moved to the NAC factory.
During the early years, the company took on any engineering work that came its way, but was generally kept in business developing new code-breaking machines for the Navy. Most of the machines were custom-built to crack a specific code, and increasingly used magnetic drum memory to process and analyze the coded texts. To ensure secrecy, the factory was declared to be a Navy Reserve base, and armed guards were posted at the entrance. ERA's numerous military and intelligence projects contributed to Minnesota's becoming "the Land of 10,000 Top-Secret Computer Projects." Goldberg and Demon codebreakers Their first machine, Goldberg, completed in 1947, used a crude drum made by gluing magnetic tape to the surface of a large metal cylinder that could be spun at 50 RPM for reading (and much slower for writing). Over the next few years, the drum memory systems increased in capacity and speed, along with the paper tape readers needed to feed the data onto the drums. ERA later ended up in a major patent fight with Technitrol Engineering, which introduced a drum memory of its own in 1952. One of the follow-on machines, Demon, was built to crack a sp
https://en.wikipedia.org/wiki/DECnet
DECnet is a suite of network protocols created by Digital Equipment Corporation. Originally released in 1975 to connect two PDP-11 minicomputers, it evolved into one of the first peer-to-peer network architectures, transforming DEC into a networking powerhouse in the 1980s. Initially built with three layers, it later (1982) evolved into a seven-layer OSI-compliant networking protocol. DECnet was built into the DEC flagship operating system OpenVMS from its inception. Digital later ported it to Ultrix and OSF/1 (later Tru64), as well as to the Apple Macintosh and to IBM PCs running variants of DOS, OS/2 and Microsoft Windows under the name PATHWORKS, allowing these systems to connect to DECnet networks of VAX machines as terminal nodes. While the DECnet protocols were designed entirely by Digital Equipment Corporation, DECnet Phase II (and later) were open standards with published specifications, and several implementations were developed outside DEC, including ones for FreeBSD and Linux. DECnet code in the Linux kernel was marked as orphaned on February 18, 2010, and removed on August 22, 2022. Evolution DECnet refers to a specific set of hardware and software networking products which implement the DIGITAL Network Architecture (DNA). The DIGITAL Network Architecture has a set of documents which define the network architecture in general, state the specifications for each layer of the architecture, and describe the protocols which operate within each layer. Although network protocol analyzer tools tend to categorize all protocols from DIGITAL as "DECnet", strictly speaking, non-routed DIGITAL protocols such as LAT, SCS, AMDS, and LAST/LAD are not DECnet protocols and are not part of the DIGITAL Network Architecture. To trace the evolution of DECnet is to trace the development of DNA. The beginnings of DNA were in the early 1970s. DIGITAL published its first DNA specification at about the same time that IBM announced its Systems Network Architecture (SNA).
Since that time, development of DNA has evolved through the following phases:

1970-1980

Phase I (1974): Support limited to two PDP-11s running the RSX-11 operating system, or a small number of PDP-8s running the RTS-8 operating system, with communication over point-to-point (DDCMP) links between nodes.

Phase II (1975): Support for networks of up to 32 nodes, with multiple different implementations which could interoperate with each other. Implementations expanded to include RSTS, TOPS-10, TOPS-20 and VAX/VMS, with communications between processors still limited to point-to-point links. Introduction of downline loading (MOP), file transfer using the File Access Listener (FAL), remote file access using the Data Access Protocol (DAP), task-to-task programming interfaces, and network management features.

Phase III (1980): Support for networks of up to 255 nodes with 8-bit addresses, over point-to-point and multi-drop links. Introduction of adaptive routing capability, record access, a network
https://en.wikipedia.org/wiki/Kevin%20Mitnick
Kevin David Mitnick (August 6, 1963 – July 16, 2023) was an American computer security consultant, author, and convicted hacker. He is best known for his high-profile 1995 arrest and five years in prison for various computer and communications-related crimes. Mitnick's pursuit, arrest, trial, and sentence, along with the associated journalism, books, and films, were all controversial. After his release from prison, he ran his own security firm, Mitnick Security Consulting, LLC, and was also involved with other computer security businesses. Early life and education Mitnick was born on August 6, 1963, in Van Nuys, California. His father was Alan Mitnick, his mother was Shelly Jaffe, and his maternal grandmother was Reba Vartanian. He grew up in Los Angeles, California. At age 12, Mitnick convinced a bus driver to tell him where he could buy his own ticket punch for "a school project", and was then able to ride any bus in the greater Los Angeles area using unused transfer slips he found in a dumpster next to the bus company garage. Mitnick attended James Monroe High School in North Hills, during which time he became a licensed amateur radio operator with callsign WA6VPS (his license was restored after imprisonment with callsign N6NHG). He chose the nickname "Condor" after watching the movie Three Days of the Condor. He later enrolled at Los Angeles Pierce College and USC. Career For a time, Mitnick worked as a receptionist for Stephen S. Wise Temple in Los Angeles. Computer hacking Mitnick gained unauthorized access to a computer network in 1979, at 16, when a friend gave him the telephone number for the Ark, the computer system that Digital Equipment Corporation (DEC) used for developing its RSTS/E operating system software. He broke into DEC's computer network and copied the company's software, a crime for which he was charged and convicted in 1988. He was sentenced to 12 months in prison followed by three years of supervised release.
Near the end of his supervised release, Mitnick hacked into Pacific Bell voicemail computers. After a warrant was issued for his arrest, Mitnick fled, becoming a fugitive for two-and-a-half years. According to the United States Department of Justice, Mitnick gained unauthorized access to dozens of computer networks while he was a fugitive. He used cloned cellular phones to hide his location and, among other things, copied valuable proprietary software from some of the country's largest cellular telephone and computer companies. Mitnick also intercepted and stole computer passwords, altered computer networks, and broke into and read private emails. Arrest, conviction, and incarceration After a well-publicized pursuit, the Federal Bureau of Investigation arrested Mitnick on February 15, 1995, at his apartment in Raleigh, North Carolina, on federal offenses related to a two-and-a-half-year period of computer hacking that included computer and wire fraud. He was found with cloned cellular phones, more than 100
https://en.wikipedia.org/wiki/EDVAC
EDVAC (Electronic Discrete Variable Automatic Computer) was one of the earliest electronic computers. It was built by the University of Pennsylvania's Moore School of Electrical Engineering. Along with ORDVAC, it was a successor to the ENIAC. Unlike ENIAC, it was binary rather than decimal, and was designed to be a stored-program computer. ENIAC inventors John Mauchly and J. Presper Eckert proposed the EDVAC's construction in August 1944. A contract to build the new computer was signed in April 1946 with an initial budget of US$100,000. EDVAC was delivered to the Ballistic Research Laboratory in 1949. The Ballistic Research Laboratory became a part of the US Army Research Laboratory in 1952. Functionally, EDVAC was a binary serial computer with automatic addition, subtraction, multiplication, programmed division and automatic checking, with an ultrasonic serial memory having a capacity of 1,024 44-bit words. EDVAC's average addition time was 864 microseconds and its average multiplication time was 2,900 microseconds. Project and plan ENIAC inventors John Mauchly and J. Presper Eckert proposed EDVAC's construction in August 1944, and design work for EDVAC commenced before ENIAC was fully operational. The design would implement a number of important architectural and logical improvements conceived during the ENIAC's construction and would incorporate a high-speed serial-access memory. Like the ENIAC, the EDVAC was built for the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground by the University of Pennsylvania's Moore School of Electrical Engineering. Eckert and Mauchly and the other ENIAC designers were joined by John von Neumann in a consulting role; von Neumann summarized and discussed logical design developments in the 1945 First Draft of a Report on the EDVAC. A contract to build the new computer was signed in April 1946 with an initial budget of US$100,000. The contract named the device the Electronic Discrete Variable Automatic Calculator.
The final cost of EDVAC, however, was similar to the ENIAC's, at just under $500,000. The Raytheon Company was a subcontractor on EDVAC machines. Technical description The EDVAC was a binary serial computer with automatic addition, subtraction, multiplication, programmed division and automatic checking, with an ultrasonic serial memory capacity of 1,024 44-bit words, thus giving a memory, in modern terms, of 5.6 kilobytes. Physically, the computer comprised the following components:
- a magnetic tape reader-recorder (Wilkes 1956:36 describes this as a wire recorder)
- a control unit with an oscilloscope
- a dispatcher unit to receive instructions from the control and memory and direct them to other units
- a computational unit to perform arithmetic operations on a pair of numbers and send the result to memory after checking on a duplicate unit
- a timer
- a dual memory unit consisting of two sets of 64 mercury acoustic delay lines of eight words capacity on each line
- three temporary delay
https://en.wikipedia.org/wiki/John%20Mauchly
John William Mauchly (August 30, 1907 – January 8, 1980) was an American physicist who, along with J. Presper Eckert, designed ENIAC, the first general-purpose electronic digital computer, as well as EDVAC, BINAC and UNIVAC I, the first commercial computer made in the United States. Together they started the first computer company, the Eckert–Mauchly Computer Corporation (EMCC), and pioneered fundamental computer concepts, including the stored program, subroutines, and programming languages. Their work, as exposed in the widely read First Draft of a Report on the EDVAC (1945) and as taught in the Moore School Lectures (1946), influenced an explosion of computer development in the late 1940s all over the world. Biography John W. Mauchly was born on August 30, 1907, to Sebastian and Rachel (Scheidemantel) Mauchly in Cincinnati, Ohio. He moved with his parents and sister, Helen Elizabeth (Betty), at an early age to Chevy Chase, Maryland, when Sebastian Mauchly obtained a position at the Carnegie Institution of Washington as head of its Section of Terrestrial Electricity. As a youth, Mauchly was interested in science, and in particular with electricity, and as a young teenager was known to fix neighbors' electric systems. Mauchly attended E.V. Brown Elementary School in Chevy Chase and McKinley Technical High School in Washington, DC. At McKinley, Mauchly was extremely active in the debate team, was a member of the national honor society, and became editor-in-chief of the school's newspaper, Tech Life. After graduating from high school in 1925, he earned a scholarship to study engineering at Johns Hopkins University. He subsequently transferred to the physics department, and without completing his undergraduate degree, instead earned a Ph.D. in physics in 1932. From 1932 to 1933, Mauchly served as a research assistant at Johns Hopkins University where he concentrated on calculating energy levels of the formaldehyde spectrum. 
Mauchly's teaching career began in earnest in 1933 at Ursinus College, where he was appointed head of the physics department; he was, in fact, its only staff member. In the summer of 1941, Mauchly took a Defense Training Course for Electronics at the University of Pennsylvania Moore School of Electrical Engineering. There he met the lab instructor, J. Presper Eckert (1919–1995), with whom he would form a long-standing working partnership. Following the course, Mauchly was hired as an instructor of electrical engineering, and in 1943 he was promoted to assistant professor. Following the outbreak of World War II, the United States Army Ordnance Department contracted the Moore School to build an electronic computer which, as proposed by Mauchly and Eckert, would accelerate the recomputation of artillery firing tables. In 1959, Mauchly left Sperry Rand and started Mauchly Associates, Inc. One of Mauchly Associates' notable achievements was the development of the Critical Path Method (CPM) which provided
https://en.wikipedia.org/wiki/BINAC
BINAC (Binary Automatic Computer) was an early electronic computer designed for Northrop Aircraft Company by the Eckert–Mauchly Computer Corporation (EMCC) in 1949. Eckert and Mauchly, though they had started the design of EDVAC at the University of Pennsylvania, chose to leave and start EMCC, the first computer company. BINAC was their first product, the first stored-program computer in the United States; BINAC is also sometimes claimed to be the world's first commercial digital computer even though it was limited in scope and never fully functional after delivery. Architecture The BINAC was a bit-serial binary computer with two independent CPUs, each with its own 512-word acoustic mercury delay-line memory. The CPUs continuously compared results to check for errors caused by hardware failures. It used approximately 700 vacuum tubes. The 512-word acoustic mercury delay-line memories were divided into 16 channels, each holding 32 words of 31 bits, with an additional 11-bit space between words to allow for circuit delays in switching. The clock rate was 4.25 MHz (1 MHz according to one source), which yielded a word access time of about 10 microseconds. The addition time was 800 microseconds, and the multiplication time was 1200 microseconds. Programs or data were entered manually in octal using an eight-key keypad or were loaded from magnetic tape. BINAC was significant for being able to perform high-speed arithmetic on binary numbers, with no provisions to store characters or decimal digits. Early test programs The BINAC ran a test program (consisting of 23 instructions) in March 1949, although it was not fully functional at the time. Early test programs that BINAC ran:
- February 7, 1949 – Ran a five-line program to fill the memory from register A.
- February 10, 1949 – Ran a five-line program to check memory.
- February 16, 1949 – Ran a six-line program to fill memory.
- March 7, 1949 – Ran 217 iterations of a 23-line program to compute squares. It was still running correctly when it stopped.
- April 4, 1949 – Ran a fifty-line program to fill memory and check all instructions. It ran for 2.5 hours before encountering an error. Shortly after that it ran for 31.5 hours without error.
Customer acceptance Northrop accepted delivery of BINAC in September 1949. Northrop employees said that BINAC never worked properly after it was delivered, although it had worked at the Eckert-Mauchly workshop. It was able to run some small programs but did not work well enough to be used as a production machine. Northrop attributed the failures to it not being properly packed for shipping when Northrop picked it up; EMCC said that the problems were due to errors in re-assembly of the machine after shipping. Northrop, citing security considerations, refused to allow EMCC technicians near the machine after shipping, instead hiring a newly graduated engineering student to re-assemble it. EMCC said that the fact that it worked at all after this was test
https://en.wikipedia.org/wiki/Whirlwind%20I
Whirlwind I was a Cold War-era vacuum tube computer developed by the MIT Servomechanisms Laboratory for the U.S. Navy. Operational in 1951, it was among the first digital electronic computers that operated in real-time for output, and the first that was not simply an electronic replacement of older mechanical systems. It was one of the first computers to calculate in bit-parallel (rather than bit-serial), and was the first to use magnetic-core memory. Its development led directly to the Whirlwind II design used as the basis for the United States Air Force SAGE air defense system, and indirectly to almost all business computers and minicomputers in the 1960s, particularly because of the mantra "short word length, speed, people." Background During World War II, the U.S. Navy's Naval Research Lab approached MIT about the possibility of creating a computer to drive a flight simulator for training bomber crews. They envisioned a fairly simple system in which the computer would continually update a simulated instrument panel based on control inputs from the pilots. Unlike older systems such as the Link Trainer, the system they envisioned would have a considerably more realistic aerodynamics model that could be adapted to any type of plane. This was an important consideration at the time, when many new designs were being introduced into service. The Servomechanisms Lab in MIT building 32 conducted a short survey that concluded such a system was possible. The Navy's Office of Naval Research decided to fund development under Project Whirlwind (and its sister projects, Project Typhoon and Project Cyclone, with other institutions), and the lab placed Jay Forrester in charge of the project. They soon built a large analog computer for the task, but found that it was inaccurate and inflexible. Solving these problems in a general way would require a much larger system, perhaps one so large as to be impossible to construct. 
Judy Clapp was an early senior technical member of this team. Perry Crawford, another member of the MIT team, saw a demonstration of ENIAC in 1945. He then suggested that a digital computer would be the best solution. Such a machine would allow the accuracy of simulations to be improved with the addition of more code in the computer program, as opposed to adding parts to the machine. As long as the machine was fast enough, there was no theoretical limit to the complexity of the simulation. Until this point, all computers constructed were dedicated to single tasks, and run in batch mode. A series of inputs were set up in advance and fed into the computer, which would work out the answers and print them. This was not appropriate for the Whirlwind system, which needed to operate continually on an ever-changing series of inputs. Speed became a major issue: whereas with other systems it simply meant waiting longer for the printout, with Whirlwind it meant seriously limiting the amount of complexity the simulation could include. Technical de
https://en.wikipedia.org/wiki/Magnetic-core%20memory
Magnetic-core memory was the predominant form of random-access computer memory for 20 years between about 1955 and 1975. Such memory is often just called core memory, or, informally, core. Core memory uses toroids (rings) of a hard magnetic material (usually a semi-hard ferrite) as transformer cores, where each wire threaded through the core serves as a transformer winding. Two or more wires pass through each core. Magnetic hysteresis allows each of the cores to "remember", or store a state. Each core stores one bit of information. A core can be magnetized in either the clockwise or counter-clockwise direction. The value of the bit stored in a core is zero or one according to the direction of that core's magnetization. Electric current pulses in some of the wires through a core allow the direction of the magnetization in that core to be set in either direction, thus storing a one or a zero. Another wire through each core, the sense wire, is used to detect whether the core changed state. The process of reading the core causes the core to be reset to a zero, thus erasing it. This is called destructive readout. When not being read or written, the cores maintain the last value they had, even if the power is turned off. Therefore, they are a type of non-volatile memory. Using smaller cores and wires, the memory density of core slowly increased, and by the late 1960s a density of about 32 kilobits per cubic foot (about 0.9 kilobits per litre) was typical. However, reaching this density required extremely careful manufacture, which was almost always carried out by hand in spite of repeated major efforts to automate the process. The cost declined over this period from about $1 per bit to about 1 cent per bit. The introduction of the first semiconductor memory chips in the late 1960s, which initially created static random-access memory (SRAM), began to erode the market for core memory. 
The first successful dynamic random-access memory (DRAM), the Intel 1103, followed in 1970. Its availability in quantity at 1 cent per bit marked the beginning of the end for core memory. Improvements in semiconductor manufacturing led to rapid increases in storage capacity and decreases in price per kilobyte, while the costs and specs of core memory changed little. Core memory was driven from the market gradually between 1973 and 1978. Depending on how it was wired, core memory could be exceptionally reliable. Read-only core rope memory, for example, was used on the mission-critical Apollo Guidance Computer essential to NASA's successful Moon landings. Although core memory is obsolete, computer memory is still sometimes called "core" even though it is made of semiconductors, particularly by people who had worked with machines having actual core memory. The files that result from saving the entire contents of memory to disk for inspection, which is nowadays commonly performed automatically when a major error occurs in a computer program, are still called "core dum
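The destructive-readout cycle described above (read a core by driving it toward zero, sense whether it flipped, then rewrite the value) can be modelled in a few lines. This is a toy sketch; the class and method names are illustrative, not taken from any real memory controller:

```python
class CorePlane:
    """Toy model of one magnetic-core plane: each core stores one bit,
    and reading is destructive, so the controller must rewrite the value."""

    def __init__(self, rows, cols):
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, x, y, bit):
        self.cores[x][y] = bit

    def read(self, x, y):
        # Drive the addressed core toward 0; the sense wire picks up a
        # pulse only if the core changes state, i.e. it was holding a 1.
        sensed = self.cores[x][y] == 1
        self.cores[x][y] = 0        # the readout itself erased the bit...
        bit = 1 if sensed else 0
        self.write(x, y, bit)       # ...so the value is written back
        return bit

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
```

Real core memories performed this read-then-rewrite cycle automatically in hardware, which is why a full memory cycle took roughly twice the raw access time.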
https://en.wikipedia.org/wiki/Light%20gun
A light gun is a pointing device for computers and a control device for arcade and video games, typically shaped to resemble a pistol. Early history The first light guns were produced in the 1930s, following the development of light-sensing vacuum tubes. In 1936, the technology was introduced in arcade shooting games, beginning with the Seeburg Ray-O-Lite. These games evolved throughout subsequent decades, culminating in Sega's Periscope, released in 1966 as the company's first successful game, which requires the player to target cardboard ships. Periscope is an early electro-mechanical game, and the first arcade game to cost one quarter per play. Sega's 1969 game Missile features electronic sound and a moving film strip to represent the targets on a projection screen, and its 1972 game Killer Shark features a mounted light gun with targets whose movement and reactions are displayed using back image projection onto a screen. Nintendo released the Beam Gun in 1970 and the Laser Clay Shooting System in 1973, followed in 1974 by the arcade game Wild Gunman, which uses film projection to display the target on the screen. In 1975, Sega released the early co-operative light gun shooters Balloon Gun and Bullet Mark. Sequential targets The first detection method, used by the NES Zapper, involves drawing each target sequentially in white light after the screen blacks out. The computer knows that if the diode detects light as it is drawing a square (or after the screen refreshes), then that is the target at which the gun is pointed. Essentially, the diode tells the computer whether or not the player hit something, and for n objects, the sequence of the drawing of the targets tells the computer which target the player hit after 1 + ceil(log2(n)) refreshes (one refresh to determine whether any target at all was hit and ceil(log2(n)) to do a binary search for the object that was hit).
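The refresh-counting argument can be sketched as a simulation. This is not actual console firmware; `identify_hit` and the diode callback are hypothetical names standing in for one screen refresh and one photodiode reading:

```python
def identify_hit(n_targets, diode_sees_light):
    """Identify which of n_targets the gun is aimed at.

    diode_sees_light(lit) -> bool models one refresh: the set `lit` of
    targets is drawn in white and the photodiode reports whether it saw
    light. Returns None if no target was hit at all.
    """
    # Refresh 1: draw every target to decide whether anything was hit.
    if not diode_sees_light(set(range(n_targets))):
        return None
    # ceil(log2(n)) further refreshes: binary search by drawing half
    # of the remaining candidate targets each time.
    lo, hi = 0, n_targets - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if diode_sees_light(set(range(lo, mid + 1))):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Simulate a gun aimed at target 5 of 8: the "diode" sees light only
# when the aimed-at target is among those drawn during a refresh.
result = identify_hit(8, lambda lit: 5 in lit)
```

For 8 targets this uses 1 + 3 = 4 refreshes, matching the 1 + ceil(log2(n)) count above.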
A side effect of this is that in some games, a player can point the gun at a light bulb or other bright light source, pull the trigger, and cause the system to falsely detect a hit on the first target every time. Some games account for this either by detecting if all targets appear to match or by displaying a black screen and verifying that no targets match. Infrared emitters The Wii Remote uses an infrared video camera in the handheld controller, rather than a simple sensor. Wesley Yin-Poole stated that the Wii Remote was not as accurate as a traditional light gun. GunCon 3 is an infrared light gun used for arcade games. Rectangular positioning Rectangular positioning is similar to image capture, except it disregards any on-screen details and only determines the rectangular outline of the game screen. By determining the size and distortion of the rectangle outline of the screen, it is possible to calculate where exactly the light gun is pointing. This method was introduced by the Sinden Lightgun. Positional gun The positional gun is common in video arcades, as a non-optical
https://en.wikipedia.org/wiki/UWM
UWM may stand for:
Universities:
- University of Wisconsin–Milwaukee
- University of Warmia and Mazury in Olsztyn, Poland
In computing:
- Ultrix Window Manager
- UDE Window Manager
Others:
- Ticker symbol for ProShares Ultra Russell2000 at NYSE Arca
- United World Mission
- United Wholesale Mortgage
https://en.wikipedia.org/wiki/TX-0
The TX-0, for Transistorized Experimental computer zero, but affectionately referred to as tixo (pronounced "tix oh"), was an early fully transistorized computer and contained a then-huge 64K of 18-bit words of magnetic-core memory. Construction of the TX-0 began in 1955 and ended in 1956. It was used continually through the 1960s at MIT. The TX-0 incorporated around 3,600 Philco high-frequency surface-barrier transistors, the first transistor suitable for high-speed computers. The TX-0 and its direct descendant, the original PDP-1, were platforms for pioneering computer research and the development of what would later be called computer "hacker" culture. For MIT, this was the first computer to provide a System console which allowed for direct interaction, as opposed to previous computers, which required the use of punched card as a primary interface for programmers debugging their programs. Members of MIT's Tech Model Railroad Club, "the very first hackers at MIT", reveled in the interactivity afforded by the console, and were recruited by Marvin Minsky to work on this and other systems used by Minsky's AI group. Background Designed at the MIT Lincoln Laboratory largely as an experiment in transistorized design and the construction of very large core memory systems, the TX-0 was essentially a transistorized version of the equally famous Whirlwind, also built at Lincoln Lab. While the Whirlwind filled an entire floor of a large building, TX-0 fit in a single reasonably sized room and yet was somewhat faster. Like the Whirlwind, the TX-0 was equipped with a vector display system, consisting of a 12-inch oscilloscope with a working area of 7 by 7 inches connected to the 18-bit output register of the computer, allowing it to display points and vectors with a resolution up to 512×512 screen locations. The TX-0 was an 18-bit computer with a 16-bit address range. 
The first two bits of the machine word designate the instruction, and the remaining 16 bits specify a memory location or an operand for the special "operate" instruction. The two bits allowed four possible instructions, which included store, add, and conditional branch as a basic set. The fourth instruction, "operate", took additional operands and allowed access to a number of "micro-orders" which could be used separately or together to provide many other useful instructions. An "add" instruction took 10 microseconds. Wesley A. Clark designed the logic and Ken Olsen oversaw the engineering development. Development Initially a vacuum-tube computer named TX-1 was being designed to test the first large magnetic-core memory bank. However, the design was never approved and the TX-1 was never built. Instead, the TX-0 was designed for the same purpose, except using transistors. With the successful completion of the TX-0, work turned immediately to the much larger and far more complex TX-2, completed in 1958. Since core memory was very expensive at the time, several parts of the TX-0 me
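A minimal decoder for the word layout described above. The mnemonics sto/add/trn/opr follow commonly cited TX-0 documentation; the text here names the instructions only as store, add, conditional branch, and "operate", so treat the names as an assumption:

```python
# Opcode names are the commonly cited TX-0 mnemonics (assumed, see above).
OPCODES = {0: "sto", 1: "add", 2: "trn", 3: "opr"}

def decode(word):
    """Split an 18-bit TX-0 word: the top 2 bits select one of four
    instructions, the low 16 bits give a memory address (or, for the
    "operate" instruction, micro-order bits)."""
    if not 0 <= word < 1 << 18:
        raise ValueError("TX-0 words are 18 bits")
    opcode = word >> 16       # top 2 bits
    operand = word & 0xFFFF   # low 16 bits
    return OPCODES[opcode], operand

# e.g. an "add" referencing octal address 1234
op, addr = decode((1 << 16) | 0o1234)
```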
https://en.wikipedia.org/wiki/Graphics%20tablet
A graphics tablet (also known as a digitizer, digital graphic tablet, pen tablet, drawing tablet, external drawing pad or digital art board) is a computer input device that enables a user to hand-draw images, animations and graphics, with a special pen-like stylus, similar to the way a person draws images with a pencil and paper. These tablets may also be used to capture data or handwritten signatures. It can also be used to trace an image from a piece of paper that is taped or otherwise secured to the tablet surface. Capturing data in this way, by tracing or entering the corners of linear polylines or shapes, is called digitizing. The device consists of a rough surface upon which the user may "draw" or trace an image using the attached stylus, a pen-like drawing apparatus. The image is shown on the computer monitor, though some graphic tablets now also incorporate an LCD screen for more realistic or natural experience and usability. Some tablets are intended as a replacement for the computer mouse as the primary pointing and navigation device for desktop computers. History The first electronic handwriting device was the Telautograph, patented by Elisha Gray in 1888. The first graphic tablet resembling contemporary tablets and used for handwriting recognition by a computer was the Stylator in 1957. Better known (and often misstated as the first digitizer tablet) is the RAND Tablet also known as the Grafacon (for Graphic Converter), introduced in 1964. The RAND Tablet employed a grid of wires under the surface of the pad that encoded horizontal and vertical coordinates in a small electrostatic signal. The stylus received the signal by capacitive coupling, which could then be decoded back as coordinate information. The acoustic tablet, or spark tablet, used a stylus that generated clicks with a spark plug. The clicks were then triangulated by a series of microphones to locate the pen in space. 
The system was fairly complex and expensive, and the sensors were susceptible to interference by external noise. Digitizers were popularized in the mid-1970s and early 1980s by the commercial success of the ID (Intelligent Digitizer) and BitPad manufactured by the Summagraphics Corp. The Summagraphics digitizers were sold under the company's name but were also private labeled for HP, Tektronix, Apple, Evans and Sutherland and several other graphic system manufacturers. The ID model was the first graphics tablet to make use of what was at the time, the new Intel microprocessor technology. This embedded processing power allowed the ID models to have twice the accuracy of previous models while still making use of the same foundation technology. Key to this accuracy improvement were two US Patents issued to Stephen Domyan, Robert Davis, and Edward Snyder. The Bit Pad model was the first attempt at a low cost graphics tablet with an initial selling price of $555 when other graphics tablets were selling in the $2,000 to $3,000 price range. This lower cost
https://en.wikipedia.org/wiki/Ivan%20Sutherland
Ivan Edward Sutherland (born May 16, 1938) is an American computer scientist and Internet pioneer, widely regarded as a pioneer of computer graphics. His early work in computer graphics as well as his teaching with David C. Evans in that subject at the University of Utah in the 1970s was pioneering in the field. Sutherland, Evans, and their students from that era developed several foundations of modern computer graphics. He received the Turing Award from the Association for Computing Machinery in 1988 for the invention of the Sketchpad, an early predecessor to the sort of graphical user interface that has become ubiquitous in personal computers. He is a member of the National Academy of Engineering, as well as the National Academy of Sciences among many other major awards. In 2012, he was awarded the Kyoto Prize in Advanced Technology for "pioneering achievements in the development of computer graphics and interactive interfaces". Biography Sutherland's father was from New Zealand; his mother, Anne Sutherland, was from Scotland. His family moved to Wilmette, Illinois, then Scarsdale, New York, for his father's career. Bert Sutherland was his elder brother. Ivan Sutherland earned his bachelor's degree in electrical engineering from the Carnegie Institute of Technology, his master's degree from Caltech, and his Ph.D. from MIT in Electrical Engineering in 1963. Sutherland invented Sketchpad in 1962 while at MIT. Claude Shannon signed on to supervise Sutherland's computer drawing thesis. Among others on his thesis committee were Marvin Minsky and Steven Coons. Sketchpad was an innovative program that influenced alternative forms of interaction with computers. Sketchpad could accept constraints and specified relationships among segments and arcs, including the diameter of arcs. It could draw both horizontal and vertical lines and combine them into figures and shapes. Figures could be copied, moved, rotated, or resized, retaining their basic properties. 
Sketchpad also had the first window-drawing program and clipping algorithm, which allowed zooming. Sketchpad ran on the Lincoln TX-2 computer and influenced Douglas Engelbart's oN-Line System. Sketchpad, in turn, was influenced by the conceptual Memex as envisioned by Vannevar Bush in his influential paper "As We May Think". From 1963 to 1965, after he received his PhD, he served in the U.S. Army, commissioning as an officer through the ROTC program at Carnegie Institute of Technology. As a first lieutenant, Sutherland replaced J. C. R. Licklider as the head of the US Defense Department Advanced Research Project Agency's Information Processing Techniques Office (IPTO), when Licklider took a job at IBM in 1964. From 1965 to 1968, Sutherland was an associate professor of electrical engineering at Harvard University. Work with student Danny Cohen in 1967 led to the development of the Cohen–Sutherland computer graphics line clipping algorithm. In 1968, with his students Bob Sproull, Quintin Foster, Dan
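The Cohen–Sutherland algorithm mentioned above clips a line segment against an axis-aligned rectangle using four-bit region outcodes. A compact sketch of the standard formulation:

```python
# Region outcodes: each bit records which side of the clip window a point is on.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def clip(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0,y0)-(x1,y1) to the rectangle; returns the clipped
    endpoints, or None if the segment lies entirely outside."""
    def outcode(x, y):
        code = INSIDE
        if x < xmin: code |= LEFT
        elif x > xmax: code |= RIGHT
        if y < ymin: code |= BOTTOM
        elif y > ymax: code |= TOP
        return code

    c0, c1 = outcode(x0, y0), outcode(x1, y1)
    while True:
        if not (c0 | c1):      # both endpoints inside: trivially accept
            return x0, y0, x1, y1
        if c0 & c1:            # both beyond the same edge: trivially reject
            return None
        c = c0 or c1           # pick an endpoint that is outside
        # Move that endpoint to where the line crosses the violated edge.
        if c & TOP:
            x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
        elif c & RIGHT:
            x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
        else:                  # LEFT
            x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
        if c == c0:
            x0, y0, c0 = x, y, outcode(x, y)
        else:
            x1, y1, c1 = x, y, outcode(x, y)
```

The trivial-accept and trivial-reject tests are single bitwise operations, which is what made the method attractive on the hardware of the time.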
https://en.wikipedia.org/wiki/Sketchpad
Sketchpad (a.k.a. Robot Draftsman) is a computer program written by Ivan Sutherland in 1963 in the course of his PhD thesis, for which he received the Turing Award in 1988, and the Kyoto Prize in 2012. It pioneered human–computer interaction (HCI), and is considered the ancestor of modern computer-aided design (CAD) programs as well as a major breakthrough in the development of computer graphics in general. For example, the graphical user interface (GUI) was derived from Sketchpad, as was modern object-oriented programming. Using the program, Ivan Sutherland showed that computer graphics could be used for both artistic and technical purposes in addition to demonstrating a novel method of human–computer interaction. History Sutherland was inspired by the Memex from "As We May Think" by Vannevar Bush. Sketchpad inspired Douglas Engelbart to design and develop oN-Line System at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI) during the 1960s. See History of the graphical user interface for a more detailed discussion of GUI development. Software Sketchpad was the earliest program ever to utilize a complete graphical user interface. The clever way the program organized its geometric data pioneered the use of "masters" ("objects") and "occurrences" ("instances") in computing and pointed forward to object-oriented programming. The main idea was to have master drawings which one could instantiate into many duplicates. If the user changed the master drawing, all the instances would change as well. Geometric constraints were another major invention in Sketchpad, letting the user easily constrain geometric properties in the drawing: for instance, the length of a line or the angle between two lines could be fixed. As a trade magazine said, clearly Sutherland "broke new ground in 3D computer modeling and visual simulation, the basis for computer graphics and CAD/CAM". Very few programs can be called precedents for his achievements. Patrick J.
Hanratty is sometimes called the "father of CAD/CAM" and wrote PRONTO, a numerical control language at General Electric in 1957, and wrote CAD software while working for General Motors beginning in 1961. Sutherland wrote in his thesis that Bolt, Beranek and Newman had a "similar program" and T-Square was developed by Peter Samson and one or more fellow MIT students in 1962, both for the PDP-1. The Computer History Museum holds program listings for Sketchpad. Hardware Sketchpad ran on the Lincoln TX-2 (1958) computer at MIT, which had 64k of 36-bit words. The user drew on the screen with the recently invented light pen, which relayed information on its position by computing at what time the light from the scanning Cathode-ray tube screen is detected. To configure the initial position of the light pen, the word "INK" was displayed on the screen, which, upon tapping, initialised the program with a white cross to continue keeping track of the pen's movement relative to its previou
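Sketchpad's master/occurrence idea can be illustrated with a short sketch (the class names are illustrative, not Sketchpad's own): an occurrence stores only a reference to its master plus a placement, so edits to the master are visible in every occurrence.

```python
class Master:
    """A master drawing: a list of primitive points/strokes."""
    def __init__(self, strokes):
        self.strokes = list(strokes)

class Occurrence:
    """An occurrence (instance) of a master at some placement. It holds
    a reference, not a copy, so changing the master changes all occurrences."""
    def __init__(self, master, dx, dy):
        self.master, self.dx, self.dy = master, dx, dy

    def render(self):
        # Re-derive the drawing from the master every time it is shown.
        return [(x + self.dx, y + self.dy) for x, y in self.master.strokes]

# Two occurrences of one master drawing...
rivet = Master([(0, 0), (1, 0), (1, 1)])
a, b = Occurrence(rivet, 0, 0), Occurrence(rivet, 10, 0)
# ...editing the master is reflected in both.
rivet.strokes.append((0, 1))
```

This reference-plus-placement structure is essentially what modern CAD systems call a block or component instance.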
https://en.wikipedia.org/wiki/TX-2
The MIT Lincoln Laboratory TX-2 computer was the successor to the Lincoln TX-0 and was known for its role in advancing both artificial intelligence and human–computer interaction. Wesley A. Clark was the chief architect of the TX-2. Specifications The TX-2 was a transistor-based computer using the then-huge amount of 64K 36-bit words of magnetic-core memory. The TX-2 became operational in 1958. Because of its powerful capabilities, Ivan Sutherland's revolutionary Sketchpad program was developed for and ran on the TX-2. One of its key features was the ability to directly interact with the computer through a graphical display. The compiler was developed by Lawrence Roberts while he was studying at the MIT Lincoln Laboratory. Relationship with DEC Digital Equipment Corporation was a spin-off of the TX-0 and TX-2 projects. The TX-2 Tape System was a block addressable 1/2" tape developed for the TX-2 by Tom Stockebrand which evolved into LINCtape and DECtape. Role in creating the Internet Dr. Leonard Kleinrock developed the mathematical theory of packet networks which he successfully simulated on the TX-2 computer at Lincoln Lab. References External links TX-2 documentation at bitsavers.org Interview with UCLA's Dr. Leonard Kleinrock One-of-a-kind computers Transistorized computers 36-bit computers
https://en.wikipedia.org/wiki/Max-flow%20min-cut%20theorem
In computer science and optimization theory, the max-flow min-cut theorem states that in a flow network, the maximum amount of flow passing from the source to the sink is equal to the total weight of the edges in a minimum cut, i.e., the smallest total weight of the edges which if removed would disconnect the source from the sink. This is a special case of the duality theorem for linear programs and can be used to derive Menger's theorem and the Kőnig–Egerváry theorem. Definitions and statement The theorem equates two quantities: the maximum flow through a network, and the minimum capacity of a cut of the network. To state the theorem, each of these notions must first be defined. Network A network consists of a finite directed graph G = (V, E), where V denotes the finite set of vertices and E is the set of directed edges; a source s ∈ V and a sink t ∈ V; a capacity function, which is a mapping denoted by c_uv or c(u, v) for (u, v) ∈ E, assigning each edge a non-negative capacity. It represents the maximum amount of flow that can pass through an edge. Flows A flow through a network is a mapping denoted by f_uv or f(u, v), subject to the following two constraints: Capacity Constraint: For every edge (u, v) ∈ E, f_uv ≤ c_uv. Conservation of Flows: For each vertex v apart from s and t (i.e. the source and sink, respectively), the following equality holds: Σ_{u : (u, v) ∈ E} f_uv = Σ_{w : (v, w) ∈ E} f_vw. A flow can be visualized as a physical flow of a fluid through the network, following the direction of each edge. The capacity constraint then says that the volume flowing through each edge per unit time is less than or equal to the maximum capacity of the edge, and the conservation constraint says that the amount that flows into each vertex equals the amount flowing out of each vertex, apart from the source and sink vertices. The value of a flow is defined by |f| = Σ_{v : (s, v) ∈ E} f_sv, where as above s is the source and t is the sink of the network. In the fluid analogy, it represents the amount of fluid entering the network at the source. Because of the conservation axiom for flows, this is the same as the amount of flow leaving the network at the sink. 
The maximum flow problem asks for the largest flow on a given network. Maximum Flow Problem. Maximize |f|, that is, to route as much flow as possible from s to t. Cuts The other half of the max-flow min-cut theorem refers to a different aspect of a network: the collection of cuts. An s-t cut C = (S, T) is a partition of V such that s ∈ S and t ∈ T. That is, an s-t cut is a division of the vertices of the network into two parts, with the source in one part and the sink in the other. The cut-set X_C of a cut C is the set of edges that connect the source part of the cut to the sink part: X_C = {(u, v) ∈ E : u ∈ S, v ∈ T}. Thus, if all the edges in the cut-set of C are removed, then no positive flow is possible, because there is no path in the resulting graph from the source to the sink. The capacity of an s-t cut is the sum of the capacities of the edges in its cut-set, c(S, T) = Σ_{(u, v) ∈ X_C} c_uv. There are typically many cuts in a graph, but cuts with smaller weights are often more difficult to find. Minimum s-t Cut Problem. Minimize c(S, T), that is, determine S and T such that the capacity of the s-t cut is minimal.
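The equality the theorem asserts can be checked numerically. The sketch below (function and variable names are our own, not from the article) implements the Edmonds–Karp variant of the Ford–Fulkerson method: it repeatedly augments flow along shortest residual paths, then takes S to be the set of vertices still reachable from s in the final residual graph and sums the capacities of edges leaving S; the max-flow value and the cut capacity coincide.

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp: BFS augmenting paths on the residual graph.
    capacity: dict {(u, v): c}. Returns (max-flow value, cut capacity)."""
    residual, adj = {}, {}
    for (u, v), c in capacity.items():
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)   # reverse residual edge
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break  # no augmenting path left; parent = vertices reachable from s
        # Collect the path, find its bottleneck, and push flow along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck
    S = set(parent)  # source side of the induced minimum cut
    cut = sum(c for (u, v), c in capacity.items() if u in S and v not in S)
    return flow, cut

cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
       ('a', 't'): 2, ('b', 't'): 3}
f, c = max_flow_min_cut(cap, 's', 't')
print(f, c)  # the two values coincide, as the theorem asserts
```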
https://en.wikipedia.org/wiki/HyperTalk
HyperTalk is a discontinued high-level, procedural programming language created in 1987 by Dan Winkler and used in conjunction with Apple Computer's HyperCard hypermedia program by Bill Atkinson. Because the main target audience of HyperTalk was beginning programmers, HyperTalk programmers were usually called "authors" and the process of writing programs was known as "scripting". HyperTalk scripts resembled written English and used a logical structure similar to that of the Pascal programming language. HyperTalk supported the basic control structures of procedural languages: repeat for/while/until, if/then/else, as well as function and message "handler" calls (a function handler was a subroutine and a message handler a procedure). Data types usually did not need to be specified by the programmer; conversion happened transparently in the background between strings and numbers. There were no classes or data structures in the traditional sense; in their place were special string literals, or "lists" of "items" delimited by commas (in later versions the "itemDelimiter" property allowed choosing an arbitrary character). Code execution typically began as a response to an event such as a mouse click on a UI widget. In the late 1980s, Apple considered using HyperCard's HyperTalk scripting language as the standard language across the company and within its classic Mac OS operating system, as well as for interprocess communication between Apple and non-Apple products. The company did not oppose the development of imitations like SuperCard, but it created the HyperTalk Standards Committee to avoid incompatibility between language variants. The case-insensitive language was initially interpreted, but gained just-in-time compilation with HyperCard 2.0. Description Fundamental operations For most basic operations including mathematical computations, HyperTalk favored natural-language ordering of predicates over the ordering used in mathematical notation. 
For example, in HyperTalk's put assignment command, the variable was placed at the end of the statement: put 5 * 4 into theResult whereas in the more traditional BASIC programming language (and most others), the same would be accomplished by writing: theResult = 5 * 4 The HyperTalk code has the side effect of creating the variable theResult on the fly. Scripts could assign any type or value to a variable using the put command, making HyperTalk very weakly typed. Conversions between variable types were invisible and automatic: the string "3" could be multiplied by the number 5 to produce the number 15, or the number 5 concatenated onto the string "3" to produce the string "35". HyperTalk would not complain unless the types could not be automatically converted. Flow control and logic were generally similar to other common languages, using an if ... then ... else ... end if structure for conditionals and supporting loops based on a flexible repeat ... end repeat syntax. Comments were prefaced with two minus signs (--).
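A short handler in the style just described can tie these features together (an illustrative sketch only, not drawn from any particular stack; it uses the put, repeat, if, and comment syntax mentioned above):

```hypertalk
on mouseUp
  -- comments are prefaced with two minus signs
  put 0 into total
  repeat with i = 1 to 10
    add i to total
  end repeat
  if total > 50 then
    answer "The sum is " & total
  else
    answer "Small sum"
  end if
end mouseUp
```

The handler runs in response to a mouse click on a button, illustrating the event-driven execution model described earlier.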
https://en.wikipedia.org/wiki/Bobcat%20%28disambiguation%29
Bobcat is a species of wild cat in North America. Bobcat may also refer to: Computing HP Bobcat, a minicomputer Bobcat (microarchitecture), AMD computer processor architecture Culture Bobcat (rapper), American rapper, "Do it" (1989) Bobcat (musician), American pop musician, "We Live for the Music" album (2011) Bobcat Goldthwait (b. 1962), American actor, comedian, screenwriter, and film and television director "Bobcat (Space Ghost Coast to Coast)", an episode of Space Ghost Coast to Coast Bubsy the Bobcat, the main protagonist in the Bubsy video game series Military devices Beretta 21 Bobcat, a handgun Bobcat (armored personnel carrier) Transportation Mercury Bobcat, American subcompact car Skid-steer loader, a compact construction/utility vehicle often nicknamed "bobcat" Cessna AT-17 Bobcat, aircraft Bay of Bengal Cooperative Air Traffic Flow Management System Organizations Bobcat Company, a manufacturer of farm and construction equipment, mainly skid-steer loaders Sports Cynthia Lynch (b. 1971) or Bobcat, American professional wrestler Bob McCown (b. 1952), or The Bobcat, U.S.-born sports talk show host from Toronto Bournemouth Bobcats, English American football team in Bournemouth, Dorset Charlotte Bobcats, former name of the Charlotte Hornets, professional basketball team in Charlotte, North Carolina Ohio Bobcats, several varsity teams at Ohio University Montana State Bobcats, varsity sports teams at Montana State University, Bozeman, Montana Texas State Bobcats, varsity sports teams at Texas State University, San Marcos, Texas Bobcat, mascot of the NYU Violets, the varsity sports team at New York University Other Bobcat, the joining rank in Cub Scouting (Boy Scouts of America) Bobtail, a tractor unit without a trailer, sometimes called a bobcat
https://en.wikipedia.org/wiki/High-Level%20Data%20Link%20Control
High-Level Data Link Control (HDLC) is a bit-oriented code-transparent synchronous data link layer protocol developed by the International Organization for Standardization (ISO). The standard for HDLC is ISO/IEC 13239:2002. HDLC provides both connection-oriented and connectionless service. HDLC can be used for point-to-multipoint connections via the original master-slave modes Normal Response Mode (NRM) and Asynchronous Response Mode (ARM), but they are now rarely used; it is now used almost exclusively to connect one device to another, using Asynchronous Balanced Mode (ABM). History HDLC is based on IBM's SDLC protocol, which is the layer 2 protocol for IBM's Systems Network Architecture (SNA). It was extended and standardized by the ITU as LAP (Link Access Procedure), while ANSI named their essentially identical version ADCCP. The HDLC specification does not specify the full semantics of the frame fields. This allows other fully compliant standards to be derived from it, and derivatives have since appeared in innumerable standards. It was adopted into the X.25 protocol stack as LAPB, into the V.42 protocol as LAPM, into the Frame Relay protocol stack as LAPF and into the ISDN protocol stack as LAPD. The original ISO standards for HDLC are the following: ISO 3309-1979 – Frame Structure ISO 4335-1979 – Elements of Procedure ISO 6159-1980 – Unbalanced Classes of Procedure ISO 6256-1981 – Balanced Classes of Procedure ISO/IEC 13239:2002, the current standard, replaced all of these specifications. HDLC was the inspiration for the IEEE 802.2 LLC protocol, and it is the basis for the framing mechanism used with the PPP on synchronous lines, as used by many servers to connect to a WAN, most commonly the Internet. A similar version is used as the control channel for E-carrier (E1) and SONET multichannel telephone lines. Cisco HDLC uses low-level HDLC framing techniques but adds a protocol field to the standard HDLC header. 
Framing HDLC frames can be transmitted over synchronous or asynchronous serial communication links. Those links have no mechanism to mark the beginning or end of a frame, so the beginning and end of each frame has to be identified. This is done by using a unique sequence of bits as a frame delimiter, or flag, and encoding the data to ensure that the flag sequence is never seen inside a frame. Each frame begins and ends with a frame delimiter. A frame delimiter at the end of a frame may also mark the start of the next frame. On both synchronous and asynchronous links, the flag sequence is binary "01111110", or hexadecimal 0x7E, but the details are quite different. Synchronous framing Because a flag sequence consists of six consecutive 1-bits, other data is coded to ensure that it never contains more than five 1-bits in a row. This is done by bit stuffing: any time that five consecutive 1-bits appear in the transmitted data, the data is paused and a 0-bit is transmitted. The receiving device knows that this is being done, and after seeing five consecutive 1-bits it removes the following 0-bit, recovering the original data.
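The stuffing rule just described is mechanical enough to sketch directly. The toy model below operates on strings of '0'/'1' characters (frame delimiting, abort sequences, and error checking are omitted; the function names are ours, not from the standard):

```python
FLAG = "01111110"  # the HDLC flag sequence, 0x7E

def bit_stuff(data: str) -> str:
    """Insert a 0 after every run of five consecutive 1-bits,
    so the payload can never contain the six-1s flag pattern."""
    out, run = [], 0
    for b in data:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(data: str) -> str:
    """Drop the 0 the transmitter inserted after each run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(data):
        b = data[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        i += 1
        if run == 5:
            i += 1  # skip the stuffed 0 (a 1 here would signal a flag or abort)
            run = 0
    return "".join(out)

payload = "0111111101111100"
stuffed = bit_stuff(payload)
assert FLAG not in stuffed              # the flag can no longer appear in data
assert bit_unstuff(stuffed) == payload  # receiver recovers the original bits
```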
https://en.wikipedia.org/wiki/SPX
SPX can refer to: S&P 500, a stock market index Sequenced Packet Exchange, a networking protocol IATA code of Sphinx International Airport, an airport in Giza, Egypt Small Press Expo, an alternative comics convention SpaceX (SpX), a rocket manufacturer Sports Performance eXtreme, a sports shoe and clothing brand SPX Corporation, a Fortune 500 electronics company St. Pius X, the 257th Pope of the Roman Catholic Church. St Pius X College, Sydney, Australia St. Pius X Seminary, Roxas City, Philippines Superphénix, a nuclear power plant Skulduggery Pleasant: Resurrection, the 10th book in the Skulduggery Pleasant series A file extension used for Speex-encoded audio files The National Rail code for St Pancras International railway station in Greater London The former IATA and FAA code for Houston Gulf Airport
https://en.wikipedia.org/wiki/Systems%20Network%20Architecture
Systems Network Architecture (SNA) is IBM's proprietary networking architecture, created in 1974. It is a complete protocol stack for interconnecting computers and their resources. SNA describes formats and protocols but, in itself, is not a piece of software. The implementation of SNA takes the form of various communications packages, most notably Virtual Telecommunications Access Method (VTAM), the mainframe software package for SNA communications. History SNA was made public as part of IBM's "Advanced Function for Communications" announcement in September 1974, which included the implementation of the SNA/SDLC (Synchronous Data Link Control) protocols on new communications products: IBM 3767 communication terminal (printer) IBM 3770 data communication system They were supported by IBM 3704/3705 communication controllers and their Network Control Program (NCP), and by System/370 mainframes and their VTAM and other software such as CICS and IMS. This announcement was followed by another announcement in July 1975, which introduced the IBM 3760 data entry station, the IBM 3790 communication system, and the new models of the IBM 3270 display system. SNA was designed in the era when the computer industry had not fully adopted the concept of layered communication. Applications, databases, and communication functions were mingled into the same protocol or product, which made it difficult to maintain and manage. SNA was mainly designed by the IBM Systems Development Division laboratory in Research Triangle Park, North Carolina, USA, helped by other laboratories that implemented SNA/SDLC. IBM later made the details public in its System Reference Library manuals and IBM Systems Journal. It is still used extensively in banks and other financial transaction networks, as well as in many government agencies. In 1999 there were an estimated 3,500 companies "with 11,000 SNA mainframes." 
One of the primary pieces of hardware, the 3745/3746 communications controller, has been withdrawn from the market by IBM. IBM continues to provide hardware maintenance service and microcode features to support users. A robust market of smaller companies continues to provide the 3745/3746, features, parts, and service. VTAM is also supported by IBM, as is the NCP required by the 3745/3746 controllers. In 2008 an IBM publication said: Objectives of SNA IBM in the mid-1970s saw itself mainly as a hardware vendor and hence all its innovations in that period aimed to increase hardware sales. SNA's objective was to reduce the costs of operating large numbers of terminals and thus induce customers to develop or expand interactive terminal-based systems as opposed to batch systems. An expansion of interactive terminal-based systems would increase sales of terminals and more importantly of mainframe computers and peripherals - partly because of the simple increase in the volume of work done by the systems and partly because interactive processing requires more computing power per transaction than batch processing.
https://en.wikipedia.org/wiki/Physical%20layer
In the seven-layer OSI model of computer networking, the physical layer or layer 1 is the first and lowest layer: the layer most closely associated with the physical connection between devices. The physical layer provides an electrical, mechanical, and procedural interface to the transmission medium. The shapes and properties of the electrical connectors, the frequencies to broadcast on, the line code to use and similar low-level parameters, are specified by the physical layer. At the electrical level, the physical layer is commonly implemented by a dedicated PHY chip or, in electronic design automation (EDA), by a design block. In mobile computing, the MIPI Alliance *-PHY family of interconnect protocols are widely used. Historically, the OSI model is closely associated with internetworking, such as the Internet protocol suite and Ethernet, which were developed in the same era, along similar lines, though with somewhat different abstractions. Beyond internetworking, the OSI abstraction can be brought to bear on all forms of device interconnection in data communications and computational electronics. Role The physical layer defines the means of transmitting a stream of raw bits over a physical data link connecting network nodes. The bitstream may be grouped into code words or symbols and converted to a physical signal that is transmitted over a transmission medium. The physical layer consists of the electronic circuit transmission technologies of a network. It is a fundamental layer underlying the higher level functions in a network, and can be implemented through a great number of different hardware technologies with widely varying characteristics. Within the semantics of the OSI model, the physical layer translates logical communications requests from the data link layer into hardware-specific operations to cause transmission or reception of electronic (or other) signals. The physical layer supports higher layers responsible for generation of logical data packets. 
Physical signaling sublayer In a network using Open Systems Interconnection (OSI) architecture, the physical signaling sublayer is the portion of the physical layer that interfaces with the data link layer's medium access control (MAC) sublayer, performs symbol encoding, transmission, reception and decoding, and performs galvanic isolation. Relation to the Internet protocol suite The Internet protocol suite, as defined in RFC 1122 and RFC 1123, is a high-level networking description used for the Internet and similar networks. It does not define a layer that deals exclusively with hardware-level specifications and interfaces, as this model does not concern itself directly with physical interfaces. Services The major functions and services performed by the physical layer are: The physical layer performs bit-by-bit or symbol-by-symbol data delivery over a physical transmission medium. It provides a standardized interface to the transmission medium, including a mechanical specification of connectors and cables.
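Symbol encoding, one of the signaling-sublayer responsibilities named above, can be illustrated with a minimal sketch of Manchester line coding (the convention used by classic 10 Mbit/s Ethernet; the function names are illustrative, not from any standard):

```python
def manchester_encode(bits: str) -> str:
    """Encode a bit string as Manchester symbol halves (IEEE 802.3 style):
    a 0-bit is sent high-then-low ('10'), a 1-bit low-then-high ('01'),
    so every bit cell has a mid-bit transition the receiver can clock on."""
    return "".join("01" if b == "1" else "10" for b in bits)

def manchester_decode(symbols: str) -> str:
    """Invert the mapping; each data bit occupies two symbol halves."""
    pairs = (symbols[i:i + 2] for i in range(0, len(symbols), 2))
    return "".join("1" if p == "01" else "0" for p in pairs)

encoded = manchester_encode("1011")
print(encoded)                      # 01100101
print(manchester_decode(encoded))   # 1011
```

The guaranteed transition in every bit cell is what lets the receiver recover a clock from the signal itself, at the cost of doubling the signaling rate.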
https://en.wikipedia.org/wiki/London%20Waterloo%20station
Waterloo station, also known as London Waterloo, is a central London terminus on the National Rail network in the United Kingdom, in the Waterloo area of the London Borough of Lambeth. It is connected to a London Underground station of the same name and is adjacent to Waterloo East station on the South Eastern Main Line. The station is the terminus of the South West Main Line to via Southampton, the West of England main line to Exeter via , the Portsmouth Direct line to which connects with ferry services to the Isle of Wight, and several commuter services around west and south-west London, Surrey, Hampshire and Berkshire. The station was opened in 1848 by the London and South Western Railway, and it replaced the earlier as it was closer to the West End. It was never designed to be a terminus, as the original intention was to continue the line towards the City of London, and consequently the station developed in a haphazard fashion, leading to difficulty finding the correct platform. The station was rebuilt in the early 20th century, opening in 1922, and included the Victory Arch over the main entrance, which commemorated World War I. Waterloo was the last London terminus to provide steam-powered services, which ended in 1967. The station was the London terminus for Eurostar international trains from 1994 until 2007, when they were transferred to St. Pancras. Waterloo is the busiest railway station in the UK, handling 41 million passengers in the year to March 2022. It is also the UK's largest station in terms of floor space and has the greatest number of platforms. Location The station's formal name is London Waterloo, and appears as such on all official documentation. It has the station code WAT. It is in the London Borough of Lambeth on the south bank of the River Thames, close to Waterloo Bridge and northeast of Westminster Bridge. The main entrance is to the south of the junction of Waterloo Road and York Road. 
It is named after the eponymous bridge, which itself was named after the Battle of Waterloo, a battle that occurred exactly two years prior to the opening ceremony for the bridge. History Background Waterloo was built by the London and South Western Railway (L&SWR). It was not designed to be a terminus, but simply a stop on an extension towards the City. It replaced the earlier , which had opened on 21 May 1838 and connected London to Southampton since 11 May 1840. By the mid-1840s, commuter services to Wandsworth, , Kingston upon Thames, and had become an important part of L&SWR traffic, so the company began to look for a terminus closer to Central London and the West End. An Act of Parliament was granted in 1845 to extend the line towards a site on York Road, close to Waterloo Bridge. The extension past Nine Elms involved demolishing 700 houses, and most of it was carried on a brick viaduct to minimise disruption. The longest bridge was long and took the line over Westminster Bridge Road. The approach to the new stati
https://en.wikipedia.org/wiki/Convex%20Computer
Convex Computer Corporation was a company that developed, manufactured and marketed vector minisupercomputers and supercomputers for small-to-medium-sized businesses. Their later Exemplar series of parallel computing machines were based on the Hewlett-Packard (HP) PA-RISC microprocessors, and in 1995, HP bought the company. Exemplar machines were offered for sale by HP for some time, and Exemplar technology was used in HP's V-Class machines. History Convex was formed in 1982 by Bob Paluck and Steve Wallach in Richardson, Texas. It was originally named Parsec and early prototype and production boards bear that name. They planned on producing a machine very similar in architecture to the Cray Research vector processor machines, with a somewhat lower performance, but with a much better price–performance ratio. In order to lower costs, the Convex designs were not as technologically aggressive as Cray's, and were based on more mainstream chip technology, attempting to make up for the loss in performance in other ways. Their first machine was the C1, released in 1985. The C1 was very similar to the Cray-1 in general design, but its CPU and main memory were implemented with slower but less expensive CMOS technology. They offset this by increasing the capabilities of the vector units, including doubling the vector registers' length to 128 64-bit elements each. It also used virtual memory as opposed to the static memory system of the Cray machines, which improved programming. It was generally rated at 20 MFLOPS peak for double precision (64-bit), and 40 MFLOPS peak for single precision (32-bit), about one fifth the normal speed of the Cray-1. They also invested heavily in advanced automatic vectorizing compilers in order to gain performance when existing programs were ported to their systems. The machines ran a BSD version of Unix known initially as Convex Unix then later as ConvexOS due to trademark and licensing issues. 
ConvexOS had DEC VMS compatibility features as well as Cray Fortran features. Their Fortran compiler went on to be licensed to other computer makers such as Ardent Computer and Stellar (and the merged Stardent). The C2 was a crossbar-interconnected multiprocessor version of the C1, with up to four CPUs, released in 1988. It used newer 20,000-gate CMOS and 10,000-gate emitter-coupled logic (ECL) gate arrays for a boost in clock speed from 10 MHz to 25 MHz, and was rated at 50 MFLOPS peak for double precision per CPU (100 MFLOPS peak for single precision). It was Convex's most successful product. The C2 was followed by the C3 in 1991, which was essentially similar to the C2 but with a faster clock and support for up to eight CPUs implemented with low-density GaAs FPGAs. Various configurations of the C3 were offered, with 50 to 240 MFLOPS per CPU. However, the C3 and the Convex business model were overtaken by changes in the computer industry. The arrival of RISC microprocessors meant that it was no longer possible to develop cost-effective high-performance vector processors.
https://en.wikipedia.org/wiki/RISC%20%28disambiguation%29
RISC is an abbreviation for reduced instruction set computer. RISC or Risc may also refer to: Computing Berkeley RISC Classic RISC pipeline, early RISC architecture CompactRISC, National Semiconductor family of RISC architectures MIPS RISC/os, a discontinued UNIX operating system developed by MIPS Computer Systems OpenRISC, a project to develop a series of open-source hardware PA-RISC, an instruction set architecture developed by Hewlett-Packard Research Institute for Symbolic Computation in Linz, Austria RISC iX, discontinued UNIX operating system RISC OS, operating system created by Acorn Computers RISC OS character set, used in the Acorn Archimedes History of RISC OS, Acorn Computers OS history RISC OS Open Ltd., (ROOL) a company engaged in computer software and IT consulting Risc PC, Acorn computer launched in 1994 RISC-V, an open-source hardware instruction set architecture (ISA) based on established reduced instruction set computer (RISC) principles. Other RNA-induced silencing complex, a multiprotein complex involved in gene silencing Rockwell Integrated Sciences Center, a building on the campus of Lafayette College. See also Risk (disambiguation)
https://en.wikipedia.org/wiki/Stobaeus
Joannes Stobaeus (fl. 5th century AD), from Stobi in Macedonia, was the compiler of a valuable series of extracts from Greek authors. The work was originally divided into two volumes containing two books each. The two volumes became separated in the manuscript tradition, and the first volume became known as the Extracts (also Eclogues) and the second volume became known as the Anthology (also Florilegium). Modern editions now refer to both volumes as the Anthology. The Anthology contains extracts from hundreds of writers, especially poets, historians, orators, philosophers and physicians. The subjects range from natural philosophy, dialectics, and ethics, to politics, economics, and maxims of practical wisdom. The work preserves fragments of many authors and works which otherwise might be unknown today. Life Nothing of his life is known. The age in which he lived cannot be fixed with accuracy. He quotes no writer later than the early 5th century, and he probably lived around this time. His surname apparently indicates that he was a native of Stobi in Macedonia Salutaris, while his given name, John, would probably indicate that he was a Christian, or at least the son of Christian parents. However, from his silence in regard to Christian authors, it has also been inferred that he was not a Christian. Work Stobaeus' anthology is a collection of extracts from earlier Greek writers, which he collected and arranged, in the order of subjects, as a repertory of valuable and instructive sayings. The extracts were intended by Stobaeus for his son Septimius, and were preceded by a letter briefly explaining the purpose of the work and giving a summary of the contents. The full title, according to Photius, was Four Books of Extracts, Sayings and Precepts (Ἐκλογῶν, ἀποφθεγμάτων, ὑποθηκῶν βιβλία τέσσαρα [Eklogon, apophthegmaton, hypothekon biblia tessara]). 
He quoted more than five hundred writers, generally beginning with the poets, and then proceeding to the historians, orators, philosophers, and physicians. The works of the greater part of these have perished. It is to him that we owe many of our most important fragments of the dramatists. He has quoted over 500 passages from Euripides, 150 from Sophocles, and over 200 from Menander. It is evident from this summary, preserved in Photius's Bibliotheca (9th century), that the work was originally divided into four books and two volumes, and that surviving manuscripts of the third book consist of two books which have been merged. At some time subsequent to Photius the two volumes were separated, and the two volumes became known to Latin Europe as the Eclogae and the Florilegium respectively. Modern editions have dropped these two titles and have reverted to calling the entire work the Anthology (). In most of the manuscripts there is a division into three books, forming two distinct works; the first and second books forming one work under the title Physical and Moral Extracts (also Eclogues; Greek: ), th
https://en.wikipedia.org/wiki/Digital%20elevation%20model
A digital elevation model (DEM) or digital surface model (DSM) is a 3D computer graphics representation of elevation data to represent terrain or overlaying objects, commonly of a planet, moon, or asteroid. A "global DEM" refers to a discrete global grid. DEMs are used often in geographic information systems (GIS), and are the most common basis for digitally produced relief maps. A digital terrain model (DTM) represents specifically the ground surface while DEM and DSM may represent tree top canopy or building roofs. While a DSM may be useful for landscape modeling, city modeling and visualization applications, a DTM is often required for flood or drainage modeling, land-use studies, geological applications, and other applications, and in planetary science. Terminology There is no universal usage of the terms digital elevation model (DEM), digital terrain model (DTM) and digital surface model (DSM) in scientific literature. In most cases the term digital surface model represents the earth's surface and includes all objects on it. In contrast to a DSM, the digital terrain model (DTM) represents the bare ground surface without any objects like plants and buildings (see the figure on the right). DEM is often used as a generic term for DSMs and DTMs, only representing height information without any further definition about the surface. Other definitions equalise the terms DEM and DTM, equalise the terms DEM and DSM, define the DEM as a subset of the DTM, which also represents other morphological elements, or define a DEM as a rectangular grid and a DTM as a three-dimensional model (TIN). Most of the data providers (USGS, ERSDAC, CGIAR, Spot Image) use the term DEM as a generic term for DSMs and DTMs. Some datasets such as SRTM or the ASTER GDEM are originally DSMs (although in forested areas, SRTM reaches into the tree canopy giving readings somewhere between a DSM and a DTM). 
DTMs are created from high resolution DSM datasets using complex algorithms to filter out buildings and other objects, a process known as "bare-earth extraction". In the following, the term DEM is used as a generic term for DSMs and DTMs. Types A DEM can be represented as a raster (a grid of squares, also known as a heightmap when representing elevation) or as a vector-based triangular irregular network (TIN). The TIN DEM dataset is also referred to as a primary (measured) DEM, whereas the Raster DEM is referred to as a secondary (computed) DEM. The DEM could be acquired through techniques such as photogrammetry, lidar, IfSAR or InSAR, land surveying, etc. (Li et al. 2005). DEMs are commonly built using data collected using remote sensing techniques, but they may also be built from land surveying. Rendering The digital elevation model itself consists of a matrix of numbers, but the data from a DEM is often rendered in visual form to make it understandable to humans. This visualization may be in the form of a contoured topographic map, or could use shading and false color.
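Because a raster DEM is just a matrix of heights, terrain derivatives such as slope fall out of simple finite differences on the grid. A minimal sketch (the grid values, cell size, and function name are made up for illustration):

```python
import math

def slope_degrees(dem, cellsize):
    """Slope at interior cells of a raster DEM via central differences.
    dem: list of rows of heights; cellsize: grid spacing in the same units."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cellsize)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cellsize)
            # slope angle from the magnitude of the height gradient
            out[r][c] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

dem = [[10, 10, 10],
       [10, 10, 10],
       [20, 20, 20]]   # heights in metres, rising toward the bottom row
s = slope_degrees(dem, cellsize=10.0)
print(round(s[1][1], 1))  # gradient of 0.5 -> about 26.6 degrees
```

Production GIS tools use refinements of the same idea (e.g. Horn's 3x3 weighted differences), but the finite-difference core is identical.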
https://en.wikipedia.org/wiki/Doppler%20radar
A Doppler radar is a specialized radar that uses the Doppler effect to produce velocity data about objects at a distance. It does this by bouncing a microwave signal off a desired target and analyzing how the object's motion has altered the frequency of the returned signal. This variation gives direct and highly accurate measurements of the radial component of a target's velocity relative to the radar. The term applies to radar systems in many domains like aviation, police radar detectors, navigation, meteorology, etc. Concept Doppler effect The Doppler effect (or Doppler shift), named after Austrian physicist Christian Doppler who proposed it in 1842, is the difference between the observed frequency and the emitted frequency of a wave for an observer moving relative to the source of the waves. It is commonly heard when a vehicle sounding a siren approaches, passes and recedes from an observer. The received frequency is higher (compared to the emitted frequency) during the approach, it is identical at the instant of passing by, and it is lower during the recession. This variation of frequency also depends on the direction the wave source is moving with respect to the observer; it is maximum when the source is moving directly toward or away from the observer and diminishes with increasing angle between the direction of motion and the direction of the waves, until when the source is moving at right angles to the observer, there is no shift. Imagine a baseball pitcher throwing one ball every second to a catcher (a frequency of 1 ball per second). Assuming the balls travel at a constant velocity and the pitcher is stationary, the catcher catches one ball every second. However, if the pitcher is jogging towards the catcher, the catcher catches balls more frequently because the balls are less spaced out (the frequency increases). The inverse is true if the pitcher is moving away from the catcher. 
The catcher catches balls less frequently because of the pitcher's backward motion (the frequency decreases). If the pitcher moves at an angle, but at the same speed, the frequency variation at which the receiver catches balls is less, as the distance between the two changes more slowly. From the point of view of the pitcher, the frequency remains constant (whether he's throwing balls or transmitting microwaves). Since with electromagnetic radiation like microwaves or with sound, frequency is inversely proportional to wavelength, the wavelength of the waves is also affected. Thus, the relative difference in velocity between a source and an observer is what gives rise to the Doppler effect. Frequency variation The formula for radar Doppler shift is the same as that for reflection of light by a moving mirror. There is no need to invoke Albert Einstein's theory of special relativity, because all observations are made in the same frame of reference. The result derived with c as the speed of light and v as the target radial velocity gives the shifted f
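For targets moving much slower than light, the two-way radar Doppler shift reduces to a simple proportionality, roughly 2 x v x f0 / c. A minimal Python sketch of this non-relativistic approximation (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(f_transmit_hz, radial_velocity_ms):
    """Two-way radar Doppler shift for speeds much less than c.

    Positive radial velocity means the target is approaching the radar,
    giving a positive (upward) frequency shift. Illustrative helper.
    """
    return 2.0 * radial_velocity_ms * f_transmit_hz / C

# A 3 GHz radar observing a target closing at 30 m/s sees a shift of
# roughly 600 Hz: tiny compared with the carrier, which is why Doppler
# radars measure the shift itself rather than the absolute frequency.
shift = doppler_shift_hz(3.0e9, 30.0)
```

The factor of two arises because the motion shifts the wave once on the way to the target and again on reflection back to the radar.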
https://en.wikipedia.org/wiki/Proxy%20server
In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It improves privacy, security, and performance in the process. Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server. Types A proxy server may reside on the user's local computer, or at any point between the user's computer and destination servers on the Internet. A proxy server that passes unmodified requests and responses is usually called a gateway or sometimes a tunneling proxy. A forward proxy is an Internet-facing proxy used to retrieve data from a wide range of sources (in most cases, anywhere on the Internet). A reverse proxy is usually an internal-facing proxy used as a front-end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load-balancing, authentication, decryption and caching. Open proxies An open proxy is a forwarding proxy server that is accessible by any Internet user. In 2008, network security expert Gordon Lyon estimated that "hundreds of thousands" of open proxies are operated on the Internet. Anonymous proxy: This server reveals its identity as a proxy server but does not disclose the originating IP address of the client. 
Although this type of server can be discovered easily, it can be beneficial for some users as it hides the originating IP address. Transparent proxy: This server not only identifies itself as a proxy server, but with the support of HTTP header fields such as X-Forwarded-For, the originating IP address can be retrieved as well. The main benefit of using this type of server is its ability to cache a website for faster retrieval. Reverse proxies A reverse proxy (or surrogate) is a proxy server that appears to clients to be an ordinary server. Reverse proxies forward requests to one or more ordinary servers that handle the request. The response from the original server is returned as if it came directly from the proxy server, leaving the client with no knowledge of the original server. Reverse proxies are installed in the vicinity of one or more web servers. All traffic coming from the Internet and with a destination of one of the neighborhood's web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy" since the reverse proxy sits closer to the web se
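The X-Forwarded-For mechanism mentioned above can be illustrated with a small parser. This is a hedged sketch (the helper name is hypothetical): the leftmost entry is conventionally the original client, and each proxy appends the address of the peer it received the request from. Real deployments should only trust the header when it was set by a proxy they control, since clients can forge it.

```python
def originating_ip(headers):
    """Best-effort client IP from an X-Forwarded-For header
    (hypothetical helper). Header lookup is case-insensitive.

    Caution: clients can forge this header, so it is only trustworthy
    when the first proxy in the chain is under your control.
    """
    for name, value in headers.items():
        if name.lower() == "x-forwarded-for":
            first = value.split(",")[0].strip()  # leftmost = original client
            return first or None
    return None
```

For example, a request relayed through one internal proxy might carry `X-Forwarded-For: 203.0.113.7, 10.0.0.1`, from which the original client address 203.0.113.7 can be recovered.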
https://en.wikipedia.org/wiki/Pixar
Pixar Animation Studios is an American computer animation studio based in Emeryville, California, known for its critically and commercially successful computer-animated feature films. Since 2006, Pixar has been a subsidiary of Walt Disney Studios, a division of Disney Entertainment, part of the Walt Disney Company. Pixar started in 1979 as part of the Lucasfilm computer division. It was known as the Graphics Group before its spin-off as a corporation in 1986, with funding from Apple co-founder Steve Jobs, who became its majority shareholder. Disney purchased Pixar in January 2006 at a valuation of approximately $7.4 billion by converting each share of Pixar stock to 2.3 shares of Disney stock. Pixar is best known for its feature films, technologically powered by RenderMan, the company's own implementation of the industry-standard RenderMan Interface Specification image-rendering API. The studio's mascot is Luxo Jr., a desk lamp from the studio's 1986 short film of the same name. Pixar has produced 27 feature films as of October 2023, starting with Toy Story (1995), which is the first fully computer-animated feature film; its most recent film was Elemental (2023). The studio has also produced many short films. Its feature films have earned over $15 billion at the worldwide box office, with an average gross of $546.9 million per film. Toy Story 3 (2010), Finding Dory (2016), Incredibles 2 (2018), and Toy Story 4 (2019) all grossed over $1 billion and are among the 50 highest-grossing films of all time. Moreover, 15 of Pixar's films are in the 50 highest-grossing animated films of all time. Pixar has earned 23 Academy Awards, 10 Golden Globe Awards, and 11 Grammy Awards, along with numerous other awards and acknowledgments. 
Since the category's introduction in 2001, eleven Pixar films have won the Academy Award for Best Animated Feature: Finding Nemo (2003), The Incredibles (2004), Ratatouille (2007), WALL-E (2008), Up (2009), Toy Story 3 (2010), Brave (2012), Inside Out (2015), Coco (2017), Toy Story 4 (2019), and Soul (2020). Toy Story 3 and Up were also nominated for the Academy Award for Best Picture. On February 10, 2009, Pixar executives John Lasseter, Brad Bird, Pete Docter, Andrew Stanton, and Lee Unkrich were presented with the Golden Lion award for Lifetime Achievement by the Venice Film Festival. The physical award was ceremoniously handed to Lucasfilm's founder, George Lucas. History Early history Pixar got its start in 1974, when New York Institute of Technology's (NYIT) founder, Alexander Schure, who was also the owner of a traditional animation studio, established the Computer Graphics Lab (CGL) and recruited computer scientists who shared his ambition of creating the world's first computer-animated film. Edwin Catmull and Malcolm Blanchard were the first to be hired, and were joined some months later by Alvy Ray Smith and David DiFrancesco; these were the four original members of the Computer Graphics Lab, located in a con
https://en.wikipedia.org/wiki/Maple%20%28software%29
Maple is a symbolic and numeric computing environment as well as a multi-paradigm programming language. It covers several areas of technical computing, such as symbolic mathematics, numerical analysis, data processing, and visualization. A toolbox, MapleSim, adds functionality for multidomain physical modeling and code generation. Maple's symbolic-computing capabilities include those of a general-purpose computer algebra system. For instance, it can manipulate mathematical expressions and find symbolic solutions to certain problems, such as those arising from ordinary and partial differential equations. Maple is developed commercially by the Canadian software company Maplesoft. The name 'Maple' is a reference to the software's Canadian heritage. Overview Core functionality Users can enter mathematics in traditional mathematical notation. Custom user interfaces can also be created. There is support for numeric computations, to arbitrary precision, as well as symbolic computation and visualization. Examples of symbolic computations are given below. Maple incorporates a dynamically typed imperative-style programming language (resembling Pascal), which permits lexically scoped variables. There are also interfaces to other languages (C, C#, Fortran, Java, MATLAB, and Visual Basic), as well as to Microsoft Excel. Maple supports MathML 2.0, which is a W3C format for representing and interpreting mathematical expressions, including their display in web pages. There is also functionality for converting expressions from traditional mathematical notation to markup suitable for the typesetting system LaTeX. Architecture Maple is based on a small kernel, written in C, which provides the Maple language. Most functionality is provided by libraries, which come from a variety of sources. Most of the libraries are written in the Maple language; these have viewable source code. 
Many numerical computations are performed by the NAG Numerical Libraries, ATLAS libraries, or GMP libraries. Different functionality in Maple requires numerical data in different formats. Symbolic expressions are stored in memory as directed acyclic graphs. The standard interface and calculator interface are written in Java. History The first concept of Maple arose from a meeting in late 1980 at the University of Waterloo. Researchers at the university wished to purchase a computer powerful enough to run the Lisp-based computer algebra system Macsyma. Instead, they opted to develop their own computer algebra system, named Maple, that would run on lower cost computers. Aiming for portability, they began writing Maple in programming languages from the BCPL family (initially using a subset of B and C, and later on only C). A first limited version appeared after three weeks, and fuller versions entered mainstream use beginning in 1982. By the end of 1983, over 50 universities had copies of Maple installed on their machines. In 1984, the research group arranged with Watcom Pr
https://en.wikipedia.org/wiki/Amiga%203000
The Amiga 3000, or A3000, is a personal computer released by Commodore in June 1990. It is the successor to the Amiga 2000 and its upgraded model Amiga 2500 with more processing speed, improved graphics, and a new revision of the operating system. Its predecessors, the Amiga 500, 1000 and 2000, share the same fundamental system architecture and consequently perform without much variation in processing speed despite considerable variation in purchase price. The A3000 however, was entirely reworked and rethought as a high-end workstation. The new Motorola 32-bit 68030 CPU, 68882 math co-processor, and 32-bit system memory increase the integer processing speed by a factor of 5 to 18, and the floating-point processing speed by a factor of 7 to 200 times. The new 32-bit Zorro III expansion slots provide for faster and more powerful expansion capabilities. In common with earlier Amigas the 3000 runs a 32-bit operating system called AmigaOS. Version 2.0 is generally considered to have a more ergonomic and attractive interface than previous versions, which were designed with television sets as a lowest common denominator display. Access for application developers was simplified. The A3000UX is an A3000 variant bundled with the UNIX System V operating system. Commodore had a licensing agreement with AT&T to include a port of Unix System V (release 4). Commodore also sold a tower variant called the A3000T. An enhanced version, the Amiga 3000+, with the AGA chipset and an AT&T DSP3210 signal processing chip was produced to prototype stage in 1991. Although this system was never released, Commodore's negotiations with AT&T over the proper way to bundle their VCOS/VCAS operating system software in a personal computer environment helped Apple Computer deliver their Quadra 660 and Quadra 840 AV-series Macintosh systems, two years later. Instead of the Amiga 3000+, Commodore replaced the A3000 six months behind schedule, in the fall of 1992, with the A4000. 
The machine is reported to have sold 14,380 units in Germany (including Amiga 3000T sales). Technical information The Amiga 3000 shipped with a Motorola 68030 at either 16 or 25 MHz and 2 MB of RAM. It includes the Enhanced Chip Set (ECS), a display enhancer for use with a VGA monitor, and a DMA SCSI-II controller and hard disk drive. "Fast RAM" can be increased by fitting DIP (up to 4 MB) or ZIP DRAM chips (up to 16 MB) available in two varieties, Page Mode or Static Column. The A3000, unlike most Amiga models, supports both ROM-based Kickstarts and disk-based Kickstarts (the early "SuperKickstart" model), although not simultaneously. Kickstart V1.4 is actually a beta version of Kickstart which is loaded from disk. 68040 microprocessors require at least 2.0 ROMs. The A3000 has a number of Amiga-specific connectors including two DE-9 ports for joysticks, mice, and light pens, a standard 25-pin RS-232 serial port and a 25-pin Centronics parallel port. As a result, at launch the A3000 was compatible w
https://en.wikipedia.org/wiki/XBasic
XBasic is a variant of the BASIC programming language that was developed in the late 1980s for the Motorola 88000 CPU and Unix by Max Reason. In the early 1990s it was ported to Windows and Linux, and since 1999 it has been available as open source software with its runtime library under the LGPL license. It should not be confused with TI Extended BASIC, which is sometimes called XBasic or X Basic. It should also not be confused with the proprietary Xbasic language used in Alpha Software's Alpha Anywhere and Alpha Five products. Version 6.2.3 was the last official release, released on 27 October 2002; however, unofficial releases are still being maintained by a group of enthusiasts on GitHub. Characteristics XBasic has signed and unsigned 8-, 16- and 32-bit and signed 64-bit integers as well as 32- and 64-bit floating point values. The string data type is only for 8-bit characters. It is possible to generate an assembly language file. XBasic has a Windows-only variant called XBLite. Development is at SourceForge. Components
Editor (writing source code)
Compiler (creating machine code)
Debugger (checking for errors)
Libraries (ready-made code to call on)
GuiDesigner (creates the graphical user interface for the program)
Example code
' Programs contain:
' 1. A PROLOG with type/function/constant declarations.
' 2. This Entry() function where execution begins.
' 3. Zero or more additional functions.
'
FUNCTION Entry()
  PRINT "Hello World"
  PRINT 2+2
  PRINT 44/12
  PRINT 33*3
END FUNCTION
References External links https://groups.io/g/MaxReasonsxBasic https://github.com/orgs/xbwlteam/repositories documentation, links and resources Making your first GUI Tutorial Making a Standalone Executable XBLite homepage Tasks implemented in XBasic on rosettacode.org
https://en.wikipedia.org/wiki/Flight%20instruments
Flight instruments are the instruments in the cockpit of an aircraft that provide the pilot with data about the flight situation of that aircraft, such as altitude, airspeed, vertical speed, heading, and other crucial in-flight information. They improve safety by allowing the pilot to fly the aircraft in level flight, and make turns, without a reference outside the aircraft such as the horizon. Visual flight rules (VFR) require an airspeed indicator, an altimeter, and a compass or other suitable magnetic direction indicator. Instrument flight rules (IFR) additionally require a gyroscopic pitch-bank indicator (artificial horizon), direction indicator (directional gyro) and rate of turn indicator, plus a slip-skid indicator, adjustable altimeter, and a clock. Flight into instrument meteorological conditions (IMC) requires radio navigation instruments for precise takeoffs and landings. The term is sometimes used loosely as a synonym for cockpit instruments as a whole, in which context it can include engine instruments, navigational and communication equipment. Many modern aircraft have electronic flight instrument systems. Most regulated aircraft have these flight instruments as dictated by the US Code of Federal Regulations, Title 14, Part 91. They are grouped according to pitot-static system, compass systems, and gyroscopic instruments. Pitot-static systems Pitot-static instruments use air pressure differences to determine speed and altitude. Altimeter The altimeter shows the aircraft's altitude above sea level by measuring the difference between the pressure in a stack of aneroid capsules inside the altimeter and the atmospheric pressure obtained through the static system. The most common unit for altimeter calibration worldwide is hectopascals (hPa), except for North America and Japan, where inches of mercury (inHg) are used. 
The altimeter is adjustable for local barometric pressure, which must be set correctly to obtain accurate altitude readings, usually in either feet or meters. As the aircraft ascends, the capsules expand and the static pressure drops, causing the altimeter to indicate a higher altitude. The opposite effect occurs when descending. As aviation advanced and altitude ceilings increased, the altimeter dial had to be adapted for use at both higher and lower altitudes. Hence, at lower altitudes (during the first 360-degree sweep of the pointers) a small window with oblique lines appears, warning the pilot of proximity to the ground. This modification was introduced in the early 1960s after several air accidents caused by misread altimeters. At higher altitudes, the window disappears. Airspeed indicator The airspeed indicator shows the aircraft's speed relative to the surrounding air. Knots are currently the most widely used unit, but kilometers per hour are sometimes used instead. The airspeed indicator works by meas
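The pressure-to-altitude relationship that the aneroid capsules mechanize can be approximated with the standard ISA barometric formula. The Python below is a sketch under stated assumptions (the common lower-troposphere approximation with ISA sea-level constants; the function name is illustrative), not an avionics implementation:

```python
def pressure_altitude_m(static_pressure_hpa, setting_hpa=1013.25):
    """Approximate indicated altitude in metres from static pressure.

    Illustrative only: uses the common ISA lower-troposphere
    approximation h = 44330 * (1 - (P / P0) ** 0.1903), where P0 is the
    altimeter setting (ISA standard 1013.25 hPa). Real altimeters are
    calibrated, temperature-compensated mechanical or digital devices.
    """
    return 44330.0 * (1.0 - (static_pressure_hpa / setting_hpa) ** 0.1903)
```

With the standard setting, a static pressure of 850 hPa works out to roughly 1,460 m, illustrating how the instrument converts the falling pressure of a climb into a rising altitude indication.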
https://en.wikipedia.org/wiki/Uniq
uniq is a utility command on Unix, Plan 9, Inferno, and Unix-like operating systems which, when fed a text file or standard input, outputs the text with adjacent identical lines collapsed to a single line of text. Overview The command is a kind of filter program. Typically it is used after sort. It can also output only the duplicate lines (with the -d option), or add the number of occurrences of each line (with the -c option). For example, the following command lists the unique lines in a file, sorted by the number of times each occurs: $ sort file | uniq -c | sort -n Using uniq like this is common when building pipelines in shell scripts. History First appearing in Version 3 Unix, uniq is now available for a number of different Unix and Unix-like operating systems. It has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX and the Single Unix Specification. The version bundled in GNU coreutils was written by Richard Stallman and David MacKenzie. A uniq command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system. See also List of Unix commands References External links SourceForge UnxUtils – Port of several GNU utilities to Windows
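The adjacent-line collapsing that uniq performs, including the -c and -d behaviors described above, can be modeled in a few lines of Python. This is a sketch of the semantics, not the coreutils implementation:

```python
def uniq(lines, count=False, duplicates_only=False):
    """Model of uniq's adjacent-duplicate collapsing (a sketch, not the
    coreutils implementation). count=True mimics -c, duplicates_only=True
    mimics -d. Only *adjacent* duplicates are collapsed, which is why
    uniq typically follows sort in a pipeline.
    """
    out = []
    i = 0
    while i < len(lines):
        j = i
        while j < len(lines) and lines[j] == lines[i]:
            j += 1                      # extend the run of equal lines
        n = j - i                       # length of the run
        if not duplicates_only or n > 1:
            out.append((n, lines[i]) if count else lines[i])
        i = j
    return out
```

Sorting the input first and then calling `uniq(..., count=True)` and sorting by count reproduces the `sort file | uniq -c | sort -n` pipeline shown above.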
https://en.wikipedia.org/wiki/Diff
In computing, the utility diff is a data comparison tool that computes and displays the differences between the contents of files. Unlike edit distance notions used for other purposes, diff is line-oriented rather than character-oriented, but it is like Levenshtein distance in that it tries to determine the smallest set of deletions and insertions to create one file from the other. The utility displays the changes in one of several standard formats, such that both humans and computers can parse the changes and use them for patching. Typically, diff is used to show the changes between two versions of the same file. Modern implementations also support binary files. The output is called a "diff", or a patch, since the output can be applied with the Unix program patch. The output of similar file comparison utilities is also called a "diff"; like the use of the word "grep" for describing the act of searching, the word diff became a generic term for calculating data difference and the results thereof. The POSIX standard specifies the behavior of the "diff" and "patch" utilities and their file formats. History diff was developed in the early 1970s on the Unix operating system, which was emerging from Bell Labs in Murray Hill, New Jersey. The first released version shipped with the 5th Edition of Unix in 1974, and was written by Douglas McIlroy. This research was published in a 1976 paper co-written with James W. Hunt, who developed an initial prototype of diff. The algorithm this paper described became known as the Hunt–Szymanski algorithm. McIlroy's work was preceded and influenced by Steve Johnson's comparison program on GECOS and Mike Lesk's proof program. The proof program also originated on Unix and, like diff, produced line-by-line changes and even used angle-brackets (">" and "<") for presenting line insertions and deletions in the program's output. The heuristics used in these early applications were, however, deemed unreliable. 
The potential usefulness of a diff tool provoked McIlroy into researching and designing a more robust tool that could be used in a variety of tasks, yet perform well within the processing and memory limitations of the PDP-11's hardware. His approach to the problem resulted from collaboration with individuals at Bell Labs including Alfred Aho, Elliot Pinson, Jeffrey Ullman, and Harold S. Stone. In the context of Unix, the use of the line editor ed provided diff with the natural ability to create machine-usable "edit scripts". These edit scripts, when saved to a file, can, along with the original file, be reconstituted by ed into the modified file in its entirety. This greatly reduced the secondary storage necessary to maintain multiple versions of a file. McIlroy considered writing a post-processor for diff where a variety of output formats could be designed and implemented, but he found it more frugal and simpler to have diff itself be responsible for generating the syntax and reverse-order input accepted by the ed command. In 1984, Larry Wall created a separate
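Python's standard library ships a diff in the same spirit: difflib can emit the familiar unified format, although its SequenceMatcher uses a Ratcliff/Obershelp-style algorithm rather than the Hunt–Szymanski LCS approach described above. A minimal example:

```python
import difflib

# Two versions of a file, as lists of lines (newlines kept, as
# difflib expects for file-style input).
old = ["apple\n", "banana\n", "cherry\n"]
new = ["apple\n", "blueberry\n", "cherry\n"]

# unified_diff emits the same format as `diff -u`, so the output can
# be applied by the patch utility. The algorithm differs from
# historical diff, but the interchange format is shared.
patch = list(difflib.unified_diff(old, new,
                                  fromfile="a/fruit.txt",
                                  tofile="b/fruit.txt"))
print("".join(patch))
```

The resulting hunk marks the removed line with a leading "-" and the inserted line with a leading "+", mirroring the deletions-and-insertions model of diff described above.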
https://en.wikipedia.org/wiki/BNC
BNC may refer to: Science and technology Binucleated cells BNC connector (Bayonet Neill–Concelman), a type of RF coaxial cable jack BNC (software) (abbreviation of Bounced Network Connection), an IRC program functioning as a proxy between an IRC client and an IRC server; a type of computer network proxy redirector Biological, Nuclear, Chemical, types of weapons of mass destruction Businesses and organizations Black News Channel, a cable news and current affairs channel in Tallahassee, Florida, US Ballet Nouveau Colorado, a contemporary ballet company based in Broomfield, Colorado, US BNC Bank, also called Bank of North Carolina, a former bank based in High Point, North Carolina, US Banque Nationale du Canada or National Bank of Canada Berkeley Nucleonics Corporation, a US manufacturer of precision electronic instrumentation Bernama News Channel, a news and current affairs channel in Malaysia, formerly known as Bernama TV Bethany Nazarene College, the former name of Southern Nazarene University Biblioteca Nacional de Chile, National Library of Chile Bindura Nickel Corporation, a Zimbabwe mining company, a subsidiary of Mwana Africa plc Banco Nacional de Crédito, a bank based in Caracas, Venezuela Brasenose College, Oxford, a constituent college of the University of Oxford Bollack Netter and Co (Bollack, Netter, et Cie), a French automobile company producing lightweight cars from 1922 until 1935 Telesta Therapeutics, as a former Toronto Stock Exchange trading symbol Brand New Congress, a US political action committee Bellambi Neighbourhood Centre, a community centre in Wollongong, Australia Other uses Bangalore Cantonment, a railway station in Bangalore, India Beni Airport (IATA airport code), in the Democratic Republic of the Congo Beth Nielsen Chapman, singer-songwriter and composer British National Corpus, a corpus of written and spoken English Mitchell Camera, Mitchell NC/BNC ("Newsreel Camera"/"Blimped Newsreel Camera")
https://en.wikipedia.org/wiki/PSV
PSV may refer to: Partial specific volume PlayStation Vita, a handheld game console produced by Sony Computer Entertainment Petit Saint Vincent, an island south of St. Vincent in the Grenadine islands Platform supply vessel, a specific type of ship Police Support Volunteer, the rank of a volunteer performing civilian and usually office based work for British Police Forces Pressure safety valve Pressure Support Ventilation, a form of pressure cycled ventilation that gives a patient a set pressure of air at every breath initialization but has no respiratory rate set PSV Eindhoven, a football club from Eindhoven, Netherlands PSV (women), a women's football team representing PSV Eindhoven in the Eredivisie Vrouwen PSV Nickerie, a football club from Nieuw Nickerie, Suriname Public service vehicle, a bus Private Security Vehicle Garuda Vega, a fictional ship in the 2017 Indian Telugu-language film PSV Garuda Vega
https://en.wikipedia.org/wiki/Fsck
The system utility fsck (file system consistency check) is a tool for checking the consistency of a file system in Unix and Unix-like operating systems, such as Linux, macOS, and FreeBSD. The equivalent programs on MS-DOS and Microsoft Windows are CHKDSK, SFC, and SCANDISK. Pronunciation There is no agreed pronunciation. It can be pronounced "F-S-C-K", "F-S-check", "fizz-check", "F-sack", "fisk", "fizz-k", "fishcake", "fizik", "F-sick", "F-sock", "F-suck", "F-sek", "feshk", the sibilant "fsk", "fix", "farsk", "fosk" or "fusk". Use Generally, fsck is run either automatically at boot time, or manually by the system administrator. The command works directly on data structures stored on disk, which are internal and specific to the particular file system in use, so an fsck command tailored to the file system is generally required. The exact behaviors of various fsck implementations vary, but they typically follow a common order of internal operations and provide a common command-line interface to the user. On modern systems, fsck simply detects the type of filesystem and calls the specialized fsck.fstype (Linux) or fsck_fstype (BSD, macOS) program for each type. Most fsck utilities provide options for either interactively repairing damaged file systems (the user must decide how to fix specific problems), automatically deciding how to fix specific problems (so the user does not have to answer any questions), or reviewing the problems that need to be resolved on a file system without actually fixing them. Partially recovered files where the original file name cannot be reconstructed are typically recovered to a "lost+found" directory that is stored at the root of the file system. A system administrator can also run fsck manually if they believe there is a problem with the file system. The file system is normally checked while unmounted, mounted read-only, or with the system in a special maintenance mode. 
Boot time As boot-time fsck is expected to run without user intervention, it generally defaults to not performing any destructive operations. This may take the form of a read-only check (failing whenever issues are found), or more commonly, a "preen" mode that only fixes innocuous issues commonly found after an unclean shutdown (i.e. crash or power failure). ext2/3/4 offers an option to force a boot-time check after a specified number of mounts, so that periodic checking can be done. Some modern file systems do not require fsck to be run at boot after an unclean shutdown. Some examples are: XFS, a journaling file system. It has a dummy fsck which does nothing and an actual xfs_repair tool to be run when problems are suspected. UFS2 file system in FreeBSD, which can delay the check to the background if soft updates are enabled. As a result, it is usually not necessary to wait for fsck to finish before accessing the disk. This design is reflected by the flag used at boot. ZFS and Btrfs, two full copy-on-write file systems. They avoid in-place changes to assure levels of co
https://en.wikipedia.org/wiki/Amiga%202000
The Amiga 2000, or A2000, is a personal computer released by Commodore in March 1987. It was introduced as a "big box" expandable variant of the Amiga 1000 but quickly redesigned to share most of its electronic components with the contemporary Amiga 500 for cost reduction. Expansion capabilities include two 3.5" drive bays (one of which is used by the included floppy drive) and one 5.25" bay that could be used by a 5.25" floppy drive (for IBM PC compatibility), a hard drive, or CD-ROM once they became available. The Amiga 2000 is the first Amiga model that allows expansion cards to be added internally. SCSI host adapters, memory cards, CPU cards, network cards, graphics cards, serial port cards, and PC compatibility cards were available, and multiple expansions can be used simultaneously without requiring an expansion cage like the Amiga 1000 does. Not only does the Amiga 2000 include five Zorro II card slots, the motherboard also has four PC ISA slots, two of which are inline with Zorro II slots for use with the A2088 bridgeboard, which adds IBM PC XT compatibility to the A2000. The Amiga 2000 was the most versatile and expandable Amiga computer until the Amiga 3000 was introduced three years later. The machine is reported to have sold 124,500 units in Germany. Features Aimed at the high-end market, the original Europe-only model adds a Zorro II backplane, implemented in programmable logic, to the custom Amiga chipset used in the Amiga 1000. Later improved models have redesigned hardware using the more highly integrated Amiga 500 chipset, with the addition of a gate-array called "Buster", which integrates the Zorro subsystem. This also enables hand-off of the system control to a coprocessor slot device, and implements the full video slot for add-on video devices. Like the earlier Amiga 1000 and most IBM PC compatibles of the era (but unlike the Amiga 500), the A2000 comes in a desktop case with a separate keyboard. 
The case is taller than the A1000 to accommodate expansion cards and two 3.5" and one 5.25" drive bays. The A2000's case lacks the "keyboard garage" of the Amiga 1000 but has space for five Zorro II expansion slots, two 16-bit and two 8-bit ISA slots, a CPU upgrade slot and a video slot. Unlike the A1000, the A2000's motherboard includes a battery-backed real-time clock. The Amiga 2000 offers graphics capabilities exceeded among its contemporaries only by the Macintosh II, which sold for about twice the price of a comparably outfitted Amiga 2000 additionally equipped with the IBM PC compatible bridgeboard and 5.25" floppy disk drive (which was important for real-world interoperability at the time). Also like the A1000, the A2000 was sold only by specialty computer dealers. It was originally announced at a price of . Variants The Amiga 2000 was designed with an open architecture. Commodore's engineers believed that the company would probably be unsuccessful in matching the rate of system obsolescence and replacement then common in t
https://en.wikipedia.org/wiki/Kick%20start%20%28disambiguation%29
A kick start is the task of using the foot to start a motorcycle. The term may also refer to: Technology Kickstart (Amiga), the bootstrap of the Amiga computers developed by Commodore Kickstart (Linux), a network installation system for some Linux distributions Yahoo! Kickstart, a professional network from Yahoo! BlackBerry KickStart, the codename for the Pearl smartphone Kickstart (orthosis), the Kickstart Walking System by Cadence Biomedical Organizations Kickstart Kids, a US charity founded by Chuck Norris Kickstarter, a US-based global crowdfunding platform KickStart International, a non-profit organization that provides irrigation technology to farmers in Africa Entertainment Kick Start (TV series), a UK television series on motor bike competitions, and a spinoff computer game Kick Start (album), an album by the English band The Lambrettas "Kick Start", single by Jerry Harrison "Kickstarts" (song), a 2010 song by Example Other uses Kickstart, Homes and Communities Agency funding programme for private housing in the UK See also Project KickStart, project management software Mountain Dew Kickstart, a line of energizing drinks made by Mountain Dew
https://en.wikipedia.org/wiki/UAE%20%28emulator%29
UAE is a computer emulator which emulates the hardware of Commodore International's Amiga range of computers. Released under the GNU General Public License, UAE is free software. History Bernd Schmidt conceived of an emulator that could run Amiga software after finding that such a task was widely believed to be impossible. Schmidt had previously written programs for the Amiga, and was further motivated by the desire not to lose games, demos, and sound modules on switching operating systems. UAE was released in 1995 and was originally called the Unusable Amiga Emulator, due to its inability to boot. In its early stages it was known as the Unix Amiga Emulator, and later by other names as well. Today the name stands for Universal Amiga Emulator. Features UAE is a nearly full-featured Amiga emulator, emulating most of the hardware: Original Chip Set (OCS), Enhanced Chip Set (ECS) and Advanced Graphics Architecture (AGA) I/O devices: floppy disk drives, joystick, mouse and serial ports Processor: Motorola 68000/010/020/040 CPU, optionally a 68881 FPU, and as of WinUAE 3.0.0 beta 15, an enhanced PowerPC JIT core using the QEMU CPU libraries. Memory: 2 MB Chip RAM and 8 MB Fast RAM, or 8 MB Chip RAM without Fast RAM. 64 MB Zorro III Fast RAM, independent of Chip RAM setting (68020+ only). 1 MB Slow RAM, for compatibility. Picasso 96 graphics with 8 MB of memory Serial port, and a simple parallel port sufficient only for printing. Networking via bsdsocket.library emulation For software, UAE may use disk images made from original Amiga floppy disks. These images have the file extension "ADF" (Amiga Disk File). Actual Amiga disks cannot be used directly, because of limitations in the floppy controllers used in other computers. Images of Amiga-formatted hard drives can also be made. UAE also supports mapping the host operating system's directories to Amiga hard drives; finally, physical Amiga-formatted hard drives can be mounted. 
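An ADF image mentioned above is simply a raw, sector-by-sector dump of a double-density Amiga floppy. Assuming the common 880 KiB geometry (80 cylinders × 2 heads × 11 sectors × 512 bytes), a minimal sketch of how such an image is laid out might look like this; the function name is illustrative and not part of UAE's code:

```python
# Sketch: geometry of a standard double-density Amiga ADF image.
# Assumes the common 880 KiB layout (80 cylinders x 2 heads x
# 11 sectors x 512 bytes); names here are illustrative only.

CYLINDERS, HEADS, SECTORS, SECTOR_SIZE = 80, 2, 11, 512
ADF_SIZE = CYLINDERS * HEADS * SECTORS * SECTOR_SIZE  # 901120 bytes

def sector_offset(cylinder: int, head: int, sector: int) -> int:
    """Byte offset of a given sector within a raw ADF image."""
    block = (cylinder * HEADS + head) * SECTORS + sector
    return block * SECTOR_SIZE

print(ADF_SIZE)                # 901120
print(sector_offset(0, 0, 0))  # 0: the boot block starts here
```

Because the image is just linear sectors, no filesystem knowledge is needed to create one; this is why emulators can treat an ADF file interchangeably with a physical disk.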
UAE does not include the original Amiga operating system ROM and files, which are required for running an Amiga system. These are included under license in packages like Amiga Forever. Original Kickstart 3.1 ROM images have also been included with AmigaOS 4 for PowerPC since version 4.1 Update 4. UAE also supports alternative system ROMs, such as those derived from the AROS project; however, these do not provide the same degree of software compatibility as the original ROMs. Portability UAE has been ported to many host operating systems, including Linux, macOS, FreeBSD, DOS, Microsoft Windows, RISC OS, BeOS, Palm OS, Android, the Xbox console, the PSP, PSVita and GP2X handhelds, iOS, the Wii and Dreamcast consoles, and even to AmigaOS, MorphOS and AROS. Emulation speed There have been many threads in the past on Usenet and other public forums where people argued about the possibility of writing an Amiga emulator. Some considered UAE to be attempting the impossible; to be demanding that a sy
https://en.wikipedia.org/wiki/Johnny%20Mnemonic%20%28film%29
Johnny Mnemonic is a 1995 cyberpunk action film directed by Robert Longo in his feature directorial debut. William Gibson wrote the screenplay, adapting his 1981 short story. The film, set in 2021, portrays a dystopian future wracked by a tech-induced plague, awash with conspiracies, and dominated by megacorporations and organized crime. Keanu Reeves plays Johnny, a data courier with an overloaded brain implant designed to securely store confidential information. Takeshi Kitano portrays a Yakuza affiliated with a megacorporation attempting to suppress the data; he hires a psychopathic assassin played by Dolph Lundgren to do so. Ice-T and Dina Meyer co-star as Johnny's allies, a freedom fighter and a bodyguard, respectively. It was shot in Canada; Toronto and Montreal filled in for Newark and Beijing. The project was difficult for Gibson and Longo. After they struggled for years to finance a low-budget adaptation of Gibson's story, Sony greenlit Johnny Mnemonic with a $26 million budget. When Reeves' previous film, Speed, unexpectedly became a major hit, Sony attempted to retool Johnny Mnemonic as a blockbuster. Longo experienced extensive creative differences with the studio, which forced casting choices and script rewrites on him. The film was ultimately recut without Longo's involvement, resulting in a version that he felt did not reflect his artistic vision. Described by Longo and Gibson as originally full of irony, it was edited into a mainstream action film and received negative reviews from critics. A longer version (103 mins) of the film premiered in Japan on April 15, 1995, featuring a score by Mychael Danna and more scenes involving Kitano. The film was released in the United States on May 26, 1995. In 2022, a black-and-white edition of the film, titled Johnny Mnemonic: In Black and White, was released; Gibson characterized it as closer to his original vision. 
Plot In 2021, society is driven by a virtual Internet, which has given rise to a degenerative condition called "nerve attenuation syndrome", or NAS. Megacorporations control much of the world, intensifying the class hostility already created by NAS. Johnny is a "mnemonic courier" who discreetly transports sensitive data for corporations in a storage device implanted in his brain at the cost of his childhood memories. His current job is for a group of scientists in Beijing. Johnny initially balks when he learns the data exceeds his memory capacity even with compression, but agrees because the large fee will cover the cost of the operation to remove the device. Johnny keeps it secret that he is overloaded; he must have the data extracted within a few days or suffer fatal brain damage and corrupt the data. The scientists encrypt the data with three random images from a television feed. As they transmit these images to the receiver in Newark, New Jersey, they are attacked and killed by Yakuza led by Shinji, who wields a laser whip. Johnny battles the Yakuza and grabs a fragment of the e
https://en.wikipedia.org/wiki/DCE
DCE may refer to: Science Dichloroethanes, organic solvents Dichloroethenes, also called dichloroethylene, organic solvents Dynamic contrast enhanced, a type of perfusion MRI Computing Data circuit-terminating equipment, also called data communication(s) equipment or data carrier equipment Data Center Ethernet Distributed Computing Environment, a specification from The Open Group Dead-code elimination, a kind of compiler optimization Digital Consumer Enablement, a non-neutral term for Digital rights management Organisations Delhi College of Engineering, University of Delhi, India Dalian Commodity Exchange Drum Corps Europe Other uses Daly Cherry-Evans Digital currency exchanger
https://en.wikipedia.org/wiki/Category%205
Category 5 may refer to: Category 5 (album), an album by the rock band FireHouse Category 5 cable, used for carrying data Category 5 computer virus, as classified by Symantec Corporation Category 5 Records, a record label Category 5 tropical cyclone, on any of the Tropical cyclone scales Any of several hurricanes listed at List of Category 5 Atlantic hurricanes or List of Category 5 Pacific hurricanes Category 5 pandemic, on the pandemic severity index, an American influenza pandemic with a case-fatality ratio of 2% or higher Category 5 winter storm, on the Northeast Snowfall Impact Scale and the Regional Snowfall Index Any of several winter storms listed at list of Northeast Snowfall Impact Scale winter storms and list of Regional Snowfall Index Category 5 winter storms See also Category V (disambiguation) Class 5 (disambiguation) Group 5 (disambiguation) Type 5 (disambiguation)
https://en.wikipedia.org/wiki/Droid%20%28Star%20Wars%29
In the Star Wars space opera franchise, a droid is a fictional robot possessing some degree of artificial intelligence. The term is a clipped form of "android", a word originally reserved for robots designed to look and act like a human. The word "android" itself stems from the New Latin word "androīdēs", meaning "manlike", itself from the Ancient Greek ἀνδρος (andrós) (genitive of ἀνήρ (anḗr), "man (adult male)" or "human being") + -ειδής (-eidḗs), itself from εἶδος (eîdos, "form, image, shape, appearance, look"). Writer and director George Lucas first used the term "droid" in the second draft script of Star Wars, completed 28 January 1975. However, the word does have a precedent: science fiction writer Mari Wolf used the word in her story "Robots of the World! Arise!" in 1952. It's not known if Lucas knew of this reference when he wrote Star Wars, or if he came up with the term independently. The word "droid" has been a registered trademark of Lucasfilm Ltd since 1977. Behind the scenes Droids are performed using a variety of methods, including robotics, actors inside costumes (in one case, on stilts), and computer animation. Trademark Lucasfilm registered "droid" as a trademark in 1977. The term "Droid" has been used by Verizon Wireless under licence from Lucasfilm, for their line of smartphones based on the Android operating system. Motorola's late-2009 Google Android-based cell phone is called the Droid. This line of phone has been expanded to include other Android-based phones released under Verizon, including the HTC Droid Eris, the HTC Droid Incredible, Motorola Droid X, Motorola Droid 2, and Motorola Droid Pro. The term was also used for the Lucasfilm projects EditDroid, a non-linear editing system, and SoundDroid, an early digital audio workstation. The name "Omnidroid" was used with permission of Lucasfilm for the 2004 Pixar movie, The Incredibles, referring to a line of lethal robots built by the film's antagonist. 
Fictional types of droids The franchise, which began with the 1977 film Star Wars, features a variety of droids designed to perform specific functions. According to background material, most droids lack true sentience and are given processing abilities sufficient only to carry out their assigned function. However, over time droids may develop sentience on their own as they accumulate experience. Periodic memory wipes can prevent this from happening, but those who manage to escape this fate will begin to develop their own personalities. Within the Star Wars universe, a class system is used to categorize different droids depending on their skill-set: first class droids (physical, mathematical and medical sciences), second class droids (engineering and technical sciences), third class droids (social sciences and service functions), fourth class droids (security and military functions), and fifth class droids (menial labor and other non-intelligence functions). Protocol droid A protocol droid specializes in translation,
https://en.wikipedia.org/wiki/Wubi%20method
The Wubizixing input method (), often abbreviated to simply Wubi or Wubi Xing, is a Chinese character input method primarily for inputting simplified Chinese and traditional Chinese text on a computer. Wubi should not be confused with the Wubihua (五笔画) method, which is a different input method that shares the categorization into five types of strokes. The method is also known as Wang Ma (), named after the inventor Wang Yongmin (王永民). There are four Wubi versions that are considered to be standard: Wubi 86, Wubi 98, Wubi 18030 and Wubi New-century (the 3rd-generation version). The latter three can also be used to input traditional Chinese text, albeit in a more limited way. Wubi 86 is the most widely known and used shape-based input method for full letter keyboards in Mainland China. Users who frequently need to input traditional Chinese characters may be better served by other input methods such as Cangjie or Zhengma, which are also more likely to be available on any given computer. The Wubi method is based on the structure of characters rather than their pronunciation, making it possible to input characters even when the user does not know the pronunciation, as well as not being tied too closely to any particular spoken variety of Chinese. It is also extremely efficient: nearly every character can be written with at most 4 keystrokes, and in practice most characters can be written with fewer. There are reports of experienced typists reaching 160 characters per minute with Wubi. Characters per minute in Chinese is not directly comparable to words per minute in English, but Wubi is nonetheless extremely fast in the hands of an experienced typist. The main reason for this is that, unlike with traditional phonetic input methods, one does not have to spend time selecting the desired character from a list of homophonic possibilities: virtually all characters have a unique representation. 
As its name suggests, the keyboard is divided into five regions. The character 笔 (bǐ) in the method's name refers to the brush strokes used to write Chinese characters. Each region is assigned a certain type of stroke. Region 1: horizontal (一) Region 2: vertical (丨) Region 3: downward right-to-left (丿) Region 4: dot strokes or downward left-to-right strokes (丶) Region 5: hook (乙) A major drawback of Wubi is its steep learning curve: as a more complex system, it takes longer to acquire as a skill, and memorization and practice are key to proficient usage. Multiple implementations of Wubi are available, including Google Input Tools (used by Google Translate) and the keyboard options on Mac devices. Wubi sequences for specific characters can be looked up in online dictionaries. In this article, the following convention will be used: character will always mean a Chinese character, whereas letter, key and keystroke will always refer to the keys on the keyboard. How it
https://en.wikipedia.org/wiki/32-bit%20computing
In computer architecture, 32-bit computing refers to computer systems with a processor, memory, and other major system components that operate on data in 32-bit units. Compared to smaller bit widths, 32-bit computers can perform large calculations more efficiently and process more data per clock cycle. Typical 32-bit personal computers also have a 32-bit address bus, permitting up to 4 GB of RAM to be accessed, far more than previous generations of system architecture allowed. 32-bit designs have been used since the earliest days of electronic computing, in experimental systems and then in large mainframe and minicomputer systems. The first hybrid 16/32-bit microprocessor, the Motorola 68000, was introduced in the late 1970s and used in systems such as the original Apple Macintosh. Fully 32-bit microprocessors such as the HP FOCUS, Motorola 68020 and Intel 80386 were launched in the early to mid 1980s and became dominant by the early 1990s. This generation of personal computers coincided with and enabled the first mass adoption of the World Wide Web. While 32-bit architectures are still widely used in specific applications, the PC and server market has moved on to 64 bits with x86-64 since the mid-2000s, with installed memory often exceeding the 32-bit 4 GB address limit on entry-level computers. The latest generation of mobile phones has also switched to 64 bits. Range for storing integers A 32-bit register can store 2^32 different values. The range of integer values that can be stored in 32 bits depends on the integer representation used. With the two most common representations, the range is 0 through 4,294,967,295 (2^32 − 1) for representation as an (unsigned) binary number, and −2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1) for representation as two's complement. One important consequence is that a processor with 32-bit memory addresses can directly access at most 4 GiB of byte-addressable memory (though in practice the limit may be lower). 
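The two ranges above come from interpreting the same 32 bits either as an unsigned binary number or as two's complement. A short sketch makes the reinterpretation concrete:

```python
# The 32-bit integer ranges stated above, computed directly; the same
# bit pattern can be read as unsigned or as two's-complement signed.
import struct

UNSIGNED_MAX = 2**32 - 1            # 4294967295
SIGNED_MIN, SIGNED_MAX = -2**31, 2**31 - 1

def to_signed(u: int) -> int:
    """Reinterpret a 32-bit unsigned value as two's-complement signed."""
    return struct.unpack('<i', struct.pack('<I', u))[0]

print(UNSIGNED_MAX)              # 4294967295
print(to_signed(0xFFFFFFFF))     # -1 (all bits set)
print(to_signed(0x80000000))     # -2147483648, i.e. SIGNED_MIN
```

The `struct` round-trip mirrors what the hardware does implicitly: no bits change, only the interpretation of the top bit as a sign.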
Technical history The world's first stored-program electronic computer, the Manchester Baby, used a 32-bit architecture in 1948, although it was only a proof of concept and had little practical capacity. It held only 32 32-bit words of RAM on a Williams tube, and had no addition operation, only subtraction. Memory, as well as other digital circuits and wiring, was expensive during the first decades of 32-bit architectures (the 1960s to the 1980s). Older 32-bit processor families (or simpler, cheaper variants thereof) could therefore have many compromises and limitations in order to cut costs. This could be a 16-bit ALU, for instance, or external (or internal) buses narrower than 32 bits, limiting memory size or demanding more cycles for instruction fetch, execution or write back. Despite this, such processors could be labeled 32-bit, since they still had 32-bit registers and instructions able to manipulate 32-bit quantities. For example, the IBM System/360 Model 30 had an 8-bit ALU, 8-bit in
https://en.wikipedia.org/wiki/Databus
Databus may refer to: Bus (computing), a communication system that transfers data between different components in a computer or between different computers Memory bus, a bus between the computer and the memory PCI bus, a bus between motherboard and peripherals that uses the Peripheral Component Interconnect standard USB (Universal Serial Bus), a standard communication protocol used by many portable devices, computer peripherals and storage media Programming Language for Business, a business-oriented programming language originally called DATABUS the Databus project from DBpedia
https://en.wikipedia.org/wiki/Amiga%20Chip%20RAM
Chip RAM is a commonly used term for the integrated RAM used in Commodore's line of Amiga computers. Chip RAM is shared between the central processing unit (CPU) and the Amiga's dedicated chipset (hence the name). It was also, rather misleadingly, known as "graphics RAM". Direct memory access Under the Amiga architecture, the direct memory access (DMA) controller is integrated into the Agnus (Alice on AGA models) chip. Both the CPU and other members of the chipset have to arbitrate for access to shared RAM via Agnus. This allows the custom chips to perform video, audio, or other DMA operations independently of the CPU. As the 68000 processor used in early Amiga systems usually only accesses memory on every second memory cycle, Agnus operates a system where the "odd" clock cycle is allocated to time-critical custom chip access and the "even" cycle is allocated to the CPU: thus, for average DMA demand, the CPU is not typically blocked from memory access and may run without interruption. However, certain chipset DMA, such as high-resolution graphics with a larger color palette, Copper, or blitter operations, can use any spare cycles, effectively blocking cycles from the CPU. In such situations CPU cycles are only blocked while accessing shared RAM, but never when accessing Fast (CPU-only) RAM (when present) or ROM. Chip RAM by model Most stock Amiga systems were equipped with Chip RAM only and shipped with between 256 KiB and 2 MiB. The shared RAM data bus is 16-bit on OCS and ECS systems. The later AGA systems use a 32-bit data bus controlled by the Alice coprocessor (replacing Agnus) and 32-bit RAM. The memory clock runs at double the rate on AGA systems. As a result, chipset RAM bandwidth is increased fourfold compared to the earlier 16-bit design. However, 32-bit access is limited to CPU and graphics DMA and cannot be used for other devices. The ECS-based A3000 also has 32-bit Chip RAM, but access is only 32-bit for CPU operations; the chipset remained 16-bit. 
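The odd/even cycle interleaving described above can be illustrated with a deliberately simplified model. This is a toy sketch, not Agnus's real slot-allocation logic (which is considerably more intricate): odd cycles are reserved for the custom chips, even cycles go to the CPU unless heavy DMA steals them too.

```python
# Toy model of the bus arbitration described above: "odd" Chip-RAM
# cycles go to the custom chips, "even" cycles to the CPU, and heavy
# DMA (e.g. high-resolution bitplanes or the blitter) may also steal
# even cycles. Purely illustrative; real Agnus scheduling is richer.

def run_cycles(n, chipset_wants_even):
    """Return how many of n Chip-RAM cycles the CPU actually gets."""
    cpu_cycles = 0
    for cycle in range(n):
        if cycle % 2 == 1:
            continue                       # odd: reserved for custom chips
        if chipset_wants_even(cycle):
            continue                       # even cycle stolen by DMA
        cpu_cycles += 1
    return cpu_cycles

# Average DMA load: the CPU keeps every even cycle (half the bus).
print(run_cycles(100, lambda c: False))       # 50
# Heavy DMA stealing half the even cycles slows the CPU further.
print(run_cycles(100, lambda c: c % 4 == 0))  # 25
```

This also shows why Fast RAM speeds the machine up: code running from Fast RAM never enters this arbitration at all.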
The maximum amount of Chip RAM is dependent on the Agnus/Alice version. The original Agnus chip fitted to the A1000 and early A2000 systems is a 48-pin DIP package able to address 512 KiB of Chip RAM. Subsequent versions of the Agnus are in an 84-pin PLCC package (either socketed or surface-mounted). All models except the A1000 are upgradable to 2 MiB of Chip RAM. The A500 and the early A2000B can accommodate 1 MiB by installing a later revision Agnus chip (8372A) with minimal hardware modifications; late-production machines usually already contained that chip, so that only jumper modifications were necessary. Likewise, 2 MiB can be installed by fitting an 8372B Agnus and extra memory. The maximum amount of Chip RAM in any model is 2 MiB. The Amiga 4000 motherboard includes a non-functional jumper that anticipated later chips and is labeled for 8 MiB of Chip RAM; regardless of its position, the system only recognizes 2 MiB due to the limitations of the Alice chip. How
https://en.wikipedia.org/wiki/Planar%20%28computer%20graphics%29
In computer graphics, planar is the method of arranging pixel data into several bitplanes of RAM. Each bit in a bitplane is related to one pixel on the screen. Unlike packed, high color, or true color graphics, the whole dataset for an individual pixel is not in one specific location in RAM, but spread across the bitplanes that make up the display. Planar arrangement determines how pixel data is laid out in memory, not how the data for a pixel is interpreted; pixel data in a planar arrangement could encode either indexed or direct color. This scheme originated in the early days of computer graphics. The memory chips of this era could not supply data fast enough on their own to generate a picture on a TV screen or monitor from a large framebuffer. By splitting the data up into multiple planes, each plane could be stored on a separate memory chip. These chips could then be read in parallel at a slower rate, allowing graphical display on modest hardware, like game consoles of the third and fourth generations and home computers of the 1980s. The EGA video adapter on early IBM PC computers uses planar arrangement in color graphical modes for this reason. The later VGA includes one non-planar mode which sacrifices memory efficiency for more convenient access. Hardware with planar graphics Game consoles with a planar display organization include Sega's Master System and Game Gear, Nintendo's NES and SNES, and the PC Engine. The British 8-bit BBC Micro has partial elements of a planar pixel arrangement. The Slovak PP 01 includes a 24 KB plane-based 8-colour graphics mode with a resolution of 256×256 pixels. The 16-bit Atari ST and Amiga platforms of the 1980s and 1990s were based exclusively on a planar graphics configuration, in the Amiga's case alongside a powerful blitter. The Amiga's OCS graphics chipset works with 5 bitplanes while later models with the AGA chipset can handle eight bitplanes. 
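The splitting of a pixel's colour index across bitplanes can be sketched directly. The helper below is illustrative (not any particular chipset's layout, though plane-per-bit with the leftmost pixel in the most significant bit matches the Amiga convention): it packs a row of colour indices into one byte per plane.

```python
# Sketch of the planar arrangement described above: each pixel's
# colour index is split bit-by-bit across separate bitplanes, rather
# than stored whole as in a chunky/packed layout.

def to_planes(pixels, depth):
    """Pack a row of 8 colour indices into `depth` bitplane bytes."""
    planes = []
    for bit in range(depth):               # one output byte per plane
        byte = 0
        for i, p in enumerate(pixels):
            if p & (1 << bit):
                byte |= 0x80 >> i          # pixel 0 is the leftmost bit
        planes.append(byte)
    return planes

# Eight pixels with 2-bit colour indices (4 colours) -> two plane bytes.
row = [0, 1, 2, 3, 3, 2, 1, 0]
print([f"{b:08b}" for b in to_planes(row, 2)])  # ['01011010', '00111100']
```

Reading a single pixel back requires touching every plane, which is exactly the access-pattern cost that made later "chunky" modes attractive for software rendering.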
For the Sinclair (Amstrad) ZX Spectrum computer family and compatible systems, a graphics expansion named HGFX was developed in 2019. In 2022 it was implemented in FPGA-based hardware. The HGFX enables a memory organization that is compatible with the original ZX Spectrum system while taking up only 6144 bytes of the original video RAM. In addition, it provides two video buffers, 256 indexed colours, a truecolour palette, and an HDMI output. The HGFX works with eight bitplanes. Currently it is implemented as part of the eLeMeNt ZX computer. Examples On a chunky display with 4 bits per pixel and an RGBI palette, each byte represents two pixels, with 16 different colors available for each pixel; four consecutive pixels are stored in two consecutive bytes. A planar scheme could instead use 2 bitplanes, providing for a 4-color display; eight pixels would be stored as 2 bytes non-contiguously in memory, one byte in each bitplane. In the planar example, 2 bytes represent 8 pixels with 4 available colors, whereas the packed-pixel example uses 2 bytes to represent fewer pixels but with more colors. Adding planes
https://en.wikipedia.org/wiki/Voyager
Voyager may refer to: Computing and communications LG Voyager, a mobile phone model manufactured by LG Electronics NCR Voyager, a computer platform produced by NCR Corporation Voyager (computer worm), a computer worm affecting Oracle databases Voyager (library program), the integrated library system from Ex Libris Group Voyager (web browser), a web browser for Amiga computers Voyager Digital, a defunct cryptocurrency brokerage company HP Voyager series, code name for a Hewlett-Packard series of handheld programmable calculators Transport Air Airbus Voyager, Royal Air Force version of the Airbus A330 MRTT Frequent flyer program of South African Airways Egvoyager Voyager 203, an Italian ultralight aircraft Raj Hamsa Voyager, an Indian ultralight trike design Rutan Voyager, the first airplane to fly around the world nonstop without refuelling Land Bombardier Voyager, a high-speed train operated in the United Kingdom Bombardier Voyager (British Rail Class 220), a non-tilting train built 2000–2001 Bombardier Super Voyager (British Rail Class 221), a tilting train built 2001–2002 Chrysler Voyager, a minivan Kawasaki Voyager, two series of motorcycles Mercury Voyager, a station wagon Plymouth Voyager, two series of vans Water , a Royal Australian Navy destroyer , a Royal Navy destroyer , a high-speed ferry , a US Navy motorboat , a Royal Caribbean cruise ship Space Voyager program, a NASA program of uncrewed space probes Voyager 1, an uncrewed spacecraft launched September 5, 1977 Voyager 2, an uncrewed spacecraft launched August 20, 1977 Voyager program (Mars), a cancelled series of space probes which would have traveled to the planet Mars VSS Voyager, the proposed second vessel of the Virgin Galactic suborbital tourism fleet Voyager (communications satellite), a series of American OSCAR satellites The Voyager, the first simulator constructed at the Christa McAuliffe Space Education Center Arts and entertainment Film Voyager, a 1991 German film V'Ger, or Voyager 6, a 
fictional NASA space probe in Star Trek: The Motion Picture (1979) Voyagers (film), a 2021 American science fiction film Music Voyager (English band), a British pop-rock group Voyager (Australian band), an Australian progressive metal band Voyager (Manilla Road album) Voyager (Mike Oldfield album) Voyager (Paul Epworth album) Voyager (Space Needle album) Voyager (Walter Meego album) Voyager (311 album) "Voyager" (song), by Pendulum "Voyager", a song from the Alan Parsons Project album Pyramid "Voyager", a song from the Daft Punk album Discovery Voyager, an album by Funk Trek Voyager, an album by The Jet Age of Tomorrow The Voyager 2014 album by Jenny Lewis Minimoog Voyager, an electronic musical instrument Voyager, a Japanese band. Television Voyagers!, an NBC television series, broadcast from 1982 to 1983 Earth Star Voyager, a 1988 television pilot that aired on Wonderful World of Disney Star Trek: Voyager, a UPN science fict
https://en.wikipedia.org/wiki/High%20color
High color graphics is a method of storing image information in a computer's memory such that each pixel is represented by two bytes. Usually the color is represented by all 16 bits, but some devices also support 15-bit high color. In Windows 7, Microsoft used the term high color to identify display systems that can make use of more than 8 bits per color channel (10:10:10:2 or 16:16:16:16 rendering formats), as opposed to traditional 8-bit-per-channel formats. This is a different and distinct usage from the 15-bit (5:5:5) or 16-bit (5:6:5) formats traditionally associated with the phrase high color; see deep color. 15-bit high color In 15-bit high color, one of the bits of the two bytes is ignored or set aside for an alpha channel, and the remaining 15 bits are split between the red, green, and blue components of the final color. Each of the RGB components has 5 bits associated with it, giving 2⁵ = 32 intensities of each component. This allows 32768 possible colors for each pixel. The popular Cirrus Logic graphics chips of the early 1990s made use of the spare high-order bit for their so-called "mixed" video modes: with bit 15 clear, bits 0 through 14 would be treated as an RGB value as described above, while with bit 15 set, bits 0 through 7 would be interpreted as an 8-bit index into a 256-color palette (with bits 8 through 14 remaining unused). This enabled display of (comparatively) high-quality color images side by side with palette-animated screen elements, but in practice this feature was hardly used by any software. 16-bit high color When all 16 bits are used, one of the components (usually green with RGB565, see below) gets an extra bit, allowing 64 levels of intensity for that component, and a total of 65536 available colors. This can lead to small discrepancies in encoding, e.g. when one wishes to encode the 24-bit colour RGB (40, 40, 40) with 16 bits (a problem common to subsampling). Forty in binary is 00101000. 
The red and blue channels will take the five most significant bits, and will have a value of 00101, or 5 on a scale from 0 to 31 (16.1%). The green channel, with six bits of precision, will have a binary value of 001010, or 10 on a scale from 0 to 63 (15.9%). Because of this, the colour RGB (40, 40, 40) will have a slight purplish (magenta) tinge when displayed in 16 bits. 40 on a scale from 0 to 255 is 15.7%. Other 24-bit colours would incur a green tinge when subsampled: for instance, the 24-bit RGB representation of 14.1% grey, i.e. (36, 36, 36), would be encoded as 4/31 (12.9%) on the red and blue channels, but 9/63 (14.3%) on the green channel, because 36 is represented as 00100100 in binary. Green is usually chosen for the extra bit in 16 bits because the human eye has its highest sensitivity for green shades. For a demonstration, look closely at the following picture (note: this will work only on monitors displaying true color, i.e., 24 or 32 bits) where dark shades of red, green and blue are shown using 128 levels of
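The channel arithmetic above amounts to keeping the top 5 bits for red and blue and the top 6 bits for green, i.e. right-shifting by 3 and 2 respectively. A minimal sketch reproduces both worked examples:

```python
# The RGB565 truncation described above: an 8-bit channel keeps its top
# 5 bits (red, blue) or top 6 bits (green), so equal 8-bit channels can
# land on slightly different fractions of full scale.

def rgb888_to_rgb565(r, g, b):
    """Truncate 8-bit channels to 5-, 6-, and 5-bit channel values."""
    return (r >> 3, g >> 2, b >> 3)

r5, g6, b5 = rgb888_to_rgb565(40, 40, 40)
print(r5, g6, b5)            # 5 10 5
print(r5 / 31, g6 / 63)      # ~0.161 vs ~0.159: slight magenta tinge

r5, g6, b5 = rgb888_to_rgb565(36, 36, 36)
print(r5, g6, b5)            # 4 9 4
print(r5 / 31, g6 / 63)      # ~0.129 vs ~0.143: slight green tinge
```

Because the red/blue and green channels are divided by different full-scale values (31 vs 63), the truncated grey is no longer exactly neutral, which is the tinge effect the text describes.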
https://en.wikipedia.org/wiki/Citrullus
Citrullus is a genus of seven species of desert vines, among which Citrullus lanatus (the watermelon) is an important crop. Taxonomy Molecular data, including sequences from the original collection of Momordica lanata made near Cape Town by C. P. Thunberg in 1773, show that what Thunberg collected is not what has been called Citrullus lanatus, the domesticated watermelon, since the 1930s. Although this error only arose in 1930 (Bailey, Gentes Herbarum 2: 180–186), it has been perpetuated in hundreds of papers on the watermelon. In addition, there is an older name for the watermelon, Citrullus battich Forssk. (Fl. Aegypt.-Arab.: 167. Jun 1775), which would normally take precedence over Momordica lanata Thunberg (Prodr. Pl. Cap.: 13. 1794). To solve this problem, it has been proposed to conserve the name Citrullus lanatus with a new type, preserving the current sense of the name. Species Citrullus consists of the following species and subspecies: Citrullus amarus Schrad. – citron melon Citrullus colocynthis (L.) Schrad. – colocynth Citrullus ecirrhosus Cogn. – tendril-less melon Citrullus lanatus (Thunb.) Matsum. & Nakai – desert watermelon Citrullus lanatus subsp. vulgaris var. cordophanus (Ter-Avan.) Fursa Citrullus lanatus var. lanatus Citrullus mucosospermus (Fursa) Fursa – egusi melon Citrullus naudinianus (Sond.) Hook.f. Citrullus rehmii References External links Chomicki, G., and S. S. Renner. 2015. Watermelon origin solved with molecular phylogenetics including Linnaean material: Another example of museomics. New Phytologist 205(2): 526–532.
https://en.wikipedia.org/wiki/Grim%20Fandango
Grim Fandango is a 1998 adventure game directed by Tim Schafer and developed and published by LucasArts for Microsoft Windows. It is the first adventure game by LucasArts to use 3D computer graphics overlaid on pre-rendered static backgrounds. As with other LucasArts adventure games, the player must converse with characters and examine, collect, and use objects to solve puzzles. Grim Fandango is set in the Land of the Dead and the retro-futuristic version of the 1950s, through which recently departed souls, represented as calaca-like figures, travel before they reach their final destination. The story follows travel agent Manuel "Manny" Calavera as he attempts to save new arrival Mercedes "Meche" Colomar, a virtuous soul, on her journey. The game combines elements of the Aztec afterlife with film noir style, with influences including The Maltese Falcon, On the Waterfront and Casablanca. Grim Fandango received praise for its art design and direction. It was selected for several awards and is often listed as one of the greatest video games of all time. However, it was a commercial failure and contributed towards LucasArts' decision to end adventure game development and the decline of the adventure game genre. In 2014, with help from Sony, Schafer's studio Double Fine Productions acquired the Grim Fandango license following Disney's acquisition and closure of LucasArts as a video game developer the previous year. Double Fine produced a remastered version of the game, featuring improved character graphics, controls (including point and click), an orchestrated score, and directors' commentary. It was released for Linux, OS X, PlayStation 4, PlayStation Vita, and Windows in January 2015, for Android and iOS in May 2015, for Nintendo Switch in November 2018, and for Xbox One in October 2020. 
Gameplay Grim Fandango is an adventure game, in which the player controls Manuel "Manny" Calavera (calavera being Spanish for 'skull') as he follows Mercedes "Meche" Colomar in the Underworld. The game uses the GrimE engine, pre-rendering static backgrounds from 3D models, while the main objects and characters are animated in 3D. Additionally, cutscenes in the game have also been pre-rendered in 3D. The player controls Manny's movements and actions with a keyboard, a joystick, or a gamepad. The remastered edition allows control via a mouse as well. Manny must collect objects that can be used with other collectible objects, parts of the scenery, or other people in the Land of the Dead in order to solve puzzles and progress in the game. The game lacks any type of HUD. Unlike the earlier 2D LucasArts games, the player is informed of objects or persons of interest not by text floating on the screen when the player passes a cursor over them, but by the fact that Manny turns his head toward that object or person as he walks by. The player reviews the inventory of items that Manny has collected by watching him pull each item in and out of his coat.
https://en.wikipedia.org/wiki/Cfront
Cfront was the original compiler for C++ (then known as "C with Classes") from around 1983, which converted C++ to C; it was developed by Bjarne Stroustrup at AT&T Bell Labs. Despite translating to C, Cfront was not a mere preprocessor that understood only fragments of the language: it had a complete parser, built symbol tables, and built a tree for each class, function, etc. Cfront was based on CPre, Stroustrup's earlier compiler for C with Classes, started in 1979. As Cfront was written in C++, it was a challenge to bootstrap on a machine without a C++ compiler/translator. Along with the Cfront C++ sources, a special "half-preprocessed" version of the C code resulting from compiling Cfront with itself was also provided. This C code was to be compiled with the native C compiler, and the resulting executable could then be used to compile the Cfront C++ sources. Most of the porting effort in getting Cfront running on a new machine was related to standard I/O. Cfront's C++ streams were closely tied in with the C library's buffered I/O streams, but there was little interaction with the rest of the C environment. The compiler could be ported to most System V derivatives without many changes, but BSD-based systems usually had many more variations in their C libraries and associated stdio structures. Cfront defined the language until circa 1990, and many of the more obscure corner cases in C++ were related to its C++-to-C translation approach. A few remnants of Cfront's translation method are still found in today's C++ compilers; name mangling was originated by Cfront, as the relatively primitive linkers at the time did not support type information in symbols, and some template instantiation models are derived from Cfront's early efforts. C++ (and Cfront) was directly responsible for many improvements in Unix linkers and object file formats, as it was the first widely used language which required link-time type checking, weak symbols, and other similar features. 
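The name-mangling idea mentioned above can be sketched in a few lines. The scheme below is invented purely for illustration (Cfront's real encoding differed in its details); it shows only why mangling lets a type-blind linker distinguish overloaded functions:

```python
# Toy illustration of name mangling: a C++-to-C translator must encode a
# method's class and parameter types into a flat C identifier, because old
# linkers compared symbols by name only. This scheme is invented for
# illustration and is NOT Cfront's actual encoding.

def mangle(class_name: str, method: str, param_types: list) -> str:
    """Encode class, method, and parameter types into one C-style symbol."""
    params = "".join(f"{len(t)}{t}" for t in param_types)  # length-prefixed types
    return f"{method}__{len(class_name)}{class_name}{params}"

# Stack::push(int) and Stack::push(double) share a method name, but mangle
# to distinct C symbols, so a type-blind C linker can tell them apart:
print(mangle("Stack", "push", ["int"]))     # push__5Stack3int
print(mangle("Stack", "push", ["double"]))  # push__5Stack6double
```

Length-prefixing each component (as above) is one way to keep the encoding unambiguous without separator characters, a property real mangling schemes also need.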
Cfront 4.0 was abandoned in 1993 after a failed attempt to add exception support. The C++ language had grown beyond its capabilities; however, a compiler with a similar approach, Comeau C/C++, became available later. Analogous to the way Cfront can process C++ source code into something that can be compiled by previously-available C compilers, cppfront processes source code written in new and experimental C++ 'syntax 2' into something that can be compiled by previously-available 'syntax 1' C++ compilers. References Notes External links Cfront Releases at C++ Historical Sources Archive cfront v3, the cfront re-port for the 4th edition of Plan 9 from Bell Labs Cfront 3.0.3, "AT&T/Bell Labs C++ to C translator from 1994, modified to build on modern hardware" C++ compilers Unix programming tools
https://en.wikipedia.org/wiki/Cross-platform%20software
In computing, cross-platform software (also called multi-platform software, platform-agnostic software, or platform-independent software) is computer software that is designed to work on several computing platforms. Some cross-platform software requires a separate build for each platform, but some can be directly run on any platform without special preparation, being written in an interpreted language or compiled to portable bytecode for which the interpreters or run-time packages are common or standard components of all supported platforms. For example, a cross-platform application may run on Linux, macOS and Microsoft Windows. Cross-platform software may run on many platforms, or on as few as two. Some frameworks for cross-platform development are Codename One, ArkUI, Kivy, Qt, Flutter, NativeScript, Xamarin, Phonegap, Ionic, and React Native. Platforms Platform can refer to the type of processor (CPU) or other hardware on which an operating system (OS) or application runs, the type of OS, or a combination of the two. An example of a common platform is Android, which runs on the ARM architecture family. Other well-known platforms are Linux/Unix, macOS and Windows; these are all cross-platform. Applications can be written to depend on the features of a particular platform—either the hardware, OS, or virtual machine (VM) it runs on. For example, the Java platform is a common VM platform which runs on many OSs and hardware types. Hardware A hardware platform can refer to an instruction set architecture. For example: ARM or the x86 architecture. These machines can run different operating systems. Smartphones and tablets generally run on the ARM architecture, and typically run Android, iOS, or other mobile operating systems. Software A software platform can be either an operating system (OS) or a programming environment, though more commonly it is a combination of both. An exception is Java, which uses an OS-independent virtual machine (VM) to execute Java bytecode. 
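The "separate build per platform" versus "run anywhere" distinction can be illustrated with a short sketch in Python, used here purely as an example of an interpreted, cross-platform language: the script itself is portable, while the interpreter is the component built separately for each platform.

```python
import platform
import sys

# The same script runs unmodified on Linux, macOS, and Windows: the Python
# interpreter is the per-platform component, the script is portable.
def describe_platform() -> dict:
    return {
        "os": platform.system(),        # e.g. 'Linux', 'Darwin', 'Windows'
        "machine": platform.machine(),  # e.g. 'x86_64', 'arm64'
        "python": platform.python_version(),
    }

info = describe_platform()
print(f"Running on {info['os']} ({info['machine']}), Python {info['python']}")

# Platform-specific behaviour, when unavoidable, branches at run time
# rather than requiring a separate build:
path_sep = "\\" if sys.platform == "win32" else "/"
```

The run-time branch at the end shows the usual escape hatch: one portable program with small conditional sections, instead of one build per platform.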
Some software platforms are: Android (ARM64) ChromeOS (ARM32, ARM64, IA-32, x86-64) Common Language Infrastructure (CLI) by Microsoft, implemented in: The legacy .NET Framework that works only on Microsoft Windows. The newer .NET framework (simply called ".NET") that works across Microsoft Windows, macOS, and Linux. Other implementations such as Mono (formerly by Novell and Xamarin) HarmonyOS (ARM64, RISC-V, x86, x64, and LoongArch) iOS (ARMv8-A) iPadOS (ARMv8-A) Java Linux (Alpha, ARC, ARM, C-Sky, Hexagon, IA-64, LoongArch, m68k, Microblaze, MIPS, Nios II, OpenRISC, PA-RISC, PowerPC, RISC-V, s390, SuperH, SPARC, x86, Xtensa) macOS (x86, ARM/Apple silicon) Microsoft Windows (IA-32, x86-64, ARM, ARM64) PlayStation 4 (x86), PlayStation 3 (PowerPC) and PlayStation Vita (ARM) Solaris (SPARC, x86) SPARC Unix (many platforms since 1969) Web browsers – mostly compatible with each other, running JavaScript web-apps Xbox Minor, historical AmigaOS (m68k), AmigaOS 4 (PowerPC),
https://en.wikipedia.org/wiki/Computing%20platform
A computing platform, digital platform, or software platform is an environment in which a piece of software is executed. It may be the hardware or the operating system (OS), even a web browser and associated application programming interfaces, or other underlying software, as long as the program code is executed with it. Computing platforms have different abstraction levels, including a computer architecture, an OS, or runtime libraries. A computing platform is the stage on which computer programs can run. A platform can be seen both as a constraint on the software development process, in that different platforms provide different functionality and restrictions; and as an assistant to the development process, in that they provide low-level functionality ready-made. For example, an OS may be a platform that abstracts the underlying differences in hardware and provides a generic command for saving files or accessing the network. Components Platforms may also include: Hardware alone, in the case of small embedded systems. Embedded systems can access hardware directly, without an OS; this is referred to as running on "bare metal". A browser in the case of web-based software. The browser itself runs on a hardware+OS platform, but this is not relevant to software running within the browser. An application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro. This can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform. Software frameworks that provide ready-made functionality. Cloud computing and Platform as a Service. Extending the idea of a software framework, these allow application developers to build software out of components that are hosted not by the developer, but by the provider, with internet communication linking them together. The social networking sites Twitter and Facebook are also considered development platforms. 
A virtual machine (VM) such as the Java virtual machine or .NET CLR. Applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM. A virtualized version of a complete system, including virtualized hardware, OS, software, and storage. These allow, for instance, a typical Windows program to run on what is physically a Mac. Some architectures have multiple layers, with each layer acting as a platform for the one above it. In general, a component only has to be adapted to the layer immediately beneath it. For instance, a Java program has to be written to use the Java virtual machine (JVM) and associated libraries as a platform but does not have to be adapted to run on the Windows, Linux or Macintosh OS platforms. However, the JVM, the layer beneath the application, does have to be built separately for each OS. Operating system examples Desktop, laptop, server AmigaOS, AmigaOS 4 ChromeOS Unix and Unix-like Linux EulerOS FreeBSD
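The bytecode layering described above can be made concrete with Python's own VM (used here as a stand-in for the JVM/CLR pattern): source is compiled to bytecode, and the interpreter built for each OS executes that bytecode. A small sketch using the standard `dis` module:

```python
import dis

# Python source is compiled to bytecode for the CPython virtual machine;
# the bytecode is the portable layer, while the VM itself is the component
# built separately for each platform.
code = compile("x = 2 + 3", "<example>", "exec")

# The compiled form is raw bytes...
print(type(code.co_code), len(code.co_code))

# ...which the dis module decodes into VM instructions:
instructions = [ins.opname for ins in dis.get_instructions(code)]
print(instructions)  # e.g. includes 'LOAD_CONST' and 'STORE_NAME'; exact list varies by Python version
```

The exact instruction names vary between interpreter versions, which is itself a reminder that the VM, not the hardware, defines this platform layer.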
https://en.wikipedia.org/wiki/AWeb
AWeb is a web browser for the Amiga range of computers. Originally developed by Yvon Rozijn, AWeb was shipped with version 3.9 of AmigaOS, and is now open source. AWeb supports HTML 3.2, and some 4.01, JavaScript, frames, SSL, and various other Netscape and Internet Explorer features. Awards/Press Amiga Computing, December 96 issue, Overall rating of 89%. Amiga User International, January 97 issue, rating of 95%. CU Amiga, November 1996 issue, rating of 91%. See also AMosaic IBrowse Voyager References External links AWeb II Website Unofficial AWeb for AOS 3.x / AOS4 / MorphOS Free web browsers Web browsers for AmigaOS MorphOS software 1996 software Discontinued web browsers
https://en.wikipedia.org/wiki/IBrowse
IBrowse is a MUI-based web browser for the Amiga range of computers, and was a rewritten follow-on to Amiga Mosaic, one of the first web browsers for the Amiga computer. IBrowse was originally developed for a company called Omnipresence, now defunct. The original author has since continued development of IBrowse. IBrowse supports some HTML 4, JavaScript, frames, SSL, and various other standards. It was one of the first browsers to include tabbed browsing, as early as 1999 with IBrowse². However, it does not support CSS. A limited OEM version of IBrowse 2.4 is included with AmigaOS 4. Between April 2007 and August 2019, IBrowse was not available for sale to new customers since its distributor had quit the Amiga market, although existing v2.x users could download and install the demo version over their existing installation in order to access all functionality. Starting with IBrowse 2.5, new purchases can be made directly from the developer's website. System requirements Kickstart 3.0 Motorola 68020 or higher 5 MB free memory (7 MB with AmiSSL v5) MUI 3.8 See also AMosaic AWeb NetSurf Voyager OWB TimberWolf References Further reading Fischer, Michael, Meyer, Michael, and Witbrock, Michael. "User Extensibility in Amiga Mosaic", Proceedings of the Second International World Wide Web (WWW) Conference '94: Mosaic and the Web, October 1994 1996 software Web browsers Gopher clients Web browsers for AmigaOS
https://en.wikipedia.org/wiki/Voyager%20%28web%20browser%29
Voyager is a discontinued web browser for the Amiga range of computers, developed by VaporWare. Voyager supports HTML 3.2 and some HTML 4, JavaScript, frames, SSL, Flash, and various other Internet Explorer and Netscape Navigator features. Voyager is also available for the MorphOS and CaOS operating systems. Voyager 3 In May 1999 Oliver Wagner of VaporWare gave details of the upcoming Voyager 3 to Amiga Format, with planned new features including support for JavaScript, DOM (based on Internet Explorer's model), and CSS. Voyager 3 was generally well-received, with Amiga Format praising its fast JavaScript execution and rapid table layout, but criticising its 'virtually unusable' print function and out-of-date documentation. See also AMosaic AWeb IBrowse OWB References External links Official Website Web browsers for AmigaOS 1996 software MorphOS software Discontinued web browsers
https://en.wikipedia.org/wiki/Ousterhout%27s%20dichotomy
Ousterhout's dichotomy is computer scientist John Ousterhout's categorization that high-level programming languages tend to fall into two groups, each with distinct properties and uses: system programming languages and scripting languages – compare programming in the large and programming in the small. This distinction underlies the design of his language Tcl. System programming languages (or applications languages) usually have the following properties: They are typed statically They support creating complex data structures Programs in them are compiled into machine code Programs in them are meant to operate largely independently of other programs System programming languages tend to be used for components and applications with large amounts of internal functionality such as operating systems, database servers, and Web browsers. These applications typically employ complex algorithms and data structures and require high performance. Prototypical examples of system programming languages include C, OCaml and Modula-2. By contrast, scripting languages (or glue languages) tend to have the following properties: They are typed dynamically They have little or no provision for complex data structures Programs in them (scripts) are interpreted Scripting languages tend to be used for applications where most of the functionality comes from other programs (often implemented in system programming languages); the scripts are used to glue together other programs or add additional layers of functionality on top of existing programs. Ousterhout claims that scripts tend to be short and are often written by less sophisticated programmers, so execution efficiency is less important than simplicity and ease of interaction with other programs. Common applications for scripting include Web page generation, report generation, graphical user interfaces, and system administration. Prototypical examples of scripting languages include Python, AppleScript, C shell, DOS batch files, and Tcl. 
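A minimal sketch of the scripting side of the dichotomy: a short, dynamically typed Python script that glues an existing program into a word-counting task. (The `tr` command is assumed to be available, as on Unix-like systems; an equivalent stand-alone system-language program would need type declarations, compilation, and its own string handling.)

```python
import subprocess

# A typical "glue" task in Ousterhout's sense: reuse an existing program
# rather than implement functionality from scratch. Here the Unix tool
# `tr` is composed with a few lines of dynamically typed script.
text = "banana apple banana cherry banana"

result = subprocess.run(
    ["tr", " ", "\n"],             # delegate to an existing program...
    input=text, capture_output=True, text=True, check=True,
)

words = result.stdout.split()
counts = {}                        # no declarations, no compilation step
for w in words:
    counts[w] = counts.get(w, 0) + 1
print(counts)  # {'banana': 3, 'apple': 1, 'cherry': 1}
```

The point is not that `tr` is needed here (pure Python could do it), but that the script's job is coordination: most of the "functionality" lives in programs the script merely connects.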
History The dichotomy was fully set out in Ousterhout's 1998 IEEE Computer article "Scripting: Higher-Level Programming for the 21st Century", though Ousterhout had drawn this distinction since at least the design of Tcl (1988), and had stated it publicly at various times. An early episode was "The Tcl War" of late September and October 1994, in which Richard Stallman posted an article critical of Tcl, entitled "Why you should not use Tcl", and Ousterhout replied with an articulation of his dichotomy. Criticism Critics believe that the dichotomy is highly arbitrary, and refer to it as Ousterhout's fallacy or Ousterhout's false dichotomy. While static-versus-dynamic typing, data structure complexity, and dependent versus stand-alone might be said to be unrelated features, the usual critique of Ousterhout's dichotomy is of its distinction of compiling versus interpreting. Neither semantics nor syntax depend significantly on whether a language implementation compiles into machine language, interprets, tokenizes, or byte-compiles at the start of each run, or any mix of these.
https://en.wikipedia.org/wiki/Screen%20name
Screen name may refer to: Stage name, pseudonyms used for film appearances User (computing), pseudonyms used for Internet communications and BBSs
https://en.wikipedia.org/wiki/High%20Performance%20File%20System
HPFS (High Performance File System) is a file system created specifically for the OS/2 operating system to improve upon the limitations of the FAT file system. It was written by Gordon Letwin and others at Microsoft and added to OS/2 version 1.2, at that time still a joint undertaking of Microsoft and IBM, and released in 1988. Overview Compared with FAT, HPFS provided a number of additional capabilities: Support for mixed case file names, in different code pages Support for long file names (255 characters as opposed to FAT's 8.3 naming scheme) More efficient use of disk space (files are not stored using multiple-sector clusters but on a per-sector basis) An internal architecture that keeps related items close to each other on the disk volume Less fragmentation of data Extent-based space allocation Separate datestamps for last modification, last access, and creation (as opposed to the last-modification-only datestamp in FAT implementations of the time) B+ tree structure for directories Root directory located at the midpoint, rather than at the beginning of the disk, for faster average access HPFS can also keep 64 KB of metadata ("extended attributes") per file. IBM offers two kinds of IFS drivers for this file system: the standard one, with a cache limited to 2 MB, and HPFS386, provided with certain server versions of OS/2 or as an added component for the server versions that did not come with it. HPFS386's cache is limited by the amount of available memory in OS/2's system memory arena and was implemented in 32-bit assembly language. HPFS386 is a ring 0 driver (allowing direct hardware access and direct interaction with the kernel) with built-in SMB networking properties that are usable by various server daemons, whereas HPFS is a ring 3 driver. Thus, HPFS386 is faster than HPFS and highly optimized for server applications. It is also highly tunable by experienced administrators. 
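The space-efficiency point can be made concrete with a little arithmetic: under cluster-based allocation a file consumes a whole number of clusters, so most of the last cluster may be wasted, while per-sector allocation wastes at most the tail of one sector. The numbers below are illustrative only (an 8 KB cluster is assumed); they are not HPFS's actual on-disk logic.

```python
import math

SECTOR = 512  # bytes; typical sector size of the era

def allocated(file_size: int, unit: int) -> int:
    """Bytes actually consumed when space is allocated in fixed-size units."""
    return math.ceil(file_size / unit) * unit

file_size = 10_000  # a 10,000-byte file

# FAT-style allocation in 8 KB clusters (16 sectors) vs per-sector allocation:
cluster_alloc = allocated(file_size, 16 * SECTOR)  # two 8 KB clusters
sector_alloc = allocated(file_size, SECTOR)        # twenty 512-byte sectors

print(cluster_alloc - file_size)  # 6384 bytes wasted with 8 KB clusters
print(sector_alloc - file_size)   # 240 bytes wasted per-sector
```

Averaged over many files, the wasted tail is about half the allocation unit per file, which is why shrinking the unit from a multi-sector cluster to a single sector improves disk utilisation.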
Though IBM still had rights to HPFS, its agreement with Microsoft to continue licensing the HPFS386 version was contingent upon the company paying Microsoft a licensing fee for each copy sold. This was a result of the Microsoft and IBM collaboration that gave both the right to use Windows and OS/2 technology. Due to the Microsoft dependence, the limited partition size, the file size limit of 2 GB, and the long disk-check times after a crash, IBM ported the journaling file system, JFS, to OS/2 as a substitute. DOS and Linux support HPFS via third-party drivers. Windows NT versions 3.51 and earlier had native support for HPFS. Native support under Windows Windows 95 and its successors Windows 98 and Windows Me can read and write HPFS only when mapped via a network share; they cannot read it from a local disk. These versions identify local HPFS partitions as NTFS, because NTFS and HPFS share the same filesystem identification number in the partition table. Windows NT 3.1 and 3.5 have native read/write support for local disks and can even be installed onto an HPFS partition. Windows NT 3.51 can also read and write from local HPFS formatted drives.
https://en.wikipedia.org/wiki/Windows%20API
The Windows API, informally WinAPI, is Microsoft's core set of application programming interfaces (APIs) available in the Microsoft Windows operating systems. The name Windows API collectively refers to several different platform implementations that are often referred to by their own names (for example, the Win32 API). Almost all Windows programs interact with the Windows API. On the Windows NT line of operating systems, a small number (such as programs started early in the Windows startup process) use the Native API. Developer support is available in the form of a software development kit, Microsoft Windows SDK, providing documentation and tools needed to build software based on the Windows API and associated Windows interfaces. The Windows API (Win32) is focused mainly on the programming language C in that its exposed functions and data structures are described in that language in recent versions of its documentation. However, the API may be used by any programming language compiler or assembler able to handle the (well-defined) low-level data structures along with the prescribed calling conventions for calls and callbacks. Similarly, the internal implementation of the API's functions has historically been developed in several languages. Although C is not an object-oriented programming language, the Windows API and Windows have both historically been described as object-oriented. There have also been many wrapper classes and extensions (from Microsoft and others) for object-oriented languages that make this object-oriented structure more explicit (Microsoft Foundation Class Library (MFC), Visual Component Library (VCL), GDI+, etc.). For instance, Windows 8 provides the Windows API and the WinRT API, which is implemented in C++ and is object-oriented by design. Overview The functions provided by the Windows API can be grouped into eight categories: Base Services: Provide access to the basic resources available to a Windows system. 
Included are things like file systems, devices, processes, threads, and error handling. These functions reside in the kernel.exe, krnl286.exe or krnl386.exe files on 16-bit Windows, and kernel32.dll and KernelBase.dll on 32-bit and 64-bit Windows. These files reside in the folder \Windows\System32 on all versions of Windows. Advanced Services: Provide access to functions beyond the kernel. Included are things like the Windows registry, shutdown/restart of the system (or abort), start/stop/create a Windows service, and management of user accounts. These functions reside in advapi32.dll and advapires32.dll on 32-bit Windows. Graphics Device Interface: Provides functions to output graphics content to monitors, printers, and other output devices. It resides in gdi.exe on 16-bit Windows, and gdi32.dll on 32-bit Windows in user-mode. Kernel-mode GDI support is provided by win32k.sys, which communicates directly with the graphics driver. User Interface: Provides the functions to create and manage screen windows and most basic controls.
https://en.wikipedia.org/wiki/Thinking%20Machines%20Corporation
Thinking Machines Corporation was a supercomputer manufacturer and artificial intelligence (AI) company, founded in Waltham, Massachusetts, in 1983 by Sheryl Handler and W. Daniel "Danny" Hillis to turn Hillis's doctoral work at the Massachusetts Institute of Technology (MIT) on massively parallel computing architectures into a commercial product named the Connection Machine. The company moved in 1984 from Waltham to Kendall Square in Cambridge, Massachusetts, close to the MIT AI Lab. Thinking Machines made some of the most powerful supercomputers of the time, and by 1993 the four fastest computers in the world were Connection Machines. The firm filed for bankruptcy in 1994; its hardware and parallel computing software divisions were eventually acquired by Sun Microsystems. Supercomputer products On the hardware side, Thinking Machines produced several Connection Machine models (in chronological order): the CM-1, CM-2, CM-200, CM-5, and CM-5E. The CM-1 and CM-2 came first in models with 64K (65,536) bit-serial processors (16 processors per chip) and later in the smaller 16K and 4K configurations. The Connection Machine was programmed in a variety of specialized programming languages, including *Lisp and CM Lisp (derived from Common Lisp), C* (derived by Thinking Machines from C), and CM Fortran. These languages used proprietary compilers to translate code into the parallel instruction set of the Connection Machine. The CM-1 through CM-200 were examples of single instruction, multiple data (SIMD) architecture, while the later CM-5 and CM-5E were multiple instruction, multiple data (MIMD) machines that combined commodity SPARC processors and proprietary vector processors in a fat tree computer network. All Connection Machine models required a serial front-end processor, which was most often a Sun Microsystems workstation, but on early models could also be a Digital Equipment Corporation (DEC) VAX minicomputer or a Symbolics Lisp machine. 
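The SIMD/MIMD distinction mentioned above can be sketched in a few lines; this is a conceptual illustration only, not a simulation of any Connection Machine:

```python
# Conceptual sketch of the two parallel architectures: SIMD broadcasts ONE
# instruction stream to every processing element, each holding its own data
# element; MIMD lets each processor run its OWN instruction stream.

data = [1, 2, 3, 4]

# SIMD (CM-1..CM-200 style): the same operation applied everywhere.
simd_result = [x * 2 for x in data]

# MIMD (CM-5 style): independent per-processor programs on per-processor data.
programs = [lambda x: x * 2, lambda x: x + 10, lambda x: x ** 2, lambda x: -x]
mimd_result = [f(x) for f, x in zip(programs, data)]

print(simd_result)  # [2, 4, 6, 8]
print(mimd_result)  # [2, 12, 9, -4]
```

The list comprehension stands in for thousands of physical processing elements; the essential contrast is whether the operation varies per element (MIMD) or is uniform across all of them (SIMD).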
Thinking Machines also introduced an early commercial redundant array of independent disks (RAID) 2 disk array, the DataVault, circa 1988. Business history In May 1985, Thinking Machines became the third company to register a .com domain name (think.com). It became profitable in 1989, in part because of its contracts from the Defense Advanced Research Projects Agency (DARPA). The next year, they sold $65 million (USD) worth of hardware and software, making them the market leader in parallel supercomputers. Thinking Machines' primary supercomputer competitor was Cray Research. Other parallel computing competitors included nCUBE, nearby Kendall Square Research, and MasPar, which made a computer similar to the CM-2, and Meiko Scientific, whose CS-2 was similar to the CM-5. In 1991, DARPA and the United States Department of Energy reduced their purchases amid criticism that they were unfairly favoring Thinking Machines at the expense of Cray, nCUBE, and MasPar. Tightening export laws also prevented the most powerful Connection Machines from being exported.
https://en.wikipedia.org/wiki/First-generation%20programming%20language
A first-generation programming language (1GL) is a machine-level programming language, belonging to the low-level programming languages: the grouping of machine-level languages used to program first-generation computers. Originally, no translator was used to compile or assemble a first-generation language; the instructions were entered through the front panel switches of the computer system. The instructions in a 1GL are made of binary numbers, represented by 1s and 0s. This makes the language directly understandable by the machine but far more difficult for human programmers to read and learn. The main advantage of programming in a 1GL is that the code can run very fast and very efficiently, precisely because the instructions are executed directly by the central processing unit (CPU). One of the main disadvantages of programming in a low-level language is that when an error occurs, the code is not as easy to fix. First-generation languages are very much adapted to a specific computer and CPU, and code portability is therefore significantly reduced in comparison to higher-level languages. Modern-day programmers still occasionally use machine-level code, especially when programming lower-level functions of the system, such as drivers and interfaces with firmware and hardware devices. Modern tools such as native-code compilers are used to produce machine-level code from a higher-level language. References Programming language classification
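What "instructions made of 1s and 0s" means can be illustrated with a toy accumulator machine. The 4-bit opcode format and the opcodes below are invented for illustration and correspond to no real first-generation instruction set:

```python
# Toy single-accumulator machine with an INVENTED 4-bit opcode / 4-bit
# operand instruction format -- illustrating programming directly in binary,
# not any real first-generation machine.

LOAD, ADD, HALT = 0b0001, 0b0010, 0b1111

program = [
    0b0001_0101,  # LOAD 5   -> acc = 5
    0b0010_0011,  # ADD 3    -> acc = acc + 3
    0b1111_0000,  # HALT
]

def run(words):
    """Execute a list of 8-bit instruction words; return the accumulator."""
    acc = 0
    for word in words:
        opcode, operand = word >> 4, word & 0b1111
        if opcode == LOAD:
            acc = operand
        elif opcode == ADD:
            acc += operand
        elif opcode == HALT:
            break
    return acc

print(run(program))  # 8
```

Entering `0b0001_0101` bit by bit on front panel switches is exactly the workflow the text describes; the lack of mnemonics is what a second-generation (assembly) language later fixed.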
https://en.wikipedia.org/wiki/Proportionality%20%28mathematics%29
In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio. The ratio is called the coefficient of proportionality (or proportionality constant) and its reciprocal is known as the constant of normalization (or normalizing constant). Two sequences are inversely proportional if corresponding elements have a constant product, also called the coefficient of proportionality. This definition is commonly extended to related varying quantities, which are often called variables. This meaning of variable is not the common meaning of the term in mathematics (see variable (mathematics)); these two different concepts share the same name for historical reasons. Two functions f(x) and g(x) are proportional if their ratio f(x)/g(x) is a constant function. If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios, e.g., a/b = x/y, is called a proportion (for details see Ratio). Proportionality is closely related to linearity. Direct proportionality Given an independent variable x and a dependent variable y, y is directly proportional to x if there is a non-zero constant k such that y = kx. The relation is often denoted using the symbols "∝" (not to be confused with the Greek letter alpha) or "~": y ∝ x (or y ~ x). For x ≠ 0 the proportionality constant can be expressed as the ratio k = y/x. It is also called the constant of variation or constant of proportionality. A direct proportionality can also be viewed as a linear equation in two variables with a y-intercept of 0 and a slope of k. This corresponds to linear growth. Examples If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality. The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π. 
On a map of a sufficiently small geographical area, drawn to scale distances, the distance between any two points on the map is directly proportional to the beeline distance between the two locations represented by those points; the constant of proportionality is the scale of the map. The gravitational force acting on a small object, exerted by a nearby large extended mass, is directly proportional to the object's mass; the constant of proportionality between the force and the mass is known as the gravitational acceleration. The net force acting on an object is proportional to the acceleration of that object with respect to an inertial frame of reference. The constant of proportionality in this, Newton's second law, is the classical mass of the object. Inverse proportionality The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be "inversely proportional" to each other. If all other variables are held constant, the magnitude or absolute value of one inversely proportional variable decreases if the other variable increases, while their product (the constant of proportionality k) is always the same.
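The defining properties of the two relationships (a constant ratio for direct proportionality, a constant product for inverse proportionality) can be checked numerically. The data below are made up for illustration:

```python
# Direct proportionality:  y = k*x  <=>  the ratio y/x is constant.
# Inverse proportionality: y = k/x  <=>  the product x*y is constant.

x = [1.0, 2.0, 4.0, 5.0]
distance = [3.0, 6.0, 12.0, 15.0]   # distance at constant speed, k = 3
pressure = [12.0, 6.0, 3.0, 2.4]    # Boyle's-law-style data, k = 12

ratios = [d / xi for xi, d in zip(x, distance)]
products = [xi * p for xi, p in zip(x, pressure)]

# Constant ratio k = 3 (direct), constant product k = 12 (inverse):
assert all(abs(r - 3.0) < 1e-9 for r in ratios)
assert all(abs(pr - 12.0) < 1e-9 for pr in products)
print(ratios[0], products[0])  # 3.0 12.0
```

The same test distinguishes the two cases in experimental data: if y/x is (approximately) constant the relation is direct; if x*y is constant it is inverse.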
https://en.wikipedia.org/wiki/Blender%20%28software%29
Blender is a free and open-source 3D computer graphics software tool set used for creating animated films, visual effects, art, 3D-printed models, motion graphics, interactive 3D applications, virtual reality, and, formerly, video games. Blender's features include 3D modelling, UV mapping, texturing, digital drawing, raster graphics editing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body simulation, sculpting, animation, match moving, rendering, motion graphics, video editing, and compositing. History Blender was initially developed as an in-house application by the Dutch animation studio NeoGeo, and was officially launched on January 2, 1994. Version 1.00 was released in January 1995, with the primary author being company co-owner and software developer Ton Roosendaal. The name Blender was inspired by a song by the Swiss electronic band Yello, from the album Baby, which NeoGeo used in its showreel. Some design choices and experiences for Blender were carried over from an earlier software application, called Traces, that Roosendaal developed for NeoGeo on the Commodore Amiga platform during the 1987–1991 period. On January 1, 1998, Blender was released publicly online as SGI freeware. NeoGeo was later dissolved, and its client contracts were taken over by another company. After NeoGeo's dissolution, Ton Roosendaal founded Not a Number Technologies (NaN) in June 1998 to further develop Blender, initially distributing it as shareware until NaN went bankrupt in 2002. This also resulted in the discontinuation of Blender's development. In May 2002, Roosendaal started the non-profit Blender Foundation, with the first goal to find a way to continue developing and promoting Blender as a community-based open-source project. On July 18, 2002, Roosendaal started the "Free Blender" campaign, a crowdfunding precursor. 
The campaign aimed at open-sourcing Blender for a one-time payment of €100,000 (US$100,670 at the time), with the money being collected from the community. On September 7, 2002, it was announced that they had collected enough funds and would release the Blender source code. Today, Blender is free and open-source software, largely developed by its community as well as 26 full-time employees and 12 freelancers employed by the Blender Institute. The Blender Foundation initially reserved the right to use dual licensing so that, in addition to GPL 2.0-or-later, Blender would have been available also under the "Blender License", which did not require disclosing source code but required payments to the Blender Foundation. However, this option was never exercised and was suspended indefinitely in 2005. Blender is solely available under "GNU GPLv2 or any later" and was not updated to the GPLv3, as "no evident benefits" were seen. In 2019, with the release of version 2.80, the integrated game engine for making and prototyping video games was removed; Blender's developers recommended that users migrate to more powerful open-source game engines instead.