https://en.wikipedia.org/wiki/List%20of%20statisticians | This list of statisticians includes people who have made notable contributions to the theory or application of statistics, or to the related fields of probability or machine learning. Also included are actuaries and demographers.
A
Aalen, Odd Olai (1947–1987)
Abbey, Helen (1915–2001)
Abbott, Edith (1876–1957)
Abelson, Robert P. (1928–2005)
Abramovitz, Moses (1912–2000)
Achenwall, Gottfried (1719–1772)
Adelstein, Abraham Manie (1916–1992)
Adkins, Dorothy (1912–1975)
Ahsan, Riaz (1951–2008)
Ahtime, Laura
Aitchison, Beatrice (1908–1997)
Aitchison, John (1926–2016)
Aitken, Alexander (1895–1967)
Akaike, Hirotsugu (1927–2009)
Aliaga, Martha (1937–2011)
Allan, Betty (1905–1952)
Allen, R. G. D. (1906–1983)
Allison, David B.
Altman, Doug (1948–2018)
Altman, Naomi
Amemiya, Takeshi (1938–)
Anderson, Oskar (1887–1960)
Anderson, Theodore Wilbur
Anderson-Cook, Christine (1966–)
de Andrade, Mariza
Anscombe, Francis (1918–2001)
Anselin, Luc
Antonovska, Svetlana (1952–2016)
Armitage, Peter (1924–)
Armstrong, Margaret
Arrow, Kenneth
Ash, Arlene
Ashby, Deborah (1959–)
Asher, Jana
Ashley-Cooper, Anthony
Austin, Oscar Phelps
Ayres, Leonard Porter
B
Backer, Julie E. (1890–1977)
Bahadur, Raghu Raj (1924–1997)
Bahn, Anita K. (1920–1980)
Bailar, Barbara A.
Bailey, Rosemary A. (1947–)
Bailey-Wilson, Joan (1953–)
Baker, Rose
Balding, David
Bandeen-Roche, Karen
Barber, Rina Foygel
Barnard, George Alfred (1915–2002)
Barnard, Mildred (1908–2000)
Barnett, William A.
Bartels, Julius
Bartlett, M. S. (1910–2002)
Bascand, Geoff
Basford, Kaye
Basu, Debabrata (1924–2001)
Bates, Nancy
Batcher, Mary
Baxter, Laurence (1954–1996)
Bayarri, M. J. (1956–2014)
Bayes, Thomas (1702–1761)
Beale, Calvin
Becker, Betsy
Bediako, Grace
Behm, Ernst
Benjamin, Bernard
Benzécri, Jean-Paul (1932–2019)
Berger, James
Berkson, Joseph (1899–1982)
Bernardo, José-Miguel
Berry, Don
Best, Alfred M. (1876–1958)
Best, Nicky
Betensky, Rebecca
Beveridge, William
Bhat, B. R.
Bhat, P. N. Mari
Bhat, U. Narayan
Bienaymé, Irénée-Jules
Bienias, Julia
Billard, Lynne (1943–)
Bingham, Christopher
Bird, Sheila (1952–)
Birnbaum, Allan (1923–1976)
Bishop, Yvonne (–2015)
Bisika, Thomas John
Bixby, Lenore E. (1914–1994)
Blackwell, David (1919–2010)
Blankenship, Erin
Bliss, Chester Ittner (1899–1979)
Block, Maurice
Bloom, David E.
Blumberg, Carol Joyce
Bock, Mary Ellen
Boente, Graciela
Bodio, Luigi
Bodmer, Walter
Bonferroni, Carlo Emilio (1892–1960)
Booth, Charles
Boreham, John
Borror, Connie M. (1966–2016)
Bortkiewicz, Ladislaus (1868–1931)
Bose, R. C. (1901–1987)
Botha, Roelof
Bottou, Léon
Bowley, Arthur Lyon (1869–1957)
Bowman, Kimiko O. (1927–2019)
Box, George E. P. (1919–2010)
Boyle, Phelim
Brad, Ion Ionescu de la (1818–1891)
Brady, Dorothy (1903–1977)
Brassey, Thomas
Braverman, Amy
Breiman, Leo
Breslow, Norman (1941–2015)
Brogan, Donna (1939–)
Brooks, Steve
Brown, Jennifer
Brown, Lawrence D. (1940–2018)
Broze, Laurence (1960–)
Buck, Caitlin E. |
https://en.wikipedia.org/wiki/Alfred%20Aho | Alfred Vaino Aho (born August 9, 1941) is a Canadian computer scientist best known for his work on programming languages, compilers, and related algorithms, and his textbooks on the art and science of computer programming.
Aho was elected into the National Academy of Engineering in 1999 for his contributions to the fields of algorithms and programming tools.
He and his long-time collaborator Jeffrey Ullman are the recipients of the 2020 Turing Award, generally recognized as the highest distinction in computer science.
Career
Aho received a B.A.Sc. (1963) in Engineering Physics from the University of Toronto, then an M.A. (1965) and Ph.D. (1967) in Electrical Engineering/Computer Science from Princeton University. He conducted research at Bell Labs from 1967 to 1991, and again from 1997 to 2002 as Vice President of the Computing Sciences Research Center. Since 1995, he has held the Lawrence Gussman Professorship in Computer Science at Columbia University. He served as chair of the department from 1995 to 1997, and again in the spring of 2003.
In his PhD thesis, Aho created indexed grammars and the nested-stack automaton as vehicles for extending the power of context-free languages while retaining many of their decidability and closure properties. One application of indexed grammars is modelling parallel rewriting systems, particularly in biological applications.
After graduating from Princeton, Aho joined the Computing Sciences Research Center at Bell Labs where he devised efficient regular expression and string-pattern matching algorithms that he implemented in the first versions of the Unix tools egrep and fgrep. The fgrep algorithm has become known as the Aho–Corasick algorithm; it is used by several bibliographic search-systems, including the one developed by Margaret J. Corasick, and by other string-searching applications.
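The core of the Aho–Corasick approach is a keyword trie augmented with failure links, so that a single pass over the text finds every occurrence of every pattern. The following is an illustrative Python sketch with invented helper names, not the original fgrep code:

```python
from collections import deque

def build_automaton(patterns):
    """Build the keyword trie plus failure links (an Aho-Corasick automaton)."""
    trie = [{}]        # node -> {char: child node}
    out = [set()]      # node -> patterns that end at this node
    fail = [0]         # node -> failure link (longest proper suffix in the trie)
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in trie[node]:
                trie.append({}); out.append(set()); fail.append(0)
                trie[node][ch] = len(trie) - 1
            node = trie[node][ch]
        out[node].add(pat)
    # Breadth-first pass: children derive failure targets from their parents.
    queue = deque(trie[0].values())
    while queue:
        node = queue.popleft()
        for ch, child in trie[node].items():
            queue.append(child)
            f = fail[node]
            while f and ch not in trie[f]:
                f = fail[f]
            fail[child] = trie[f][ch] if ch in trie[f] and trie[f][ch] != child else 0
            out[child] |= out[fail[child]]   # matches ending at the suffix, too
    return trie, out, fail

def search(text, patterns):
    """Yield (end_index, pattern) for every occurrence, in one pass over text."""
    trie, out, fail = build_automaton(patterns)
    node = 0
    for i, ch in enumerate(text):
        while node and ch not in trie[node]:
            node = fail[node]            # fall back on mismatch
        node = trie[node].get(ch, 0)
        for pat in out[node]:
            yield i, pat
```

On the classic example, `set(search("ushers", ["he", "she", "his", "hers"]))` yields `{(3, "she"), (3, "he"), (5, "hers")}`: every pattern occurrence is reported in one scan, regardless of how many patterns overlap.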
At Bell Labs, Aho worked closely with Steve Johnson and Jeffrey Ullman to develop efficient algorithms for analyzing and translating programming languages. Steve Johnson used the bottom-up LALR parsing algorithms to create the syntax-analyzer generator yacc, and Michael E. Lesk and Eric Schmidt used Aho's regular-expression pattern-matching algorithms to create the lexical-analyzer generator lex. The lex and yacc tools and their derivatives have been used to develop the front ends of many of today's programming language compilers.
Aho and Ullman wrote a series of textbooks on compiling techniques that codified the theory relevant to compiler design. Their 1977 textbook Principles of Compiler Design had a green dragon on the front cover and became known as "the green dragon book". In 1986 Aho and Ullman were joined by Ravi Sethi to create a new edition, "the red dragon book" (which was briefly shown in the 1995 movie Hackers), and in 2006 also by Monica Lam to create "the purple dragon book". The dragon books are used for university courses as well as industry references.
In 1974, Aho, John Hopcroft, and Ullman wrot |
https://en.wikipedia.org/wiki/Artifact | Artifact (American English) or artefact (British English) may refer to:
Science and technology
Artifact (error), misleading or confusing alteration in data or observation, commonly in experimental science, resulting from flaws in technique or equipment
Compression artifact, a loss of clarity caused by the data compression of an image, audio, or video
Digital artifact, any undesired alteration in data introduced during its digital processing
Visual artifact, anomalies during visual representation of digital graphics and imagery
In the scrum software project management framework, documentation used for managing the project
Archaeology
Artifact (archaeology), an object formed by humans, particularly one of interest to archaeologists
Cultural artifact, in the social sciences, anything created by humans which gives information about the culture of its creator and users
The Artefact (journal), published annually by the Archaeological and Anthropological Society of Victoria
Computing
Artifact (software development), one of many kinds of tangible by-products produced during the development of software
Artifact (enterprise architecture), a separate component of enterprise architecture
Virtual artifact, an object in a digital environment
Artifact (UML), a term in the Unified Modeling Language
Artifact (app), a news recommendation app for iOS and Android
Arts and media
Film and television
Artifact (film), a 2012 documentary film directed by Jared Leto under the pseudonym of Bartholomew Cubbins
Artifacts (film), a 2007 horror film
The Artifact (Eureka), a fictional object appearing in the TV series Eureka
Games
Artifact (video game), a 2018 digital collectible card game by Valve
Artifact (Magic: The Gathering), a card type in the trading card game Magic: The Gathering
Music
Artifacts (Steve Roach album), 1994
Artifacts (Nicole Mitchell album), 2015
Artifacts (group), a hip-hop duo from New Jersey
Artifact (album), a 2002 album by The Electric Prunes
Artiifact, a 2016 album by South African hip hop record producer Anatii
Artifact, a 2019 album by Swedish electronic musician Waveshaper
Artifacts (Beirut album), 2022
"Artefact", a song by Phoenix from Alpha Zulu, 2022
Other media
Artifact, a 1985 science fiction novel by Gregory Benford
Artifact (ballet), 1984 ballet by William Forsythe
Other uses
Learning artifact (education), an object created by students during the course of instruction
Artifacting, a technique used on some older computers to generate color in monochrome modes by exploiting artifacts of analog television systems
A relic, an object left behind by a prophet or other important religious figure
See also
Object (disambiguation)
Artifakt, a compilation album by Better Than Ezra
Artifakts (bc), a 1998 album by Plastikman
Different spellings and connotations for artefact or artifact
Magic item, in fantasy, any object that has magical powers so powerful that it cannot be duplicated or destroyed by |
https://en.wikipedia.org/wiki/Subnormal%20number | This page describes subnormal numbers in general. For the details of IEEE subnormal and denormal numbers, see the IEEE section below.
In computer science, subnormal numbers are the subset of denormalized numbers (sometimes called denormals) that fill the underflow gap around zero in floating-point arithmetic. Any non-zero number with magnitude smaller than the smallest positive normal number is subnormal, while denormal can also refer to numbers outside that range.
Terminology
In some older documents (especially standards documents such as the initial releases of IEEE 754 and the C language), "denormal" is used to refer exclusively to subnormal numbers. This usage persists in various standards documents, especially when discussing hardware that is incapable of representing any other denormalized numbers, but the discussion here uses the term "subnormal" in line with the 2008 revision of IEEE 754. In casual discussions the terms subnormal and denormal are often used interchangeably, in part because there are no denormalized IEEE binary numbers outside the subnormal range.
The term "number" is used rather loosely, to describe a particular sequence of digits, rather than a mathematical abstraction; see Floating Point for details of how real numbers relate to floating point representations. "Representation" rather than "number" may be used when clarity is required.
Definition
Mathematical real numbers may be approximated by multiple floating point representations. One representation is defined as normal, and others are defined as subnormal, denormal, or unnormal by their relationship to normal.
In a normal floating-point value, there are no leading zeros in the significand (mantissa); rather, leading zeros are removed by adjusting the exponent (for example, the number 0.0123 would be written as 1.23 × 10⁻²). Conversely, a denormalized floating-point value has a significand with a leading digit of zero. Of these, the subnormal numbers represent values which, if normalized, would have exponents below the smallest representable exponent (the exponent having a limited range).
The significand (or mantissa) of an IEEE floating-point number is the part of a floating-point number that represents the significant digits. For a positive normalised number it can be represented as m0.m1m2m3...mp−2mp−1 (where m represents a significant digit, and p is the precision) with non-zero m0. Notice that for a binary radix, the leading binary digit is always 1. In a subnormal number, since the exponent is the least that it can be, zero is the leading significant digit (0.m1m2m3...mp−2mp−1), allowing the representation of numbers closer to zero than the smallest normal number. A floating-point number may be recognized as subnormal whenever its exponent is the least value possible.
By filling the underflow gap like this, significant digits are lost, but not as abruptly as when using the flush to zero on underflow approach (discarding all significant digits when underflow is reac |
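The gradual underflow described above can be observed directly in IEEE 754 double precision; the following Python sketch (standard library only) shows that halving the smallest normal number yields a subnormal rather than zero:

```python
import sys

smallest_normal = sys.float_info.min    # 2**-1022, smallest positive normal double
smallest_subnormal = 2.0 ** -1074       # smallest positive subnormal double

# Halving the smallest normal number underflows gradually: the result is a
# subnormal (leading zero in the significand), not zero.
x = smallest_normal / 2
assert 0.0 < x < smallest_normal

# Subnormals extend down to 2**-1074; halving that finally rounds to zero.
assert smallest_subnormal / 2 == 0.0

# In the subnormal range the exponent is pinned at its minimum, so the spacing
# between neighbouring representable values no longer shrinks with magnitude.
print(smallest_normal)     # 2.2250738585072014e-308
print(smallest_subnormal)  # 5e-324
```

Under a flush-to-zero policy, the first division above would return 0.0 immediately, discarding all significant digits at once.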
https://en.wikipedia.org/wiki/Homebrew%20Computer%20Club | The Homebrew Computer Club was an early computer hobbyist group in Menlo Park, California, which met from March 1975 to December 1986. The club had an influential role in the development of the microcomputer revolution and the rise of that aspect of the Silicon Valley information technology industrial complex.
Several high-profile hackers and computer entrepreneurs emerged from its ranks, including Steve Jobs and Steve Wozniak, the founders of Apple Computer. With its newsletter and monthly meetings promoting an open exchange of ideas, the club has been described as "the crucible for an entire industry" as it pertains to personal computing.
History
The Homebrew Computer Club was an informal group of electronic enthusiasts and technically minded hobbyists who gathered to trade parts, circuits, and information pertaining to DIY construction of personal computing devices. It was started by Gordon French and Fred Moore who met at the Community Computer Center in Menlo Park. They both were interested in maintaining a regular, open forum for people to get together to work on making computers more accessible to everyone.
The first meeting of the club was held on March 5, 1975, in French's garage in Menlo Park, San Mateo County, California, on the occasion of the arrival in the area of the first Micro Instrumentation and Telemetry Systems (MITS) Altair 8800 microcomputer, a unit sent for review by People's Computer Company. Steve Wozniak credits that first meeting as the inspiration to design the Apple I. The next few meetings were held at a large home in Atherton, California, which had been used as a preschool. Subsequent meetings were held at an auditorium at the Stanford Linear Accelerator Center (SLAC), until 1978, when meetings moved to the Stanford Medical School.
An anecdote from member Thomas "Todd" Fischer relates that after the more-or-less "formal" meetings the participants often reconvened for an informal, late night "swap meet" in the parking lot of the Safeway store down the road, as SLAC campus rules prohibited such activity on campus property. Others, at the suggestion of Roger Melen, convened at The Oasis, a bar and grill they considered a pub located on El Camino Real in nearby Menlo Park, recalled years later by a member as "Homebrew's other staging area". Steven Levy also wrote about the Oasis gatherings.
The Oasis closed on March 7, 2018, due to unaffordable rent. Its Menlo Park building is a historical landmark; in 2019 the building became home to a venture capital firm, Pear Ventures.
The 1999 made-for-television movie Pirates of Silicon Valley (and the book on which it is based, Fire in the Valley: The Making of the Personal Computer) describes the role the Homebrew Computer Club played in creating the first personal computers, although the movie took the liberty of placing the meeting in Berkeley and misrepresented the meeting process.
Many of the original members of the Homebrew Computer Club continue to meet, having |
https://en.wikipedia.org/wiki/Keighley%20%26%20Worth%20Valley%20Railway | The Keighley & Worth Valley Railway (KWVR) is a heritage railway in the Worth Valley, West Yorkshire, England, which runs from Keighley to Oxenhope. It connects to the National Rail network at Keighley railway station.
History
Inception and building of the branch
In 1861, John McLandsborough, a civil engineer, visited Haworth to pay tribute to Charlotte Brontë but was surprised to find that it was not served by a railway. He proposed a branch running from the Midland Railway's station at Keighley to Oxenhope. The line would serve three small towns and 15 mills along its length.
A meeting of local gentlemen was told that the line would cost £36,000 to build. A total of 3,134 shares worth £10 each were issued at this meeting, along with the election of directors, bankers, solicitors and engineers. J. McLandsborough, the original proposer of the line (who dealt predominantly with water and sewerage engineering, but had experience of building the Otley and Ilkley Railway), was appointed acting engineer, whilst J. S. Crossley of the Midland Railway was appointed consultant engineer.
The railway was incorporated by an Act of Parliament in 1862 and the first sod was cut on Shrove Tuesday, 9 February 1864 by Isaac Holden, the chairman of the Keighley and Worth Valley Railway.
The railway was built as single track but with a trackbed wide enough to allow upgrading to double track for expansion. Although the work was estimated to take approximately one year, delays (including buying land for the line, a cow eating the plans near Oakworth, and engineering problems) meant the work took nearly two years to complete. In particular, the southern tunnel to Ingrow West had quicksand oozing through bore holes, requiring additional piles to be driven down to the bedrock to support and stabilise the tunnel. The work also damaged the foundations of the Wesley Place Methodist Church, resulting in the church receiving £1,980 from the railway company.
Tracklaying was completed in 1866, having started at each end and joined in the middle. The line was tested with a locomotive from Ilkley, which took nearly two hours to get from Keighley to Oxenhope, but just 13 minutes to get back. Violent storms struck the line in November of that year, before it opened.
The opening ceremony was held on Saturday 13 April 1867. Unfortunately, the train got stuck on Keighley bank and again between Oakworth and Haworth, necessitating splitting it before carrying on with the journey. Finally, on 15 April 1867, public passenger services on the Worth Valley commenced.
Operation
The line was operated by the Midland Railway, who owned most of the rail network in the area, and was eventually bought by the Midland in part due to interest from the rival railway company, the Great Northern. Upon sale of the railway, the mill owners made a profit, which was unusual for many lines of that type, as (for strategic reasons) the Midland wanted to prevent the GN from taking over its territory. Afte |
https://en.wikipedia.org/wiki/Top%20Cat | Top Cat is an American animated sitcom produced by Hanna-Barbera Productions and originally broadcast in prime time on the ABC network. It aired in a weekly evening time slot from September 27, 1961, to April 18, 1962, for a single season of 30 episodes. The show was a ratings failure in prime time, but became successful upon its time on Saturday morning television. The show also became very popular in Latin American countries (especially Mexico), and the United Kingdom.
History
Top Cat was created as a parody of The Phil Silvers Show with Arnold Stang imitating Sgt Bilko's voice for the titular character. Hanna-Barbera sold the cartoon to ABC based on a drawing of the main character. This was only the second cartoon series to premiere on prime time network television in the United States.
Premise
The title character, Top Cat (T.C.; voiced by Arnold Stang impersonating Phil Silvers) is the leader of a gang of Manhattan alley cats living in Hoagy's Alley: Fancy-Fancy, Spook, Benny the Ball, Brain, and Choo-Choo.
Top Cat and his gang were inspired by the East Side Kids, roguish, street-smart characters from a series of 1940s B movies, but their more immediate roots lay in The Phil Silvers Show (1955–59), a successful military comedy whose lead character (Sergeant Bilko, played by Silvers) was a fast-talking con artist. Maurice Gosfield, who played Private Duane Doberman in The Phil Silvers Show, provided the voice for Benny the Ball in Top Cat, and Benny's chubby appearance was based on Gosfield's. Additionally, Arnold Stang's vocal characterization was originally based on an impression of Phil Silvers's voice. During the original network run, the sponsor objected to the Silvers impersonation—insisting that he was playing Arnold Stang, not Phil Silvers—so in later episodes Stang modified the Top Cat voice, to a closer tone of his own voice.
The gang constantly hatches get-rich-quick schemes, most of which backfire, and a frequent plot thread revolves around the local police officer, Charles "Charlie" Dibble (voiced by Allen Jenkins), ineffectually trying to arrest them, evict them from the alley, get them to clean the alley, or stop them from using the police-box phone.
Like The Flintstones, every episode features a cold open, a short scene from the episode shown in medias res. A long flashback then begins with the series' theme song, "The Most Effectual Top Cat", and follows Top Cat's misadventures leading up to the opening scene, after which the story continues from where it left off. In some episodes, the flashback pauses near the middle when the same scene plays again.
Broadcast
Top Cat aired on Wednesday nights in prime time from 8:30 to 9:00 pm. Hanna-Barbera created 30 half-hour episodes of Top Cat. The show was broadcast in black and white but was created in color. The show aired on Saturdays in 1962 and 1963 on ABC, and was then rerun in variou |
https://en.wikipedia.org/wiki/A.I.%20Artificial%20Intelligence | A.I. Artificial Intelligence (or simply A.I.) is a 2001 American science fiction film directed by Steven Spielberg. The screenplay by Spielberg and screen story by Ian Watson were based on the 1969 short story "Supertoys Last All Summer Long" by Brian Aldiss. Set in a futuristic society, the film stars Haley Joel Osment as David, a childlike android uniquely programmed with the ability to love. Jude Law, Frances O'Connor, Brendan Gleeson and William Hurt star in supporting roles.
Development of A.I. originally began after producer/director Stanley Kubrick acquired the rights to Aldiss' story in the early 1970s. Kubrick hired a series of writers, including Brian Aldiss, Bob Shaw, Ian Watson and Sara Maitland, until the mid-1990s. The film languished in development hell for years, partly because Kubrick felt that computer-generated imagery was not advanced enough to create the David character, whom he believed no child actor would convincingly portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick died in 1999. Spielberg remained close to Watson's treatment for the screenplay, and dedicated the film to Kubrick.
A.I. Artificial Intelligence was released on June 29, 2001 by Warner Bros. Pictures in North America. It received generally positive reviews from critics and grossed $235.9 million against a budget of $90–100 million. It was also nominated for Best Visual Effects and Best Original Score (for John Williams) at the 74th Academy Awards. In a 2016 BBC poll of 177 critics around the world, A.I. Artificial Intelligence was voted the eighty-third greatest film since 2000. It has since been called one of Spielberg's best works and one of the greatest films of the 2000s, the 21st century, and of all time.
Plot
In the 22nd century, rising sea levels from global warming have wiped out 99% of existing cities, reducing the world's population. Mecha humanoid robots, seemingly capable of complex thought but lacking emotions, have been created as replacements. In Madison, New Jersey, David, a prototype Mecha child capable of experiencing love, is given to Henry Swinton and his wife Monica, whose son Martin contracted a rare disease and has been placed in suspended animation. Initially uncomfortable with David, Monica eventually warms to him and activates his imprinting protocol. Seeking to have her love him in return, he also befriends Teddy, Martin's robotic teddy bear. After Martin is unexpectedly cured of his disease and brought home, he jealously goads David into cutting off a piece of Monica's hair. That night, David enters his adoptive parents' room and cuts a lock of hair from Monica's head using a pair of scissors. Monica turns over and is poked in the eye by the scissors. As Henry helps her treat her eye, Teddy picks up the lock of hair from the floor and places it in his pocket. At a pool party, one of Martin's friends pokes David with a knife, triggering his self-protection programming. David grab |
https://en.wikipedia.org/wiki/Network%20mapping | Network mapping is the study of the physical connectivity of networks e.g. the Internet. Network mapping discovers the devices on the network and their connectivity. It is not to be confused with network discovery or network enumerating which discovers devices on the network and their characteristics such as (operating system, open ports, listening network services, etc.). The field of automated network mapping has taken on greater importance as networks become more dynamic and complex in nature.
Large-scale mapping project
Images of some of the first attempts at a large scale map of the internet were produced by the Internet Mapping Project and appeared in Wired magazine. The maps produced by this project were based on the layer 3 or IP level connectivity of the Internet (see OSI model), but there are different aspects of internet structure that have also been mapped.
More recent efforts to map the internet have been improved by more sophisticated methods, allowing them to make faster and more sensible maps. An example of such an effort is the OPTE project, which is attempting to develop a system capable of mapping the internet in a single day.
The "Map of the Internet Project" maps over 4 billion internet locations as cubes in 3D cyberspace. Users can add URLs as cubes and re-arrange objects on the map.
In early 2011, the Canadian-based ISP PEER 1 Hosting created its own Map of the Internet, depicting a graph of 19,869 autonomous system nodes connected by 44,344 connections. The sizing and layout of the autonomous systems were calculated from their eigenvector centrality, which is a measure of how central to the network each autonomous system is.
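Eigenvector centrality can be approximated by simple power iteration over the adjacency structure. Below is an illustrative Python sketch on a toy four-node topology; the graph, node names, and function name are invented for the example, not PEER 1's data or method:

```python
def eigenvector_centrality(adj, iterations=100):
    """Approximate eigenvector centrality by power iteration.

    adj maps each node to its list of neighbours (undirected graph).
    """
    score = {n: 1.0 for n in adj}
    for _ in range(iterations):
        # Each node's new score is the sum of its neighbours' scores ...
        new = {n: sum(score[m] for m in adj[n]) for n in adj}
        # ... normalised so the iteration converges to the principal eigenvector.
        norm = max(new.values()) or 1.0
        score = {n: v / norm for n, v in new.items()}
    return score

# Toy topology standing in for autonomous systems; "B" is the best-connected hub.
toy = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"], "D": ["B", "C"]}
ranks = eigenvector_centrality(toy)
assert max(ranks, key=ranks.get) == "B"   # the hub gets the highest centrality
```

A node scores highly not just for having many links but for linking to other high-scoring nodes, which is why the measure suits sizing well-connected autonomous systems.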
Graph theory can be used to better understand maps of the internet and to help choose between the many ways to visualize internet maps. Some projects have attempted to incorporate geographical data into their internet maps (for example, to draw locations of routers and nodes on a map of the world), but others are only concerned with representing the more abstract structures of the internet, such as the allocation, structure, and purpose of IP space.
Enterprise network mapping
Many organizations create network maps of their network system. These maps can be made manually using simple tools such as Microsoft Visio, or the mapping process can be simplified by using tools that integrate automatic network discovery with network mapping, one such example being the Fabric platform. Many of the vendors from the Notable network mappers list let users customize the maps, include their own labels, and add undiscoverable items and background images. Sophisticated mapping is used to help visualize the network and understand relationships between end devices and the transport layers that provide service. Typically, network scanners detect the network with all its components and deliver a list which is used for creating charts and maps using network-mapping software. Items such as bottlenecks and root cause an |
https://en.wikipedia.org/wiki/RIPE | Réseaux IP Européens (RIPE, French for "European IP Networks") is a forum open to all parties with an interest in the technical development of the Internet. The RIPE community's objective is to ensure that the administrative and technical coordination necessary to maintain and develop the Internet continues. It is not a standards body like the Internet Engineering Task Force (IETF) and does not deal with domain names like ICANN.
RIPE is not a legal entity and has no formal membership. This means that anybody who is interested in the work of RIPE can participate through mailing lists and by attending meetings. RIPE has a chair to keep an eye on work between RIPE meetings and to act as its external liaison. Rob Blokzijl, who was instrumental in the formation of RIPE, was the initial chair and remained in that position until 2014, when he appointed Hans Petter Holen as his successor. The RIPE community interacts via RIPE Mailing Lists, RIPE Working Groups, and RIPE Meetings.
Although similar in name, the RIPE NCC and RIPE are separate entities. The RIPE NCC provides administrative support to RIPE, such as the facilitation of RIPE meetings and providing administrative support to RIPE Working Groups. It was established in 1992 by the RIPE community to serve as an administrative body.
History
The name is a translation into French of the English title of a diagram by John Quarterman, which was presented in a special session of RIPE 58.
The first RIPE meeting was held on 22 May 1989 in Amsterdam, Netherlands. It brought together 14 representatives from 6 countries and 11 networks. At the time, European governments, standardisation bodies and telecommunications companies were pushing for the OSI standard, and IP-based networks were seen as the wrong way to go. In the academic community (mostly nuclear and particle physics) there was a strong need to work together with colleagues across Europe and the United States. IP provided a standard that allowed interconnection and cooperation, which the networks offered by the European telecommunications companies often completely lacked.
RIPE as an organisation was established by the RIPE terms of reference, which were agreed on 29 November 1989. There were ten organisations intending to participate in the RIPE Coordinating Committee, along the lines defined by the RIPE Terms of Reference, though some still needed to make a formal decision. These organisations were: BelWue, CERN, EASInet, EUnet, GARR, HEPnet, NORDUnet, SURFnet, SWITCH and XLINK. At the same time task forces were established to facilitate the interconnection of European IP-networks in the following weeks and months. The four task forces were:
Connectivity and Routing
Network Management and Operations
Domain Name System
Formal Coordination
One of the results was a proposal on 16 September 1990 to establish the RIPE Network Coordination Centre (NCC) to support the administrative tasks in the RIPE community and the first RIPE NCC Activity Plan was pub |
https://en.wikipedia.org/wiki/Session%20layer | In the seven-layer OSI model of computer networking, the session layer is layer 5.
The session layer provides the mechanism for opening, closing and managing a session between end-user application processes, i.e., a semi-permanent dialogue. Communication sessions consist of requests and responses that occur between applications. Session-layer services are commonly used in application environments that make use of remote procedure calls (RPCs).
An example of a session-layer protocol is the OSI protocol suite session-layer protocol, also known as X.225 or ISO 8327. In case of a connection loss this protocol may try to recover the connection. If a connection is not used for a long period, the session-layer protocol may close it and re-open it. It provides for either full duplex or half-duplex operation and provides synchronization points in the stream of exchanged messages.
Other examples of session layer implementations include Zone Information Protocol (ZIP) – the AppleTalk protocol that coordinates the name binding process, and Session Control Protocol (SCP) – the DECnet Phase IV session-layer protocol.
Within the service layering semantics of the OSI network architecture, the session layer responds to service requests from the presentation layer and issues service requests to the transport layer.
Services
Connection establishment and release
At the minimum, the session layer allows the two sides to establish and use a connection, called a session, and allows orderly release of the connection.
In the OSI model, the transport layer is not responsible for an orderly release of a connection. Instead, the session layer is responsible for that. However, in modern TCP/IP networks, TCP already provides orderly closing of connections at the transport layer.
After a session connection is released, the underlying transport connection may be reused for another session connection. Also, a session connection may make use of multiple consecutive transport connections. For example, if, during a session, the underlying transport connection has a failure, the session layer may try to re-establish a transport connection to continue the session.
Dialogue control
The session layer may provide three dialogue types: two-way simultaneous (full-duplex), two-way alternate (half-duplex), and one-way (simplex). It also provides the mechanisms to negotiate the type of dialogue, and controls which side has the "turn" or "token" to send data or to perform control functions.
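The two-way-alternate ("token") discipline can be illustrated with a toy sketch in Python (purely illustrative; the class and method names are invented and this is not the X.225 state machine):

```python
class HalfDuplexSession:
    """Minimal sketch of session-layer dialogue control: only the side
    holding the data token may send; give_token() passes the turn."""

    def __init__(self):
        self.token_holder = 'A'       # side A starts with the turn

    def send(self, side, message):
        if side != self.token_holder:
            raise PermissionError(f"{side} does not hold the data token")
        return f"{side}: {message}"

    def give_token(self):
        # Two-way alternate: passing the token reverses the dialogue direction
        self.token_holder = 'B' if self.token_holder == 'A' else 'A'

s = HalfDuplexSession()
print(s.send('A', 'hello'))           # A holds the token, so this succeeds
s.give_token()
print(s.send('B', 'hi back'))         # after the turn passes, B may send
```

A send attempt by the side not holding the token raises an error, which stands in for the protocol refusing data in the wrong direction.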
Dialogue control is not implemented in TCP/IP and is left to the application layer to handle, if necessary. In the widely used HTTP/1.1 protocol, the client and the server typically work in a half-duplex way. HTTP/1.1 also supports HTTP pipelining for full-duplex operation, but many servers and proxies could not handle it correctly, and there was no dialogue negotiation mechanism to check whether full-duplex operation was usable, so its support was eventually dropped by most brows |
https://en.wikipedia.org/wiki/Resource%20Interchange%20File%20Format | Resource Interchange File Format (RIFF) is a generic file container format for storing data in tagged chunks. It is primarily used for audio and video, though it can be used for arbitrary data.
The Microsoft implementation is mostly known through container formats like AVI, ANI and WAV, which use RIFF as their basis.
History
RIFF was introduced in 1991 by Microsoft and IBM and used as the default format for Windows 3.1 multimedia files. It is based on the Interchange File Format (IFF), introduced by Electronic Arts in 1985 on the Amiga. IFF uses the big-endian convention of the Amiga's Motorola 68000 CPU, but in RIFF multi-byte integers are stored in the little-endian order of the x86 processors used in IBM PC compatibles. A big-endian variant, RIFX, was also introduced.
In 2010 Google introduced the WebP picture format, which uses RIFF as a container.
Explanation
RIFF files consist entirely of "chunks". The overall format is identical to IFF, except for the endianness as previously stated, and the different meaning of the chunk names.
All chunks have the following format:
4 bytes: an ASCII identifier for this chunk (examples are "fmt " and "data"; note the space in "fmt ").
4 bytes: an unsigned, little-endian 32-bit integer giving the length of this chunk (excluding this field itself and the chunk identifier).
variable-sized field: the chunk data itself, of the size given in the previous field.
a pad byte, if the chunk's length is not even.
Two chunk identifiers, "RIFF" and "LIST", introduce a chunk that can contain subchunks. The RIFF and LIST chunk data (appearing after the identifier and length) have the following format:
4 bytes: an ASCII identifier for this particular RIFF or LIST chunk (for RIFF in the typical case, these 4 bytes describe the content of the entire file, such as "AVI " or "WAVE").
rest of data: subchunks.
The file itself consists of one RIFF chunk, which then can contain further subchunks: hence, the first four bytes of a correctly formatted RIFF file will spell out "RIFF".
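The chunk layout described above can be walked with a short parser (a Python sketch under the stated layout; the hand-built example file is hypothetical):

```python
import struct

def read_chunks(buf, offset=0):
    """Yield (identifier, data) pairs for consecutive chunks in `buf`."""
    while offset + 8 <= len(buf):
        cid = buf[offset:offset + 4].decode('ascii')               # 4-byte ASCII identifier
        (size,) = struct.unpack('<I', buf[offset + 4:offset + 8])  # little-endian length
        yield cid, buf[offset + 8:offset + 8 + size]
        offset += 8 + size + (size & 1)                            # skip pad byte if length is odd

# Hand-built example: a "RIFF" chunk of form type "WAVE" holding one
# "data" subchunk of 3 bytes (padded to an even length).
payload = b'WAVE' + b'data' + struct.pack('<I', 3) + b'abc' + b'\x00'
riff = b'RIFF' + struct.pack('<I', len(payload)) + payload

(top_id, top_data), = read_chunks(riff)
print(top_id, top_data[:4])                    # → RIFF b'WAVE'
print(list(read_chunks(riff, offset=12)))      # subchunks start after the 4-byte form type
```

Note how the pad byte after the odd-length "data" subchunk keeps the following chunk aligned, exactly as the format requires.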
More information about the RIFF format can be found in the Interchange File Format article.
RF64 is a multichannel file format based on the RIFF specification, developed by the European Broadcasting Union. It is BWF-compatible and allows file sizes to exceed 4 gigabytes by providing a "ds64" chunk with a 64-bit (8-byte) size.
Use of the INFO chunk
The optional INFO chunk allows RIFF files to be "tagged" with information falling into a number of predefined categories, such as copyright ("ICOP"), comments ("ICMT"), artist ("IART"), in a standardised way. These details can be read from a RIFF file even if the rest of the file format is unrecognized. The standard also allows the use of user-defined fields. Programmers intending to use non-standard fields should bear in mind that the same non-standard subchunk ID may be used by different applications in different (and potentially incompatible) ways.
Compatibility issues
Initial difficult |
https://en.wikipedia.org/wiki/Jerry%20Yang | Jerry Chih-Yuan Yang (; born Yang Chih-Yuan; November 6, 1968) is a Taiwanese-American billionaire computer programmer, internet entrepreneur, and venture capitalist. He is the co-founder and former CEO of Yahoo! Inc, which he started with classmate David Filo in 1994.
As of July 2023, Yang has a net worth of $2.5 billion.
Early life
Yang was born with the name Yang Chih-Yuan in Taipei, Taiwan, on November 6, 1968; his mother was a professor of English and drama, and his father died when Yang was two, by which time Yang had a brother. In 1978, his mother moved the family to San Jose, California, where his grandmother and extended family took care of the boys while his mother taught English to other immigrants. After moving to the US, Yang took the American name Jerry, his mother took the name Lily, and his brother Ken. He says that he knew only one English word, "shoe", when he came to America, but became fluent in English in about three years.
Yang graduated from Piedmont Hills High School and went on to earn both a Bachelor of Science and a Master of Science in electrical engineering from Stanford University in four years. He met David Filo at Stanford in 1989, and the two of them went to Japan in 1992 for a six-month exchange program, during which he met his future wife, who was there as part of the exchange program.
Career
Yang founded Yahoo! in 1994 and served as CEO from 2007 to 2009. He left Yahoo! in 2012. He founded a venture capital firm called AME Cloud Ventures and, as of 2015, serves on several corporate boards. According to Rob Solomon, a venture capitalist at Accel Partners, Yang was "a great founder, evangelist, strategist and mentor," having "created the blueprint for what is possible on the Internet."
1994–2012: Yahoo! years
While studying at Stanford in 1994, Yang and David Filo co-created an Internet website called "Jerry and David's Guide to the World Wide Web," which consisted of a directory of other websites. As it grew in popularity they renamed it "Yahoo! Inc." Yahoo! received around 100,000 unique visitors by the fall of 1994. In April 1995, Yahoo! received a $2 million investment from Sequoia Capital, Tim Koogle was hired as CEO, and Yang and Filo were each appointed "Chief Yahoo." Yahoo! received a second round of funding in the Fall of 1995 from Reuters and Softbank. It went public in April 1996 with 49 employees. In 1999, Yang was named to the MIT Technology Review TR100 as one of the top 100 innovators in the world under the age of 35. Terry Semel, who replaced Tim Koogle as CEO after the dot-com bubble crash, served until 2007 when the rise of Google led the board to fire him and appoint Yang as interim CEO.
Alibaba
Yang met Alibaba founder Jack Ma in 1997 during Yang's first trip to China. Ma, a government-employed tour guide and former English teacher, gave Yang a tour of the Great Wall of China. The two hit it off and discussed the growth of the Web. Ma created Alibaba several months later. A 1997 photo of Yang an |
https://en.wikipedia.org/wiki/Connie%20Chung | Constance Yu-Hwa Chung (born August 20, 1946) is an American journalist who has been a news anchor and reporter for the U.S. television news networks ABC, CBS, NBC, CNN, and MSNBC. Some of her more famous interview subjects include Claus von Bülow and U.S. representative Gary Condit, whom Chung interviewed first after the Chandra Levy disappearance, and basketball legend Magic Johnson after he went public about being HIV-positive. In 1993, she became the second woman to co-anchor a network newscast as part of CBS Evening News.
Early life and education
The youngest of ten children, Chung was born and raised in Washington, D.C., less than a year after her family emigrated from China. Her father, William Ling Chung, was an intelligence officer in the Chinese Nationalist Government, and five of her siblings died during wartime. She was named after singer and actress Constance Moore.
In 1969, she graduated from the University of Maryland, College Park with a degree in journalism.
Career
Early career
Chung was a Washington, D.C.-based correspondent for the CBS Evening News with Walter Cronkite in the early 1970s during the Watergate political scandal. Chung left to anchor evening newscasts for KNXT, a CBS owned and operated station in Los Angeles (now KCBS-TV), with Joe Benti. The Los Angeles Times TV columnist said Chung "helped give Channel 2 an agreeable, respectable, middle-road identity". Chung also anchored CBS's primetime news updates (CBS Newsbreak) for West Coast stations from the KNXT studios at Columbia Square during her tenure there.
In early 2018, Chung was asked if she had been sexually harassed in her career. She replied, "Oh, yeah! Oh, sure. Yeah. Every day. I mean, a lot. Especially when I started out". Later that year, following Christine Blasey Ford's testimony to the Senate Judiciary Committee that she had been sexually assaulted by Brett Kavanaugh, Chung wrote an open letter to Blasey Ford saying that she too had been sexually assaulted, while in college, by the doctor who delivered her.
NBC
In 1983, Chung returned to network news as anchor of NBC's new early program, NBC News at Sunrise, which was scheduled as the lead-in to the Today program. She was also anchor of the Saturday edition of NBC Nightly News and filled in for Tom Brokaw on weeknights. NBC also created two newsmagazines, American Almanac and 1986, which she co-hosted with Roger Mudd.
CBS
In 1989, Chung returned to CBS to host Saturday Night with Connie Chung (later renamed Face to Face with Connie Chung) (1989–90) and anchor CBS Sunday Evening News (1989–1993). On June 1, 1993, she became the second woman (after Barbara Walters with ABC in 1976) to co-anchor a major network's national weekday news broadcast. While hosting the CBS Evening News, Chung also hosted a side project on CBS, Eye to Eye with Connie Chung. After her co-anchoring duties with Dan Rather ended in 1995, Chung left CBS. She eventually jumped to ABC News, where she co-hosted the M |
https://en.wikipedia.org/wiki/List%20of%20ad%20hoc%20routing%20protocols | An ad hoc routing protocol is a convention, or standard, that controls how nodes decide which way to route packets between computing devices in a mobile ad hoc network.
In ad hoc networks, nodes are not familiar with the topology of their networks. Instead, they have to discover it: typically, a new node announces its presence and listens for announcements broadcast by its neighbors. Each node learns about others nearby and how to reach them, and may announce that it too can reach them.
Note that in a wider sense, ad hoc protocol can also be used literally, to mean an improvised and often impromptu protocol established for a specific purpose.
The following is a list of some ad hoc network routing protocols.
Table-driven (proactive) routing
This type of protocol maintains fresh lists of destinations and their routes by periodically distributing routing tables throughout the network. The main disadvantages of such algorithms are:
A comparatively large amount of data must be exchanged for route maintenance.
Slow reaction to restructuring and failures.
Examples of proactive algorithms are:
Optimized Link State Routing Protocol (OLSR) RFC 3626, RFC 7181.
Babel RFC 6126
Destination Sequence Distance Vector (DSDV)
DREAM
B.A.T.M.A.N.
On-demand (reactive) routing
This type of protocol finds a route on demand by flooding the network with Route Request packets. The main disadvantages of such algorithms are:
High latency in route finding.
Excessive flooding, which can lead to network clogging.
Examples of on-demand algorithms are:
ABR - Associativity-Based Routing
Ad hoc On-demand Distance Vector (AODV) (RFC 3561)
Dynamic Source Routing (RFC 4728)
Power-Aware DSR-based
Link-life base routing protocols
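The on-demand discovery described above amounts to a breadth-first flood: each node rebroadcasts the first copy of a route request it hears and remembers which neighbor it came from, giving the destination a reverse path for its reply. A minimal generic sketch in Python (not any specific protocol; the topology and node names are hypothetical):

```python
from collections import deque

def discover_route(graph, source, dest):
    """Flood a route request (RREQ) breadth-first; each node records the
    neighbor it first heard the request from, yielding a reverse path."""
    came_from = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:                      # destination replies along the reverse path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for neighbor in graph[node]:          # rebroadcast to all neighbors
            if neighbor not in came_from:     # drop duplicate RREQs
                came_from[neighbor] = node
                queue.append(neighbor)
    return None                               # no route found

# Hypothetical ad hoc topology: A-B-D and A-C-D
topology = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
print(discover_route(topology, 'A', 'D'))     # → ['A', 'B', 'D']
```

The duplicate-suppression check is what keeps the flood from clogging the network indefinitely, though the initial burst of rebroadcasts is still the latency and overhead cost noted above.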
Hybrid (both proactive and reactive) routing
This type of protocol combines the advantages of proactive and reactive routing: routing is initially established over some proactively prospected routes, and demand from additionally activated nodes is then served through reactive flooding. Choosing one or the other method requires predetermination for typical cases. The main disadvantages of such algorithms are:
The advantage depends on the number of other nodes activated.
Reaction to traffic demand depends on the gradient of traffic volume.
Examples of hybrid algorithms are:
ZRP (Zone Routing Protocol): ZRP uses IARP as the proactive and IERP as the reactive component.
ZHLS (Zone-based Hierarchical Link State Routing Protocol)
Hierarchical routing protocols
With this type of protocol the choice between proactive and reactive routing depends on the hierarchical level on which a node resides. Routing is initially established over some proactively prospected routes, and demand from additionally activated nodes is then served through reactive flooding on the lower levels. Choosing one or the other method requires proper attribution for the respective levels. The main disadvantages of such algorithms are:
The advantage depends on the depth of nesting and the addressing scheme.
Reaction to tr |
https://en.wikipedia.org/wiki/List%20of%20Nintendo%20Entertainment%20System%20games | This is a list of video games released for the Nintendo Entertainment System (NES) and Family Computer (Famicom) video game consoles.
The Family Computer was released by Nintendo in Japan and featured ports of Donkey Kong, Donkey Kong Jr., and Popeye as launch titles. It became the highest-selling video game console by the end of 1984, paving the way for the Nintendo Entertainment System, which subsequently launched in North America and in Europe. The final licensed game released was the PAL-exclusive The Lion King on May 25, 1995.
In addition to the games, a programming application titled Family BASIC, created by Nintendo, Hudson Soft, and Sharp Corporation, was released on June 21, 1984. An updated version, Family BASIC V3, was released on February 21, 1985.
Licensed games
Officially licensed games were released for the Nintendo Entertainment System/Family Computer during its lifespan. Of these, 681 were released exclusively in Japan, 186 exclusively in North America, and 20 exclusively in PAL countries. Worldwide, 507 games were released.
Championship games
Unreleased games
Unlicensed games
Console's lifespan
Famicom games
After lifespan
See also
List of best-selling Nintendo Entertainment System video games
List of cancelled NES games
List of Famicom Disk System games
Lists of video games
References
Nintendo Entertainment System games
Nintendo Entertainment System
https://en.wikipedia.org/wiki/FOSSIL | FOSSIL is a standard protocol for allowing serial communication for telecommunications programs under the DOS operating system. FOSSIL is an acronym for Fido Opus SEAdog Standard Interface Layer. Fido refers to FidoNet, Opus refers to Opus-CBCS BBS, and SEAdog refers to a Fidonet compatible mailer. The standards document that defines the FOSSIL protocol is maintained by the Fidonet Technical Standards Committee.
Serial device drivers
A "FOSSIL driver" is simply a communications device driver. They exist because in the early days of FidoNet, computer hardware was very diverse and there were no standards for how software was to communicate with serial interface hardware. Initial development of FidoBBS worked only on a specific type of machine. Before FidoBBS could spread, it was clear that a uniform method of communicating with serial interface hardware was needed if the software was going to be used on other machines. This need was also apparent for other communications-based software. The FOSSIL specification was created in 1986 to provide this uniform method. Software using the FOSSIL standard could communicate using the same interrupt functions no matter what hardware it was running on. This enabled developers to concentrate on the application rather than the interface to the hardware.
FOSSIL drivers are specific to the hardware they operate on because each is written to fit specifically to the serial interface hardware of that platform. FOSSIL drivers became more well known with the spread of IBM PC compatible machines. These machines ran some form of DOS (Disk Operating System) and their BIOS provided very poor support for serial communications—so poor that it fell far short of the needs of any non-trivial communications task. Over time, MS-DOS and PC DOS became the prevalent operating systems and PC compatible hardware became predominant.
Two popular DOS based FOSSIL drivers were X00 and BNU. A popular Windows based FOSSIL driver is NetFoss, which is freeware. SIO is a popular OS/2-based FOSSIL driver.
FOSSIL drivers for hardware other than serial interfaces
FOSSIL drivers have also been implemented to support other communications hardware by making it "look like a modem" to the application.
Internal ISDN cards (that did not use serial ports at all) often came with FOSSIL drivers to make them work with software that was originally intended for modem operation only.
References
External links
FOSSIL drivers' ancient history |
https://en.wikipedia.org/wiki/Fractal%20landscape | A fractal landscape or fractal surface is generated using a stochastic algorithm designed to produce fractal behavior that mimics the appearance of natural terrain. In other words, the surface resulting from the procedure is not a deterministic, but rather a random surface that exhibits fractal behavior.
Many natural phenomena exhibit some form of statistical self-similarity that can be modeled by fractal surfaces. Moreover, variations in surface texture provide important visual cues to the orientation and slopes of surfaces, and the use of almost self-similar fractal patterns can help create natural looking visual effects.
The modeling of the Earth's rough surfaces via fractional Brownian motion was first proposed by Benoit Mandelbrot.
Because the intended result of the process is to produce a landscape, rather than a mathematical function, processes are frequently applied to such landscapes that may affect the stationarity and even the overall fractal behavior of such a surface, in the interests of producing a more convincing landscape.
According to R. R. Shearer, the generation of natural-looking surfaces and landscapes was a major turning point in art history, where the distinction between geometric, computer-generated images and natural, man-made art became blurred. The first use of a fractal-generated landscape in a film was in 1982 for the movie Star Trek II: The Wrath of Khan. Loren Carpenter refined the techniques of Mandelbrot to create an alien landscape.
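A common stochastic procedure of this kind is midpoint displacement: recursively perturb the midpoint of each segment by a random amount that shrinks at each level of recursion. Below is a simplified one-dimensional sketch in Python (not Carpenter's actual implementation; the parameter names are invented):

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, spread=1.0):
    """Recursively displace midpoints to build a fractal height profile.

    Shrinking `spread` by `roughness` at each level controls how quickly
    detail fades at finer scales, i.e. the apparent roughness of the terrain.
    """
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-spread, spread)
    l = midpoint_displacement(left, mid, depth - 1, roughness, spread * roughness)
    r = midpoint_displacement(mid, right, depth - 1, roughness, spread * roughness)
    return l + r[1:]              # drop the duplicated midpoint

random.seed(1)
profile = midpoint_displacement(0.0, 0.0, depth=8)
print(len(profile))               # 2**8 + 1 = 257 height samples
```

The two-dimensional analogue (the diamond-square algorithm) applies the same idea over a grid; in both cases the `roughness` factor plays the role of the scaling exponent that fixes the surface's statistical self-similarity.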
Behavior of natural landscapes
Whether or not natural landscapes behave in a generally fractal manner has been the subject of some research. Technically speaking, any surface in three-dimensional space has a topological dimension of 2, and therefore any fractal surface in three-dimensional space has a Hausdorff dimension between 2 and 3. Real landscapes however, have varying behavior at different scales. This means that an attempt to calculate the 'overall' fractal dimension of a real landscape can result in measures of negative fractal dimension, or of fractal dimension above 3. In particular, many studies of natural phenomena, even those commonly thought to exhibit fractal behavior, do not do so over more than a few orders of magnitude. For instance, Richardson's examination of the western coastline of Britain showed fractal behavior of the coastline over only two orders of magnitude. In general, there is no reason to suppose that the geological processes that shape terrain on large scales (for example plate tectonics) exhibit the same mathematical behavior as those that shape terrain on smaller scales (for instance, soil creep).
Real landscapes also have varying statistical behavior from place to place, so for example sandy beaches don't exhibit the same fractal properties as mountain ranges. A fractal function, however, is statistically stationary, meaning that its bulk statistical properties are the same everywhere. Thus, any real approach to modeling landscapes requires |
https://en.wikipedia.org/wiki/Jacobus%20de%20Voragine | Jacobus de Voragine (c. 123013/16 July 1298) was an Italian chronicler and archbishop of Genoa. He was the author, or more accurately the compiler, of the Golden Legend, a collection of the legendary lives of the greater saints of the medieval church that was one of the most popular religious works of the Middle Ages.
Biography
Jacobus was born either in Varazze or in Genoa, where a family originally from Varazze and bearing that name is attested at the time. He entered the Dominican order in 1244, and became the prior at Como, Bologna and Asti in succession. Besides preaching with success in many parts of Italy, he also taught in the schools of his own fraternity. He was provincial of Lombardy from 1267 till 1286, when he was removed at the meeting of the order in Paris. He also represented his own province at the councils of Lucca (1288) and Ferrara (1290). On the last occasion he was one of the four delegates charged with signifying Pope Nicholas IV's desire for the deposition of Munio de Zamora – who had been master of the Dominican order from 1285 and was eventually deprived of his office by a papal bull dated 12 April 1291.
In 1288 Nicholas empowered him to absolve the people of Genoa for their offence in aiding the Sicilians against Charles II. Early in 1292 the same pope, himself a Franciscan, summoned Jacobus to Rome, intending to consecrate him archbishop of Genoa. Jacobus reached Rome on Palm Sunday (30 March), only to find his patron ill of a deadly sickness, from which he died on Good Friday (4 April). The cardinals, however, propter honorem Communis Januae ("for the honor of the commune of Genoa"), determined to carry out this consecration on the Sunday after Easter. He was a good bishop, and especially distinguished himself by his efforts to appease the civil discords of Genoa among Guelfs and Ghibellines. A story, mentioned by Échard as unworthy of credit, makes Pope Boniface VIII, on the first day of Lent, cast the ashes in the archbishop's eyes instead of on his head, with the words, "Remember that thou art a Ghibelline, and with thy fellow Ghibellines wilt return to naught."
He died in 1298 or 1299, and was buried in the Dominican church at Genoa. He was beatified by Pius VII in 1816.
Works
Jacobus de Voragine left a list of his own works. Speaking of himself in his Chronicon januense, he says: "While he was in his order, and after he had been made archbishop, he wrote many works. For he compiled the legends of the saints (Legenda sanctorum) in one volume, adding many things from the Historia tripartita et scholastica, and from the chronicles of many writers."
The other writings he claims are two anonymous volumes of Sermons concerning all the Saints whose yearly feasts the church celebrates. Of these volumes, he adds, one is very diffuse, but the other short and concise. Then follow Sermones de omnibus evangeliis dominicalibus for every Sunday in the year; Sermones de omnibus evangeliis, i.e., a book of discourses on all |
https://en.wikipedia.org/wiki/Wintel | Wintel (portmanteau of Windows and Intel) is the partnership of Microsoft Windows and Intel producing personal computers using Intel x86-compatible processors running Microsoft Windows.
Background
By the early 1980s, the chaos and incompatibility that was rife in the early microcomputer market had given way to a smaller number of de facto industry standards, including the S-100 bus, CP/M, the Apple II, Microsoft BASIC in read-only memory (ROM), and the inch floppy drive. No single firm controlled the industry, and fierce competition ensured that innovation in both hardware and software was the rule rather than the exception. Microsoft Windows and Intel processors gained ascendance and their ongoing alliance gave them market dominance.
Intel claimed that this partnership has enabled the two companies to give customers the benefit of "a seemingly unending spiral of falling prices and rising performance". In addition, they claim a "history of innovation" and "a shared vision of flexible computing for the agile business".
IBM
In 1981, IBM entered the microcomputer market. The IBM PC was created by a small subdivision of the firm. It was unusual for an IBM product because it was largely sourced from outside component suppliers and was intended to run third-party operating systems and software. IBM published the technical specifications and schematics of the PC, which allowed third-party companies to produce compatible hardware, the so-called open architecture. The IBM PC became one of the most successful computers of all time.
The key feature of the IBM PC was that it had IBM's enormous public respect behind it. It was an accident of history that the IBM PC happened to have an Intel CPU (instead of the technically superior Motorola 68000 that had been tipped for it, or an IBM in-house design), and that it shipped with IBM PC DOS (a licensed version of Microsoft's MS-DOS) rather than the CP/M-86 operating system, but these accidents were to have enormous significance in later years.
Because the IBM PC was an IBM product with the IBM badge, personal computers became respectable. It became easier for a business to justify buying a microcomputer than it had been even a year or two before, and easiest of all to justify buying the IBM Personal Computer. Since the PC architecture was well documented in IBM's manuals, and PC DOS was designed to be similar to earlier CP/M operating system, the PC soon had thousands of different third-party add-in cards and software packages available. This made the PC the preferred option for many, since the PC supported the hardware and software they needed.
Competitors
Industry competitors took one of several approaches to the changing market. Some (such as Apple, Amiga, Atari, and Acorn) persevered with their independent and quite different systems. Of those systems, Apple's Mac is the only one remaining on the market. Others (such as Digital, then the world's second-largest computer company, Hewlett-Packard, a |
https://en.wikipedia.org/wiki/UNIVAC%20I | The UNIVAC I (Universal Automatic Computer I) was the first general-purpose electronic digital computer design for business application produced in the United States. It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC. Design work was started by their company, Eckert–Mauchly Computer Corporation (EMCC), and was completed after the company had been acquired by Remington Rand (which later became part of Sperry, now Unisys). In the years before successor models of the UNIVAC I appeared, the machine was simply known as "the UNIVAC".
The first Univac was accepted by the United States Census Bureau on March 31, 1951, and was dedicated on June 14 that year. The fifth machine (built for the U.S. Atomic Energy Commission) was used by CBS to predict the result of the 1952 presidential election. With a sample of a mere 5.5% of the voter turnout, it famously predicted an Eisenhower landslide.
History
Market positioning
The UNIVAC I was the first American computer designed at the outset for business and administrative use with fast execution of relatively simple arithmetic and data transport operations, as opposed to the complex numerical calculations required of scientific computers. As such, the UNIVAC competed directly against punch-card machines, though the UNIVAC originally could neither read nor punch cards. That shortcoming hindered sales to companies concerned about the high cost of manually converting large quantities of existing data stored on cards. This was corrected by adding offline card processing equipment, the UNIVAC Tape to Card converter, to transfer data between cards and UNIVAC magnetic tapes. However, the early market share of the UNIVAC I was lower than the Remington Rand Company wished.
To promote sales, the company joined with CBS to have UNIVAC I predict the result of the 1952 Presidential election. After it predicted Eisenhower would have a landslide victory over Adlai Stevenson, as opposed to the final Gallup Poll which had predicted that Eisenhower would win the popular vote by 51–49 in a close contest, the CBS crew was so certain that UNIVAC was wrong that they believed it was not working.
As the election continued, it became clear the prediction had been correct all along: UNIVAC had predicted Eisenhower would receive 32,915,949 votes and win the Electoral College 438–93, while the final result had Eisenhower receiving 34,075,029 votes in a 442–89 Electoral College victory. UNIVAC had come within 3.5% of Eisenhower's popular vote tally and within four votes of his electoral vote total.
After the announcers admitted their sleight of hand, and their reluctance to believe the prediction, the machine became famous. This gave rise to greater public awareness of computing technology, and computerized predictions became a must-have part of election night broadcasts.
Installations
The first contracts were with government agencies such as the Census Bureau, the U.S. Air Force, and the U.S. Army Map Service. |
https://en.wikipedia.org/wiki/IBM%20Db2 | Db2 is a family of data management products, including database servers, developed by IBM. It initially supported the relational model, but was extended to support object–relational features and non-relational structures like JSON and XML. The brand name was originally styled as DB/2, then DB2 until 2017 and finally changed to its present form.
History
Unlike other database vendors, IBM previously produced a platform-specific Db2 product for each of its major operating systems. However, in the 1990s IBM changed course and produced a common Db2 product, designed with a mostly common code base for L-U-W (Linux, Unix, Windows); DB2 for System z and DB2 for IBM i remained different products and, as a result, use different drivers.
DB2 traces its roots back to the beginning of the 1970s when Edgar F. Codd, a researcher working for IBM, described the theory of relational databases, and in June 1970 published the model for data manipulation.
In 1974, the IBM San Jose Research center developed a relational DBMS, System R, to implement Codd's concepts. A key development of the System R project was the Structured Query Language (SQL). To apply the relational model, Codd needed a relational-database language he named DSL/Alpha. At the time, IBM didn't believe in the potential of Codd's ideas, leaving the implementation to a group of programmers not under Codd's supervision. This led to an inexact interpretation of Codd's relational model that matched only part of the prescriptions of the theory; the result was Structured English QUEry Language or SEQUEL.
When IBM released its first relational-database product, they wanted to have a commercial-quality sublanguage as well, so it overhauled SEQUEL and renamed the revised language Structured Query Language (SQL) to differentiate it from SEQUEL, and also because the acronym "SEQUEL" was a trademark of the UK-based Hawker Siddeley aircraft company.
IBM bought Metaphor Computer Systems to utilize their GUI interface and encapsulating SQL platform that had already been in use since the mid-80s.
In parallel with the development of SQL, IBM also developed Query by Example (QBE), the first graphical query language.
IBM's first commercial relational-database product, SQL/DS, was released for the DOS/VSE and VM/CMS operating systems in 1981. In 1976, IBM released Query by Example for the VM platform where the table-oriented front-end produced a linear-syntax language that drove transactions to its relational database. Later, the QMF feature of DB2 produced real SQL, and brought the same "QBE" look and feel to DB2. The inspiration for the mainframe version of DB2's architecture came in part from IBM IMS, a hierarchical database, and its dedicated database-manipulation language, IBM DL/I.
The name DB2, or IBM Database 2, was first given to the Database Management System (DBMS) in 1983, when IBM released DB2 on its MVS mainframe platform.
For some years DB2, as a full-function DBMS, was exclusively available on IBM mainframes.
https://en.wikipedia.org/wiki/NCUBE | nCUBE was a series of parallel computing computers from the company of the same name. Early generations of the hardware used a custom microprocessor. With its final generations of servers, nCUBE no longer designed custom microprocessors for machines, but used server-class chips manufactured by a third party in massively parallel hardware deployments, primarily for the purposes of on-demand video.
Company history
Founding and early growth
nCUBE was founded in 1983 in Beaverton, Oregon, by a group of Intel employees (Steve Colley, Bill Richardson, John Palmer, Doran Wilde, Dave Jurasek) frustrated by Intel's reluctance to enter the parallel computing market, though Intel released its iPSC/1 in the same year that the first nCUBE was released. In December 1985, the first generation of nCUBE's hypercube machines was released. The second generation (N2) was launched in June 1989. The third generation (N3) was released in 1995. The fourth generation (N4) was released in 1999.
In 1988, Larry Ellison invested heavily in nCUBE and became the company's majority shareholder. The company's headquarters were relocated to Foster City, California, to be closer to the Oracle Corporation. In 1994, Ronald Dilbeck became CEO and set nCUBE on a fast track to an initial public offering.
Pivot to video
In 1996, Ellison downsized nCUBE. Dilbeck left and Ellison took over as acting CEO, redirecting the company to become Oracle's Network Computer division. After the network computer diversion, nCUBE resumed development on video servers. nCUBE deployed its first VOD video server in Dubai's Burj al-Arab hotel.
In 1999, nCUBE announced it was acquiring SkyConnect, a seven-year-old software company based in Louisville, Colorado, which developed digital advertising and VOD software for cable television. In the 1990s, nCUBE shifted its focus from the parallel computing market and, by 1999, had identified itself as a video on demand (VOD) solutions provider, shipping over 100 VOD systems delivering 17,000 streams and establishing a relationship with Microsoft TV. The company was once again on the IPO fast track, only to be halted again by the bursting of the dot-com bubble.
Lawsuits and dot-com aftermath
In 2000, SeaChange International filed a patent infringement suit against nCUBE, alleging its nCUBE MediaCube-4 product infringed on a SeaChange patent. A jury upheld the validity of SeaChange's patent and awarded damages. The U.S. Court of Appeals for the Federal Circuit subsequently overturned the ruling on June 29, 2005. A separate lawsuit against SeaChange was filed by nCUBE in 2001 after it acquired the patents from Oracle's interactive television division. nCUBE claimed that SeaChange's video server offering violated its VOD patent on delivery to set-top boxes. nCUBE won the lawsuit and was awarded over $2 million in damages. SeaChange appealed, but the decision was upheld in 2004.
On the business front, the dot-com bubble burst and ensuing recession, as well as lawsuits
https://en.wikipedia.org/wiki/DCI | DCI may be an abbreviation for:
Technology
D-chiro-inositol, an isomer of inositol
Data, context and interaction, an architectural pattern in computer software development
Direct Count & Intersect, an algorithm for discovering frequent sets in large databases
Digitally controlled impedance, an impedance-control function in FPGAs
Display Control Interface, a standard developed by Microsoft and Intel using code from San Francisco Canyon Company for device drivers that control computer graphics cards
Distributed Computing Infrastructure, a term used in grid computing referring to the combination of distributed computer resources
Ductile cast iron, another name for ductile iron, a more flexible type of cast iron
dCi (direct Common-rail Injection), Renault/Nissan's common rail fuel injection technology for diesel engines
Data Center Interconnect (DCI), a network that connects two or more data centers together to transport traffic between them
Businesses
Digital Cinema Initiatives, a joint venture between the major Hollywood studios to establish a specification for a standard digital cinema architecture
Discovery Communications, an American global media and entertainment company
Dynamic Cassette International, a Boston, Lincolnshire, UK company that produces products under the Jet Tec brand name
DCI Cheese Company, a cheese manufacturer
Dolphin Capital Investors, a real estate investment company focusing on the residential resort sector in emerging markets, and listed on the Alternative Investment Market (AIM) of the London Stock Exchange
Diners Club International, a global charge card
Organizations
DCI (formerly known as Duelists' Convocation International), an organization that sanctioned official tournaments of Wizards of the Coast games, most notably Magic: The Gathering
Defence for Children International, an independent non-governmental organisation that operates globally to promote and protect children's rights
Dental Council of India, an organisation which regulates dental education in India
Dialysis Clinic, Inc, a nonprofit medical organization headquartered in Nashville, Tennessee
Drum Corps International, the nonprofit organization governing modern junior drum and bugle corps
DCI Global Partnerships, the community and work associated with DCI Trust—a UK charity committed to mobilising Christian leaders, providing micro-loans for the economically poor, networking a global Christian constituency and giving to bless the poor of the earth
Distressed Children & Infants International, a nonprofit organization serving disadvantaged children in South Asia
Other
Dade Correctional Institution, a prison in Florida
Development Cooperation Instrument
Director of Central Intelligence, the former title of the head of the U.S. Intelligence Community and Central Intelligence Agency
Detective chief inspector, a police rank in the United Kingdom
Decompression illness, a medical condition brought on by rapid decompression
Doctor of Creative Industries, a
https://en.wikipedia.org/wiki/Wormhole%20switching | Wormhole flow control, also called wormhole switching or wormhole routing, is a system of simple flow control in computer networking based on known fixed links. It is a subset of flow control methods called Flit-Buffer Flow Control.
Switching is a more appropriate term than routing, as "routing" defines the route or path taken to reach the destination. The wormhole technique does not dictate the route to the destination but decides when the packet moves forward from a router.
Wormhole switching is widely used in multicomputers because of its low latency and small requirements at the nodes.
Wormhole routing supports very low-latency, high-speed, guaranteed delivery of packets suitable for real-time communication.
Mechanism principle
In the wormhole flow control, each packet is broken into small pieces called flits (flow control units or flow control digits).
Commonly, the first flits, called the header flits, hold information about the packet's route (for example, the destination address) and set up the routing behavior for all subsequent flits associated with the packet. The header flits are followed by zero or more body flits, which contain the actual payload of data. Some final flits, called the tail flits, perform bookkeeping to close the connection between the two nodes.
In wormhole switching, each buffer is either idle, or allocated to one packet. A header flit can be forwarded to a buffer if this buffer is idle. This allocates the buffer to the packet. A body or trailer flit can be forwarded to a buffer if this buffer is allocated to its packet and is not full. The last flit frees the buffer. If the header flit is blocked in the network, the buffer fills up, and once full, no more flits can be sent: this effect is called "back-pressure" and can be propagated back to the source.
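The allocation rules above can be illustrated with a toy simulation (a hypothetical sketch, not any real router implementation): each hop has a single-flit buffer owned by at most one packet, a flit advances only when the downstream buffer is free, and the tail flit releases each buffer as it passes.

```python
# Toy wormhole flow control: a packet's flits advance hop by hop.
# Each hop has a one-flit buffer that is either free or owned by a packet.
# (Illustrative only; real routers use multi-flit buffers and virtual channels.)

def send_packet(flits, buffers, owner, pkt_id):
    """Advance flits through `buffers`; return flits delivered at the far end."""
    delivered = []
    in_flight = list(flits)  # head first, tail last
    while in_flight or any(o == pkt_id for o in owner):
        # Move flits forward, from the last hop back to the first.
        for hop in reversed(range(len(buffers))):
            if owner[hop] != pkt_id or buffers[hop] is None:
                continue
            flit = buffers[hop]
            if hop == len(buffers) - 1:                 # flit exits the network
                delivered.append(flit)
                buffers[hop] = None
            elif owner[hop + 1] in (None, pkt_id) and buffers[hop + 1] is None:
                owner[hop + 1] = pkt_id                 # header allocates next buffer
                buffers[hop + 1] = flit
                buffers[hop] = None
            # else: downstream buffer full -> back-pressure, flit stalls in place
            if buffers[hop] is None and flit == "tail":
                owner[hop] = None                       # tail flit frees the buffer
        # Inject the next flit at hop 0 when that buffer is free for this packet.
        if in_flight and buffers[0] is None and owner[0] in (None, pkt_id):
            owner[0] = pkt_id
            buffers[0] = in_flight.pop(0)
    return delivered

hops = 3
out = send_packet(["head", "body", "tail"], [None] * hops, [None] * hops, "P1")
print(out)  # flits arrive in order: ['head', 'body', 'tail']
```

Note how the packet occupies buffers on several hops at once mid-transit, which is exactly the "worm" stretched across the path described below.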
The name "wormhole" plays on the way packets are sent over the links: the address is so short that it can be translated before the message itself arrives. This allows the router to quickly set up the routing of the actual message and then "bow out" of the rest of the conversation. Since a packet is transmitted flit by flit, it may occupy several flit buffers along its path, creating a worm-like image.
This behaviour is quite similar to cut-through switching, commonly called "virtual cut-through," the major difference being that cut-through flow control allocates buffers and channel bandwidth on a packet level, while wormhole flow control does this on the flit level.
In case of circular dependency, this back-pressure can lead to deadlock.
In most respects, wormhole is very similar to ATM or MPLS forwarding, with the exception that the cell does not have to be queued.
One thing special about wormhole flow control is the implementation of virtual channels:
A virtual channel holds the state needed to coordinate the handling of the flits of a packet over a channel. At a minimum, this state identifies the output channel of the current node for the next
https://en.wikipedia.org/wiki/IWarp | iWarp was an experimental parallel supercomputer architecture developed as a joint project by Intel and Carnegie Mellon University. The project started in 1988, as a follow-up to CMU's previous WARP research project, in order to explore building an entire parallel-computing "node" in a single microprocessor, complete with memory and communications links. In this respect the iWarp is very similar to the INMOS transputer and nCUBE.
Intel announced iWarp in 1989. The first iWarp prototype was delivered to Carnegie Mellon in the summer of 1990, and in the fall of that year the university received the first 64-cell production systems, followed by two more in 1991. With the creation of the Intel Supercomputing Systems Division in the summer of 1992, the iWarp was merged into the iPSC product line. Intel kept iWarp as a product but stopped actively marketing it.
Each iWarp CPU included a 32-bit ALU with a 64-bit FPU running at 20 MHz. It was purely scalar and completed one instruction per cycle, so the performance was 20 MIPS, or 20 MFLOPS for single precision and 10 MFLOPS for double precision. The communications were handled by a separate unit on the CPU that drove four serial channels at 40 MB/s, and included networking support in hardware that allowed for up to 20 virtual channels (similar to the system added to the INMOS T9000).
iWarp processors were combined onto boards along with memory, but unlike other systems Intel chose the faster, but more expensive, static RAM for use on the iWarp. Boards typically included four CPUs and anywhere from 512 kB to 4 MB of SRAM.
Another difference in the iWarp was that the systems were connected together as an n-by-m torus, instead of the more common hypercube. A typical system included 64 CPUs connected as an 8×8 torus, which could deliver 1.2 gigaflops peak.
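A quick back-of-the-envelope check of the peak figure quoted above (plain arithmetic; variable names are illustrative):

```python
# Peak single-precision throughput of a 64-cell iWarp torus:
# each scalar CPU completes one instruction per 20 MHz cycle -> 20 MFLOPS.
per_cpu_mflops = 20        # single-precision rate per CPU
cpus = 8 * 8               # 8x8 torus
peak_gflops = per_cpu_mflops * cpus / 1000
print(peak_gflops)         # 1.28, rounded down to the quoted "1.2 gigaflops peak"
```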
George Cox was the lead architect of the iWarp project. Steven McGeady (later an Intel Vice-president and witness in the Microsoft antitrust case) wrote an innovative development environment that allowed software to be written for the array before it was completed. Each node of the array was represented by a different Sun workstation on a LAN, with the iWarp's unique inter-node communication protocol simulated over sockets. Unlike the chip-level simulator, which could not simulate a multi-node array, and which ran very slowly, this environment allowed in-depth development of array software to begin.
The production compiler for iWarp was a C and Fortran compiler based on the AT&T pcc compiler for UNIX, ported under contract for Intel by the Canadian firm HCR Corporation and then extensively modified and extended by Intel.
See also
Systolic array
Notes
External links
iWarp Project at CMU
Supercomputers
Parallel computing
Massively parallel computers
https://en.wikipedia.org/wiki/HyperTransport | HyperTransport (HT), formerly known as Lightning Data Transport, is a technology for interconnection of computer processors. It is a bidirectional serial/parallel high-bandwidth, low-latency point-to-point link that was introduced on April 2, 2001. The HyperTransport Consortium is in charge of promoting and developing HyperTransport technology.
HyperTransport is best known as the system bus architecture of AMD central processing units (CPUs) from Athlon 64 through AMD FX and the associated motherboard chipsets. HyperTransport has also been used by IBM and Apple for the Power Mac G5 machines, as well as a number of modern MIPS systems.
The current specification, HTX 3.1, remained competitive in 2014 with both high-speed DDR4 RAM (2666 and 3200 MT/s, or about 10.4 GB/s and 12.8 GB/s) and slower technology (around 1 GB/s, similar to high-end PCIe SSDs and ULLtraDIMM flash RAM): a wider range of RAM speeds on a common CPU bus than any Intel front-side bus. Intel technologies require each speed range of RAM to have its own interface, resulting in a more complex motherboard layout but with fewer bottlenecks. HTX 3.1 at 26 GB/s can serve as a unified bus for as many as four DDR4 sticks running at the fastest proposed speeds. Beyond that, DDR4 RAM may require two or more HTX 3.1 buses, diminishing its value as a unified transport.
Overview
Links and rates
HyperTransport comes in four versions—1.x, 2.0, 3.0, and 3.1—which run from 200 MHz to 3.2 GHz. It is also a DDR or "double data rate" connection, meaning it sends data on both the rising and falling edges of the clock signal. This allows for a maximum data rate of 6400 MT/s when running at 3.2 GHz. The operating frequency is autonegotiated with the motherboard chipset (North Bridge) in current computing.
HyperTransport supports an autonegotiated bit width, ranging from 2 to 32 bits per link; there are two unidirectional links per HyperTransport bus. With the advent of version 3.1, using full 32-bit links and the full HyperTransport 3.1 specification's operating frequency, the theoretical transfer rate is 25.6 GB/s (3.2 GHz × 2 transfers per clock cycle × 32 bits per link) per direction, or 51.2 GB/s aggregated throughput, making it faster than most existing bus standards for PC workstations and servers, as well as most bus standards for high-performance computing and networking.
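The quoted rates follow directly from the link parameters; a small check (illustrative arithmetic only):

```python
# HyperTransport 3.1 peak rates from the parameters quoted above.
clock_hz = 3.2e9            # 3.2 GHz link clock
transfers_per_cycle = 2     # double data rate: both clock edges
link_width_bits = 32        # full-width link
bytes_per_s = clock_hz * transfers_per_cycle * link_width_bits / 8
print(bytes_per_s / 1e9)      # 25.6 GB/s per direction
print(2 * bytes_per_s / 1e9)  # 51.2 GB/s aggregate (two unidirectional links)
```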
Links of various widths can be mixed together in a single system configuration as in one 16-bit link to another CPU and one 8-bit link to a peripheral device, which allows for a wider interconnect between CPUs, and a lower bandwidth interconnect to peripherals as appropriate. It also supports link splitting, where a single 16-bit link can be divided into two 8-bit links. The technology also typically has lower latency than other solutions due to its lower overhead.
Electrically, HyperTransport is similar to low-voltage differential signaling (LVDS) operating at 1.2 V. HyperTransport 2.0 added post-cursor t
https://en.wikipedia.org/wiki/InfiniBand | InfiniBand (IB) is a computer networking communications standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems. It is designed to be scalable and uses a switched fabric network topology.
By 2014, it was the most commonly used interconnect in the TOP500 list of supercomputers, a position it held until about 2016.
Mellanox (acquired by Nvidia) manufactures InfiniBand host bus adapters and network switches, which are used by large computer system and database vendors in their product lines.
As a computer cluster interconnect, IB competes with Ethernet, Fibre Channel, and Intel Omni-Path. The technology is promoted by the InfiniBand Trade Association.
History
InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next Generation I/O (NGIO). NGIO was led by Intel, with a specification released in 1998, and was joined by Sun Microsystems and Dell.
Future I/O was backed by Compaq, IBM, and Hewlett-Packard.
This led to the formation of the InfiniBand Trade Association (IBTA), which included both sets of hardware vendors as well as software vendors such as Microsoft.
At the time it was thought some of the more powerful computers were approaching the interconnect bottleneck of the PCI bus, in spite of upgrades like PCI-X. Version 1.0 of the InfiniBand Architecture Specification was released in 2000. Initially the IBTA vision for IB was simultaneously a replacement for PCI in I/O, Ethernet in the machine room, cluster interconnect and Fibre Channel. IBTA also envisaged decomposing server hardware on an IB fabric.
Mellanox had been founded in 1999 to develop NGIO technology, but by 2001 it shipped an InfiniBand product line called InfiniBridge at 10 Gbit/s speeds.
Following the burst of the dot-com bubble there was hesitation in the industry to invest in such a far-reaching technology jump.
By 2002, Intel announced that instead of shipping IB integrated circuits ("chips"), it would focus on developing PCI Express, and Microsoft discontinued IB development in favor of extending Ethernet. Sun Microsystems and Hitachi continued to support IB.
In 2003, the System X supercomputer built at Virginia Tech used InfiniBand in what was estimated to be the third largest computer in the world at the time.
The OpenIB Alliance (later renamed OpenFabrics Alliance) was founded in 2004 to develop an open set of software for the Linux kernel. By February 2005, the support was accepted into the 2.6.11 Linux kernel.
In November 2005, storage devices using InfiniBand were finally released by vendors such as Engenio.
Of the top 500 supercomputers in 2009, Gigabit Ethernet was the internal interconnect technology in 259 installations, compared with 181 using InfiniBand.
In 2010, market leaders M
https://en.wikipedia.org/wiki/PCI%20Express | PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e, is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers' graphics cards, sound cards, hard disk drive host adapters, SSDs, Wi-Fi and Ethernet hardware connections. PCIe has numerous improvements over the older standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism (Advanced Error Reporting, AER), and native hot-swap functionality. More recent revisions of the PCIe standard provide hardware support for I/O virtualization.
The PCI Express electrical interface is measured by the number of simultaneous lanes. (A lane is a single send/receive line of data. The analogy is a highway with traffic in both directions.) The interface is also used in a variety of other standards — most notably the laptop expansion card interface called ExpressCard. It is also used in the storage interfaces of SATA Express, U.2 (SFF-8639) and M.2.
Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group) — a group of more than 900 companies that also maintains the conventional PCI specifications.
Architecture
Conceptually, the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus. One of the key differences between the PCI Express bus and the older PCI is the bus topology; PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data, and control lines. In contrast, PCI Express is based on point-to-point topology, with separate serial links connecting every device to the root complex (host). Because of its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to one master at a time, in a single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCI Express bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.
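As an illustration of the serial-lane design, effective per-lane throughput follows from the raw transfer rate and the line-code overhead. The rates and encodings below (8b/10b for PCIe 1.x, 128b/130b for PCIe 3.0) are well-known figures not stated in the excerpt above, so treat this as a sketch rather than part of the article:

```python
# Effective per-lane PCIe throughput = raw rate x coding efficiency / 8 bits.
def lane_bytes_per_s(gtransfers_per_s, payload_bits, coded_bits):
    return gtransfers_per_s * 1e9 * payload_bits / coded_bits / 8

print(lane_bytes_per_s(2.5, 8, 10) / 1e6)     # PCIe 1.x: 250.0 MB/s per lane
print(lane_bytes_per_s(8.0, 128, 130) / 1e6)  # PCIe 3.0: ~984.6 MB/s per lane
```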
In terms of bus protocol, PCI Express communication is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCI Express port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCI Express slots are not interchangeable. At the software level, PCI Express preserves backward compatibility with PCI; legacy PCI system software can detect
https://en.wikipedia.org/wiki/HIPPI | HIPPI, short for High Performance Parallel Interface, is a computer bus for the attachment of high speed storage devices to supercomputers, in a point-to-point link. It was popular in the late 1980s and into the mid-to-late 1990s, but has since been replaced by ever-faster standard interfaces like Fibre Channel and 10 Gigabit Ethernet.
The first HIPPI standard defined a 50-pair (100-wire) unidirectional twisted pair cable, running at 800 Mbit/s (100 MB/s) with maximum range limited to . A bidirectional connection therefore required two separate cables. Later standards included a 1600 Mbit/s (200 MB/s) mode running on Serial HIPPI fibre optic cable with a maximum range of .
HIPPI usage dwindled in the late 1990s. This was partly because Ultra3 SCSI offered rates of 320 MB/s and was available at almost any computer store at commodity prices. Meanwhile, Fibre Channel offered simple interconnect with both HIPPI and SCSI (it can run both protocols) and speeds of up to 400 MB/s on fibre and 100 MB/s on a single pair of twisted pair copper wires. Both of these systems have since been supplanted by even higher performance systems during the 2000s.
GSN - HIPPI-6400
In 1999, an effort to improve the speed resulted in HIPPI-6400, which was later renamed GSN (for Gigabyte System Network), but this saw little use due to competing standards and the high cost of GSN. GSN has a full-duplex bandwidth of 6400 Mbit/s or 800 MB/s in each direction. GSN was developed at Los Alamos National Laboratory and uses a parallel interface for higher speeds. GSN copper cables are limited to and fibre optic cables limited to 1 km.
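The Mbit/s figures above convert to the quoted MB/s values by dividing by 8:

```python
# Convert the quoted line rates from Mbit/s to MB/s (8 bits per byte).
rates_mbit = {"HIPPI": 800, "Serial HIPPI": 1600, "GSN": 6400}
rates_mb = {name: mbit // 8 for name, mbit in rates_mbit.items()}
print(rates_mb)  # {'HIPPI': 100, 'Serial HIPPI': 200, 'GSN': 800}
```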
HIPPI is the first “near-gigabit” (0.8 Gbit/s) (ANSI) standard for network data transmission. It was specifically designed for supercomputers and was never intended for mass market networks such as Ethernet. Many of the features developed for HIPPI were integrated into such technologies as InfiniBand. What is remarkable about HIPPI is that it came out when Ethernet was still a 10 Mbit/s data link and SONET at OC-3 (155 Mbit/s) was considered leading edge technology.
See also
GG45
List of device bandwidths
Optical communication
Optical fiber cable
Parallel optical interface
TERA
XAUI
References
Computer buses
Computer storage buses
https://en.wikipedia.org/wiki/Fibre%20Channel | Fibre Channel (FC) is a high-speed data transfer protocol providing in-order, lossless delivery of raw block data. Fibre Channel is primarily used to connect computer data storage to servers in storage area networks (SAN) in commercial data centers.
Fibre Channel networks form a switched fabric because the switches in a network operate in unison as one big switch. Fibre Channel typically runs on optical fiber cables within and between data centers, but can also run on copper cabling. Supported data rates include 1, 2, 4, 8, 16, 32, 64, and 128 gigabits per second, resulting from improvements in successive technology generations. The industry now notates this as Gigabit Fibre Channel (GFC).
There are various upper-level protocols for Fibre Channel, including two for block storage. Fibre Channel Protocol (FCP) is a protocol that transports SCSI commands over Fibre Channel networks. FICON is a protocol that transports ESCON commands, used by IBM mainframe computers, over Fibre Channel. Fibre Channel can also transport data from storage systems that use a solid-state flash memory storage medium by carrying NVMe protocol commands.
Etymology
When the technology was originally devised, it ran over optical fiber cables only and, as such, was called "Fiber Channel". Later, the ability to run over copper cabling was added to the specification. In order to avoid confusion and to create a unique name, the industry decided to change the spelling and use the British English fibre for the name of the standard.
History
Fibre Channel is standardized in the T11 Technical Committee of the International Committee for Information Technology Standards (INCITS), an American National Standards Institute (ANSI)-accredited standards committee. Fibre Channel started in 1988, with ANSI standard approval in 1994, to merge the benefits of multiple physical layer implementations including SCSI, HIPPI and ESCON.
Fibre Channel was designed as a serial interface to overcome the limitations of the SCSI and HIPPI physical-layer parallel-signal copper wire interfaces. Such interfaces face the challenge of, among other things, maintaining signal timing coherence across all the data-signal wires (8, 16 and finally 32 for SCSI, 50 for HIPPI) so that a receiver can determine when all the electrical signal values are "good" (stable and valid for simultaneous reception sampling). This challenge becomes ever more difficult in a mass-manufactured technology as data signal frequencies increase, with part of the technical compensation being an ever-shorter supported length for the connecting copper parallel cable. See Parallel SCSI. FC was developed with leading-edge multi-mode optical fiber technologies that overcame the speed limitations of the ESCON protocol. By appealing to the large base of SCSI disk drives and leveraging mainframe technologies, Fibre Channel developed economies of scale for advanced technologies, and deployments became economical and widespread.
Commercial products
https://en.wikipedia.org/wiki/Matthias%20Ettrich | Matthias Ettrich (born 14 June 1972) is a German computer scientist and founder of the KDE and LyX projects.
Early life
Ettrich was born in Bietigheim-Bissingen, Baden-Württemberg, West Germany, and went to school in Beilstein while living with his parents in Oberstenfeld. He passed the Abitur in 1991. Ettrich studied for his MSc in Computer Science at the Wilhelm Schickard Institute for Computer Science at the University of Tübingen.
Career
He resides in Berlin, Germany, and is currently focused on advising start-ups and corporations on digital transformation and sound technical decision-making.
Free software projects
Ettrich founded and furthered the LyX project in 1995, initially conceived as a university term project. LyX is a graphical frontend to LaTeX.
Since LyX's main target platform was Linux, he started exploring different ways to improve the graphical user interface, ultimately leading him to the KDE project. Ettrich founded KDE in 1996 when he proposed on Usenet a "consistent, nice looking free desktop environment" for Unix-like systems using Qt as its widget toolkit.
On 6 November 2009, Ettrich was decorated with the Federal Cross of Merit for his contributions to free software.
References
External links
The People Behind KDE: Interview with Matthias Ettrich (2000)
The People Behind KDE: Interview with Matthias Ettrich (2004)
1972 births
Computer programmers
Free software programmers
German computer scientists
KDE
Living people
People from Bietigheim-Bissingen
Recipients of the Medal of the Order of Merit of the Federal Republic of Germany
https://en.wikipedia.org/wiki/Windows%20Notepad | Windows Notepad is a simple text editor for Windows; it creates and edits plain text documents. First released in 1983 to commercialize the computer mouse in MS-DOS, Notepad has been part of every version of Windows ever since.
History
In May 1983, at the COMDEX computer expo in Atlanta, Microsoft introduced the Multi-Tool Notepad, a mouse-based text editor Richard Brodie had created, along with the $195 Microsoft Mouse. Also appearing at that COMDEX was Multi-Tool Word, a word processor that Charles Simonyi was developing and that supported the mouse. Most visitors had never heard of a computer mouse before. The mouse began shipping in July. Initial sales were modest, because it had no use other than running the programs included in the box (a tutorial, a practice app, and Multi-Tool Notepad).
The Multi-Tool product line began with expert systems for the Multiplan spreadsheet. On the suggestion of Rowland Hanson, Microsoft dropped the Multi-Tool brand name. Hanson's rationale was that "the brand is the hero" and people wouldn't automatically associate "Multi-Tool" with Microsoft. As a result, the Multi-Tool Notepad and the Multi-Tool Word became Windows Notepad and Microsoft Word, respectively. (Hanson also convinced Bill Gates to rename "Interface Manager" to "Windows" before the release of Windows 1.0.)
Since then, Notepad has been part of Microsoft Windows.
Change in development model
Since the introduction of Microsoft Store in 2012, Microsoft has converted some of the built-in Windows apps into Microsoft Store apps (e.g., Sticky Notes) so that they can be updated independently of Windows releases. Within three years, Notepad appeared on Microsoft Store three times. The first time was in August 2019; it vanished shortly thereafter. This version required Windows 10 preview build 18963. During this short-lived presence on the Store, technology news blogs speculated that Microsoft intended to decouple Notepad's life cycle from that of Windows 10 and update it more frequently through Microsoft Store. Notepad appeared on Microsoft Store for a second time in April 2020, this time sporting a new logo. It runs on preview versions of Windows 10, build number 19541 or later. On 16 February 2022, Microsoft started rolling out a new and redesigned version of Notepad to all Windows 11 users. This version added a dark mode and a new Find and Replace flyout with the same functionality as before. Notepad is now available in the Microsoft Store on both Windows 10 and 11.
Features
Notepad is a text editor, i.e., an app specialized in editing plain text. It can edit text files (bearing the ".txt" filename extension) and compatible formats, such as batch files, INI files, and log files.
Notepad offers only the most basic text manipulation functions, such as finding and replacing text. Until Windows ME, there were almost no keyboard shortcuts and no line-counting feature. Starting with Windows 2000, shortcuts for common commands like "New", "Open", and "Save"
https://en.wikipedia.org/wiki/IBM%20650 | The IBM 650 Magnetic Drum Data-Processing Machine is an early digital computer produced by IBM in the mid-1950s. It was the first mass produced computer in the world. Almost 2,000 systems were produced, the last in 1962, and it was the first computer to make a meaningful profit. The first one was installed in late 1954 and it was the most-popular computer of the 1950s.
IBM offered the 650 to business, scientific and engineering users as a slower and less expensive version of the IBM 701 and IBM 702 computers, which were intended for scientific and business purposes respectively. It was also marketed to users of punched card machines who were upgrading from calculating punches, such as the IBM 604, to computers.
Because of its relatively low cost and ease of programming, the 650 was used to pioneer a wide variety of applications, from modeling submarine crew performance to teaching high school and college students computer programming. The IBM 650 became highly popular in universities, where a generation of students first learned programming.
The 650 was announced in 1953 and enhanced in 1956 as the IBM 650 RAMAC, with the addition of up to four disk storage units. The purchase price for the bare IBM 650 console, without the reader punch unit, was $150,000 in 1959, or roughly $1,500,000 as of 2023. Support for the 650 and its component units was withdrawn in 1969.
The 650 was a two-address, bi-quinary coded decimal computer (both data and addresses were decimal), with memory on a rotating magnetic drum. Character support was provided by the input/output units converting punched card alphabetical and special character encodings to/from a two-digit decimal code.
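The two-digit bi-quinary code can be illustrated with a short Python sketch (an illustration of the encoding scheme only, not IBM code; the function names are invented). Each digit splits into a "bi" part (0 or 5) and a "quinary" part (0–4), each stored one-hot, which is what made invalid digit codes easy to detect in hardware:

```python
def to_biquinary(digit):
    """Encode a decimal digit as one-hot (bi_bits, quinary_bits) tuples."""
    if not 0 <= digit <= 9:
        raise ValueError("digit must be 0-9")
    bi = digit // 5          # selects the '0' or the '5' bi bit
    quinary = digit % 5      # selects one of the five quinary bits
    bi_bits = [1 if i == bi else 0 for i in range(2)]
    quinary_bits = [1 if i == quinary else 0 for i in range(5)]
    return bi_bits, quinary_bits

def from_biquinary(bi_bits, quinary_bits):
    """Decode, checking the one-hot invariant the hardware enforced."""
    if sum(bi_bits) != 1 or sum(quinary_bits) != 1:
        raise ValueError("invalid bi-quinary code")
    return 5 * bi_bits.index(1) + quinary_bits.index(1)

# Example: 7 = 5 + 2
print(to_biquinary(7))  # ([0, 1], [0, 0, 1, 0, 0])
```

Exactly one bit of each group being set means any single dropped or spurious bit produces a detectably invalid code.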
The 650 was clocked at a frequency of 125 kHz. It could add or subtract in 1.63 milliseconds, multiply in 12.96 ms, and divide in 16.90 ms. The average speed of the 650 was estimated to be around 27.6 ms per instruction, or roughly 36 instructions per second.
History
The first 650 was installed on December 8, 1954 in the controller's department of the John Hancock Mutual Life Insurance Company in Boston.
The IBM 7070 (signed 10-digit decimal words), announced 1958, was expected to be a "common successor to at least the 650 and the [IBM] 705". The IBM 1620 (variable-length decimal), introduced in 1959, addressed the lower end of the market. The UNIVAC Solid State (a two-address computer, signed 10-digit decimal words) was announced by Sperry Rand in December 1958 as a response to the 650. None of these had an instruction set that was compatible with the 650.
Hardware
The basic 650 system consisted of three units:
IBM 650 Console Unit housed the magnetic drum storage, arithmetical device (using vacuum tubes) and the operator's console.
IBM 655 Power Unit
IBM 533 or IBM 537 Card Read Punch Unit. The IBM 533 had separate feeds for reading and punching; the IBM 537 had a single feed, and thus could read and then punch into the same card.
Optional units:
IBM 46 Tape To Card Punch, Model 3
IBM 47 |
https://en.wikipedia.org/wiki/Selby%20Abbey | Selby Abbey is a former Benedictine abbey and current Anglican parish church in the town of Selby, North Yorkshire, England. It is a member of the Major Churches Network in England.
Monastic history
The church is one of the relatively few surviving abbey churches of the medieval period, and, although not a cathedral, is one of the biggest. It was founded by Benedict of Auxerre in 1069 and subsequently built by the de Lacy family.
On 31 May 1256, the abbey was bestowed with the grant of a Mitre by Pope Alexander IV and from this date was a "Mitred Abbey". This privilege fell in abeyance a number of times, but on 11 April 1308, Archbishop William Greenfield confirmed the grant, and Selby remained a "Mitred Abbey" until the Dissolution of the Monasteries.
Archbishop Walter Giffard visited the monastery by commission in 1275, and several monks and the Abbot were charged with a list of faults including loose living (many complaints referred to misconduct with married women). In 1279 Archbishop William de Wickwane made a visitation and found fault with the Abbot, who did not observe the rule of St Benedict, was not singing mass, preaching or teaching, and seldom attended chapter. Things had not improved much in 1306 when Archbishop William Greenfield visited, and similar visitations in later years resulted in similar findings.
The community rebuilt the choir in the early fourteenth century, but in 1340, a fire destroyed the Chapter House, Dormitory, Treasury and part of the church. The damage was repaired and the decorated windows in the south aisle of the nave were installed.
In 1380–1 the community consisted of the abbot and twenty-five monks. In 1393 Pope Boniface IX granted an indulgence to pilgrims who contributed to the conservation of the chapel of the Holy Cross in the abbey.
The fifteenth century saw more alterations to the abbey. The perpendicular windows in the north transept and at the west end of the nave were added, as were the sedilia in the sanctuary. One of the final additions, in 1465, was the Lathom Chapel, dedicated to St Catherine, east of the north transept.
In the Valor Ecclesiasticus of 1535 the abbey was valued at £719 2s. 6¼d. The abbey surrendered on 6 December 1539, when the community comprised the Abbot and 23 monks. The abbot was pensioned off on £100 a year, the prior got £8, and the others between £6 6s. 8d. and £5 each.
Abbots of Selby
Post monastic history
For a time after the dissolution the church was unused, but in 1618 it became the parish church of Selby. During the English Civil War and the Commonwealth period the building suffered: the north transept window was destroyed and the statues on the brackets in the choir were demolished.
Like York Minster, the church rests on a base of sand and has suffered from subsidence. Many sections collapsed entirely during the seventeenth century, including the central tower in 1690 which destroyed the south transept. Th |
https://en.wikipedia.org/wiki/Old%20boy%20network | An old boy network (also known as old boys' network, old boys' club) is an informal system in which wealthy men with similar social or educational background help each other in business or personal matters. The term originally referred to social and business connections among former pupils of male-only elite schools, though the term is now also used to refer to any closed system of relationships that restrict opportunities to within the group. The term originated from much of the British upper-class having attended certain fee-charging public schools as boys, thus former pupils are "old boys".
This can apply to the network between the graduates of a single school regardless of their gender. It is also known as an old boys' society and is similar to an alumni association. It can also mean a network of social and business connections among the alumni of various prestigious schools. In popular language, old boy network or old boys' society has come to be used in reference to the preservation of social elites in general; such connections within the British Civil Service formed a primary theme in the BBC's satirical comedy series Yes Minister. The phrase "It's not what you know, it's who you know" is associated with this tradition.
Australia
In Australia, the term "Old Boy" is used to describe a male alumnus of some prestigious state and private schools. The term "Old Girl" is similarly used for a female alumna of such schools. Both "Old Girl" and "Old Boy" are sometimes used as a reference to someone's parents.
Canada
The term is also used in Canada, where the alumni of such schools as St. Andrew's College, Trinity College School, Crescent School, St George's School, Vancouver College, Stratford Hall, Bishop's College School, Hillfield Strathallan College, Collège Jean-de-Brébeuf, Lower Canada College, and Upper Canada College are known as Old Boys. Other influential private schools with powerful alumni networks, such as Appleby College or University of Toronto Schools, may have become co-ed but operate similarly, with large numbers of alumni working for the same organizations.
Finland
In Finland, the Finnish term hyvä veli -verkosto (literally dear brother network) is used to refer to the alleged informal network of men in high places whose members use their influence to pervert or circumvent official decision-making processes to the members' mutual benefit. As such, the term is pejorative.
The term derives from the salutation "Hyvä veli!", or "Dear brother!", traditionally used to open a letter to a not quite intimate friend. The implication is that since the elites of all fields are drawn from a fairly small pool of people who are mostly more or less acquainted with each other, they can and often do manage public and private affairs amongst themselves, off the record, and outside public scrutiny as they like. As the word "brother" implies, the network is usually presumed to consist of males, and thus the term is also sometimes use
https://en.wikipedia.org/wiki/IBM%20550 | The IBM 550 numerical interpreter was the first commercial machine made by IBM that read numerical data punched on cards and printed it across the top of each card. The 550 was introduced in 1930.
Information to be printed could be placed in any sequence via plugboard control panel selections. The machine operated at the rate of 75 cards a minute. The feed hopper had a capacity of 800 cards, and the stacker had a capacity of 1,000 cards.
Alphabetic and numeric characters could be printed by the Type 552 alphabetic interpreter, announced in 1937. It could process 60 cards per minute. The Type 552 was withdrawn in December 1957.
See also
Unit record equipment
IBM 557 Alphabetic Interpreter
https://en.wikipedia.org/wiki/List%20of%20programming%20languages%20by%20type | This is a list of notable programming languages, grouped by type.
There is no overarching classification scheme for programming languages. Thus, in many cases, a language is listed under multiple headings (in this regard, see "Multiparadigm languages" below).
Array languages
Array programming (also termed vector or multidimensional) languages generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays.
A+
Ada
Analytica
APL
Chapel
Dartmouth BASIC
Fortran (as of Fortran 90)
FreeMat
GAUSS
Interactive Data Language (IDL)
J
Julia
K
Mathematica (Wolfram language)
MATLAB
Octave
Q
R
S
Scilab
S-Lang
SequenceL
Speakeasy
X10
ZPL
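The defining trait described above, scalar operations applying transparently to whole arrays, can be mimicked in Python with a small vector type (a toy sketch; real array languages such as APL or MATLAB do far more, including multidimensional broadcasting):

```python
# Toy vector type that lifts + and * elementwise, the way array languages
# apply arithmetic to whole arrays; a bare scalar is broadcast to every
# element, mimicking how APL or MATLAB treat mixed scalar/array operands.

class Vec(list):
    def _lift(self, other, op):
        if isinstance(other, Vec):
            return Vec(op(a, b) for a, b in zip(self, other))
        return Vec(op(a, other) for a in self)  # scalar broadcast

    def __add__(self, other):
        return self._lift(other, lambda a, b: a + b)

    def __mul__(self, other):
        return self._lift(other, lambda a, b: a * b)

v = Vec([1, 2, 3])
print(v + Vec([10, 20, 30]))  # [11, 22, 33]
print(v * 2)                  # [2, 4, 6]
```

In a true array language no wrapper class is needed; the lifting is built into the language's arithmetic itself.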
Agent-oriented programming languages
Agent-oriented programming allows the developer to build, extend and use software agents, which are abstractions of objects that can message other agents.
Clojure
F#
GOAL
SARL
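The core idea, abstractions of objects that message other agents, can be sketched in plain Python (a toy illustration, not any of the languages listed above; all names are invented):

```python
# Minimal agent sketch: each agent has a mailbox, can send messages to other
# agents via a shared registry, and reacts to pending messages in step().

class Agent:
    def __init__(self, name, registry):
        self.name = name
        self.inbox = []
        self.registry = registry
        registry[name] = self

    def send(self, to, message):
        self.registry[to].inbox.append((self.name, message))

    def step(self):
        # React to one pending message, replying with an acknowledgement.
        if self.inbox:
            sender, message = self.inbox.pop(0)
            if message != "ack":
                self.send(sender, "ack")

agents = {}
a, b = Agent("a", agents), Agent("b", agents)
a.send("b", "hello")
b.step()          # b reads "hello" and replies
print(a.inbox)    # [('b', 'ack')]
```

Dedicated agent languages add what this sketch lacks: autonomous scheduling, goals and beliefs, and richer message semantics.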
Aspect-oriented programming languages
Aspect-oriented programming enables developers to add new functionality to code, known as "advice", without modifying that code itself; instead, a pointcut specifies the points in the program at which the advice is applied.
Ada
AspectJ
Groovy
Nemerle
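In Python, a decorator can play the role of "advice", with the choice of which functions to wrap standing in for the pointcut (a loose analogy, not AspectJ semantics):

```python
import functools

# "Advice": behaviour woven around a function without editing its body.
def logging_advice(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"before {func.__name__}{args}")
        result = func(*args, **kwargs)
        print(f"after {func.__name__} -> {result}")
        return result
    return wrapper

# Applying the decorator here acts like a pointcut matching this join point.
@logging_advice
def add(x, y):
    return x + y

add(2, 3)
# before add(2, 3)
# after add -> 5
```

Real AOP systems generalize this: a single pointcut expression can match many join points across a codebase, rather than being applied one function at a time.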
Assembly languages
Assembly languages directly correspond to a machine language (see below), so machine code instructions appear in a form understandable by humans, although there may not be a one-to-one mapping between an individual statement and an individual instruction. Assembly languages let programmers use symbolic addresses, which the assembler converts to absolute or relocatable addresses. Most assemblers also support macros and symbolic constants.
Authoring languages
An authoring language is a programming language designed for use by a non-computer expert to easily create tutorials, websites, and other interactive computer programs.
Darwin Information Typing Architecture (DITA)
Lasso
PILOT
TUTOR
Authorware
Concatenative programming languages
A concatenative programming language is a point-free computer programming language in which all expressions denote functions, and the juxtaposition of expressions denotes function composition. Concatenative programming replaces function application, which is common in other programming styles, with function composition as the default way to build subroutines.
Factor
Forth
jq (function application is also supported)
Joy
PostScript
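The juxtaposition-as-composition idea can be mimicked in Python by modelling words as functions from stack to stack (a toy sketch in the spirit of Forth or Factor, not real concatenative syntax):

```python
# Each "word" is a function taking a stack (a list) and returning a new one;
# running a program is just composing the words left to right.

def push(n):
    return lambda stack: stack + [n]

def add(stack):
    return stack[:-2] + [stack[-2] + stack[-1]]

def dup(stack):
    return stack + [stack[-1]]

def run(*words, stack=None):
    stack = stack if stack is not None else []
    for word in words:       # juxtaposition = function composition
        stack = word(stack)
    return stack

# The "program" push(2) push(3) add dup add computes (2 + 3) * 2.
print(run(push(2), push(3), add, dup, add))  # [10]
```

Note there are no named variables in the program itself: data flows implicitly through the stack, which is the point-free style the paragraph describes.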
Constraint programming languages
A constraint programming language is a declarative programming language where relationships between variables are expressed as constraints. Execution proceeds by attempting to find values for the variables which satisfy all declared constraints.
Claire
Constraint Handling Rules
CHIP
ECLiPSe
Kaleidoscope
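The declarative style can be illustrated with a brute-force sketch in Python (not how real constraint systems such as ECLiPSe work internally; they prune the search space far more cleverly):

```python
from itertools import product

def solve(domains, constraints):
    """Return every assignment of values (one per variable) that
    satisfies all of the declared constraints."""
    return [values for values in product(*domains)
            if all(c(*values) for c in constraints)]

# Declare relationships, not steps: x + y == 10 and x < y, with x, y in 0..9.
solutions = solve(
    domains=[range(10), range(10)],
    constraints=[lambda x, y: x + y == 10,
                 lambda x, y: x < y],
)
print(solutions)  # [(1, 9), (2, 8), (3, 7), (4, 6)]
```

The program states only what must hold; the generic search procedure, not the programmer, decides how values are found.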
Command-line interface languages
Command-line interface (CLI) languages are also called batch languages or job control languages. Examples:
4DOS (shell for IBM PCs)
4OS2 (shell for IBM PCs)
bash (the Bourne-Again shell from GNU |
https://en.wikipedia.org/wiki/List%20of%20programming%20languages | This is an index to notable programming languages, in current or historical use. Dialects of BASIC, esoteric programming languages, and markup languages are not included. A programming language does not need to be imperative or Turing-complete, but must be executable; the list therefore excludes markup languages such as HTML or XML, but includes domain-specific languages such as SQL and its dialects.
See also
Lists of programming languages
List of programming languages by type
Comparison of programming languages
List of BASIC dialects
List of markup languages
List of stylesheet languages
List of programming languages for artificial intelligence
History of programming languages
:Category:Programming languages |
https://en.wikipedia.org/wiki/Generational%20list%20of%20programming%20languages | This is a "genealogy" of programming languages. Languages are categorized under the ancestor language with the strongest influence. Those ancestor languages are listed in alphabetic order. Any such categorization has a large arbitrary element, since programming languages often incorporate major ideas from multiple sources.
ALGOL based
ALGOL (also under Fortran)
Atlas Autocode
ALGOL 58 (IAL, International Algorithmic Language)
MAD and GOM (Michigan Algorithm Decoder and Good Old MAD)
ALGOL 60
MAD/I
Simula (see also Simula based)
ALGOL 68
ALGOL W
Pascal
Ada
SPARK
PL/SQL
Turbo Pascal
Object Pascal (Delphi)
Free Pascal (FPC)
Kylix (same as Delphi, but for Linux)
Euclid
Concurrent Euclid
Turing
Turing Plus
Object Oriented Turing
Mesa
Modula-2
Modula-3
Oberon (Oberon-1)
Go (also under C)
Nim (also under Python)
Oberon-2
Component Pascal
Active Oberon
Zonnon
Oberon-07
Lua (also under Scheme and SNOBOL)
Ring (also under C, BASIC, Ruby, Python, C#)
SUE
Plus
CPL
BCPL
B
C (see also C based)
Julia (also under Lisp, Python, Ruby)
APL based
APL
A+
J (also under FL)
K (also under LISP)
NESL
PDL (also under Perl)
BASIC based
BASIC (also under Fortran II)
AmigaBASIC
AMOS BASIC
BASIC Stamp
Basic-256
BASIC09
BBC Basic
Blitz BASIC
Blitz3D
BlitzMax
BlitzPlus
Business Basic
Caché Basic
Chinese BASIC
COMAL
Commodore BASIC
DarkBASIC
DarkBASIC Professional
Euphoria
GLBasic
GW-BASIC
QuickBASIC
QBasic
Basic4GL
FreeBASIC
Liberty BASIC
Run BASIC
Visual Basic
VBScript
Visual Basic for Applications (VBA)
LotusScript
Visual Basic .NET
Small Basic
B4X
Basic for Qt
OpenOffice Basic
HBasic
Gambas
WinWrap Basic
WordBasic
QB64
PureBasic
REALbasic (Xojo)
Ring (also under C, Ruby, Python, C#, Lua)
thinBasic
TI-BASIC
True BASIC
Turbo Basic
PowerBASIC
wxBasic
SdlBasic
XBasic
YaBasic
Batch languages
MS-DOS Batch files
Winbatch
CLIST
IBM Job Control Language (JCL)
C based
C (also under BCPL)
Lua
Alef
C++
Rust (also under Cyclone, Haskell, and OCaml)
D
C#
Windows PowerShell (also under DCL, ksh, and Perl)
Ring (also under BASIC, Ruby, Python, Lua)
Cobra (class/object model and other features)
Java (see also Java based)
C--
Cyclone
Rust (also under C++, Haskell, and OCaml)
ColdFusion
Go (also under Oberon)
Harbour
Limbo
LPC
Pike
Objective-C (also under Smalltalk)
Swift (also under Ruby, Python, and Haskell)
PCASTL (also under Lisp)
Perl
Windows PowerShell (also under C#, DCL, and ksh)
S2
PHP
Ruby (also under Smalltalk)
Julia (also under Lisp, Python, ALGOL)
Ring (also under C, BASIC, Python, C#, Lua)
Swift (also under Objective-C, Python, and Haskell)
Crystal
Elixir (also under Erlang)
PDL (also under APL)
Raku
Python
Julia (also under Lisp, Ruby, ALGOL)
Nim (also under Oberon)
Ring (also under C, BASIC, Ruby, C#, Lua)
Swift (also under Ruby, Objective-C, and Haskell)
QuakeC
Ring (also under BASIC, Ruby, Python, C#, Lua)
tcsh (also under sh |
https://en.wikipedia.org/wiki/UNIVAC | UNIVAC (Universal Automatic Computer) was a line of electronic digital stored-program computers starting with the products of the Eckert–Mauchly Computer Corporation. Later the name was applied to a division of the Remington Rand company and successor organizations.
The BINAC, built by the Eckert–Mauchly Computer Corporation, was the first general-purpose computer for commercial use, but it was not a success. The last UNIVAC-badged computer was produced in 1986.
History and structure
J. Presper Eckert and John Mauchly built the ENIAC (Electronic Numerical Integrator and Computer) at the University of Pennsylvania's Moore School of Electrical Engineering between 1943 and 1946. A 1946 patent rights dispute with the university led Eckert and Mauchly to leave the Moore School and form the Electronic Control Company, later renamed Eckert–Mauchly Computer Corporation (EMCC), based in Philadelphia, Pennsylvania. The company first built a computer called BINAC (BINary Automatic Computer) for Northrop Aviation, which saw little if any use. It then began development of UNIVAC, which was initially intended for the Bureau of the Census, which paid for much of the development, and was then put into production.
With the death of EMCC's chairman and chief financial backer Henry L. Straus in a plane crash on October 25, 1949, EMCC was sold to typewriter maker Remington Rand on February 15, 1950. Eckert and Mauchly now reported to Leslie Groves, the retired army general who had previously managed building the Pentagon and the Manhattan Project, where he was exposed to ENIAC.
The most famous UNIVAC product was the UNIVAC I mainframe computer of 1951, which became known for predicting the outcome of the U.S. presidential election the following year: this incident is noteworthy because the computer correctly predicted an Eisenhower landslide over Adlai Stevenson, whereas the final Gallup poll had Eisenhower winning the popular vote 51–49 in a close contest.
The prediction led CBS's news boss in New York, Sigfried Mickelson, to believe the computer was in error, and he refused to allow the prediction to be read. Instead, the crew showed some staged theatrics that suggested the computer was not responsive, and announced it was predicting 8–7 odds for an Eisenhower win (the actual prediction was 100–1 in his favour).
When the predictions proved true – Eisenhower defeated Stevenson in a landslide, with UNIVAC coming within 3.5% of his popular vote total and four votes of his Electoral College total – Charles Collingwood, the on-air announcer, announced that they had failed to believe the earlier prediction.
The United States Army requested a UNIVAC computer from Congress in 1951. Colonel Wade Heavey explained to the Senate subcommittee that the national mobilization planning involved multiple industries and agencies: "This is a tremendous calculating process...there are equations that can not be solved by hand or by electrically operate |
https://en.wikipedia.org/wiki/Marriott%27s%20Way | The Marriott's Way is a footpath, cycle-path and bridleway in north Norfolk, England, between Norwich and Aylsham via Themelthorpe. It forms part of the National Cycle Network (NCN) (Route 1) and the red route of Norwich's Pedalways cycle path network. It is open to walkers, cyclists and horse riders. Its total length is 24.6 miles (39.5 km). It has a mixture of surfaces: tarmac, compacted gravel and earth. The name of the route originates from the chief engineer and manager of the Midland and Great Northern Joint Railway (M&GNJR), William Marriott, who held the position for 41 years.
Railway history
The path uses the trackbeds of two former railway lines: from Norwich to Themelthorpe and from Themelthorpe to Aylsham. The Themelthorpe to Norwich line was built in 1882 by the Lynn and Fakenham Railway Company, which was taken over by the M&GNJR in 1893 as part of a line that ran to Melton Constable; this line gave a through route to the Midlands. The Themelthorpe to Aylsham line was completed in 1893 by the Great Eastern Railway to provide a link to its other lines at Wroxham and County School, close to North Elmham.
The lines were never profitable. Freight services were largely based on farm products and the line closed to passenger traffic in 1959.
In 1960, the two lines were joined by the so-called Themelthorpe Curve, believed to be the sharpest bend on the British Rail network. It was built to maintain the important traffic in concrete products from Lenwade railway station; once production there ceased in 1985, the line was closed.
Route
At Norwich, the path begins to the north of the Barn Road and Barker Street Inner Ring Road roundabout; much of this area was part of the former Norwich City railway station. The path, which is also known as the Railway Path, follows approximately the course of the River Wensum which forms a boundary with the Train Wood.
After crossing a footbridge, the industrial landscape gives way to the water-meadows of the Sweetbriar Road Meadows before crossing the Wensum over an A-frame bridge at Hellesdon. Travelling in a northerly direction from the junction of Marlpit Lane and Hellesdon Road, through the site of the former Hellesdon railway station, the way soon crosses the tiny River Tud at Costessey. The tree-lined River Wensum can be seen to the east, as the path passes through the open countryside of the Wensum Valley. The river is crossed by means of an A-frame bridge (there are only three in Norfolk), before arriving at Drayton. The original Drayton railway station is now an industrial estate; the way follows a gravel path before crossing a minor road and entering a deep cutting to cross the busy A1067 road, close to Taverham. To the west is Thorpe Marriott, a large housing estate built in the late 20th century.
The tranquil path passes through extensive mixed woodland of the Mileplain plantation to cross the Wensum below Attlebridge. To the east of the way, the circular Winch's Way can be acc |
https://en.wikipedia.org/wiki/Information%20warfare | Information warfare (IW) is a concept involving the battlespace use and management of information and communication technology (ICT) in pursuit of a competitive advantage over an opponent; it is distinct from cyberwarfare, which attacks computers, software, and command-and-control systems. Information warfare is the manipulation of information trusted by a target, without the target's awareness, so that the target will make decisions against its own interest but in the interest of the one conducting information warfare. As a result, it is not clear when information warfare begins, ends, or how strong or destructive it is. Information warfare may involve the collection of tactical information, assurance that one's own information is valid, spreading of propaganda or disinformation to demoralize or manipulate the enemy and the public, undermining the quality of the opposing force's information, and denial of information-collection opportunities to opposing forces. Information warfare is closely linked to psychological warfare.
The United States Armed Forces' use of the term favors technology and hence tends to extend into the realms of electronic warfare, cyberwarfare, information assurance and computer network operations, attack, and defense. Other militaries use the much broader term Information Operations (IO) which, although making use of technology, focuses on the more human-related aspects of information use, including (amongst many others) social network analysis, decision analysis, and the human aspects of command and control.
Overview
Information warfare has been described as "the use of information to achieve our national objectives." According to NATO, "Information war is an operation conducted in order to gain an information advantage over the opponent."
Information warfare can take many forms:
Television, internet and radio transmission(s) can be jammed to disrupt communications, or hijacked for a disinformation campaign.
Logistics networks can be disabled.
Enemy communications networks can be disabled or spoofed, especially online social communities in modern days.
Stock exchange transactions can be sabotaged, either with electronic intervention, by leaking sensitive information or by placing disinformation.
The use of drones and other surveillance robots or webcams.
Communication management
Synthetic media
The organized use of social media and other online content-generation platforms can be used to influence public perceptions.
The United States Air Force has had Information Warfare Squadrons since the 1980s. In fact, the official mission of the U.S. Air Force is now "To fly, fight and win... in air, space and cyberspace", with the latter referring to its information warfare role.
As the U.S. Air Force often risks aircraft and aircrews to attack strategic enemy communications targets, remotely disabling such targets using software and other means can provide a safer alternative. In addition, disabling such networks electronically ( |
https://en.wikipedia.org/wiki/Sorting | Sorting refers to ordering data in an increasing or decreasing manner according to some linear relationship among the data items.
ordering: arranging items in a sequence ordered by some criterion;
categorizing: grouping items with similar properties.
Ordering items is the combination of categorizing them based on equivalent order, and ordering the categories themselves.
By type
Information or data
In computer science, arranging items in an ordered sequence is called "sorting". Sorting is a common operation in many applications, and efficient algorithms have been developed to perform it.
The most common uses of sorted sequences are:
making lookup or search efficient;
making merging of sequences efficient;
enabling processing of data in a defined order.
The opposite of sorting, rearranging a sequence of items in a random or meaningless order, is called shuffling.
For sorting, either a weak order, "should not come after", can be specified, or a strict weak order, "should come before" (specifying one defines also the other, the two are the complement of the inverse of each other, see operations on binary relations). For the sorting to be unique, these two are restricted to a total order and a strict total order, respectively.
Sorting n-tuples (depending on context also called e.g. records consisting of fields) can be done based on one or more of its components. More generally objects can be sorted based on a property. Such a component or property is called a sort key.
For example, the items are books, the sort key is the title, subject or author, and the order is alphabetical.
A new sort key can be created from two or more sort keys by lexicographical order. The first is then called the primary sort key, the second the secondary sort key, etc.
For example, addresses could be sorted using the city as primary sort key, and the street as secondary sort key.
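In Python, for instance, a tuple-valued key function gives exactly this primary/secondary behaviour, since tuples compare lexicographically (the addresses here are invented examples):

```python
# Sort addresses by city (primary sort key), then street (secondary sort key).
addresses = [
    ("York", "Micklegate"),
    ("Leeds", "Briggate"),
    ("York", "Coney Street"),
]
by_city_then_street = sorted(addresses, key=lambda a: (a[0], a[1]))
print(by_city_then_street)
# [('Leeds', 'Briggate'), ('York', 'Coney Street'), ('York', 'Micklegate')]
```

The street only breaks ties between entries whose cities compare equal, which is precisely the role of a secondary sort key.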
If the sort key values are totally ordered, the sort key defines a weak order of the items: items with the same sort key are equivalent with respect to sorting. See also stable sorting. If different items have different sort key values then this defines a unique order of the items.
A standard order is often called ascending (corresponding to the fact that the standard order of numbers is ascending, i.e. A to Z, 0 to 9), the reverse order descending (Z to A, 9 to 0). For dates and times, ascending means that earlier values precede later ones e.g. 1/1/2000 will sort ahead of 1/1/2001.
Common algorithms
Bubble sort: Exchange two adjacent elements if they are out of order. Repeat until the array is sorted.
Insertion sort: Scan successive elements for an out-of-order item, then insert the item in the proper place.
Selection sort: Find the smallest (or biggest) element in the array, and put it in the proper place. Swap it with the value in the first position. Repeat until array is sorted.
Quick sort: Partition the array into two segments. In the first segment, all elements are less than or equal to the |
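As an illustration, the insertion sort described above can be transcribed directly into Python:

```python
# Insertion sort: scan successive elements for an out-of-order item,
# then insert the item in the proper place.
def insertion_sort(items):
    items = list(items)               # sort a copy, leave the input intact
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]   # shift larger elements one slot right
            j -= 1
        items[j + 1] = current        # insert the item in its proper place
    return items

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

Each pass extends a sorted prefix by one element, which is why the algorithm is efficient on inputs that are already nearly in order.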
https://en.wikipedia.org/wiki/Man-in-the-middle%20attack | In cryptography and computer security, a man-in-the-middle (MITM) attack is a cyberattack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other, as the attacker has inserted themselves between the two parties.
One example of a MITM attack is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. In this scenario, the attacker must be able to intercept all relevant messages passing between the two victims and inject new ones. This is straightforward in many circumstances; for example, an attacker within the reception range of an unencrypted Wi-Fi access point could insert themselves as a man-in-the-middle.
As it aims to circumvent mutual authentication, a MITM attack can succeed only when the attacker impersonates each endpoint sufficiently well to satisfy their expectations. Most cryptographic protocols include some form of endpoint authentication specifically to prevent MITM attacks. For example, TLS can authenticate one or both parties using a mutually trusted certificate authority.
Example
Suppose Alice wishes to communicate with Bob. Meanwhile, Mallory wishes to intercept the conversation to eavesdrop (breaking confidentiality) with the option to deliver a false message to Bob under the guise of Alice (breaking non-repudiation). Mallory would perform a man-in-the-middle attack as described in the following sequence of events.
Alice sends a message to Bob, which is intercepted by Mallory:
Alice → Mallory: "Hi Bob, it's Alice. Give me your key."
Mallory relays this message to Bob; Bob cannot tell it is not really from Alice:
Mallory → Bob: "Hi Bob, it's Alice. Give me your key."
Bob responds with his encryption key:
Bob → Mallory: [Bob's key]
Mallory replaces Bob's key with her own, and relays this to Alice, claiming that it is Bob's key:
Mallory → Alice: [Mallory's key]
Alice encrypts a message with what she believes to be Bob's key, thinking that only Bob can read it:
Alice → Mallory: "Meet me at the bus stop!" [encrypted with Mallory's key]
However, because it was actually encrypted with Mallory's key, Mallory can decrypt it, read it, modify it (if desired), re-encrypt with Bob's key, and forward it to Bob:
Mallory → Bob: "Meet me at the van down by the river!" [encrypted with Bob's key]
Bob thinks that this message is a secure communication from Alice.
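The attack sequence above can be simulated with a toy Python sketch, using single-byte XOR as a stand-in for real public-key encryption (purely illustrative; XOR is symmetric, so a single value serves as both key halves here, and the key values are invented):

```python
def encrypt(message, key):          # toy cipher, NOT real cryptography
    return bytes(b ^ key for b in message.encode())

def decrypt(ciphertext, key):
    return bytes(b ^ key for b in ciphertext).decode()

bob_key, mallory_key = 0x42, 0x17

# Alice asks for Bob's key; Mallory intercepts and hands back her own.
key_alice_received = mallory_key

# Alice encrypts for "Bob" -- really under Mallory's key.
ciphertext = encrypt("Meet me at the bus stop!", key_alice_received)

# Mallory decrypts, reads, tampers, and re-encrypts under Bob's real key.
plaintext = decrypt(ciphertext, mallory_key)
forwarded = encrypt("Meet me at the van down by the river!", bob_key)

# Bob decrypts successfully and suspects nothing.
print(decrypt(forwarded, bob_key))  # Meet me at the van down by the river!
```

The defence discussed next, authenticating which key actually belongs to Bob, is exactly what this sketch omits: Alice never verifies the key she received.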
This example shows the need for Alice and Bob to have a means to ensure that they are truly each using each other's public keys, and not the public key of an attacker. Otherwise, such attacks are generally possible, in principle, against any message sent using public-key technolo |
https://en.wikipedia.org/wiki/GRASS%20%28programming%20language%29 | GRASS (GRAphics Symbiosis System) is a programming language created to script 2D vector graphics animations. GRASS was similar to BASIC in syntax, but added numerous instructions for specifying 2D object animation, including scaling, translation and rotation over time. These functions were directly supported by the Vector General 3D graphics terminal GRASS was written for. It quickly became a hit with the artistic community who were experimenting with the new medium of computer graphics, and is most famous for its use by Larry Cuba to create the original "attacking the Death Star will not be easy" animation in Star Wars (1977).
As part of a later partnership with Midway Games, the language was ported to the Midway's Z80-based Z Box. This machine used raster graphics and a form of sprites, which required extensive changes to support, along with animating color changes. This version was known as Zgrass.
History
GRASS
The original version of GRASS was developed by Tom DeFanti for his 1974 Ohio State University Ph.D. thesis. It was developed on a PDP-11/45 driving a Vector General 3DR display. As the name implies, this was a purely vector graphics machine. GRASS included a number of vector-drawing commands, and could organize collections of them into a hierarchy, applying the various animation effects to whole "trees" of the image at once (stored in arrays).
After graduation, DeFanti moved to the University of Illinois, Chicago Circle. There he joined up with Dan Sandin and together they formed the Circle Graphics Habitat (today known as the Electronic Visualization Laboratory, or EVL). Sandin had joined the university in 1971 and built the Sandin Image Processor, or IP. The IP was an analog computer which took two video inputs, mixed them, colored the results, and then re-created TV output. He described it as the video version of a Moog synthesizer.
DeFanti added the existing GRASS system as the input to the IP, creating the GRASS/Image Processor, which was used throughout the mid-1970s. In order to make the system more useful, DeFanti and Sandin added all sorts of "one-off" commands to the existing GRASS system, but these changes also made the language considerably more idiosyncratic. In 1977 another member of the Habitat, Nola Donato, re-designed many of GRASS's control structures into more general forms, resulting in the considerably cleaner GRASS3.
Larry Cuba's Star Wars work is based on semi-automated filming of a GRASS system running on a Vector General 3D terminal. The VG3D had internal hardware that performed basic transformations (scaling, rotation, etc.) in real time without interacting with the computer. Only while new scenery is being presented does the much slower communication with the GRASS language take place. This can be seen in the sequence: the initial sections of the film show the Death Star being rotated and scaled very rapidly, while the later sections simulating flight down the trench require |
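The scale, rotate, and translate operations that the VG3D performed in hardware are ordinary 2D affine transforms. A minimal sketch (the function name and signature are my own, not GRASS syntax):

```python
import math

def transform(points, angle_deg=0.0, scale=1.0, dx=0.0, dy=0.0):
    """Rotate, scale, then translate a list of (x, y) points --
    the kind of per-frame operation the VG3D hardware applied."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(scale * (x * cos_a - y * sin_a) + dx,
             scale * (x * sin_a + y * cos_a) + dy)
            for x, y in points]

# One animation frame: a unit square rotated 90 degrees and doubled in size.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
frame = transform(square, angle_deg=90, scale=2)
```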
https://en.wikipedia.org/wiki/Bally%20Astrocade | The Bally Astrocade (also known as Bally Arcade or initially as Bally ABA-1000) is a second-generation home video game console and simple computer system designed by a team at Midway, at that time the videogame division of Bally. It was originally announced as the "Bally Home Library Computer" in October 1977 and initially made available for mail order in December 1977. But due to production delays, the units were first released to stores in April 1978 and its branding changed to "Bally Professional Arcade". It was marketed only for a limited time before Bally decided to exit the market. The rights were later picked up by a third-party company, who re-released it and sold it until around 1984. The Astrocade is particularly notable for its very powerful graphics capabilities for the time of release, and for the difficulty in accessing those capabilities.
History
Nutting and Midway
In the late 1970s, Midway contracted Dave Nutting Associates to design a video display chip that could be used in all of their videogame systems, from standup arcade games to a home computer system. The system Nutting delivered was used in most of Midway's classic arcade games of the era, including Gorf and Wizard of Wor. The chip set supported what was at that time a relatively high resolution of 320×204 in four colours per line, although accessing this mode required memory that could be read at a faster rate than the common 2 MHz dynamic RAM of the era.
Console use
Originally referred to as the Bally Home Library Computer, it was released in 1977 but available only through mail order. Delays in the production meant none of the units actually shipped until 1978, and by this time the machine had been renamed the Bally Professional Arcade. In this form it sold mostly at computer stores and had little retail exposure (unlike the Atari VCS). In 1979, Bally grew less interested in the arcade market and decided to sell off their Consumer Products Division, including development and production of the game console.
At about the same time, a third-party group had been unsuccessfully attempting to bring their own console design to market as the Astrovision. A corporate buyer from Montgomery Ward who was in charge of the Bally system put the two groups in contact, and a deal was eventually arranged. In 1981 they re-released the unit with the BASIC cartridge included for free, this time known as the Bally Computer System, with the name changing again, in 1982, to Astrocade. It sold under this name until the video game crash of 1983, and then disappeared around 1985.
Midway had long been planning to release an expansion system for the unit, known as the ZGRASS-100. The system was being developed by a group of computer artists at the University of Illinois at Chicago known as the 'Circle Graphics Habitat', along with programmers at Nutting. Midway felt that such a system, in an external box, would make the Astrocade more interesting to the market. However it was still not |
https://en.wikipedia.org/wiki/Altair%208800 | The Altair 8800 is a microcomputer designed in 1974 by MITS and based on the Intel 8080 CPU. Interest grew quickly after it was featured on the cover of the January 1975 issue of Popular Electronics and was sold by mail order through advertisements there, in Radio-Electronics, and in other hobbyist magazines. According to Harry Garland, the Altair 8800 was the product that catalyzed the microcomputer revolution of the 1970s. It was the first commercially successful personal computer. The computer bus designed for the Altair was to become a de facto standard in the form of the S-100 bus, and the first programming language for the machine was Microsoft's founding product, Altair BASIC.
History
While serving at the Air Force Weapons Laboratory at Kirtland Air Force Base, Ed Roberts and Forrest M. Mims III decided to use their electronics background to produce small kits for model rocket hobbyists. In 1969, Roberts and Mims, along with Stan Cagle and Robert Zaller, founded Micro Instrumentation and Telemetry Systems (MITS) in Roberts' garage in Albuquerque, New Mexico, and started selling radio transmitters and instruments for model rockets.
Calculators
The model rocket kits were a modest success and MITS wanted to try a kit that would appeal to more hobbyists. The November 1970 issue of Popular Electronics featured the Opticom, a kit from MITS that would send voice over an LED light beam. As Mims and Cagle were losing interest in the kit business, Roberts bought his partners out, then began developing a calculator kit. Electronic Arrays had just announced the EAS100, a set of six large scale integrated (LSI) circuit chips that would make a four-function calculator. The MITS 816 calculator kit used the chipset and was featured on the November 1971 cover of Popular Electronics. This calculator kit sold for , or $275 assembled. Forrest Mims wrote the assembly manual for this kit and many others over the next several years. As payment for each manual he often accepted a copy of the kit.
The calculator was successful and was followed by several improved models. The MITS 1440 calculator was featured in the July 1973 issues of Radio-Electronics. It had a 14-digit display, memory, and square root function. The kit sold for and the assembled version was . MITS later developed a programmer unit that would connect to the 816 or 1440 calculator and allow programs of up to 256 steps.
In 1972, Texas Instruments developed its own calculator chip and started selling complete calculators at less than half the price of other commercial models. MITS and many other companies were devastated by this, and Roberts struggled to reduce his quarter-million-dollar debt.
Test equipment
In addition to calculators, MITS made a line of test equipment kits. These included an IC tester, a waveform generator, a digital voltmeter, and several other instruments. To keep up with the demand, MITS moved into a larger building at 6328 Linn NE in Albuquerque in 1973. They installed |
https://en.wikipedia.org/wiki/River%20Yare | The River Yare is a river in the English county of Norfolk. In its lower reaches it is one of the principal navigable waterways of The Broads and connects with the rest of the network.
The river rises south of Dereham, to the west of the village of Shipdham. Above its confluence with a tributary stream from Garvestone it is known as the Blackwater River. From there it flows in a generally eastward direction passing Barnham Broom and is joined by the River Tiffey before reaching Bawburgh. It then skirts the southern fringes of the city of Norwich, passing through Colney, Cringleford, Lakenham and Trowse. At Whitlingham it is joined by the River Wensum and although the Wensum is the larger and longer of the two, the river downstream of their confluence continues to be called the Yare. Flowing eastward into The Broads it passes the villages of Bramerton, Surlingham, Rockland St. Mary and Cantley. Just before Reedham at Hardley Cross (erected in 1676) it is joined by the River Chet. The cross marks the ancient boundary between the City of Norwich and Borough of Great Yarmouth. Beyond Reedham the river passes the famously isolated marshland settlement of Berney Arms before entering the tidal lake of Breydon Water. Here the Yare is joined by the Rivers Waveney and Bure and finally enters the North Sea at Gorleston, Great Yarmouth.
The Yare is the frequent subject of landscape paintings by members of the early 19th century Norwich School of artists. The National Gallery of Art in Washington D.C. contains an oil painting by John Crome entitled Moonlight on the Yare. Joseph Stannard depicted the river in Thorpe Water Frolic, Afternoon (1824) and Boats on the Yare near Bramerton (1828) which is in the Fitzwilliam Museum, Cambridge.
The river is navigable to small coastal vessels from Norwich to the sea, and in former times carried significant commercial traffic to that city. At Reedham the river is joined by the Haddiscoe Cut, a canal which provides a direct navigable link to the River Waveney at Haddiscoe avoiding Breydon Water.
Navigation
The river provides a navigable link between Norwich and the North Sea, but silting has been a long-standing problem. In 1698, an Act of Parliament was obtained which allowed duty to be collected for any coal traffic using the river. The money raised was to pay for improvements to the course of the river and to the harbour at Great Yarmouth, but the majority of it went towards harbour improvements, and little improvement of the river occurred. Three more acts attempted to rectify the situation, but the river continued to be neglected. A fifth act, obtained in 1772, sought to address the problem in a different way, and specified how the tolls were to be used. 15 per cent was to be given to Norwich for river improvements between the city and Hardley Cross, 25 per cent was given to Yarmouth for improvements to the lower river between Hardley Cross and the town, with a further 40 per cent set aside for maintenance of Yarm |
https://en.wikipedia.org/wiki/XML%20Metadata%20Interchange | The XML Metadata Interchange (XMI) is an Object Management Group (OMG) standard for exchanging metadata information via Extensible Markup Language (XML).
It can be used for any metadata whose metamodel can be expressed in Meta-Object Facility (MOF), a platform-independent model (PIM).
The most common use of XMI is as an interchange format for UML models, although it can also be used for serialization of models of other languages (metamodels).
Overview
In the OMG vision of modeling, data is split into abstract models and concrete models. The abstract models represent the semantic information, whereas the concrete models represent visual diagrams. Abstract models are instances of arbitrary MOF-based modeling languages such as UML or SysML. For diagrams, the Diagram Interchange (DI, XMI[DI]) standard is used. There are currently several incompatibilities between different modeling tool vendor implementations of XMI, even between interchange of abstract model data. The usage of Diagram Interchange is almost nonexistent. This means exchanging files between UML modeling tools using XMI is rarely possible.
One purpose of XML Metadata Interchange (XMI) is to enable easy interchange of metadata between UML-based modeling tools and MOF-based metadata repositories in distributed heterogeneous environments. XMI is also commonly used as the medium by which models are passed from modeling tools to software generation tools as part of model-driven engineering.
Examples of XMI, and lists of the XML tags that make up XMI-formatted files, are available in the version 2.5.1 specification document.
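To make the interchange concrete, a minimal, hypothetical XMI 2.x document serializing a single UML class could be parsed as below. The namespace URIs and element names follow the general XMI pattern but are illustrative and should be checked against the specification:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace URIs in the general XMI 2.x style (assumptions,
# not copied from the specification).
XMI_NS = "http://www.omg.org/spec/XMI/20131001"
UML_NS = "http://www.omg.org/spec/UML/20131001"

doc = f"""<xmi:XMI xmlns:xmi="{XMI_NS}" xmlns:uml="{UML_NS}">
  <uml:Model xmi:id="m1" name="Example">
    <packagedElement xmi:type="uml:Class" xmi:id="c1" name="Customer"/>
  </uml:Model>
</xmi:XMI>"""

root = ET.fromstring(doc)
model = root.find(f"{{{UML_NS}}}Model")
class_names = [e.get("name") for e in model.findall("packagedElement")]
```

The `xmi:id` attributes give each model element a stable identity, which is what lets two tools exchange references to the same element.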
Integration of industry standards
XMI integrates 4 industry standards:
XML – Extensible Markup Language, a W3C standard.
UML – Unified Modeling Language, an OMG modeling standard.
MOF – Meta Object Facility, an OMG language for specifying metamodels.
MOF – Mapping to XMI
The integration of these 4 standards into XMI allows tool developers of distributed systems to share object models and other metadata.
Several versions of XMI have been created: 1.0, 1.1, 1.2, 2.0, 2.1, 2.1.1, 2.4, 2.4.1, 2.4.2, and 2.5.1. The 2.x versions are radically different from the 1.x series.
The Diagram Definition OMG project is another alternative for metadata interchange, which can also express the layout and graphical representation.
XMI is an international standard:
XMI 2.4.2
ISO/IEC 19509:2014 Information technology — XML Metadata Interchange (XMI)
XMI 2.0
ISO/IEC 19503:2005 Information technology — XML Metadata Interchange (XMI)
See also
Common Warehouse Metamodel
Web Ontology Language
Generic Modeling Environment (GME)
Eclipse Modeling Framework (EMF)
Domain Specific Language (DSL)
Domain-specific modelling (DSM)
Meta-modeling
Meta-Object Facility (MOF)
References
External links
OMG XMI Specification
XML-based standards
Unified Modeling Language
Systems Modeling Language
ISO standards |
https://en.wikipedia.org/wiki/Narrowboat | A narrowboat is a particular type of canal boat, built to fit the narrow locks of the United Kingdom. The UK's canal system provided a nationwide transport network during the Industrial Revolution, but with the advent of the railways, commercial canal traffic gradually diminished and the last regular long-distance transportation of goods by canal had virtually disappeared by 1970. However, some commercial traffic continued. From the 1970s onward narrowboats were gradually converted into permanent residences or holiday lettings. Currently, about 8580 narrowboats are registered as 'permanent homes' on Britain's waterway system and represent a growing alternative community living on semi-permanent moorings or continuously cruising.
For any boat to enter a narrow lock, it must be under wide, so most narrowboats are nominally wide. A narrowboat's maximum length is generally , as anything longer will be unable to navigate much of the British canal network, because the nominal maximum length of locks is . Some locks are shorter than , so to access the entire canal network the maximum length is .
The first narrow boats played a key role in the economic changes of the British Industrial Revolution. They were wooden boats drawn by a horse walking on the canal towpath led by a crew member. Horses were gradually replaced by steam and then diesel engines. By the end of the 19th century it was common practice to paint roses and castles on narrowboats and their fixtures and fittings. This tradition has continued into the 21st century, but not all narrowboats have such decorations.
Modern narrowboats are used for holidays, weekend breaks, touring, or as permanent or part-time residences. Usually, they have steel hulls and a steel superstructure. The hull's flat base is usually 10 mm thick, the hull sides 6 mm or 8 mm, the cabin sides 6 mm, and the roof 4 mm or 6 mm. The numbers of boats have been rising, with the number of licensed boats (not all of them narrowboats) on canals and rivers managed by the Canal & River Trust (CRT) estimated at about 27,000 in 2006; by 2019, this had risen to 34,367. Although a small number of steel narrowboats dispense with the need for a rear steering deck entirely, by imitating some river cruisers in providing wheel steering from a central cockpit, most narrowboats' steering is by a tiller on the stern. There are three major configurations for the stern: traditional stern, cruiser stern and semi-traditional stern.
Terminology
The narrowboat (one word) definition in the Oxford English Dictionary is:
Earlier quotations listed in the Oxford English Dictionary use the term "narrow boat"; the most recent, a quotation from an advertisement in Canal Boat & Inland Waterways in 1998, uses "narrowboat".
The single word "narrowboat" has been adopted by authorities such as the Canal and River Trust, Scottish Canals and the authoritative magazine Waterways World to refer to all boats built in the style and tradition |
https://en.wikipedia.org/wiki/Algorithmic%20efficiency | In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important.
For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort sorts the list in time proportional to the number of elements squared (O(n²), see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice.
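The trade-off can be seen directly in code. A minimal bubble sort sketch, contrasted with Python's built-in `sorted()`, which is in fact an implementation of Timsort:

```python
def bubble_sort(items):
    """O(n^2) comparisons in the worst case, but only O(1) extra memory:
    repeatedly swap adjacent out-of-order pairs."""
    a = list(items)              # work on a copy
    n = len(a)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # already sorted: stop early
            break
    return a

# Python's built-in sorted() is Timsort: O(n log n) time, O(n) extra space.
data = [5, 2, 9, 1, 5, 6]
```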
Background
The importance of efficiency with respect to time was emphasised by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine:
"In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"
Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was then to use the fastest algorithm that could fit in the available memory.
Modern computers are significantly faster than the early computers, and have a much larger amount of memory available (Gigabytes instead of Kilobytes). Nevertheless, Donald Knuth emphasised that efficiency is still an important consideration:
"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"
Overview
An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer |
https://en.wikipedia.org/wiki/Back%20Orifice%202000 | Back Orifice 2000 (often shortened to BO2k) is a computer program designed for remote system administration. It enables a user to control a computer running the Microsoft Windows operating system from a remote location. The name is a pun on Microsoft BackOffice Server software.
BO2k debuted on July 10, 1999, at DEF CON 7, a computer security convention in Las Vegas, Nevada. It was originally written by Dildog, a member of US hacker group Cult of the Dead Cow. It was a successor to the cDc's Back Orifice remote administration tool, released the previous year. , BO2k was being actively developed.
Whereas the original Back Orifice was limited to the Windows 95 and Windows 98 operating systems, BO2k also supports Windows NT, Windows 2000 and Windows XP. Some BO2k client functionality has also been implemented for Linux systems. In addition, BO2k was released as free software, which allows one to port it to other operating systems.
Plugins
BO2k has a plugin architecture. The optional plugins include:
communication encryption with AES, Serpent, CAST-256, IDEA or Blowfish encryption algorithms
network address altering notification by email and CGI
total remote file control
remote Windows registry editing
watching the desktop remotely by streaming video
remote control of both the keyboard and the mouse
a chat, allowing the administrator to talk with users
option to hide things from system (rootkit behavior, based on FU Rootkit)
accessing systems hidden by a firewall (the administrated system can form a connection outward to the administrator's computer. Optionally, to escape even more connection problems, the communication can be done by a web browser the user uses to surf the web.)
forming connection chains through a number of administrated systems
client-less remote administration over IRC
on-line keypress recording
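A plugin architecture of this kind typically maps command names to dynamically registered handlers. A hypothetical minimal sketch of the pattern (names are illustrative, not taken from the BO2k source):

```python
# Minimal plugin-registry pattern: handlers are registered at runtime and
# dispatched by name, so new capabilities can be added without changing
# the core. Purely illustrative; not BO2k's actual design.
class PluginRegistry:
    def __init__(self):
        self._plugins = {}

    def register(self, name, handler):
        self._plugins[name] = handler

    def dispatch(self, name, *args):
        if name not in self._plugins:
            raise KeyError(f"no plugin registered for {name!r}")
        return self._plugins[name](*args)

registry = PluginRegistry()
registry.register("echo", lambda msg: msg.upper())
```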
Controversy
Back Orifice and Back Orifice 2000 are widely regarded as malware, tools intended to be used as a combined rootkit and backdoor. For example, at present many antivirus software packages identify them as Trojan horses. This classification is justified by the fact that BO2k can be installed by a Trojan horse, in cases where it is used by an unauthorized user, unbeknownst to the system administrator.
There are several reasons for this, including: the association with cDc; the tone of the initial product launch at DEF CON (including that the first distribution of BO2k by cDc was infected by the CIH virus); the existence of tools (such as "Silk Rope") designed to add BO2k dropper capability to self-propagating malware; and the fact that it has actually widely been used for malicious purposes. The most common criticism is that BO2k installs and operates silently, without warning a logged-on user that remote administration or surveillance is taking place. According to the official BO2k documentation, the person running the BO2k server is not supposed to know that it is running on their computer.
BO2k developers co |
https://en.wikipedia.org/wiki/Parallel%20computing | Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU). In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks.
In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance.
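A race condition of the kind mentioned arises when several threads perform an unsynchronized read-modify-write on shared state; the sketch below shows the lock that prevents it (Python, illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:            # without this lock, the read-modify-write
            counter += 1      # of `counter` can interleave and lose updates

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around each update, counter is exactly 4 * 10_000.
```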
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that it is limited by the fraction of time for which the parallelization can be utilised.
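Amdahl's law can be written S = 1 / ((1 − p) + p/N), where p is the parallelizable fraction of the program and N the number of processors. A small worked sketch:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speed-up predicted by Amdahl's law for parallel fraction p
    of the running time, executed on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 90% parallelizable gains less and less per processor,
# and can never exceed a 10x speed-up (the 1 / (1 - p) limit):
speedup_10 = amdahl_speedup(0.9, 10)        # about 5.26
speedup_1000 = amdahl_speedup(0.9, 1000)    # about 9.91
```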
Background
Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that inst |
https://en.wikipedia.org/wiki/ASCI%20White | ASCI White was a supercomputer at the Lawrence Livermore National Laboratory in California, which was briefly the fastest supercomputer in the world.
It was a computer cluster based on IBM's commercial RS/6000 SP computer. 512 nodes were interconnected for ASCI White, with each node containing sixteen 375 MHz IBM POWER3-II processors. In total, ASCI White had 8,192 processors, 6 terabytes (TB) of memory, and 160 TB of disk storage. It was almost exclusively used for large-scale computations requiring dozens, hundreds, or thousands of processors. The computer weighed 106 tons and consumed 3 MW of electricity, with a further 3 MW needed for cooling. It had a theoretical processing speed of 12.3 teraFLOPS (TFLOPS). A single modern 4U rackmount server could match these specifications while weighing under 50 kg and consuming under 2 kW of power. The system ran IBM's AIX operating system.
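The quoted peak figure can be reproduced from the processor count and clock rate, assuming 4 floating-point operations per cycle from the POWER3-II's two fused multiply-add units (that per-cycle figure is an assumption, not stated above):

```python
# Reproducing the 12.3 TFLOPS peak from the specifications.
# flops_per_cycle = 4 is an assumption (two fused multiply-add units).
processors = 8192          # 512 nodes x 16 processors each
clock_hz = 375e6           # 375 MHz
flops_per_cycle = 4

peak_tflops = processors * clock_hz * flops_per_cycle / 1e12
```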
ASCI White was made up of three individual systems: the 512-node White, the 28-node Ice, and the 68-node Frost.
The system was built in Poughkeepsie, New York. Completed in June 2000, it was transported to specially built facilities in California and officially dedicated on August 15, 2001. Its peak performance of 12.3 TFLOPS was not achieved in the widely accepted LINPACK tests. The system cost US$110 million.
It was built as stage three of the Accelerated Strategic Computing Initiative (ASCI) started by the U.S. Department of Energy and the National Nuclear Security Administration to build a simulator to replace live WMD testing following the moratorium on testing started by President George H. W. Bush in 1992 and extended by Bill Clinton in 1993.
The machine was decommissioned beginning July 27, 2006.
References
Cluster computing
Nuclear stockpile stewardship
Lawrence Livermore National Laboratory
IBM supercomputers
64-bit computers |
https://en.wikipedia.org/wiki/Westwood | Westwood may refer to:
Companies and brands
Westwood, Baillie, 19th-century engineering and shipbuilding company, London
Westwood One (1976–2011), a former American radio network based in New York City
Westwood One, an American radio and media broadcasting company
Westwood Studios, an American video game developer, defunct since 2003
Westwood, a brand of American manufacturer Ariens
Educational institutions
Westwood College, several campuses in the United States
Westwood Elementary School (Prince George), British Columbia
Westwood Elementary School (Coquitlam), British Columbia
Westwood High School (disambiguation), several schools
Westwood International School, Gaborone, Botswana
Westwood Regional School District, Bergen County, New Jersey
Westwood Secondary School, Singapore
Westwood Secondary School (now Lincoln M. Alexander Secondary School), Mississauga, Ontario
People
Westwood (surname)
Baron Westwood, a title in the British peerage
Places
Australia
Westwood, Queensland, a town in the Rockhampton Region
Westwood, Tasmania
Canada
Westwood, Asphodel-Norwood, Ontario
Westwood, Edmonton, a neighbourhood in Edmonton, Alberta
Westwood Plateau, an area of Coquitlam, British Columbia
Westwood Motorsport Park, a race track in Coquitlam, British Columbia
Westwood, St. James-Assiniboia, Winnipeg, Manitoba
Port Moody-Westwood, a provincial electoral district for the Legislative Assembly of British Columbia
Port Moody—Westwood—Port Coquitlam, a federal electoral district in British Columbia
England
Westwood, Peterborough, Cambridgeshire
Westwood, Greater Manchester, a district of Oldham
Westwood, Kent
Westwood Cross shopping centre
Westwood House, country house near Droitwich, Worcestershire
Westwood, Somerset, village in West Bagborough parish
Westwood, Southfleet, Kent
Westwood, Wiltshire
High Westwood, County Durham
Low Westwood, County Durham
Westwood Heath, Coventry, West Midlands
Westwood (Campus), University of Warwick
Westwoodside, North Lincolnshire
Stretton Westwood and Bourton Westwood, Shropshire
Scotland
Westwood, East Kilbride, South Lanarkshire
United States
Westwood (Uniontown, Alabama), an 1836 historic district on the National Register of Historic Places
Westwood, California
Westwood, Los Angeles, a neighborhood
Westwood Village Memorial Park Cemetery
Westwood, Indiana
Westwood, Iowa
Westwood, Kansas
Westwood, Boyd County, Kentucky
Westwood, Jefferson County, Kentucky
Westwood, Massachusetts
Westwood, Memphis, Tennessee, a neighborhood
Westwood, Michigan
Westwood, Missouri
Westwood, New Jersey
Westwood, Cincinnati, a neighborhood
Westwood, Cambria County, Pennsylvania, a census-designated place
Westwood, Chester County, Pennsylvania, a census-designated place
Westwood (Pittsburgh), Pennsylvania, a neighborhood
West Wood, Utah, a census-designated place
Westwood, Bainbridge Island, Washington
Westwood, Seattle
Westwood (subdivision), Houston
Westwood Highlands, San Francisco, a neighborhood
Westwood Hills, Kansas
Westwood |
https://en.wikipedia.org/wiki/Matthew%20Cook | Matthew Cook (born February 7, 1970) is a mathematician and computer scientist who is best known for having proved Stephen Wolfram's conjecture that the Rule 110 cellular automaton is Turing-complete.
Biography
Cook was born in Morgantown, West Virginia and grew up in Evanston, Illinois. He completed his undergraduate studies at the University of Illinois and the Budapest Semesters in Mathematics program. In 1987, Cook qualified as a member of the six-person US team to the International Mathematical Olympiad and won a bronze medal. In 1990, Cook went to work for Wolfram Research, makers of the computer algebra system Mathematica. He did his doctoral work in Computation and Neural Systems at Caltech from 1999 to 2005. He is now at the Institute of Neuroinformatics at Zurich in Switzerland.
Work with Stephen Wolfram
In the 1990s Cook worked as a research assistant to Stephen Wolfram, assisting with work on Wolfram's book, A New Kind of Science. Among other things, he developed a proof showing that the Rule 110 cellular automaton is Turing-complete.
Cook presented his proof at the Santa Fe Institute conference CA98 before the publishing of Wolfram's book—an action that led Wolfram Research to accuse Cook of violating his NDA and resulted in the blocking of the publication of the proof in the conference proceedings.
A New Kind of Science was released in 2002 with an outline of the proof. In 2004, Cook published his proof in Wolfram's journal Complex Systems.
References
External links
Personal web site
Site at INI Zurich
20th-century American mathematicians
21st-century American mathematicians
California Institute of Technology alumni
Cellular automatists
People from Evanston, Illinois
1970 births
Living people
International Mathematical Olympiad participants
Mathematicians from Illinois |
https://en.wikipedia.org/wiki/Aspect-oriented%20programming | In computing, aspect-oriented programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding behavior to existing code (an advice) without modifying the code itself, instead separately specifying which code is modified via a "pointcut" specification, such as "log all function calls when the function's name begins with 'set'". This allows behaviors that are not central to the business logic (such as logging) to be added to a program without cluttering the code that is core to the functionality.
AOP includes programming methods and tools that support the modularization of concerns at the level of the source code, while aspect-oriented software development refers to a whole engineering discipline.
Aspect-oriented programming entails breaking down program logic into distinct parts (so-called concerns, cohesive areas of functionality). Nearly all programming paradigms support some level of grouping and encapsulation of concerns into separate, independent entities by providing abstractions (e.g., functions, procedures, modules, classes, methods) that can be used for implementing, abstracting and composing these concerns. Some concerns "cut across" multiple abstractions in a program, and defy these forms of implementation. These concerns are called cross-cutting concerns or horizontal concerns.
Logging exemplifies a crosscutting concern because a logging strategy must affect every logged part of the system. Logging thereby crosscuts all logged classes and methods.
All AOP implementations have some crosscutting expressions that encapsulate each concern in one place. The difference between implementations lies in the power, safety, and usability of the constructs provided. For example, interceptors that specify the methods to intercept express a limited form of crosscutting, without much support for type-safety or debugging. AspectJ has a number of such expressions and encapsulates them in a special class, called an aspect. For example, an aspect can alter the behavior of the base code (the non-aspect part of a program) by applying advice (additional behavior) at various join points (points in a program) specified in a quantification or query called a pointcut (that detects whether a given join point matches). An aspect can also make binary-compatible structural changes to other classes, such as adding members or parents.
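The advice/pointcut idea can be sketched outside AspectJ. The following Python class decorator is an illustrative analogy only (not an AspectJ construct): the name test plays the role of the pointcut "function's name begins with 'set'", and the logging wrapper plays the role of the advice. The `Account` class is hypothetical.

```python
import functools

def log_calls(cls):
    """Crude 'aspect': wrap every method whose name begins with 'set'."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and name.startswith("set"):  # "pointcut": name test
            def make_wrapper(fn):
                @functools.wraps(fn)
                def wrapper(*args, **kwargs):          # "advice": log, then proceed
                    print(f"calling {fn.__name__}")
                    return fn(*args, **kwargs)
                return wrapper
            setattr(cls, name, make_wrapper(attr))
    return cls

@log_calls
class Account:
    def __init__(self):
        self.balance = 0
    def set_balance(self, v):
        self.balance = v

acct = Account()
acct.set_balance(10)   # prints "calling set_balance"
```

The business logic of `Account` stays uncluttered; the logging concern lives entirely in the decorator, which is the modularity benefit the paragraph above describes.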
History
AOP has several direct antecedents: reflection and metaobject protocols, subject-oriented programming, Composition Filters, and Adaptive Programming.
Gregor Kiczales and colleagues at Xerox PARC developed the explicit concept of AOP and followed this with the AspectJ AOP extension to Java. IBM's research team pursued a tool approach over a language design approach and in 2001 proposed Hyper/J and the Concern Manipulation Environment, which have not seen wide usage.
The examples in this article use AspectJ.
The Microsoft Transaction Server is co |
https://en.wikipedia.org/wiki/Network%20Rail | Network Rail Limited is the owner (via its subsidiary Network Rail Infrastructure Limited, which was known as Railtrack plc before 2002) and infrastructure manager of most of the railway network in Great Britain. Network Rail is an "arm's length" public body of the Department for Transport with no shareholders, which reinvests its income in the railways.
Network Rail's main customers are the private train operating companies (TOCs), responsible for passenger transport, and freight operating companies (FOCs), who provide train services on the infrastructure that the company owns and maintains. Since 1 September 2014, Network Rail has been classified as a "public sector body".
To cope with rapidly increasing passenger numbers, Network Rail has been undertaking a £38 billion programme of upgrades to the network, including Crossrail, electrification of lines and upgrading Thameslink.
In May 2021, the Government announced its intent to replace Network Rail in 2023 with a new public body called Great British Railways. In 2022 it was announced that Great British Railways would not replace Network Rail until 2024.
History
Background
Britain's railway system was built by private companies, but it was nationalised by the Transport Act 1947 and run by British Railways until re-privatisation which was begun in 1994 and completed in 1997. As a part of the privatisation process, the railway infrastructure, passenger and freight services were separated into separate organisations. Between 1994 and 2002, the infrastructure was owned and operated by Railtrack, a privately-owned company.
A spate of accidents, including the Southall rail crash in 1997 and the Ladbroke Grove rail crash in 1999, highlighted the negative consequences that the fragmentation of the railway network had introduced to both safety and maintenance procedures. Railtrack was severely criticised both for its performance on infrastructure improvement and for its safety record. The Hatfield train crash on 17 October 2000 was a defining moment in the collapse of Railtrack. The immediate major repairs undertaken across the whole British railway network were estimated to have cost in the order of £580 million, and Railtrack had no idea how many more 'Hatfields' were waiting to happen because it had lost considerable in-house engineering skill following the sale or closure of many of the engineering and maintenance functions of British Rail to external companies; nor did the company have any way of assessing the consequence of the speed restrictions it was ordering. These restrictions brought the railway network to an almost total standstill and drew significant public ire. According to Wolmar, Railtrack's board panicked in the wake of Hatfield. Railtrack's first chief executive, John Edmonds, had pursued a deliberate strategy of outsourcing engineers' work wherever possible with the goal of reducing costs.
Various major schemes being undertaken by Railtrack had also gone awry. Th |
https://en.wikipedia.org/wiki/DSAP | DSAP may refer to:
Destination Service Access Point, a part of the IEEE 802.2 standard for local area network communication
Disseminated superficial actinic porokeratosis, a human skin condition possibly related to mutations in the gene SSH1
Durational Shortage Area Permit, a form of temporary teacher certification for subject areas with teacher shortages
Deputy Sheriffs' Association of Pennsylvania
Deutsche Sozialistische Arbeiterpartei in Polen, the German Socialist Workers' Party in Poland
German Social Democratic Workers Party in the Czechoslovak Republic (DSAP, Deutsche sozialdemokratische Arbeiterpartei in der Tschechoslowakischen Republik) |
https://en.wikipedia.org/wiki/SSAP | SSAP may refer to:
Source Service Access Point, OSI network endpoint defined in IEEE 802.2
Sequential structure alignment program, double dynamic programming method in Structural alignment
Statements of Standard Accounting Practice, in Generally Accepted Accounting Principles (UK)
Statement of Statutory Accounting Principles, for insurance in the United States
Story Stem Assessment Profile, method for attachment measures
SSAP, ICAO code for Apucarana Airport (APU), Paraná state, Brazil |
https://en.wikipedia.org/wiki/Logical%20link%20control | In the IEEE 802 reference model of computer networking, the logical link control (LLC) data communication protocol layer is the upper sublayer of the data link layer (layer 2) of the seven-layer OSI model. The LLC sublayer acts as an interface between the media access control (MAC) sublayer and the network layer.
The LLC sublayer provides multiplexing mechanisms that make it possible for several network protocols (e.g. IP, IPX and DECnet) to coexist within a multipoint network and to be transported over the same network medium. It can also provide flow control and automatic repeat request (ARQ) error management mechanisms.
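The multiplexing mechanism rests on the service access points in the 802.2 header: the one-byte DSAP names the network-layer protocol the payload belongs to. A minimal demultiplexing sketch (the `demux_llc` helper and its small SAP table are illustrative; the SAP values shown are the well-known assignments, and error handling is omitted):

```python
# Well-known LLC service access point (SAP) assignments.
SAP_NAMES = {
    0x06: "IP",
    0xE0: "IPX",
    0xF0: "NetBIOS",
    0xAA: "SNAP",     # extension header carrying a full EtherType
    0x42: "Spanning Tree BPDU",
}

def demux_llc(llc_pdu: bytes) -> str:
    """Pick the upper-layer protocol from the DSAP byte of an 802.2 PDU."""
    dsap = llc_pdu[0]   # SSAP and the control field follow at offsets 1 and 2
    return SAP_NAMES.get(dsap, f"unknown SAP 0x{dsap:02X}")

print(demux_llc(bytes([0xE0, 0xE0, 0x03])))  # a Novell IPX 802.2 frame
```

This is how IP, IPX and DECnet can coexist on one medium: the receiver inspects the DSAP and hands the payload to the matching protocol.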
Operation
The LLC sublayer is primarily concerned with multiplexing protocols transmitted over the MAC layer (when transmitting) and demultiplexing them (when receiving).
It can also provide node-to-node flow control and error management.
The flow control and error management capabilities of the LLC sublayer are used by protocols such as the NetBIOS Frames protocol. However, most protocol stacks running atop 802.2 do not use LLC sublayer flow control and error management. In these cases flow control and error management are taken care of by a transport layer protocol such as TCP or by some application layer protocol. These higher layer protocols work in an end-to-end fashion, i.e. re-transmission is done from the original source to the final destination, rather than on individual physical segments. For these protocol stacks only the multiplexing capabilities of the LLC sublayer are used.
Application examples
X.25 and LAPB
An LLC sublayer was a key component in early packet switching networks such as X.25 networks with the LAPB data link layer protocol, where flow control and error management were carried out in a node-to-node fashion, meaning that if an error was detected in a frame, the frame was retransmitted from one switch to the next instead. This extensive handshaking between the nodes made the networks slow.
Local area network
The IEEE 802.2 standard specifies the LLC sublayer for all IEEE 802 local area networks, such as IEEE 802.3/Ethernet (when Ethernet II frame format is not used), IEEE 802.5, and IEEE 802.11. IEEE 802.2 is also used in some non-IEEE 802 networks such as FDDI.
Ethernet
Since bit errors are very rare in wired networks, Ethernet does not provide flow control or automatic repeat request (ARQ), meaning that incorrect packets are detected but only discarded, not retransmitted (except in the case of collisions detected by the CSMA/CD MAC layer protocol). Instead, retransmissions rely on higher-layer protocols.
As the EtherType in an Ethernet frame using Ethernet II framing is used to multiplex different protocols on top of the Ethernet MAC header, it can be seen as an LLC identifier. However, Ethernet frames lacking an EtherType have no LLC identifier in the Ethernet header, and, instead, use an IEEE 802.2 LLC header after the Ethernet header to provide the protocol multiplexing function.
Wireless LA |
https://en.wikipedia.org/wiki/Blitter | A blitter is a circuit, sometimes as a coprocessor or a logic block on a microprocessor, dedicated to the rapid movement and modification of data within a computer's memory. A blitter can copy large quantities of data from one memory area to another relatively quickly, and in parallel with the CPU, while freeing up the CPU's more complex capabilities for other operations. A typical use for a blitter is the movement of a bitmap, such as windows and icons in a graphical user interface or images and backgrounds in a 2D video game. The name comes from the bit blit operation of the 1973 Xerox Alto, which stands for bit-block transfer. A blit operation is more than a memory copy, because it can involve data that's not byte aligned (hence the bit in bit blit), handling transparent pixels (pixels which should not overwrite the destination), and various ways of combining the source and destination data.
Blitters have largely been superseded by programmable graphics processing units.
History
In computers without hardware accelerated raster graphics, which includes most 1970s and 1980s home computers and IBM PC compatibles through the mid-1990s, the frame buffer is commonly stored in CPU-accessible memory. Drawing is accomplished by updating the frame buffer via software. For basic graphics routines, like compositing a smaller image into a larger one (such as for a video game) or drawing a filled rectangle, large amounts of memory need to be manipulated, and many cycles are spent fetching and decoding short loops of load/store instructions. For CPUs without caches, the bus requirement for instructions is as significant as data. To reduce the size of the frame buffer, a single byte may not necessarily correspond to a pixel, but contain 8 single-bit pixels, 4 two-bit pixels, or a pair of 4-bit pixels. Manipulating packed pixels requires extra shifting and masking operations on the CPU.
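The extra shifting and masking can be made concrete with a sketch of setting one pixel in a 1-bit-per-pixel packed framebuffer (the width and layout below are hypothetical choices for illustration; a blitter performs equivalent work on whole blocks in hardware):

```python
WIDTH = 320  # hypothetical display width in pixels, 8 pixels packed per byte

def set_pixel(fb: bytearray, x: int, y: int, on: bool) -> None:
    """Set or clear one pixel: address arithmetic plus shift/mask on the CPU."""
    byte_index = y * (WIDTH // 8) + x // 8   # which byte holds the pixel
    bit = 7 - (x % 8)                        # leftmost pixel lives in the high bit
    if on:
        fb[byte_index] |= 1 << bit
    else:
        fb[byte_index] &= ~(1 << bit)

fb = bytearray(WIDTH // 8 * 200)
set_pixel(fb, 9, 0, True)   # touches byte 1, bit 6
```

Every pixel write costs a divide, a modulo, a shift and a read-modify-write, which is exactly the per-pixel overhead blitters were built to avoid.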
Blitters were developed to offload repetitive tasks of copying data or filling blocks of memory faster than possible by the CPU. This can be done in parallel with the CPU and also handle special cases which would be significantly slower if coded by hand, such as skipping over pixels marked as transparent or handling data that isn't byte-aligned.
Blitters in computers and video games
1973: The Xerox Alto, where the term bit blit originated, has a bit block transfer instruction implemented in microcode, making it much faster than the same operation written on the CPU. The microcode was implemented by Dan Ingalls.
1982: In addition to drawing shape primitives, the NEC µPD7220 video display processor can transfer rectangular bitmaps to display memory via direct memory access and fill rectangular portions of the screen.
1982: The Robotron: 2084 arcade video game from Williams Electronics includes two blitter chips which allow the game to have up to 80 simultaneously moving objects. Performance was measured at roughly 910 KB/second. The blitter operates on 4-bit (16 color) pix |
https://en.wikipedia.org/wiki/Bit%20blit | Bit blit (also written BITBLT, BIT BLT, BitBLT, Bit BLT, Bit Blt etc., which stands for bit block transfer) is a data operation commonly used in computer graphics in which several bitmaps are combined into one using a boolean function.
The operation involves at least two bitmaps: a "source" (or "foreground") and a "destination" (or "background"), and possibly a third that is often called the "mask". The result may be written to a fourth bitmap, though often it replaces the destination. The pixels of each are combined bitwise according to the specified raster operation (ROP) and the result is then written to the destination. The ROP is essentially a boolean formula. The most obvious ROP overwrites the destination with the source. Other ROPs may involve AND, OR, XOR, and NOT operations. The Commodore Amiga's graphics chipset (and others) could combine three source bitmaps using any of the 256 possible boolean functions with three inputs.
Modern graphics software has almost completely replaced bitwise operations with more general mathematical operations used for effects such as alpha compositing. This is because bitwise operations on color displays do not usually produce results that resemble the physical combination of lights or inks. Some software still uses XOR to draw interactive highlight rectangles or region borders; when this is done to color images, the unusual resulting colors are easily seen.
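The XOR highlight trick works because XOR is its own inverse: applying the same ROP twice restores the original pixels, so no copy of the underlying image needs to be saved. A sketch on a flat list of 8-bit pixel values (the `xor_span` helper is illustrative):

```python
def xor_span(pixels, start, length, value=0xFF):
    """XOR a run of pixels with a constant; applying it twice is a no-op."""
    for i in range(start, start + length):
        pixels[i] ^= value

row = [0x10, 0x20, 0x30, 0x40]
xor_span(row, 1, 2)   # draw the highlight: middle pixels become 0xDF, 0xCF
xor_span(row, 1, 2)   # draw it again: the original row is restored
```

The "unusual resulting colors" mentioned above are visible in the first pass, where 0x20 becomes 0xDF rather than any physically meaningful blend.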
Origins
The name derives from the BitBLT routine for the Xerox Alto computer, standing for bit-boundary block transfer. Dan Ingalls, Larry Tesler, Bob Sproull, and Diana Merry programmed this operation at Xerox PARC in November 1975 for the Smalltalk-72 system. Dan Ingalls later implemented a redesigned version in microcode.
The development of fast methods for various bit blit operations gave impetus to the evolution of computer displays from using character graphics (text mode) to using raster graphics (bitmap) for everything. Machines that rely heavily on the performance of 2D graphics (such as video game consoles) often have special-purpose circuitry called a blitter.
Example of a masked blit implementation
A classic use for blitting is to render transparent sprites onto a background. In this example a background image, a sprite, and a 1-bit mask are used. As the mask is 1-bit, there is no possibility for partial transparency via alpha blending.
A loop that examines each bit in the mask and copies the pixel from the sprite only if the mask is set will be much slower than hardware that can apply exactly the same operation to every pixel. Instead a masked blit can be implemented with two regular BitBlit operations using the AND and OR raster operations.
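The two-pass approach can be sketched per pixel (real blitters apply the same ROPs to whole words of packed pixels at once). Following the mask convention this article describes, the mask is 0 over opaque sprite pixels and all-ones where the background must show through, and the sprite is 0 (black) in its transparent areas; the values below are illustrative 8-bit pixels:

```python
def masked_blit(dest, sprite, mask):
    """AND pass punches a hole in the background, OR pass fills in the sprite."""
    for i in range(len(dest)):
        dest[i] = (dest[i] & mask[i]) | sprite[i]

bg     = [0x55, 0x55, 0x55, 0x55]
sprite = [0x00, 0xAB, 0xCD, 0x00]   # opaque only in the middle two pixels
mask   = [0xFF, 0x00, 0x00, 0xFF]
masked_blit(bg, sprite, mask)
# bg now keeps the background at the edges and shows the sprite in the middle
```

Because both passes are plain bulk raster operations, hardware can run them at full memory bandwidth instead of branching on every mask bit.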
The sprite is drawn in various positions over the image to produce the composited result.
Technique
When preparing the sprite, the colors are very important. The mask pixels are 0 (black) wherever the corresponding sprite pixel is to be displayed, and 1 (white) wherever the background needs to be preserved. |
https://en.wikipedia.org/wiki/EtherType | EtherType is a two-octet field in an Ethernet frame. It is used to indicate which protocol is encapsulated in the payload of the frame and is used at the receiving end by the data link layer to determine how the payload is processed. The same field is also used to indicate the size of some Ethernet frames.
EtherType is also used as the basis of 802.1Q VLAN tagging, encapsulating packets from VLANs for transmission multiplexed with other VLAN traffic over an Ethernet trunk.
EtherType was first defined by the Ethernet II framing standard and later adapted for the IEEE 802.3 standard. EtherType values are assigned by the IEEE Registration Authority.
Overview
In modern implementations of Ethernet, the field within the Ethernet frame used to describe the EtherType can also be used to represent the size of the payload of the Ethernet Frame. Historically, depending on the type of Ethernet framing that was in use on an Ethernet segment, both interpretations were simultaneously valid, leading to potential ambiguity. Ethernet II framing considered these octets to represent EtherType while the original IEEE 802.3 framing considered these octets to represent the size of the payload in bytes.
In order to allow Ethernet II and IEEE 802.3 framing to be used on the same Ethernet segment, a unifying standard, IEEE 802.3x-1997, was introduced that required that EtherType values be greater than or equal to 1536. That value was chosen because the maximum length (MTU) of the data field of an Ethernet 802.3 frame is 1500 bytes and 1536 is equivalent to the number 600 in the hexadecimal numeral system. Thus, values of 1500 and below for this field indicate that the field is used as the size of the payload of the Ethernet frame while values of 1536 and above indicate that the field is used to represent an EtherType. The interpretation of values 1501–1535, inclusive, is undefined.
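The disambiguation rule is simple enough to state as code. A sketch of classifying the two-octet field (the helper name is hypothetical; the thresholds are those given above):

```python
def classify_type_field(value: int) -> str:
    """Apply the IEEE 802.3x-1997 rule to the two-octet type/length field."""
    if value <= 1500:
        return f"length: {value} bytes (802.3 framing, LLC header follows)"
    if value >= 0x0600:                      # 1536 decimal
        return f"EtherType: 0x{value:04X} (Ethernet II framing)"
    return "undefined (1501-1535)"

print(classify_type_field(0x0800))  # 0x0800 is IPv4's assigned EtherType
print(classify_type_field(46))      # minimum-size 802.3 payload
```

Since no assigned EtherType is below 1536 and no valid 802.3 payload exceeds 1500 bytes, the two framings can share a segment without ambiguity.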
The end of a frame is signaled by a valid frame check sequence followed by loss of carrier or by a special symbol or sequence in the line coding scheme for a particular Ethernet physical layer, so the length of the frame does not always need to be encoded as a value in the Ethernet frame. However, as the minimum payload of an Ethernet frame is 46 bytes, a protocol that uses EtherType must include its own length field if that is necessary for the recipient of the frame to determine the length of short packets (if allowed) for that protocol.
VLAN tagging
802.1Q VLAN tagging uses an 0x8100 EtherType value. The payload following includes a 16-bit tag control identifier (TCI) followed by an Ethernet frame beginning with a second (original) EtherType field for consumption by end stations. IEEE 802.1ad extends this tagging with further nested EtherType and TCI pairs.
Jumbo frames
The size of the payload of non-standard jumbo frames, typically ~9000 Bytes long, collides with the range used by EtherType, and cannot be used for indicating the length of such a frame. The proposition to re |
https://en.wikipedia.org/wiki/XOR%20swap%20algorithm | In computer programming, the exclusive or swap (sometimes shortened to XOR swap) is an algorithm that uses the exclusive or bitwise operation to swap the values of two variables without using the temporary variable which is normally required.
The algorithm is primarily a novelty and a way of demonstrating properties of the exclusive or operation. It is sometimes discussed as a program optimization, but there are almost no cases where swapping via exclusive or provides benefit over the standard, obvious technique.
The algorithm
Conventional swapping requires the use of a temporary storage variable. Using the XOR swap algorithm, however, no temporary storage is needed. The algorithm is as follows:
X := X XOR Y; // XOR the values and store the result in X
Y := Y XOR X; // XOR the values and store the result in Y
X := X XOR Y; // XOR the values and store the result in X
Since XOR is a commutative operation, either X XOR Y or Y XOR X can be used interchangeably in any of the foregoing three lines. Note that on some architectures the first operand of the XOR instruction specifies the target location at which the result of the operation is stored, preventing this interchangeability. The algorithm typically corresponds to three machine-code instructions, represented by corresponding pseudocode and assembly instructions in the three rows of the following table:
In the above System/370 assembly code sample, R1 and R2 are distinct registers, and each operation leaves its result in the register named in the first argument. In the x86 assembly version, values X and Y are in registers eax and ebx (respectively), and each XOR instruction places its result in the first-named register.
However, in the pseudocode or high-level language version or implementation, the algorithm fails if x and y use the same storage location, since the value stored in that location will be zeroed out by the first XOR instruction, and then remain zero; it will not be "swapped with itself". This is not the same as if x and y have the same values. The trouble only comes when x and y use the same storage location, in which case their values must already be equal. That is, if x and y use the same storage location, then the line:
X := X XOR Y
sets x to zero (because x = y so X XOR Y is zero) and sets y to zero (since it uses the same storage location), causing x and y to lose their original values.
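Both the working case and the aliasing failure are easy to demonstrate. The Python sketch below uses a list so that "same storage location" can be modelled by passing equal indices (the `xor_swap` helper is illustrative):

```python
def xor_swap(a, i, j):
    """The three XOR steps from the algorithm, applied to list slots i and j."""
    a[i] ^= a[j]
    a[j] ^= a[i]
    a[i] ^= a[j]

v = [5, 9]
xor_swap(v, 0, 1)   # distinct locations: v becomes [9, 5]

w = [7]
xor_swap(w, 0, 0)   # aliased location: the first XOR zeroes the slot, w becomes [0]
```

With distinct slots the values are exchanged; with `i == j` the first step computes 7 XOR 7 = 0 and the value is lost, exactly as described above.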
Proof of correctness
The binary operation XOR over bit strings of length N exhibits the following properties (where ⊕ denotes XOR):
L1. Commutativity: A ⊕ B = B ⊕ A
L2. Associativity: (A ⊕ B) ⊕ C = A ⊕ (B ⊕ C)
L3. Identity exists: there is a bit string of length N, denoted 0, such that A ⊕ 0 = A for any A
L4. Each element is its own inverse: A ⊕ A = 0 for each A.
Suppose that we have two distinct registers R1 and R2, with initial values A and B respectively. We perform the operations in sequence and reduce each result using the properties listed above: after R1 := R1 ⊕ R2, register R1 holds A ⊕ B; after R2 := R2 ⊕ R1, register R2 holds B ⊕ (A ⊕ B) = (B ⊕ B) ⊕ A = 0 ⊕ A = A (by L1, L2, L4 and L3); finally, after R1 := R1 ⊕ R2, register R1 holds (A ⊕ B) ⊕ A = B by the same properties, completing the swap.
Linear algebra interpretation
As XOR can be interpreted as binary addition and |
https://en.wikipedia.org/wiki/DEF%20CON | DEF CON (also written as DEFCON, Defcon or DC) is a hacker convention held annually in Las Vegas, Nevada. The first DEF CON took place in June 1993 and today many attendees at DEF CON include computer security professionals, journalists, lawyers, federal government employees, security researchers, students, and hackers with a general interest in software, computer architecture, hardware modification, conference badges, and anything else that can be "hacked". The event consists of several tracks of speakers about computer- and hacking-related subjects, as well as cyber-security challenges and competitions (known as hacking wargames). Contests held during the event are extremely varied and can range from creating the longest Wi-Fi connection to finding the most effective way to cool a beer in the Nevada heat.
Other contests, past and present, include lockpicking, robotics-related contests, art, slogan, coffee wars, scavenger hunt, and Capture the Flag. Capture the Flag (CTF) is perhaps the best known of these contests and is a hacking competition where teams of hackers attempt to attack and defend computers and networks using software and network structures. CTF has been emulated at other hacking conferences as well as in academic and military contexts (as red team exercises).
Federal law enforcement agents from the FBI, DoD, United States Postal Inspection Service, DHS (via CISA) and other agencies regularly attend DEF CON.
History
DEF CON was founded in 1993, by then 18-year-old Jeff Moss as a farewell party for his friend, a fellow hacker and member of "Platinum Net", a FidoNet protocol based hacking network from Canada. The party was planned for Las Vegas a few days before his friend was to leave the United States, because his father had accepted employment out of the country. However, his friend's father left early, taking his friend along, so Jeff was left alone with the entire party planned. Jeff decided to invite all his hacker friends to go to Las Vegas with him and have the party with them instead. Hacker friends from far and wide got together and laid the foundation for DEF CON, with roughly 100 people in attendance.
The term DEF CON comes from the movie WarGames, referencing the U.S. Armed Forces defense readiness condition (DEFCON). In the movie, Las Vegas was selected as a nuclear target, and since the event was being hosted in Las Vegas, it occurred to Jeff Moss to name the convention DEF CON. However, to a lesser extent, CON also stands for convention and DEF is taken from the letters on the number 3 on a telephone keypad, a reference to phreakers. Any variation of the spelling, other than "DEF CON", could be considered an infringement of the DEF CON brand. The official name of the conference includes a space in-between DEF and CON.
Though intended to be a one-time event, Moss received overwhelmingly positive feedback from attendees, and decided to host the event for a second year at their urging. The event's attendance nearly |
https://en.wikipedia.org/wiki/Hercules%20Graphics%20Card | The Hercules Graphics Card (HGC) is a computer graphics controller formerly made by Hercules Computer Technology, Inc. that combines IBM's text-only MDA display standard with a bitmapped graphics mode, also offering a parallel printer port. This allows the HGC to offer both high-quality text and graphics from a single card.
The HGC was very popular, and became a widely supported de facto display standard on IBM PC compatibles. The HGC standard was used long after more technically capable systems had entered the market, especially on dual-monitor setups.
History
The Hercules Graphics Card was released to fill a gap in the IBM video product lineup. When the IBM Personal Computer was launched in 1981, it had two graphics cards available, the Color Graphics Adapter (CGA) and the Monochrome Display and Printer Adapter (MDA). CGA offered low-resolution color graphics and medium-resolution monochrome graphics, while MDA offered a sharper text mode but had no per-pixel addressing modes and was limited to a fixed character set.
These adapters were quickly found to be inadequate by the market, creating a demand for a card that offers high-resolution graphics and text. The founder of Hercules Computer Technology, Van Suwannukul, created the Hercules Graphics Card so that he could work on his doctoral thesis on an IBM PC using the Thai alphabet, impossible with the low resolution of CGA or the fixed character set of MDA. It initially retailed in 1982 for $499.
Hardware design
The original HGC is an 8-bit ISA card with 64KB of RAM, visible on the board as eight 4164 RAM chips, and a DE-9 output compatible with the IBM monochrome monitor used with the MDA. Like the MDA, it includes a parallel interface for attaching a printer.
The video output is 5V TTL, as with the MDA card. Nominally, the Hercules card provides a horizontal scanning frequency of 18.425 ±0.500 kHz, and 50 Hz vertical. It runs at two slightly different frequencies depending on whether in text or graphics mode, due to the slight difference in horizontal resolution.
Capabilities
The Hercules card provides two modes:
text mode (MDA-compatible)
graphics mode (pixel-addressable)
The text mode of the Hercules card uses the same signal timing as the MDA text mode.
The Hercules graphics mode is similar to the CGA high-resolution two-color mode; the video buffer contains a packed-pixel bitmap (eight pixels per byte, one bit per pixel) with the same byte format—including the pixel-to-bit mapping and byte order—as the CGA two-color graphics mode, and the video buffer is also split into interleaved banks, each 8 KB in size.
However, because in the Hercules graphics mode there are more than 256 scanlines and the display buffer size is nearly 32 KB (instead of 16 KB as in all CGA graphics modes), four interleaved banks are used in the Hercules mode instead of two as in the CGA modes. Also, to represent 720 pixels pe |
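The four-bank interleave described above implies a small amount of address arithmetic per pixel. The following sketch uses the commonly documented Hercules layout (an assumption here, not stated in the text above): the low two bits of the scanline select one of four 8 KB banks, and each 720-pixel row occupies 90 bytes within a bank.

```python
def hgc_byte_offset(x: int, y: int) -> int:
    """Byte offset of pixel (x, y) in the Hercules graphics buffer.
    Assumed layout: bank = y mod 4, 90 bytes (720/8 pixels) per row."""
    return 0x2000 * (y % 4) + 90 * (y // 4) + x // 8

assert hgc_byte_offset(0, 0) == 0
assert hgc_byte_offset(0, 1) == 0x2000   # next scanline lands in the next bank
assert hgc_byte_offset(0, 4) == 90       # back to bank 0, one row further in
```

With four banks of 8 KB each, the buffer spans nearly 32 KB, consistent with the figure quoted above.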
https://en.wikipedia.org/wiki/Plumber%20%28program%29 | The plumber, in the Plan 9 from Bell Labs and Inferno operating systems, is a mechanism for reliable uni- or multicast inter-process communication of formatted textual messages. It uses the Plan 9 network file protocol, 9P, rather than a special-purpose IPC mechanism.
Any number of clients may listen on a named port (a file) for messages. Ports and port routing are defined by plumbing rules. These rules are dynamic. Each listening program receives a copy of matching messages. For example, if the file name /sys/lib/plumb/basic is plumbed with the standard rules, it is sent to the edit port. The port will write a copy of the message to each listener. In this case, all running editors will interpret this message as a file name, and open the file.
The plumber is the 9P file server that provides this service. Clients may use libplumb to format messages. Since the messages are 9P, they are network transparent.
See also
Pipeline (software)
External links
"Plumbing and Other Utilities" by Rob Pike
A port of plumber to Unix-like operating systems
Plumbing extension for urxvt
Plan 9 from Bell Labs
Inferno (operating system)
Free special-purpose file systems |
https://en.wikipedia.org/wiki/Acme%20%28text%20editor%29 | Acme is a text editor and graphical shell from the Plan 9 from Bell Labs operating system, designed and implemented by Rob Pike. It can use the Sam command language. The design of the interface was influenced by Oberon. It is different from other editing environments in that it acts as a 9P server. A distinctive element of the user interface is mouse chording.
Overview
Acme can be used as a mail and news reader, or as a frontend to wikifs. These applications are made possible by external components interacting with acme through its file system interface. Rob Pike has mentioned that the name "Acme" was suggested to him by Penn Jillette of Penn & Teller during a movie night at Times Square when he asked for a suitable name for a text editor that does "everything".
Ports
A port to the Inferno operating system is part of Inferno's default distribution. Inferno can run as an application on top of other operating systems, allowing Inferno's port of acme to be used on most operating systems, including Microsoft Windows and Linux. A project called acme: stand alone complex intends to make acme run as a standalone application on the host operating system.
A working port of acme for Unix-like operating systems is included in Plan 9 from User Space, a collection of various ported programs from Plan 9. It has been tested on a variety of operating systems, including Linux, Mac OS X, FreeBSD, NetBSD, OpenBSD, Solaris, and SunOS.
Notable Acme users
Dennis Ritchie
Russ Cox
Rob Pike (Acme creator)
Brian L. Stuart
See also
Wily (text editor), a look-alike available for Unix; unmaintained since the original acme was ported as part of Plan 9 from User Space.
sam, Rob Pike's other popular text editor. Predecessor of Acme.
'Help': A Minimalist Global User Interface, a precursor of acme sharing many of its ideas, also by Rob Pike.
Plan 9 from Bell Labs
wmii, a window manager with much inspiration from Acme.
List of Plan 9 applications
References
External links
The manuals.
Plan 9 from User Space (aka plan9port) is a port of many Plan 9 programs from their native Plan 9 environment to Unix-like operating systems, including Mac OS X.
acme stand alone complex - A distribution of the Inferno version of acme packaged for Windows, OS X and Linux and including many extras and tools (an IRC client, a wiki client, a web browser, a debugger, etc.)
Russ Cox demonstrating Acme
Free text editors
Plan 9 from Bell Labs
Inferno (operating system)
MacOS text editors
https://en.wikipedia.org/wiki/Transport%20layer

In computer networking, the transport layer is a conceptual division of methods in the layered architecture of protocols in the network stack in the Internet protocol suite and the OSI model. The protocols of this layer provide end-to-end communication services for applications. It provides services such as connection-oriented communication, reliability, flow control, and multiplexing.
The details of implementation and semantics of the transport layer of the Internet protocol suite, which is the foundation of the Internet, and the OSI model of general networking are different. The protocols in use today in this layer for the Internet all originated in the development of TCP/IP. In the OSI model the transport layer is often referred to as Layer 4, or L4, while numbered layers are not used in TCP/IP.
The best-known transport protocol of the Internet protocol suite is the Transmission Control Protocol (TCP). It is used for connection-oriented transmissions, whereas the connectionless User Datagram Protocol (UDP) is used for simpler messaging transmissions. TCP is the more complex protocol, due to its stateful design incorporating reliable transmission and data stream services. Together, TCP and UDP comprise essentially all traffic on the Internet and are the only protocols implemented in every major operating system. Additional transport layer protocols that have been defined and implemented include the Datagram Congestion Control Protocol (DCCP) and the Stream Control Transmission Protocol (SCTP).
Services
Transport layer services are conveyed to an application via a programming interface to the transport layer protocols. The services may include the following features:
Connection-oriented communication: It is normally easier for an application to interpret a connection as a data stream rather than having to deal with the underlying connectionless models, such as the datagram model of the User Datagram Protocol (UDP) and of the Internet Protocol (IP).
Same order delivery: The network layer doesn't generally guarantee that packets of data will arrive in the same order that they were sent, but often this is a desirable feature. This is usually done through the use of segment numbering, with the receiver passing them to the application in order. This can cause head-of-line blocking.
Reliability: Packets may be lost during transport due to network congestion and errors. By means of an error detection code, such as a checksum, the transport protocol may check that the data is not corrupted, and verify correct receipt by sending an ACK or NACK message to the sender. Automatic repeat request schemes may be used to retransmit lost or corrupted data.
Flow control: The rate of data transmission between two nodes must sometimes be managed to prevent a fast sender from transmitting more data than can be supported by the receiving data buffer, causing a buffer overrun. This can also be used to improve efficiency by reducing buffer underrun.
Congestion
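The contrast between the connection-oriented and connectionless service models described above can be sketched with the two main Internet transport protocols, using loopback sockets (port numbers are chosen by the operating system; this is an illustrative sketch, not a definitive treatment):

```python
import socket

# UDP: connectionless datagrams -- each send is an independent message.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))             # let the OS pick a free port
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"datagram", udp_recv.getsockname())
message, _ = udp_recv.recvfrom(1024)
print(message.decode())                      # -> datagram

# TCP: a connection-oriented byte stream; the kernel provides ordering,
# reliability (ACKs, retransmission), and flow control transparently.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"stream")
stream_data = conn.recv(1024)
print(stream_data.decode())                  # -> stream
```

The application-visible difference is exactly the one listed above: UDP delivers discrete messages with no delivery guarantees, while TCP presents a reliable, ordered byte stream.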
https://en.wikipedia.org/wiki/Rsync

rsync is a utility for efficiently transferring and synchronizing files between a computer and a storage drive and across networked computers by comparing the modification times and sizes of files. It is commonly found on Unix-like operating systems and is under the GPL-3.0-or-later license.
rsync is written in C as a single threaded application. The rsync algorithm is a type of delta encoding, and is used for minimizing network usage. Zstandard, LZ4, or Zlib may be used for additional data compression, and SSH or stunnel can be used for security.
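The delta-encoding idea rests on a weak rolling checksum that can be slid along the file one byte at a time. The following toy sketch illustrates the rolling property; the constants and field widths are illustrative, not rsync's exact algorithm:

```python
# Weak two-part checksum over a block (Adler-style; illustrative only).
def weak_checksum(block: bytes) -> tuple[int, int]:
    a = sum(block) & 0xFFFF
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) & 0xFFFF
    return a, b

def roll(a: int, b: int, out_byte: int, in_byte: int, size: int) -> tuple[int, int]:
    # Slide the window one byte forward without rescanning the whole block.
    a = (a - out_byte + in_byte) & 0xFFFF
    b = (b - size * out_byte + a) & 0xFFFF
    return a, b

data = b"the quick brown fox"
size = 8
a, b = weak_checksum(data[0:size])
rolled = roll(a, b, data[0], data[size], size)
print(rolled == weak_checksum(data[1:size + 1]))  # -> True
```

Because the checksum rolls in constant time, the receiver can scan every byte offset of its copy cheaply, which is what lets the two programs determine which parts of a file actually need to be transferred.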
rsync is typically used for synchronizing files and directories between two different systems. For example, if the command rsync local-file user@remote-host:remote-file is run, rsync will use SSH to connect as user to remote-host. Once connected, it will invoke the remote host's rsync and then the two programs will determine what parts of the local file need to be transferred so that the remote file matches the local one. One application of rsync is the synchronization of software repositories on mirror sites used by package management systems.
rsync can also operate in a daemon mode (rsyncd), serving and receiving files in the native rsync protocol (using the "rsync://" syntax).
History
Andrew Tridgell and Paul Mackerras wrote the original rsync, which was first announced on 19 June 1996. It is similar in function and invocation to rdist (rdist -c), created by Ralph Campbell in 1983 and released under the Berkeley Software Distribution. Tridgell discusses the design, implementation, and performance of rsync in chapters 3 through 5 of his 1999 Ph.D. thesis. It is maintained by Wayne Davison.
Because of the flexibility, speed, and scriptability of rsync, it has become a standard Linux utility, included in all popular Linux distributions. It has been ported to Windows (via Cygwin, Grsync, or SFU), FreeBSD, NetBSD, OpenBSD, and macOS.
Use
Similar to cp, rcp and scp, rsync requires the specification of a source and of a destination, of which at least one must be local.
Generic syntax:
rsync [OPTION] … SRC … [USER@]HOST:DEST
rsync [OPTION] … [USER@]HOST:SRC [DEST]
where SRC is the file or directory (or a list of multiple files and directories) to copy from, DEST is the file or directory to copy to, and square brackets indicate optional parameters.
rsync can synchronize Unix clients to a central Unix server using rsync/ssh and standard Unix accounts. It can be used in desktop environments, for example to efficiently synchronize files with a backup copy on an external hard drive. A scheduling utility such as cron can carry out tasks such as automated encrypted rsync-based mirroring between multiple hosts and a central server.
Examples
A command line to mirror FreeBSD might look like:
$ rsync -avz --delete ftp4.de.FreeBSD.org::FreeBSD/ /pub/FreeBSD/
The Apache HTTP Server supports rsync only for updating mirrors.
$ rsync -avz --delete --safe-links rsync.apache.org::apache-dist /path/to/mirr
https://en.wikipedia.org/wiki/Brickfilm

A brickfilm is a film or Internet video made by either shooting stop motion animation using construction set bricks like Lego bricks (and figures) or using computer-generated imagery or traditional animation to imitate the look. They can sometimes also be live action films featuring plastic construction toys (or representations of them). Since the 2000s, The Lego Group has released various films and TV series, and brickfilms have also become popular on social media websites. The term “brick film” was coined by Jason Rowoldt, founder of the website brickfilms.com.
History
1960s-1970s – early brickfilms
The earliest known brickfilm was a German advertisement for Lego, released around 1960. It features various brick-built animal characters, including dogs, cats, and camels, all animated using stop-motion. Little information is known about the advertisement, other than it was released for German cinemas. A display featuring the advertisement is located in the History Collection of Lego House, in Billund, Denmark.
The first known amateur brickfilm was created in 1973 by Lars C. Hassing and Henrik Hassing. The six-minute video featured both stop motion animation and live action, and was recorded on Super 8 film. It depicted Apollo 17 and was made for their grandparents' golden wedding anniversary. The film was later shown to Godtfred Kirk Christiansen, who had a personal copy made, though the film was not released to the public until May 2013, when the creator uploaded it to YouTube.
Other early brickfilms are known to have been created from 1975 onwards. Many were independent projects while others were promos or advertisements made by Lego itself.
1980s-1990s
A well-known early brickfilm was made between 1985 and 1989 in Perth, Western Australia by Lindsay Fleay, named The Magic Portal. It was captured on a Bolex 16mm camera with 16mm film and features animated Lego, Plasticine, and cardboard characters and objects, mixing both stop motion animation and live action footage, with Fleay making a live action appearance. The Magic Portal had high production values for a brickfilm of its time, with a five-figure budget granted by the Australian Film Commission. However, due to legal issues with The Lego Group, it did not see a wide release for years. The Lego Group eventually backed down on these charges.
More early brickfilms were produced in the Lego Sport Champions series, officially commissioned by The Lego Group in 1987. During this time, Dave Lennie and Andrew Boyer started making "Legomation" using a VHS camera and professional video equipment.
An early brickfilm with no involvement from The Lego Group to be widely released was a music video for the UK dance act Ethereal for their song Zap on Truelove Records. Released in 1991, the film was shown across the MTV network and other music channels and was the first time a full-length stop-motion brickfilm had been released across public channels. The film again attracted the attention of The Lego Group.
https://en.wikipedia.org/wiki/JOHNNIAC

The JOHNNIAC was an early computer built by the RAND Corporation (not Remington Rand, maker of the contemporaneous UNIVAC I computer) and based on the von Neumann architecture that had been pioneered on the IAS machine. It was named in honor of von Neumann, short for John von Neumann Numerical Integrator and Automatic Computer. JOHNNIAC is arguably the longest operational early computer, being used almost continuously from 1953 for over 13 years before finally being shut down on February 11, 1966, logging over 50,000 operating hours.
After being rescued from the scrap heap twice, the machine is currently at the Computer History Museum in Mountain View, California.
Like the IAS machine, JOHNNIAC used 40-bit words, and included 1024 words of main memory stored on Selectron tubes, each tube holding 256 bits of data. Two instructions were stored in every word as 20-bit subwords, each consisting of an 8-bit instruction and a 12-bit address; the instructions executed in sequence, with the left subword running first. The initial machine had 83 instructions. A single register, named A, served as an accumulator, and the machine also featured a register named Q, for quotient. There was only one test condition: whether or not the high bit of the A register was set. There were no index registers, and as addresses were stored in the instructions, loops had to be implemented by modifying the instructions as the program ran. Since the machine only had 10 bits of address space, two of the address bits were unused and were sometimes used for data storage by interleaving data through the instructions.
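The two-instructions-per-word layout can be illustrated with a short decoding sketch (the exact bit ordering within the word is an assumption made for illustration):

```python
# Decode a JOHNNIAC-style 40-bit word into two 20-bit subwords,
# each an 8-bit opcode plus a 12-bit address.
def decode(word40: int):
    left = (word40 >> 20) & 0xFFFFF    # left subword executes first
    right = word40 & 0xFFFFF

    def subword(sw: int):
        return (sw >> 12) & 0xFF, sw & 0xFFF   # (opcode, address)

    return subword(left), subword(right)

# Pack opcode 0xAB/address 0x123 on the left, 0xCD/0x456 on the right.
word = (0xAB << 32) | (0x123 << 20) | (0xCD << 12) | 0x456
print(decode(word))  # -> ((171, 291), (205, 1110))
```

With 1024 words of memory only 10 of each 12 address bits are meaningful, which is why programs could hide data in the two spare bits.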
Numerous modifications were made to the system over its lifetime. In March 1955, 4096 words of magnetic-core memory were added to the system, replacing the earlier Selectrons. This required all 12 bits of addressing, and caused programs that stored data in the "spare bits" to fail. Later in 1955 a 12k-word drum memory secondary storage system was added as well. A transistor-based adder replaced the original tube-based adder in 1956. Numerous changes were made to the input/output peripherals as well, and in 1964, a real-time clock was added to support time-sharing.
One JOHNNIAC legacy was the JOSS programming language (the JOHNNIAC Open Shop System), an easy-to-use language which catered to novices. JOSS was an ancestor of DEC's FOCAL and of MUMPS.
The CYCLONE at Iowa State University was a direct clone of JOHNNIAC, and was instruction compatible with it; the ILLIAC I at the University of Illinois may have been as well. Cyclone was later updated to include hardware for floating-point arithmetic.
See also
List of vacuum-tube computers
References
External links
Johnniac entry on Antique Computers site.
The History of the JOHNNIAC (RAND monograph)
Oral history interview with Keith W. Uncapher, Charles Babbage Institute, University of Minnesota. Review of projects at RAND Corporation when Keith Uncapher was hired in 1950 through the early 1970s, such as JOHNNIAC.
https://en.wikipedia.org/wiki/Resource%20fork

The resource fork is a fork or section of a file on Apple's classic Mac OS operating system, which was also carried over to the modern macOS for compatibility, used to store structured data along with the unstructured data stored within the data fork.
A resource fork stores information in a specific form, containing details such as icon bitmaps, the shapes of windows, definitions of menus and their contents, and application code (machine code). For example, a word processing file might store its text in the data fork, while storing any embedded images in the same file's resource fork. The resource fork is used mostly by executables, but every file is able to have a resource fork.
In a 1986 technical note, Apple strongly recommended that developers do not put general data into the resource fork of a file. According to Apple, there are parts of the system software that rely on resource forks having only valid Resource Manager information in them.
The Macintosh file system
Originally conceived and implemented by programmer Bruce Horn, the resource fork was used for three purposes with Macintosh file system:
It was used to store all graphical data on disk until it was needed, then retrieved, drawn on the screen, and thrown away. This software variant of virtual memory helped Apple to reduce memory requirements from 1 MB in the Apple Lisa to 128 KB in the Macintosh.
Because all the pictures and text were stored separately in a resource fork, it could be used to allow a non-programmer to translate an application for a foreign market, a process called internationalization and localization.
It could be used to distribute nearly all of the components of an application in a single file, reducing clutter and simplifying application installation and removal.
The resource fork is implemented in all of the file systems used for system drives on the Macintosh (MFS, HFS and HFS Plus). The presence of a resource fork makes it easy to store a variety of additional information, such as allowing the system to display the correct icon for a file and open it without the need for a file extension in the file name. While access to the data fork works like file access on any other operating system (pick a file, pick a byte offset, read some data), access to the resource fork works more like extracting structured records from a database. (Microsoft Windows also has a concept of "resources", but these are completely unrelated to resources in Mac OS.)
The resource fork is sometimes used to store the metadata of a file, although it can also be used for storing the actual data, as was the case with font files in the classic Mac operating systems. The Macintosh file systems also have a separate area for metadata distinct from either the data or resource fork. Being part of the catalogue entry for the file, it is much faster to access this. However, the amount of data stored here is minimal, being just the creation and modification timestamps, the file type and creator codes.
https://en.wikipedia.org/wiki/Ho%20Chi%20Minh%20trail

The Ho Chi Minh Trail, also called the Annamite Range Trail, was a logistical network of roads and trails that ran from North Vietnam to South Vietnam through the kingdoms of Laos and Cambodia. The system provided support, in the form of manpower and materiel, to the Viet Cong (or "VC") and the People's Army of Vietnam (PAVN) during the Vietnam War. Construction for the network began following the North Vietnamese invasion of Laos in July 1959.
It was named by the U.S. after the North Vietnamese president Hồ Chí Minh. The origin of the name is presumed to have come from the First Indochina War, when there was a Viet Minh maritime logistics line called the "Route of Ho Chi Minh", and shortly after late 1960, as the present trail developed, Agence France-Presse (AFP) announced that a north–south trail had opened, and they named the corridor La Piste de Hồ Chí Minh, the 'Hồ Chí Minh Trail'. The trail ran mostly in Laos, and was called the Trường Sơn Strategic Supply Route (Đường Trường Sơn) by the communists, after the Vietnamese name for the Annamite Range mountains in central Vietnam. They further identified the trail as either West Trường Sơn (Laos) or East Trường Sơn (Vietnam). According to the U.S. National Security Agency's official history of the war, the trail system was "one of the great achievements of military engineering of the 20th century". The trail was able to effectively supply troops fighting in the south, an unparalleled military feat, given it was the site of the single most intense air interdiction campaign in history.
Origins (1959–1965)
Parts of what became the trail had existed for centuries as primitive footpaths that enabled trade. The area through which the system meandered was among the most challenging in Southeast Asia: a sparsely populated region of rugged mountains, triple-canopy jungle and dense tropical rainforests. Pre-First Indochina War, the routes were known as the "Southward March", "Eastward March", "Westward March", and "Northward March". During the First Indochina War the Việt Minh maintained north–south communications and logistics by expanding on this system of trails and paths, and called the routes the "Trans-West Supply Line" (running in south Vietnam, Cambodia, and Thailand) and the "Trans-Indochina Link" (running in north Vietnam, Laos, and Thailand).
In May 1958 PAVN and Pathet Lao forces seized the transportation hub at Tchepone, on Laotian Route 9. Laotian elections in May brought a right-wing government to power in Laos, increasing dependence on U.S. military and economic aid and an increasingly antagonistic attitude toward North Vietnam.
PAVN forces, alongside the Pathet Lao, invaded Laos on 28 July 1959, with fighting all along the border with North Vietnam against the Royal Lao Army (RLA). In September 1959, Hanoi established the 559th Transportation Group, headquartered at Na Kai, Houaphan province in northeast Laos close to the border. It was under the command of Colonel
https://en.wikipedia.org/wiki/Texture%20mapping

Texture mapping is a method for mapping a texture on a computer-generated graphic. Texture here can be high frequency detail, surface texture, or color.
History
The original technique was pioneered by Edwin Catmull in 1974.
Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) have made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.
Texture maps
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles.
They may have 1-3 dimensions, although 2 dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources (which may be located in device memory) as buffers or surfaces, and may allow 'render to texture' for additional effects such as post processing or environment mapping.
They usually contain RGB color data (either stored as direct color, compressed formats, or indexed color), and sometimes an additional channel for alpha blending (RGBA) especially for billboards and decal overlay textures. It is possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other uses such as specularity.
Multiple texture maps (or channels) may be combined for control over specularity, normals, displacement, or subsurface scattering e.g. for skin rendering.
Multiple texture images may be combined in texture atlases or array textures to reduce state changes for modern hardware. (They may be considered a modern evolution of tile map graphics). Modern hardware often supports cube map textures with multiple faces for environment mapping.
Creation
Texture maps may be acquired by scanning or digital photography, designed in image manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush.
Texture application
This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2d case is also known as UV coordinates).
This may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to
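Given per-vertex UV coordinates, the basic texture lookup can be sketched as follows (nearest-neighbour sampling of a hypothetical 2×2 RGB texture; real renderers add filtering and mipmapping):

```python
# Sample a texture at (u, v) in [0, 1]^2 using nearest-neighbour lookup.
# The texture is represented as a 2D list of RGB tuples.
def sample_nearest(texture, u, v):
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)   # clamp so u == 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

tex = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(sample_nearest(tex, 0.9, 0.1))  # -> (0, 255, 0)
```

Interpolating the UV coordinates across each polygon's interior and performing this lookup per pixel is what "wraps" the image around the object.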
https://en.wikipedia.org/wiki/Z-buffering

A depth buffer, also known as a z-buffer, is a type of data buffer used in computer graphics to represent depth information of objects in 3D space from a particular perspective. Depth buffers are an aid to rendering a scene to ensure that the correct polygons properly occlude other polygons. Z-buffering was first described in 1974 by Wolfgang Straßer in his PhD thesis on fast algorithms for rendering occluded objects. A similar solution to determining overlapping polygons is the painter's algorithm, which is capable of handling non-opaque scene elements, though at the cost of reduced performance and potentially incorrect results.
In a 3D-rendering pipeline, when an object is projected on the screen, the depth (z-value) of a generated fragment in the projected screen image is compared to the value already stored in the buffer (depth test), and replaces it if the new value is closer. It works in tandem with the rasterizer, which computes the colored values. The fragment output by the rasterizer is saved if it is not overlapped by another fragment.
When viewing an image containing partially or fully overlapping opaque objects or surfaces, it is not possible to fully see those objects that are farthest away from the viewer and behind other objects (i.e., some surfaces are hidden behind others). If there were no mechanism for managing overlapping surfaces, surfaces would render on top of each other, not caring if they are meant to be behind other objects. The identification and removal of these surfaces are called the hidden-surface problem. To check for overlap, the computer calculates the z-value of a pixel corresponding to the first object and compares it with the z-value at the same pixel location in the z-buffer. If the calculated z-value is smaller than the z-value already in the z-buffer (i.e., the new pixel is closer), then the current z-value in the z-buffer is replaced with the calculated value. This is repeated for all objects and surfaces in the scene (often in parallel). In the end, the z-buffer will allow correct reproduction of the usual depth perception: a close object hides one further away. This is called z-culling.
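The per-pixel depth test described above amounts to a simple compare-and-store; a minimal sketch (buffer sizes and the closer-means-smaller convention are illustrative):

```python
import math

WIDTH, HEIGHT = 4, 4
zbuf = [[math.inf] * WIDTH for _ in range(HEIGHT)]       # start "infinitely far"
framebuf = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]  # black background

def plot(x, y, z, color):
    # Depth test: keep the fragment only if it is closer than what is stored.
    if z < zbuf[y][x]:
        zbuf[y][x] = z
        framebuf[y][x] = color

plot(1, 1, 5.0, (255, 0, 0))   # far red fragment is stored first
plot(1, 1, 2.0, (0, 255, 0))   # nearer green fragment replaces it
plot(1, 1, 9.0, (0, 0, 255))   # farther blue fragment is rejected (z-culling)
print(framebuf[1][1])           # -> (0, 255, 0)
```

Regardless of the order in which fragments arrive, the closest one wins, which is exactly why the z-buffer solves the hidden-surface problem without sorting polygons.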
The z-buffer has the same internal data structure as an image, namely a 2D array; the only difference is that it stores a single depth value for each screen pixel, whereas a color image stores three values per pixel. This makes the z-buffer appear black-and-white, because it stores no color information. The buffer has the same dimensions as the screen buffer for consistency.
Primary visibility tests (such as back-face culling) and secondary visibility tests (such as overlap checks and screen clipping) are usually performed on objects' polygons in order to skip polygons that are unnecessary to render. The z-buffer is comparatively expensive, so performing primary and secondary visibility tests first relieves it of some work.
The granularity of a z-buffer has a great influence on the scene quality: the tr
https://en.wikipedia.org/wiki/Unlambda

Unlambda is a minimal, "nearly pure" functional programming language invented by David Madore. It is based on combinatory logic, an expression system without the lambda operator or free variables. It relies mainly on two built-in functions (s and k) and an apply operator (written `, the backquote character). These alone make it Turing-complete, but there are also some input/output (I/O) functions to enable interacting with the user, some shortcut functions, and a lazy evaluation function. Variables are unsupported.
Unlambda is free and open-source software distributed under a GNU General Public License (GPL) 2.0 or later.
Basic principles
As an esoteric programming language, Unlambda is meant as a demonstration of very pure functional programming rather than for practical use. Its main feature is the lack of conventional operators and data types—the only kind of data in the program are one-parameter functions. Data can nevertheless be simulated with appropriate functions as in the lambda calculus. Multi-parameter functions can be represented via the method of currying.
Unlambda is based on the principle of abstraction elimination, or the elimination of all saved variables, including functions. As a purely functional language, Unlambda's functions are first-class objects, and are the only such objects.
Here is an implementation of a hello world program in Unlambda:
`r```````````.H.e.l.l.o. .w.o.r.l.di
Original built-in functions
The notation .x denotes a function which takes one argument and returns it unchanged, printing the single character x as a side effect when it is invoked. i represents the version of the identity function that has no such side effect; it is used here as a dummy argument. The program `.di applies the d-printing function to a dummy argument of i, returning i and printing the letter d as a side effect. Similarly, ``.l.di first applies .l to .d, printing the letter l and returning .d; this result of .d is then applied to i as in the previous example. The function r is syntactic sugar for the function that prints a newline character.
Other important features provided by Unlambda include the k and s functions. k manufactures constant functions: the result of `kx is a function which, when invoked, returns x. Thus the value of ``kxy is x for any x and y.
s is a generalized evaluation operator. ```sxyz evaluates to ``xz`yz for any x, y, and z. It is a remarkable fact that s and k are sufficient to perform any calculation, as described in SKI combinator calculus. As a brief example, the identity function i can be implemented as ``skk, since ```skkx yields x for all x.
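The reductions above can be mirrored with ordinary closures; a hedged sketch (Python stands in for Unlambda's one-argument functions here):

```python
# k and s as curried one-argument functions, matching the reductions
# ``kxy -> x and ```sxyz -> ``xz`yz described in the text.
k = lambda x: lambda y: x
s = lambda x: lambda y: lambda z: x(z)(y(z))

# The identity function as ``skk: ```skkx reduces to x for all x.
i = s(k)(k)
print(i(42))  # -> 42
```

Evaluating s(k)(k)(42) steps through k(42)(k(42)), which discards its second argument and yields 42, exactly as the SKI calculus predicts.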
Unlambda's one flow control construct is call with current continuation, denoted c. When an expression of the form `cx is evaluated, a special continuation object is constructed, representing the state of the interpreter at that moment. Then x is evaluated, and then the result is given the continuation object as an argument. If the continuation i
https://en.wikipedia.org/wiki/Rapira

Rapira is also a name for the Soviet 100 mm anti-tank gun T-12
Rapira (rapier) is an educational procedural programming language developed in the Soviet Union and implemented on the Agat computer, PDP-11 clones (Electronika, DVK, BK series), and Intel 8080 and Zilog Z80 clones (Korvet). It is an interpreted language with dynamic typing and high-level constructs. The language originally had a Russian-based set of reserved words (keywords), but English and Romanian were added later. It was considered more elegant and easier to use than Pascal implementations of the time.
Rapira was used to teach computer programming in Soviet schools. The integrated development environment included a text editor and a debugger.
Sample program:
ПРОЦ СТАРТ()
ВЫВОД: 'Привет, мир!!!'
КОН ПРОЦ
The same, but using the English keywords:
proc start()
output: 'Hello, world!!!';
end proc
Rapira's ideology was based on languages such as POP-2 and SETL, with strong influences from ALGOL.
For example, Rapira implements a flexible data structure named a tuple. In Rapira, tuples are heterogeneous lists that support operations such as indexing, joining, length count, extraction of sublists, and easy comparison.
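The tuple operations listed above map naturally onto heterogeneous list types in other languages; a hedged approximation using Python lists:

```python
# Rapira-style "tuple" operations approximated with a Python list:
# heterogeneous elements, indexing, joining, length, sublists, comparison.
t = [1, "two", 3.0]
u = t + [[4, 5]]              # joining (nested tuples are allowed)
print(len(u))                  # length count -> 4
print(u[1])                    # indexing -> two
print(u[0:2])                  # sublist -> [1, 'two']
print(t == [1, "two", 3.0])    # easy comparison -> True
```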
References
External links
An interpreter for the English dialect of Rapira
Rapira Reborn, instructional book for learning Rapira
Non-English-based programming languages
Pascal programming language family
Procedural programming languages
Structured programming languages
Educational programming languages
Computing in the Soviet Union
Soviet inventions
Programming languages created in the 20th century
https://en.wikipedia.org/wiki/Andrey%20Yershov

Andrey Petrovich Yershov (19 April 1931, Moscow – 8 December 1988, Moscow) was a Soviet computer scientist, notable as a pioneer in systems programming and programming language research.
Donald Knuth considers him to have independently co-discovered the idea of hashing with linear probing. He also created one of the first algorithms for compiling arithmetic expressions.
He was responsible for the languages ALPHA and Rapira, the first Soviet time-sharing system AIST-0, electronic publishing system RUBIN, and a multiprocessing workstation MRAMOR. He also was the initiator of developing the Computer Bank of the Russian Language (Машинный Фонд Русского Языка), the Soviet project for creating a large representative Russian corpus, a project in the 1980s comparable to the Bank of English and British National Corpus. The Russian National Corpus created by the Russian Academy of Sciences in the 2000s is a successor of Yershov's project.
From 1959, he worked at the Siberian Division of the Academy of Sciences of the Soviet Union, and helped found both the Novosibirsk Computer Center and the Siberian School of Computer Science.
He received the Academician A. N. Krylov Prize from the Academy of Sciences, the first programmer to be so recognized. In 1974, he was made a Distinguished Fellow of the British Computer Society.
He was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the languages ALGOL 60 and ALGOL 68. In 1981, he received the IFIP's Silver Core Award.
To the computer science community he is best known for his speech Aesthetics and the Human Factor in Programming, presented at the dinner of the 1972 AFIPS Spring Joint Computer Conference and, owing to its importance, republished as an article in Communications of the ACM.
See also
List of Russian IT developers
List of computer scientists
List of programmers
References
Books
Programming Programme for the BESM Computer, Pergamon Press, London, 1959. Translated from the 1958 Russian original.
External links
Academician A. Yershov's archive, including documents and photographs
About the archive
Biography of Academician A.P. Yershov at the archive
Computer Fund of Russian Language
PSI International Andrey Yershov Memorial Conference (Novosibirsk, Russia)
1931 births
1988 deaths
Fellows of the British Computer Society
Full Members of the USSR Academy of Sciences
Moscow State University alumni
Academic staff of Novosibirsk State University
Recipients of the Order of the Red Banner of Labour
Computer programmers
Computer systems researchers
Programming language designers
Programming language researchers
Russian computer scientists
Russian inventors
Soviet computer scientists
Soviet inventors
Scientists from Moscow
Scientists from Novosibirsk
Burials at Yuzhnoye Cemetery |
https://en.wikipedia.org/wiki/Drum%20memory | Drum memory was a magnetic data storage device invented by Gustav Tauschek in 1932 in Austria. Drums were widely used in the 1950s and into the 1960s as computer memory.
Many early computers, called drum computers or drum machines, used drum memory as the main working memory of the computer. Some drums were also used as secondary storage, as for example in the various IBM drum storage drives.
Drums were displaced as primary computer memory by magnetic core memory, which offered a better balance of size, speed, cost, reliability and potential for further improvements. Drums in turn were replaced by hard disk drives for secondary storage, which were both less expensive and offered denser storage. The manufacturing of drums ceased in the 1970s.
Technical design
A drum memory or drum storage unit contained a large metal cylinder, coated on the outside surface with a ferromagnetic recording material. It could be considered the precursor to the hard disk drive (HDD), but in the form of a drum (cylinder) rather than a flat disk. In most designs, one or more rows of fixed read-write heads ran along the long axis of the drum, one for each track. The drum's controller simply selected the proper head and waited for the data to appear under it as the drum turned (rotational latency). Not all drum units were designed with each track having its own head. Some, such as the English Electric DEUCE drum and the UNIVAC FASTRAND had multiple heads moving a short distance on the drum in contrast to modern HDDs, which have one head per platter surface.
In November 1953 Hagen published a paper disclosing "air floating" of magnetic heads in an experimental sheet metal drum. A US patent filed in January 1954 by Baumeister of IBM disclosed a "spring loaded and air supported shoe for poising a magnetic head above a rapidly rotating magnetic drum." Flying heads became standard in drums and hard disk drives.
Magnetic drum units used as primary memory were addressed by word. Drum units used as secondary storage were addressed by block. Several modes of block addressing were possible, depending on the device.
Blocks took up an entire track and were addressed by track.
Tracks were divided into fixed length sectors and addressing was by track and sectors.
Blocks were variable length, and blocks were addressed by track and record number.
Blocks were variable length with a key, and could be searched by key content.
Some devices were divided into logical cylinders, and addressing by track was actually logical cylinder and track.
The performance of a drum with one head per track is comparable to that of a disk with one head per track and is determined almost entirely by the rotational latency, whereas the performance of an HDD with moving heads includes a rotational latency delay plus the time to position the head over the desired track (seek time). In the era when drums were used as main working memory, programmers often did optimum programming: the programmer, or an assembler such as the Symbolic Optimal Assembly Program (SOAP), positioned each instruction on the drum so that it rotated under the read head just as the processor was ready for it, minimizing rotational delay. |
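The latency comparison above can be made concrete with a toy calculation. The rotation speed and seek time below are illustrative values only, not figures for any real drum or disk:

```python
def avg_access_time_ms(rpm, seek_ms=0.0):
    """Expected access time in milliseconds: any seek (head-positioning)
    time plus the average rotational latency, which is half a revolution."""
    ms_per_revolution = 60_000.0 / rpm
    return seek_ms + ms_per_revolution / 2.0

# Head-per-track drum: no seek, so access time is purely rotational.
drum_ms = avg_access_time_ms(rpm=3000)
# Moving-head device: the same rotational latency plus seek time on top.
moving_head_ms = avg_access_time_ms(rpm=3000, seek_ms=75.0)
```

At 3000 rpm a revolution takes 20 ms, so the head-per-track device averages 10 ms per access while the moving-head device pays its seek time in addition.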
https://en.wikipedia.org/wiki/Virtual%20private%20network | A virtual private network (VPN) is a mechanism for creating a secure connection between a computing device and a computer network, or between two networks, using an insecure communication medium such as the public Internet.
A VPN can extend access to a private network (one that disallows or restricts public access) to users who do not have direct access to it, such as an office network allowing secure access from off-site over the Internet. The benefits of a VPN include security, reduced costs for dedicated communication lines, and greater flexibility for remote workers. VPNs are also used to bypass internet censorship. Encryption is common, although not an inherent part of a VPN connection.
A VPN is created by establishing a virtual point-to-point connection through the use of tunneling protocols over existing networks. A VPN available from the public Internet can provide some of the benefits of a private wide area network (WAN).
Types
Virtual private networks may be classified into several categories:
Remote access A host-to-network configuration is analogous to connecting a computer to a local area network. This type provides access to an enterprise network, such as an intranet. This may be employed for remote workers, or to enable a mobile worker to access necessary tools without exposing them to the public Internet.
Site-to-site A site-to-site configuration connects two networks. This configuration expands a network across geographically disparate offices or connects a group of offices to a data center installation. The interconnecting link may run over a dissimilar intermediate network, such as two IPv6 networks connected over an IPv4 network.
Extranet-based site-to-site In the context of site-to-site configurations, the terms intranet and extranet are used to describe two different use cases. An intranet site-to-site VPN describes a configuration where the sites connected by the VPN belong to the same organization, whereas an extranet site-to-site VPN joins sites belonging to multiple organizations.
Typically, individuals interact with remote access VPNs, whereas businesses tend to make use of site-to-site connections for business-to-business, cloud computing, and branch office scenarios. However, these technologies are not mutually exclusive and, in a significantly complex business network, may be combined to enable remote access to resources located at any given site, such as an ordering system that resides in a data center.
VPN systems also may be classified by:
the tunneling protocol used to tunnel the traffic
the tunnel's termination point location, e.g., on the customer edge or network-provider edge
the type of topology of connections, such as site-to-site or network-to-network
the levels of security provided
the OSI layer they present to the connecting network, such as Layer 2 circuits or Layer 3 network connectivity
the number of simultaneous connections
Security mechanisms
VPNs cannot make online connections completely anonymous, but they can increase privacy and security. |
https://en.wikipedia.org/wiki/Broadband | In telecommunications, broadband is the wide-bandwidth data transmission that exploits signals at a wide spread of frequencies or several different simultaneous frequencies, and is used in fast internet connections. The medium can be coaxial cable, optical fiber, wireless Internet (radio), twisted pair, or satellite.
The term originally meant 'using a wide spread of frequencies' and referred to services that were analog at the lowest level; nowadays, in the context of Internet access, 'broadband' is often used to mean any high-speed Internet access that is seemingly always on and is faster than dial-up access over traditional analog or ISDN PSTN services.
The ideal telecommunication network has the following characteristics: broadband, multi-media, multi-point, multi-rate and economical implementation for a diversity of services (multi-services). The Broadband Integrated Services Digital Network (B-ISDN) was planned to provide these characteristics. Asynchronous Transfer Mode (ATM) was promoted as a target technology for meeting these requirements.
Overview
Different criteria for "broad" have been applied in different contexts and at different times. Its origin is in physics, acoustics, and radio systems engineering, where it had been used with a meaning similar to "wideband", or in the context of audio noise reduction systems, where it indicated a single-band rather than a multiple-audio-band system design of the compander. Later, with the advent of digital telecommunications, the term was mainly used for transmission over multiple channels. Whereas a passband signal is also modulated so that it occupies higher frequencies (compared to a baseband signal which is bound to the lowest end of the spectrum, see line coding), it is still occupying a single channel. The key difference is that what is typically considered a broadband signal in this sense is a signal that occupies multiple (non-masking, orthogonal) passbands, thus allowing for much higher throughput over a single medium but with additional complexity in the transmitter/receiver circuitry.
The term became popularized through the 1990s as a marketing term for Internet access that was faster than dial-up access (dial-up being typically limited to a maximum of 56 kbit/s). This meaning is only distantly related to its original technical meaning.
Since 1999, broadband Internet access has been a factor in public policy. In that year, at the World Trade Organization Biannual Conference called “Financial Solutions to Digital Divide” in Seattle, the term “Meaningful Broadband” was introduced to the world leaders, leading to the activation of a movement to close the digital divide. Fundamental aspects of this movement are to suggest that the equitable distribution of broadband is a fundamental human right.
Personal computing facilitated easy access, manipulation, storage, and exchange of information, and required reliable data transmission. Communicating documents by images and the use of high-resolution graphics increased the demand for data transmission capacity. |
https://en.wikipedia.org/wiki/Golomb%20coding | Golomb coding is a lossless data compression method using a family of data compression codes invented by Solomon W. Golomb in the 1960s. Alphabets following a geometric distribution will have a Golomb code as an optimal prefix code, making Golomb coding highly suitable for situations in which the occurrence of small values in the input stream is significantly more likely than large values.
Rice coding
Rice coding (invented by Robert F. Rice) denotes using a subset of the family of Golomb codes to produce a simpler (but possibly suboptimal) prefix code. Rice used this set of codes in an adaptive coding scheme; "Rice coding" can refer either to that adaptive scheme or to using that subset of Golomb codes. Whereas a Golomb code has a tunable parameter that can be any positive integer value, Rice codes are those in which the tunable parameter is a power of two. This makes Rice codes convenient for use on a computer since multiplication and division by 2 can be implemented more efficiently in binary arithmetic.
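The computational convenience mentioned above is that with M = 2^k the division and modulo reduce to a shift and a mask. A minimal sketch (function name is my own choice):

```python
def rice_split(x, k):
    """Split nonnegative x under Rice parameter M = 2**k: the quotient
    is a right shift and the remainder a bit mask, so no division
    instruction is needed."""
    q = x >> k                # equivalent to x // 2**k
    r = x & ((1 << k) - 1)    # equivalent to x % 2**k
    return q, r
```

For example, `rice_split(19, 3)` splits 19 under M = 8 into quotient 2 and remainder 3, matching `divmod(19, 8)`.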
Rice was motivated to propose this simpler subset due to the fact that geometric distributions are often varying with time, not precisely known, or both, so selecting the seemingly optimal code might not be very advantageous.
Rice coding is used as the entropy encoding stage in a number of lossless image compression and audio data compression methods.
Overview
Construction of codes
Golomb coding uses a tunable parameter M to divide an input value x into two parts: q, the result of a division by M, and r, the remainder. The quotient is sent in unary coding, followed by the remainder in truncated binary encoding. When M = 1, Golomb coding is equivalent to unary coding.
Golomb–Rice codes can be thought of as codes that indicate a number by the position of the bin (q) and the offset within the bin (r). The example figure shows the position q and offset r for the encoding of an integer x using the Golomb–Rice parameter M, with source probabilities following a geometric distribution.
Formally, the two parts are given by the following expressions, where x is the nonnegative integer being encoded:
q = ⌊x/M⌋
and
r = x − qM.
Both q and r will be encoded using variable numbers of bits: q by a unary code, and r by b bits for a Rice code, or a choice between b and b+1 bits for a Golomb code (i.e., when M is not a power of 2), with b = ⌊log₂(M)⌋. If r < 2^(b+1) − M, then use b bits to encode r; otherwise, use b+1 bits to encode r. Clearly, b = log₂(M) if M is a power of 2, and we can encode all values of r with b bits.
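A compact sketch of the encoding rule (unary quotient followed by a truncated-binary remainder); the function name and the bit-string output format are my own choices:

```python
def golomb_encode(x, M):
    """Golomb-encode nonnegative integer x with parameter M >= 1,
    returning the codeword as a string of '0'/'1' characters."""
    q, r = divmod(x, M)
    bits = "1" * q + "0"              # unary code for the quotient q
    b = M.bit_length() - 1            # b = floor(log2(M))
    cutoff = (1 << (b + 1)) - M       # remainders below this take b bits
    if r < cutoff:
        if b:                         # b == 0 only when M == 1 (pure unary)
            bits += format(r, "0%db" % b)
    else:                             # remaining values take b + 1 bits
        bits += format(r + cutoff, "0%db" % (b + 1))
    return bits
```

With M = 4 (a Rice code, k = 2), every remainder takes exactly two bits: `golomb_encode(9, 4)` yields `"11001"`, i.e. two 1s and a terminating 0 for q = 2, then `01` for r = 1.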
The integer x treated by Golomb was the run length of a Bernoulli process, which has a geometric distribution starting at 0. The best choice of parameter M is a function of the corresponding Bernoulli process, which is parameterized by p, the probability of success in a given Bernoulli trial. M is either the median of the distribution or the median ±1. It can be determined by these inequalities:
(1 − p)^M + (1 − p)^(M+1) ≤ 1 < (1 − p)^(M−1) + (1 − p)^M,
which are solved by
M = ⌈ −1 / log₂(1 − p) ⌉.
For example, with p = 0.2:
M = ⌈ −1 / log₂(0.8) ⌉ = ⌈3.106⌉ = 4.
The Golomb code for this distribution is equivalent to the Huffman code for the same probabilities, if it were possible to compute the Huffman code for the infinite set of source values. |
https://en.wikipedia.org/wiki/Java%20Data%20Objects | Java Data Objects (JDO) is a specification of Java object persistence. One of its features is a transparency of the persistence services to the domain model. JDO persistent objects are ordinary Java programming language classes (POJOs); there is no requirement for them to implement certain interfaces or extend from special classes. JDO 1.0 was developed under the Java Community Process as JSR 12. JDO 2.0 was developed under JSR 243 and was released on May 10, 2006. JDO 2.1 was completed in Feb 2008, developed by the Apache JDO project. JDO 2.2 was released in October 2008. JDO 3.0 was released in April 2010.
Object persistence is defined in the external XML metafiles, which may have vendor-specific extensions. JDO vendors provide developers with enhancers, which modify compiled Java class files so they can be transparently persisted. (Note that byte-code enhancement is not mandated by the JDO specification, although it is the commonly used mechanism for implementing the JDO specification's requirements.) Currently, JDO vendors offer several options for persistence, e.g. to RDBMS, to OODB, or to files.
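As a hedged illustration of such an XML metafile, the fragment below sketches a `package.jdo` descriptor; the package, class, and field names here are hypothetical examples, not part of any real project:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jdo>
  <!-- Hypothetical domain classes; no special interfaces or base classes -->
  <package name="com.example.model">
    <class name="Customer" identity-type="datastore">
      <field name="name"/>
      <field name="orders" persistence-modifier="persistent">
        <collection element-type="com.example.model.Order"/>
      </field>
    </class>
    <class name="Order" identity-type="datastore"/>
  </package>
</jdo>
```

A JDO enhancer reads a descriptor like this and rewrites the compiled classes so instances can be persisted transparently.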
JDO enhanced classes are portable across different vendors' implementation. Once enhanced, a Java class can be used with any vendor's JDO product.
JDO is integrated with Java EE in several ways. First, the vendor implementation may be provided as a JCA connector. Second, JDO may work in the context of Java EE transaction services.
JDO vs. EJB3 vs. JPA
Enterprise JavaBeans 3.0 (EJB3) specification also covered persistence, as had EJB v2 with Entity Beans. There have been standards conflicts between the two standards bodies in terms of pre-eminence. JDO has several commercial implementations.
In the end, persistence has been "broken out" of "EJB3 Core", and a new standard formed, the Java Persistence API (JPA). JPA uses the javax.persistence package, and was first specified in a separate document within the EJB3 spec JSR 220, but was later moved to its own spec JSR 317. Significantly, javax.persistence will not require an EJB container, and thus will work within a Java SE environment as well, as JDO always has. JPA, however, is an object-relational mapping (ORM) standard, while JDO is both an object-relational mapping standard and a transparent object persistence standard. JDO, from an API point of view, is agnostic to the technology of the underlying datastore, whereas JPA is targeted to RDBMS datastores (although there are several JPA providers that support access to non-relational datastores through the JPA API, such as DataNucleus and ObjectDB).
Leading JDO commercial implementations and open source projects also offer a JPA API implementation as an alternative access to their underlying persistence engines, formerly exposed solely via JDO in the original products. There are many open source implementations of JDO.
New Features in JDO Version 2 Not Found In Version 1
Disconnected object graphs concept
Standardized ORM mapping descriptors |
https://en.wikipedia.org/wiki/BESM | BESM (БЭСМ) is a series of Soviet mainframe computers built in the 1950s and 1960s. The name is an acronym for "Bolshaya (or Bystrodeystvuyushchaya) Elektronno-schotnaya Mashina" ("Большая электронно-счётная машина" or "Быстродействующая электронно-счётная машина"), meaning "Big Electronic Computing Machine" or "High-Speed Electronic Computing Machine". It was designed at the Institute of Precision Mechanics and Computer Engineering.
Models
The BESM series included six models.
BESM-1
BESM-1, originally referred to as simply the BESM or BESM AN ("BESM Akademii Nauk", BESM of the Academy of Sciences), was completed in 1952. Only one BESM-1 machine was built. The machine used approximately 5,000 vacuum tubes. At the time of completion, it was the fastest computer in Europe. The floating-point numbers were represented as 39-bit words: 32 bits for the mantissa, one bit for the sign, and 1 + 5 bits for the exponent. It was capable of representing numbers in the range 10⁻⁹–10¹⁰. BESM-1 had 1024 words of read–write memory using ferrite cores, and 1024 words of read-only memory based on semiconductor diodes. It also had external storage: four magnetic tape units of 30,000 words each, and fast magnetic drum storage with a capacity of 5120 words and an access rate of 800 words/second. The computer was capable of performing 8–10 kFLOPS. The energy consumption was approximately 30 kW, not accounting for the cooling systems.
BESM-2
BESM-2 also used vacuum tubes.
BESM-3M and BESM-4
BESM-3M and BESM-4 were built using transistors. Their architecture was similar to that of the M-20 and M-220 series. The word size was 45 bits. Thirty BESM-4 machines were built. BESM-4 was used to create the first ever computer animation. The prototypes of both models were made in 1962–63, and the beginning of the series release was in 1964.
EPSILON (a macro language with high level features including strings and lists, developed by Andrey Ershov at Novosibirsk in 1967) was used to implement ALGOL 68 on the M-220.
BESM-6
The BESM-6 was the best known and influential model of the series. The design was completed in 1965. Production started in 1968 and continued for the following 19 years.
See also
Sergei Alekseyevich Lebedev
Lev Korolyov
List of Soviet computer systems
History of computing hardware
History of computing in the Soviet Union
List of vacuum tube computers
References
Further reading
A museum curator suggests Russia's BESM supercomputer may have been superior to the USA's supercomputers during the early stages of the Cold War.
Mainframe computers
Supercomputers
Vacuum tube computers
Soviet inventions
Transistorized computers
Soviet computer systems |
https://en.wikipedia.org/wiki/IBM%207090 | The IBM 7090 is a second-generation transistorized version of the earlier IBM 709 vacuum tube mainframe computer that was designed for "large-scale scientific and technological applications". The 7090 is the fourth member of the IBM 700/7000 series of scientific computers. The first 7090 installation was in December 1959. In 1960, a typical system sold for $2.9 million or could be rented for $63,500 a month.
The 7090 uses a 36-bit word length, with an address space of 32,768 words (15-bit addresses). It operates with a basic memory cycle of 2.18 μs, using the IBM 7302 Core Storage core memory technology from the IBM 7030 (Stretch) project.
With a processing speed of around 100 Kflop/s, the 7090 is six times faster than the 709, and could be rented for half the price. An upgraded version, the 7094, was up to twice as fast. Both the 7090 and the 7094 were withdrawn from sale on July 14, 1969, but systems remained in service for more than a decade after.
Development and naming
Although the 709 was a superior machine to its predecessor, the 704, it was being built and sold at the time that transistor circuitry was supplanting vacuum tube circuits. Hence, IBM redeployed its 709 engineering group to the design of a transistorized successor. That project became called the 709-T (for transistorized), which, because of the sound when spoken, quickly shifted to the nomenclature 7090 (i.e., seven-oh-ninety). Similarly, related machines such as the 7070 and other 7000 series equipment were sometimes called by names of digit-digit-decade (e.g., seven-oh-seventy).
IBM 7094
An upgraded version, the IBM 7094, was first installed in September 1962. It has seven index registers, instead of three on the earlier machines. The 7094 console has a distinctive box on top that displays lights for the four new index registers. The 7094 introduced double-precision floating point and additional instructions, but is largely backward compatible with the 7090.
Although the 7094 has four more index registers than the 709 and 7090, at power-on time it is in multiple tag mode, compatible with the 709 and 7090, and requires a Leave Multiple Tag Mode instruction in order to enter seven index register mode and use all seven index registers. In multiple tag mode, when more than one bit is set in the tag field, the contents of the two or three selected index registers are logically ORed, not added, together, before the decrement takes place. In seven index register mode, if the three-bit tag field is not zero, it selects just one of seven index registers, however, the program can return to multiple tag mode with the instruction Enter Multiple Tag Mode, restoring 7090 compatibility.
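The two tag modes can be modeled with a small sketch. This is an illustrative model only, not a cycle-accurate description of the 7094; the register contents used below are arbitrary examples:

```python
def select_index_value(tag, xr, multiple_tag_mode):
    """Value contributed by the index registers for a 3-bit tag field.
    xr maps register numbers 1..7 to their contents."""
    if tag == 0:
        return 0
    if multiple_tag_mode:
        # 709/7090-compatible mode: each set tag bit selects one of
        # registers 1, 2 and 4, and the selected contents are ORed
        # (not added) together.
        value = 0
        for bit in (1, 2, 4):
            if tag & bit:
                value |= xr[bit]
        return value
    # Seven-index-register mode: the tag names one register directly.
    return xr[tag]
```

With tag = 3, multiple tag mode ORs registers 1 and 2, whereas seven-index mode selects register 3 alone.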
In April 1964, the first 7094 II was installed; it was almost twice as fast as the 7094 due to a faster clock cycle, dual memory banks, and improved overlap of instruction execution, an early instance of pipelined design.
IBM 7040/7044 |
https://en.wikipedia.org/wiki/WinZip | WinZip is a trialware file archiver and compressor for Microsoft Windows, macOS, iOS and Android. It is developed by WinZip Computing (formerly Nico Mak Computing), which is owned by Corel Corporation. The program can create archives in Zip file format, unpack some other archive file formats and it also has various tools for system integration.
Features
Support for ARC and ARJ archives if suitable external programs are installed.
History
WinZip 1.0 was released in April 1991 as a Graphical User Interface (GUI) front-end for PKZIP.
From version 6.0 until version 9.0, registered users could download the newest versions of the software, enter their original registration information or install over the top of their existing registered version, and thereby obtain a free upgrade. This upgrade scheme was discontinued as of version 10.0.
On May 2, 2006, WinZip Computing was acquired by Corel Corporation using the proceeds from its initial public offering.
Supported .ZIP archive features
128- and 256-bit key AES encryption, in addition to the less secure PKZIP 2.0 encryption method used in earlier versions. The AES implementation, using Brian Gladman's code, was FIPS-197 certified on March 27, 2003. However, the Central Directory Encryption feature is not supported.
Release history
Windows
The ZIP file archive format (ZIP) was originally invented for MS-DOS in 1989 by Phil Katz.
Mac
WinZip 1.0 for Mac OS X (November 16, 2010): Initial release is compatible with Intel Macs and can be run on v10.5 'Leopard.'
iOS
The iOS version was first released on February 17, 2012.
Android
WinZip Android was first released on June 19, 2012.
See also
Comparison of archive formats
Comparison of file archivers
List of archive formats
References
External links
1991 software
2006 mergers and acquisitions
Corel software
data compression software
file archivers
Proprietary cross-platform software
Windows compression software |
https://en.wikipedia.org/wiki/Token%20bus%20network | Token bus is a network implementing a Token Ring protocol over a virtual ring on a coaxial cable. A token is passed around the network nodes and only the node possessing the token may transmit. If a node doesn't have anything to send, the token is passed on to the next node on the virtual ring. Each node must know the address of its neighbour in the ring, so a special protocol is needed to notify the other nodes of connections to, and disconnections from, the ring.
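The token-passing discipline described above can be sketched as a toy simulation. The station addresses and the ascending ring order are illustrative choices, not details of the IEEE 802.4 protocol:

```python
class TokenBusRing:
    """Toy model of a token bus: stations on a shared bus form a
    logical ring ordered by address; only the token holder transmits."""

    def __init__(self, addresses):
        self.ring = sorted(addresses)   # logical ring order (illustrative)
        self.holder = self.ring[0]      # station currently holding the token

    def pass_token(self):
        """Hand the token to the next station on the virtual ring."""
        i = self.ring.index(self.holder)
        self.holder = self.ring[(i + 1) % len(self.ring)]
        return self.holder

    def transmit(self, station, frame, medium):
        """A station may place a frame on the bus only while holding
        the token; anything else is a protocol violation."""
        if station != self.holder:
            raise RuntimeError("station without the token tried to send")
        medium.append((station, frame))
```

Note how the token wraps from the highest address back to the lowest: the ring is purely logical, since the endpoints of the physical bus never meet.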
Ethernet's access protocol could not absolutely guarantee a maximum time any station would have to wait to access the network, so was thought to be unsuitable for manufacturing automation applications. The Token bus protocol was created to combine the benefits of a physical bus network with the deterministic access protocol of a Token Ring network.
Token bus was standardized by IEEE standard 802.4. It was mainly used for industrial applications. Token bus was used by General Motors for their Manufacturing Automation Protocol (MAP) standardization effort. This is an application of the concepts used in Token Ring networks. The main difference is that the endpoints of the bus do not meet to form a physical ring.
To guarantee bounded packet delay and reliable transmission, modified token bus protocols were proposed for manufacturing automation systems and flexible manufacturing systems (FMS). To provide the deterministic access required by real-time IoT communication in a distributed manufacturing plant or IIoT setting, the token bus method can also be implemented on top of IEEE 802.15.4.
A means for carrying Internet Protocol over IEEE 802 networks, including token bus networks, was developed.
The IEEE 802.4 Working Group has disbanded and the standard has been withdrawn by the IEEE.
See also
ARCNET
Network topology
References
Local area networks
IEEE 802
Link protocols |
https://en.wikipedia.org/wiki/Free%20variables%20and%20bound%20variables | In mathematics, and in other disciplines involving formal languages, including mathematical logic and computer science, a variable may be said to be either free or bound. The terms are opposites. A free variable is a notation (symbol) that specifies places in an expression where substitution may take place and is not a parameter of this or any container expression. Some older books use the terms real variable and apparent variable for free variable and bound variable, respectively. The idea is related to a placeholder (a symbol that will later be replaced by some value), or a wildcard character that stands for an unspecified symbol.
In computer programming, the term free variable refers to variables used in a function that are neither local variables nor parameters of that function. The term non-local variable is often a synonym in this context.
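In Python terms, a variable that is neither local nor a parameter is exactly what the closure below uses; the function names are arbitrary:

```python
def make_counter(step):
    count = 0              # local to make_counter, but free inside bump
    def bump():
        nonlocal count
        # Inside bump, both 'count' and 'step' are free (non-local)
        # variables: they are used here, yet are neither local
        # variables nor parameters of bump itself.
        count += step
        return count
    return bump
```

Each call advances the captured state: a counter built with step 2 returns 2, then 4, showing that `bump` depends on variables bound in its enclosing scope.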
An instance of a variable symbol is bound, in contrast, if the value of that variable symbol has been bound to a specific value or range of values in the domain of discourse or universe. This may be achieved through the use of logical quantifiers, variable-binding operators, or an explicit statement of allowed values for the variable (such as "...where n is a positive integer"). A variable symbol overall is bound if at least one occurrence of it is bound. Since the same variable symbol may appear in multiple places in an expression, some occurrences of the variable symbol may be free while others are bound; hence "free" and "bound" are at first defined for occurrences and then generalized over all occurrences of the variable symbol in the expression. However it is done, the variable ceases to be an independent variable on which the value of the expression depends, whether that value be a truth value or the numerical result of a calculation, or, more generally, an element of an image set of a function.
While the domain of discourse in many contexts is understood, when an explicit range of values for the bound variable has not been given, it may be necessary to specify the domain in order to properly evaluate the expression. For example, consider the following expression in which both variables are bound by logical quantifiers: ∀y ∃x (x = √y).
This expression evaluates to false if the domain of x and y is the real numbers, but true if the domain is the complex numbers.
The term "dummy variable" is also sometimes used for a bound variable (more commonly in general mathematics than in computer science), but this should not be confused with the identically named but unrelated concept of dummy variable as used in statistics, most commonly in regression analysis.
Examples
Before stating a precise definition of free variable and bound variable, the following are some examples that perhaps make these two concepts clearer than the definition would:
In the expression ∑_{k=1}^{10} f(k, n),
n is a free variable and k is a bound variable; consequently the value of this expression depends on the value of n, but there is nothing called k on which it could depend. |
https://en.wikipedia.org/wiki/Richmond%20Hill | Richmond Hill may refer to:
Places
Australia
Richmond Hill, Queensland, a suburb of Charters Towers
Canada
Richmond Hill, Ontario
Richmond Hill GO Station, a station in the GO Transit network located in the town
Richmond Hill (federal electoral district)
Richmond Hill (provincial electoral district)
New Zealand
Richmond Hill, Christchurch, Canterbury, a suburb of Christchurch on the Banks Peninsula
Sri Lanka
Richmond Hill, Galle
United Kingdom
Richmond Hill, London
Richmond Hill, Leeds, West Yorkshire
Richmond Hill, Bournemouth, Dorset
United States
Richmond Hill, Georgia
Richmond Hill Road, a major street in Augusta, Georgia
Richmond Hill explosion, in Indianapolis
Richmond Hill, Queens, New York City
Richmond Hill (Manhattan), a colonial estate that served for a time as the headquarters of George Washington
Richmond Hill (Livingston, New York), listed on the National Register of Historic Places
Richmond Hill, North Carolina
Richmond Hill, Virginia
Other
SS Richmond Hill, a cargo ship built in 1940 for Counties Ship Management Co. Ltd.
Richmond Hill (television series), a 1988 Australian television soap opera |
https://en.wikipedia.org/wiki/Bill%20Stealey | John Wilbur Stealey Sr. is an American game developer and publisher who founded MicroProse with Sid Meier. He also founded (in 1995) and is the current CEO of iEntertainment Network.
Business career
Stealey took a job with General Instrument as their Director of Strategic Planning for their Systems and Service Division in Hunt Valley, Maryland. There he met Sid Meier and co-founded his first game company, MicroProse Software. As CEO he grew the company to over $43 million in annual sales, taking MicroProse public in 1991 and selling it in 1993 to a Kleiner Perkins company, Spectrum HoloByte. He resigned from the company following the merger.
Stealey started the game software company Interactive Magic in 1995, took it public in 1998, and sold it to a private equity firm in 1999.
While running iEntertainment Network, Stealey mentioned in a 1996 interview that he owned his own military training aircraft and flew it for recreation on a regular basis.
Personal life
Stealey graduated from the United States Air Force Academy in 1970 and was, at the time he co-founded MicroProse, a Major in the USAF Reserve and an instructor for the Pennsylvania Air National Guard, flying A-37 attack aircraft. He retired from the military with the rank of Lieutenant Colonel.
Stealey owned the Baltimore Spirit of the National Professional Soccer League from the franchise's inception in 1992 until he sold it to Edwin F. Hale, Sr. in 1998.
Next Generation listed Stealey in their "75 Most Important People in the Games Industry of 1995" for his roles as former head of MicroProse and then-current head of Interactive Magic. Stealey left the company in 1999, but later returned as CEO in 2002.
References
External links
LinkedIn Profile
1947 births
American video game designers
Living people
MicroProse people
United States Air Force Academy alumni
Baltimore Blast |
https://en.wikipedia.org/wiki/Vertical%20blank%20interrupt | A vertical blank interrupt (or VBI) is a hardware feature found in some legacy computer systems that generate a video signal. Cathode-ray tube based video display circuits generate vertical blanking and vertical sync pulses when the display picture has completed and the raster is being returned to the start of the display. With VBI, the vertical blank pulse is also used to generate an interrupt request for the computer's microprocessor.
The interrupt service routine can then modify data in the video display memory while it is not being read to avoid screen tearing effects. This was particularly useful in simple home computers and video game consoles that relied upon a central microprocessor to generate text or graphic displays. More advanced home computers featuring hardware sprites often supported the more flexible horizontal blank interrupt instead in order to allow them to be multiplexed.
As the VBI will be generated at the start of every displayed frame (50 Hz for PAL, 60 Hz for NTSC), it is a useful timebase in systems lacking other timing sources. VBIs are used in some home computers to perform regular functions like scanning the keyboard and joystick ports. It can also be used to implement a basic form of multitasking as well as a buffered graphics screen via page flipping, if hardware permits.
Modern protected mode operating systems generally do not support VBIs as access to hardware interrupts for unprivileged user programs could compromise the system stability. Instead, various APIs like DirectX provide efficient and safe ways to present graphics free of tear and flicker.
For computers that support VBIs, see the article on raster interrupts.
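The page-flipping scheme described above can be sketched in Python. This is an illustrative simulation only: the `Display` class, `on_vblank` handler, and buffer layout are invented for the sketch and do not correspond to any particular machine's hardware.

```python
# Minimal sketch of VBI-driven page flipping: two framebuffers, one scanned
# out while the other is drawn into, swapped at each simulated vertical blank.

FRAMES_PER_SECOND = 60  # NTSC; a PAL system would fire the VBI at 50 Hz

class Display:
    def __init__(self, width, height):
        # `front` is the page being displayed; `back` is the hidden page
        # the program draws into.
        self.front = [[0] * width for _ in range(height)]
        self.back = [[0] * width for _ in range(height)]
        self.frame_count = 0

    def on_vblank(self):
        # Interrupt service routine: runs while the raster is returning to
        # the top of the screen, so swapping pages here cannot cause tearing.
        self.front, self.back = self.back, self.front
        # The counter doubles as a timebase, as on home computers that used
        # the VBI for keyboard scanning and timekeeping.
        self.frame_count += 1

display = Display(4, 3)
display.back[0][0] = 1   # draw into the hidden page
display.on_vblank()      # simulated vertical blank fires
assert display.front[0][0] == 1  # the finished page is now visible
```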
See also
Horizontal blank interrupt
Video game development |
https://en.wikipedia.org/wiki/Datamax%20UV-1 | The Datamax UV-1 is a pioneering computer designed by a group of computer graphics artists working at the University of Illinois at Chicago, known as the Circle Graphics Habitat. It was primarily the brainchild of Tom DeFanti, who was trying to build a machine capable of running his GRASS programming language at a personal computer price point, a project they referred to as the Z-Box. As time went on the project evolved into a machine intended to be used to make high-quality color graphics for output to videotape, and later as a titling system for use by cable television companies. It represents what seems to be the first dedicated graphics workstation.
DeFanti had been working at the Habitat for some time when, in 1977, he was introduced to Jeff Frederiksen, a chip designer working at Dave Nutting Associates. Nutting had been contracted by Midway, the video game division of Bally, to create a standardized graphics driver chip. They intended to use it in most of their future arcade games, as well as a console they were working on which would later turn into the Astrocade.
Midway was not immediately interested in the home computer market, but the Nutting people managed to convince management to get DeFanti to port GRASS3 to the platform under contract. The idea was to build an external box that would be used with the existing console to turn it into a "real" computer, a system known as the ZGRASS-100. A number of people at the Habitat, as well as some from Nutting, worked on the project, adding a keyboard, memory, and additional connectors. A separate chip generated text, which was then mixed with the output of the Nutting display chip for the screen. Also included would be a new version of GRASS3, known as Zgrass.
At about the same time, another version of the same basic parts was built as the UV-1. In this case the machine was built as an all-in-one box, including the small amount of additional hardware needed to support the high-resolution mode of the Nutting chipset, which supplied 320 x 204 resolution with up to 8 colors per line. This mode required 16 KB for the display buffer alone, so the machine included 32 KB RAM and a larger 16 KB ROM with additional Zgrass commands in it. To this basic system the Habitat added high quality video output circuitry and a floppy disk interface.
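The stated 16 KB display buffer is consistent with a 2-bits-per-pixel framebuffer; note the 2-bit depth is our assumption for the arithmetic, not a figure stated in the text.

```python
# Rough check of the 16 KB display-buffer figure for the 320 x 204 mode,
# assuming 2 bits per pixel (an assumption, not stated in the article).
width, height = 320, 204
bits_per_pixel = 2
buffer_bytes = width * height * bits_per_pixel // 8
print(buffer_bytes)            # 16320 bytes, just under 16 KB (16384)
assert buffer_bytes <= 16 * 1024
```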
Bally's intents for the UV-1 are not entirely clear. The November 1980 Byte magazine contains an article by DeFanti (et al.) that seems to suggest that the ZGRASS-100 was already "dead", and that the UV-1 was intended to be used for high-quality video output. Ad copy from the same era suggests that Bally intended to sell the UV-1 as a home computer, competing directly with the Apple II and similar machines. This makes the ZGRASS-100 somewhat unnecessary, so whether or not Bally intended to offer both remains a mystery. Either way, in 1980 Bally decided to exit the industry altogether, dropping both Z-Box projects, and the Astrocade too.
The final version of th |
https://en.wikipedia.org/wiki/Atari%20BASIC | Atari BASIC is an interpreter for the BASIC programming language that shipped with the Atari 8-bit family of 6502-based home computers. Unlike most American BASICs of the home computer era, Atari BASIC is not a derivative of Microsoft BASIC and differs in significant ways. It includes keywords for Atari-specific features and lacks support for string arrays, for example.
The language was distributed as an 8 KB ROM cartridge for use with the 1979 Atari 400 and 800 computers. Starting with the 600XL and 800XL in 1983, BASIC is built into the system. There are three primary versions of the software: the original cartridge-based "A", the built-in "B" for the 600XL/800XL, and the final "C" version in late-model XLs and the XE series.
Despite the Atari 8-bit computers running at a higher speed than most of their contemporaries, several technical decisions placed Atari BASIC near the bottom in performance benchmarks. The original authors addressed most of these issues in a series of improved versions: BASIC A+ (1981), BASIC XL (1983), and BASIC XE (1985). A host of third-party interpreters and compilers, such as Turbo-BASIC XL, also appeared.
The complete, annotated source code and design specifications of Atari BASIC were published as The Atari BASIC Source Book in 1983.
Development
The machines that would become the Atari 8-bit family were originally developed as second-generation video game consoles intended to replace the Atari VCS. Ray Kassar, the new president of Atari, decided to challenge Apple Computer by building a home computer instead.
This meant the designs needed to include the BASIC programming language, the standard for home computers. In early 1978, Atari licensed the source code to the MOS 6502 version of Microsoft BASIC. It was offered in two versions: one using a 32-bit floating point format that was about 7800 bytes when compiled, and another using an extended 40-bit format that was close to 9 KB.
Even the 32-bit version barely fit into the 8 KB size of the machine's ROM cartridge format. Atari also felt that they needed to expand the language to support the hardware features of their computers, similar to what Apple had done with Applesoft BASIC. This increased the size of Atari's version to around 11 KB; AppleSoft BASIC on the Apple II+ was 10,240 bytes long. After six months the code was pared down to almost fit in an 8 KB ROM, but Atari was facing a January 1979 deadline for the Consumer Electronics Show (CES) where the machines would be demonstrated. They decided to ask for help to get a version of BASIC ready in time for the show.
Shepardson Microsystems
In September 1978, Shepardson Microsystems won the bid on completing BASIC. At the time they were finishing Cromemco 16K Structured BASIC for the Z80-based Cromemco S-100 bus machines. Developers Kathleen O'Brien and Paul Laughton used Data General Business Basic, an integer-only implementation, as the inspiration for their BASIC, given Laughton's experience with Data General |
https://en.wikipedia.org/wiki/ABC%20%28programming%20language%29 | ABC is an imperative general-purpose programming language and integrated development environment (IDE) developed at Centrum Wiskunde & Informatica (CWI), Netherlands by Leo Geurts, Lambert Meertens, and Steven Pemberton. It is interactive, structured, high-level, and intended to be used instead of BASIC, Pascal, or AWK. It is intended for teaching or prototyping, but not as a systems-programming language.
ABC had a major influence on the design of the language Python, developed by Guido van Rossum, who worked for several years on the ABC system in the mid-1980s.
Features
Its designers claim that ABC programs are typically around a quarter the size of the equivalent Pascal or C programs, and more readable. Key features include:
Only five basic data types
No required variable declarations
Explicit support for top-down programming
Statement nesting is indicated by indentation, via the off-side rule
Infinite precision arithmetic, unlimited-sized lists and strings, and other features supporting orthogonality and ease of use by novices
ABC was originally a monolithic implementation, leading to an inability to adapt to new requirements, such as creating a graphical user interface (GUI). ABC could not directly access the underlying file system and operating system.
The full ABC system includes a programming environment with a structure editor (syntax-directed editor), suggestions, persistent (static) variables, and multiple workspaces, and is available as an interpreter–compiler. The latest version is 1.05.02, and it has been ported to Unix, DOS, Atari, and Apple macOS.
Example
An example function to collect the set of all words in a document:
HOW TO RETURN words document:
   PUT {} IN collection
   FOR line IN document:
      FOR word IN split line:
         IF word not.in collection:
            INSERT word IN collection
   RETURN collection
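ABC's influence on Python makes the example above translate almost line for line. The following Python version is an illustrative paraphrase of the ABC function, not code from the ABC documentation:

```python
def words(document):
    """Collect the set of all words in a document (a sequence of lines)."""
    collection = set()
    for line in document:
        for word in line.split():    # mirrors ABC's `split line`
            if word not in collection:   # mirrors `IF word not.in collection`
                collection.add(word)     # mirrors `INSERT word IN collection`
    return collection

print(sorted(words(["to be or", "not to be"])))  # ['be', 'not', 'or', 'to']
```

The structural match (indentation-based nesting, high-level set and string types, no variable declarations) illustrates the features listed above.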
References
External links
ABC Programmer's Handbook
Computer science in the Netherlands
Dutch inventions
Educational programming languages
Information technology in the Netherlands
Persistent programming languages
Procedural programming languages
Programming languages created in the 1980s |
https://en.wikipedia.org/wiki/Andrew%20D.%20Gordon | Andrew D. Gordon is a British computer scientist employed by Microsoft Research. His research interests include programming language design, formal methods, concurrency, cryptography, and access control.
Biography
Gordon earned a Ph.D. from the University of Cambridge in 1992. Until 1997 Gordon was a Research Fellow at the University of Cambridge Computer Laboratory. He then joined the Microsoft Research laboratory in Cambridge, England, where he is a principal researcher in the Programming Principles and Tools group. He also holds a professorship at the University of Edinburgh.
Research
Gordon is one of the designers of Concurrent Haskell, a functional programming language with explicit primitives for concurrency. He is the co-designer with Martin Abadi of spi calculus, an extension of the π-calculus for formalized reasoning about cryptographic systems. He and Luca Cardelli invented the ambient calculus for reasoning about mobile code. With Moritz Y. Becker and Cédric Fournet, Gordon also designed SecPAL, a Microsoft specification language for access control policies.
Awards and honours
Gordon's Ph.D. thesis, Functional Programming and Input/Output, won the 1993 Distinguished Dissertation Award of the British Computer Society. His 2000 paper with Luca Cardelli on the ambient calculus, "Anytime, Anywhere: Modal Logics for Mobile Ambients", won the 2010 SIGPLAN Most Influential POPL Paper Award.
References
External links
, Microsoft Research
Year of birth missing (living people)
Living people
British computer scientists
Formal methods people
Members of the University of Cambridge Computer Laboratory
Academics of the University of Edinburgh
Programming language researchers |
https://en.wikipedia.org/wiki/DTP | DTP may refer to:
Computing
Desktop publishing, the creation of documents using page layout skills on a personal computer
Distributed transaction processing, the X/Open model of coordinating transactions between multiple participants
Dynamic Trunking Protocol, a networking protocol from Cisco
Digital Teaching Platform, educational products
Dependently typed programming
Parasoft DTP, development testing platform
Data Transfer Project, an open-source initiative on data portability
Medicine
Drug therapy problems, a categorization of drug problems in pharmaceutical care
DTP vaccine, a triple vaccine used to inoculate against diphtheria, tetanus and pertussis
Developmental Therapeutics Program, of the National Cancer Institute
Distal tingling on percussion, another term for Tinel's sign
Diphtheria toxin
Music
Devin Townsend Project, a rock and metal group
Disturbing tha Peace, a record label
DTP (Sadus album)
Organizations
Demokratik Toplum Partisi (Democratic Society Party), a former pro-Kurdish political party in Turkey
Democrat Turkey Party (Democrat Türkiye Partisi), a minor party in the late 1990s
Doctoral Training Partnerships, British centres for managing PhD studies
dTP Entertainment, a German video game producer
Dick Johnson Racing, formerly DJR Team Penske, Australia's oldest motor racing team competing in the Supercars Championship
Other
Dynamic tidal power, a proposed technology to generate energy from the oceans
Dubai Techno Park, a technology park in Dubai, UAE |
https://en.wikipedia.org/wiki/DOS%20%28disambiguation%29 | DOS is shorthand for the MS-DOS and IBM PC DOS family of operating systems.
DOS may also refer to:
Computing
Data over signalling (DoS), multiplexing data onto a signalling channel
Denial-of-service attack (DoS), an attack on a communications network
Disk operating system
List of disk operating systems, Apple DOS, Atari DOS, DOS/360, etc.
Distributed operating system
Music
Albums
Dos (Altered State album)
Dos (Dos album)
Dos (Fanny Lú album)
Dos (Gerardo album)
Dos (Malo album), 1972
Dos (Myriam Hernández album), 1989
Dos, album by Wooden Shjips, 2009
¡Dos!, album by Green Day
Other uses in music
Dos (band), an American band
DOS (concert), by Filipino singer Daniel Padilla
Organisations
Democratic Opposition of Serbia, a former political alliance
Department of Space, India
Deutscher Olympischer Sportbund
Directorate of Overseas Surveys, UK 1957–1984
Dominus Obsequious Sororium, within cult NXIVM
United States Department of State
Places
Dos, a village in Vidra Commune, Romania
Science
Density of states in physics
DOS-1 etc., Russian space station designation in the Salyut programme
Dioctyl sebacate, an organic chemical
Diversity oriented synthesis in chemistry
Sports
DOS Kampen, a Dutch football club
VV DOS, a past Dutch football club now part of FC Utrecht
Other uses
Dos (card game), a variation of Uno
Dos, an Ancient Roman dowry
Day of Silence, an LGBT observance
See also
American Descendants of Slavery, ADOS or sometimes DOS
Doss (disambiguation) |