| source | text |
|---|---|
https://en.wikipedia.org/wiki/ABC%2080 | The ABC 80 (Advanced BASIC Computer 80) was a personal computer engineered by the Swedish corporation Dataindustrier AB (DIAB) and manufactured by Luxor in Motala, Sweden in the late 1970s and early 1980s. It was introduced on the market in August 1978.
The ABC 80 was based on an earlier modular computer system from the same company and was built around a Zilog Z80 processor, with a fast semi-compiling BASIC interpreter in ROM. It used RAM as main memory and came with a dedicated (included) tape recorder for program and data storage, but could also be expanded to handle disk drives as well as many other peripherals. The ROM could be extended in increments of 1 or 4 KB in order to handle such so-called "options". The monitor was a black-and-white TV set modified for the purpose, an obvious choice since Luxor also made TVs.
The ABC 80 was used in schools and offices around Scandinavia and parts of Europe. It was also used for industrial automation, scientific measurement and control systems. Like its successor, the ABC 800, the computer had an unusually quick and usable BASIC with excellent I/O response times, something that was often discovered when trying to switch to IBM PC-based personal computers. Due to its roots in an industrial computer system, the ABC 80 also had a flexible bus extension system with many (external) expansion and peripheral cards available for various purposes and applications, as well as high quality support and documentation.
ABC 80 was also manufactured on license as BRG ABC80 by Budapesti Rádiótechnikai Gyár in Hungary. It used the same keyboard, but the case was metal instead of plastic.
Popularity
In addition to its widespread use in schools, offices and industrial applications, the ABC 80 initially also captured a majority share of the rising personal computer market in Sweden, partly thanks to its office software in Swedish. The computer was robust and well engineered, mechanically and electrically, and its BASIC was fast enough that it could be used to write arcade games without resorting to assembly language. However, despite such technical virtues, it could not defend the home market against the dedicated gaming computers with color and sound that appeared in the early 1980s, nor against the cheap, ultra-simplistic home computers of the same era, even though a new low-cost version was released that could use an ordinary TV instead of the dedicated monitor.
Luxor (and Facit) held on to its more professional markets for a few more years with the ABC 800 series (also sold as the Facit DTC). It had a more extensive BASIC, more memory, color, and a 512×240 graphics mode. From 1985 DIAB and Luxor also tried to compete against the IBM PC in the industrial and office markets with their high-performance ABC 1600 and ABC 9000 series of computers, based on DIAB's real-time operating system DNIX, but failed.
However, many ABC 80 and ABC 800 machines used in industrial or scientific applications were in use in their respective installations |
https://en.wikipedia.org/wiki/Matra%20Alice | The Matra & Hachette Ordinateur Alice is a home computer sold in France beginning in 1983. It was a clone of the TRS-80 MC-10, produced through a collaboration between Matra and Hachette in France and Tandy Corporation in the United States.
The Alice is distinguished by its bright red casing. Functionally, it is equivalent to the MC-10, with a Péritel (SCART) connector replacing the RF modulator for video output.
The Alice never became a popular computer in its home country. It was promoted to schools as part of the country's Plan Informatique pour Tous ("Information technology for everyone") programme, but Thomson won the whole deal. Fewer than 50 games were released for the system.
The original model had 4 kB of RAM and used a Motorola 6847 video display generator chip, as used in the Dragon 32 and Acorn Atom among others.
At least three emulators for the system exist.
Specifications
The machine is similar to the TRS-80 MC-10, with the following specifications:
CPU: Motorola 6803 at 0.89 MHz
RAM: 4 KiB on-board, expandable to 8 KiB
ROM: 8 KiB (Microsoft BASIC)
I/O Ports:
RS-232C serial interface
Cassette interface
Péritel video output
Expansion interface
AZERTY keyboard layout
Display: Motorola 6847, 32 × 16 or 64 × 32 with 8 colors, 160 × 125 with 4 colors (with expanded RAM)
Sound: 1 channel, 5 octaves
Successor Models
Matra later released several successor models:
The Matra Alice 32, released in 1983, shared the case style of the original but was a different computer inside, built around the EF9345 video chip. The Alice 32 had 8 kilobytes of main RAM, 8 kilobytes of dedicated video RAM, and 16 kilobytes of ROM (the ROM incorporated an assembler). The CPU was clocked at 1 MHz. Higher-resolution graphics modes included 320 × 192 with 16 colors from a 256-color palette.
The Matra Alice 90, released in late 1984, was an upgrade to the Alice 32, featuring 32 kilobytes of RAM and a full-size case and keyboard. Its video cable included video-in, so EF9345 graphics could be overlaid onto the input video.
The Matra Alice 8000, released in 1985, was a more powerful machine with two CPUs, a 6803 at 4.9152 MHz and an Intel 8088. It had 64 KB of RAM.
The EF9345 video chip in the Matra Alice 32/90 was capable of displaying 8 colors, 128 alphanumeric characters, and 128 semi-graphic characters with a semigraphic mode and 40- and 80-column text modes. It could address up to 16 KiB of dedicated VRAM, although the Alice 32 and 90 only included 8 KiB. The 32 × 16 semigraphic mode of the original Alice was simulated in software by the Alice 32/90 system ROM.
External links
Le wiki d'Alice - Everything about Alice
Matra Alice — information from The Machine Room
Matra Alice32 — Alice 32 information complete
DCAlice — Alice 32 emulator and information site
My First Alice32 — Alice 32 emulator
FAQ on Alice32 — comprehensive FAQ regarding Alice
|
https://en.wikipedia.org/wiki/Free-net | A free-net was originally a computer system or network that provided public access to digital resources and community information, including personal communications, through modem dialup via the public switched telephone network. The concept originated in the health sciences to provide online help for medical patients. With the development of the Internet free-net systems became the first to offer limited Internet access to the general public to support the non-profit community work. The Cleveland Free-Net (cleveland.freenet.edu), founded in 1986, was the pioneering community network of this kind in the world.
Any person with a personal computer, or with access from a public terminal in a library, could register for an account on a free-net and was assigned an email address. Other services often included Usenet newsgroups, chat rooms, IRC, telnet, and archives of community information, delivered either with text-based Gopher software or later via the World Wide Web.
The word mark Free-Net was a registered trademark of the National Public Telecomputing Network (NPTN), founded in 1989 by Tom Grundner at Case Western Reserve University. NPTN was a non-profit organization dedicated to establishing and developing free, public-access digital information and communication services for the general public. It ceased operations in 1996, filing for Chapter 7 bankruptcy. However, prior use of the term created some conflicts. NPTN distributed the software package FreePort, developed at Case Western Reserve, which was used under license by many of the free-net sites.
The Internet domain name freenet.org was first registered by the Greater Detroit Free-Net (detroit.freenet.org), a non-profit community system in Detroit, MI, and a member of the NPTN. The Greater Detroit Free-Net provided other subdomains to several free-net systems during its operation from 1993 to approximately 2001.
Unlike commercial Internet service providers, free-nets originally provided direct terminal-based dialup rather than networked connections such as the Point-to-Point Protocol (PPP). The development of Internet access with cheaper and faster connections, and the advent of the World Wide Web, made the original free-net community concept obsolete.
A number of free-nets, including the original Cleveland Free-Net, have shut down or changed their focus. Free-nets have always been locally governed, so interpretation of their mission to remove barriers to access and provide a forum for community information, as well as services offered, can vary widely. As text-based Internet became less popular, some of the original free-nets have made available PPP dialup and more recently DSL services, as a revenue generating mechanism, with some now transitioning into the community wireless movement.
Several free-net systems continue under new mission statements. Rochester Free-Net (Rochester, New York), for instance, focuses on hosting community service organizations (over 500 to date) as we |
https://en.wikipedia.org/wiki/MacPaint | MacPaint is a raster graphics editor developed by Apple Computer and released with the original Macintosh personal computer on January 24, 1984. It was sold separately for US$195 with its word processing counterpart, MacWrite. MacPaint was notable because it could generate graphics that could be used by other applications. It taught consumers what a graphics-based system could do by using the mouse, the clipboard, and QuickDraw picture language. Pictures could be cut from MacPaint and pasted into MacWrite documents.
The original MacPaint was developed by Bill Atkinson, a member of Apple's original Macintosh development team. Early development versions of MacPaint were called MacSketch, still retaining part of the name of its roots, LisaSketch. It was later developed by Claris, the software subsidiary of Apple which was formed in 1987. The last version of MacPaint was version 2.0, released in 1988. It was discontinued by Claris in 1998 because of diminishing sales.
Development
MacPaint was written by Bill Atkinson, a member of Apple's original Macintosh development team. The original MacPaint consisted of 5,804 lines of Pascal computer code, augmented by another 2,738 lines of 68000 assembly language. MacPaint's user interface was designed by Susan Kare, also a member of the Macintosh team. Kare also beta-tested MacPaint before release.
MacPaint allows users to edit a 576-by-720-pixel, 72 dpi bitmap (slightly wider than the screen, and slightly more than twice as tall as the screen). A document window occupies most of the screen real estate, offering a viewport into a portion of the bitmap, with toolbars and pattern palettes around it.
MacPaint uses two offscreen memory buffers to avoid flicker when dragging shapes or images across the screen. One of these buffers contains the existing pixels of a document, and the other contains the pixels of its previous state. The second buffer is used as the basis of the software's undo feature. In April 1983, the software's name was changed from MacSketch to MacPaint. The original MacPaint was programmed as a single-document interface. The palette positions and sizes were unalterable, as was the document window. This differed from other Macintosh software at the time, which allowed users to move and resize windows.
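As a toy illustration (not Apple's actual code), the two-buffer undo scheme described above can be sketched in Python; the class and method names are invented, and a "bitmap" here is just a flat list of integers:

```python
# A sketch of one-level undo backed by a second buffer: one buffer holds
# the document's current pixels, the other the pixels of its previous state.
class OneLevelUndoCanvas:
    def __init__(self, width, height):
        self.pixels = [0] * (width * height)   # current document state
        self.previous = list(self.pixels)      # snapshot of the prior state

    def paint(self, index, value):
        # Snapshot before each change so the last edit can be undone.
        self.previous = list(self.pixels)
        self.pixels[index] = value

    def undo(self):
        # Swapping the buffers undoes the last change; undoing again
        # restores it (an assumed toggle behavior for this sketch).
        self.pixels, self.previous = self.previous, self.pixels

canvas = OneLevelUndoCanvas(4, 4)
canvas.paint(5, 1)
canvas.undo()
print(canvas.pixels[5])   # 0: the edit was rolled back
canvas.undo()
print(canvas.pixels[5])   # 1: undoing the undo restores it
```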
The original MacPaint did not incorporate a zoom function. Instead, a special magnification mode called FatBits was used. FatBits showed each pixel as a clickable rectangle with a white border. The FatBits editing mode set the standard for many future editors. MacPaint included a "Goodies" menu which contained the FatBits tool. This menu had been named the "Aids" menu in prerelease versions, but was renamed "Goodies" as public awareness of the AIDS epidemic grew in the summer of 1983.
Release and version history
MacPaint was first advertised in an 18-page brochure in December 1983, following the earlier announcement of the Macintosh 128K. The Macintosh was released on January 24, |
https://en.wikipedia.org/wiki/UNIVAC%201101 | The ERA 1101, later renamed UNIVAC 1101, was a computer system designed and built by Engineering Research Associates (ERA) in the early 1950s and continued to be sold by the Remington Rand corporation after that company later purchased ERA. Its (initial) military model, the ERA Atlas, was the first stored-program computer that was moved from its site of manufacture and successfully installed at a distant site. Remington Rand used the 1101's architecture as the basis for a series of machines into the 1960s.
History
Codebreaking
ERA was formed from a group of code-breakers working for the United States Navy during World War II. The team had built a number of code-breaking machines, similar to the more famous Colossus computer in England, but designed to attack Japanese codes. After the war the Navy was interested in keeping the team together even though its members had to formally leave Navy service. The result was ERA, which formed in St. Paul, Minnesota, in the hangars of a former Chase Aircraft shadow factory.
After the war, the team continued to build codebreaking machines, targeted at specific codes. After one of these codes changed, making an expensive computer obsolete, the team convinced the Navy that the only way to make a system that would remain useful was to build a fully programmable computer. The Navy agreed, and in 1947 they funded development of a new system under "Task 13".
The resulting machines, known as "Atlas", used drum memory for main memory and featured a simple central processing unit built for integer math. The first Atlas machine was built, moved, and installed at the Army Security Agency by December 1950. A faster version using Williams tubes and drums was delivered to the NSA in 1953.
Commercialization
The company then turned to the task of selling the systems commercially. Atlas was named after a character in the popular comic strip Barnaby, and the company initially decided to name the commercial versions "Mabel". Jack Hill suggested "1101" instead; 1101 is the binary representation of the number 13. The ERA 1101 was publicly announced in December 1951. Atlas II, slightly modified, became the ERA 1103, while a more heavily modified version with core memory and floating-point math support became the UNIVAC 1103A.
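The binary claim behind the name is easy to verify, for example in Python:

```python
# 13 in binary is 1101, the digits behind the ERA 1101's name ("Task 13").
print(bin(13))         # 0b1101
print(int("1101", 2))  # 13
```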
At about this time the company became embroiled in a lengthy series of political maneuverings in Washington, D.C. Drew Pearson's Washington Merry-Go-Round claimed that the founding of ERA was a conflict of interest for William Norris and Howard Engstrom because they had used their war-time government connections to set up a company for their own profit. The resulting legal fight left the company drained, both financially and emotionally. In 1952 they were purchased by Remington Rand, largely as a result of these problems.
Remington Rand had recently purchased Eckert–Mauchly Computer Corporation, builders of the famed UNIVAC I, the first commercial computer in the US. Although ERA and UNIVAC were run separately within the compa |
https://en.wikipedia.org/wiki/Chart | A chart (sometimes known as a graph) is a graphical representation for data visualization, in which "the data is represented by symbols, such as bars in a bar chart, lines in a line chart, or slices in a pie chart". A chart can represent tabular numeric data, functions, or some kinds of qualitative structure, and conveys different kinds of information.
The term "chart" as a graphical representation of data has multiple meanings:
A data chart is a type of diagram or graph that organizes and represents a set of numerical or qualitative data.
Maps that are adorned with extra information (map surround) for a specific purpose are often known as charts, such as a nautical chart or aeronautical chart, typically spread over several map sheets.
Other domain-specific constructs are sometimes called charts, such as the chord chart in music notation or a record chart for album popularity.
Charts are often used to ease understanding of large quantities of data and the relationships between parts of the data. Charts can usually be read more quickly than the raw data. They are used in a wide variety of fields, and can be created by hand (often on graph paper) or by computer using a charting application. Certain types of charts are more useful for presenting a given data set than others. For example, data that presents percentages in different groups (such as "satisfied, not satisfied, unsure") is often displayed in a pie chart, but may be more easily understood when presented in a horizontal bar chart. On the other hand, data that represents numbers that change over a period of time (such as "annual revenue from 1990 to 2000") might be best shown as a line chart.
Features
A chart can take a large variety of forms. However, there are common features that provide the chart with its ability to extract meaning from data.
Typically the data in a chart is represented graphically since humans can infer meaning from pictures more quickly than from text. Thus, the text is generally used only to annotate the data.
One of the most important uses of text in a graph is the title. A graph's title usually appears above the main graphic and provides a succinct description of what the data in the graph refers to.
Dimensions in the data are often displayed on axes. If a horizontal and a vertical axis are used, they are usually referred to as the x-axis and y-axis. Each axis will have a scale, denoted by periodic graduations and usually accompanied by numerical or categorical indications. Each axis will typically also have a label displayed outside or beside it, briefly describing the dimension represented. If the scale is numerical, the label will often be suffixed with the unit of that scale in parentheses. For example, "Distance traveled (m)" is a typical x-axis label and would mean that the distance traveled, in units of meters, is related to the horizontal position of the data within the chart.
Within the graph, a grid of lines may appear to aid in the visual alignment of data. The g |
https://en.wikipedia.org/wiki/IBM%20Series/1 | The IBM Series/1 is a 16-bit minicomputer, introduced in 1976, that in many respects competed with other minicomputers of the time, such as the PDP-11 from Digital Equipment Corporation and similar offerings from Data General and HP. The Series/1 was typically used to control and operate external electro-mechanical components while also allowing for primitive data storage and handling.
Although the Series/1 uses EBCDIC character encoding internally and for locally attached terminals, ASCII-based remote terminals and devices could be attached via an I/O card with an RS-232 interface, making it more compatible with competing minicomputers. IBM's own 3101 and 3151 ASCII display terminals are examples of this. This was a departure from IBM mainframes, which used 3270 terminals with coaxial attachment.
Series/1 computers were withdrawn from marketing in 1988 at or near the introduction of the IBM AS/400 line.
A US government asset report dated May 2016 revealed that an IBM Series/1 was still being used as part of the country's nuclear command and control systems.
Models
Initially, model 1 (4952, Model C), model 3 (IBM 4953) and model 5 (IBM 4955, Model F) processors were provided. Later processors were the model 4 (IBM 4954) and model 6 (IBM 4956). Don Estridge had been the lead manager on the IBM Series/1 minicomputer. He reportedly had fallen out of grace when that project was ill-received.
Software support
The Series/1 could be ordered with or without an operating system. Two mutually exclusive operating systems were available: the Event Driven Executive (EDX) and the Realtime Programming System (RPS). Systems using EDX were primarily programmed in the Event Driven Language (EDL), though high-level languages such as FORTRAN IV, PL/I, Pascal and COBOL were also available. EDL delivered output in IBM machine code for the System/3 or System/7, which the Series/1 executed by an emulator. Although the Series/1 is underpowered by today's standards, a robust multi-user operating environment (RPS) was available, along with several additional high-level languages for the RPS OS. The EDX operating system was originally ported from the System/7. The Series/1 was also the first computer on which IBM supported Unix.
Systems without an operating system were intended for users needing dedicated applications that did not require the full capabilities of either OS. Applications were built using a set of standalone programs, called the Base Program Preparation Facilities, consisting of a macro assembler, a link editor and some basic utilities. A set of modules, called Control Program Support (CPS), was linked with the application to provide task management, data processing input/output support and initial program loading for both disks and diskettes.
Applications of the Series/1
The Series/1 was also widely used in manufacturing environments, including General Motors assembly plants.
Example systems and applications included Manufacturing Information Database (MIDB), Vehicle |
https://en.wikipedia.org/wiki/Computer%20magazine | Computer magazines are magazines about computers and related subjects, such as networking and the Internet. Most computer magazines offer (or offered) advice; some also offer programming tutorials, reviews of the latest technologies, and advertisements.
History
1940s–1950s
Mathematics of Computation, a scientific journal established in 1943; articles about computers appeared from 1946 (Volume 2, Number 15) to the end of 1954.
Digital Computer Newsletter, (1949–1968), founded by Albert Eugene Smith.
Computers and People (1951–1988) was arguably the first computer magazine. It began as Roster of Organizations in the Field of Automatic Computing Machinery (1951–1952), and then The Computing Machinery Field (1952–1953). It was published by Edmund Berkeley. As Computers and Automation, it held the first Computer Art Contest in 1963 and maintained a bibliography on computer art starting in 1966. It also included a monthly estimated census of all installed computer systems starting in 1962. In 1973 the name changed to Computers and Automation and People, and finally in 1975 to Computers and People.
AFIPS conference proceedings (AFIPS Joint Computer Conferences) (1952-1987).
ACM National Conference proceedings (Proceedings of National Meetings) (1952, 1956–1987, 1997).
IEEE Transactions on Computers from 1952, scientific journal.
Computing News (1953–1963), was an early computer magazine produced by Jackson W. Granholm out of Thousand Oaks, California. The first documented copyright was applied for on September 1, 1954, for issue #36. The magazine was released on the 1st and 15th of each month, which places issue #1 at March 15, 1953. The last documented release was issue #217 on March 15, 1962.
Journal of the ACM from 1954, scientific journal.
Datamation from 1957, was another early computer and data processing magazine. It is still being published as an e-publication on the Internet. Futurist Donald Prell was its founder.
Information and Computation from 1957, scientific journal.
IBM Journal of Research and Development from 1957, scientific journal.
Communications of the ACM from 1958, mix of science magazine, trade magazine, and a scientific journal
The Computer Journal from 1958, scientific journal.
1960s–1970s
ACS Newsletter (1966–1976), Amateur Computer Society newsletter.
Computerworld (1967)
People's Computer Company Newsletter (1972–1981)
Amateur Computer Club Newsletter (ACCN; 1973–)
Dr. Dobb's Journal (1976–2014) was the first microcomputer magazine to focus on software, rather than hardware.
1980s
1980s computer magazines skewed their content towards the hobbyist end of the then-microcomputer market, and used to contain type-in programs, but these have gone out of fashion. The first magazine devoted to this class of computers was Creative Computing. Byte was an influential technical journal that published until the 1990s.
In 1983 an average of one new computer magazine appeared each week. By late that year more than 200 existed. Their numbe |
https://en.wikipedia.org/wiki/FN | FN may refer to:
Arts, entertainment, and media
Faking News, Indian news satire website
Financial News, UK financial newspaper and news website
Finding Nemo, a 2003 computer-animated adventure comedy film by Disney and Pixar
Fortnite, a game released in 2017 by Epic Games
Future Nostalgia, a 2020 album by Dua Lipa
"F.N" (song), a 2019 song by Lil Tjay
Businesses and brands
FN Herstal or Fabrique Nationale de Herstal, a Belgian arms factory
FN (automobile), cars produced by FN Herstal
FN (motorcycle), motorcycles produced by FN Herstal
Royal Air Maroc Express (IATA airline designator FN)
Organizations
Front National (France), a French political party
Front National (French Resistance), a World War II French Resistance group
Front National (Belgium), a Belgian political party
Fuerza Nueva, the name of a succession of former political parties in Spain
Forza Nuova, an Italian political party
Other uses
First Nations in Canada, the predominant indigenous peoples in Canada, south of the Arctic Circle
Fn key, a key found on some compact keyboards
Fibronectin, a glycoprotein involved in cell adhesion and growth
Function (computer science)
Fireman, an enlisted rate in the United States Navy and United States Coast Guard
Footnote, in texts |
https://en.wikipedia.org/wiki/Semaphore%20%28programming%29 | In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a type of synchronization primitive. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions.
A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available.
Semaphores are a useful tool in the prevention of race conditions; however, their use is not a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.
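The binary case can be sketched with Python's standard threading module; the shared counter and all names here are illustrative, not from any particular system:

```python
import threading

# A binary semaphore (initial value 1) used as a lock, protecting a
# shared counter against lost updates from concurrent increments.
sem = threading.Semaphore(1)
counter = 0

def increment(n):
    global counter
    for _ in range(n):
        sem.acquire()      # wait/P: enter the critical section
        counter += 1       # exactly one thread updates at a time
        sem.release()      # signal/V: let another thread proceed

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: no updates were lost
```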
The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or 1963, when Dijkstra and his team were developing an operating system for the Electrologica X8. That system eventually became known as THE multiprogramming system.
Library analogy
Suppose a physical library has 10 identical study rooms, to be used by one student at a time. Students must request a room from the front desk if they wish to use a study room. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that one room has become free.
In the simplest implementation, the clerk at the front desk knows only the number of free rooms, a count that is accurate only if every student actually uses a room while signed up for it and returns it when done. When a student requests a room, the clerk decreases this number. When a student releases a room, the clerk increases this number. A room can be used for as long as desired, so rooms cannot be booked ahead of time.
In this scenario the front desk count-holder represents a counting semaphore, the rooms are the resource, and the students represent processes/threads. The value of the semaphore in this scenario is initially 10, with all rooms empty. When a student requests a room, they are granted access, and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7 and so on. If someone requests a room and the current value of the semaphore is 0, they are forced to wait until a room is freed (when the count is increased from 0). If one of the rooms was released, but there are several students waiting, then any method can be used to select the one who |
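The analogy above maps directly onto a counting semaphore. Here is a minimal Python sketch, with the room count, student IDs, and function names invented for illustration:

```python
import threading

rooms = threading.Semaphore(10)   # the front-desk count: 10 free rooms
served = []                       # which students got to study

def study(student_id):
    rooms.acquire()               # request a room; wait if none are free
    try:
        served.append(student_id) # "use" the room
    finally:
        rooms.release()           # return the room to the front desk

students = [threading.Thread(target=study, args=(i,)) for i in range(25)]
for s in students:
    s.start()
for s in students:
    s.join()
print(len(served))  # 25: every student eventually got a room
```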
https://en.wikipedia.org/wiki/Builder%20pattern | The builder pattern is a design pattern that provides a flexible solution to various object-creation problems in object-oriented programming. The intent of the builder design pattern is to separate the construction of a complex object from its representation. It is one of the Gang of Four design patterns.
Overview
The Builder design pattern is one of the twenty-three well-known Gang of Four design patterns that describe how to solve recurring design problems in object-oriented software.
The Builder design pattern solves problems like:
How can a class (the same construction process) create different representations of a complex object?
How can a class that includes creating a complex object be simplified?
Creating and assembling the parts of a complex object directly within a class is inflexible. It commits the class to creating a particular representation of the complex object and makes it impossible to change the representation later independently from (without having to change) the class.
The Builder design pattern describes how to solve such problems:
Encapsulate creating and assembling the parts of a complex object in a separate Builder object.
A class delegates object creation to a Builder object instead of creating the objects directly.
A class (the same construction process) can delegate to different Builder objects to create different representations of a complex object.
Definition
The intent of the Builder design pattern is to separate the construction of a complex object from its representation. By doing so, the same construction process can create different representations.
Advantages
Advantages of the Builder pattern include:
Allows you to vary a product's internal representation.
Encapsulates code for construction and representation.
Provides control over steps of construction process.
Disadvantages
Disadvantages of the Builder pattern include:
A distinct ConcreteBuilder must be created for each type of product.
Builder classes must be mutable.
May hamper/complicate dependency injection.
Structure
UML class and sequence diagram
In the above UML class diagram,
the Director class doesn't create and assemble the ProductA1 and ProductB1 objects directly.
Instead, the Director refers to the Builder interface for building (creating and assembling) the parts of a complex object,
which makes the Director independent of which concrete classes are instantiated (which representation is created).
The Builder1 class implements the Builder interface by creating and assembling the ProductA1 and ProductB1 objects.
The UML sequence diagram shows the run-time interactions:
The Director object calls buildPartA() on the Builder1 object, which creates and assembles the ProductA1 object.
Thereafter,
the Director calls buildPartB() on Builder1, which creates and assembles the ProductB1 object.
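These interactions can be sketched in Python using the diagram's names (Director, Builder, Builder1), with an invented parts list standing in for the assembly of the ProductA1 and ProductB1 objects:

```python
from abc import ABC, abstractmethod

class Builder(ABC):
    """Abstract interface the Director depends on."""
    @abstractmethod
    def build_part_a(self): ...
    @abstractmethod
    def build_part_b(self): ...

class Builder1(Builder):
    def __init__(self):
        self.parts = []                  # illustrative stand-in for products

    def build_part_a(self):
        self.parts.append("ProductA1")   # create and assemble part A

    def build_part_b(self):
        self.parts.append("ProductB1")   # create and assemble part B

class Director:
    def __init__(self, builder: Builder):
        self.builder = builder           # depends only on the interface

    def construct(self):
        # The same construction process, whichever concrete builder is used.
        self.builder.build_part_a()
        self.builder.build_part_b()

builder = Builder1()
Director(builder).construct()
print(builder.parts)  # ['ProductA1', 'ProductB1']
```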
Class diagram
Builder
Abstract interface for creating objects (product).
ConcreteBuilder
Provides implementation for Builder. It is an object able to construct ot |
https://en.wikipedia.org/wiki/Factory%20method%20pattern | In class-based programming, the factory method pattern is a creational pattern that uses factory methods to deal with the problem of creating objects without having to specify the exact class of the object that will be created. This is done by creating objects by calling a factory method—either specified in an interface and implemented by child classes, or implemented in a base class and optionally overridden by derived classes—rather than by calling a constructor.
Overview
The Factory Method
design pattern is one of the twenty-three well-known design patterns that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
The Factory Method design pattern solves problems like:
How can an object be created so that subclasses can redefine which class to instantiate?
How can a class defer instantiation to subclasses?
The Factory Method design pattern describes how to solve such problems:
Define a separate operation (factory method) for creating an object.
Create an object by calling a factory method.
This enables writing subclasses that change the way an object is created (that redefine which class to instantiate).
See also the UML class diagram below.
Definition
"Define an interface for creating an object, but let subclasses decide which class to instantiate. The Factory method lets a class defer instantiation it uses to subclasses." (Gang Of Four)
Creating an object often requires complex processes not appropriate to include within a composing object. The object's creation may lead to a significant duplication of code, may require information not accessible to the composing object, may not provide a sufficient level of abstraction, or may otherwise not be part of the composing object's concerns. The factory method design pattern handles these problems by defining a separate method for creating the objects, which subclasses can then override to specify the derived type of product that will be created.
The factory method pattern relies on inheritance, as object creation is delegated to subclasses that implement the factory method to create objects.
As shown in the C# example below, the factory method pattern can also rely on an interface (in this case, IPerson) to be implemented.
Structure
UML class diagram
In the above UML class diagram,
the Creator class that requires a Product object does not instantiate the Product1 class directly.
Instead, the Creator refers to a separate factoryMethod() to create a product object,
which makes the Creator independent of which concrete class is instantiated.
Subclasses of Creator can redefine which class to instantiate. In this example, the Creator1 subclass implements the abstract factoryMethod() by instantiating the Product1 class.
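A minimal Java sketch of this structure (class names follow the diagram; the describe() helper is a hypothetical client of the factory method):

```java
// Product hierarchy known to the Creator only through this interface.
interface Product {
    String name();
}

class Product1 implements Product {
    public String name() { return "Product1"; }
}

// Creator defers instantiation to subclasses via the factory method.
abstract class Creator {
    // The factory method: subclasses decide which class to instantiate.
    abstract Product factoryMethod();

    // Code in Creator works against the Product interface only.
    String describe() {
        return "Created " + factoryMethod().name();
    }
}

// Concrete subclass redefines which class is instantiated.
class Creator1 extends Creator {
    Product factoryMethod() { return new Product1(); }
}
```

A new product type only requires a new Creator subclass; describe() and any other Creator logic remain unchanged.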
Examples
This C++14 implementation is based on the pre-C++98 implementation in the book.
#include <iostream>
#include <memory>
enum ProductId |
https://en.wikipedia.org/wiki/Composite%20pattern | In software engineering, the composite pattern is a partitioning design pattern. The composite pattern describes a group of objects that are treated the same way as a single instance of the same type of object. The intent of a composite is to "compose" objects into tree structures to represent part-whole hierarchies. Implementing the composite pattern lets clients treat individual objects and compositions uniformly.
Overview
The Composite
design pattern is one of the twenty-three well-known
GoF design patterns
that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Composite design pattern solve?
A part-whole hierarchy should be represented so that clients can treat part and whole objects uniformly.
A part-whole hierarchy should be represented as a tree structure.
When defining (1) Part objects and (2) Whole objects that act as containers for Part objects, clients must treat them separately, which complicates client code.
What solution does the Composite design pattern describe?
Define a unified Component interface for both part (Leaf) objects and whole (Composite) objects.
Individual Leaf objects implement the Component interface directly, and Composite objects forward requests to their child components.
This enables clients to work through the Component interface to treat Leaf and Composite objects uniformly:
Leaf objects perform a request directly,
and Composite objects
forward the request to their child components recursively, down the tree structure.
This makes client classes easier to implement, change, test, and reuse.
See also the UML class and object diagram below.
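A minimal Java sketch of this solution (the count() operation is a hypothetical example of a request that Composite objects forward recursively):

```java
import java.util.ArrayList;
import java.util.List;

// Unified Component interface for both Leaf and Composite objects.
interface Component {
    int count(); // an example request treated uniformly by clients
}

// Leaf objects perform the request directly.
class Leaf implements Component {
    public int count() { return 1; }
}

// Composite objects forward the request to their children recursively.
class Composite implements Component {
    private final List<Component> children = new ArrayList<>();
    void add(Component child) { children.add(child); }
    public int count() {
        int total = 0;
        for (Component child : children) {
            total += child.count();
        }
        return total;
    }
}
```

A client holding a Component reference never needs to know whether it points at a single Leaf or at an entire subtree.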
Motivation
When dealing with tree-structured data, programmers often have to discriminate between a leaf node and a branch. This makes code more complex and, therefore, more error-prone. The solution is an interface that allows treating complex and primitive objects uniformly. In object-oriented programming, a composite is an object designed as a composition of one or more similar objects, all exhibiting similar functionality. This is known as a "has-a" relationship between objects. The key concept is that you can manipulate a single instance of the object just as you would manipulate a group of them. The operations you can perform on all the composite objects often have a least common denominator relationship. For example, if defining a system to portray grouped shapes on a screen, it would be useful to define resizing a group of shapes to have the same effect (in some sense) as resizing a single shape.
When to use
Composite should be used when clients ignore the difference between compositions of objects and individual objects. If programmers find that they are using multiple objects in the same way, and often have nearly identical code to handle each of them, then composite is a good choice; it is less c |
https://en.wikipedia.org/wiki/Decorator%20pattern | In object-oriented programming, the decorator pattern is a design pattern that allows behavior to be added to an individual object, dynamically, without affecting the behavior of other objects from the same class. The decorator pattern is often useful for adhering to the Single Responsibility Principle, as it allows functionality to be divided between classes with unique areas of concern as well as to the Open-Closed Principle, by allowing the functionality of a class to be extended without being modified. Decorator use can be more efficient than subclassing, because an object's behavior can be augmented without defining an entirely new object.
Overview
The decorator design pattern is one of the twenty-three well-known design patterns; these describe how to solve recurring design problems and design flexible and reusable object-oriented software—that is, objects which are easier to implement, change, test, and reuse.
What problems can it solve?
Responsibilities should be added to (and removed from) an object dynamically at run-time.
A flexible alternative to subclassing for extending functionality should be provided.
When using subclassing, different subclasses extend a class in different ways. But an extension is bound to the class at compile-time and can't be changed at run-time.
What solution does it describe?
Define Decorator objects that
implement the interface of the extended (decorated) object (Component) transparently by forwarding all requests to it
perform additional functionality before/after forwarding a request.
This allows working with different Decorator objects to extend the functionality of an object dynamically at run-time.
See also the UML class and sequence diagram below.
Intent
The decorator pattern can be used to extend (decorate) the functionality of a certain object statically, or in some cases at run-time, independently of other instances of the same class, provided some groundwork is done at design time. This is achieved by designing a new Decorator class that wraps the original class. This wrapping could be achieved by the following sequence of steps:
Subclass the original Component class into a Decorator class (see UML diagram);
In the Decorator class, add a Component pointer as a field;
In the Decorator class, pass a Component to the Decorator constructor to initialize the Component pointer;
In the Decorator class, forward all Component methods to the Component pointer; and
In the ConcreteDecorator class, override any Component method(s) whose behavior needs to be modified.
This pattern is designed so that multiple decorators can be stacked on top of each other, each time adding a new functionality to the overridden method(s).
Note that decorators and the original class object share a common set of features. In the previous diagram, the operation() method was available in both the decorated and undecorated versions.
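The wrapping steps above can be sketched in Java (a minimal illustration; operation() returns strings only to make the stacking of decorators visible):

```java
// Common interface shared by decorated and undecorated objects.
interface Component {
    String operation();
}

class ConcreteComponent implements Component {
    public String operation() { return "base"; }
}

// Decorator subclasses Component, holds a Component pointer,
// and forwards all Component methods to it.
abstract class Decorator implements Component {
    protected final Component component;
    Decorator(Component component) { this.component = component; }
    public String operation() { return component.operation(); }
}

// ConcreteDecorator overrides the method whose behavior is modified,
// adding functionality before/after forwarding.
class ConcreteDecorator extends Decorator {
    ConcreteDecorator(Component component) { super(component); }
    public String operation() { return "decorated(" + super.operation() + ")"; }
}
```

Because each decorator is itself a Component, decorators can be stacked on top of each other, each wrapping the previous one.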
The decoration features (e.g., methods, properties, or other members) are usual |
https://en.wikipedia.org/wiki/Proxy%20pattern | In computer programming, the proxy pattern is a software design pattern. A proxy, in its most general form, is a class functioning as an interface to something else. The proxy could interface to anything: a network connection, a large object in memory, a file, or some other resource that is expensive or impossible to duplicate. In short, a proxy is a wrapper or agent object that is being called by the client to access the real serving object behind the scenes. Use of the proxy can simply be forwarding to the real object, or can provide additional logic. In the proxy, extra functionality can be provided, for example caching when operations on the real object are resource intensive, or checking preconditions before operations on the real object are invoked. For the client, usage of a proxy object is similar to using the real object, because both implement the same interface.
Overview
The Proxy
design pattern is one of the twenty-three well-known
GoF design patterns
that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Proxy design pattern solve?
The access to an object should be controlled.
Additional functionality should be provided when accessing an object.
When accessing sensitive objects, for example, it should be possible to check that clients have the needed access rights.
What solution does the Proxy design pattern describe?
Define a separate Proxy object that
can be used as substitute for another object (Subject) and
implements additional functionality to control the access to this subject.
This makes it possible to work through a Proxy object to perform additional functionality when accessing a subject. For example, to check the access rights of clients accessing a sensitive object.
To act as substitute for a subject, a proxy must implement the Subject interface.
Clients can't tell whether they work with a subject or its proxy.
See also the UML class and sequence diagram below.
Structure
UML class and sequence diagram
In the above UML class diagram,
the Proxy class implements the Subject interface so that it can act as substitute for Subject objects. It maintains a reference (realSubject)
to the substituted object (RealSubject) so that it can forward requests to it
(realSubject.operation()).
The sequence diagram
shows the run-time interactions: The Client object
works through a Proxy object that
controls the access to a RealSubject object.
In this example, the Proxy forwards the request to the RealSubject, which performs the request.
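A minimal Java sketch of this interaction (here the Proxy also creates the RealSubject lazily, a hypothetical virtual-proxy variant; other names follow the diagram):

```java
// Subject interface implemented by both the real object and its proxy.
interface Subject {
    String request();
}

class RealSubject implements Subject {
    public String request() { return "real result"; }
}

// Proxy implements Subject so clients can't tell it apart from the
// real object; it controls access to the RealSubject.
class Proxy implements Subject {
    private RealSubject realSubject; // created on first use (lazily)

    public String request() {
        if (realSubject == null) {
            realSubject = new RealSubject();
        }
        // Additional logic (access checks, caching, logging, ...)
        // could be performed here before forwarding.
        return realSubject.request();
    }
}
```

A client programmed against Subject works identically whether it is handed a RealSubject or a Proxy.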
Class diagram
Possible usage scenarios
Remote proxy
In distributed object communication, a local object represents a remote object (one that belongs to a different address space). The local object is a proxy for the remote object, and method invocation on the local object results in remote method invocation on the remote object. |
https://en.wikipedia.org/wiki/Command%20pattern | In object-oriented programming, the command pattern is a behavioral design pattern in which an object is used to encapsulate all information needed to perform an action or trigger an event at a later time. This information includes the method name, the object that owns the method and values for the method parameters.
Four terms always associated with the command pattern are command, receiver, invoker and client. A command object knows about the receiver and invokes a method of the receiver. Values for parameters of the receiver method are stored in the command, and the receiver object that executes these methods is also stored in the command object by aggregation. The receiver then does the work when the execute() method of the command is called. An invoker object knows how to execute a command, and optionally does bookkeeping about the command execution. The invoker does not know anything about a concrete command; it knows only about the command interface. Invoker object(s), command objects and receiver objects are held by a client object: the client decides which receiver objects it assigns to the command objects, and which commands it assigns to the invoker. The client also decides which commands to execute at which points. To execute a command, it passes the command object to the invoker object.
Using command objects makes it easier to construct general components that need to delegate, sequence or execute method calls at a time of their choosing without the need to know the class of the method or the method parameters. Using an invoker object allows bookkeeping about command executions to be conveniently performed, as well as implementing different modes for commands, which are managed by the invoker object, without the need for the client to be aware of the existence of bookkeeping or modes.
The central ideas of this design pattern closely mirror the semantics of first-class functions and higher-order functions in functional programming languages. Specifically, the invoker object is a higher-order function of which the command object is a first-class argument.
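A minimal Java sketch of these four roles (the Light receiver and SwitchOnCommand are hypothetical examples):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Receiver: does the actual work.
class Light {
    boolean on = false;
    void switchOn() { on = true; }
}

// Command interface: all the invoker knows about.
interface Command {
    void execute();
}

// Concrete command stores its receiver (by aggregation) and any
// parameter values the receiver method needs.
class SwitchOnCommand implements Command {
    private final Light light;
    SwitchOnCommand(Light light) { this.light = light; }
    public void execute() { light.switchOn(); }
}

// Invoker: executes commands and does bookkeeping (a history here),
// without knowing any concrete command class.
class Invoker {
    private final Deque<Command> history = new ArrayDeque<>();
    void run(Command command) {
        command.execute();
        history.push(command);
    }
    int executed() { return history.size(); }
}
```

The client wires these together: it constructs the receiver, binds it into a command, and hands the command to the invoker at the moment it chooses.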
Overview
The command
design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
Using the command design pattern can solve these problems:
Coupling the invoker of a request to a particular request should be avoided. That is, hard-wired requests should be avoided.
It should be possible to configure an object (that invokes a request) with a request.
Implementing (hard-wiring) a request directly into a class is inflexible because it couples the class to a particular request at compile-time, which makes it impossible to specify a request at run-time.
Using the command design pattern describes the following solution:
Define separate (command) objects that encapsulate a request.
A class delegates a requ |
https://en.wikipedia.org/wiki/Iterator%20pattern | In object-oriented programming, the iterator pattern is a design pattern in which an iterator is used to traverse a container and access the container's elements. The iterator pattern decouples algorithms from containers; in some cases, algorithms are necessarily container-specific and thus cannot be decoupled.
For example, the hypothetical algorithm SearchForElement can be implemented generally using a specified type of iterator rather than implementing it as a container-specific algorithm. This allows SearchForElement to be used on any container that supports the required type of iterator.
Overview
The Iterator
design pattern is one of the twenty-three well-known
GoF design patterns
that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Iterator design pattern solve?
The elements of an aggregate object should be accessed and traversed without exposing its representation (data structures).
New traversal operations should be defined for an aggregate object without changing its interface.
Defining access and traversal operations in the aggregate interface is inflexible because it commits the aggregate to particular access and traversal operations and makes it impossible to add new operations
later without having to change the aggregate interface.
What solution does the Iterator design pattern describe?
Define a separate (iterator) object that encapsulates accessing and traversing an aggregate object.
Clients use an iterator to access and traverse an aggregate without knowing its representation (data structures).
Different iterators can be used to access and traverse an aggregate in different ways.
New access and traversal operations can be defined independently by defining new iterators.
See also the UML class and sequence diagram below.
Definition
The essence of the Iterator Pattern is to "Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation".
Structure
UML class and sequence diagram
In the above UML class diagram, the Client class refers (1) to the Aggregate interface for creating an Iterator object (createIterator()) and (2) to the Iterator interface for traversing an Aggregate object (next(), hasNext()).
The Iterator1 class implements the Iterator interface by accessing the Aggregate1 class.
The UML sequence diagram
shows the run-time interactions: The Client object calls createIterator() on an Aggregate1 object, which creates an Iterator1 object and returns it
to the Client.
The Client then uses Iterator1 to traverse the elements of the Aggregate1 object.
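A minimal Java sketch of this interaction (the Iterator interface here is hand-rolled for illustration rather than java.util.Iterator; Aggregate1's array representation is a hypothetical choice hidden from clients):

```java
// Iterator interface: decouples traversal from the aggregate's representation.
interface Iterator<T> {
    boolean hasNext();
    T next();
}

// Aggregate interface: lets clients create an iterator without
// knowing the underlying data structure.
interface Aggregate<T> {
    Iterator<T> createIterator();
}

class Aggregate1 implements Aggregate<String> {
    private final String[] items; // representation hidden from clients
    Aggregate1(String... items) { this.items = items; }

    public Iterator<String> createIterator() {
        return new Iterator<String>() {
            private int index = 0;
            public boolean hasNext() { return index < items.length; }
            public String next() { return items[index++]; }
        };
    }
}
```

New traversal orders (reverse, filtered, and so on) can be added by defining new iterator implementations, with no change to the Aggregate interface.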
UML class diagram
Example
Some languages standardize iterator syntax. C++ and Python are notable examples.
C++
C++ implements iterators with the semantics of pointers in that language. In C++, a class can overload all of the pointer operations, so an iterator |
https://en.wikipedia.org/wiki/Interpreter%20pattern | In computer programming, the interpreter pattern is a design pattern that specifies how to evaluate sentences in a language.
The basic idea is to have a class for each symbol (terminal or nonterminal) in a specialized computer language. The syntax tree of a sentence in the language is an instance of the composite pattern and is used to evaluate (interpret) the sentence for a client. See also Composite pattern.
Overview
The Interpreter
design pattern is one of the twenty-three well-known
GoF design patterns
that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Interpreter design pattern solve?
A grammar for a simple language should be defined
so that sentences in the language can be interpreted.
When a problem occurs very often, it can be worth representing it as a sentence in a simple language
(a domain-specific language) so that an interpreter can solve the problem
by interpreting the sentence.
For example, when many different or complex search expressions must be specified.
Implementing (hard-wiring) them directly into a class is inflexible
because it commits the class to particular expressions and makes it impossible to specify new expressions or change existing ones independently from (without having to change) the class.
What solution does the Interpreter design pattern describe?
Define a grammar for a simple language by defining an Expression class hierarchy and implementing an interpret() operation.
Represent a sentence in the language by an abstract syntax tree (AST) made up of Expression instances.
Interpret a sentence by calling interpret() on the AST.
The expression objects are composed recursively into a composite/tree structure that is called an
abstract syntax tree (see Composite pattern).
The Interpreter pattern doesn't describe how
to build an abstract syntax tree. This can
be done either manually by a client or automatically by a parser.
See also the UML class and object diagram below.
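A minimal Java sketch (a hypothetical grammar of boolean constants as terminal expressions, with And/Or as non-terminal expressions forwarding interpret() to their children):

```java
// AbstractExpression: every grammar symbol gets a class with interpret().
interface Expression {
    boolean interpret();
}

// TerminalExpression: has no children and interprets directly.
class Constant implements Expression {
    private final boolean value;
    Constant(boolean value) { this.value = value; }
    public boolean interpret() { return value; }
}

// NonTerminalExpressions hold child expressions and forward
// interpret requests to them.
class And implements Expression {
    private final Expression left, right;
    And(Expression left, Expression right) { this.left = left; this.right = right; }
    public boolean interpret() { return left.interpret() && right.interpret(); }
}

class Or implements Expression {
    private final Expression left, right;
    Or(Expression left, Expression right) { this.left = left; this.right = right; }
    public boolean interpret() { return left.interpret() || right.interpret(); }
}
```

The nested constructor calls form the abstract syntax tree; as noted above, building that tree (by hand or by a parser) is outside the pattern itself.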
Uses
Specialized database query languages such as SQL.
Specialized computer languages that are often used to describe communication protocols.
Most general-purpose computer languages actually incorporate several specialized languages.
Structure
UML class and object diagram
In the above UML class diagram, the Client class refers to the common AbstractExpression interface for interpreting an expression
interpret(context).
The TerminalExpression class has no children and interprets an expression directly.
The NonTerminalExpression class maintains a container of child expressions
(expressions) and forwards interpret requests
to these expressions.
The object collaboration diagram
shows the run-time interactions: The Client object sends an interpret request to the abstract syntax tree.
The request is forwarded to (performed on) all objects down the tree structure.
The No |
https://en.wikipedia.org/wiki/Mediator%20pattern | In software engineering, the mediator pattern defines an object that encapsulates how a set of objects interact. This pattern is considered to be a behavioral pattern due to the way it can alter the program's running behavior.
In object-oriented programming, programs often consist of many classes. Business logic and computation are distributed among these classes. However, as more classes are added to a program, especially during maintenance and/or refactoring, the problem of communication between these classes may become more complex. This makes the program harder to read and maintain. Furthermore, it can become difficult to change the program, since any change may affect code in several other classes.
With the mediator pattern, communication between objects is encapsulated within a mediator object. Objects no longer communicate directly with each other, but instead communicate through the mediator. This reduces the dependencies between communicating objects, thereby reducing coupling.
Overview
The mediator design pattern is one of the twenty-three well-known design patterns that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
Problems that the mediator design pattern can solve
Tight coupling between a set of interacting objects should be avoided.
It should be possible to change the interaction between a set of objects independently.
Defining a set of interacting objects by accessing and updating each other directly is inflexible because it tightly couples the objects to each other and makes it impossible to change the interaction independently from (without having to change) the objects.
It also stops the objects from being reusable and makes them hard to test.
Tightly coupled objects are hard to implement, change, test, and reuse because they refer to and know about many different objects.
Solutions described by the mediator design pattern
Define a separate (mediator) object that encapsulates the interaction between a set of objects.
Objects delegate their interaction to a mediator object instead of interacting with each other directly.
The objects interact with each other indirectly through a mediator object that controls and coordinates the interaction.
This makes the objects loosely coupled. They only refer to and know about their mediator object and have no explicit knowledge of each other.
See also the UML class and sequence diagram below.
Definition
The essence of the mediator pattern is to "define an object that encapsulates how a set of objects interact". It promotes loose coupling by keeping objects from referring to each other explicitly, and it allows their interaction to be varied independently. Client classes can use the mediator to send messages to other clients, and can receive messages from other clients via an event on the mediator class.
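A minimal Java sketch (the ChatRoom mediator and User colleagues are hypothetical examples; colleagues refer only to the Mediator interface, never to each other):

```java
import java.util.ArrayList;
import java.util.List;

// Mediator encapsulates how a set of colleague objects interact.
interface Mediator {
    void broadcast(Colleague sender, String message);
}

// Colleagues delegate their interaction to the mediator.
abstract class Colleague {
    protected final Mediator mediator;
    final List<String> received = new ArrayList<>();
    Colleague(Mediator mediator) { this.mediator = mediator; }
    void send(String message) { mediator.broadcast(this, message); }
    void receive(String message) { received.add(message); }
}

// Concrete mediator: controls and coordinates the interaction.
class ChatRoom implements Mediator {
    private final List<Colleague> colleagues = new ArrayList<>();
    void register(Colleague colleague) { colleagues.add(colleague); }
    public void broadcast(Colleague sender, String message) {
        for (Colleague colleague : colleagues) {
            if (colleague != sender) {
                colleague.receive(message);
            }
        }
    }
}

class User extends Colleague {
    User(Mediator mediator) { super(mediator); }
}
```

Changing how messages are routed (filtering, ordering, logging) only requires changing ChatRoom; the colleagues stay untouched.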
Structure
UML class and sequence diagram
I |
https://en.wikipedia.org/wiki/Strategy%20pattern | In computer programming, the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime. Instead of implementing a single algorithm directly, code receives run-time instructions as to which in a family of algorithms to use.
Strategy lets the algorithm vary independently from clients that use it. Strategy is one of the patterns included in the influential book Design Patterns by Gamma et al. that popularized the concept of using design patterns to describe how to design flexible and reusable object-oriented software. Deferring the decision about which algorithm to use until runtime allows the calling code to be more flexible and reusable.
For instance, a class that performs validation on incoming data may use the strategy pattern to select a validation algorithm depending on the type of data, the source of the data, user choice, or other discriminating factors. These factors are not known until run-time and may require radically different validation to be performed. The validation algorithms (strategies), encapsulated separately from the validating object, may be used by other validating objects in different areas of the system (or even different systems) without code duplication.
Typically, the strategy pattern stores a reference to some code in a data structure and retrieves it. This can be achieved by mechanisms such as the native function pointer, the first-class function, classes or class instances in object-oriented programming languages, or accessing the language implementation's internal storage of code via reflection.
Structure
UML class and sequence diagram
In the above UML class diagram, the Context class doesn't implement an algorithm directly.
Instead, Context refers to the Strategy interface for performing an algorithm (strategy.algorithm()), which makes Context independent of how an algorithm is implemented.
The Strategy1 and Strategy2 classes implement the Strategy interface, that is, implement (encapsulate) an algorithm.
The UML sequence diagram
shows the run-time interactions: The Context object delegates an algorithm to different Strategy objects. First, Context calls algorithm() on a Strategy1 object,
which performs the algorithm and returns the result to Context.
Thereafter, Context changes its strategy and calls algorithm() on a Strategy2 object,
which performs the algorithm and returns the result to Context.
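A minimal Java sketch of this interaction (the arithmetic strategies are hypothetical examples of a family of interchangeable algorithms):

```java
// Strategy interface encapsulates a family of interchangeable algorithms.
interface Strategy {
    int algorithm(int a, int b);
}

class AddStrategy implements Strategy {
    public int algorithm(int a, int b) { return a + b; }
}

class MultiplyStrategy implements Strategy {
    public int algorithm(int a, int b) { return a * b; }
}

// Context delegates to whichever Strategy it currently holds,
// so the algorithm can be selected (and changed) at run-time.
class Context {
    private Strategy strategy;
    void setStrategy(Strategy strategy) { this.strategy = strategy; }
    int execute(int a, int b) { return strategy.algorithm(a, b); }
}
```

The Context never knows which concrete algorithm runs; swapping strategies at run-time changes its behavior without modifying its code.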
Class diagram
Strategy and open/closed principle
According to the strategy pattern, the behaviors of a class should not be inherited. Instead, they should be encapsulated using interfaces. This is compatible with the open/closed principle (OCP), which proposes that classes should be open for extension but closed for modification.
As an example, consider a car class. Two possible functionalities for car are brake and accelerate. Since accelerate and brake behaviors change frequently between models, a common approach is to impleme |
https://en.wikipedia.org/wiki/Template%20method%20pattern | In object-oriented programming, the template method is one of the behavioral design patterns identified by Gamma et al. in the book Design Patterns. The template method is a method in a superclass, usually an abstract superclass, and defines the skeleton of an operation in terms of a number of high-level steps. These steps are themselves implemented by additional helper methods in the same class as the template method.
The helper methods may be either abstract methods, in which case subclasses are required to provide concrete implementations, or hook methods, which have empty bodies in the superclass. Subclasses can (but are not required to) customize the operation by overriding the hook methods. The intent of the template method is to define the overall structure of the operation, while allowing subclasses to refine, or redefine, certain steps.
Overview
This pattern has two main parts:
The "template method" is implemented as a method in a base class (usually an abstract class). This method contains code for the parts of the overall algorithm that are invariant. The template ensures that the overarching algorithm is always followed. In the template method, portions of the algorithm that may vary are implemented by sending self messages that request the execution of additional helper methods. In the base class, these helper methods are given a default implementation, or none at all (that is, they may be abstract methods).
Subclasses of the base class "fill in" the empty or "variant" parts of the "template" with specific algorithms that vary from one subclass to another. It is important that subclasses do not override the template method itself.
At run-time, the algorithm represented by the template method is executed by sending the template message to an instance of one of the concrete subclasses. Through inheritance, the template method in the base class starts to execute. When the template method sends a message to self requesting one of the helper methods, the message will be received by the concrete sub-instance. If the helper method has been overridden, the overriding implementation in the sub-instance will execute; if it has not been overridden, the inherited implementation in the base class will execute. This mechanism ensures that the overall algorithm follows the same steps every time, while allowing the details of some steps to depend on which instance received the original request to execute the algorithm.
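The mechanism described above can be sketched in Java (the report-generation example is hypothetical; generate() is the template method, header() and footer() are hooks with defaults, and body() is an abstract step):

```java
// The template method fixes the algorithm's skeleton; helper methods vary.
abstract class ReportGenerator {
    // Template method: subclasses must not override this.
    final String generate() {
        return header() + body() + footer();
    }

    // Hook methods with default implementations; subclasses may override.
    String header() { return "[header]"; }
    String footer() { return "[footer]"; }

    // Abstract step: subclasses are required to provide it.
    abstract String body();
}

class PlainReport extends ReportGenerator {
    String body() { return "plain body"; }
}

class FancyReport extends ReportGenerator {
    String body() { return "fancy body"; }
    @Override
    String header() { return "<<header>>"; } // fine-tuning via a hook
}
```

Every report follows the same header-body-footer skeleton, while each subclass controls only the steps it overrides.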
This pattern is an example of inversion of control because the high-level code no longer determines what algorithms to run; a lower-level algorithm is instead selected at run-time.
Some of the self messages sent by the template method may be to hook methods. These methods are implemented in the same base class as the template method, but with empty bodies (i.e., they do nothing). Hook methods exist so that subclasses can override them, and can thus fine-tune the action of the algorithm without the need to override |
https://en.wikipedia.org/wiki/Guarded%20suspension | In concurrent programming, guarded suspension is a software design pattern for managing operations that require both a lock to be acquired and a precondition to be satisfied before the operation can be executed. The guarded suspension pattern is typically applied to method calls in object-oriented programs, and involves suspending the method call, and the calling thread, until the precondition (acting as a guard) is satisfied.
Usage
Because it is blocking, the guarded suspension pattern is generally only used when the developer knows that a method call will be suspended for a finite and reasonable period of time. If a method call is suspended for too long, then the overall program will slow down or stop, waiting for the precondition to be satisfied. If the developer knows that the method call suspension will be indefinite or for an unacceptably long period, then the balking pattern may be preferred.
Implementation
In Java, the Object class provides the wait() and notify() methods to assist with guarded suspension. In the implementation below, if no precondition is satisfied for the method call to succeed, then the method waits until the object finally enters a valid state.
public class Example {
    private boolean okToProceed = false; // example precondition state (added so the sketch compiles)

    private boolean preCondition() {
        return okToProceed;
    }

    synchronized void guardedMethod() {
        while (!preCondition()) {
            try {
                // Suspend until another thread changes the state and calls notify()
                wait();
            } catch (InterruptedException e) {
                // Interrupted while waiting; loop around and re-check the guard
            }
        }
        // Actual task implementation
    }

    synchronized void alterObjectStateMethod() {
        // Change the object state
        okToProceed = true;
        // Inform waiting threads
        notify();
    }
}
An example of an actual implementation would be a queue object whose get method has a guard that detects when there are no items in the queue. Once the put method notifies waiting threads (for example, a thread blocked in a get call), the get method can exit its guarded state and proceed. Once the queue is empty again, the get method re-enters its guarded state.
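The queue just described might be sketched as follows (a simplified, unbounded example; production code should prefer java.util.concurrent classes such as BlockingQueue):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A queue whose get() suspends until an item is available (the guard).
class GuardedQueue<T> {
    private final Queue<T> items = new ArrayDeque<>();

    synchronized T get() {
        while (items.isEmpty()) { // guard: re-checked after every wakeup
            try {
                wait(); // suspend until put() notifies
            } catch (InterruptedException e) {
                // Simplest form: swallow the interrupt and keep waiting.
                // Real code should restore the interrupt status.
            }
        }
        return items.remove();
    }

    synchronized void put(T item) {
        items.add(item);
        notifyAll(); // wake any threads waiting in get()
    }
}
```

The while loop (rather than an if) is essential: a woken thread must re-check the guard, since another thread may have emptied the queue first.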
See also
Balking pattern is an alternative pattern for dealing with a precondition
Guarded Command Language includes a similar language construct
Readers–writer lock
Software design patterns |
https://en.wikipedia.org/wiki/Double-checked%20locking | In software engineering, double-checked locking (also known as "double-checked locking optimization") is a software design pattern used to reduce the overhead of acquiring a lock by testing the locking criterion (the "lock hint") before acquiring the lock. Locking occurs only if the locking criterion check indicates that locking is required.
The original form of the pattern, appearing in Pattern Languages of Program Design 3, has data races, depending on the memory model in use, and it is hard to get right. Some consider it to be an anti-pattern. There are valid forms of the pattern, including the use of the keyword in Java and explicit memory barriers in C++.
The pattern is typically used to reduce locking overhead when implementing "lazy initialization" in a multi-threaded environment, especially as part of the Singleton pattern. Lazy initialization avoids initializing a value until the first time it is accessed.
Motivation and original pattern
Consider, for example, this code segment in the Java programming language:
// Single-threaded version
class Foo {
    private static Helper helper;

    public Helper getHelper() {
        if (helper == null) {
            helper = new Helper();
        }
        return helper;
    }
    // other functions and members...
}
The problem is that this does not work when using multiple threads. A lock must be obtained in case two threads call getHelper() simultaneously. Otherwise, either they may both try to create the object at the same time, or one may wind up getting a reference to an incompletely initialized object.
Synchronizing with a lock can fix this, as is shown in the following example:
// Correct but possibly expensive multithreaded version
class Foo {
    private Helper helper;

    public synchronized Helper getHelper() {
        if (helper == null) {
            helper = new Helper();
        }
        return helper;
    }
    // other functions and members...
}
This is correct and will most likely have sufficient performance. Note, however, that only the first call to getHelper() creates the object, and only the few threads trying to access it during that time need to be synchronized; after that, all calls just get a reference to the member variable. Since synchronizing a method could in some extreme cases decrease performance by a factor of 100 or more, the overhead of acquiring and releasing a lock every time this method is called seems unnecessary: once the initialization has been completed, acquiring and releasing the locks would appear unnecessary. Many programmers, including the authors of the double-checked locking design pattern, have attempted to optimize this situation in the following manner:
Check that the variable is initialized (without obtaining the lock). If it is initialized, return it immediately.
Obtain the lock.
Double-check whether the variable has already been initialized: if another thread acquired the lock first, it may have already done the initialization. If so, re |
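Assembled into code, these steps take the following shape; under the Java 5 memory model, the shared field must be declared volatile for the pattern to be correct. This is a sketch with illustrative names, not the article's own listing:

```java
// Sketch of the valid modern-Java form of double-checked locking.
// volatile guarantees safe publication of the constructed Helper.
class Helper {}

public class Foo {
    private static volatile Helper helper;

    public static Helper getHelper() {
        Helper result = helper;              // 1. check without locking
        if (result == null) {
            synchronized (Foo.class) {       // 2. obtain the lock
                result = helper;             // 3. double-check under the lock
                if (result == null) {
                    helper = result = new Helper();
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Repeated calls return the same lazily created instance
        System.out.println(Foo.getHelper() == Foo.getHelper());
    }
}
```

The local variable `result` avoids re-reading the volatile field on the common, already-initialized path.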
https://en.wikipedia.org/wiki/Software%20design%20pattern | In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. Rather, it is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.
Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages. Some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages.
Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.
History
Patterns originated as an architectural concept by Christopher Alexander as early as 1977 (cf. "The Pattern of Streets," Journal of the AIP, September 1966, Vol. 32, No. 5, pp. 273–278). In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying patterns to programming – specifically pattern languages – and presented their results at the OOPSLA conference that year. In the following years, Beck, Cunningham and others followed up on this work.
Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published in 1994 by the so-called "Gang of Four" (Gamma et al.), which is frequently abbreviated as "GoF". That same year, the first Pattern Languages of Programming Conference was held, and the following year the Portland Pattern Repository was set up for documentation of design patterns. The scope of the term remains a matter of dispute. Notable books in the design pattern genre include:
Although design patterns have been applied practically for a long time, formalization of the concept of design patterns languished for several years.
Practice
Design patterns can speed up the development process by providing tested, proven development paradigms. Effective software design requires considering issues that may not become visible until later in the implementation. Freshly written code can often have hidden subtle issues that take time to be detected, issues that sometimes can cause major problems down the road. Reusing design patterns helps to prevent such subtle issues, and it also improves code readability for coders and architects who are familiar with the patterns.
In order to achieve flexibility, design patterns usually introduce additional levels of indirection, which |
https://en.wikipedia.org/wiki/Dark%20Shadows | Dark Shadows is an American gothic soap opera that aired weekdays on the ABC television network from June 27, 1966, to April 2, 1971. The show depicted the lives, loves, trials, and tribulations of the wealthy Collins family of Collinsport, Maine, where a number of supernatural occurrences take place.
The series became popular when vampire Barnabas Collins (Jonathan Frid) was introduced ten months into its run. It would also feature ghosts, werewolves, zombies, man-made monsters, witches, warlocks, time travel, and a parallel universe. A small company of actors each played many roles; as actors came and went, some characters were played by more than one actor. The show was distinguished by its melodramatic performances, atmospheric interiors, memorable storylines, numerous dramatic plot twists, adventurous music score, broad cosmos of characters, and heroic adventures. Unusual among the soap operas of its time, which were aimed primarily at adults, Dark Shadows developed a large teenage audience and a dedicated cult following. By 1969, it had become ABC's highest-rated daytime series.
The original network run of the show amassed 1,225 episodes. The success of the series spawned a media franchise that has included two feature films (House of Dark Shadows in 1970 and Night of Dark Shadows in 1971), a 1991 TV remake, an unsold 2004 remake pilot, a 2012 film reboot directed by Tim Burton, and numerous spin-off novels and comics. Since 2006, the series has continued as a range of audio dramas produced by Big Finish Productions, featuring members of the original cast including David Selby, Lara Parker, and Kathryn Leigh Scott.
TV Guide's list of all-time Top Cult Shows ranked the series #19 in 2004, and #23 in 2007.
History
Creator Dan Curtis claimed he had a dream in 1965 of a mysterious young woman on a train. The following day Curtis told his wife of the dream and pitched the idea as a TV series to ABC. Network officials greenlit production and Curtis began hiring crew members.
Art Wallace was hired to create a story from Curtis's dream sequence. Wallace wrote the story bible Shadows on the Wall, the proposed title for the show, later changed to Dark Shadows. Robert Costello was added as a line producer, and Curtis took on the creator and executive producer roles. Lela Swift, John Sedwick, and Henry Kaplan all agreed to be directors for the new series. Robert Cobert created the musical score and Sy Tomashoff designed the set.
Broadcast history
Perhaps one of ABC's first truly popular daytime series, along with the game show Let's Make a Deal (which had moved from its original home NBC in 1968), Dark Shadows found its demographic niche in teenagers coming home from school in time to watch the show at 4 p.m. Eastern/3 p.m. Central, where it aired for almost all of its network run, the exception being a 15-month stretch between April 1967 and July 1968, when it aired a half-hour earlier. Originally, it was aired in black-and-white, but the sho |
https://en.wikipedia.org/wiki/Software%20configuration%20management | In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine the "what, when, why and who" of the change. If a configuration is working well, SCM can determine how to replicate it across many hosts.
The acronym "SCM" is also expanded as source configuration management process and software change and configuration management. However, "configuration" is generally understood to cover changes typically made by a system administrator.
Purposes
The goals of SCM are generally:
Configuration identification - Identifying configurations, configuration items and baselines.
Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
Build management - Managing the process and tools used for builds.
Process management - Ensuring adherence to the organization's development process.
Environment management - Managing the software and hardware that host the system.
Teamwork - Facilitate team interactions related to the process.
Defect tracking - Making sure every defect has traceability back to the source.
With the introduction of cloud computing and DevOps the purposes of SCM tools have become merged in some cases. The SCM tools themselves have become virtual appliances that can be instantiated as virtual machines and saved with state and version. The tools can model and manage cloud-based virtual resources, including virtual appliances, storage units, and software bundles. The roles and responsibilities of the actors have become merged as well with developers now being able to dynamically instantiate virtual servers and related resources.
History
The history of software configuration management (SCM) in computing can be traced back as early as the 1950s, when CM (configuration management), originally for hardware development and production control, was being applied to software development. Early software had a physical footprint, such as cards, tapes, and other media. The first software configuration management was a manual operation. With the advances in language and complexity, software engineering, involving configuration management and other methods, became a major concern due to issues like schedule, budget, and quality. Practical lessons, over the years, had led to th |
https://en.wikipedia.org/wiki/Eric%20Allman | Eric Paul Allman (born September 2, 1955) is an American computer programmer who developed sendmail and its precursor delivermail in the late 1970s and early 1980s at UC Berkeley. In 1998, Allman and Greg Olson co-founded the company Sendmail, Inc.
Education and training
Born in El Cerrito, California, Allman knew from an early age that he wanted to work in computing. He used to break into his high school's mainframe and later used the UC Berkeley computing center for his computing needs. In 1973, he entered UC Berkeley, just as the Unix operating system was beginning to gain popularity in academic circles. He earned B.S. and M.S. degrees from UC Berkeley in 1977 and 1980, respectively.
Sendmail and Syslog
As the Unix source code was available at Berkeley, the local hackers quickly made many extensions to the AT&T code. One such extension was delivermail, which in 1981 turned into sendmail. As an MTA, it was designed to deliver email over the still relatively small (as compared to today's Internet) ARPANET, which consisted of many smaller networks with vastly differing formats for e-mail headers.
Sendmail soon became an important part of the Berkeley Software Distribution (BSD) and it used to be the most widely used MTA on Unix based systems, despite its somewhat complex configuration syntax and frequent abuse by Internet telemarketing firms. In 1998, Allman and Greg Olson founded Sendmail, Inc., headquartered in Emeryville, California, to do proprietary work on improving sendmail.
The logging format used by the MTA, known as syslog, was at first used solely by sendmail, but eventually became an unofficial standard format used by other unrelated programs for logging. Later, this format was made official by in 2001; however, the original format has been made obsolete by the most recent revision, .
Other contributions
Allman is credited with popularizing the Allman indent style, also known as BSD indent style. He ported a Fortran version of Super Star Trek to the C programming language, which later became BSD Trek, and is still included in various Linux distributions as part of the classic bsdgames package.
He was awarded the Telluride Tech Festival Award of Technology in August, 2006 in Telluride, Colorado. In 2009 he was recognized as a Distinguished Engineer by the Association for Computing Machinery. In April 2014 he was inducted into the Internet Hall of Fame.
Personal life
Allman, who is gay, lives in Berkeley, California, with Marshall Kirk McKusick, who had been his partner for more than 30 years before they got married in October 2013. The two first met in graduate school. McKusick is a lead developer of BSD.
References
External links
Homepage as of 2010-10-29
Linkedin.com profile
Former homepage at Berkeley
You've got Sendmail, Salon article about sendmail going commercial (December 1998)
Biography at Sendmail.com (see "Chief Science Officer")
American computer programmers
Free software programmers
UC Berkeley College of Eng |
https://en.wikipedia.org/wiki/BitKeeper | BitKeeper is a software tool for distributed revision control of computer source code. Originally developed as proprietary software by BitMover Inc., a privately held company based in Los Gatos, California, it was released as open-source software under the Apache-2.0 license on 9 May 2016. BitKeeper is no longer being developed.
History
BitKeeper was originally developed by BitMover Inc., a privately held company from Los Gatos, California owned by Larry McVoy, who had previously designed TeamWare.
BitKeeper and the Linux Kernel
BitKeeper was first mentioned as a solution to some of the growing pains that Linux was having in September 1998. Early access betas were available in May 1999 and on May 4, 2000, the first public release of BitKeeper was made available.
BitMover used to provide access to the system for certain open-source or free-software projects, one of which was the source code of the Linux kernel. The license for the "community" version of BitKeeper had allowed for developers to use the tool at no cost for open source or free software projects, provided those developers did not participate in the development of a competing tool (such as Concurrent Versions System, GNU arch, Subversion or ClearCase) for the duration of their usage of BitKeeper plus one year. This restriction applied regardless of whether the competing tool was free or proprietary. This version of BitKeeper also required that certain meta-information about changes be stored on computer servers operated by BitMover, an addition that made it impossible for community version users to run projects of which BitMover was unaware.
The decision made in 2002 to use BitKeeper for Linux kernel development was a controversial one. Some, including GNU Project founder Richard Stallman, expressed concern about proprietary tools being used on a flagship free project. While project leader Linus Torvalds and other core developers adopted BitKeeper, several key developers (including Linux veteran Alan Cox) refused to do so, citing the BitMover license, and voicing concern that the project was ceding some control to a proprietary developer. To mitigate these concerns, BitMover added gateways which allowed limited interoperation between the Linux BitKeeper servers (maintained by BitMover) and developers using CVS and Subversion. Even after this addition, flamewars occasionally broke out on the Linux kernel mailing list, often involving key kernel developers and BitMover's CEO Larry McVoy, who was also a Linux contributor.
In April 2005, BitMover announced that it would stop providing a version of BitKeeper free of charge to the community, giving as the reason the efforts of Andrew Tridgell, a developer employed by OSDL on an unrelated project, to develop a client which would show the metadata (data about revisions, possibly including differences between versions) instead of only the most recent version. Being able to see metadata and compare past versions is one of the core featur |
https://en.wikipedia.org/wiki/PARAM | PARAM is a series of Indian supercomputers designed and assembled by the Centre for Development of Advanced Computing (C-DAC) in Pune. PARAM means "supreme" in the Sanskrit language, whilst also creating an acronym for "PARAllel Machine". As of November 2022 the fastest machine in the series is the PARAM Siddhi AI which ranks 120th in world, with an Rpeak of 5.267 petaflops.
History
C-DAC was created in November 1987, originally as the Centre for Development of Advanced Computing Technology (C-DACT). This was in response to difficulties in purchasing supercomputers from foreign sources, and the Indian Government decided to try to develop indigenous computing technology.
PARAM 8000
The PARAM 8000 was the first machine in the series and was built from scratch. A prototype was benchmarked at the 1990 Zurich Super-computing Show; of the machines demonstrated there, it came second only to one from the United States.
A 64-node machine was delivered in August 1991. Each node used Inmos T800/T805 transputers. A 256-node machine had a theoretical peak performance of 1 GFLOPS, but in practice sustained 100–200 MFLOPS. The PARAM 8000 was a distributed-memory MIMD architecture with a reconfigurable interconnection network.
The PARAM 8000 was noted to be 28 times more powerful than the Cray X-MP that the government originally requested, for the same $10 million cost quoted for it.
Exports
The computer was a success and was exported to Germany, United Kingdom and Russia. Apart from taking over the home market, PARAM attracted 14 other buyers with its relatively low price tag of $350,000.
The computer was also exported to the ICAD Moscow in 1991 under Russian collaboration.
PARAM 8600
PARAM 8600 was an improvement over PARAM 8000. In 1992 C-DAC realised its machines were underpowered and wished to integrate the newly released Intel i860 processor. Each node was created with one i860 and four Inmos T800 transputers. The same PARAS programming environment was used for both the PARAM 8000 and 8600; this meant that programs were portable. Each 8600 cluster was noted to be as powerful as 4 PARAM 8000 clusters.
PARAM 9000
The PARAM 9000 was designed to merge cluster processing and massively parallel processing workloads. It was first demonstrated in 1994. The design was changed to be modular so that newer processors could be easily accommodated. Typically a system used 32–40 processors, but it could be scaled up to 200 CPUs using a Clos network topology. The PARAM 9000/SS was the SuperSPARC II processor variant, the PARAM 9000/US used the UltraSPARC processor, and the PARAM 9000/AA used the DEC Alpha.
PARAM 10000
The PARAM 10000 was unveiled in 1998 as part of C-DAC's second mission. PARAM 10000 used several independent nodes, each based on the Sun Enterprise 250 server; each such server contained two 400 MHz UltraSPARC II processors. The base configuration had three compute nodes and a server node. The peak spee |
https://en.wikipedia.org/wiki/E-text | e-text (from "electronic text"; sometimes written as etext) is a general term for any document that is read in digital form, and especially a document that is mainly text. For example, a computer-based book of art with minimal text, or a set of photographs or scans of pages, would not usually be called an "e-text". An e-text may be a binary or a plain text file, viewed with any open source or proprietary software. An e-text may have markup or other formatting information, or not. An e-text may be an electronic edition of a work originally composed or published in other media, or may be created in electronic form originally. The term is usually synonymous with e-book.
E-text origins
E-texts, or electronic documents, have been around since long before the Internet, the Web, and specialized E-book reading hardware. Roberto Busa began developing an electronic edition of Aquinas in the 1940s, while large-scale electronic text editing, hypertext, and online reading platforms such as Augment and FRESS appeared in the 1960s. These early systems made extensive use of formatting, markup, automatic tables of contents, hyperlinks, and other information in their texts, as well as in some cases (such as FRESS) supporting not just text but also graphics.
"Just plain text"
In some communities, "e-text" is used much more narrowly, to refer to electronic documents that are, so to speak, "plain vanilla ASCII". By this is meant not only that the document is a plain text file, but that it has no information beyond "the text itself"—no representation of bold or italics, paragraph, page, chapter, or footnote boundaries, etc. Michael S. Hart, for example, argued that this "is the only text mode that is easy on both the eyes and the computer". Hart made the correct point that proprietary word-processor formats made texts grossly inaccessible; but that is irrelevant to standard, open data formats. The narrow sense of "e-text" is now uncommon, because the notion of "just vanilla ASCII" (attractive at first glance), has turned out to have serious difficulties:
First, this narrow type of "e-text" is limited to the letters of the English alphabet. Even Spanish ñ or the accented vowels used in many European languages cannot be represented (except awkwardly and ambiguously, as "~n" or "a'"). Asian, Slavic, Greek, and other writing systems are impossible.
Second, diagrams and pictures cannot be accommodated, and many books have at least some such material; often it is essential to the book.
Third, "e-texts" in this narrow sense have no reliable way to distinguish "the text" from other things that occur in a work. For example, page numbers, page headers, and footnotes might be omitted, or might simply appear as additional lines of text, perhaps with blank lines before and after (or not). An ornate separator line might be represented instead by a line of asterisks (or not). Chapter and sections titles, likewise, are just additional lines of text: they might be detectable by capitalizatio |
https://en.wikipedia.org/wiki/Advanced%20Linux%20Sound%20Architecture | Advanced Linux Sound Architecture (ALSA) is a software framework and part of the Linux kernel that provides an application programming interface (API) for sound card device drivers.
Some of the goals of the ALSA project at its inception were automatic configuration of sound-card hardware and graceful handling of multiple sound devices in a system. ALSA is released under GPL-2.0-or-later and LGPL-2.1-or-later.
On Linux, sound servers like sndio, PulseAudio, JACK (low-latency, professional-grade audio editing and mixing) and PipeWire, and higher-level APIs (e.g. OpenAL, SDL audio, etc.) work on top of ALSA and its sound card device drivers. ALSA succeeded the older Linux port of the Open Sound System (OSS).
History
The project to develop ALSA was led by Jaroslav Kysela, and was based on the Linux device driver for the Gravis Ultrasound sound card. It started in 1998 and was developed separately from the Linux kernel until it was introduced in the 2.5 development series in 2002 (2.5.4–2.5.5).
In the 2.6 version, it replaced the previous system, Open Sound System (OSS), by default (although a backwards-compatibility layer does exist).
ALSA has a larger and more complex API than OSS, so it can be more difficult to develop an application that uses ALSA as its sound technology. While ALSA may be configured to provide an OSS emulation layer, such functionality is no longer available or is not installed by default in many Linux distributions.
Features
ALSA was designed with some features which were not, at the time of its conception, supported by OSS:
Hardware-based MIDI synthesis.
Hardware mixing of multiple channels.
Full-duplex operation.
Multiprocessor-friendly, thread-safe device drivers.
Besides the sound device drivers, ALSA bundles a user-space library for application developers who want to use driver features through an interface that is higher-level than the interface provided for direct interaction with the kernel drivers. Unlike the kernel API, which tries to reflect the capabilities of the hardware directly, ALSA's user-space library presents an abstraction that remains as standardized as possible across disparate underlying hardware elements. This goal is achieved in part by using software plug-ins; for example, many modern sound cards or built-in sound chips do not have a "master volume" control. Instead, for these devices, the user space library provides a software volume control using the "softvol" plug-in, and ordinary application software need not care whether such a control is implemented by underlying hardware or software emulation of such underlying hardware.
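As an illustration of such a plug-in, a softvol control might be declared in a user's ~/.asoundrc along these lines; the slave device and control name here are assumptions for the example, not taken from the text:

```
# Hypothetical ~/.asoundrc fragment: wraps the default device in a
# software volume control exposed as "Softmaster" to mixer applications.
pcm.softvol {
    type softvol
    slave.pcm "default"
    control {
        name "Softmaster"
        card 0
    }
}
```

Applications that open the `softvol` PCM then get the software volume control transparently, whether or not the hardware has a master volume.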
Applications
In addition to the software framework internal to the Linux kernel, the ALSA project also provides the command-line tools and utilities alsactl, amixer, arecord/aplay and alsamixer, an ncurses-based TUI.
There also are GUIs programmed by third-party developers, such as GNOME-ALSAmixer (using GTK), Kmix, XFCE4-mixer, LXpanel, QasHctl, QasMixer, Pavucontrol |
https://en.wikipedia.org/wiki/Open%20Sound%20System | The Open Sound System (OSS) is an interface for making and capturing sound in Unix and Unix-like operating systems. It is based on standard Unix devices system calls (i.e. POSIX read, write, ioctl, etc.). The term also sometimes refers to the software in a Unix kernel that provides the OSS interface; it can be thought of as a device driver (or a collection of device drivers) for sound controller hardware. The goal of OSS is to allow the writing of sound-based applications that are agnostic of the underlying sound hardware.
OSS was created by Hannu Savolainen and is distributed under four license options, three of which are free software licences, thus making OSS free software.
API
The API is designed to use the traditional Unix framework of open(), read(), write(), and ioctl(), via device files. For instance, the default device for sound input and output is /dev/dsp. Examples using the shell:
cat /dev/random > /dev/dsp # plays white noise through the speaker
cat /dev/dsp > a.a # reads data from the microphone and copies it to file a.a
OSS implements the /dev/audio interface. Detailed access to individual sound devices is provided via the directory. OSS also has MIDI support in , (both legacy) and .
On Linux, OSS4 is also able to emulate ALSA, its open-source replacement.
History
OSS was originally "VoxWare", a Linux kernel sound driver by Hannu Savolainen. Savolainen made the code available under free software licenses, GPL for Linux and BSD for BSD distributions. Between 1993 and 1997, OSS was the sole choice of sound system in FreeBSD and Linux. This was changed when Luigi Rizzo wrote a new "pcm" driver for FreeBSD in 1997, and when Jaroslav Kysela started Advanced Linux Sound Architecture in 1998.
In 2002, Savolainen was contracted by the company 4Front Technologies and made the upcoming OSS 4, which includes support for newer sound devices and improvements, proprietary. In response, the Linux community abandoned the OSS/free implementation included in the kernel and development effort switched to the replacement Advanced Linux Sound Architecture (ALSA). FreeBSD by this time had switched to a "newpcm" project started in 1999 and was not affected.
In July 2007, 4Front Technologies released sources for OSS under CDDL-1.0 for OpenSolaris and GPL-2.0-only for Linux. Drivers for some soundcards remained closed-source and were not included in the release. In January 2008, 4Front Technologies released OSS for FreeBSD (and other BSD systems) under the BSD-2-Clause.
Adoption status
Code
OSS4 now exists mostly as a standalone piece of software, not integrated into the kernel source code. The exception is Solaris and OpenSolaris, which use a fork of OSS4 called Boomer that combines the OSS4 framework (audio and mixer) with Sun's earlier SADA (/dev/audio) API.
Although Linux distributions such as Ubuntu made OSS4 available as a software package after it was made free software, they have chosen to ignore any bugs filed against t |
https://en.wikipedia.org/wiki/LADSPA | LADSPA is an acronym for Linux Audio Developer's Simple Plugin API. It is an application programming interface (API) standard for handling audio filters and audio signal processing effects, licensed under LGPL-2.1-or-later. It was originally designed for Linux through consensus on the Linux Audio Developers Mailing List, but works on a variety of other platforms. It is used in many free audio software projects and there is a wide range of LADSPA plug-ins available.
LADSPA exists primarily as a header file written in the programming language C.
There are many audio plugin standards and most major modern software synthesizers and sound editors support a variety. The best known standard is probably Steinberg's Virtual Studio Technology. LADSPA is unusual in that it attempts to provide only the "Greatest Common Divisor" of other standards. This means that its scope is limited, but it is simple and plugins written using it are easy to embed in many other programs. The standard has changed little with time, so compatibility problems are rare.
DSSI extends LADSPA to cover instrument plugins.
LV2 is a successor, based on LADSPA and DSSI, but permitting easy extensibility, allowing custom user interfaces, MIDI messages, and custom extensions.
Competing technologies
Apple Inc.'s Audio Units
Digidesign's Real Time AudioSuite
Avid Technology's Avid Audio eXtension
Microsoft's DirectX plugin
Steinberg's Virtual Studio Technology
CLever Audio Plug-in (Open Source)
References
External links
ladspa.org
Application programming interfaces
Free audio software
Free software programmed in C
Music software plugin architectures
Linux APIs
Audio libraries |
https://en.wikipedia.org/wiki/Software%20synthesizer | A software synthesizer or softsynth is a computer program that generates digital audio, usually for music. Computer software that can create sounds or music is not new, but advances in processing speed now allow softsynths to accomplish the same tasks that previously required the dedicated hardware of a conventional synthesizer. Softsynths may be readily interfaced with other music software such as music sequencers typically in the context of a digital audio workstation. Softsynths are usually less expensive and can be more portable than dedicated hardware.
Types
Softsynths can cover a range of synthesis methods, including subtractive synthesis (including analog modeling, a subtype), FM synthesis (including the similar phase distortion synthesis), physical modelling synthesis, additive synthesis (including the related resynthesis), and sample-based synthesis.
Many popular hardware synthesizers are no longer manufactured but have been emulated in software. The emulation can even extend to having graphics that model the exact placements of the original hardware controls. Some simulators can even import the original sound patches with accuracy that is nearly indistinguishable from the original synthesizer. Popular synthesizers such as the Moog Minimoog, Yamaha DX7, Korg M1, Sequential Prophet-5, Oberheim OB-X, Roland Jupiter 8, ARP 2600 and dozens of other classics have been recreated in software. Software synth developers such as Arturia offer virtual editions of analog synths like the Minimoog and the ARP 2600, as well as the Yamaha CS-80. Gforce produces a Minimoog with sounds designed by Rick Wakeman and a version of the ARP Odyssey.
Some softsynths are sample-based, and frequently have more capability than hardware units, since computers have fewer restrictions on memory than dedicated hardware synthesizers. Sample libraries may be many gigabytes in size. Some are specifically designed to mimic real-world instruments such as pianos. Sample libraries' formats include .wav, .sf or .sf2.
Often a composer or virtual conductor will want a "draft mode" for initial score editing and then use the "production mode" to generate high-quality sound as one gets closer to the final version. The draft mode allows for quicker turn-around, perhaps in real time, but will not have the full quality of the production mode. The draft render is roughly analogous to a wire-frame or "big polygon" animation when creating 3D animation or CGI. Both are based on the trade-off between quality and turn-around time for reviewing drafts and changes.
Software instrument
A software instrument can be a synthesized version of a real instrument (like the sounds of a violin or drums), or a unique instrument, generated by computer software. Software instruments have been made popular by the convergence of synthesizers and computers, as well as sequencing software like GarageBand, Logic Pro, and Ableton Live. Also of note is software like Csound and Nyquist, which can be used to pro |
https://en.wikipedia.org/wiki/Virtual%20instrument | Virtual instrument may refer to:
A Software synthesizer, a computer program or plug-in that generates digital audio
A program that implements functions of an instrument by computer, sensors and actuators, see Virtual instrumentation
See also
VI (disambiguation) |
https://en.wikipedia.org/wiki/List%20of%20Atari%202600%20games | This is a list of games for the Atari Video Computer System, a console renamed to the Atari 2600 in November 1982. Sears licensed the console and many games from Atari, Inc., selling them under different names. Three cartridges were Sears exclusives.
The list contains games, divided into three sections:
Games published by Atari and Sears
Games published by third parties
Hobbyist-developed games after the system was discontinued.
The Atari VCS was first released in North America on September 11, 1977 with nine cartridges: Air-Sea Battle, Basic Math, Blackjack, Combat, Indy 500, Star Ship, Street Racer, Surround and Video Olympics.
The final licensed Atari 2600 games released in North America were Ikari Warriors, MotoRodeo, Sentinel, and Xenophobe in early 1991, and the final licensed games released in Europe were Klax and Acid Drop in 1990 and 1992 respectively.
Games published by Atari and Sears
All of the initial era of Atari 2600 games were developed and manufactured by Atari, Inc. These games were published by Atari, and many were also licensed to Sears, which released these games under its Tele-Games brand, often with different titles. Sears's Tele-Games brand was unrelated to the company Telegames, which also produced cartridges for the Atari 2600 (mostly re-issues of M Network games.)
Three games were also produced by Atari Inc. for Sears as exclusive releases under the Tele-Games brand: Steeplechase, Stellar Track, and Submarine Commander.
Games published by third parties
As the Atari 2600 console grew in popularity, in 1980 other game developers, such as Activision and Imagic, entered the market and published more than 380 of their own cartridges for the Atari 2600. Many of the most popular Atari 2600 games, such as Pitfall! and Demon Attack, are third-party games.
Homebrew games
The Atari 2600 has been a popular platform for homebrew projects, with games publicly released. Unlike later systems, the Atari 2600 does not require a modchip to run cartridges. Many games are clones of existing games written as programming challenges, often borrowing the name of the original.
In 2003, Activision selected several games for inclusion in the Game Boy Advance version of their Activision Anthology, as indicated below.
Additional titles (publisher unknown)
Included with Atari Flashback 9 / Flashback 9 Gold
Adventure II
Aquaventure
Asteroids Deluxe
Atari Climber
Burnin’ Rubber
Championship Soccer
Chase It!
Combat Two
Decathlon
Escape It!
Fun with Numbers
Miss It!
RealSports Basketball
Return to Haunted House
Saboteur
Save Mary
Shield Shifter
Space Raid
Strip Off
Tempest
Wizard
Yars’ Return
See also
List of Atari 2600 prototype games
List of best-selling Atari 2600 video games
Lists of video games
List of GameLine games for the Atari 2600
:Category:Cancelled Atari 2600 games
Notes
References
External links
Atari 2600 Rarity Guide
- Dozens of games freely playable from within the web browser.
Atari 2 |
https://en.wikipedia.org/wiki/Newsreader | Newsreader can refer to:
Newsreader (Usenet), a computer program for reading Usenet newsgroups
Newsreaders, a television series on Adult Swim
News presenter, a person that presents a news show on television, radio or the Internet
News aggregator, a computer program for syndicated Web content supplied in the form of a web feed
The Newsreader, a fictional Australian television series about newsreaders in the 80s |
https://en.wikipedia.org/wiki/Newsreaders | Newsreaders is an American television comedy that aired on Cartoon Network's late night programming block Adult Swim. Newsreaders is a spin-off of Childrens Hospital, presented as the fictional television news magazine program Newsreaders. The series premiered January 17, 2013 and ended on February 13, 2015, with a total of 24 episodes over the course of two seasons.
Cast
Hosts
Mather Zickel as Louis LaFonda (season 1)
Alan Tudyk as Reagan Biscayne (season 2)
Correspondents
Dannah Phirman as Narge Hemingway
Beth Dover as Sadee Deenus
Alison Becker as Xandra Dent
Kumail Nanjiani as Amir Larussa
Randall Park as Clavis Kim (season 2)
Commentator
Ray Wise as Skip Reming
David Wain as Jim Davidson (season 2)
Episodes
Series overview
Season 1 (2013)
Season 2 (2014–15)
Guest appearances in season two include Randall Park (as correspondent Clavis Kim), Billy Ray Cyrus, Malin Åkerman, Rob Huebel (as fictional Childrens Hospital star Rob Heubel), Rob Riggle, Martin Starr, James Urbaniak, Tom Lennon, Danny Pudi, Scott Adsit, Jenna Fischer, Mel Cowan, Ryan Hansen, Marc Evan Jackson, Steve Little, Harold Perrineau, the Sklar Brothers, David Wain, and David Hasselhoff.
References
External links
2010s American black comedy television series
2010s American satirical television series
2010s American single-camera sitcoms
2010s American television news shows
2013 American television series debuts
2015 American television series endings
Adult Swim original programming
American news parodies
American television spin-offs
English-language television shows
Television news sitcoms
Television series by Warner Bros. Television Studios
Television series by Williams Street |
https://en.wikipedia.org/wiki/List%20of%20programs%20broadcast%20by%20ABC%20%28American%20TV%20network%29 | The American Broadcasting Company (ABC) is a commercial broadcasting television network owned by Disney Entertainment, a subsidiary of The Walt Disney Company. Headquartered on Columbus Avenue and West 66th Street in Manhattan, ABC is the fifth-oldest major broadcasting network in the world. The network began its TV operations in 1948.
Current programming
Note: Titles are listed in order followed by the year of debut in parentheses.
Dramas
Grey's Anatomy (2005)
The Good Doctor (2017)
Station 19 (2018)
The Rookie (2018)
The Rookie: Feds (2022)
Will Trent (2023)
Comedies
The Conners (2018)
Abbott Elementary (2021)
Not Dead Yet (2023)
Docuseries
Superstar (2021)
Reality/non-scripted
America's Funniest Home Videos (1989 as a special; 1990)
The Bachelor (2002)
The Bachelorette (2003–05; 2008)
Dancing with the Stars (2005–21; 2023)
Shark Tank (2009)
The Great Christmas Light Fight (2013)
Bachelor in Paradise (2014)
American Idol (2018)
Judge Steve Harvey (2022)
Claim to Fame (2022)
The Parent Test (2022)
The Prank Panel (2023)
The Golden Bachelor (2023)
Specials
Santa Claus Is Comin' to Town (1970)
Dick Clark's New Year's Rockin' Eve (1974)
Disney Parks Christmas Day Parade (1983)
CMA Music Festival (2004)
Prep & Landing (2009)
CMA Country Christmas (2010)
Prep & Landing: Naughty vs. Nice (2011)
Toy Story of Terror! (2013)
Toy Story That Time Forgot (2014)
The Wonderful World of Disney: Magical Holiday Celebration (2016)
Olaf's Frozen Adventure (2017)
Mickey Saves Christmas (2022)
Endless Summer Vacation: Continued (Backyard Sessions) (2023)
Game shows
Celebrity Family Feud (2015)
The $100,000 Pyramid (2016)
Press Your Luck (2019)
Celebrity Wheel of Fortune (2021)
The Chase (2021)
Generation Gap (2022)
Celebrity Jeopardy! (2022)
Jeopardy! Masters (2023)
Soap operas
General Hospital (1963)
Awards shows
American Music Awards (1973)
Academy Awards (1976)
Country Music Association Awards (2006)
ESPY Awards (2015)
Talk shows
The View (1997)
GMA3: What You Need to Know (2018)
Late night shows
Jimmy Kimmel Live! (2003)
News
ABC World News Tonight (1953)
Good Morning America (1975)
20/20 (1978)
Nightline (1980)
This Week (1981)
America This Morning (1982)
World News Now (1992)
Saturday morning
Jack Hanna's Wild Countdown (2011)
Outback Adventures with Tim Faulkner (2014–15; 2016)
Rock the Park (2015)
Ocean Treks with Jeff Corwin (2016)
Oh Baby! with Anji Corley (2019)
Film presentations
The Ten Commandments (1973; 2000)
The Sound of Music (2002)
ABC Saturday Movie of the Week (2004)
ESPN programming
Professional football:
National Football League (NFL)
Monday Night Football (shared with ESPN)
Wild Card Playoffs (simulcast on ESPN)
Divisional Playoffs (simulcast on ESPN)
Pro Bowl Games (simulcast on ESPN)
Super Bowl (in rotation with NBC, CBS, and Fox - simulcast on ESPN)
National Football League Draft (simulcast on ESPN)
XFL (2020)
Professional basketball:
National Basketball Association (NBA) coverage, including:
NBA Countdow |
https://en.wikipedia.org/wiki/Software%20development%20kit | A software development kit (SDK) is a collection of software development tools in one installable package. They facilitate the creation of applications by providing a compiler, debugger and sometimes a software framework. They are normally specific to a hardware platform and operating system combination. To create applications with advanced functionalities such as advertisements and push notifications, most application software developers use specific software development kits.
Some SDKs are required for developing a platform-specific app. For example, the development of an Android app on the Java platform requires a Java Development Kit. For iOS applications (apps) the iOS SDK is required. For Universal Windows Platform the .NET Framework SDK might be used. There are also SDKs that add additional features and can be installed in apps to provide analytics, data about application activity, and monetization options. Some prominent creators of these types of SDKs include Google, Smaato, InMobi, and Facebook.
Details
An SDK can take the form of application programming interfaces (APIs) in the form of on-device libraries of reusable functions used to interface to a particular programming language, or it may be as complex as hardware-specific tools that can communicate with a particular embedded system. Common tools include debugging facilities and other utilities, often presented in an integrated development environment (IDE). SDKs may include sample software and/or technical notes along with documentation, and tutorials to help clarify points made by the primary reference material.
SDKs often include licenses that make them unsuitable for building software intended to be developed under an incompatible license. For example, a proprietary SDK is generally incompatible with free software development, while a GPL-licensed SDK could be incompatible with proprietary software development, for legal reasons. However, SDKs built under the GNU Lesser General Public License (LGPL) are typically usable for proprietary development. In cases where the underlying technology is new, SDKs may include hardware. For example, AirTag's 2021 NFC SDK included both the paying and the reading halves of the necessary hardware stack.
The average Android mobile app implements 15.6 separate SDKs, with gaming apps implementing on average 17.5 different SDKs. The most popular SDK categories for Android mobile apps are analytics and advertising.
SDKs can be unsafe (because they are implemented within apps yet run separate code). Malicious SDKs (with honest intentions or not) can violate users' data privacy, damage app performance, or even cause apps to be banned from Google Play or the App Store. New technologies allow app developers to control and monitor client SDKs in real time.
Providers of SDKs for specific systems or subsystems sometimes substitute a more specific term instead of software. For instance, both Microsoft and Citrix provide a driver development kit (DDK |
https://en.wikipedia.org/wiki/Microsoft%20Visual%20C%2B%2B | Microsoft Visual C++ (MSVC) is a compiler for the C, C++, C++/CLI and C++/CX programming languages by Microsoft. MSVC is proprietary software; it was originally a standalone product but later became a part of Visual Studio and made available in both trialware and freeware forms. It features tools for developing and debugging C++ code, especially code written for the Windows API, DirectX and .NET.
Many applications require redistributable Visual C++ runtime library packages to function correctly. These packages are frequently installed separately from the applications they support, enabling multiple applications to use the package with only a single installation. These Visual C++ redistributable and runtime packages are mostly installed for standard libraries that many applications use.
History
The predecessor to Visual C++ was called Microsoft C/C++. There was also a Microsoft QuickC 2.5 and a Microsoft QuickC for Windows 1.0. The Visual C++ compiler is still known as Microsoft C/C++ and as of the release of Visual C++ 2015 Update 2, is on version 14.0.23918.0.
16-bit versions
Microsoft C 1.0, based on Lattice C, was Microsoft's first C product in 1983. It was not K&R C compliant.
C 2.0 added large model support, allowing up to 1 MiB for both the code segment and the data segment.
C 3.0 was the first version developed inside Microsoft. This version aimed for compatibility with both K&R C and the later ANSI standard. It was being used inside Microsoft (for Windows and Xenix development) in early 1984. It shipped as a product in 1985.
C 4.0 added optimizations and CodeView, a source-level debugger.
C 5.0 added loop optimizations and ‘huge memory model’ (arrays bigger than 64 KB) support. Microsoft Fortran and the first 32-bit compiler for 80386 were also part of this project.
C 5.1 released in 1988 allowed compiling programs for OS/2 1.x. The fourteen 5.25" disk (two of which were 1.2 MB, the others 360k) version included QuickC. The eleven 720k 3.5" disk version included with the OS/2 Software Development Kit included MASM 5.1 (a single executable that worked under both MSDOS and OS/2 1.x).
C 6.0 released in 1989. It added the Programmer's Workbench IDE, global flow analysis, a source browser, and a new debugger, and included an optional C++ front end.
C/C++ 7.0 was released in 1992. It dropped OS/2 support, required a 386 processor, and used the provided 386-Max DOS extender (dosx32). It added built-in support for C++ and MFC (Microsoft Foundation Class Library) 1.0.
Visual C++ 1.0, which included MFC 2.0, was the first version of ‘Visual’ C++, released in February 1993. It was Cfront 2.1 compliant and available in two editions:
Standard: replaced QuickC for Windows.
Professional: replaced C/C++ 7.0. Included the ability to build both DOS and Windows applications, an optimizing compiler, a source profiler, and the Windows 3.1 SDK. The Phar Lap 286 DOS Extender Lite was also included.
Visual C++ 1.5 was released in December 1993, included MFC 2.5, an |
https://en.wikipedia.org/wiki/Indirection | In computer programming, indirection (also called dereferencing) is the ability to reference something using a name, reference, or container instead of the value itself. The most common form of indirection is the act of manipulating a value through its memory address, for example by accessing a variable through a pointer. A stored pointer that exists to provide a reference to an object by double indirection is called an indirection node. In some older computer architectures, indirect words supported a variety of more-or-less complicated addressing modes.
Another important example is the domain name system which enables names such as en.wikipedia.org to be used in place
of network addresses such as 208.80.154.224. The indirection from human-readable names to network addresses means that the references to a web page become more memorable, and links do not need to change when a web site is relocated to a different server.
Overview
A famous aphorism of Butler Lampson goes: "All problems in computer science can be solved by another level of indirection" (the "fundamental theorem of software engineering").
This is often deliberately misquoted with "abstraction layer" substituted for "level of indirection". An often-cited corollary to this is, "...except for the problem of too many layers of indirection."
A humorous Internet memorandum insists that:
Object-oriented programming makes use of indirection extensively, a simple example being dynamic dispatch. Higher-level examples of indirection are the design patterns of the proxy and the proxy server. Delegation is another classic example of an indirection pattern. In strongly typed interpreted languages with dynamic datatypes, most variable references require a level of indirection: first the type of the variable is checked for safety, and then the pointer to the actual value is dereferenced and acted on.
Recursive data types are usually implemented using indirection, because otherwise if a value of a datatype can contain the entirety of another value of the same datatype, there is no limit to the size a value of this datatype could need.
When doing symbolic programming from a formal mathematical specification, the use of indirection can be quite helpful. To start with a simple example, the variables x, y and z in an equation such as x + y = z can refer to any number. One could imagine objects for various numbers and then x, y and z could point to the specific numbers being used for a particular problem. The simple example has its limitation as there are infinitely many real numbers. In various other parts of symbolic programming there are only so many symbols. So to move on to a more significant example, in logic the formula α can refer to any formula, so it could be β, γ, δ, ... or η→π, ς ∨ σ, ... When set-builder notation is employed the statement Δ={α} means the set of all formulae — so although the reference is to α there are two levels of indirection here, the first to the set of all α
https://en.wikipedia.org/wiki/Caddie%20%28CAD%20system%29 | Caddie is a mid-range computer-assisted draughting (CAD) software package for 2D and 3D design. It is used primarily by architects, but has tools for surveyors and mechanical, civil and construction engineers. It was initially designed as an electronic drawing board, using concepts and tools clearly related to a physical board.
Caddie requires a USB dongle or software activation. Without the dongle or activation, the program can be used as a viewer and plot station for any DWG drawings, but it cannot save drawings after the 14-day evaluation has expired. Caddie works on Windows 7, Windows 8, Windows 10 and Windows 11.
Version history
Caddie was created by Anthony Spruyt, an architect from Pretoria, South Africa, in 1985 and was originally called Michael Angelo. The first release version was called Caddie and fit on a single 360 kB floppy disk, and was designed for the IBM Personal Computer XT. Caddie was one of the first CAD tools that utilised microcomputers and did not require a mainframe computer with workstation access. Version 1 of Caddie was released in 1986, for MS-DOS. The first version for Microsoft Windows (16-bit) was released in September 1993. The 32-bit Windows version was released in May 1997.
Version 6 was available by 2000. In the same year, a lite version of Caddie was available, called Caddie Budget, with the non-lite version being called Caddie Professional. Version 7 was available in 2001. Caddie Budget Architectural 7 retailed in the UK for £595 in 2003. Version 9 was released in January 2003. Caddie initially used its own proprietary file format .DRW (binary) and an ASCII .CEX (Caddie Exchange format). Caddie could also import and export native AutoCAD .DWG and .DXF file formats, but with version 10 its core kernels were rewritten and OpenDWG became the native file format.
Version 10 was released in March 2005. Version 11 was released in January 2007 and was the first version compatible with Windows Vista. Caddie 12 was released in September 2008.
The latest version of Caddie is version 29, released on 6 October 2029. There are four versions of the product, namely Caddie Professional (the full version), Caddie Budget (the lite version), Caddie Vio (for photo-realistic rendering, based on Lightworks) and Caddie Educational, which is licensed to students even though it has all the functionality of Caddie Professional.
Functionality
Caddie Professional and Caddie Educational contain the following applications:
Caddie - the main drawing application with tools for creation and editing of common entities such as lines, arcs, construction lines, splines, ellipses, images, blocks, ole objects etc.
AEC - the smart tools for creation and editing of 2D and 3D Architectural, Engineering And Construction intelligent objects such as walls, windows, doors, openings, slabs, roofs, trusses etc. These intelligent objects have different representations depending on the view.
DTM - the digital terrain modelling tools f |
https://en.wikipedia.org/wiki/Feng-hsiung%20Hsu | Feng-hsiung Hsu (born January 1, 1959; nicknamed Crazy Bird) is a Taiwanese-American computer scientist and the author of the book Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. His work led to the creation of the Deep Thought chess computer, the first chess-playing computer to defeat grandmasters in tournament play and the first to achieve a certified grandmaster-level rating.
Hsu was the architect and the principal designer of the IBM Deep Blue chess computer. He was the recipient of the 1990 Mephisto Best-Publication Award for his doctoral dissertation and also the 1991 ACM Grace Murray Hopper Award for his contributions in architecture and algorithms for chess machines.
Career
Hsu was born in Keelung, Taiwan, and came to the United States after graduating from National Taiwan University with a BS in E.E. He started his graduate work at Carnegie Mellon University in the field of computer chess in 1985. In 1988 he was part of the "Deep Thought" team that won the Fredkin Intermediate Prize for Deep Thought's grandmaster-level performance. In 1989 he joined IBM to design a chess-playing computer and received a Ph.D. in computer science with honors from Carnegie Mellon University.
In 1991, the Association for Computing Machinery awarded Hsu the Grace Murray Hopper Award. In 1996, Deep Blue lost a match to world chess champion Garry Kasparov. After the loss, Hsu's team prepared for a rematch, in which the supercomputer had double the processing power it had during the previous match. On May 11, 1997, Kasparov lost the sixth and final game, and with it the match (2½–3½).
Prior to building the supercomputer Deep Blue that defeated Kasparov, Hsu worked on many other chess computers. He started with ChipTest, a simple chess-playing chip, based on a design from Unix-inventor Ken Thompson's Belle, and very different from the other chess-playing computer being developed at Carnegie Mellon, HiTech, which was developed by Hans Berliner and included 64 different chess chips for the move generator instead of the one in Hsu's series. Hsu went on to build the successively better chess-playing computers Deep Thought, Deep Thought II, and Deep Blue Prototype.
In 2003, Hsu joined Microsoft Research Asia, in Beijing. In 2007, he stated the view that brute-force computation has eclipsed humans in chess, and it could soon do the same in the ancient Asian game of Go. This came to pass nine years later in 2016.
Bibliography
Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton University Press, 2002. (). Review by ChessBase.com
See also
Arimaa
Deep Blue versus Garry Kasparov
Deep Blue - Kasparov, 1996, Game 1
Deep Blue - Kasparov, 1997, Game 6
Game Over: Kasparov and the Machine
ChipTest, the first in the line of chess computers co-developed by Feng-hsiung Hsu
Deep Thought, the second in the line of chess |
https://en.wikipedia.org/wiki/Safari%20%28web%20browser%29 | Safari is a web browser developed by Apple. It is built into Apple's operating systems, including macOS, iOS, iPadOS and the upcoming visionOS, and uses Apple's open-source browser engine WebKit, which was derived from KHTML.
Safari was introduced in Mac OS X Panther in January 2003. It has been included with the iPhone since its first generation, which came out in 2007. At that time, Safari was the fastest browser on the Mac. Between 2007 and 2012, Apple maintained a Windows version, but abandoned it due to low market share. In 2010, Safari 5 introduced a reader mode, extensions, and developer tools. Safari 11, released in 2017, added Intelligent Tracking Prevention, which uses artificial intelligence to block web tracking. Safari 13 added support for Apple Pay, and authentication with FIDO2 security keys. Its interface was redesigned in Safari 15.
Background
After its 1994 release, Netscape Navigator rapidly became the dominant Mac browser, and eventually came bundled with Mac OS. In 1996, Microsoft released Internet Explorer for Mac, and Apple released the Cyberdog internet suite, which included a web browser. In 1997, Apple shelved Cyberdog, and reached a five-year agreement with Microsoft to make IE the default browser on the Mac, starting with Mac OS 8.1. Netscape continued to be preinstalled on all Macintoshes. Microsoft continued to update IE for Mac, which was ported to Mac OS X DP4 in May 2000.
History and development
Conception
During development, several codenames were used including "Freedom", "iBrowse" and "Alexander" (a reference to conqueror Alexander the Great, an homage to the Konqueror web browser).
Safari 1
On January 7, 2003, at Macworld San Francisco, Apple CEO Steve Jobs announced Safari, based on WebKit, the company's internal fork of the KHTML browser engine, and Apple released the first beta version exclusively on Mac OS X the same day. Several official and unofficial beta versions followed until version 1.0 was released on June 23, 2003. On Mac OS X v10.3, Safari was pre-installed as the system's default browser rather than requiring a manual download, as had been the case with previous Mac OS X versions. Safari's predecessor, Internet Explorer for Mac, was still included in 10.3 as an alternative.
Safari 2
In April 2005, engineer Dave Hyatt fixed several bugs in Safari. His experimental beta passed the Acid2 rendering test on April 27, 2005, making it the first browser to do so. Safari 2.0, released on April 29, 2005, was the sole browser Mac OS X 10.4 offered by default. Apple touted this version as delivering a 1.8x speed boost over version 1.2.4, but it did not yet include the Acid2 bug fixes. Those changes were initially unavailable to end users unless they privately installed and compiled the WebKit source code or ran one of the nightly automated builds available at OpenDarwin. Version 2.0.2, released on October 31, 2005, finally included t
https://en.wikipedia.org/wiki/Cyberdog | Cyberdog was an OpenDoc-based Internet suite of applications, developed by Apple Computer for the Mac OS line of operating systems. It was introduced as a beta in February 1996 and abandoned in March 1997. The last version, Cyberdog 2.0, was released on April 28, 1997. It worked with later versions of System 7 as well as the Mac OS 8 and Mac OS 9 operating systems.
Cyberdog derived its name from a cartoon in The New Yorker captioned "On the Internet, nobody knows you're a dog."
History
Cyberdog 1
Apple released the first beta version of Cyberdog on February 16, 1996.
Apple released Cyberdog 1.0 on May 13, 1996.
Apple released Cyberdog 1.2 on December 4, 1996.
Cyberdog 2
Apple released a first alpha version on December 21, 1996, with new features such as frames, cookies and animated GIF support.
Apple also released Cyberdog 2.0 with Mac OS 8.0, allowing Mac Runtime for Java to be utilized and fixing minor bugs with OpenDoc.
Overview
Cyberdog included email and news readers, a web browser and address book management components, as well as drag and drop FTP. OpenDoc allowed these components to be reused and embedded in other documents by the user. For instance, a "live" Cyberdog web page could be embedded in a presentation program, one of the common demonstrations of OpenDoc.
A serious problem with the OpenDoc project on which Cyberdog depended was that it was part of a very acrimonious competition between OpenDoc consortium members and Microsoft. The members of the OpenDoc alliance were all trying to obtain traction in a market rapidly being dominated by Microsoft Office and Internet Explorer. At the same time, Microsoft used the synergy between the OS and applications divisions of the company to make it effectively mandatory that developers adopt the competing Microsoft Object Linking and Embedding (OLE) technology. OpenDoc was forced to create an interoperability layer in order to allow developers to use it, and this added a great technical burden to the project.
An offspring of Cyberdog called Subwoofer had been developed in parallel and was aimed at providing software developers with a simple library for integrating web communication protocols into applications. The project was completed after the cancellation of Cyberdog and released at the MacHack 1997 conference by Sari Harrison and Frédéric Artru. Subwoofer evolved into the URL Access library shipped with Mac OS 8.6.
Cancellation
OpenDoc had several hundred developers signed up, but the timing was poor. Apple Computer was rapidly losing money at the time. Before long, OpenDoc was scrapped, with Steve Jobs noting that they "put a bullet through (CyberDog's) head", and most of the team was laid off in March 1997. Other sources noted that Microsoft hired away three ClarisWorks developers who were responsible for OpenDoc integration into ClarisWorks.
AppleShare IP Manager from versions 5.0 to 6.2 relied on OpenDoc, but AppleShare IP 6.3, the first Mac OS 9 compatible ver |
https://en.wikipedia.org/wiki/Bitstream%20format | A bitstream format is the format of the data found in a stream of bits used in a digital communication or data storage application. The term typically refers to the data format of the output of an encoder, or the data format of the input to a decoder when using data compression.
Processing
Standardized interoperability specifications such as the video coding standards produced by the MPEG and the ITU-T, and the audio coding standards produced by the MPEG, often specify only the bitstream format and the decoding process. This allows encoder implementations to use any methods whatsoever that produce bitstreams which conform to the specified bitstream format.
Normally, decoding of a bitstream can be initiated without having to start from the beginning of a file, or the beginning of the data transmission. Some bitstreams are designed for this to occur, for example by using indexes or key frames.
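The key-frame idea can be sketched in a toy example (the frame kinds and payloads below are invented for illustration and do not follow any real codec's format): a decoder seeks back to the nearest key frame and decodes forward from there instead of starting at the beginning of the stream.

```python
# Toy bitstream: "K" frames are self-contained, "D" (delta) frames
# depend on the previously decoded value.

def decode_from(frames, start):
    """Decode the value at `start`, beginning at the nearest key frame."""
    # Seek backwards to a key frame; delta frames cannot start decoding.
    k = start
    while k > 0 and frames[k][0] != "K":
        k -= 1
    # Decode forward from the key frame up to the requested position.
    value = 0
    for kind, payload in frames[k:start + 1]:
        value = payload if kind == "K" else value + payload
    return value

stream = [("K", 10), ("D", 1), ("D", 2), ("K", 50), ("D", 5)]
print(decode_from(stream, 4))  # starts at index 3 (key frame) → 55
```

Only two frames are read to decode position 4; without the key frame at index 3, the whole stream would have to be decoded from the start.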
Uses of bitstream decoders (BSD):
Graphics processing unit (GPU)
H.264/MPEG-4 AVC
Unified Video Decoder (UVD), the video decoding bit-stream technology from ATI Technologies/AMD
PureVideo, the video decoding bit-stream technology from Nvidia
Quick Sync Video, the video decoding and encoding bit-stream technology from Intel
See also
Elementary stream
Stream processing
References
Data compression |
https://en.wikipedia.org/wiki/Smartphone | A smartphone (or simply a phone) is a portable computer device that combines mobile telephone functions and personal computing functions into one unit. They are distinguished from older-design feature phones by their more advanced hardware capabilities and extensive mobile operating systems, which facilitate wider software, access to the internet (including web browsing over mobile broadband), and multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically contain a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, include various sensors that can be leveraged by pre-installed and third-party software (such as a magnetometer, a proximity sensor, a barometer, a gyroscope, an accelerometer, and more), and support wireless communication protocols (such as Bluetooth, Wi-Fi, or satellite navigation). More recently, smartphone manufacturers have begun to integrate satellite messaging connectivity and satellite emergency services into devices for use in remote regions where there is no reliable cellular network.
Following the rising popularity of the iPhone in the late 2000s, the majority of smartphones have featured thin, slate-like form factors with large, capacitive screens with support for multi-touch gestures rather than physical keyboards and have offered the ability for users to download or purchase additional applications from a centralized store and use cloud storage and synchronization, virtual assistants, as well as mobile payment services. Smartphones have largely replaced personal digital assistant (PDA) devices, handheld/palm-sized PCs, portable media players (PMP), and, to a lesser extent, handheld video game consoles and dedicated E-reader devices.
Improved hardware and faster wireless communication (due to standards such as LTE) have bolstered the growth of the smartphone industry. In 2022, 1.43 billion smartphone units were shipped worldwide. 75.05 percent of the world population were smartphone users as of 2020.
History
Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of standalone PDA devices with support for cellular telephony, but were limited by their bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub-micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law), and more mature software platforms that allowed mobile device ecosystems to develop independently of data providers.
In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input and emphasizing access to push em |
https://en.wikipedia.org/wiki/Phong%20reflection%20model | The Phong reflection model (also called Phong illumination or Phong lighting) is an empirical model of the local illumination of points on a surface designed by the computer graphics researcher Bui Tuong Phong. In 3D computer graphics, it is sometimes referred to as "Phong shading", particularly if the model is used with the interpolation method of the same name and in the context of pixel shaders or other places where a lighting calculation can be referred to as “shading”.
History
The Phong reflection model was developed by Bui Tuong Phong at the University of Utah, who published it in his 1975 Ph.D. dissertation. It was published in conjunction with a method for interpolating the calculation for each individual pixel that is rasterized from a polygonal surface model; the interpolation technique is known as Phong shading, even when it is used with a reflection model other than Phong's. Phong's methods were considered radical at the time of their introduction, but have since become the de facto baseline shading method for many rendering applications. Phong's methods have proven popular due to their generally efficient use of computation time per rendered pixel.
Description
Phong reflection is an empirical model of local illumination. It describes the way a surface reflects light as a combination of the diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on Phong's informal observation that shiny surfaces have small intense specular highlights, while dull surfaces have large highlights that fall off more gradually. The model also includes an ambient term to account for the small amount of light that is scattered about the entire scene.
For each light source in the scene, components i_s and i_d are defined as the intensities (often as RGB values) of the specular and diffuse components of the light sources, respectively. A single term i_a controls the ambient lighting; it is sometimes computed as a sum of contributions from all light sources.
For each material in the scene, the following parameters are defined:
k_s, which is a specular reflection constant, the ratio of reflection of the specular term of incoming light,
k_d, which is a diffuse reflection constant, the ratio of reflection of the diffuse term of incoming light (Lambertian reflectance),
k_a, which is an ambient reflection constant, the ratio of reflection of the ambient term present in all points in the scene rendered, and
α, which is a shininess constant for this material, which is larger for surfaces that are smoother and more mirror-like. When this constant is large the specular highlight is small.
Furthermore, we have
lights, which is the set of all light sources,
L_m, which is the direction vector from the point on the surface toward each light source (m specifies the light source),
N, which is the normal at this point on the surface,
R_m, which is the direction that a perfectly reflected ray of light would take from this point on the surface, and
V, which is t |
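These per-material constants and per-light vectors combine into the Phong illumination equation; in its standard form (k_a, k_d, k_s are the ambient, diffuse and specular reflection constants, α the shininess, and hats denote normalized direction vectors):

```latex
I_p = k_a\, i_a
    + \sum_{m \in \text{lights}}
      \left( k_d\,(\hat{L}_m \cdot \hat{N})\, i_{m,d}
           + k_s\,(\hat{R}_m \cdot \hat{V})^{\alpha}\, i_{m,s} \right)
```

The diffuse term falls off with the cosine between the light direction and the normal, while the specular term is sharpened by the exponent α.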
https://en.wikipedia.org/wiki/NBC%20News | NBC News is the news division of the American broadcast television network NBC. The division operates under NBCUniversal Television and Streaming, a division of NBCUniversal, which is, in turn, a wholly owned subsidiary of Comcast. The news division's various operations report to the president of NBC News, Noah Oppenheim. The NBCUniversal News Group also comprises MSNBC, the network's 24-hour general news channel, business and consumer news channels CNBC and CNBC World, the Spanish language and United Kingdom–based Sky News.
NBC News aired the first regularly scheduled news program in American broadcast television history on February 21, 1940. The group's broadcasts are produced and aired from 30 Rockefeller Plaza, NBCUniversal's headquarters in New York City.
The division presides over America's number-one-rated newscast, NBC Nightly News; the world's first morning television program of its genre, Today; and the longest-running television series in American history, Meet the Press, the Sunday morning program of newsmaker interviews. NBC News also offers 70 years of rare historic footage from the NBCUniversal Archives online.
History
Caravan era
The first regularly scheduled American television newscast in history was made by NBC News on February 21, 1940, anchored by Lowell Thomas (1892–1981), and airing weeknights at 6:45 p.m. It was simply Lowell Thomas in front of a television camera while doing his NBC network radio broadcast, the television simulcast seen only in New York. In June 1940, NBC, through its flagship station in New York City, W2XBS (renamed commercial WNBT in 1941, now WNBC) operating on channel one, televised 30¼ hours of coverage of the Republican National Convention live and direct from Philadelphia. The station used a series of relays from Philadelphia to New York and on to upper New York State, for rebroadcast on W2XB in Schenectady (now WRGB), making this among the first "network" programs of NBC Television. Due to wartime and technical restrictions, there were no live telecasts of the 1944 conventions, although films of the events were reportedly shown over WNBT the next day.
About this time, there were irregularly scheduled, quasi-network newscasts originating from NBC's WNBT in New York City (now WNBC), reportedly fed to WPTZ (now KYW-TV) in Philadelphia and WRGB in Schenectady, NY. These included Esso-sponsored news features as well as The War As It Happens in the final days of World War II, another irregularly scheduled NBC television newsreel program, which was also seen in New York, Philadelphia and Schenectady on the relatively few (roughly 5,000) television sets that existed at the time. After the war, NBC Television Newsreel aired filmed news highlights with narration. Later in 1948, when sponsored by Camel Cigarettes, NBC Television Newsreel was renamed Camel Newsreel Theatre and then, when John Cameron Swayze was added as an on-camera anchor in 1949, the program was renamed Camel News Caravan.
In 1948, NBC |
https://en.wikipedia.org/wiki/Tom%20Brokaw | Thomas John Brokaw (; born February 6, 1940) is an American retired network television journalist and author. He first served as the co-anchor of The Today Show from 1976 to 1981 with Jane Pauley, then as the anchor and managing editor of NBC Nightly News for 22 years (1982–2004). In the previous decade he served as a weekend anchor for the program from 1973 to 1976. He is the only person to have hosted all three major NBC News programs: The Today Show, NBC Nightly News, and, briefly, Meet the Press. He formerly held a special correspondent post for NBC News.
Along with his competitors Peter Jennings at ABC News and Dan Rather at CBS News, Brokaw was one of the "Big Three" U.S. news anchors during the 1980s, 1990s and early 2000s. All three hosted their networks' flagship nightly news programs for more than 20 years.
Brokaw has also written several books on American history and society in the 20th century including The Greatest Generation (1998). He occasionally writes and narrates documentaries for other outlets. In 2021, NBC announced that Brokaw would retire after 55 years at the network, one of the longest standing anchors in the world at the same news network.
Brokaw is the recipient of numerous awards and honors, including two Peabody Awards and two Emmy Awards, as well as the Presidential Medal of Freedom, awarded to him by President Barack Obama in 2014, and the French Legion of Honor in 2016.
Early life
Brokaw was born in Webster, South Dakota, the son of Eugenia "Jean" (née Conley; 1917–2011), who worked in sales and as a post-office clerk, and Anthony Orville "Red" Brokaw (1912–1982). He was the eldest of their three sons (brothers named William and Michael) and named for his maternal great-grandfather, Thomas Conley.
His father was a descendant of Huguenot immigrants Bourgon and Catherine (née Le Fèvre) Broucard, and his mother was Irish-American. His paternal great-grandfather, Richard P. Brokaw, founded the town of Bristol, South Dakota, and the Brokaw House, a small hotel and the first structure in Bristol.
Brokaw's father was a construction foreman for the Army Corps of Engineers. He worked at the Black Hills Ordnance Depot (BHOD) and helped construct Fort Randall Dam; his job often required the family to resettle throughout South Dakota during Brokaw's early childhood. The Brokaws lived for short periods in Bristol, Igloo (the small residential community of the BHOD), and Pickstown, before settling in Yankton, where Brokaw attended high school.
As a high school student attending Yankton Senior High School, Brokaw was governor of South Dakota American Legion Boys State, and in that capacity he accompanied then-South Dakota Governor Joe Foss to New York City for a joint appearance on a TV game show. It was to be the beginning of a long relationship with Foss, whom Brokaw would later feature in his book about World War II veterans, The Greatest Generation. Brokaw also became an Advisory Board member of the Joe Foss |
https://en.wikipedia.org/wiki/Am5x86 | The Am5x86 processor is an x86-compatible CPU announced in November 1995 by AMD for use in 486-class computer systems. It began shipping in December 1995, with a base price of $93 per unit in bulk quantities. Before being released, it was in development under the codename "X5".
Specifications
The Am5x86 (also known as the 5x86-133, Am5x86, X5-133, and sold under various 3rd-party labels such as the Kingston Technology "Turbochip") is an Enhanced Am486 processor with an internally set multiplier of 4, allowing it to run at 133 MHz on systems without official support for clock-multiplied DX2 or DX4 486 processors. Like all Enhanced Am486, the Am5x86 featured write-back L1 cache, and unlike all but a few, a generous 16 kilobytes rather than the more common 8 KB. A rare 150 MHz-rated OEM part was also released by AMD.
Since a clock multiplier of four is not part of the original Socket 3 design (and the 486 has only a single CLKMUL pin anyway), AMD made the 5x86 accept a 2x setting from the motherboard and instead operate at a rate of 4x. When using an Am5x86, the motherboard must therefore be set to the 2x setting. The chip will physically fit into an older 486 socket such as a Socket 1 or Socket 2 or the original 168-pin 80486 socket, but doing this requires a replacement voltage regulator, since the AMD chip runs at 3.45 volts.
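The clock arithmetic behind the 133 MHz figure can be sketched as follows (33.3 MHz is the typical Socket 3 bus clock, assumed here for illustration):

```python
# The motherboard is jumpered for 2x, but the Am5x86 ignores that and
# multiplies the bus clock by a fixed internal factor of 4.
BUS_MHZ = 33.3           # typical Socket 3 bus clock (assumption)
INTERNAL_MULTIPLIER = 4  # hard-wired inside the Am5x86
core_mhz = BUS_MHZ * INTERNAL_MULTIPLIER
print(round(core_mhz, 1))  # → 133.2, marketed as 133 MHz
```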
The combination of clock speed and the relatively large 16 KB write-back L1 cache allows the 5x86 to equal or slightly exceed an Intel Pentium 75 MHz processor in integer arithmetic in benchmarks. Real world performance varies, however, with later Windows operating systems and many FPU-sensitive games favoring the Pentium 75 MHz. Because it is based on a pure 486 design, it is compatible with older systems, something its slightly faster rival, the Cyrix Cx5x86, has trouble with. The CPU is commonly overclocked to 160 MHz, thereby giving performance similar to that of a Pentium 90 MHz system. There are four main versions of the socketed version of this CPU, manufactured in different locations. There is the common ADW variety, as well as the later ADY, ADZ and BGC. The later models were the preferred versions of the chip, because they were rated for higher temperatures and thus more forgiving of overclocking.
The Am5x86 made the first-ever use of the controversial PR rating. Because the 5x86 is the equal of a Pentium 75 MHz processor in benchmarks, AMD later marketed the chip as "Am5x86-P75".
Sales of the Am5x86 were an important source of revenue for AMD at a time when lengthy delays in bringing the AMD K5 to production were threatening the company's profitability.
AMD manufactured the Am5x86 processor for ordinary PC systems until 1999. It was popular for entry-level desktop systems, appeared in many different notebook models, and also sold separately as an upgrade processor for older 486 systems. Several companies made upgrade kits with an AMD 5x86 with a voltage regulator and socket converter, which allow |
https://en.wikipedia.org/wiki/SimCity%204 | SimCity 4 is a city-building simulation computer game developed by Maxis, a subsidiary of Electronic Arts. The game was released in January 2003 for Microsoft Windows and in June 2003 for Mac OS X. It is the fourth major installment in the SimCity series. SimCity 4 has a single expansion pack called Rush Hour which adds features to the game. SimCity 4: Deluxe Edition contained the original game and Rush Hour combined as a single product.
The game allows players to create a region of land by terraforming, and then to design and build a settlement which can grow into a city. Players can zone different areas of land as commercial, industrial, or residential development, as well as build and maintain public services, transport and utilities. For the success of a city, players must manage its finances, environment, and quality of life for its residents. SimCity 4 introduces night and day cycles and other special effects for the first time in the SimCity series. External tools such as the Building Architect Tool (BAT) allow custom third-party buildings and content to be added to the gameplay.
SimCity 4 was praised for being the first game in the main SimCity series to primarily use a 3D engine to render its graphics, following the implementation of 3D graphics in SimCity 64 for the Nintendo 64DD. It received widespread acclaim, won several awards, and was one of the top ten selling PC games of 2003. However, it was criticized for its difficulty and its demands on computer performance.
Gameplay
Regional gameplay
As with previous SimCity titles, SimCity 4 places players in the role of a mayor, tasked with populating and developing tracts of lands into cities, while fulfilling the needs of fellow Sims that live in the cities. Cities are now located in regions that are divided into segments, each of which can be developed. The player has the option of starting the city in a segment of any of three area sizes. In real measurements, the smallest has a length of one kilometer on a side, and the largest has a length of four kilometers on a side. The size of a region and its layout of segments can be changed in a bitmap file provided for each region.
Neighbor cities play a larger role than in the previous versions of the game. For example, neighbor deals can be established, where a city can exchange resources such as water, electricity and garbage disposal with other cities for money. Players may develop several inter-dependent cities at the same time, eventually populating the entire region.
Game modes
Upon selecting a specific segment in a region, the gameplay is divided into three "modes": god mode, mayor mode, and MySim mode. Mayor and MySim modes become available after establishing a city. God mode is available before establishing a city and afterwards, albeit with fewer functions. By obliterating the city, which resets the map, all functions in God mode are reactivated.
God mode
God mode allows players to design or terraform a selected tract of la |
https://en.wikipedia.org/wiki/Protected%20mode | In computing, protected mode, also called protected virtual address mode, is an operational mode of x86-compatible central processing units (CPUs). It allows system software to use features such as segmentation, virtual memory, paging and safe multi-tasking designed to increase an operating system's control over application software.
When a processor that supports x86 protected mode is powered on, it begins executing instructions in real mode, in order to maintain backward compatibility with earlier x86 processors. Protected mode may only be entered after the system software sets up one descriptor table and enables the Protection Enable (PE) bit in the control register 0 (CR0).
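The PE bit mentioned above is bit 0 of CR0. As plain bit arithmetic (a sketch, not executable CPU code; the power-on value used here is an assumption for illustration), enabling it looks like:

```python
# CR0.PE (Protection Enable) is bit 0; setting it switches the CPU from
# real mode into protected mode once a descriptor table is in place.
CR0_PE = 1 << 0
cr0 = 0x60000010       # illustrative power-on value of CR0 (assumption)
cr0 |= CR0_PE          # enable protected mode
print(hex(cr0))        # → 0x60000011
```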
Protected mode was first added to the x86 architecture in 1982, with the release of Intel's 80286 (286) processor, and later extended with the release of the 80386 (386) in 1985. Due to the enhancements added by protected mode, it has become widely adopted and has become the foundation for all subsequent enhancements to the x86 architecture, although many of those enhancements, such as added instructions and new registers, also brought benefits to the real mode.
History
The Intel 8086, the predecessor to the 286, was originally designed with a 20-bit address bus for its memory. This allowed the processor to access 2^20 bytes of memory, equivalent to 1 megabyte. At the time, 1 megabyte was considered a relatively large amount of memory, so the designers of the IBM Personal Computer reserved the first 640 kilobytes for use by applications and the operating system and the remaining 384 kilobytes for the BIOS (Basic Input/Output System) and memory for add-on devices.
As the cost of memory decreased and memory use increased, the 1 MB limitation became a significant problem. Intel intended to solve this limitation along with others with the release of the 286.
The 286
The initial protected mode, released with the 286, was not widely used; for example, it was used by Coherent (from 1982), Microsoft Xenix (around 1984) and Minix. Several shortcomings, such as the inability to access the BIOS or DOS calls due to the inability to switch back to real mode without resetting the processor, prevented widespread usage. Acceptance was additionally hampered by the fact that the 286 only allowed memory access in 16-bit segments via each of four segment registers, meaning only 4 × 2^16 bytes, equivalent to 256 kilobytes, could be accessed at a time. Because changing a segment register in protected mode caused a 6-byte segment descriptor to be loaded into the CPU from memory, the segment register load instruction took many tens of processor cycles, making it much slower than on the 8086; therefore, the strategy of computing segment addresses on-the-fly in order to access data structures larger than 128 kilobytes (the combined size of the two data segments) became impractical, even for those few programmers who had mastered it on the 8086/8088.
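The address arithmetic in the paragraphs above is simple to verify:

```python
# 8086: a 20-bit address bus reaches 2**20 bytes (1 MB) in total.
address_space_kb = (2 ** 20) // 1024
# 286 protected mode: four segment registers, each mapping a 64 KB
# (2**16-byte) segment, give at most 256 KB visible at once.
window_kb = (4 * 2 ** 16) // 1024
print(address_space_kb, window_kb)  # → 1024 256
```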
The 286 maintained backwards compatibility with |
https://en.wikipedia.org/wiki/MARC%20%28archive%29 | MARC (Mailing list ARChive) is a computer-related mailing list archive. It archives over 31 million e-mails from over 2400 mailing lists, with approximately 320,000 new mails added per month. The archive is hosted by KoreLogic, and is maintained by a group of volunteers led by Hank Leininger.
The mailing list indexes include popular lists such as the Linux kernel mailing list.
MARC was founded in 1996 to serve as a unified archive of electronic mailing lists, similar to what DejaNews (now Google Groups) did for Usenet.
MARC uses a MySQL relational database to store its messages and Perl to access the data. The archive can be searched for mailing list names, authors, subject lines and full-text of the e-mail messages.
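As an illustration only (MARC's actual MySQL schema and Perl code are not described here; sqlite3 stands in, and every table and column name below is hypothetical), a subject-line search over such an archive might look like:

```python
import sqlite3

# In-memory stand-in for a mailing-list archive table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (list TEXT, author TEXT, subject TEXT, body TEXT)")
db.execute(
    "INSERT INTO messages VALUES ('linux-kernel', 'A. Hacker', '[PATCH] fix oops', '...')"
)

# Search one of the indexed fields: the subject line.
rows = db.execute(
    "SELECT list, subject FROM messages WHERE subject LIKE ?", ("%PATCH%",)
).fetchall()
print(rows)  # → [('linux-kernel', '[PATCH] fix oops')]
```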
Privacy criticism
In tension with the right to be forgotten, MARC is very reluctant to delete posts upon user request.
The strict requirements are:
On request from an original poster ... that is agreed-to by the list admin/owner.
Court orders to remove messages
Mails not conforming to these rules are ignored and not responded to.
External links
MARC
About MARC
References
Webmail
Electronic mailing lists
Perl software |
https://en.wikipedia.org/wiki/RISC%20OS | RISC OS is a computer operating system originally designed by Acorn Computers Ltd in Cambridge, England. First released in 1987, it was designed to run on the ARM chipset, which Acorn had designed concurrently for use in its new line of Archimedes personal computers. RISC OS takes its name from the reduced instruction set computer (RISC) architecture it supports.
Between 1987 and 1998, RISC OS was included in every ARM-based Acorn computer model, including the Acorn Archimedes line, Acorn's R line (with RISC iX as a dual-boot option), RiscPC, A7000, and prototype models such as the Acorn NewsPad and Phoebe computer. A version of the OS, named NCOS, was used in Oracle Corporation's Network Computer and compatible systems.
After the break-up of Acorn in 1998, development of the OS was forked and continued separately by several companies, including RISCOS Ltd, Pace Micro Technology, and Castle Technology. Since then, it has been bundled with several ARM-based desktop computers such as the Iyonix PC and A9home. The OS remains forked and is developed independently, in part by the community.
Most recent stable versions run on the ARMv3/ARMv4 RiscPC, the ARMv5 Iyonix, ARMv7 Cortex-A8 processors (such as that used in the BeagleBoard and Touch Book) and Cortex-A9 processors (such as that used in the PandaBoard) and the low-cost educational Raspberry Pi computer. SD card images have been released for downloading free of charge to Raspberry Pi 1, 2, 3, & 4 users with a full graphical user interface (GUI) version and a command-line interface only version (RISC OS Pico, at 3.8 MB).
History
The first version of RISC OS was originally released in 1987 as Arthur 1.20. The next version, Arthur 2, became RISC OS 2 and was released in April 1989. RISC OS 3.00 was released with the A5000 in 1991, and contained many new features. By 1996, RISC OS had been shipped on over 500,000 systems.
Acorn officially halted work on the OS in January 1999, renaming themselves Element 14. In March 1999 a new company, RISCOS Ltd, licensed the rights to develop a desktop version of RISC OS from Element 14, and continued the development of RISC OS 3.8, releasing it as RISC OS 4 in July 1999. Meanwhile, Element 14 had also kept a copy of RISC OS 3.8 in house, which they developed into NCOS for use in set-top boxes. In 2000, as part of the acquisition of Acorn Group plc by MSDW Investment, RISC OS was sold to Pace Micro Technology, who later sold it to Castle Technology Ltd.
In May 2001, RISCOS Ltd launched RISC OS Select, a subscription scheme allowing users access to the latest RISC OS 4 updates. These upgrades are released as soft-loadable ROM images, separate to the ROM where the boot OS is stored, and are loaded at boot time. Select 1 was shipped in May 2002, with Select 2 following in November 2002 and the final release of Select 3 in June 2004. In the same month, RISC OS 4.39, dubbed RISC OS Adjust, was released. RISC OS Adjust was a culmination of all the Select Scheme updates to date, released as |
https://en.wikipedia.org/wiki/Bart%20the%20Genius | "Bart the Genius" is the second episode of the American animated television series The Simpsons. It originally aired on the Fox network in the United States on January 14, 1990. It was the first episode written by Jon Vitti. It is the show's first normal episode as well as the first to use the signature title sequence, though this version is much different from the one used from the second season to the twentieth season. In the episode, Bart cheats on an intelligence test and is declared a genius, so he is sent to a school for gifted children. Though he initially enjoys being treated as a genius, he begins to see the downside of his new life.
It marks the first use of Bart's catchphrase "Eat my shorts". As this was the second episode produced, coming directly after James L. Brooks' personal displeasure at the animation of "Some Enchanted Evening", the future of the series depended on how the animation turned out in this episode. The animation proved to be more to his liking and production continued.
Plot
The Simpsons spend a night playing Scrabble and remind Bart that he should stimulate his brain by improving his vocabulary if he hopes to pass his intelligence test at school. After Bart cheats by inventing a nonsense word, kwyjibo – basing its definition on an insulting description of his father – Homer angrily chases after him.
At Springfield Elementary School, Bart is busted for vandalism by Principal Skinner after the class genius, Martin Prince, snitches on him. To get revenge, Bart surreptitiously switches exams with Martin. When the school psychologist, Dr. Pryor, studies the IQ test results, he labels Bart a genius. Homer and Marge enroll him in a school for academically gifted students. Neither Lisa nor Skinner are fooled by Bart's supposed genius, but Skinner is pleased that Bart no longer attends Springfield Elementary.
At the Enriched Learning Center for Gifted Children, Bart feels out of place among the other students with advanced academic skills. Ostracized by his brilliant classmates, Bart visits his former school, where his old friends reject him because of his perceived intelligence. After Bart's chemistry experiment explodes, filling the school lab with green goo, he confesses to Pryor that he switched tests with Martin. Pryor realizes that Bart was never a genius and has him readmitted to Springfield Elementary.
Bart returns home and admits to Homer that he cheated on the intelligence test, but he is glad they are closer than before. Though Homer is touched by this sentiment, he is ultimately upset and angry at Bart for lying to him about the test and chases him through the house as Lisa declares that Bart is "stupid again".
Cast
Dan Castellaneta as Homer Simpson and Conductor
Julie Kavner as Marge Simpson
Nancy Cartwright as Bart Simpson and Skinner's secretary
Yeardley Smith as Lisa Simpson and Cecile Shapiro
Harry Shearer as Principal Seymour Skinner, Dr. J. Loren Pryor and Mr. Prince
Marcia Wallace as Edna Krabappel and Ms. Mell |
https://en.wikipedia.org/wiki/Homer%27s%20Odyssey%20%28The%20Simpsons%29 | "Homer's Odyssey" is the third episode of the American animated television series The Simpsons. It originally aired on the Fox Network in the United States on January 21, 1990. In this episode, Homer becomes a crusader for safety in Springfield and is promoted to safety inspector at Springfield Nuclear Power Plant. The episode was written by Jay Kogen and Wallace Wolodarsky and was the first Simpsons script to be completed, although it was the third episode produced.
Plot
Mrs. Krabappel takes Bart's class on a field trip to the Springfield Nuclear Power Plant. Distracted when Bart waves at him, Homer crashes an electric cart into a cooling vent and is fired. Homer searches for a new job without success. Feeling like a failure, he writes a note to his family and decides to commit suicide by tying a boulder to himself and jumping off a bridge.
His family hurries to the bridge to save him, but they are almost run over by a speeding truck. Homer pulls them to safety just in time, and he is suddenly filled with a new reason to live: to place a stop sign at the dangerous intersection. After successfully petitioning the city council, Homer embarks on a public safety crusade that involves placing speed bumps and warning signs throughout the town.
Inspired to use his new safety efforts in order to not give up on finding a new job, Homer takes on the biggest danger in Springfield, the nuclear power plant. After Homer rallies people to his cause, Mr. Burns decides to end the furor he is creating by offering him a new position as the plant safety inspector, along with a higher salary. Homer, torn between his principles and his livelihood, tearfully tells his followers that they must fight their battles alone from this point on, and takes the job.
Cast
Dan Castellaneta as Homer Simpson, Barney Gumble, Mr. Winfield, City Council #1 and City Council #3
Julie Kavner as Marge Simpson
Nancy Cartwright as Bart Simpson, Lewis and Actor
Yeardley Smith as Inanimate Carbon Rod #2 and Lisa Simpson
Harry Shearer as Otto, Waylon Smithers, Smilin' Joe Fission, SNPP Supervisor, Loaftime Announcer, Jasper, City Council #2, City Council #1 (take 2) and Demonstrator
Marcia Wallace as Edna Krabappel
Hank Azaria as Moe Szyslak and Chief Wiggum
Christopher Collins as Mr. Burns
Pamela Hayden as Wendell
Sam McMurray as SNPP Employee and Duff Commercial VO
Russi Taylor as Sherri, Terri, Inanimate Carbon Rod #1 and Mrs. Winfield
Production
Waylon Smithers makes his first appearance in this episode, although he can be heard over a speaker in the series premiere. In his first visual appearance, he was mistakenly animated with the wrong color and was made an African American by Györgyi Peluce, the color stylist. David Silverman has claimed that Smithers was always intended to be Mr. Burns's "white sycophant", and the staff thought it "would be a bad idea to have a black subservient character" and so switched him to his intended color for his next episode. Smithers's sk |
https://en.wikipedia.org/wiki/Pictogram | A pictogram, also called a pictogramme, pictograph, or simply picto, and in computer usage an icon, is a graphic symbol that conveys its meaning through its pictorial resemblance to a physical object. Pictographs are often used in writing and graphic systems in which the characters are to a considerable extent pictorial in appearance. A pictogram may also be used in subjects such as leisure, tourism, and geography.
Pictography is a form of writing which uses representational, pictorial drawings, similarly to cuneiform and, to some extent, hieroglyphic writing, which also uses drawings as phonetic letters or determinative rhymes. Some pictograms, such as hazard pictograms, are elements of formal languages.
"Pictograph" has a different definition in the field of prehistoric art (which includes recent art by traditional societies), where it means art painted on rock surfaces. This is in comparison to petroglyphs, where the images are carved or incised. Such images may or may not be considered pictograms in the general sense.
Historical
Early written symbols were based on pictographs (pictures which resemble what they signify) and ideograms (symbols which represent ideas). Ancient Sumerian, Egyptian, and Chinese civilizations began to adapt such symbols to represent concepts, developing them into logographic writing systems. Pictographs are still in use as the main medium of written communication in some non-literate cultures in Africa, the Americas, and Oceania. Pictographs are often used as simple, pictorial, representational symbols by most contemporary cultures.
Pictographs can be considered an art form, or can be considered a written language and are designated as such in Pre-Columbian art, Native American art, Ancient Mesopotamia and Painting in the Americas before Colonization. One example of many is the Rock art of the Chumash people, part of the Native American history of California.
In 2011, UNESCO's World Heritage List added "Petroglyph Complexes of the Mongolian Altai, Mongolia" to celebrate the importance of the pictograms engraved in rocks.
Some scientists in the field of neuropsychiatry and neuropsychology, such as Mario Christian Meyer, are studying the symbolic meaning of indigenous pictograms and petroglyphs, aiming to create new ways of communication between native people and modern scientists to safeguard and valorize their cultural diversity.
Modern uses
An early modern example of the extensive use of pictographs may be seen in the map in the London suburban timetables of the London and North Eastern Railway, 1936–1947, designed by George Dow, in which a variety of pictographs was used to indicate facilities available at or near each station. Pictographs remain in common use today, serving as pictorial, representational signs, instructions, or statistical diagrams. Because of their graphical nature and fairly realistic style, they are widely used to indicate public toilets, or places such as airports and train stations |
https://en.wikipedia.org/wiki/Business%20intelligence | Business intelligence (BI) comprises the strategies and technologies used by enterprises for the data analysis and management of business information. Common functions of business intelligence technologies include reporting, online analytical processing, analytics, dashboard development, data mining, process mining, complex event processing, business performance management, benchmarking, text mining, predictive analytics, and prescriptive analytics.
BI tools can handle large amounts of structured and sometimes unstructured data to help identify, develop, and otherwise create new strategic business opportunities. They aim to make these large volumes of data easy to interpret. Identifying new opportunities and implementing an effective strategy based on insights can provide businesses with a competitive market advantage and long-term stability, and help them make strategic decisions.
Business intelligence can be used by enterprises to support a wide range of business decisions ranging from operational to strategic. Basic operating decisions include product positioning or pricing. Strategic business decisions involve priorities, goals, and directions at the broadest level. In all cases, BI is most effective when it combines data derived from the market in which a company operates (external data) with data from company sources internal to the business such as financial and operations data (internal data). When combined, external and internal data can provide a complete picture which, in effect, creates an "intelligence" that cannot be derived from any singular set of data.
Among myriad uses, business intelligence tools empower organizations to gain insight into new markets, to assess demand and suitability of products and services for different market segments, and to gauge the impact of marketing efforts.<ref name=":0">
Chugh, R. & Grandhi, S. (2013). [https://www.researchgate.net/publication/273861123_Why_Business_Intelligence_Significance_of_Business_Intelligence_Tools_and_Integrating_BI_Governance_with_Corporate_Governance "Why Business Intelligence? Significance of Business Intelligence Tools and Integrating BI Governance with Corporate Governance"]. International Journal of E-Entrepreneurship and Innovation, vol. 4, no. 2, pp. 1–14.</ref>
BI applications use data gathered from a data warehouse (DW) or from a data mart, and the concepts of BI and DW combine as "BI/DW"
or as "BIDW". A data warehouse contains a copy of analytical data that facilitates decision support.
History
The earliest known use of the term business intelligence is in Richard Millar Devens' Cyclopædia of Commercial and Business Anecdotes (1865). Devens used the term to describe how the banker Sir Henry Furnese gained profit by receiving and acting upon information about his environment, prior to his competitors:
The ability to collect and react accordingly based on the information retrieved, Devens says, is central to business intelligence.
When Hans Peter Luhn, a re |
https://en.wikipedia.org/wiki/HomePNA | The HomePNA Alliance (formerly the Home Phoneline Networking Alliance, also known as HPNA) is an incorporated non-profit industry association of companies that develops and standardizes technology for home networking over the existing coaxial cables and telephone wiring within homes, so new wires do not need to be installed.
HomePNA was developed for entertainment applications such as IPTV which require good quality of service (QoS).
History
HomePNA 1.0 technology was developed by Tut Systems in the 1990s. The original protocols used balanced pair telephone wire.
HomePNA 2.0 was developed by Epigram and was approved by the ITU as Recommendations G.9951, G.9952 and G.9953.
HomePNA 3.0 was developed by Broadcom (which had purchased Epigram) and Coppergate Communications and was approved by the ITU as Recommendation G.9954 in February 2005.
HomePNA 3.1 was developed by Coppergate Communications and was approved by the ITU as Recommendation G.9954 in January 2007. HomePNA 3.1 added Ethernet over coax. HomePNA 3.1 uses frequencies above those used for digital subscriber line and analog voice calls over phone wires and below those used for broadcast and direct-broadcast satellite TV over coax, so it can coexist with those services on the same wires.
In March 2009, HomePNA announced a liaison agreement with the HomeGrid Forum to promote the ITU-T G.hn wired home networking standard. In May 2013 the HomePNA alliance merged with the HomeGrid Forum.
Technical characteristics
HomePNA uses frequency-division multiplexing (FDM), which assigns voice and data different frequencies on the same wires so that they do not interfere with each other. A standard phone line has enough bandwidth to carry voice calls, high-speed DSL, and HomePNA data simultaneously.
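The coexistence property described above can be sketched as a simple non-overlap check. The band edges below are purely illustrative placeholders, not values from the HomePNA/G.9954 specifications:

```python
# Illustrative sketch of frequency-division multiplexing (FDM) on shared
# wiring: each service occupies its own frequency band, so no two services
# interfere. Band edges are hypothetical, for illustration only.

def bands_overlap(a, b):
    """Return True if two (low_hz, high_hz) bands overlap."""
    return a[0] < b[1] and b[0] < a[1]

services = {
    "analog voice": (0, 4_000),               # baseband telephony
    "DSL": (25_000, 1_100_000),               # illustrative DSL band
    "HomePNA data": (4_000_000, 10_000_000),  # illustrative data band
}

# Verify every pair of services uses disjoint spectrum.
names = list(services)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        assert not bands_overlap(services[x], services[y]), (x, y)

print("all bands disjoint")
```

Because the bands are disjoint, each device can filter out everything outside its own band, which is why a low-pass filter suffices to keep data noise away from voice handsets.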
Two custom chips designed using the HPNA specifications were developed by Broadcom: the 4100 chip can send and receive signals over 1,000 ft (305 m) on a typical phone line. The larger 4210 controller chip strips away noise and passes data on.
A HomePNA setup would include a HomePNA card or external adapter for each computer, cables, and software. A low-pass filter may be needed between any phones and their respective jacks to block noise. HomePNA adapters come in PCI, USB, and PC Card formats.
Process
HomePNA does not manufacture products, although its members do. HomePNA creates industry specifications which it then standardizes under the International Telecommunication Union (ITU) standards body. The HomePNA Alliance tests implementations and certifies products that pass.
Members
HomePNA promoter companies are AT&T Inc., Technicolor SA, Pace plc, Sigma Designs, Motorola, Cisco Systems, Sunrise Telecom and K-Micro.
Applications
Devices that use HPNA technology as part of whole-home multi-media content products include Advanced Digital Broadcast, Inneoquest and NetSys.
Alternatives
Alternatives to HomePNA include power line communication, Wi-Fi, data over cable, and multimedia over |
https://en.wikipedia.org/wiki/Home%20and%20Away | Home and Away (often abbreviated as H&A) is an Australian television soap opera. It was created by Alan Bateman and commenced broadcast on the Seven Network on 17 January 1988. Bateman came up with the concept of the show during a trip to Kangaroo Point, New South Wales, where he noticed locals were complaining about the construction of a foster home and against the idea of foster children from the city living in the area. The soap opera was initially going to be called Refuge, but the name was changed to the "friendlier" title of Home and Away once production began.
The show premiered in what Bateman classified as a ninety-minute telefeature (subsequently in re-runs and on VHS titled as Home and Away: The Movie), as opposed to a pilot. Since then, each subsequent episode has aired for a duration of twenty-two minutes. Home and Away has become the second longest-running drama series in Australian television, after Neighbours. In Australia, it is currently broadcast from Mondays to Thursdays at 7:00 pm.
Home and Away follows the lives and loves of the residents in Summer Bay, a fictional seaside town in New South Wales. The series initially focused on the Fletcher family – Tom (Roger Oakley) and Pippa (Vanessa Downing), and their five foster children, Frank Morgan (Alex Papps), Carly Morris (Sharyn Hodgson), Lynn Davenport (Helena Bozich), Steven Matheson (Adam Willits) and Sally Fletcher (Kate Ritchie) – who moved from the city into the Summer Bay House, where they assumed the new job of running the caravan park, and eventually took in a sixth foster child, Bobby Simpson (Nicolle Dickson). Home and Away was not without controversy. During the first season alone, it featured several adult-themed storylines such as teen pregnancy, rape, drug and alcohol addiction, drug overdose and attempted suicide. The series has dealt with similar storylines over the years which have often exceeded its restricted time slot. Palm Beach in Sydney's Northern Beaches district has been used as the location for Summer Bay since 1988. The exterior scenes are filmed mainly at Palm Beach, while the interior scenes are filmed at the Australian Technology Park in Redfern.
Home and Away has been sold to over eighty countries around the world, making it one of Australia's most successful media exports. In the UK, it and Neighbours, another Australian soap opera, are the most popular of the genre that are filmed internationally, with them both airing on Channel 5. It is one of the highest-rating shows on RTÉ Television in Ireland and TVNZ 2 in New Zealand. In Australia, Home and Away is the most awarded program at the Logie Awards, with a total of forty-nine wins, including Most Popular Drama Program. Some cast members have won several other awards such as the Gold Logie for Most Popular Personality on Australian Television, Silver Logie for Most Popular Actor, and Most Popular Actress. In 2015, Home and Away was inducted into the Logie Hall of Fame.
Production
Conceptio |
https://en.wikipedia.org/wiki/Enhanced%20CD | Enhanced CD is a certification mark of the Recording Industry Association of America for various technologies that combine audio and computer data for use in both compact disc and CD-ROM players.
Formats that fall under the "enhanced CD" category include mixed mode CD (Yellow Book CD-ROM/Red Book CD-DA), CD-i, CD-i Ready, and CD-Extra/CD-Plus (Blue Book, also called simply Enhanced Music CD or E-CD).
See also
DualDisc
CDVU+
Super Audio CD
Mixed Mode CD
References
120 mm discs
Audio storage
Certification marks
Video storage |
https://en.wikipedia.org/wiki/Teradata | Teradata Corporation is an American software company that provides cloud database and analytics-related software, products, and services. The company was formed in 1979 in Brentwood, California, as a collaboration between researchers at Caltech and Citibank's advanced technology group.
Overview
Teradata is an enterprise software company that develops and sells database analytics software. The company provides three main services: business analytics, cloud products, and consulting. It operates in North and Latin America, Europe, the Middle East, Africa, and Asia.
Teradata is headquartered in San Diego, California and has additional major U.S. locations in Atlanta and San Francisco, where its data center research and development is housed. It is publicly traded on the New York Stock Exchange (NYSE) under the stock symbol TDC. Steve McMillan has served as the company's president and chief executive officer since 2020. The company reported $1.836 billion in revenue, a net income of $129 million, and 8,535 employees globally, as of February 9, 2020.
History
The concept of Teradata grew from research at the California Institute of Technology and from the discussions of Citibank's advanced technology group in the 1970s. In 1979, the company was incorporated in Brentwood, California by Jack E. Shemer, Philip M. Neches, Walter E. Muir, Jerold R. Modes, William P. Worth, Carroll Reed and David Hartke. Teradata released its DBC/1012 database machine in 1984. In 1990, the company acquired Sharebase, originally named Britton Lee. In September 1991, AT&T Corporation acquired NCR Corporation, which announced the acquisition of Teradata for about $250 million in December. Teradata built the first system over 1 terabyte for Wal-Mart in 1992.
NCR acquired Strategic Technologies & Systems in 1999 and appointed Stephen Brobst as chief technology officer of Teradata Solutions Group. In 2000, NCR acquired Ceres Integrated Solutions and its customer relationship management software for $90 million, as well as Stirling Douglas Group and its demand chain management software. Teradata acquired financial management software from DecisionPoint in 2005. In January 2007, NCR announced Teradata would become an independent public company, led by Michael F. Koehler. The new company's shares started trading in October.
In April 2016, a hardware product line called IntelliFlex was announced. Victor L. Lund became the chief executive on May 5, 2016.
In October 2018, Teradata started promoting its cloud analytics software called Vantage (which evolved from the Teradata Database).
On May 7, 2020, Teradata announced the appointment of Steve McMillan as president and chief executive officer, effective June 8, 2020.
Acquisitions and divestitures
Teradata has acquired several companies since becoming an independent public company in 2008. In March 2008, Teradata acquired professional services company Claraview, which previously had spun out software provider Clarabridge. Teradata |
https://en.wikipedia.org/wiki/Open%20Database%20Connectivity | In computing, Open Database Connectivity (ODBC) is a standard application programming interface (API) for accessing database management systems (DBMS). The designers of ODBC aimed to make it independent of database systems and operating systems. An application written using ODBC can be ported to other platforms, both on the client and server side, with few changes to the data access code.
ODBC accomplishes DBMS independence by using an ODBC driver as a translation layer between the application and the DBMS. The application uses ODBC functions through an ODBC driver manager with which it is linked, and the driver passes the query to the DBMS. An ODBC driver can be thought of as analogous to a printer driver or other driver, providing a standard set of functions for the application to use, and implementing DBMS-specific functionality. An application that can use ODBC is referred to as "ODBC-compliant". Any ODBC-compliant application can access any DBMS for which a driver is installed. Drivers exist for all major DBMSs, many other data sources like address book systems and Microsoft Excel, and even for text or comma-separated values (CSV) files.
ODBC was originally developed by Microsoft and Simba Technologies during the early 1990s, and became the basis for the Call Level Interface (CLI) standardized by SQL Access Group in the Unix and mainframe field. ODBC retained several features that were removed as part of the CLI effort. Full ODBC was later ported back to those platforms, and became a de facto standard considerably better known than CLI. The CLI remains similar to ODBC, and applications can be ported from one platform to the other with few changes.
History
Before ODBC
The introduction of the mainframe-based relational database during the 1970s led to a proliferation of data access methods. Generally these systems operated together with a simple command processor that allowed users to type in English-like commands, and receive output. The best-known examples are SQL from IBM and QUEL from the Ingres project. These systems may or may not allow other applications to access the data directly, and those that did use a wide variety of methodologies. The introduction of SQL aimed to solve the problem of language standardization, although substantial differences in implementation remained.
Since the SQL language had only rudimentary programming features, users often wanted to use SQL within a program written in another language, say Fortran or C. This led to the concept of Embedded SQL, which allowed SQL code to be embedded within another language. For instance, a SQL statement like SELECT * FROM city could be inserted as text within C source code, and during compiling it would be converted into a custom format that directly called a function within a library that would pass the statement into the SQL system. Results returned from the statements would be interpreted back into C data formats like char * using similar library code.
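Embedded SQL binds statements into the program at compile time via a precompiler; ODBC-style call-level interfaces instead hand the SQL text to the driver at run time. A minimal sketch of the call-level style, using Python's built-in sqlite3 module as a stand-in for an ODBC driver (the function names here are the sqlite3 API, not ODBC's SQL* calls):

```python
import sqlite3

# Call-level style: the SQL statement is an ordinary string passed to the
# driver at run time, and results come back as native data structures --
# no precompilation step, unlike Embedded SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.execute("INSERT INTO city VALUES (?, ?)", ("Springfield", 30720))

rows = conn.execute("SELECT * FROM city").fetchall()
print(rows)  # the driver layer converts result columns to Python tuples
conn.close()
```

The conversion of result columns into host-language types (the role the `char *` interpretation plays in the Embedded SQL description above) happens inside the driver rather than in precompiler-generated code.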
There were se |
https://en.wikipedia.org/wiki/Ultimate%20Play%20the%20Game | Ashby Computers and Graphics Limited, trading as Ultimate Play the Game, was a British video game developer and publisher, founded in 1982, by ex-arcade video game developers Tim and Chris Stamper. Ultimate released a series of successful games for the ZX Spectrum, Amstrad CPC, BBC Micro, MSX and Commodore 64 computers from 1983 until 1987. Ultimate are perhaps best remembered for the big-selling titles Jetpac and Sabre Wulf, each of which sold over 300,000 copies in 1983 and 1984 respectively, and their groundbreaking series of isometric arcade adventures using a technique termed Filmation. Knight Lore, the first of the Filmation games, has been retrospectively described in the press as "seminal ... revolutionary" (GamesTM), "one of the most successful and influential games of all time" (X360), and "probably ... the greatest single advance in the history of computer games" (Edge).
By the time of the label's last use in 1988 on a retrospective compilation, Ultimate had evolved into Rare and moved on to developing titles for Nintendo consoles. Rare was purchased by Microsoft in 2002 for US$377 million, a record price for a video game developer, and now develops exclusively for Microsoft platforms such as Xbox and Microsoft Windows. In 2006, Rare revived the "Ultimate Play the Game" name for an Xbox Live Arcade remake of Jetpac named Jetpac Refuelled. In 2015, several Ultimate titles were collected and released as part of the Rare Replay compilation for Xbox One.
History
Early history and rise
Ultimate Play the Game was founded in the Leicestershire town of Ashby-de-la-Zouch in 1982 by Tim and Chris Stamper, their friend John Lathbury, and Tim's girlfriend (later wife) Carole Ward. Other members of the Stamper family were also involved in the early running and support of the company, which was initially located in a house next to the family-run newsagent. Both Tim and Chris had worked in arcade game development including, according to one report, Konami's Gyruss, and claimed to be "the most experienced arcade video game design team in Britain" until tiring of working for others and leaving to start Ashby Computers and Graphics. This led to ACG's initial trade being in creating arcade conversion kits, before moving into the home computer software market developing games under the Ultimate Play the Game name. Ashby released four arcade games: Blue Print for Bally-Midway, and Grasspin, Dingo and Saturn for Jaleco.
Ultimate's first release was Jetpac in May 1983 for the 16K Spectrum. In a 1983 interview, Tim Stamper said that they deliberately targeted 16K machines as their smaller size meant development time was much shorter, claiming they could produce two 16K games in one month, or one 48K game. Jetpac was a huge commercial success; the Spectrum version alone sold more than 300,000 copies providing the fledgling company with a turnover in excess of £1 million.
This was followed by three further 16K releases, Pssst in June, Tranz Am, and Cookie, |
https://en.wikipedia.org/wiki/Data%20haven | A data haven, like a corporate haven or tax haven, is a refuge for uninterrupted or unregulated data. Data havens are locations with legal environments that are friendly to the concept of a computer network freely holding data and even protecting its content and associated information. They tend to fit into three categories: a physical locality with weak information-system enforcement and extradition laws, a physical locality with intentionally strong protections of data, and virtual domains designed to secure data via technical means (such as encryption) regardless of any legal environment.
Tor's onion space, I2P (both hidden services), HavenCo (centralized), and Freenet (decentralized) are four models of modern-day virtual data havens.
Purposes of data havens
Reasons for establishing data havens include access to free political speech for users in countries where censorship of the Internet is practiced.
Other reasons can include:
Whistleblowing
Distributing software, data or speech that violates laws such as the DMCA
Copyright infringement
Circumventing data protection laws
Online gambling
Pornography
Cybercrime
History of the term
The 1978 report of the British government's Data Protection Committee expressed concern that different privacy standards in different countries would lead to the transfer of personal data to countries with weaker protections; it feared that Britain might become a "data haven". Also in 1978, Adrian Norman published a mock consulting study on the feasibility of setting up a company providing a wide range of data haven services, called "Project Goldfish".
Science fiction novelist William Gibson used the term in his novels Count Zero and Mona Lisa Overdrive, as did Bruce Sterling in Islands in the Net. The 1990s segments of Neal Stephenson's 1999 novel Cryptonomicon concern a small group of entrepreneurs attempting to create a data haven.
See also
Anonymity
Anonymous P2P
Pseudonymity
Corporate haven
Crypto-anarchism
Sealand located in British waters in the North Sea
CyberBunker
PRQ, an ISP in Sweden
IPREDator located in Sweden
International Modern Media Institute
WikiLeaks
References
Computer law
Anonymity networks
Crypto-anarchism
Information privacy
Internet privacy
Data laws |
https://en.wikipedia.org/wiki/Na%C3%AFve%20physics | Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence the study of naïve physics is a part of the effort to formalize the common knowledge of human beings.
Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism.
Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate.
Examples
Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature:
What goes up must come down
A dropped object falls straight down
A solid object cannot pass through another solid object
A vacuum sucks things towards it
An object is either at rest or moving, in an absolute sense
Two events are either simultaneous or they are not
Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it.
Psychological research
The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, because infants cannot use words to explain things (such as their reactions) the way most adults or older children can.
Research in naïve physics relies on technology to measure eye gaze and reaction time in particular. Through observation, researchers know that infants get bored looking at the same stimulus after a certain amount of time. That boredom is called habituation. When an infant is sufficiently habituated to a stimulus, he or she will typically look away, alerting the experimenter to his or her boredom. At this point, the experimenter will introduce another stimulus. The infant will then dishabituate by attending to the new stimulus. In each case, the experimenter measures the time it takes for the infant to habituate to each stimulus.
Researchers infer that the longer the infant takes to habituate to a new stimulus, the more it violates his or her expectations of physical phenomena. When an adult observes an optical illusion that seems physically impossible, they will attend t |
https://en.wikipedia.org/wiki/NoordNed | NoordNed Personenvervoer B.V. (English translation Network North) was a public transport company operating trains and buses in the north and northeast of the Netherlands. Founded in 1999 as a joint venture by Arriva and Nederlandse Spoorwegen, after Arriva took full ownership in 2003, the brand was retired in 2005.
History
NoordNed was established by Arriva and Nederlandse Spoorwegen, each having a 49% shareholding. In May 1999 it commenced operating regional train services on the Leeuwarden to Harlingen and Leeuwarden to Stavoren lines. It also operated rail services between Leeuwarden and Groningen on behalf of Nederlandse Spoorwegen.
On 28 May 2000 it commenced operating services between Groningen and Nieuweschans and between Groningen and Roodeschool under a five-year concession.
In December 2003, Arriva became the sole owner. In December 2005, NoordNed commenced operating 15-year contracts in Groningen and Friesland; that same month, the NoordNed brand was retired and all operations were merged into Arriva's other Netherlands operations.
References
External links
Arriva Group companies
Nederlandse Spoorwegen
Railway companies established in 1999
Railway companies disestablished in 2005
Railway companies of the Netherlands
Rail transport in Friesland
Rail transport in Groningen (province)
1999 establishments in the Netherlands
2005 disestablishments in the Netherlands |
https://en.wikipedia.org/wiki/ProRail | ProRail () is a Dutch government organisation responsible for the maintenance and extension of the national railway network infrastructure (not the metro or tram), the allocation of rail capacity, and controlling rail traffic. Prorail is a part of NS Railinfratrust, the Dutch railway infrastructure owner.
Its Utrecht headquarters is in the former offices of Nederlandse Spoorwegen (known as De Inktpot, "The Inkwell"), the largest brick building in the Netherlands. The building currently features a "UFO" on its facade resulting from an art program in 2000.
History
The creation of ProRail can be traced back to Dutch government policies of the 1990s: it was decided that rail operations should be reorganised and that the private sector should be given greater involvement in order to improve performance. In 1998, the first small-scale maintenance work was outsourced. Despite this direction, government ownership of the national railway infrastructure operator was retained. ProRail itself was established on 1 January 2003, when three separate organisations responsible for rail infrastructure in the Netherlands were merged. Upon its creation, ProRail became responsible for the total cost of ownership and the long-term availability of the rail infrastructure, while ensuring that operational safety was not compromised.
One early reform of the organisation, implemented for the 2007–2011 timeframe, was for all contracts to be publicly tendered based on performance and process specifications. This was intended to facilitate effective competition for contract work while retaining the public service orientation sought by the government. Key functions, such as track inspection, had transitioned to the private sector by 2006. Jan Swier, ProRail's Strategic Advisor for Maintenance and Renewals, noted that there were initial concerns over this safety-critical work being performed externally, yet the work has been performed effectively, aided by the introduction of sophisticated track inspection machinery. All track is inspected twice per month, while busy main lines can be inspected as often as once per week. Trackwork, such as grinding, was largely outsourced; working practices also changed, moving from corrective to preventive grinding to both ease the work and lower its cost. The implementation of real-time asset monitoring was also pursued; by 2006, approximately 1,000 switches in the Netherlands had been equipped with remote condition monitoring equipment.
In the late 2000s, in response to repeated year-on-year rises in both passenger and freight traffic on the network, ProRail developed a new 'Triple A' strategy to deliver a 50 percent increase in capacity, to be achieved via the adoption of smarter planning, the reorganisation of train services, and new construction works. In late 2008, the Ministry of Transport allocated €4.5 billion for a multi-year investment to introduce th |
https://en.wikipedia.org/wiki/H%C3%A5kan%20Lans | Anders Håkan Lans (born 2 November 1947 in Enskede) is a Swedish inventor. He holds two patents:
a memory controller for a framebuffer: "Data processing system and apparatus for color graphics display". Framebuffers with memory controllers had already been in common use for years at the time of this 1979 patent filing.
a calligraphic display: "Arrangement for producing a pattern on a light-sensitive surface"
STDMA
Håkan Lans is the designer of a tracking system that makes use of a Self-Organized Time Division Multiple Access (STDMA) datalink. The STDMA datalink is currently used in the Automatic Identification System (AIS), a short-range coastal tracking system that is mandatory aboard internationally voyaging ships with a gross tonnage of 300 or more, and aboard all passenger ships regardless of size.
STDMA is also in use as one of the three physical layer models proposed for Automatic dependent surveillance-broadcast (ADS-B), a cooperative surveillance technique for air traffic control which is in the process of implementation.
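The "self-organized" part of STDMA can be illustrated with a toy model (a hypothetical Python sketch; the actual candidate-slot and slot-timeout rules of the AIS standard are considerably more involved): each station listens for one frame, learns which slots are taken, and announces a randomly chosen free slot that later arrivals then avoid.

```python
import random

FRAME_SLOTS = 2250   # AIS divides each one-minute frame into 2,250 slots

def join_network(reserved, rng, window=0.2):
    """Toy SOTDMA-style slot selection for one newly arrived station.

    The station has listened for a full frame, so `reserved` holds the
    slots it heard other stations announce.  It picks at random among
    the free slots of a candidate window (here simply a random fraction
    of the frame -- the real standard's candidate rules are stricter),
    then announces the choice so later stations avoid it.
    """
    candidates = rng.sample(range(FRAME_SLOTS), int(window * FRAME_SLOTS))
    free = [s for s in candidates if s not in reserved]
    if not free:
        raise RuntimeError("frame congested")  # real AIS reuses a distant ship's slot
    slot = rng.choice(free)
    reserved.add(slot)   # announced: everyone now treats this slot as taken
    return slot

# 100 stations join one after another; no two end up sharing a slot.
rng = random.Random(1)
reserved = set()
slots = [join_network(reserved, rng) for _ in range(100)]
```

Because each reservation is broadcast before use, no central scheduler is needed; this is what lets AIS transponders coordinate channel access among ships that have never communicated before.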
Lans has his own company, GP&C Systems International AB, which is focused on marketing STDMA.
Legal disputes
Concerning the framebuffer-controller patent: in 1997 Lans, as an individual, sued several companies, including Compaq, Gateway and Hewlett-Packard, for not paying royalties and allegedly infringing his patent. The defendants counterclaimed that the patent had been assigned to Uniboard AB, a company wholly owned by Lans. The judge ruled that this was the case, and Lans therefore lost the suit. This led to a dispute between Lans and his attorneys, whom Lans sued for misconduct; that suit was settled in April 2012. An allegation was presented on Swedish national television that an attorney's conduct was due to alleged connections with the Pentagon, aimed at stopping the progress of Lans' STDMA patent.
Another dispute concerns STDMA itself. The system was patented, but a US ex parte reexamination certificate issued in 2010 cancelled all of the patent's claims.
See also
Automatic Identification System (AIS)
ADS-B#VDL mode 4
References
External links
GP&C Systems International AB
The story as told from Lans side
1947 births
Living people
20th-century Swedish inventors |
https://en.wikipedia.org/wiki/Nederlandse%20Spoorwegen | Nederlandse Spoorwegen (NS) is the principal passenger railway operator in the Netherlands. It is a Dutch state-owned company founded in 1938. The Dutch rail network is the busiest in the European Union, and the third busiest in the world, after those of Switzerland and Japan.
The rail infrastructure is maintained by network manager ProRail, which was split off from NS in 2003. Freight operator NS Cargo merged with DB Cargo in 2000. NS runs 4,800 scheduled domestic trains a day, serving 1.1 million passengers. The NS also provides international rail services from the Netherlands to other European destinations and carries out concessions on some foreign rail markets through its subsidiary Abellio.
History
Early years
World War I caused an economic downturn in the Netherlands that made the two largest Dutch railway companies, Hollandsche IJzeren Spoorweg-Maatschappij (HSM) and Maatschappij tot Exploitatie van Staatsspoorwegen (SS), unprofitable. The companies avoided bankruptcy by integrating their operations, a process completed by 1917. The cooperation was motivated by both economic and ideological considerations. The state provided support by buying shares in both companies. In 1938, the state bought the remaining shares and merged the companies to create NS; NS was not nationalised.
During World War II, NS was forced by the Germans to construct railways to Westerbork transit camp and transport almost a hundred thousand Jews to extermination camps. The company's only wartime strike was during the Dutch famine of 1944–45; NS opted not to strike a year earlier.
NS played a pivotal role in the post-war reconstruction of the Netherlands; only it could provide the required logistical services in a time when there was little alternative to rail transport. The company declined in the 1960s – like many other railways – and operated at a loss. There was increased competition from other modes of transport. In addition, national coal distribution from Limburg became less profitable; the discovery of a gas field near Slochteren led to coal losing market share to natural gas in power plants and homes. NS' response, the Spoorslag '70 plan which increased service and introduced intercity service, failed to restore profitability. The company was deemed nationally important and received state subsidies.
Reforms and reversal
NS was reorganized following the neoliberal reforms of the 1980s and the 1991 EU Directive 91/440; the latter required railway infrastructure and transport activities to be managed independently. Although the state called the process "corporatization" (verzelfstandiging), it really only meant the withdrawal of subsidies. The changes were carried out by Rob den Besten, who became chief executive officer of NS after the retirement of Leo Ploeger.
NS' infrastructure division was split off into NS Railinfratrust. Plans to split the remainder of NS met with limited success due to trade union opposition; the new companies created were NS Reizigers and locomoti |
https://en.wikipedia.org/wiki/LilyPond | LilyPond is a computer program and file format for music engraving. One of LilyPond's major goals is to produce scores that are engraved with traditional layout rules, reflecting the era when scores were engraved by hand.
LilyPond is cross-platform, and is available for several common operating systems; released under the terms of the GNU General Public License, LilyPond is free software and part of the GNU Project.
History
The LilyPond project was started in 1996 by Han-Wen Nienhuys and Jan Nieuwenhuizen, after they decided to abandon work on MPP (MusiXTeX PreProcessor), a project they had begun collaborating on in 1995. Its name was inspired both by the Rosegarden project and by an acquaintance of Nienhuys and Nieuwenhuizen named Suzanne, a name that means "lily" in Hebrew.
Version 1.0
LilyPond 1.0 was released on July 31, 1998, highlighting the development of a custom music font, Feta, and the complete separation of LilyPond from MusiXTeX.
Version 2.0
LilyPond 2.0 was released on September 24, 2003, announcing a simplified syntax model and a much more complete set of facilities for notating various styles of music.
Design
LilyPond is mostly written in C++ and uses Scheme (interpreted by GNU Guile) as its extension language, allowing for user customization. It has a relatively large codebase; as of March 10, 2017, the source includes over 600,000 lines of C++, 140,000 lines of Scheme, and 120,000 lines of Python code.
It uses a simple text notation for music input, which LilyPond interprets and processes in a series of stages. In the final stage, music notation is output to PDF (via PostScript) or other graphical formats, such as SVG or PNG. LilyPond can also generate MIDI files that correspond to the music notation output.
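For illustration, a minimal input file in this notation might look like the following (a hypothetical fragment, not taken from the LilyPond manuals); the \layout block requests engraved output and the \midi block requests a matching MIDI file:

```lilypond
\version "2.24.0"   % assumed version; adjust to the installed release
\score {
  \relative c' {
    \key g \major
    \time 3/4
    g4 b d | g2. |
  }
  \layout { }   % engraved output (PDF via PostScript)
  \midi { }     % MIDI rendering of the same music
}
```

Running `lilypond file.ly` on such a file would then emit both the engraved score and the MIDI file in one pass.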
LilyPond is a text-based application, so it does not include its own graphical user interface to assist with score creation (although a text-editor-based "LilyPad" GUI for Windows and macOS is bundled by default on those systems). It does, however, have a flexible input language that strives to be simple, easing the learning curve for new users. LilyPond adheres to the WYSIWYM paradigm; the workflow for typesetting music notation with LilyPond is similar to that of preparing documents with LaTeX.
LilyPond supports experimental musical notation. Its guitar facilities support alternative tunings, such as major-thirds tuning.
Software features
LilyPond's primary goal is to produce output comparable to professionally engraved scores instead of output that looks mechanical and computer-generated. An essay from the LilyPond website, written by LilyPond developers, explains some typographical issues addressed by LilyPond:
Optical font scaling: depending on the staff size, the design of the music font is slightly altered; this is a feature that Donald Knuth's Computer Modern font is known for. As a result, note heads become more rounded, and staff lines become thicker.
Optical spacing: stem directions are taken into ac |
https://en.wikipedia.org/wiki/Onion%20skinning | In 2D computer graphics, onion skinning is a technique used in creating animated cartoons and editing movies to see several frames at once. This way, the animator or editor can make decisions on how to create or change an image based on the previous image in the sequence.
In traditional animation, the individual frames of a movie were initially drawn on thin onionskin paper over a light source. The animators (mostly inbetweeners) would put the previous and next drawings exactly beneath the working drawing, so that they could draw the 'in between' to give a smooth motion.
In computer software, this effect is achieved by making frames translucent and projecting them on top of each other.
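That compositing step can be sketched as follows (illustrative Python over grayscale frames, where 0.0 is black ink and 1.0 is white paper; real tools work on full-colour layers):

```python
def onion_skin(current, neighbors, alpha=0.3):
    """Blend translucent neighbor frames beneath the working frame.

    Frames are flat lists of grayscale values in [0, 1]; each neighbor
    is composited at the given opacity, so its ink shows through as a
    pale "ghost" under the fully opaque working drawing.
    """
    out = [1.0] * len(current)            # start from white "paper"
    for frame in neighbors:               # ghost each neighboring frame
        out = [alpha * f + (1 - alpha) * o for f, o in zip(frame, out)]
    # the working drawing stays fully opaque wherever it has ink
    return [min(o, c) for o, c in zip(out, current)]
```

With alpha = 0.3, a black stroke in a neighbor frame appears as a 70%-grey ghost, which is enough for the animator to judge the in-between motion without obscuring the current drawing.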
This effect can also be used to create motion blurs, as seen in The Matrix when characters dodge bullets.
See also
Anime Studio
Adobe Flash
TVPaint
3ds max
External links
Onion Skinning in I Can Animate
Shape Shifter at Aniboom. To see onion skinning, press 'o' then create a shape, add a frame, move your shape, and repeat.
Animation techniques
Computer graphic techniques |
https://en.wikipedia.org/wiki/ANDOS | ANDOS is a Russian operating system for the Electronika BK series of computers: the BK-0010, BK-0011, and BK-0011M, which were based on the PDP-11 architecture of Digital Equipment Corporation. ANDOS was created in 1990 and first released in 1992. It was initially developed by Alexey Nadezhin (after whom the system is named), who was later joined by Sergey Kamnev. It was the only widespread system on BK series computers that used an MS-DOS-compatible file system format: ANDOS used the FAT12 file system on 800 KB floppy disks. On the Electronika BK-0011M and BK-0011, ANDOS could emulate a BK-0010 by loading a BK-0010 read-only memory (ROM) image into BK-0011(M) random-access memory (RAM). In its minimal configuration, the system could occupy less than 4 KB of RAM.
The system was able to support up to 64 disk drives (or hard disk drive partitions), RAM disks in the computer's memory, and tape storage. It could also provide read-only access to disks in the MicroDOS file system format, although in the last version this function was moved from the system core to the file manager and became optional.
References
Elektronika BK operating systems
Assembly language software |
https://en.wikipedia.org/wiki/Electronika%20BK | The Electronika BK is a series of 16-bit PDP-11-compatible home computers developed under the Electronika brand by NPO Scientific Center, then the leading microcomputer design team in the Soviet Union. It is also the predecessor of the more powerful UKNC and DVK micros.
Overview
First released in 1985 (developed in 1983), they are based on the К1801ВМ1 (Soviet LSI-11-compatible CPU) and were the only official (government approved and accounted for in economic planning) Soviet home computer design in mass production.
They sold for about 600–650 roubles. This was costly, but marginally affordable, as the average Soviet monthly wage was then about 150 roubles, and they became one of the most popular home computer models in the Soviet Union. Later, in the 1990s, their powerful central processing unit (CPU) and straightforward, easy-to-program design made them popular as demoscene machines. BK is a Russian abbreviation for "domestic (or home) computer". The machines were also used for a short time as cash registers, for example in the GUM department store.
Software
The BK series is a bare-bones machine, shipped with no peripherals or programming tools. The only software available at launch, apart from the read-only memory (ROM) firmware, was an included magnetic tape with several programming examples (for the languages BASIC and FOCAL) and several test programs. The ROM firmware includes a simple machine-code entry program and BASIC and FOCAL interpreters.
While the BK is somewhat compatible with larger and more expensive DVK professional model microcomputers and industrial minicomputers like the SM EVM series, its 32 KiB memory, of which only 16 KiB is generally available to programmers (an extended memory mode supports 28 KiB, but limits video output to a quarter of the screen), generally precludes direct use of software for the more powerful machines. The DVK became a popular development platform for BK software, and most DVK software can be used directly with memory capacity extended to 128 KiB.
Hobbyist developers quickly filled this niche, porting several programming tools from DVK and UKNC. This led to an explosion of homebrew software, from text editors and databases to operating systems and video games. Most BK owners expanded the built-in RAM to at least 64 KiB, which allows easier software porting, and as these upgrades often include floppy drive controllers, individuals creating disk operating systems became something of a competitive sport in the BK scene. Games and demoscene communities also flourished, as its poor graphics are offset by a powerful CPU.
One of the operating systems was ANDOS, although officially the computer shipped with OS BK-11, a modification of RT-11.
Hardware
The machine is based on a 16-bit single-chip K1801VM1 CPU, clocked generally at 3 MHz. It is compatible with Digital Equipment Corporation's LSI-11 line, though it lacks Extended Instruction Set (EIS) and further instruction set extensions. The manufacturer also closel |
https://en.wikipedia.org/wiki/Foundations%20of%20mathematics | Foundations of mathematics is the study of the philosophical and logical and/or algorithmic basis of mathematics, or, in a broader sense, the mathematical investigation of what underlies the philosophical theories concerning the nature of mathematics. In this latter sense, the distinction between foundations of mathematics and philosophy of mathematics turns out to be vague.
Foundations of mathematics can be conceived as the study of the basic mathematical concepts (set, function, geometrical figure, number, etc.) and how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics (formulas, theories and their models giving a meaning to formulas, definitions, proofs, algorithms, etc.) also called metamathematical concepts, with an eye to the philosophical aspects and the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges.
The foundations of mathematics as a whole does not aim to contain the foundations of every mathematical topic.
Generally, the foundations of a field of study refers to a more-or-less systematic analysis of its most basic or fundamental concepts, its conceptual unity and its natural ordering or hierarchy of concepts, which may help to connect it with the rest of human knowledge. The development, emergence, and clarification of the foundations can come late in the history of a field, and might not be viewed by everyone as its most interesting part.
Mathematics plays a special role in scientific thought, serving since ancient times as a model of truth and rigor for rational inquiry, and giving tools or even a foundation for other sciences (especially physics). Mathematics' many developments towards higher abstractions in the 19th century brought new challenges and paradoxes, prompting a deeper and more systematic examination of the nature and criteria of mathematical truth, as well as a unification of the diverse branches of mathematics into a coherent whole.
The systematic search for the foundations of mathematics started at the end of the 19th century and formed a new mathematical discipline called mathematical logic, which later had strong links to theoretical computer science.
It went through a series of crises with paradoxical results, until the discoveries stabilized during the 20th century as a large and coherent body of mathematical knowledge with several aspects or components (set theory, model theory, proof theory, etc.), whose detailed properties and possible variants are still an active research field.
Its high level of technical sophistication inspired many philosophers to conjecture that it can serve as a model or pattern for the foundations of other sciences.
Historical context
Ancient Greek mathematics
While the practice of mathematics had previously developed in oth |
https://en.wikipedia.org/wiki/Novell | Novell, Inc. () was an American software and services company headquartered in Provo, Utah, that existed from 1980 until 2014. Its most significant product was the multi-platform network operating system known as Novell NetWare.
Under the leadership of chief executive Ray Noorda, NetWare became the dominant form of personal computer networking during the second half of the 1980s and first half of the 1990s. At its high point, NetWare had a 63 percent share of the market for network operating systems and by the early 1990s there were over half a million NetWare-based networks installed worldwide encompassing more than 50 million users. Novell technology contributed to the emergence of local area networks, which displaced the dominant mainframe computing model and changed computing worldwide. Novell was the second-largest maker of software for personal computers, trailing only Microsoft Corporation, and became instrumental in making Utah Valley a focus for technology and software development.
During the early to mid-1990s, Noorda attempted to compete directly with Microsoft by acquiring Digital Research, Unix System Laboratories, WordPerfect, and the Quattro Pro division of Borland. These moves did not work out, due to new technologies not fitting well with Novell's existing user base or to being too late to compete with equivalent Microsoft products, and NetWare began losing market share once Microsoft bundled network services with the Windows NT operating system and its successors. Despite new products such as Novell Directory Services and GroupWise, Novell entered a long period of decline. Eventually Novell acquired SUSE Linux and attempted to refocus its technology base. Despite building or acquiring several new kinds of products, Novell failed to find consistent success and never regained its past dominance.
The company was an independent corporate entity until it was acquired as a wholly owned subsidiary by The Attachmate Group in 2011, which in turn was acquired in 2014 by Micro Focus International. Novell products and technologies are now integrated within various Micro Focus divisions.
History
Origins as a hardware company
The company began as Novell Data Systems Inc. (NDSI), a computer systems company located in Orem, Utah that intended to manufacture and market small business computers, computer terminals, and other peripherals. It was co-founded by George Canova and Jack Davis, two experienced computer industry executives. While some later sources place the creation of Novell Data Systems as having happened in 1979, more contemporaneous sources are in accordance with it happening in August 1980. Canova became president of the new company and Davis was in charge of sales and marketing. The suggestion for the company's name came from Canova's wife, who thought it meant "new" in French (in fact the French word is either the masculine nouveau or the feminine nouvelle). While future Brigham Young University professor and Eyring Resea |
https://en.wikipedia.org/wiki/Satisfaction | Satisfaction may refer to:
Contentment
Computer user satisfaction
Customer satisfaction
Job satisfaction
Satisfaction theory of atonement, a Christian view of salvation
The regaining of honour in a duel
The process or outcome of assigning values to the free variables of a satisfiable formula
Law
Satisfaction of legacies, a doctrine of fulfilling a legacy during the testator's lifetime.
Accord and satisfaction, a contract law concept about the purchase of the release from a debt obligation
Entertainment and music
Satisfaction (Australian TV series), a drama series which aired on Showcase Australia in 2007–2010
Satisfaction (2013 TV series), a sitcom which aired on CTV
Satisfaction (2014 TV series), a drama series on USA Network
Satisfaction (1988 film), an American comedy-drama film
Satisfaction (2010 film), a Russian drama film
Satisfaction!, a 1965 album by jazz organist Don Patterson, or its title track, whose name omits the exclamation point
"(I Can't Get No) Satisfaction", a 1965 rock song by The Rolling Stones
"Satisfaction" (Laura Branigan song), a 1984 song by Laura Branigan
"Satisfaction", a 1989 song by Wendy & Lisa from the album Fruit at the Bottom
"Satisfaction", a 1991 song by Vanilla Ice from the album Extremely Live
"Satisfaction", a 2018 song by Zayn from Icarus Falls
"Satisfaction" (Eve song), a 2003 hip hop song
"Satisfaction" (F.T. Island song), a 2011 pop rock song
"Satisfaction" (Benny Benassi song)
THEESatisfaction, a hip-hop duo from Seattle
See also
Satisfiability, a property pertaining to mathematical formulas
Satisfy (disambiguation) |
https://en.wikipedia.org/wiki/C%27t | c't – Magazin für Computertechnik ("Magazine for Computer Technology") is a German computer magazine, published by the Heinz Heise publishing house.
History and profile
The first issue of the magazine was the November/December 1983 edition. Originally a special section of the electronics magazine elrad, the magazine has been published monthly since December 1983 and biweekly since October 1997. A Dutch edition also exists, published monthly. In addition, a licensed Russian-language edition has been published in Moscow since 2008.
The magazine is the second most popular German-language computer magazine, with a sold circulation of about 315,000 (printed circulation: 419,000). With 241,000 subscriptions, it is the computer magazine with the most subscribers in Europe.
c't covers both hardware and software; it focuses on software for the Microsoft Windows platform, but Linux and Apple are also regularly featured. The magazine has a reputation of being very thorough, although critics claim that the magazine has been "dumbed down" in recent years to accommodate the mass market.
One of the numerous projects c't initiated is the Offline Update, a set of scripts to download Microsoft updates, combine them with an install script, and create a CD image. With Offline Update burned to a CD or DVD, a technician can update Windows 2000/XP/Vista and Microsoft Office 2003/2007 without an Internet connection. This is especially useful for people with no or slow Internet connections, or for avoiding exposure of a vulnerable system to the Internet.
A sister magazine, iX, focuses on topics for IT professionals.
Popularity
c't became widely known in 1995 when it rated the program SoftRAM "Placebo-Software" in a short test. When the German distributor of the program took legal action to forbid publishing this rating, c't followed up with an exhaustive test showing that the program had virtually no effect other than giving false information about system statistics. The subsequent media coverage forced SoftRAM out not only from the German market, but from the US market too.
References
External links
c't Netherlands homepage
Heise Verlag homepage and news site
Offline Update (now "WSUS Offline Update") download page
1983 establishments in West Germany
Biweekly magazines published in Germany
Dutch-language magazines
Computer magazines published in Germany
German-language magazines
Monthly magazines published in Germany
Magazines established in 1983 |
https://en.wikipedia.org/wiki/Seventeen%20or%20Bust | Seventeen or Bust was a volunteer computing project started in March 2002 to solve the last seventeen cases in the Sierpinski problem. The project solved eleven cases before a server loss in April 2016 forced it to cease operations. Work on the Sierpinski problem moved to PrimeGrid, which solved a twelfth case in October 2016. Five cases remain unsolved.
Goals
The goal of the project was to prove that 78557 is the smallest Sierpinski number, that is, the least odd k such that k·2^n + 1 is composite (i.e. not prime) for all n > 0.
When the project began, there were only seventeen values of k < 78557 for which the corresponding sequence was not known to contain a prime.
For each of those seventeen values of k, the project searched for a prime number in the sequence
k·2^1 + 1, k·2^2 + 1, …, k·2^n + 1, …
testing candidate values n using Proth's theorem. If one was found, it proved that k was not a Sierpinski number. If the goal had been reached, the conjectured answer 78557 to the Sierpinski problem would be proven true.
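The per-candidate test can be sketched as follows (illustrative Python, not the project's actual client code). For N = k·2^n + 1 with odd k < 2^n, Proth's theorem states that N is prime if and only if some base a satisfies a^((N−1)/2) ≡ −1 (mod N); finding such a witness proves primality, and a composite N admits none.

```python
def is_proth_prime(k: int, n: int, bases=range(2, 50)) -> bool:
    """Proth test for N = k*2**n + 1 (requires odd k < 2**n).

    Finding any base a with a**((N-1)//2) == -1 (mod N) proves N prime;
    a composite N has no such witness.  For prime N, half of all bases
    are witnesses, so scanning a few small bases suffices in practice
    (a miss is astronomically unlikely, though not impossible).
    """
    assert k % 2 == 1 and k < 2 ** n, "not in Proth form"
    N = k * 2 ** n + 1
    return any(pow(a, (N - 1) // 2, N) == N - 1 for a in bases)
```

For the project's multi-million-digit candidates, the modular exponentiation was of course carried out with FFT-based big-integer arithmetic rather than Python's built-in pow, but the mathematical criterion is the same.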
There is also the possibility that some of the sequences contain no prime numbers. In that case, the search would continue forever, searching for prime numbers where none can be found. However, there is some empirical evidence suggesting the conjecture is true.
Every known Sierpinski number k has a small covering set, a finite set of primes with at least one dividing k·2^n + 1 for each n > 0 (or else k has algebraic factorizations for some n values and a finite prime set that works only for the remaining n). For example, for the smallest known Sierpinski number, 78557, the covering set is {3, 5, 7, 13, 19, 37, 73}. For another known Sierpinski number, 271129, the covering set is {3, 5, 7, 13, 17, 241}. Each of the remaining sequences has been tested and none has a small covering set, so it is suspected that each of them contains primes.
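The covering-set property for 78557 is easy to verify computationally; a minimal sketch, using the covering set {3, 5, 7, 13, 19, 37, 73} for 78557:

```python
k = 78557
covering_set = [3, 5, 7, 13, 19, 37, 73]

def covered(n: int) -> bool:
    """True if some prime in the covering set divides k * 2**n + 1."""
    return any((k * pow(2, n, p) + 1) % p == 0 for p in covering_set)

# The multiplicative orders of 2 modulo these primes all divide 36, so
# the divisibility pattern repeats with period 36; checking n = 1..36
# therefore covers every n > 0, proving each term is composite.
assert all(covered(n) for n in range(1, 37))
```

This is why no amount of searching could ever remove 78557 itself from the list, and why the absence of a small covering set for the remaining candidates is taken as evidence that their sequences do contain primes.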
The second generation of the client was based on Prime95, which is used in the Great Internet Mersenne Prime Search.
In January 2010, the Seventeen or Bust project started collaboration with PrimeGrid which uses the software LLR for its tests related to the Sierpinski problem.
The Seventeen or Bust server went down during April 2016, when the server and backups were lost for reasons that were not revealed to the public. The project is no longer active. Work on the Sierpinski problem continues at PrimeGrid.
Progress of the search
Twelve prime numbers have been found to date, eleven by the original Seventeen or Bust, and a twelfth by PrimeGrid's SoB project:
Until May 2023, the largest of these primes, 10223·2^31172165 + 1, was the largest known prime number that is not a Mersenne prime. It was found in October 2016. The six primes on this list with more than one million digits are the known "Colbert numbers", whimsically named after Stephen Colbert; these are defined as primes that eliminate a remaining Sierpinski-number candidate.
Each of these numbers has enough digits to fill up a medium-sized novel, at least. |
https://en.wikipedia.org/wiki/Instance%20variable | In class-based, object-oriented programming, an instance variable is a variable defined in a class (i.e., a member variable), for which each instantiated object of the class has a separate copy, or instance. An instance variable has similarities with a class variable, but is non-static. An instance variable is a variable which is declared in a class but outside of constructors, methods, or blocks. Instance variables are created when an object is instantiated, and are accessible to all the constructors, methods, or blocks in the class. Access modifiers can be given to the instance variable.
An instance variable is not a class variable, although there are similarities. It is a type of class attribute (or class property, field, or data member). The same dichotomy between instance and class members applies to methods ("member functions") as well; a class may have both instance methods and class methods.
Each instance variable lives in memory for the lifetime of the object it is owned by.
Variables are properties an object knows about itself. All instances of an object have their own copies of instance variables, even if the value is the same from one object to another. One object instance can change values of its instance variables without affecting all other instances. Instance variables can be used by all methods of a class unless the method is declared as static.
Example
C++
struct Request {
static int count1; // class variable, shared by all instances
int number;
Request() {
number = count1; // modifies the instance variable "this->number"
++count1; // modifies the class variable "Request::count1"
}
};
int Request::count1 = 0;
In this C++ example, the instance variable Request::number is initialized from the class variable Request::count1, which is incremented as each instance is constructed, so every Request object receives a sequential value. Since number is an instance variable, each Request object holds its own distinct copy; in contrast, the single class variable Request::count1 is shared, with the same value, by all instances of the class.
Python
class Dog:
def __init__(self, breed):
self.breed = breed # instance variable
# dog_1 is an object
# which is also an instance of the Dog class
dog_1 = Dog("Border Collie")
In the above Python code, the instance variable breed is created when the object is constructed, from the value passed as the breed positional argument.
References
Object-oriented programming
Variable (computer science) |
https://en.wikipedia.org/wiki/Outline%20of%20computer%20science | Computer science (also called computing science) is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.
Computer science can be described as all of the following:
Academic discipline
Science
Applied science
Subfields
Mathematical foundations
Coding theory – Useful in networking, programming, system development, and other areas where computers communicate with each other.
Game theory – Useful in artificial intelligence and cybernetics.
Discrete mathematics
Graph theory – Foundations for data structures and searching algorithms.
Mathematical logic – Boolean logic and other ways of modeling logical queries; the uses and limitations of formal proof methods
Number theory – Theory of the integers. Used in cryptography as well as a test domain in artificial intelligence.
Algorithms and data structures
Algorithms – Sequential and parallel computational procedures for solving a wide range of problems.
Data structures – The organization and manipulation of data.
Artificial intelligence
Outline of artificial intelligence
Artificial intelligence – The implementation and study of systems that exhibit an autonomous intelligence or behavior of their own.
Automated reasoning – Solving engines, such as used in Prolog, which produce steps to a result given a query on a fact and rule database, and automated theorem provers that aim to prove mathematical theorems with some assistance from a programmer.
Computer vision – Algorithms for identifying three-dimensional objects from a two-dimensional picture.
Soft computing, the use of inexact solutions for otherwise extremely difficult problems:
Machine learning – Development of models that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data.
Evolutionary computing – Biologically inspired algorithms.
Natural language processing – Building systems and algorithms that analyze, understand, and generate natural (human) languages.
Robotics – Algorithms for controlling the behaviour of robots.
Communication and security
Networking – Algorithms and protocols for reliably communicating data across different shared or dedicated media, often including error correction.
Computer security – Practical aspects of securing computer systems and computer networks.
Cryptography – Applies results from complexity, probability, algebra and number theory to invent and break codes, and analyze the security of cryptographic protocols.
Computer architecture
Computer architecture – The design, organization, optimization and verification of a computer system, mostly concerning CPUs and the memory subsystem (and the bus connecting them).
Operating systems – Sys |
https://en.wikipedia.org/wiki/Object%20%28computer%20science%29 | In computer science, an object can be a variable, a data structure, a function, or a method. As regions of memory, objects contain a value and are referenced by identifiers.
In the object-oriented programming paradigm, an object can be a combination of variables, functions, and data structures; in particular in class-based variations of the paradigm, an object refers to a particular instance of a class.
In the relational model of database management, an object can be a table or column, or an association between data and a database entity (such as relating a person's age to a specific person).
Object-based languages
An important distinction in programming languages is the difference between an object-oriented language and an object-based language. A language is usually considered object-based if it includes the basic capabilities for an object: identity, properties, and attributes. A language is considered object-oriented if it is object-based and also has the capability of polymorphism, inheritance, encapsulation, and, possibly, composition. Polymorphism refers to the ability to overload the name of a function with multiple behaviors based on which object(s) are passed to it. Conventional message passing discriminates only on the first object and considers that to be "sending a message" to that object. However, some object-oriented programming languages such as Flavors and the Common Lisp Object System (CLOS) enable discriminating on more than the first parameter of the function. Inheritance is the ability to subclass an object class: to create a new class that is a subclass of an existing one, inheriting all the data constraints and behaviors of its parent while also adding new ones and/or changing existing ones.
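The two capabilities described above can be sketched in a few lines of Python; the class and method names here are hypothetical illustrations, not taken from any particular language specification.

```python
class Shape:
    """Base class: subclasses inherit its contract and may override behavior."""
    def area(self):
        raise NotImplementedError

class Square(Shape):                 # inheritance: Square is a subclass of Shape
    def __init__(self, side):
        self.side = side
    def area(self):                  # polymorphism: same name, Square-specific behavior
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):                  # polymorphism: same name, Circle-specific behavior
        return 3.14159 * self.radius ** 2

# The same call dispatches to a different method depending on the object:
shapes = [Square(2), Circle(1)]
areas = [round(s.area(), 2) for s in shapes]
print(areas)  # [4, 3.14]
```

As in conventional message passing, dispatch here depends only on the receiving object (`s`); multiple dispatch as in CLOS would also consider the other arguments.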
Object-oriented programming
In object-oriented programming, an object is an abstract data type with the addition of polymorphism and inheritance.
Rather than structure programs as code and data, an object-oriented system integrates the two using the concept of an "object". An object has state (data) and behavior (code). Objects can correspond to things found in the real world. So for example, a graphics program will have objects such as circle, square, menu. An online shopping system will have objects such as shopping cart, customer, product. The shopping system will support behaviors such as place order, make payment, and offer discount. The objects are designed as class hierarchies. So for example with the shopping system there might be high level classes such as electronics product, kitchen product, and book. There may be further refinements for example under electronic products: CD Player, DVD player, etc. These classes and subclasses correspond to sets and subsets in mathematical logic.
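The shopping-system example above can be sketched in Python; all class and method names here are illustrative inventions based on the prose, not an actual system.

```python
class Product:                        # high-level class in the hierarchy
    def __init__(self, name, price):
        self.name = name
        self.price = price

class Book(Product):                  # subclasses correspond to subsets
    pass

class ElectronicsProduct(Product):
    pass

class ShoppingCart:
    """An object combining state (the items) with behavior (the methods)."""
    def __init__(self):
        self.items = []
    def place_order(self, product):   # behavior: place order
        self.items.append(product)
    def total(self, discount=0.0):    # behavior: offer discount
        return sum(p.price for p in self.items) * (1 - discount)

cart = ShoppingCart()
cart.place_order(Book("Logic", 20.0))
cart.place_order(ElectronicsProduct("DVD player", 80.0))
print(cart.total(discount=0.1))
```

Here `Book` and `ElectronicsProduct` play the role of the subclasses in the text, and `ShoppingCart` bundles data and code into one object.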
Specialized objects
An important concept for objects is the design pattern. A design pattern provides a reusable template to address a common problem. The following object descriptions are examples of some of the most common design patterns fo |
https://en.wikipedia.org/wiki/MMI | MMI may refer to:
Science and technology
Man-machine interface or user interface
GSM Man-Machine Interface, a mobile telephony standard, see Unstructured Supplementary Service Data § Man-Machine Interface
Modified Mercalli intensity scale, an earthquake intensity measure
W3C MMI or Multimodal Interaction Activity
Monolithic Memories, Inc. (1969–1987), an American semiconductor manufacturer
Motorola Mobility (NYSE: MMI), a publicly traded electronics company, formerly part of Motorola
Multi Media Interface, an in-car interface system developed by Audi
Maximum mutual information criterion
Methimazole, a drug used to treat hyperthyroidism
Multi mode interferometer
Modified Mercalli Index
Music
Miss May I, a metalcore band from Ohio
Madurai Mani Iyer, Indian singer
Schools
Marion Military Institute, a military junior college in Marion, Alabama
Marymount International School of Rome, a private Catholic school in Rome, Italy
Miami Military Institute, a former military college that was located in Germantown, Ohio
Millersburg Military Institute, a defunct military school in Kentucky
MMI Preparatory School, a college preparatory school in Freeland, Pennsylvania
Motorcycle Mechanics Institute, a program of the Universal Technical Institute
Other uses
2001 (year)
2001 or MMI in Roman numerals
Manufacturers Mutual Insurance, a former Australian insurance company, now called Allianz Australia
Municipal Mutual Insurance, a British insurance company
Maximum medical improvement, a plateau in a person's healing process
McMinn County Airport's identification code
Multiple mini interview, an interview method to assess soft skills
Muslim Mosque, Inc., an organization founded by Malcolm X
NYSE Arca Major Market Index
See also
2001 (disambiguation) |
https://en.wikipedia.org/wiki/1972%20in%20television | The year 1972 involved some significant events in television.
Below is a list of notable television-related events.
Events
January 3 – Show Boat is aired for the first time on network television, on NBC
January 21 – The first convention of Star Trek fans is held in New York City's Statler-Hilton hotel
Mid-February – John Lennon and Yoko Ono co-host an entire week on The Mike Douglas Show
February 19 – Sammy Davis Jr. makes a guest appearance on All in the Family
March 18 – After losing a 15-year court battle over the legality of its business relationship with The Herald-Traveler, CBS' Boston, Massachusetts affiliate WHDH-TV Channel 5 signs off the air. At 3 a.m. on March 19, WCVB takes over the Channel 5 frequency, simultaneously switching affiliations to the ABC network following CBS' loss of interest in the channel during the long legal wrangle.
March 27 – The Amateur's Guide to Love debuts, CBS' first new game show since To Tell the Truth went off the air in 1968. It eventually failed.
April 4 – After a three-year courtship, Emily Nugent marries Ernest Bishop on Coronation Street.
May – The Tonight Show Starring Johnny Carson permanently relocates its production from New York City to the NBC studios in Burbank, California. (The Tonight Show would remain there until relocating back to New York in February 2014)
July 21 – Victoria Wyndham makes her first appearance as vixen (and later, good girl) Rachel Davis on the soap opera Another World.
August 1 – Three years after it was first filmed, the Israel Broadcasting Authority finally agrees to screen Barricades, a controversial documentary film that offered a sympathetic portrayal of Palestinians expelled from their homes in the 1948 Arab-Israeli War.
August 26 – Effective with this issue, TV Guide discontinues the practice of using a "C" to indicate color programs, and instead starts using a "BW" for monochrome, saving a lot of printer's ink in the process. At the time about half of the TV households in the U.S. had color sets.
September 1 – An eventful day for CBS' daytime lineup: CBS airs reruns of the popular primetime shows The Lucy Show, The Beverly Hillbillies and My Three Sons for the final time, ending the dominance of reruns over its daytime schedule (a tradition since 1959). The following Monday, all three shows moved to syndication.
September 4 – Another eventful day for CBS' daytime schedule: The Price Is Right premieres on CBS; to date, it is the longest-running game show on American television. The Joker's Wild and Gambit also debut, bringing game shows back to CBS' schedule in a more successful attempt than The Amateur's Guide to Love (cancelled on June 23), and replacing the morning reruns that had been a staple of CBS' daytime lineup since 1959.
September 9 – The Lawrence Welk Show opens its 18th season on location in Hawaii.
October 27 – The 5000th episode of Captain Kangaroo airs.
November 8 – Home B |
https://en.wikipedia.org/wiki/Nili | NILI () was a Jewish espionage network which assisted the United Kingdom in its fight against the Ottoman Empire in Palestine between 1915 and 1917, during World War I. NILI was centered in Zichron Ya'acov, with branches in Hadera and other Moshava. Nili is an acronym which stands for the Hebrew phrase from the First Book of Samuel: "Netzah Yisrael Lo Yeshaker" (), which translates as "the Eternal One of Israel will not lie". The British government code-named NILI the "A Organization", according to a 1920 misfiled memorandum in the British National Archives, as described in the book Spies in Palestine by James Srodes.
During the Armenian genocide, the group opposed the Yishuv leadership at the time, and tried to intervene on behalf of the Armenians.
In choosing to side with the British Empire, the members of Nili went against the majority view of their fellow Jews from the Yishuv, who feared fierce persecution. These fears almost materialised when the spy ring was discovered, and the Jews of Palestine escaped the tragic fate of the Armenians only due to the intervention of the Vatican, the German government and General Erich von Falkenhayn, commander of the Ottoman-German troops in Palestine.
Establishment
Sarah Aaronsohn, her brothers Aaron and Alex, and their sister Rivka, together with their friend (and Rivka's fiancé) Avshalom Feinberg formed and led Nili.
In 1915, even before the group commenced operations, the Ottomans imprisoned Feinberg on suspicion of spying, which was not true at the time. When Feinberg was arrested for espionage and held in Beersheba, Yosef Lishansky joined Nili in December 1915. Because he was active in the south, he was recruited by Feinberg to pass information to and from Sarah Aaronsohn, who was operating from Atlit.
From March to October 1915, a plague of locusts stripped areas in and around Palestine of almost all vegetation. The Turkish authorities, worried about feeding their troops, turned to world-famous botanist and the region's leading agronomist, Aaron Aaronsohn, who requested the release of his friend and assistant, Avshalom Feinberg. The team fighting the locust invasion was given permission to move around the country, enabling them to collect strategic information about Ottoman camps and troop deployment.
For months, the group was not taken seriously by British intelligence, and attempts by Aaron Aaronsohn and Avshalom Feinberg to establish communication channels in Cairo and Port Said failed. Only after Aaronsohn arrived in London (by way of Berlin and Copenhagen) and owing to his reputation, was he able to obtain cooperation from the diplomat Sir Mark Sykes.
Sarah oversaw operations in Palestine from Zikhron Ya'akov.
Demise
Attempting to reach Egypt on foot, Avshalom Feinberg was killed and Yosef Lishansky was wounded but managed to reach British lines.
From February to September 1917, the steam yacht Managem regularly sailed to the Palestinian coast near Atlit. Lishansky swam ashore to co |
https://en.wikipedia.org/wiki/Class%20variable | In class-based, object-oriented programming, a class variable is a variable defined in a class of which a single copy exists, regardless of how many instances of the class exist.
A class variable is not an instance variable. It is a special type of class attribute (or class property, field, or data member). The same dichotomy between instance and class members applies to methods ("member functions") as well; a class may have both instance methods and class methods.
Static member variables and static member functions
In some languages, class variables and class methods are either statically resolved, not via dynamic dispatch, or their memory statically allocated at compile time (once for the entire class, as static variables), not dynamically allocated at run time (at every instantiation of an object). In other cases, however, either or both of these are dynamic. For example, if classes can be dynamically defined (at run time), class variables of these classes are allocated dynamically when the class is defined, and in some languages class methods are also dispatched dynamically.
Thus, in some languages, the terms static member variable and static member function are used synonymously with or in place of "class variable" or "class function", but these are not synonymous across languages. These terms are commonly used in Java, C#, and C++, where class variables and class methods are declared with the static keyword, and referred to as static member variables or static member functions.
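The dynamic case described above can be illustrated in Python, where classes are themselves objects created at run time; the factory function and class names below are hypothetical.

```python
# In Python a class can be defined at run time via type(name, bases, namespace);
# its class variables are allocated when the class object itself is created.
def make_counter_class(name):
    return type(name, (), {"count": 0})  # "count" is a class variable

Widget = make_counter_class("Widget")
Widget.count += 1                        # rebinds Widget's class variable
Gadget = make_counter_class("Gadget")    # a fresh class gets a fresh class variable
print(Widget.count, Gadget.count)  # 1 0
```

Each call to `make_counter_class` allocates a new class, and hence a new, independent `count`, in contrast to statically allocated class variables that exist once per compile-time class.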
Example
C++
struct Request {
  static int count;
  int number;
  Request() {
    number = count; // modifies the instance variable "this->number"
    ++count;        // modifies the class variable "Request::count"
  }
};
int Request::count = 0;
In this C++ example, the class variable Request::count is incremented on each call to the constructor, so that Request::count always holds the number of Requests that have been constructed, and each new Request object is given a number in sequential order. Since count is a class variable, there is only one object Request::count; in contrast, each Request object contains its own distinct number field.
Also note that the variable Request::count is initialized only once.
Python
class Dog:
vertebrate_group = 'mammals' # class variable
dog_1 = Dog()
print(dog_1.vertebrate_group)  # accessing the class variable

The above Python code does not convey much on its own: the only class variable in the Dog class records that dogs belong to the vertebrate group "mammals". With instance variables, each object (in this case, dog_1) could be customized by giving the Dog class one or more instance variables.
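A slightly fuller sketch contrasting the shared class variable with a per-instance variable (the `name` attribute is an invented illustration):

```python
class Dog:
    vertebrate_group = "mammals"   # class variable: one copy shared by all instances

    def __init__(self, name):
        self.name = name           # instance variable: one copy per object

dog_1 = Dog("Rex")
dog_2 = Dog("Fido")
print(dog_1.vertebrate_group, dog_2.vertebrate_group)  # mammals mammals
print(dog_1.name, dog_2.name)                          # Rex Fido
```

Every Dog shares the single `vertebrate_group`, while each object carries its own `name`, mirroring the `count`/`number` distinction in the C++ example.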
Notes
Object-oriented programming
Variable (computer science)
Articles with example C++ code |
https://en.wikipedia.org/wiki/Miner%20Willy | Miner Willy is the protagonist in a series of platform games for the ZX Spectrum, MSX, Amstrad CPC and the Commodore 64 home computers. The first two games - Manic Miner and Jet Set Willy were written by Matthew Smith during the early 1980s.
The Willy saga was to be a trilogy and a third game in the series was planned, Miner Willy Meets The Taxman.
The series started in 1983 with the release of Manic Miner, which was followed by Jet Set Willy in 1984 and Jet Set Willy II in 1985. Another game in the series, The Perils of Willy, was released solely for the VIC-20. Andre's Night Off was published as a type-in listing in the June 1984 issue of Computer & Video Games. In addition, quite a few unofficial sequels, remakes, homages and updates have been released.
Games in the series
Manic Miner, (1983), Bug-Byte / Software Projects
Jet Set Willy, (1984), Software Projects
The Perils of Willy, (1984), Software Projects
Andre's Night Off, 1984, Matthew Smith
Jet Set Willy II, (1985), Software Projects
References
Video game franchises
Video game franchises introduced in 1983
Video games with available source code
Fictional miners |
https://en.wikipedia.org/wiki/Context | Context may refer to:
Context (language use), the relevant constraints of the communicative situation that influence language use, language variation, and discourse summary
Computing
Context (computing), the virtual environment required to suspend a running software program
Lexical context or runtime context of a program, which determines name resolution
Context awareness, a complementary to location awareness
Context menu, a menu in a graphical user interface that appears upon user interaction
ConTeXt, a macro package for the TeX typesetting system
ConTEXT, a text editor for Microsoft Windows
Operational context, a temporarily defined environment of cooperation
Context (term rewriting), a formal expression with a hole
Other uses
Context (festival), an annual Russian festival of modern choreography
Archaeological context, an event in time which has been preserved in the archaeological record
Opaque context, the linguistic context in which substitution of co-referential expressions does not preserve truth
Trama (mycology) (context or flesh), the mass of non-hymenial tissues that composes the mass of a fungal fruiting body
Context (rapper), also known as Context MC, stage name of George Musgrave
See also
Contextual (disambiguation)
Contextualization (disambiguation)
Locality (disambiguation)
State (disambiguation) |
https://en.wikipedia.org/wiki/Stratification | Stratification may refer to:
Mathematics
Stratification (mathematics), any consistent assignment of numbers to predicate symbols
Data stratification in statistics
Earth sciences
Stable and unstable stratification
Stratification, or stratum, the layering of rocks
Stratification (archeology), the formation of layers (strata) in which objects are found
Stratification (water), the formation of water layers based on temperature (and salinity, in oceans)
Ocean stratification
Lake stratification
Atmospheric stratification, the dividing of the Earth's atmosphere into strata
Inversion (meteorology)
Social sciences
Social stratification, the dividing of a society into levels based on power or socioeconomic status
Biology
Stratification (seeds), where seeds are treated to simulate winter conditions so that germination may occur
Stratification (clinical trials), partitioning of subjects by factors other than the intervention
Stratification (vegetation), the vertical layering of vegetation e.g. within a forest
Population stratification, the stratification of a genetic population based on allele frequencies
Linguistics
Stratification (linguistics), the idea that language is organized in hierarchically ordered strata (such as phonology, morphology, syntax, and semantics).
See also
Destratification (disambiguation)
Fuel stratified injection
Layer (disambiguation)
Partition (disambiguation)
Strata (disambiguation)
Stratified epithelial lining (disambiguation)
Stratified sampling
Stratigraphy
Stratum (disambiguation) |
https://en.wikipedia.org/wiki/DDC | DDC may stand for:
Computing
Distributed Disaggregated Chassis, an open networking design for a router chassis submitted by AT&T to the Open Compute Project.
Digital distribution copy
Digital down converter, a method in digital signal processing
Display Data Channel, a communication protocol between a graphics card and a monitor defined by VESA
Other
Dansk Datamatik Center, a Danish software research and development centre of the 1980s
DDC-I, a Danish and American company created from the work of the above
Deep Dickollective or D/DC
Defense Documentation Center for Scientific and Technical Information (United States; until 1963: ASTIA Armed Services Technical Information Agency, from 1979: DTIC Defense Technical Information Center)
Detroit Diesel Corporation
Dewey Decimal Classification
Dideoxycytidine or ddC or zalcitabine
Direct digital control, reading and steering of HVAC devices
District Development Council
Dodge City Regional Airport's IATA code
DOPA decarboxylase or Aromatic-L-amino-acid decarboxylase
Double disc court
Dzongkha Development Commission
United States District Court for the District of Columbia
Dairy Development Corporation of Nepal
District Development Committee (in Nepal) |
https://en.wikipedia.org/wiki/Numerical%20integration | In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral.
The term numerical quadrature (often abbreviated to quadrature) is more or less a synonym for "numerical integration", especially as applied to one-dimensional integrals. Some authors refer to numerical integration over more than one dimension as cubature; others take "quadrature" to include higher-dimensional integration.
The basic problem in numerical integration is to compute an approximate solution to a definite integral
to a given degree of accuracy. If the integrand f(x) is a smooth function integrated over a small number of dimensions, and the domain of integration is bounded, there are many methods for approximating the integral to the desired precision.
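One of the simplest such methods is the composite trapezoidal rule, sketched here in Python (function and parameter names are illustrative):

```python
def trapezoid(f, a, b, n=1000):
    """Approximate the definite integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n                  # width of each subinterval
    s = 0.5 * (f(a) + f(b))          # endpoints carry half weight
    for i in range(1, n):
        s += f(a + i * h)            # interior points carry full weight
    return s * h

# The integral of x**2 over [0, 1] is exactly 1/3:
print(trapezoid(lambda x: x * x, 0.0, 1.0))  # ≈ 0.33333
```

Halving h roughly quarters the error for smooth integrands, since the trapezoidal rule's error is O(h²).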
Numerical integration has roots in the geometrical problem of finding a square with the same area as a given plane figure (quadrature or squaring), as in the quadrature of the circle.
The term is also sometimes used to describe the numerical solution of differential equations.
Motivation and need
There are several reasons for carrying out numerical integration, as opposed to analytical integration by finding the antiderivative:
The integrand f(x) may be known only at certain points, such as obtained by sampling. Some embedded systems and other computer applications may need numerical integration for this reason.
A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative that is an elementary function. An example of such an integrand is f(x) = exp(−x2), the antiderivative of which (the error function, times a constant) cannot be written in elementary form.
It may be possible to find an antiderivative symbolically, but it may be easier to compute a numerical approximation than to compute the antiderivative. That may be the case if the antiderivative is given as an infinite series or product, or if its evaluation requires a special function that is not available.
History
The term "numerical integration" first appears in 1915 in the publication A Course in Interpolation and Numeric Integration for the Mathematical Laboratory by David Gibb.
"Quadrature" is a historical mathematical term that means calculating area. Quadrature problems have served as one of the main sources of mathematical analysis. Mathematicians of Ancient Greece, according to the Pythagorean doctrine, understood calculation of area as the process of constructing geometrically a square having the same area (squaring). That is why the process was named "quadrature". For example, a quadrature of the circle, Lune of Hippocrates, The Quadrature of the Parabola. This construction must be performed only by means of compass and straightedge.
The ancient Babylonians used the trapezoidal rule to integrate the motion of Jupiter along the ecliptic.
For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side (the G |
https://en.wikipedia.org/wiki/Interface%20%28object-oriented%20programming%29 | In object-oriented programming, an interface or protocol type is a data type that acts as an abstraction of a class. It describes a set of method signatures, the implementations of which may be provided by multiple classes that are otherwise not necessarily related to each other. A class which provides the methods listed in a protocol is said to adopt the protocol, or to implement the interface.
If objects are fully encapsulated then the protocol is the only way in which they may be accessed by other objects. For example, in Java, the Comparable interface specifies a method compareTo() which implementing classes must implement. This means that a sorting method, for example, can sort a collection of any objects of types which implement the Comparable interface, without having to know anything about the inner nature of the class (except that two of these objects can be compared by means of compareTo()).
Some programming languages provide explicit language support for protocols (Ada, C#, D, Dart, Delphi, Go, Java, Logtalk, Object Pascal, Objective-C, OCaml, PHP, Racket, Seed7, Swift, Python 3.8). In languages supporting multiple inheritance, such as C++, interfaces are implemented as abstract classes.
In languages without explicit support, protocols are often still present as conventions. This is known as duck typing. For example, in Python, any class can implement an __iter__ method and be used as a collection.
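The `__iter__` convention mentioned above can be shown in a few lines; the `Countdown` class is a made-up example, not from any library.

```python
class Countdown:
    """No declared interface: any class providing __iter__ can be used
    wherever Python expects an iterable (duck typing)."""
    def __init__(self, start):
        self.start = start
    def __iter__(self):
        return iter(range(self.start, 0, -1))

# Built-in functions accept Countdown because it "quacks" like an iterable:
print(list(Countdown(3)))  # [3, 2, 1]
print(sum(Countdown(3)))   # 6
```

Nothing relates `Countdown` to `list` or `sum` except adherence to the informal protocol.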
Type classes in languages like Haskell, or module signatures in ML and OCaml, are used for many of the things that protocols are used for.
See also
Concept (generic programming)
Delegation (programming)
Protocols in Objective-C
Class (computer science)
Encapsulation (computer science)
Public interface
Interface (Java)
List of basic computer science topics
Application programming interface
Notes
References
Object-oriented programming |
https://en.wikipedia.org/wiki/CP/M-86 | CP/M-86 is a discontinued version of the CP/M operating system that Digital Research (DR) made for the Intel 8086 and Intel 8088. The system commands are the same as in CP/M-80. Executable files used the relocatable .CMD file format. Digital Research also produced a multi-user multitasking operating system compatible with CP/M-86, MP/M-86, which later evolved into Concurrent CP/M-86. When an emulator was added to provide PC DOS compatibility, the system was renamed Concurrent DOS, which later became Multiuser DOS, of which REAL/32 is the latest incarnation. The FlexOS, DOS Plus, and DR DOS families of operating systems started as derivations of Concurrent DOS as well.
History
Digital Research's CP/M-86 was originally announced to be released in November 1979, but was delayed repeatedly. When IBM contacted other companies to obtain components for the IBM PC, the as-yet unreleased CP/M-86 was its first choice for an operating system because CP/M had the most applications at the time. Negotiations between Digital Research and IBM quickly deteriorated over IBM's non-disclosure agreement and its insistence on a one-time fee rather than DRI's usual royalty licensing plan.
After discussions with Microsoft, IBM decided to use 86-DOS (QDOS), a CP/M-like operating system that Microsoft bought from Seattle Computer Products and renamed MS-DOS. Microsoft adapted it for the PC and licensed it to IBM, which sold it under the name PC DOS. After learning about the deal, Digital Research founder Gary Kildall threatened to sue IBM for infringing DRI's intellectual property, and IBM agreed to offer CP/M-86 as an alternative operating system on the PC to settle the claim. Most of the BIOS drivers for CP/M-86 for the IBM PC were written by Andy Johnson-Laird.
The IBM PC was announced on 12 August 1981, and the first machines began shipping in October the same year, ahead of schedule. CP/M-86 was one of three operating systems available from IBM, with PC DOS and UCSD p-System. Digital Research's adaptation of CP/M-86 for the IBM PC was released six months after PC DOS in spring 1982, and porting applications from CP/M-80 to either operating system was about equally difficult. In November 1981, Digital Research also released a version for the proprietary IBM Displaywriter.
On some dual-processor 8-bit/16-bit computers special versions of CP/M-86 could natively run CP/M-86 and CP/M-80 applications. A version for the DEC Rainbow was named CP/M-86/80, whereas the version for the was named CP/M 8-16 (see also: MP/M 8-16). The version of CP/M-86 for the 8085/8088-based Zenith Z-100 supported running programs for both processors as well.
When PC clones came about, Microsoft licensed MS-DOS to other companies as well. Experts found that the two operating systems were technically comparable, with CP/M-86 having better memory management but DOS being faster. BYTE speculated that Microsoft reserving multitasking for Xenix "appears to leave a big opening" for Concurren |
https://en.wikipedia.org/wiki/Case-based%20reasoning | In artificial intelligence and philosophy, case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems.
In everyday life, an auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents or a judge who creates case law is using case-based reasoning. So, too, an engineer copying working elements of nature (practicing biomimicry) is treating nature as a database of solutions to problems. Case-based reasoning is a prominent type of analogy-based problem solving.
It has been argued that case-based reasoning is not only a powerful method for computer reasoning, but also a pervasive behavior in everyday human problem solving; or, more radically, that all reasoning is based on past cases personally experienced. This view is related to prototype theory, which is most deeply explored in cognitive science.
Process
Case-based reasoning has been formalized for purposes of computer reasoning as a four-step process:
Retrieve: Given a target problem, retrieve cases relevant to solving it from memory. A case consists of a problem, its solution, and, typically, annotations about how the solution was derived. For example, suppose Fred wants to prepare blueberry pancakes. Being a novice cook, the most relevant experience he can recall is one in which he successfully made plain pancakes. The procedure he followed for making the plain pancakes, together with justifications for decisions made along the way, constitutes Fred's retrieved case.
Reuse: Map the solution from the previous case to the target problem. This may involve adapting the solution as needed to fit the new situation. In the pancake example, Fred must adapt his retrieved solution to include the addition of blueberries.
Revise: Having mapped the previous solution to the target situation, test the new solution in the real world (or a simulation) and, if necessary, revise. Suppose Fred adapted his pancake solution by adding blueberries to the batter. After mixing, he discovers that the batter has turned blue – an undesired effect. This suggests the following revision: delay the addition of blueberries until after the batter has been ladled into the pan.
Retain: After the solution has been successfully adapted to the target problem, store the resulting experience as a new case in memory. Fred, accordingly, records his new-found procedure for making blueberry pancakes, thereby enriching his set of stored experiences, and better preparing him for future pancake-making demands.
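The four-step cycle above can be sketched as a toy Python program; the similarity measure, case representation, and all names here are invented for illustration, and the Revise step is deliberately omitted since it requires testing in the real world.

```python
def similarity(a, b):
    """Crude similarity: Jaccard overlap of the words in two problem descriptions."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def solve(problem, case_base):
    # Retrieve: recall the most similar past case from memory
    best = max(case_base, key=lambda c: similarity(problem, c["problem"]))
    # Reuse: adapt the retrieved solution to the new situation (here, just annotate it)
    solution = best["solution"] + " (adapted for: " + problem + ")"
    # Revise: would test the solution in the world or a simulation; omitted here.
    # Retain: store the resulting experience as a new case
    case_base.append({"problem": problem, "solution": solution})
    return solution

cases = [{"problem": "make plain pancakes", "solution": "mix batter, fry"}]
print(solve("make blueberry pancakes", cases))
print(len(cases))  # 2: the new experience was retained
```

As in the pancake example, the retrieved plain-pancake case is adapted and the enriched case base better prepares the solver for the next query.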
Comparison to other methods
At first glance, CBR may seem similar to the rule induction algorithms of machine learning. Like a rule-induction algorithm, CBR starts with a set of cases or training examples; it forms generalizations of these examples, albeit implicit ones, by identifying commonalities between a retrieve |
https://en.wikipedia.org/wiki/Sprite%20comic | Sprite comics are webcomics that consist primarily of computer sprites from video games. Art assets are ripped from various classic games such as Mega Man and Sonic the Hedgehog, are edited and combined by amateur cartoonists, and are posted on the internet. Popularized by Bob and George in the early 2000s, the style is considered relatively easy for beginning cartoonists to get involved in, but sprite comics are generally looked down upon for being of low quality. The format has not seen mainstream attention since 8-bit Theater concluded in 2010.
History
The 1998 webcomic Neglected Mario Characters was the first sprite comic to appear on the internet, though Bob and George was the first sprite comic to gain widespread popularity. Starting its run in 2000, Bob and George utilizes sprites from the Mega Man series of games, with most of the characters being taken directly from the games. Bob and George played a significant role in the popularity of sprite comics, as well as webcomics in general.
Art assets were ripped from Super NES, Sega Genesis, and Game Boy Advance games and were collected in online databases such as The Spriters Resource. A platform game such as Sonic Advance may contain hundreds of sprites of its protagonist running, jumping, and falling, and cartoonists frequently recolored characters or edited them to convey a broader range of emotion. Over time, sprite comic creators collaborated on projects such as the World Spriters Tournament, in which cartoonists let their sprite comic characters fight one another.
Few sprite comics have gained mainstream attention since 8-Bit Theater ended in 2010. Though sprite comics are still highly popular among amateur cartoonists, Larry Cruz from Comic Book Resources noted that the aesthetic is played out.
Style
Sprite comics mainly use graphics from 1980s video games, such as Mega Man and Final Fantasy. Lore Sjöberg from Wired stated that sprite comics "re-create the feel of [such games] with a minimum of artistic effort." Penny Arcade's Mike Krahulik pointed out that sprite comics are a good way for people who can't draw well to create comics. Cruz pointed out that the aesthetic has "evolved and flourished in a variety of media" since. However, the style is also commonly criticized. Cruz described sprite comics as "the favorite style for the laziest webcomic creators," while Sjöberg pointed out that sprite comics are often seen as substandard by comic fans. Both Chris Dlugosz and Michael Zole (Death to the Extremist) have criticized the style, with Zole stating that creators of sprite comics "seem to think that they're scoring humor points just by reusing old pixelated characters," and Dlugosz devoting his webcomic Pixel explicitly to making fun of the practice.
In a review of the webcomic Kid Radd, Dani Atkinson of Sequential Tart noted that people without a gamer background may find that "much of the irony and humour in [sprite comics] goes swooshing over [their] head." However, she also |
https://en.wikipedia.org/wiki/Star%20Raiders | Star Raiders is a space combat simulator video game that was written by Doug Neubauer and published in 1980 by Atari, Inc. for the Atari 400/800 computers. The player assumes the role of a starship pilot who is fighting Zylon forces while managing their ship's energy and systems, and protecting friendly starbases. Starflight and combat are shown in the 3D cockpit view with a 2D galactic map showing the status of the Zylon invasion. The television series Battlestar Galactica, the film Star Wars (1977), and the 1971 mainframe game Star Trek influenced Neubauer, who began developing Star Raiders in his non-working time at Atari. Star Raiders was later ported to the Atari 2600, Atari 5200, and Atari ST.
Matt Barton and Bill Loguidice of Gamasutra called Star Raiders one of the best-known games for Atari's 400 and 800 computers. It influenced space combat games such as Elite (1984) and Wing Commander (1990), a sequel named Star Raiders II, and a 2011 remake. Star Raiders was included in a list of ten games that were submitted as a game canon to the Library of Congress in 2007.
Plot and gameplay
Star Raiders is set in outer space. The player assumes the role of the captain of an elite Atarian starship fleet. A treaty between the Atarian Federation and the Zylon Empire is broken, leading to a war; the player must combat the Zylons before they eliminate humanity. The player must destroy the Zylon ships before they destroy the player's ship or the ship runs out of energy. The game is controlled using both a keyboard and a joystick. The game is primarily experienced from a first-person, 3D cockpit view, with larger, 2D map overviews for long-distance travel. The player can control the speed of travel in space and the angle of display (front and rear views), and can engage a mini-display called the Attack Computer Display that shows the horizontal and vertical coordinates of attacking Zylon ships and other targets. During battle, the player can destroy enemy ships using photon torpedoes; they must also avoid or destroy asteroids, which can damage or destroy their starship. In this mode, the bottom of the screen shows a control-panel display with velocity (V), the number of enemies destroyed (K), the ship's remaining energy (E) and the number of targets in the area (T). Firing photon torpedoes, using shields, and traversing space deplete energy. To restore energy or repair damaged ship parts, the player must orbit a friendly starbase and match its coordinates with the Attack Computer Display.
A long-range scanner changes the game's display, giving the player a top-down view of their ship, with flashing squares indicating targets' locations. When the long-range scanner is damaged, the player sees both the objects in the area and false reflections of them. The player can also view a galactic chart, a grid map that indicates friendly starbases, enemy ships and the player's location. Players can navigate to different |
https://en.wikipedia.org/wiki/Tcpdump | tcpdump is a data-network packet analyzer computer program that runs under a command line interface. It allows the user to display TCP/IP and other packets being transmitted or received over a network to which the computer is attached. Distributed under the BSD license, tcpdump is free software.
Tcpdump works on most Unix-like operating systems: Linux, Solaris, FreeBSD, DragonFly BSD, NetBSD, OpenBSD, OpenWrt, macOS, HP-UX 11i, and AIX. In those systems, tcpdump uses the libpcap library to capture packets. The port of tcpdump for Windows is called WinDump; it uses WinPcap, the Windows version of libpcap.
History
tcpdump was originally written in 1988 by Van Jacobson, Sally Floyd, Vern Paxson and Steven McCanne who were, at the time, working in the Lawrence Berkeley Laboratory Network Research Group. By the late 1990s there were numerous versions of tcpdump distributed as part of various operating systems, and numerous patches that were not well coordinated. Michael Richardson (mcr) and Bill Fenner created www.tcpdump.org in 1999.
Common uses
tcpdump prints the contents of network packets. It can read packets from a network interface card or from a previously created saved packet file. tcpdump can write packets to standard output or a file.
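The saved packet files tcpdump reads and writes use the pcap format from libpcap. The sketch below writes a minimal pcap global header and reads it back with Python's standard library; the helper names and the `example.pcap` filename are ours, but the 24-byte header layout (magic number, version 2.4, snapshot length, link type) follows the standard pcap savefile format.

```python
# Write and parse the 24-byte pcap global header that begins every
# tcpdump capture file ("-w file" / "-r file").
import struct

PCAP_MAGIC = 0xA1B2C3D4  # magic number for microsecond-resolution pcap

def write_pcap_header(path, linktype=1):  # linktype 1 = Ethernet
    header = struct.pack("<IHHiIII",
                         PCAP_MAGIC,  # magic number (also encodes byte order)
                         2, 4,        # format version 2.4
                         0,           # timezone offset (GMT)
                         0,           # timestamp accuracy (unused)
                         65535,       # snapshot length per packet
                         linktype)    # data link type
    with open(path, "wb") as f:
        f.write(header)

def read_pcap_header(path):
    with open(path, "rb") as f:
        magic, major, minor, *_, snaplen, linktype = struct.unpack(
            "<IHHiIII", f.read(24))
    return magic, (major, minor), snaplen, linktype

write_pcap_header("example.pcap")
magic, version, snaplen, linktype = read_pcap_header("example.pcap")
```

A real capture file would follow this header with per-packet records (timestamp, captured length, original length, packet bytes), which tools such as tcpdump or Wireshark can then replay.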
It is also possible to use tcpdump for the specific purpose of intercepting and displaying the communications of another user or computer. A user with the necessary privileges on a system acting as a router or gateway through which unencrypted traffic such as Telnet or HTTP passes can use tcpdump to view login IDs, passwords, the URLs and content of websites being viewed, or any other unencrypted information.
The user may optionally apply a BPF-based filter to limit the number of packets seen by tcpdump; this renders the output more usable on networks with a high volume of traffic.
Example of available capture interfaces on a Linux system:
$ tcpdump -D
1.eth0 [Up, Running, Connected]
2.any (Pseudo-device that captures on all interfaces) [Up, Running]
3.lo [Up, Running, Loopback]
4.bluetooth-monitor (Bluetooth Linux Monitor) [Wireless]
5.usbmon2 (Raw USB traffic, bus number 2)
6.usbmon1 (Raw USB traffic, bus number 1)
7.usbmon0 (Raw USB traffic, all USB buses) [none]
8.nflog (Linux netfilter log (NFLOG) interface) [none]
9.nfqueue (Linux netfilter queue (NFQUEUE) interface) [none]
10.dbus-system (D-Bus system bus) [none]
11.dbus-session (D-Bus session bus) [none]
12.bluetooth0 (Bluetooth adapter number 0)
13.eth1 [none, Disconnected]
Privileges required
In some Unix-like operating systems, a user must have superuser privileges to use tcpdump because the packet capturing mechanisms on those systems require elevated privileges. However, the -Z option may be used to drop privileges to a specific unprivileged user after capturing has been set up. In other Unix-like operating systems, the packet capturing mechanism can be configured to allow non-privileged users to use it; if that is done, superuser priv |
https://en.wikipedia.org/wiki/Jarkko%20Oikarinen | Jarkko Oikarinen (born 16 August 1967) is a Finnish IT professional and the inventor of the first Internet chat network, called Internet Relay Chat (IRC), where he is known as WiZ.
Biography and career
Oikarinen was born in Kuusamo. While working at the University of Oulu in August 1988, he wrote the first IRC server and client programs, which he produced to replace the MUT (MultiUser Talk) program on the Finnish BBS OuluBox. Using the Bitnet Relay chat system as inspiration, Oikarinen continued to develop IRC over the next four years, receiving assistance from Darren Reed in co-authoring the IRC Protocol. In 1997, his development of IRC earned Oikarinen a Dvorak Award for Personal Achievement—Outstanding Global Interactive Personal Communications System; in 2005, the Millennium Technology Prize Foundation, a Finnish public-private partnership, honored him with one of three Special Recognition Awards.
He began working on medical image processing in 1990 at Oulu University Hospital, developing research software for a neurosurgical workstation in Professor John Koivukangas' research group, and between 1993 and 1996 he worked for Elekta in Stockholm, Sweden and Grenoble, France, turning that research into commercial products marketed by Elekta. In 1997 he returned to Oulu University Hospital to finish his PhD as Joint Assistant Professor / Research Engineer, receiving the PhD from the University of Oulu in 1999 in the areas of computer graphics and medical imaging. During these years he focused on telemedicine, volume rendering, signal processing and computed axial tomography. Since finishing his PhD, he has held the positions of Chief Software Architect of Add2Phone Oy (Helsinki, Finland), Head of R&D at Capricode (Oulu, Finland) and General Manager at Nokia.
He is also partner and chief software architect at an electronic games developer called Numeric Garden (Espoo, Finland).
Oikarinen and his wife, Kaija-Leena, were married in 1996 and have three children: Kasper, Matleena, and Marjaana.
Oikarinen has been working for Google since 2011, initially in Stockholm, Sweden, and since 2016 in Kirkland, Washington. He is working on the Google Hangouts and Google Meet projects.
Sources
External links
Jarkko Oikarinen's homepage
A history of IRC by Oikarinen at the IRC website
1967 births
Computer programmers
Finnish computer scientists
Jarkko Oikarinen
Google employees
Living people
University of Oulu alumni
Nokia people
People from Kuusamo
Finnish expatriates in Sweden
Finnish expatriates in the United States |