source | text |
|---|---|
https://en.wikipedia.org/wiki/God%20%26%20Golem%2C%20Inc. | God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion is a book written by MIT cybernetician Norbert Wiener. It won the second annual U.S. National Book Award in the category Science, Philosophy and Religion.
It is based on material from a series of lectures that Wiener gave at Yale in 1962, and a seminar he led at the Colloques Philosophiques Internationaux de Royaumont near Paris later that year.
God and Golem presents Wiener's ideas on machine learning, machine reproduction, and the place of machines in society, with some religious context.
Wiener mentions some of his secondary concerns: sensory feedback in artificial limbs, the problems of human responsibility in relation with technology, the limits of machine game-playing, Darwinism, Marxism, the Cold War, the rigidity of ideological thinking, and a critique of the claims of econometrics and mathematical economics to be regarded as being scientific.
In the conclusion, he brings the burden of ethics to politics, away from religion.
Statements in the book are quoted in the science-fiction novels Hyperion and The Fall of Hyperion, written by Dan Simmons.
References
External links
God and Golem, Inc. at MIT Press
God & Golem, Inc. (full text), English and Spanish language
Ethics books
National Book Award-winning works
1964 non-fiction books
MIT Press books |
https://en.wikipedia.org/wiki/Paramount%20Network | Paramount Network is an American basic cable television channel owned by the MTV Entertainment Group unit of Paramount Media Networks, a division of Paramount Global. The network's headquarters are located at the Paramount Pictures studio lot in Los Angeles.
The channel was originally founded by a partnership between radio station WSM and Westinghouse Broadcasting as The Nashville Network (TNN) and began broadcasting on March 7, 1983. It initially featured programming catering towards the culture of the Southern United States, including country music, variety shows, outdoors programming, and motor racing coverage (such as NASCAR). TNN was purchased by the Gaylord Entertainment Company in 1983. After Gaylord bought CMT in 1991, TNN's music programming was shifted to CMT, leaving TNN to focus on entertainment and lifestyle programming.
In 1995, TNN and CMT were acquired by Westinghouse, which was in turn acquired by Viacom in 1999. Under Viacom ownership, TNN would phase out country-influenced programming in favor of a general entertainment format appealing to Middle America. It was renamed The National Network in September 2000, coinciding with the network premiere of WWF Raw. In August 2003, TNN relaunched as Spike TV, which targeted a young adult male audience. From June 2006, the network's programming had a more explicit focus on the action genre, while in 2010, the network had an increased focus on original reality series. This culminated with a final rebrand in 2015 to emphasize gender-balanced series (such as Lip Sync Battle) and a return to original scripted programming. On January 18, 2018, Spike relaunched as Paramount Network, aiming to align the network with its namesake studio (which previously lent its name to the now-defunct United Paramount Network), and to position it as a flagship, "premium" channel.
One of Paramount Network's only major successes in scripted programming has been Yellowstone—which quickly became the channel's flagship drama, and has spawned multiple spin-offs on Paramount+, the streaming service owned by its parent company Paramount Global. In 2020 and 2021, the channel cancelled most of its original series or moved them to other Paramount Global networks, as part of a proposed plan to relaunch the Paramount Network with a focus on made-for-TV films. By January 2022, these plans had been scrapped due to the impact of COVID-19 and success of the Yellowstone franchise, leaving it and Spike holdover Bar Rescue as the channel's only original, first-run programs. The channel has also featured limited engagements of new Paramount+ original series by Yellowstone co-creator Taylor Sheridan, using Yellowstone as a lead-in.
As of September 2018, approximately 80.24 million households in the United States received Paramount Network.
History
The Nashville Network (1983–2000)
The Nashville Network first launched on March 7, 1983; it was dedicated to the culture and lifestyle of country music and the U.S. South. It ori |
https://en.wikipedia.org/wiki/Protagonistas%20de%20Novela | Protagonistas de Novela is a Spanish-language television series that has been produced since 2002 by Telemundo Network USA, based on the Protagonistas... franchise.
Protagonistas de Novela, a reality show whose winners are guaranteed a spot in a future telenovela, has the same format and rules as Protagonistas de la Música; it was released before its musical counterpart, and both shows are produced by Telemundo USA.
The female winner of the show's first season, Millie Ruperto, has already participated in a soap opera, playing a boxing trainer, alongside Venezuelan superstar Gaby Spanic.
The winners of the show's second season in 2003 were Mexican Erick Elías and Dominican Michelle Vargas.
This particular season was highly criticized for its results, since Elías won with an alleged 50.1% of the vote over the heavy favorite, Puerto Rican actor Alfredo De Quesada, who had 49.1% of the vote according to the show.
There are also versions of Protagonistas de Novela in Chile (called Protagonistas de la Fama), Colombia and Venezuela. In Chile (airing on Canal 13), the winners were Catalina Bono and Álvaro Ballero. In Colombia (airing on RCN), the winners were Ximena Córdoba and Jaider Villa.
External links
Telemundo original programming |
https://en.wikipedia.org/wiki/Librarian | A librarian is a person who works professionally in a library providing access to information, and sometimes social or technical programming, or instruction on information literacy to users.
The role of the librarian has changed much over time, with the past century in particular bringing many new media and technologies into play. From the earliest libraries in the ancient world to the modern information hub, there have been keepers and disseminators of the information held in data stores. Roles and responsibilities vary widely depending on the type of library, the specialty of the librarian, and the functions needed to maintain collections and make them available to its users.
Education for librarianship has changed over time to reflect changing roles.
History
The ancient world
The Sumerians were the first to train clerks to keep records of accounts. "Masters of the books" or "keepers of the tablets" were scribes or priests who were trained to handle the vast amount and complexity of these records. The extent of their specific duties is unknown.
Sometime in the 7th century BC, Ashurbanipal, King of Assyria, created a library at his palace in Nineveh in Mesopotamia. Ashurbanipal was the first individual in history to introduce librarianship as a profession. We know of at least one "keeper of the books" who was employed to oversee the thousands of tablets on Sumerian and Babylonian materials, including literary texts; history; omens; astronomical calculations; mathematical tables; grammatical and linguistic tables; dictionaries; and commercial records and laws. All of these tablets were cataloged and arranged in logical order by subject or type, each having an identification tag.
The Great Library of Alexandria, created by Ptolemy I after the death of Alexander the Great in 323 BC, was created to house the entirety of Greek literature. It was notable for its famous librarians: Demetrius, Zenodotus, Eratosthenes, Apollonius, Aristophanes, Aristarchus, and Callimachus. These scholars contributed significantly to the collection and cataloging of the wide variety of scrolls in the library's collection. Most notably, Callimachus created what is considered to be the first subject catalog of the library holdings, called the pinakes. The pinakes contained 120 scrolls arranged into ten subject classes; each class was then subdivided, listing authors alphabetically by titles. The librarians at Alexandria were considered the "custodians of learning".
Near the end of the Roman Republic and the beginning of the Roman Empire, it was common for Roman aristocrats to hold private libraries in their homes. Many of these aristocrats, such as Cicero, kept the contents of their private libraries to themselves, boasting only of the enormity of their collections. Others, such as Lucullus, took on the role of lending librarian by sharing scrolls in their collections. Many Roman emperors incorporated public libraries into their political propaganda to win favor from citizen |
https://en.wikipedia.org/wiki/Selectron%20tube | The Selectron was an early form of digital computer memory developed by Jan A. Rajchman and his group at the Radio Corporation of America (RCA) under the direction of Vladimir K. Zworykin. It was a vacuum tube that stored digital data as electrostatic charges using technology similar to the Williams tube storage device. The team was never able to produce a commercially viable form of Selectron before magnetic-core memory became almost universal.
Development
Development of Selectron started in 1946 at the behest of John von Neumann of the Institute for Advanced Study, who was in the midst of designing the IAS machine and was looking for a new form of high-speed memory.
RCA's original design concept had a capacity of 4096 bits, with a planned production of 200 by the end of 1946. They found the device to be much more difficult to build than expected, and they were still not available by the middle of 1948. As development dragged on, the IAS machine was forced to switch to Williams tubes for storage, and the primary customer for Selectron disappeared. RCA lost interest in the design and assigned its engineers to improve televisions.
A contract from the US Air Force led to a re-examination of the device in a 256-bit form. Rand Corporation took advantage of this project to switch their own IAS machine, the JOHNNIAC, to this new version of the Selectron, using 80 of them to provide 512 40-bit words of main memory. They signed a development contract with RCA to produce enough tubes for their machine at a projected cost of $500 per tube.
Around this time IBM expressed an interest in the Selectron as well, but this did not lead to additional production. As a result, RCA assigned their engineers to color television development, and put the Selectron in the hands of "the mothers-in-law of two deserving employees (the Chairman of the Board and the President)."
Both the Selectron and the Williams tube were superseded in the market by the compact and cost-effective magnetic-core memory, in the early 1950s. The JOHNNIAC developers had decided to switch to core even before the first Selectron-based version had been completed.
Principle of operation
Electrostatic storage
The Williams tube was an example of a general class of cathode ray tube (CRT) devices known as storage tubes.
The primary function of a conventional CRT is to display an image by lighting phosphor using a beam of electrons fired at it from an electron gun at the back of the tube. The target point of the beam is steered around the front of the tube through the use of deflection magnets or electrostatic plates.
Storage tubes were based on CRTs, sometimes unmodified. They relied on two normally undesirable principles of phosphor used in the tubes. One was that when electrons from the CRT's electron gun struck the phosphor to light it, some of the electrons "stuck" to the tube and caused a localized static electric charge to build up. The second was that the phosphor, like many mat |
https://en.wikipedia.org/wiki/Portable%20computer | A portable computer is a computer designed to be easily moved from one place to another, as opposed to those designed to remain stationary at a single location such as desktops and workstations. These computers usually include a display and keyboard that are directly connected to the main case, all sharing a single power plug together, much like later desktop computers called all-in-ones (AIO) that integrate the system's internal components into the same case as the display. In modern usage, a portable computer usually refers to a very light and compact personal computer such as a laptop, miniature or pocket-sized computer, while touchscreen-based handheld ("palmtop") devices such as tablet, phablet and smartphone are called mobile devices instead.
The first commercially sold portable computer may have been the MCM/70, released in 1974. The next major portables were the IBM 5100 (1975), Osborne's CP/M-based Osborne 1 (1981), and Compaq's Compaq Portable (1983), advertised as 100% IBM PC compatible. These luggable computers still required a continuous connection to an external power source; this limitation was later overcome by laptop computers. Laptops were followed by lighter models such as netbooks, and the rise of mobile devices in the 2000s, and of smartphones by 2007, made the term "portable" rather meaningless. The 2010s introduced wearable computers such as smartwatches.
Portable computers, by their nature, are generally microcomputers. Larger portable computers were commonly known as 'Lunchbox' or 'Luggable' computers. They are also called 'Portable Workstations' or 'Portable PCs'. In Japan they were often given a nickname derived from "bento".
Portable computers, more narrowly defined, are distinct from desktop replacement computers in that they were usually constructed from full-specification desktop components, and often do not incorporate features associated with laptops or mobile devices. A portable computer in this usage, as opposed to a laptop or other mobile computing device, has a standard motherboard or backplane providing plug-in slots for add-in cards. This allows mission-specific cards such as test, A/D, or communication protocol (IEEE-488, 1553) cards to be installed. Portable computers also provide for more disk storage by using standard disk drives and provide for multiple drives.
Early history
SCAMP
In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable) based on the IBM PALM processor with a Philips compact cassette drive, small CRT and full function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL\1130. In 1973, APL was generally available only on mainframe computers, and most desktop sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL\1130 performance on a portable, single user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal comput |
https://en.wikipedia.org/wiki/Automatic%20number%20identification | Automatic number identification (ANI) is a feature of a telecommunications network for automatically determining the origination telephone number on toll calls for billing purposes. Automatic number identification was originally created by the American Telephone and Telegraph Company (AT&T) for long distance service in the Bell System, eliminating the need for telephone operators to manually record calls.
Modern ANI has two components: information digits, which identify the class of service, and the calling party billing telephone number.
The term is also used to describe the functions of two-way radio selective calling that identify the transmitting user.
ANI is distinct from newer caller ID services, such as call display, which are solely for informing a subscriber.
Toll-free telephone numbers
Modern toll-free telephone numbers, which generate itemized billing of all calls received instead of relying on the special fixed-rate trunks of the Bell System's original Inward WATS service, depend on ANI to track inbound calls to numbers in special area codes such as +1-800, 888, 877, 866, 855, 844 with 833 and 822 reserved for future toll free use (United States and Canada), 1800 (Australia) or 0800 and 0808 (United Kingdom).
Privacy
ANI is conceptually and technically different from caller ID service. A caller's telephone number and line type are captured by ANI service even if caller ID blocking is activated. The destination telephone company switching office can relay the originating telephone number to ANI delivery services subscribers. Toll-free subscribers and large companies normally have access to ANI, either instantly via installed equipment, or from a monthly billing statement. Residential subscribers can obtain access to ANI information through third party companies that charge for the service.
ANI is generally not transmitted when a call is operator assisted; only the area code of the last switch to route the call is sent.
Automatic number announcement
ANI is used to provide automatic number announcement, a test facility of a central office for telephone installation technicians. The service, which is not advertised to the public, allows an installer to identify a line by dialing a telephone number. Such numbers are typically assigned in a range reserved for testing purposes (such as 958-xxxx in much of North America).
DNIS
Dialed Number Identification Service (DNIS) is a related service feature available to private branch exchange subscribers. It transmits information about the destination number, which a service provider can use to have several toll-free numbers directed to the same call center and provide unique service. DNIS can also be used to identify other call routing information. For example, toll-free service can be configured to send a specific DNIS number that is assigned to callers from geographic regions based on city, area code, state, or country.
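As a rough illustration of how DNIS-based routing might work in a call-center application, here is a small Python sketch. The routing table, numbers, and function names are invented for the example, not real toll-free assignments or any provider's API:

```python
# Hypothetical DNIS -> treatment table for one call center; several
# toll-free numbers land in the same center but get distinct handling.
DNIS_ROUTES = {
    "8005551001": ("sales",   "en"),
    "8005551002": ("support", "en"),
    "8005551003": ("support", "es"),  # Spanish-language support line
}

def route_call(dnis, ani):
    """Pick a queue and greeting language from the dialed number (DNIS).

    ANI is carried along separately: it identifies the caller and can
    drive screen-pops or per-call billing, but it does not affect routing.
    """
    queue, language = DNIS_ROUTES.get(dnis, ("general", "en"))
    return {"queue": queue, "language": language, "caller": ani}
```

A call to the third number would be placed in the Spanish support queue regardless of who is calling, while an unknown DNIS falls through to a general queue.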
Similar services
Europe: Calling Line Identification (CLI)
Unit |
https://en.wikipedia.org/wiki/Unix%20File%20System | The Unix file system (UFS) is a family of file systems supported by many Unix and Unix-like operating systems. It is a distant descendant of the original filesystem used by Version 7 Unix.
Design
A UFS volume is composed of the following parts:
A few blocks at the beginning of the partition reserved for boot blocks (which must be initialized separately from the filesystem)
A superblock, containing a magic number identifying this as a UFS filesystem, and some other vital numbers describing this filesystem's geometry and statistics and behavioral tuning parameters
A collection of cylinder groups. Each cylinder group has the following components:
A backup copy of the superblock
A cylinder group header, with statistics, free lists, etc., about this cylinder group, similar to those in the superblock
A number of inodes, each containing file attributes
A number of data blocks
Inodes are numbered sequentially, starting at 0. Inode 0 is reserved for unallocated directory entries, and inode 1 was the inode of the bad block file in historical UNIX versions; these are followed by the inode for the root directory, which is always inode 2, and the inode for the lost+found directory, which is inode 3.
Directory files contain only the list of filenames in the directory and the inode associated with each file. All file metadata are kept in the inode.
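This split between directories and inodes can be sketched with a toy Python model (not actual kernel code; the field names are simplified stand-ins for the real on-disk structures): directories hold only name-to-inode-number pairs, while all metadata lives in the inode.

```python
from dataclasses import dataclass, field

@dataclass
class Inode:
    # All file metadata lives here, never in the directory entry.
    mode: int   # permission bits
    size: int   # file size in bytes
    owner: int  # owner's user id

@dataclass
class Directory:
    # A directory is just a list of (filename -> inode number) pairs.
    entries: dict = field(default_factory=dict)

# Inode table indexed by inode number; inode 2 is the root directory.
inodes = {2: Inode(mode=0o755, size=512, owner=0)}
root = Directory(entries={".": 2, "..": 2})

# Creating a file means allocating an inode and adding one name entry;
# a hard link would add a second name pointing at the same inode number.
inodes[4] = Inode(mode=0o644, size=0, owner=0)
root.entries["readme.txt"] = 4
```

One consequence of this design is visible even in the toy model: two directory entries can name the same inode, which is exactly how hard links work.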
History and evolution
Early Unix filesystems were referred to simply as FS. FS only included the boot block, superblock, a clump of inodes, and the data blocks. This worked well for the small disks early Unixes were designed for, but as technology advanced and disks grew larger, moving the head back and forth between the clump of inodes and the data blocks they referred to caused thrashing. Marshall Kirk McKusick, then a Berkeley graduate student, optimized the V7 FS layout to create BSD 4.2's FFS (Fast File System) by inventing cylinder groups, which break the disk up into smaller chunks, with each group having its own inodes and data blocks.
The intent of BSD FFS is to try to localize associated data blocks and metadata in the same cylinder group and, ideally, all of the contents of a directory (both data and metadata for all the files) in the same or nearby cylinder group, thus reducing fragmentation caused by scattering a directory's contents over a whole disk.
Some of the performance parameters in the superblock included number of tracks and sectors, disk rotation speed, head speed, and alignment of the sectors between tracks. In a fully optimized system, the head could be moved between close tracks to read scattered sectors from alternating tracks while waiting for the platter to spin around.
As disks grew larger and larger, sector-level optimization became obsolete (especially with disks that used linear sector numbering and variable sectors per track). With larger disks and larger files, fragmented reads became more of a problem. To combat this, BSD originally increased the filesystem block size from one |
https://en.wikipedia.org/wiki/Twistor%20memory | Twistor memory is a form of computer memory formed by wrapping magnetic tape around a current-carrying wire. Operationally, twistor was very similar to core memory. Twistor could also be used to make ROM memories, including a re-programmable form known as piggyback twistor. Both forms were able to be manufactured using automated processes, which was expected to lead to much lower production costs than core-based systems.
Introduced by Bell Labs in 1957, the first commercial use was in their 1ESS switch which went into operation in 1965. Twistor was used only briefly in the late 1960s and early 1970s, when semiconductor memory devices replaced almost all earlier memory systems. The basic ideas behind twistor also led to the development of bubble memory, although this had a similarly short commercial lifespan.
Core memory
Construction
In core memory, small ring-shaped magnets - the cores - are threaded by two crossed wires, X and Y, to make a matrix known as a plane. When one X and one Y wire are powered, a magnetic field is generated at a 45-degree angle to the wires. The core magnets sit on the wires at a 45-degree angle, so the single core wrapped around the crossing point of the powered X and Y wires will be affected by the induced field.
The materials used for the core magnets were specially chosen to have a very "square" magnetic hysteresis pattern. This meant that fields just below a certain threshold would do nothing, while those just above it would cause the core to flip its magnetization state abruptly. The square hysteresis pattern and sharp flipping behavior ensure that a single core can be addressed within a grid; nearby cores see a slightly different field and are not affected.
Data retrieval
The basic operation in a core memory is writing. This is accomplished by powering a selected X and Y wire both to the current level that will, by itself, create ½ the critical magnetic field. This will cause the field at the crossing point to be greater than the core's saturation point, and the core will pick up the external field. Ones and zeros are represented by the direction of the field, which can be set simply by changing the direction of the current flow in one of the two wires.
In core memory, a third wire - the sense/inhibit line - is needed to write or read a bit. Reading uses the process of writing; the X and Y lines are powered in the same fashion that they would be to write a "0" to the selected core. If that core held a "1" at that time, then the magnetic state flips to a "0" and the transition causes a short pulse of electricity to be induced into the sense/inhibit line. If no pulse is seen, then no flip occurred, thus the core already held a "0". This process is destructive; if the core did hold a "1", that pattern is destroyed during the read, and has to be re-set in a subsequent operation.
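The addressing and read-then-restore cycle described above can be mimicked in a small simulation. This is a sketch only: it ignores the electrical details (half-currents, thresholds, the inhibit function) and models just the logical behavior of writing a bit and destructively reading it back; the class and method names are invented for the example.

```python
class CorePlane:
    """Toy model of one plane of coincident-current core memory."""

    def __init__(self, size):
        # One bit per core, arranged in an X/Y grid.
        self.cores = [[0] * size for _ in range(size)]

    def write(self, x, y, bit):
        # In hardware, only the core where BOTH half-currents cross
        # sees a field above threshold; here we just set that core.
        self.cores[y][x] = bit

    def read(self, x, y):
        """Destructive read: drive the core toward 0 and watch the
        sense line. A pulse means the core flipped, i.e. it held a 1."""
        pulse = (self.cores[y][x] == 1)
        self.cores[y][x] = 0            # the stored 1, if any, is lost
        bit = 1 if pulse else 0
        self.write(x, y, bit)           # re-write to restore the value
        return bit
```

The restore step in `read` is why real core memory cycles are described as read-followed-by-write: without it, every read would erase the stored bit.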
The sense/inhibit line is shared by all of the cores in a particular plane, meaning that |
https://en.wikipedia.org/wiki/Application%20framework | In computer programming, an application framework consists of a software framework used by software developers to implement the standard structure of application software.
Application frameworks became popular with the rise of graphical user interfaces (GUIs), since these tended to promote a standard structure for applications. Programmers find it much simpler to create automatic GUI creation tools when using a standard framework, since this defines the underlying code structure of the application in advance. Developers usually use object-oriented programming (OOP) techniques to implement frameworks such that the unique parts of an application can simply inherit from classes extant in the framework.
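The inheritance pattern described above can be sketched as follows. The `Application` base class, its hooks, and `PaintApp` are hypothetical stand-ins for this example, not the API of any real framework:

```python
class Application:
    """Framework-supplied base class: it owns the standard structure
    (setup, event loop, teardown), so every application shares it."""

    def run(self):
        self.on_start()             # hook for app-specific setup
        for event in self.events():
            self.handle(event)      # hook for app-specific behavior
        self.on_quit()

    # Default, overridable hooks. A real GUI framework would pump
    # window-system events here instead of returning a plain list.
    def events(self): return []
    def on_start(self): pass
    def handle(self, event): pass
    def on_quit(self): pass


class PaintApp(Application):
    """The unique parts of the application simply override the hooks."""

    def on_start(self):
        self.strokes = []

    def events(self):
        return ["pen down", "pen up"]   # stand-in for real GUI events

    def handle(self, event):
        self.strokes.append(event)
```

Calling `PaintApp().run()` drives the framework's loop while executing only the subclass's overridden behavior, which is the division of labor the paragraph describes.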
Examples
Apple Computer developed one of the first commercial application frameworks, MacApp (first release 1985), for the Macintosh. Originally written in an extended (object-oriented) version of Pascal termed Object Pascal, it was later rewritten in C++. Another notable framework for the Mac is Metrowerks' PowerPlant, based on Carbon. Cocoa for macOS offers a different approach to an application framework, based on the OpenStep framework developed at NeXT.
Free and open-source software frameworks exist as part of the Mozilla, LibreOffice, GNOME, KDE, NetBeans, and Eclipse projects.
Microsoft markets a framework for developing Windows applications in C++ called the Microsoft Foundation Class Library, and a similar framework for developing applications with Visual Basic or C#, named .NET Framework.
Several frameworks can build cross-platform applications for Linux, Macintosh, and Windows from common source code, such as Qt, wxWidgets, Juce, Fox toolkit, or Eclipse Rich Client Platform (RCP).
Oracle Application Development Framework (Oracle ADF) aids in producing Java-oriented systems.
Silicon Laboratories offers an embedded application framework for developing wireless applications on its series of wireless chips.
MARTHA is a proprietary software Java framework that all of the RealObjects software is built on.
References
Programming tools
Proprietary software |
https://en.wikipedia.org/wiki/IChat | iChat (previously iChat AV) is a discontinued instant messaging software application developed by Apple Inc. for use on its Mac OS X operating system. It supported instant text messaging over XMPP/Jingle or OSCAR (AIM) protocol, audio and video calling, and screen-sharing capabilities. It also allowed for local network discussion with users discovered through Bonjour protocols.
In OS X 10.8 Mountain Lion, iChat was replaced by Messages for chat and FaceTime for video calling.
History
iChat was first released in August 2002 as part of Mac OS X 10.2. It featured integration with the Address Book and Mail applications and was the first officially supported AIM client that was native to Mac OS X (the first-party AIM application at the time was still running in Classic emulation).
One episode of the first season of the HBO dramedy series Entourage showed Eric Murphy having an iChat conversation with Ari Gold, marking the first time that this application was used in a television series.
Interface
iChat incorporated Apple's Aqua interface and used speech bubbles and pictures to personify the online chatting experience. With iChat, green (available), yellow (idle), and red (away) icons could be displayed next to the name of each connected user on the buddy list. For color-blind users, this could be altered to show different shapes, a circle (available), a triangle (idle), and a square (away), to illustrate status with shape rather than color.
iChat AV
In June 2003, Apple announced iChat AV, the second major version of iChat. It added video and audio conferencing capabilities based on the industry-standard Session Initiation Protocol (SIP). The final version of the software was shipped with Mac OS X 10.3 and became available separately on the same day for Mac OS X 10.2.
iChat AV 2
In February 2004, AOL introduced AOL Instant Messenger (AIM) version 5.5 for Windows users, which enabled video, but not audio, chats over the AIM protocol and was compatible with Apple's iChat AV. On the same day, Apple released a public beta of iChat AV 2.1 to allow Mac OS X users to video conference with AIM 5.5 users.
iChat AV 3
In June 2004, Steve Jobs announced that the next version of iChat AV would be included with Mac OS X 10.4. iChat AV 3 provided additional support to allow up to four people in a single video conference and ten people in an audio conference. Additionally, the new version of iChat used the H.264/AVC codec, which offered superior quality video compared to the older H.263 codec used in previous versions. This release supported the XMPP protocol, which could be directly used to connect to Google Talk and indirectly be used to connect to users of services including Facebook Chat, and Yahoo! Messenger. However, support was limited as it did not support several common XMPP features such as account creation, service discovery and full multi-user chat support. iChat 3 included the Bonjour protocol (previously called Rendezvous) which allowed iChat to a |
https://en.wikipedia.org/wiki/Deming%20regression | In statistics, Deming regression, named after W. Edwards Deming, is an errors-in-variables model which tries to find the line of best fit for a two-dimensional dataset. It differs from the simple linear regression in that it accounts for errors in observations on both the x- and the y- axis. It is a special case of total least squares, which allows for any number of predictors and a more complicated error structure.
Deming regression is equivalent to the maximum likelihood estimation of an errors-in-variables model in which the errors for the two variables are assumed to be independent and normally distributed, and the ratio of their variances, denoted δ, is known. In practice, this ratio might be estimated from related data sources; however, the regression procedure takes no account of possible errors in estimating this ratio.
The Deming regression is only slightly more difficult to compute than the simple linear regression. Most statistical software packages used in clinical chemistry offer Deming regression.
The model was originally introduced by Adcock (1878), who considered the case δ = 1, and then more generally by Kummell (1879) with arbitrary δ. However, their ideas remained largely unnoticed for more than 50 years, until they were revived by Koopmans (1936) and later propagated even more by Deming (1943). The latter book became so popular in clinical chemistry and related fields that the method was even dubbed Deming regression in those fields.
Specification
Assume that the available data (yi, xi) are measured observations of the "true" values (yi*, xi*), which lie on the regression line:

yi = yi* + εi,   xi = xi* + ηi,   with yi* = β0 + β1xi*,
where the errors ε and η are independent and the ratio of their variances is assumed to be known:

δ = σε² / ση².
In practice, the variances of the x and y measurements are often unknown, which complicates the estimate of δ. Note that when the measurement method for x and y is the same, these variances are likely to be equal, so δ = 1 for this case.
We seek to find the line of "best fit"

y* = β0 + β1x*,

such that the weighted sum of squared residuals of the model is minimized:

SSR = Σ(εi²/σε² + ηi²/ση²) = (1/σε²) Σ[(yi − β0 − β1xi*)² + δ(xi − xi*)²] → min over β0, β1, xi*.
See the references for a full derivation.
Solution
The solution can be expressed in terms of the second-degree sample moments. That is, we first calculate the following quantities (all sums go from i = 1 to n):

x̄ = (1/n) Σxi,   ȳ = (1/n) Σyi,
sxx = (1/n) Σ(xi − x̄)²,   sxy = (1/n) Σ(xi − x̄)(yi − ȳ),   syy = (1/n) Σ(yi − ȳ)².
Finally, the least-squares estimates of the model's parameters will be

β̂1 = [syy − δsxx + √((syy − δsxx)² + 4δsxy²)] / (2sxy),
β̂0 = ȳ − β̂1x̄,
x̂i* = xi + (β̂1 / (β̂1² + δ))(yi − β̂0 − β̂1xi).
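A minimal implementation of this closed-form solution might look like the following sketch. It assumes the data come as plain Python lists of equal length and that the sample covariance sxy is nonzero (the formula divides by it); the function name is invented for the example.

```python
import math

def deming_fit(x, y, delta=1.0):
    """Deming regression estimates (b0, b1) for the line y = b0 + b1*x.

    delta is the assumed-known ratio of the error variances
    var(epsilon)/var(eta); delta = 1 gives orthogonal regression.
    Assumes sxy != 0.
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # Second-degree sample moments. The 1/n normalization cancels out
    # of the slope formula, so 1/(n-1) would give the same estimates.
    sxx = sum((xi - xbar) ** 2 for xi in x) / n
    syy = sum((yi - ybar) ** 2 for yi in y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / n
    # Closed-form slope, then the intercept through the centroid.
    b1 = (syy - delta * sxx
          + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    b0 = ybar - b1 * xbar
    return b0, b1
```

For points lying exactly on y = 1 + 2x the fit recovers intercept 1 and slope 2 up to floating-point rounding, for any choice of delta.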
Orthogonal regression
For the case of equal error variances, i.e., when δ = 1, Deming regression becomes orthogonal regression: it minimizes the sum of squared perpendicular distances from the data points to the regression line. In this case, denote each observation as a point zj in the complex plane (i.e., the point (xj, yj) is written as zj = xj + iyj where i is the imaginary unit). Denote as Z the sum of the squared differences of the data points from the centroid (also denoted in complex coordinates), which is the point whose horizontal and vertical locations are the averages of those of the data points. Then:
If Z = 0, then every line through the centroid is a line of best orthogonal fit.
If Z ≠ 0, then the line of best orthogonal fit goes through the centroid and is parallel to the vector from the origin to a square root of Z. |
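The complex-plane formulation can be sketched in Python (a hypothetical `orthogonal_slope` helper; it assumes the fitted line is not vertical, i.e. the square root of Z has nonzero real part):

```python
import cmath

def orthogonal_slope(x, y):
    """Slope of the orthogonal (delta = 1) fit: the line runs through the
    centroid in the direction of a square root of Z = sum((z_j - centroid)^2)."""
    n = len(x)
    zbar = complex(sum(x) / n, sum(y) / n)          # centroid as a complex number
    Z = sum((complex(xj, yj) - zbar) ** 2 for xj, yj in zip(x, y))
    w = cmath.sqrt(Z)                               # direction vector of the fit line
    return w.imag / w.real                          # rise over run of that direction
```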
https://en.wikipedia.org/wiki/DFT | DFT may refer to:
Businesses and organisations
Department for Transport, United Kingdom
Digital Film Technology, maker of the Spirit DataCine film digitising scanner
DuPont Fabros Technology, a US data center company (by NYSE ticker)
Science and mathematics
Decision field theory, a human cognitive decision-making model
Density functional theory, a computational quantum mechanical modelling method
Discrete Fourier transform
Technology
Deaereating feed tank
Demand flow technology
Design for testing (design for testability) |
https://en.wikipedia.org/wiki/CodedColor%20PhotoStudio%20Pro | CodedColor is a bitmap graphics editor and image organizer for computers running the Microsoft Windows operating system, and is published by 1STEIN.
CodedColor contains different tools for image editing and viewing. Additionally, it has other features such as web album export, annotations, database and keyword searching, contact sheets, screen shows, batch conversion, photo finishing, red eye correction, screen capture and TWAIN import.
Details
CodedColor PhotoStudio is a photo organizer and image editing software for digital camera users. The software comes with a handbook and a database to store Exif / IPTC data and color information.
The interface includes features like photo editing & printing, web album galleries, slide shows, photo management & cataloging, custom sorting, IPTC & Exif editor, thumbnail generation, resize & resample images, jp2000, batch conversion, database keyword searching, red eye removal, color / sharpness / brightness & contrast correction, artefacts removal, clone brush, scanner & TWAIN import, screen capture, lossless JPEG rotation, gamma correction, print ordering and screen shows with many transition effects, watermark text, image annotations, panorama stitch & animation, video capture, PDF album export, photo layouts, collages, frames, shadows, histograms, automatic white balance, and Skype photo sharing.
The user can also rename multiple images, remove scratches, create panorama pictures (stitch), convert RAW photos (from Canon, Nikon, Olympus, etc. cameras), send images via Skype, send photo SMS, burn digital watermarks, correct colors, run a screen show, convert and correct JPEG images in a batch process, rename fields, open pictures and image folders from the Explorer, generate a web album in HTML and compress and resize images.
It opens and converts all common image formats: BMP, WMF, GIF, JPEG, JPEG2000, TIFF, PCX, PNG, PSP, PSD, PCD, and all current RAW formats. The software package includes Pixpedia Publisher, a photo layout and desktop publishing tool, from which you can create and order individual photobooks.
Media
CodedColor PhotoStudio has received numerous awards and magazine articles, for example from CNET.
See also
List of raster graphics editors
Comparison of raster graphics editors
References
External links
3P Pixpedia Publisher
Raster graphics editors
Windows graphics-related software
Image organizers |
https://en.wikipedia.org/wiki/Be%20Inc. | Be Inc. was an American computer company founded in 1990. It is best known for the development and release of BeOS, and the BeBox personal computer. Be was founded by former Apple Computer executive Jean-Louis Gassée with capital from Seymour Cray.
Be's corporate offices were located in Menlo Park, California, with regional sales offices in France and Japan. The company later relocated to Mountain View, California for the duration of its dissolution.
The company's main intent was to develop a new operating system using the C++ programming language on a proprietary hardware platform. BeOS was initially exclusive to the BeBox, and was later ported to Apple Computer's Power Macs despite resistance from Apple, due to the hardware specifications assistance of Power Computing. In 1998, BeOS was ported to the Intel x86 architecture, and PowerPC support was reduced and finally dropped after BeOS R5. It inspired the open source operating system, Haiku.
History
Be was founded in 1990 by former Apple Computer executive Jean-Louis Gassée, together with Steve Sakoman, after Gassée was ousted by Apple CEO John Sculley. They were soon joined by Erich Ringewald, lead engineer of Apple's 'Pink' OS team, who became CTO.
According to several sources including Macworld UK, the company name "Be" originated in a conversation between Gassée and Sakoman. Gassée originally thought the company should be called "United Technoids Inc.", but Sakoman disagreed and said he would start looking through the dictionary for a better name. A few days later, when Gassée asked if he had made any progress, Sakoman replied that he had got tired and stopped at "B." Gassée said, " 'Be' is nice. End of story."
Be aimed to create a modern computer operating system written in C++ on a proprietary hardware platform. In 1995, the BeBox personal computer was released by Be, with its distinctive strips of lights along the front that indicate the activity of each PowerPC CPU, and the combined analogue/digital, 37-pin GeekPort. In addition to BeOS and BeBox, Be also produced BeIA, an OS for internet appliances. During its short lifespan, BeIA's commercial deployments included the Sony eVilla and devices from DT Research.
In 1996, Apple was searching for a new operating system to replace the classic Mac OS. Eventually, the two final options were BeOS and NeXTSTEP. NeXT was chosen and acquired due to the persuasive influence of Steve Jobs and the incomplete state of the BeOS product, criticized at the time for lacking such features as printing capability. It was rumoured that the deal fell apart because of money, with Be Inc allegedly wanting US$500M and a high-level post in the company, when the NeXT deal closed at US$400M. The rumours were dismissed by Gassée.
Dissolution and litigation
Ultimately the assets of Be, Inc. were bought for US$11 million in 2001 by Palm, Inc., where Gassée served on the board of directors, at which point the company entered dissolution. The company then initiated litigation against Microsoft for aggressive |
https://en.wikipedia.org/wiki/Linc | Linc, The Linc or LINC may refer to:
Science
LINC, Laboratory Instrument Computer
LINC 4GL, a programming language
LINC complex, a protein complex of the cytoskeleton
LINC complex, or simply LINC, another name for the DREAM complex
Organizations
MIT LINC, Learning International Networks Consortium of the Massachusetts Institute of Technology
Linc Energy, an Australian energy company
LINC TV, a community television station based in Lismore, New South Wales, from 1993 to 2012
Other
Linc (name), a list of people and fictional characters
Language Instruction for Newcomers to Canada, Canadian federal government language education programme
ASF LINC, Loan Identification Number Code of the American Securitization Forum
Lincoln M. Alexander Parkway, expressway in Hamilton, Ontario, Canada
Lincoln Financial Field, the home stadium of the Philadelphia Eagles
LINC (Learning Innovation Centre), Edge Hill University, Lancashire, England
LINC, the computer controlling Union City in Beneath a Steel Sky
LINC, reporting mark of the Lewis and Clark Railway, Clark County, Washington, United States
See also
Linc's, an American television series from 1998 to 2000
Lincs Wind Farm, off the east coast of England
Lincs FM, a UK Independent Local Radio radio station serving Lincolnshire and Newark
Library Information Network of Clackamas County (LINCC)
lincRNA, large intergenic non-coding RNA
Lincs (disambiguation)
Link (disambiguation)
Linq (disambiguation) |
https://en.wikipedia.org/wiki/Electric%20Word | Electric Word was a bimonthly, English-language magazine published in Amsterdam between 1987 and 1990 that offered eclectic reporting on the translation industry, linguistic technology, and computer culture.
Its editor was Louis Rossetto and it featured avant-garde graphics by the Dutch graphic designer Max Kisman.
History and profile
In 1986, Amsterdam-based INK Taalservice, a high-tech translation company serving the new PC industry, launched an English-language magazine, Language Technology, which covered the burgeoning technologies used to process language — from PCs to machine translation to networks. Louis Rossetto was the editor of Language Technology. Jane Metcalfe was the magazine's ad sales director. The first issue of Language Technology was designed by leading-edge Dutch graphic designer Max Kisman. It was the first issue of any magazine to be created with desktop publishing software, in this case ReadySetGo, which Rossetto had carried back from its introduction at that year's San Francisco MacWorld exhibition.
INK later sold the magazine to a small Dutch media company Media Nederland, who renamed it Electric Word.
Electric Word's circulation grew to include leading research labs at universities, governments, and high-tech companies around the world. Cover subjects were as diverse as computer visionary Alan Kay, AI pioneer Marvin Minsky, Timothy Leary, and MIT Media Lab founder Nicholas Negroponte. Whole Earth Review editor Kevin Kelly proclaimed Electric Word "the least boring computer magazine in the world," which became its tagline. Electric Word was terminated in 1990 due to Media Nederland's change of focus. Rossetto and Metcalfe went on to found Wired magazine. The last issue of Electric Word featured the world's first photoshopped magazine cover, of TED founder Richard Saul Wurman.
References
External links
Electric Word online archive
1987 establishments in the Netherlands
1990 disestablishments in the Netherlands
Defunct magazines published in the Netherlands
Bi-monthly magazines published in the Netherlands
Science and technology magazines published in the Netherlands
English-language magazines
Magazines established in 1987
Magazines disestablished in 1990
Magazines published in Amsterdam |
https://en.wikipedia.org/wiki/Language%20technology | Language technology, often called human language technology (HLT), studies methods of how computer programs or electronic devices can analyze, produce, modify or respond to human texts and speech. Working with language technology often requires broad knowledge not only about linguistics but also about computer science. It consists of natural language processing (NLP) and computational linguistics (CL) on the one hand, many application oriented aspects of these, and more low-level aspects such as encoding and speech technology on the other hand.
Note that these elementary aspects are normally not considered to be within the scope of related terms such as natural language processing and (applied) computational linguistics, which are otherwise near-synonyms. As an example, for many of the world's lesser known languages, the foundation of language technology is providing communities with fonts and keyboard setups so their languages can be written on computers or mobile devices.
References
External links
Johns Hopkins University Human Language Technology Center of Excellence
Carnegie Mellon University Language Technologies Institute
Institute for Applied Linguistics (IULA) at Universitat Pompeu Fabra. Barcelona, Spain
German Research Centre for Artificial Intelligence (DFKI) Language Technology Lab
CLT: Centre for Language Technology in Gothenburg, Sweden
The Center for Speech and Language Technologies (CSaLT) at the Lahore University [sic] of Management Sciences (LUMS)
Globalization and Localization Association (GALA)
ScriptSource, a reference to the writing systems of the world and the remaining needs for supporting them in the computing realm.
Speech processing
Natural language processing |
https://en.wikipedia.org/wiki/Windows%20Driver%20Model | In computing, the Windows Driver Model (WDM) also known at one point as the Win32 Driver Model is a framework for device drivers that was introduced with Windows 98 and Windows 2000 to replace VxD, which was used on older versions of Windows such as Windows 95 and Windows 3.1, as well as the Windows NT Driver Model.
Overview
WDM drivers are layered in a stack and communicate with each other via I/O request packets (IRPs). The Microsoft Windows Driver Model unified driver models for the Windows 9x and Windows NT product lines by standardizing requirements and reducing the amount of code that needed to be written. WDM drivers will not run on operating systems earlier than Windows 98 or Windows 2000, such as Windows 95 (before the OSR2 update that sideloads the WDM model), Windows NT 4.0 and Windows 3.1. By conforming to WDM, drivers can be binary compatible and source-compatible across Windows 98, Windows 98 Second Edition, Windows Me, Windows 2000, Windows XP, Windows Server 2003 and Windows Vista (for backwards compatibility) on x86-based computers. WDM drivers are designed to be forward-compatible so that a WDM driver can run on a version of Windows newer than what the driver was initially written for, but doing that would mean that the driver cannot take advantage of any new features introduced with the new version. WDM is generally not backward-compatible, that is, a WDM driver is not guaranteed to run on any older version of Windows. For example, Windows XP can use a driver written for Windows 2000 but will not make use of any of the new WDM features that were introduced in Windows XP. However, a driver written for Windows XP may or may not load on Windows 2000.
WDM exists in the intermediary layer of Windows 2000 kernel-mode drivers and was introduced to increase the functionality and ease of writing drivers for Windows. Although WDM was mainly designed to be binary and source compatible between Windows 98 and Windows 2000, this may not always be desired and so specific drivers can be developed for either operating system.
Device kernel-mode drivers
With the Windows Driver Model (WDM) for devices, Microsoft implements an approach to kernel-mode drivers that is unique to Windows operating systems. WDM implements a layered architecture for device drivers, and every device of a computer is served by a stack of drivers. Each driver in that stack isolates hardware-independent features from the drivers above and beneath it, so drivers in the stack do not need to interact directly with one another. WDM defines architecture and device procedures for a range of devices, such as display and the network card, known as Network Driver Interface Specification (NDIS). In the NDIS architecture the layered network drivers include lower-level drivers that manage the hardware and upper-level drivers that implement network data transport, such as the Transmission Control Protocol (TCP).
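The layered dispatch idea can be illustrated with a small Python model (the driver names are invented for illustration; real WDM drivers are kernel-mode C code dispatching IRP structures, not Python objects):

```python
class Driver:
    """Minimal model of a layered driver stack: each driver may act on an
    I/O request packet (IRP) and pass it to the driver beneath it."""
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower            # next driver down the stack, if any

    def dispatch(self, irp):
        irp.setdefault('trace', []).append(self.name)   # record traversal order
        if self.lower:                                  # forward down the stack
            return self.lower.dispatch(irp)
        return irp                                      # bottom driver completes the IRP

# hypothetical stack: filter driver -> function driver -> bus driver
bus = Driver('bus')
func = Driver('function', lower=bus)
filt = Driver('filter', lower=func)
```

A request entering at the top of the stack visits each layer in order, which mirrors how IRPs travel through a WDM device stack.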
While WDM defines three types of device drivers, no |
https://en.wikipedia.org/wiki/SAM%20Coup%C3%A9 | The SAM Coupé (pronounced /sæm ku:peɪ/ from its original British English branding) is an 8-bit British home computer manufactured by Miles Gordon Technology (MGT), based in Swansea in the United Kingdom and released in December 1989.
It was based on and designed to have a compatibility mode with the ZX Spectrum 48K with influences from the Loki project and marketed as a logical upgrade from the Spectrum with increased memory, graphical and sound capabilities, native peripheral support (floppy disk, MIDI, joystick, light pen/light gun and a proprietary mouse).
The inclusion of support for higher graphical modes allowed for 80-column text presentation, providing a platform to support productivity and CP/M applications via additional software.
Being based on 8-bit technology at a time when 16-bit home computers were more prevalent, coupled with a lack of commercial software titles, led to it being a commercial failure.
When MGT went into receivership in June 1990 two further attempts were made to restart the computer and brand, firstly under SAM Computers Limited and then in November 1992 under West Coast Computers, a company spun from Format Publications which lasted until liquidation in 2005.
Naming
The capitalised SAM is an acronym for 'Some Amazing Micro' according to Alan Miles.
It has also been reported to be related to 'Some Amazing Machine'.
The ‘Coupé’ nickname has two sources: one being an ice cream sundae called the “Ice Cream Coupé” and the other because the machine resembles a fastback car in profile with the feet as the wheels.
Hardware
The SAM Coupé's hardware was designed by Bruce Gordon of Miles Gordon Technology. The computer included custom silicon to handle display, memory and IO functionality. This was originally prototyped using wire-wrapped 7400-series logic chips, before being produced as a VLSI VGT-200 gate array ASIC.
Processor and logic
The machine is based around a Z80B CPU clocked at 6 MHz and a 10,000-gate ASIC. The ASIC performs a similar role in the computer to the ULA in the ZX Spectrum. The Z80B CPU accesses selected parts of the large memory space in its 64 KB address space by slicing it into 16 KB banks and using I/O registers to select the memory pages mapped into each 16 KB bank.
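The paging scheme can be sketched in Python (a simplified model with assumed page counts; on the real machine the mapping is changed by writing dedicated I/O paging registers, not by method calls):

```python
PAGE = 16 * 1024   # each slot in the 64 KB CPU address space is 16 KB

class BankedMemory:
    """Sketch of SAM Coupé-style paging: four 16 KB slots in the Z80's
    64 KB address space, each mapped onto one page of a larger physical RAM."""
    def __init__(self, total=256 * 1024):
        self.ram = bytearray(total)
        self.slot = [0, 1, 2, 3]          # physical page mapped into each slot

    def map(self, slot, page):
        """Select which physical page appears in a CPU-visible slot."""
        self.slot[slot] = page

    def _phys(self, addr):
        return self.slot[addr // PAGE] * PAGE + addr % PAGE

    def read(self, addr):
        return self.ram[self._phys(addr)]

    def write(self, addr, value):
        self.ram[self._phys(addr)] = value & 0xFF
```

Writing through a remapped slot lands in the selected physical page, which is how the CPU reaches memory beyond its 64 KB address space.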
Memory and storage
The basic SAM Coupé model has 256 KiB of RAM, internally upgradable to 512 KiB via a connector on the main board accessible via a trapdoor underneath, and externally up to an additional 4 MiB, added in 1 MiB packs via the Euroconnector on the back of the system.
The computer has a direct connection for a cassette recorder for data storage but two 3.5 inch floppy disk drives can be installed within the case as well or externally using an interface.
Graphics
The SAM Coupé was designed primarily for the UK market, and is designed around the PAL television standard, which refreshes at 50 frames per second. Unlike a standard PAL signal which is interleaved, the SAM is designed to emit two identically p |
https://en.wikipedia.org/wiki/Microchannel | Microchannel can refer to
Basic structure used in microtechnology, see Microchannel (microtechnology).
Micro Channel architecture in computing |
https://en.wikipedia.org/wiki/PEEK%20and%20POKE | In computing, PEEK and POKE are commands used in some high-level programming languages for accessing the contents of a specific memory cell referenced by its memory address. PEEK gets the byte located at the specified memory address.
POKE sets the memory byte at the specified address. These commands originated with machine code monitors such as the DECsystem-10 monitor;
these commands are particularly associated with the BASIC programming language, though some other languages such as Pascal and COMAL also have these commands. These commands are comparable in their roles to pointers in the C language and some other programming languages.
One of the earliest references to these commands in BASIC, if not the earliest, is in Altair BASIC. The PEEK and POKE commands were conceived in early personal computing systems to serve a variety of purposes, especially for modifying special memory-mapped hardware registers to control particular functions of the computer such as the input/output peripherals. Alternatively programmers might use these commands to copy software or even to circumvent the intent of a particular piece of software (e.g. manipulate a game program to allow the user to cheat). Today it is unusual to control computer memory at such a low level using a high-level language like BASIC. As such the notions of PEEK and POKE commands are generally seen as antiquated.
The terms peek and poke are sometimes used colloquially in computer programming to refer to memory access in general.
Statement syntax
The PEEK function and POKE commands are usually invoked as follows, either in direct mode (entered and executed at the BASIC prompt) or in indirect mode (as part of a program):
integer_variable = PEEK(address)
POKE address, value
The address and value parameters may contain expressions, as long as the evaluated expressions correspond to valid memory addresses or values, respectively. A valid address in this context is an address within the computer's address space, while a valid value is (typically) an unsigned value between zero and the maximum unsigned number that the minimum addressable unit (memory cell) may hold.
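The semantics can be modelled in a few lines of Python, treating a bytearray as the machine's address space (an illustrative sketch only, not how any BASIC interpreter is actually implemented):

```python
memory = bytearray(65536)          # model of a 64 KB address space

def peek(address):
    """Return the byte stored at the given address."""
    return memory[address]

def poke(address, value):
    """Store a byte at the given address; values are truncated to 0-255."""
    memory[address] = value & 0xFF

poke(53280, 0)                     # e.g. clearing a register at address 53280
```

Note the truncation in `poke`: as the text says, a valid value is typically an unsigned number no larger than the memory cell can hold, so out-of-range values wrap.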
Memory cells and hardware registers
The address locations that are POKEd or PEEKed at may refer either to ordinary memory cells or to memory-mapped hardware registers of I/O units or support chips such as sound chips and video graphics chips, or even to memory-mapped registers of the CPU itself (which makes software implementations of powerful machine code monitors and debugging/simulation tools possible). As an example of a POKE-driven support chip control scheme, the following POKE command is directed at a specific register of the Commodore 64's built-in VIC-II graphics chip, which will make the screen border turn black:
POKE 53280, 0
A similar example from the Atari 8-bit family tells the ANTIC display driver to turn all text upside-down:
POKE 755, 4
The difference between machines, and the importance and utility of the hard |
https://en.wikipedia.org/wiki/Kyodo%20News | is a nonprofit cooperative news agency based in Minato, Tokyo. It was established in November 1945 and it distributes news to almost all newspapers, and radio and television networks in Japan. The newspapers using its news have about 50 million subscribers. K. K. Kyodo News is Kyodo News' business arm, established in 1972. The subdivision Kyodo News International, founded in 1982, provides over 200 reports to international news media and is located in Rockefeller Center, New York City.
Their online news site is in Japanese, Chinese (Simplified and Traditional), Korean, and English.
The agency employs over 1,000 journalists and photographers, and maintains news exchange agreements with over 70 international media outlets.
Satoshi Ishikawa is the news agency's president.
Kyodo News was formed by Furuno Inosuke, the president of the Domei News Agency, following the dissolution of Domei after World War II.
See also
References
External links
Kyodo News
Official news site (English)
Official news site (traditional Chinese)
Official news site (simplified Chinese)
Official news site (Japanese)
Official corporate site (English)
Official corporate site (Japanese)
K. K. Kyodo News
Official site (Japanese)
Official site (traditional Chinese)
Official site (simplified Chinese)
Official site (Korean)
1945 establishments in Japan
Mass media companies based in Tokyo
Mass media companies established in 1945
Cooperatives in Japan
News agencies based in Japan
Minato, Tokyo |
https://en.wikipedia.org/wiki/List%20of%20Amiga%20games |
This is a list of games for the Amiga line of personal computers organised alphabetically by name. See Lists of video games for related lists.
This list has been split into multiple pages. It contains over 3000 games. Please use the Table of Contents to browse it.
List of Amiga games A through H
List of Amiga games I through O
List of Amiga games P through Z
Sources
Hall Of Light
Lemon Amiga
Amiga games at MobyGames
Amiga games |
https://en.wikipedia.org/wiki/ISAM | ISAM, an acronym for indexed sequential access method, is a method for creating, maintaining, and manipulating computer files of data so that records can be retrieved sequentially or randomly by one or more keys. Indexes of key fields are maintained to achieve fast retrieval of required file records in indexed files. IBM originally developed ISAM for mainframe computers, but implementations are available for most computer systems.
The term ISAM is used for several related concepts:
The IBM ISAM product and the algorithm it employs.
A database system where an application developer directly uses an application programming interface to search indexes in order to locate records in data files. In contrast, a relational database uses a query optimizer which automatically selects indexes.
An indexing algorithm that allows both sequential and keyed access to data. Most databases use some variation of the B-tree for this purpose, although the original IBM ISAM and VSAM implementations did not do so.
Most generally, any index for a database. Indexes are used by almost all databases.
Organization
In an ISAM system, data is organized into records which are composed of fixed length fields, originally stored sequentially in key sequence. Secondary set(s) of records, known as indexes, contain pointers to the location of each record, allowing individual records to be retrieved without having to search the entire data set. This differs from the contemporaneous navigational databases, in which the pointers to other records were stored inside the records themselves. The key improvement in ISAM is that the indexes are small and can be searched quickly, possibly entirely in memory, thereby allowing the database to access only the records it needs. Additional modifications to the data do not require changes to other data, only the table and indexes in question.
When an ISAM file is created, index nodes are fixed, and their pointers do not change during inserts and deletes that occur later (only content of leaf nodes change afterwards). As a consequence of this, if inserts to some leaf node exceed the node's capacity, new records are stored in overflow chains. If there are many more inserts than deletions from a table, these overflow chains can gradually become very large, and this affects the time required for retrieval of a record.
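A toy Python model illustrates the fixed index plus overflow-chain behaviour described above (all class and parameter names here are invented for illustration):

```python
import bisect

class ISAMFile:
    """Toy ISAM-style file: a static sorted index over fixed leaf pages,
    with a per-page overflow chain for records inserted after creation."""
    def __init__(self, records, page_size=2):
        records = sorted(records)                      # (key, value) pairs in key sequence
        self.pages = [records[i:i + page_size]
                      for i in range(0, len(records), page_size)]
        self.index = [page[0][0] for page in self.pages]   # first key of each page
        self.overflow = [[] for _ in self.pages]           # overflow chain per page

    def _page_of(self, key):
        return max(bisect.bisect_right(self.index, key) - 1, 0)

    def insert(self, key, value):
        # index nodes are fixed after creation: new records join the overflow chain
        self.overflow[self._page_of(key)].append((key, value))

    def get(self, key):
        p = self._page_of(key)
        for k, v in self.pages[p] + self.overflow[p]:      # page scan, then overflow scan
            if k == key:
                return v
        return None
```

As the text notes, lookups slow down as overflow chains grow, since each `get` must scan the chain after the fixed page.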
Relational databases can easily be built on an ISAM framework with the addition of logic to maintain the validity of the links between the tables. Typically the field being used as the link, the foreign key, will be indexed for quick lookup. While this is slower than simply storing the pointer to the related data directly in the records, it also means that changes to the physical layout of the data do not require any updating of the pointers—the entry will still be valid.
ISAM is simple to understand and implement, as it primarily consists of direct access to a database file. The trade-off is that each client machine must manage its o |
https://en.wikipedia.org/wiki/Amstrad%20CPC%20464 | The CPC 464 is the first personal home computer built by Amstrad in 1984. It was one of the bestselling and best produced microcomputers, with more than 2 million units sold in Europe. The British microcomputer boom had already peaked before Amstrad announced the CPC 464 (which stood for Colour Personal Computer) which they then released a mere 9 months later.
Amstrad was known for cheap hi-fi products but had not broken into the home computer market until the CPC 464. Its consumer electronics sales were starting to plateau, and owner and founder Alan Sugar stated "We needed to move on and find another sector or product to bring us back to profit growth". Work started on the Amstrad home computer in 1983 with engineer Ivor Spital, who concluded that Amstrad should enter the home computer market, offering a product that integrated low-cost hardware to be sold at an affordable "impulse-purchase price".
Spital wanted to offer a device that would not commandeer the family TV but instead be an all-in-one computer with its own monitor, thus freeing up the TV and allowing others to play video games at the same time.
Bill Poel, General Manager of Amsoft (Amstrad's software division), said during the launch press release that if the computers were not on the shelves by the end of June, "I will be prepared to sit down and eat one in Trafalgar Square".
Technical specifications
The CPC 464 is powered by the Zilog Z80 processor, after original attempts to use the 6502 processor (used in the Apple II among many other 8-bit computer families) failed. The Z80 runs at 4 MHz, and the machine has 64 KB of memory and runs AMSDOS, Amstrad's own OS. The unit includes a built-in tape drive and the choice of a colour or green monochrome monitor.
The graphics system, which uses a Motorola 6845 chip for timing and address generation, provides three standard display modes, each using colours chosen from a palette of 27.
Mode 0 - 160×200, 16 colours
Mode 1 - 320×200, 4 colours
Mode 2 - 640×200, 2 colours
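Because the number of colours halves each time the horizontal resolution doubles, all three modes occupy the same amount of screen memory; a quick Python check (an illustrative calculation, not Amstrad code):

```python
# (width, height, colours) for the three standard display modes
modes = {0: (160, 200, 16), 1: (320, 200, 4), 2: (640, 200, 2)}

def screen_bytes(width, height, colours):
    """Bytes of screen memory needed for one frame at the given mode."""
    bits_per_pixel = (colours - 1).bit_length()   # 16 -> 4 bpp, 4 -> 2 bpp, 2 -> 1 bpp
    return width * height * bits_per_pixel // 8
```

Each mode works out to 16,000 bytes of pixel data per frame.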
Its sound is supplied using the General Instruments AY-3-8912 sound chip that provides 3-voice, 8-octave sound capacity through a built-in loudspeaker with volume control. Later versions of the 464 have a headphone jack that can also be used for external speakers.
The CPC 464's code name during development was 'Arnold'.
Reception
The 464 was popular with consumers for various reasons. Aside from the joystick port, the computer, keyboard, and tape deck were all combined into one unit that attached to the monitor via two cables. The monitor also contained the power supply unit which powered the whole unit via one wall plug. It did not have very many wires and was simple enough for even the most inexperienced user to install.
References
Amstrad CPC |
https://en.wikipedia.org/wiki/IBM%20Informix-4GL | Informix-4GL is a 4GL programming language developed by Informix during the mid-1980s. At the time of its initial release in 1986, supported platforms included Microsoft Xenix (on IBM PC AT), DEC Ultrix (running on Microvax II, VAX-11/750, VAX-11/785, VAX 8600), Altos 2086, AT&T 3B2, AT&T 3B5, AT&T 3B20 and AT&T Unix PC.
Description
It includes embedded SQL, a report writer language, a form language, and a limited set of imperative capabilities (functions, if and while statements, and supports arrays etc.). The language is particularly close to a natural language and is easy to learn and use. The Form Painter, Screen Code Generator, Report Code Generator (Featurizer) enabled adding custom business logic. It also had, as additional components a menu system, and a front-end GUI (graphical user interface) Generator.
The package includes two versions of compiler which either produce 1) intermediate byte code for an interpreter (known as the rapid development system), or 2) C Programming Language code for compilation with a C compiler into machine-code (which executes faster, but compiles slower, and executables are bigger). It is specifically designed to run as a client on a network, connected to an IBM Informix database engine service. It has a mechanism for calling C Programming Language functions and conversely, to be called from executing C programs. The RDS version also features an interactive debugger for Dumb terminals. A particular feature is the comprehensive error checking which is built into the final executable and the extremely helpful error messages produced by both compilers and executables. It also features embedded modal statements for changing compiler and executable behaviour (e.g. causing the compiler to include memory structures matching database schema structures and elements, or to continue executing in spite of error conditions, which can be trapped later on).
History
The Informix-4GL project was started in 1985, with Chris Maloney as chief architect. Roy Harrington was in charge of the related Informix Turbo (later renamed Online) engine, which bypassed the "cooked" file system and instead used "raw" disk access. It was based on software developed in 1983 by FourGen Software Technologies, which was based in Seattle. The bundled product was presented by Informix as Forms and Menu until 1996. This rapid application development product, marketed as FourGen CASE Tools, could access the user's choice of Informix and/or IBM's DB2 databases. Another flavor of Informix programming tool was produced, called "NewEra", which supported object-oriented programming and a level of code compatibility with Informix-4GL.
Informix was acquired by IBM in April 2001. Despite its age, Informix-4GL is still widely used to develop business applications, and a sizable market exists around it due to its popularity. With accounting being an inherently text based activity, it is often chosen for its purely text-bas |
https://en.wikipedia.org/wiki/The%20Perfect%20General | The Perfect General is a computer wargame published in 1991 by Quantum Quality Productions.
Publication
The game was designed by Peter Zaccagnino and published in 1991 for the Amiga and DOS. A sequel, The Perfect General II, was released in 1994. The original game was modified for the 3DO by Game Guild in 1996 and published by Kirin Entertainment. The 3DO version includes a few scenarios which are absent from the personal computer versions. A refurbished version has been available for Windows since 2003.
The rights to the original version were purchased by Mark Kinkead in 2002 and released in 2003 as "The Perfect General Internet Edition" by Killer Bee Software. As the name suggests, this version can be played over the Internet.
Gameplay
The game is a turn-based, map-oriented military simulation. Along with Modem Wars and Populous, it was one of the early games offering an online mode for real-time matches via telecommunication networks. The original online game was played via modem or null-modem serial connection.
Reception
The Perfect General sold 75,000 copies by June 1993. Computer Gaming World in 1992 described The Perfect General as "a wonderful game system with a mediocre AI and great two-player potential", and later named it the best wargame of the year. A 1993 survey in the magazine of wargames gave the game three-plus stars out of five, stating that it "sacrifices realism for playability". A 1994 survey gave the Greatest Battles of the 20th Century two-plus stars out of five, noting the game's ease of use and "enjoyable", but inaccurate, scenarios.
In 1996, Computer Gaming World declared The Perfect General the 107th-best computer game ever released. The magazine's wargame columnist Terry Coleman named it his pick for the 12th-best computer wargame released by late 1996.
Reviews
Casus Belli #71 (Sep 1992)
References
External links
Killer Bee Software: The Perfect General Internet Edition
1991 video games
3DO Interactive Multiplayer games
Amiga games
Computer wargames
DOS games
Multiplayer and single-player video games
Online games
Quantum Quality Productions games
Ubisoft games
Video games developed in the United States |
https://en.wikipedia.org/wiki/Turn%20A%20Gundam | Turn A Gundam, also stylized as ∀ Gundam, is a 1999 Japanese mecha anime series produced by Sunrise, and aired between 1999 and 2000 on Japan's FNN networks. It was created for the Gundam Big Bang 20th Anniversary celebration, and is the eighth installment in the Gundam franchise. It was later compiled in 2002 into two feature-length films entitled Turn A Gundam I: Earth Light and Turn A Gundam II: Moonlight Butterfly.
Turn A Gundam was directed by Yoshiyuki Tomino, who is the main creator of the Gundam franchise, and who had written and directed many previous Gundam works. Tomino created the series as a means of "affirmatively accepting all of the Gundam series", which is reflected in the series title's use of the Turned A, a mathematical symbol representing universal quantification.
Overview
Turn A Gundam takes place in the Correct Century, a calendar era different from those of previous Gundam projects. The Japanese term for Correct Century is a wordplay on the Japanese term for the Common Era (CE) Western calendar system. The population of the Earth is, at the beginning of the series, limited to simple, steam-driven technology after past cataclysms; the Moon is populated by the Moonrace, humans who left Earth after a great war long ago to reside in technologically advanced lunar colonies until such time as they deemed the Earth suitable to return to.
Plot
Turn A Gundam follows the character Loran Cehack, a young member of the Moonrace. Selected as part of a reconnaissance mission to determine whether the Earth was fit for resettlement, Loran lands on the continent of North America, spends two years living on Earth as the chauffeur to the Heim family, and grows attached to its people. Expecting a peaceful resettlement operation from his people, he and a pair of close friends sent down with him confirm that the Earth is now fit for the Moonrace to make their return. He is taken by surprise when the Moonrace instead returns to Earth via an offensive with mobile suits, and their first attack sparks a violent conflict between Earth and Moon.
The night of the first attack, Loran is at the White Doll, an enormous humanoid statue, for a coming-of-age ceremony. When the Moonrace attacks and the battle in town can be seen from a distance, the children panic. In the midst of this panic, the White Doll shatters, revealing a metallic figure within, and the shrine collapses around it. During the panic, Loran recognizes the White Doll as a mobile suit, and succeeds in applying his knowledge of the Moonrace's mobile suits to pilot it. The death of the Heim patriarch in the attack pulls the family and Loran into the budding war; Loran becomes the designated pilot of the White Doll, and its discovery prompts the excavation of further mobile suits in the various "mountain cycles" covering the Earth. As the Moonrace's invasion rapidly turns into a full-fledged war against the increasingly armed Earthrace, it becomes clear that this state of af
https://en.wikipedia.org/wiki/ARPANET | The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense.
Building on the ideas of J. C. R. Licklider, Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers. Taylor appointed Larry Roberts as program manager. Roberts made the key decisions about the request for proposal to build the network. He incorporated Donald Davies' concepts and designs for packet switching, and sought input from Paul Baran. ARPA awarded the contract to build the network to Bolt Beranek & Newman. The design was led by Bob Kahn who developed the first protocol for the network. Roberts engaged Leonard Kleinrock at UCLA to develop mathematical methods for analyzing the packet network technology.
The first computers were connected in 1969 and the Network Control Protocol was implemented in 1970, development of which was led by Steve Crocker at UCLA and other graduate students, including Vint Cerf. The network was declared operational in 1971. Further software development enabled remote login and file transfer, which was used to provide an early form of email. The network expanded rapidly and operational control passed to the Defense Communications Agency in 1975.
Stephen J. Lukasik directed DARPA to focus on internetworking research in the early 1970s. Bob Kahn moved to DARPA and, together with Vint Cerf at Stanford University, formulated the Transmission Control Program, which incorporated concepts from the French CYCLADES project directed by Louis Pouzin. As this work progressed, a protocol was developed by which multiple separate networks could be joined into a network of networks. Version 4 of TCP/IP was installed in the ARPANET for production use in January 1983 after the Department of Defense made it standard for all military computer networking.
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In the early 1980s, the NSF funded the establishment of national supercomputing centers at several universities and provided network access and network interconnectivity with the NSFNET project in 1986. The ARPANET was formally decommissioned in 1990, after partnerships with the telecommunication and computer industry had assured private sector expansion and future commercialization of an expanded worldwide network, known as the Internet.
History
Inspiration
Historically, voice and data communications were based on methods of circuit switching, as exemplified in the traditional telephone network, wherein each telephone call is allocated a dedicated end-to-end electronic connection between the two communicating stations |
https://en.wikipedia.org/wiki/RedLibre | RedLibre is a non-profit project in which people, groups, entities, administrations, and companies interested in the development and/or use of networks create a free, community data network that allows users to contribute content and share resources, among other uses.
RedLibre has been connected mainly to wireless communities, serving as a point of union and synergy for them. RedLibre also facilitates the tasks that the different organizations involved decide to perform together.
History
The RedLibre project was created in September 2001 by Jaime Robles, based on the philosophy of the “Open Source” initiative. It was the first free network project in Spain and was inspired by similar movements that were appearing in the United States, such as New York Wireless and Seattle Wireless.
RedLibre was conceived with a much broader scope than a single “wireless city”, because it was judged that a project of this type would need many people (a critical mass) in order to succeed. Instead of creating a local project, the objective was to create a widespread project in which all Spanish-speaking people could gather and share ideas, projects, and more.
Later on, the National Association of Wireless Network Users (ANURI) was formed with the goal of offering legal support to the users of this type of network.
Since the inception of RedLibre in 2001-2002, hundreds of wireless communities have appeared worldwide. Each registered domains and created websites named “su_pueblowireless.net” or “su_ciudadwireless” (roughly, “your-town wireless” and “your-city wireless”); it was a moment of new hope and great activity. Many of these wireless communities, against their own philosophy, saw RedLibre as a threat; instead of uniting with RedLibre as a meeting point to build a common network, they saw it as a project that intended to control the free networks in Spain. These issues slowed the growth and development of free networks in Spain, since efforts were divided into small groups that worked independently, without the common goal of building a free network.
When the issues with communities appeared (because everyone wanted to go off on their own), RedLibre changed its initial direction to try to adapt to the circumstances. Instead of directing itself as a community for people to unite and create a single project, it became a meta-community where support, media, and infrastructure were given to smaller groups. Communities that believed in a common free network gathered together and tried to agree on standardized goals, requirements, and actions.
In December 2002, the City Council of Gran Canaria organized a meeting of free network communities in Las Palmas. Representatives from many local communities attended the meeting. First-hand contact and open discussion helped bring positions closer together.
Devices that had been modified to work internally with Linux systems, referred to as LinuxAP, started to be used. These devices provided flexibility and an endless amount of new possibilities.
By 20 |
https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt%20algorithm | In computer science, the Knuth–Morris–Pratt algorithm (or KMP algorithm) is a string-searching algorithm that searches for occurrences of a "word" W within a main "text string" S by employing the observation that when a mismatch occurs, the word itself embodies sufficient information to determine where the next match could begin, thus bypassing re-examination of previously matched characters.
The algorithm was conceived by James H. Morris and independently discovered by Donald Knuth "a few weeks later" from automata theory.
Morris and Vaughan Pratt published a technical report in 1970.
The three also published the algorithm jointly in 1977. Independently, in 1969, Matiyasevich discovered a similar algorithm, coded by a two-dimensional Turing machine, while studying a string-pattern-matching recognition problem over a binary alphabet. This was the first linear-time algorithm for string matching.
Background
A string-matching algorithm wants to find the starting index m in string S[] that matches the search word W[].
The most straightforward algorithm, known as the "brute-force" or "naive" algorithm, is to look for a word match at each index m, i.e. the position in the string being searched that corresponds to the character S[m]. At each position m the algorithm first checks for equality of the first character in the word being searched, i.e. S[m] =? W[0]. If a match is found, the algorithm tests the other characters in the word being searched by checking successive values of the word position index, i. The algorithm retrieves the character W[i] in the word being searched and checks for equality of the expression S[m+i] =? W[i]. If all successive characters match in W at position m, then a match is found at that position in the search string. If the index m reaches the end of the string then there is no match, in which case the search is said to "fail".
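The brute-force procedure described above can be sketched as follows (a hedged Python illustration using the article's m and i indices; not part of the original article):

```python
def naive_search(S, W):
    """Brute-force search: try a full match of W at every index m of S.

    Returns the first matching start index, or -1 if the search fails.
    """
    n, k = len(S), len(W)
    for m in range(n - k + 1):              # candidate start position in S
        i = 0
        while i < k and S[m + i] == W[i]:   # check successive characters
            i += 1
        if i == k:                          # every character of W matched
            return m
    return -1

print(naive_search("ABC ABCDAB ABCDABCDABDE", "ABCDABD"))  # -> 15
```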
Usually, the trial check will quickly reject the trial match. If the strings are uniformly distributed random letters, then the chance that two characters match is 1 in 26. In most cases, the trial check will reject the match at the initial letter. The chance that the first two letters will match is 1 in 26^2 (1 in 676). So if the characters are random, then the expected complexity of searching string S[] of length n is on the order of n comparisons, or O(n). The expected performance is very good. If S[] is 1 million characters and W[] is 1000 characters, then the string search should complete after about 1.04 million character comparisons.
That expected performance is not guaranteed. If the strings are not random, then checking a trial m may take many character comparisons. The worst case is if the two strings match in all but the last letter. Imagine that the string S[] consists of 1 million characters that are all A, and that the word W[] is 999 A characters terminating in a final B character. The simple string-matching algorithm will now examine 1000 |
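By contrast, the KMP algorithm precomputes a table from W so that no character of S is ever re-examined, giving linear worst-case time. A minimal Python sketch of the standard failure-function formulation (an illustration, not the article's own pseudocode):

```python
def kmp_search(S, W):
    """Knuth-Morris-Pratt search: linear time even on adversarial inputs."""
    # T[i] = length of the longest proper prefix of W[0..i] that is also
    # a suffix of it (the "failure function").
    T = [0] * len(W)
    k = 0
    for i in range(1, len(W)):
        while k > 0 and W[i] != W[k]:
            k = T[k - 1]                 # fall back to a shorter border
        if W[i] == W[k]:
            k += 1
        T[i] = k
    # Scan S once; j counts currently matched characters of W.
    j = 0
    for m, c in enumerate(S):
        while j > 0 and c != W[j]:
            j = T[j - 1]                 # reuse the partial match; never back up in S
        if c == W[j]:
            j += 1
        if j == len(W):
            return m - len(W) + 1        # start index of the match
    return -1

# The worst case for the naive algorithm is handled in linear time:
print(kmp_search("A" * 20 + "B", "A" * 9 + "B"))  # -> 11
```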
https://en.wikipedia.org/wiki/Quantum%20Quality%20Productions | Quantum Quality Productions (also known by their initials QQP) was a computer games company specializing in strategy games and war games.
Run by Bruce Williams Zaccagnino and Mark Baldwin, it produced a number of games that achieved "cult status", most prominently The Perfect General.
Computer Gaming World reported in March 1994 that QQP was "a very satisfying wellspring of entertainment", although its "fine games" had "below average documentation". In 1994, due to financial difficulties, QQP accepted a buy-out from American Laser Games. ALG "unceremoniously" closed the studio in December 1995, according to Computer Game Review.
Partial list of games produced by QQP
Battles in Time
Battles of Destiny
Bridge Olympiad
Conquered Kingdoms
Dealer's Choice Collection
Erben des Throns
Grandest Fleet
Heirs to the Throne (German import)
Lucky's Casino Adventure
Lost Admiral
Merchant Prince
Perfect General II
Perfect General
Pure Wargame
Solitaire's Journey
The Red Crystal: The Seven Secrets of Life
WWII: Battles of the South Pacific
Zig Zag
References
External links
Quantum Quality Productions at MobyGames
Video game development companies
Video game publishers
Defunct companies based in New Jersey
1995 disestablishments in New Jersey
Defunct video game companies of the United States |
https://en.wikipedia.org/wiki/Automated%20Mathematician | The Automated Mathematician (AM) is one of the earliest successful discovery systems. It was created by Douglas Lenat in Lisp, and in 1977 led to Lenat being awarded the IJCAI Computers and Thought Award.
AM worked by generating and modifying short Lisp programs which were then interpreted as defining various mathematical concepts; for example, a program that tested equality between the length of two lists was considered to represent the concept of numerical equality, while a program that produced a list whose length was the product of the lengths of two other lists was interpreted as representing the concept of multiplication. The system had elaborate heuristics for choosing which programs to extend and modify, based on the experiences of working mathematicians in solving mathematical problems.
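The program-as-concept encoding can be illustrated with a small sketch (in Python rather than AM's Lisp; the function names are illustrative, not AM's own):

```python
# Two tiny list programs, read off as arithmetic concepts in the way an
# observer interpreted AM's generated Lisp programs.

def same_length(a, b):
    """Tests equality of two list lengths -- interpretable as the
    concept of numerical equality."""
    return len(a) == len(b)

def all_pairs(a, b):
    """Builds a list whose length is the product of the input lengths --
    interpretable as the concept of multiplication."""
    return [(x, y) for x in a for y in b]

print(same_length([1, 2, 3], ["a", "b", "c"]))   # -> True   (3 = 3)
print(len(all_pairs([1, 2], [1, 2, 3])))         # -> 6      (2 x 3)
```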
Controversy
Lenat claimed that the system was composed of hundreds of data structures called "concepts," together with hundreds of "heuristic rules" and a simple flow of control: "AM repeatedly selects the top task from the agenda and tries to carry it out. This is the whole control structure!" Yet the heuristic rules were not always represented as separate data structures; some had to be intertwined with the control flow logic. Some rules had preconditions that depended on the history, or otherwise could not be represented in the framework of the explicit rules.
What's more, the published versions of the rules often involve vague terms that are not defined further, such as "If two expressions are structurally similar, ..." (Rule 218) or "... replace the value obtained by some other (very similar) value..." (Rule 129).
Another source of information is the user, via Rule 2: "If the user has recently referred to X, then boost the priority of any tasks involving X." Thus, it appears quite possible that much of the real discovery work is buried in unexplained procedures.
Lenat claimed that the system had rediscovered both Goldbach's conjecture and the fundamental theorem of arithmetic. Later critics accused Lenat of over-interpreting the output of AM. In his paper Why AM and Eurisko appear to work, Lenat conceded that any system that generated enough short Lisp programs would generate ones that could be interpreted by an external observer as representing equally sophisticated mathematical concepts. However, he argued that this property was in itself interesting—and that a promising direction for further research would be to look for other languages in which short random strings were likely to be useful.
Successor
This intuition was the basis of AM's successor Eurisko, which attempted to generalize the search for mathematical concepts to the search for useful heuristics.
See also
Computer-assisted proof
Automated theorem proving
Symbolic mathematics
Experimental mathematics
HR (software) and Graffiti (program), related math discovery systems
References
External links
Edmund Furse, Why did AM run out of steam?
Ken Haase's Ph.D. Thes |
https://en.wikipedia.org/wiki/Manchester%20Metrolink | Manchester Metrolink is a tram/light rail system in Greater Manchester, England. The network has 99 stops along its standard-gauge route, making it the most extensive light rail system in the United Kingdom. Metrolink is owned by the public body Transport for Greater Manchester (TfGM) and operated and maintained under contract by a Keolis/Amey consortium. Over the 2022/23 financial year 36 million passenger journeys were made on the system.
The network consists of eight lines which radiate from Manchester city centre to termini at Altrincham, Ashton-under-Lyne, Bury, East Didsbury, Eccles, Manchester Airport, Rochdale and The Trafford Centre. It runs on a mixture of on-street track shared with other traffic, reserved track segregated from other traffic, and converted former railway lines. Metrolink is operated by a fleet of 147 high-floor Bombardier M5000 light rail vehicles. Each service runs to a 12-minute headway; stops with more than one service experience combined headways of 6 minutes or less. At the busiest times some services operate as 'doubles', with two vehicles coupled together.
A light rail system for Greater Manchester emerged from the failure of the 1970s Picc-Vic tunnel scheme to obtain central government funding. A light-rail scheme was proposed in 1982 as the least expensive rail-based transport solution for Manchester city centre and the surrounding Greater Manchester metropolitan area. Government approval was granted in 1988, and the network began operating services from Bury Interchange on 6 April 1992. Metrolink became the United Kingdom's first modern street-running rail system, the 1885-built Blackpool tramway being the only first-generation tram system in the UK to have survived up to Metrolink's creation.
Expansion of Metrolink has been a critical strategy of transport planners in Greater Manchester, who have overseen its development in successive projects, known as Phases 1, 2, 3a, 3b, 2CC and Trafford Park. The latest extension, the Trafford Park Line to the Trafford Centre, opened in March 2020. The Greater Manchester Combined Authority has proposed numerous further expansions of the network, including the addition of tram-train technology to extend Metrolink services onto local heavy-rail lines.
History
Predecessors
Manchester's first tram age began in 1877 with the first horse-drawn trams of the Manchester Suburban Tramways Company. Electric traction was introduced in 1901, and the municipal Manchester Corporation Tramways expanded across the city. By 1930, Manchester's tram network had grown to become the third-largest tram system in the United Kingdom. After World War II, electric trolleybuses and motor buses began to be favoured by local authorities as a cheaper transport alternative, and by 1949 the last Manchester tram line was closed. Trolleybuses were withdrawn from service in 1966.
Origins
Greater Manchester's railway network historically suffered from poor north–south connec |
https://en.wikipedia.org/wiki/DNA%20computing | DNA computing is an emerging branch of unconventional computing which uses DNA, biochemistry, and molecular biology hardware instead of traditional electronic computing. Research and development in this area concerns theory, experiments, and applications of DNA computing. Although the field originally started with the demonstration of a computing application by Len Adleman in 1994, it has since expanded to several other avenues, such as the development of storage technologies, nanoscale imaging modalities, and synthetic controllers and reaction networks.
History
Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have occurred and various Turing machines have been proven to be constructible.
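For scale, the problem Adleman's molecules solved can be stated, and brute-forced, in a few lines of software; the novelty of his experiment was performing the search chemically, in massively parallel fashion. A hedged sketch of the underlying problem (the example graph below is invented, not Adleman's actual seven-city instance):

```python
from itertools import permutations

def hamiltonian_path(vertices, edges, start, end):
    """Return a directed path from start to end visiting every vertex
    exactly once, or None. Exhaustive search -- the space that Adleman's
    DNA strands explored in parallel."""
    middle = [v for v in vertices if v not in (start, end)]
    for order in permutations(middle):
        path = [start, *order, end]
        if all((a, b) in edges for a, b in zip(path, path[1:])):
            return path
    return None

# An illustrative directed graph on seven vertices.
V = range(7)
E = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (0, 3), (2, 5)}
print(hamiltonian_path(V, E, 0, 6))  # -> [0, 1, 2, 3, 4, 5, 6]
```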
Since then the field has expanded into several avenues. In 1995, the idea of DNA-based memory was proposed by Eric Baum, who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology, although the in vitro demonstrations were made almost a decade later.
The field of DNA computing can be categorized as a sub-field of the broader DNA nanoscience field, started by Ned Seeman about a decade before Len Adleman's demonstration. Seeman's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. However, it morphed into the field of structural DNA self-assembly, which as of 2020 is extremely sophisticated. Self-assembled structures ranging from a few nanometers up to several tens of micrometers in size had been demonstrated by 2018.
In 1994, Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. While the demonstration by Adleman showed the possibility of DNA-based computers, the DNA design was not scalable: as the number of nodes in a graph grows, the number of DNA components required in Adleman's implementation grows exponentially. Therefore, computer scientists and biochemists started exploring tile assembly, where the goal was to use a small set of DNA strands as tiles to perform arbitrary computations upon growth. Other avenues that were theoretically explored in the late 1990s include DNA-based security and cryptography, the computational capacity of DNA systems, DNA memories and disks, and DNA-based robotics.
In 2003, John Reif's group first demonstrated the idea of a DNA-based walker that traversed along a track similar to a line follower robot. They used molecular biology as a source of energy for the walker. Since this first demonstration, a wide variety of DNA-based walkers have been demonstrated.
Applications, examples, and recent developments
In 1994 Leonard Adleman prese |
https://en.wikipedia.org/wiki/Vulnerable | Vulnerable may refer to:
General
Vulnerability
Vulnerability (computing)
Vulnerable adult
Vulnerable species
Music
Albums
Vulnerable (Marvin Gaye album), 1997
Vulnerable (Tricky album), 2003
Vulnerable (The Used album), 2012
Songs
"Vulnerable" (Roxette song), 1994
"Vulnerable" (Selena Gomez song), 2020
"Vulnerable", a song by Secondhand Serenade from Awake, 2007
"Vulnerable", a song by Pet Shop Boys from Yes, 2009
"Vulnerable", a song by Tinashe from Black Water, 2013
"Vulnerability", a song by Operation Ivy from Energy, 1989
Other uses
Climate change vulnerability, vulnerability to anthropogenic climate change used in discussion of society's response to climate change
Vulnerable, a scoring feature of the game of contract bridge where larger bonuses and penalties apply; see Glossary of contract bridge terms#Vulnerable
See also |
https://en.wikipedia.org/wiki/Horizon%20effect | The horizon effect, also known as the horizon problem, is a problem in artificial intelligence whereby, in many games, the number of possible states or positions is immense and computers can only feasibly search a small portion of them, typically a few plies down the game tree. Thus, for a computer searching only five plies, there is a possibility that it will make a detrimental move, but the effect is not visible because the computer does not search to the depth of the error (i.e., beyond its "horizon").
When evaluating a large game tree using techniques such as minimax with alpha-beta pruning, search depth is limited for feasibility reasons. However, evaluating a partial tree may give a misleading result. When a significant change exists just over the horizon of the search depth, the computational device falls victim to the horizon effect.
In 1973 Hans Berliner named this phenomenon, which he and other researchers had observed, the "Horizon Effect." He split the effect into two: the Negative Horizon Effect "results in creating diversions which ineffectively delay an unavoidable consequence or make an unachievable one appear achievable." For the "largely overlooked" Positive Horizon Effect, "the program grabs much too soon at a consequence that can be imposed on an opponent at leisure, frequently in a more effective form."
Greedy algorithms tend to suffer from the horizon effect.
The horizon effect can be mitigated by extending the search algorithm with a quiescence search. This gives the search algorithm ability to look beyond its horizon for a certain class of moves of major importance to the game state, such as captures in chess.
Rewriting the evaluation function for leaf nodes and/or analyzing more nodes can resolve many horizon-effect problems.
Example
For example, in chess, assume a situation where the computer only searches the game tree to six plies and from the current position determines that the queen is lost in the sixth ply; and suppose there is a move in the search depth where it may sacrifice a rook, and the loss of the queen is pushed to the eighth ply. This is, of course, a worse move than sacrificing the queen because it leads to losing both a queen and a rook. However, because the loss of the queen was pushed over the horizon of search, it is not discovered and evaluated by the search. Losing the rook seems to be better than losing the queen, so the sacrifice is returned as the best option whereas delaying the sacrifice of the queen has in fact additionally weakened the computer's position.
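The queen-and-rook example can be reproduced with a toy game tree. In this hedged Python sketch (tree shape and scores are invented for illustration, and opponent replies are collapsed into single forced lines), a depth-1 search prefers the rook sacrifice because the inevitable queen loss lies past its horizon, while a depth-2 search sees through it:

```python
# Children of each internal node, keyed by move name.
TREE = {
    "root": {"accept loss": "queen lost",
             "sacrifice rook": "rook gone"},
    "rook gone": {"forced reply": "queen and rook lost"},
}
# Static evaluations used at the search frontier (searching side's view).
STATIC = {
    "queen lost": -9,
    "rook gone": -5,              # looks better than -9 at the horizon...
    "queen and rook lost": -14,   # ...but the queen falls anyway
}

def best_move(node, depth):
    """Pick the move maximizing a depth-limited evaluation."""
    def value(n, d):
        if d == 0 or n not in TREE:       # horizon reached, or true leaf
            return STATIC[n]
        return max(value(c, d - 1) for c in TREE[n].values())
    return max(TREE[node], key=lambda mv: value(TREE[node][mv], depth - 1))

print(best_move("root", 1))  # -> 'sacrifice rook'  (horizon effect)
print(best_move("root", 2))  # -> 'accept loss'     (loss no longer hidden)
```

A quiescence search would achieve the same correction without deepening everywhere, by extending only "noisy" lines such as the forced recapture.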
See also
Fog of war
Anti-computer tactics
Monte Carlo tree search
References
External links
Horizon Effect at Chess Programming WIKI (CPW)
Game artificial intelligence
Computer chess |
https://en.wikipedia.org/wiki/USENIX | USENIX is an American 501(c)(3) nonprofit membership organization based in Berkeley, California and founded in 1975 that supports advanced computing systems, operating system (OS), and computer networking research. It organizes several highly respected conferences in these fields. Its stated mission is to foster technical excellence and innovation, support and disseminate research with a practical bias, provide a neutral forum for discussion of technical issues, and encourage computing outreach into the community at large.
History
USENIX was established in 1975 under the name "Unix Users Group," focusing primarily on the study and development of the Unix OS family and similar systems. In June 1977, a lawyer from AT&T Corporation informed the group that they could not use the word "Unix" in their name as it was a trademark of Western Electric (the manufacturing arm of AT&T until 1995), which led to the change of name to USENIX. It has since grown into a respected organization among practitioners, developers, and researchers of computer operating systems more generally. Since its founding, it has published a technical journal titled ;login:.
USENIX was started as a technical organization. As commercial interest grew, a number of separate groups started in parallel, most notably the Software Tools Users Group (STUG), a technical adjunct for Unix-like tools and interface on non-Unix operating systems, and /usr/group, a commercially oriented user group.
USENIX's founding President was Lou Katz.
Conferences
USENIX hosts numerous conferences and symposia each year, including:
USENIX Symposium on Operating Systems Design and Implementation (OSDI) (held biennially until 2020)
USENIX Security Symposium (USENIX Security)
USENIX Conference on File and Storage Technologies (FAST)
USENIX Symposium on Networked Systems Design and Implementation (NSDI)
USENIX Annual Technical Conference (USENIX ATC) (co-located with OSDI since 2021)
SREcon, a conference for engineers focused on site reliability, systems engineering, and working with complex distributed systems at scale
LISA, the Large Installation System Administration Conference
Enigma, a conference focused on practical privacy and security expertise and knowledge sharing in a welcoming and inclusive environment
Publications
USENIX publishes a magazine called ;login: that appears four times a year. Since 2021, it has been an all-digital, openly accessible magazine. ;login: content informs the community about practically relevant research, useful tools, and relevant events.
From 1988 to 1996, USENIX published the quarterly journal Computing Systems, about the theory and implementation of advanced computing systems in the UNIX tradition. It was published first by the University of California Press, then by the MIT Press. The issues have been scanned and are available online.
Open access
USENIX became the first computing association to provide open access to their conference and workshop papers in 2008. Since 2011, |
https://en.wikipedia.org/wiki/COBUILD | COBUILD, an acronym for Collins Birmingham University International Language Database, is a British research facility set up at the University of Birmingham in 1980 and funded by Collins publishers.
The facility was initially led by Professor John Sinclair. The most important achievements of the COBUILD project have been the creation and analysis of an electronic corpus of contemporary text, the Collins Corpus, later leading to the development of the Bank of English, and the production of the monolingual learner's dictionary Collins COBUILD English Language Dictionary, again based on the study of the COBUILD corpus and first published in 1987.
A number of other dictionaries and grammars have also been published, all based exclusively on evidence from the Bank of English.
References
Further reading
External links
COBUILD Reference
1980 establishments in the United Kingdom
Organizations established in 1980
University of Birmingham
Online English dictionaries
Linguistic research institutes
Applied linguistics |
https://en.wikipedia.org/wiki/FTX%20%28disambiguation%29 | FTX is a defunct cryptocurrency exchange platform that operated from 2019 to 2022.
FTX may also refer to:
Fault-Tolerant UNIX, a Stratus Technologies operating system
FTX Games, an American video game publisher
Field training exercise, a type of military exercise
Toyota FTX, a make of car
Ftx gene, a non-coding RNA gene in humans
FtX (gender), a gender identity in Japan for nonbinary people born female
Owando Airport (IATA code: FTX), an airport in the Republic of Congo
See also
FXT (disambiguation) |
https://en.wikipedia.org/wiki/Apple%20ProDOS | ProDOS is the name of two similar operating systems for the Apple II series of personal computers. The original ProDOS, renamed ProDOS 8 in version 1.2, is the last official operating system usable by all 8-bit Apple II series computers, and was distributed from 1983 to 1993. The other, ProDOS 16, was a stop-gap solution for the 16-bit Apple II that was replaced by GS/OS within two years.
ProDOS was marketed by Apple as meaning Professional Disk Operating System, and became the most popular operating system for the Apple II series of computers 10 months after its release in January 1983.
Background
ProDOS was released to address shortcomings in the earlier Apple operating system (called simply DOS), which was beginning to show its age.
Apple DOS only has built-in support for 5.25" floppy disks and requires patches to use peripheral devices such as hard disk drives and non-Disk-II floppy disk drives, including 3.5" floppy drives. ProDOS adds a standard method of accessing ROM-based drivers on expansion cards for disk devices, expands the maximum volume size from about 400 kilobytes to 32 megabytes, introduces support for hierarchical subdirectories (a vital feature for organizing a hard disk's storage space), and supports RAM disks on machines with 128 KB or more of memory. ProDOS addresses problems with handling hardware interrupts, and includes a well-defined and documented programming and expansion interface, which Apple DOS had always lacked. Although ProDOS also includes support for a real-time clock (RTC), this support went largely unused until the release of the Apple IIGS, the first in the Apple II series to include an RTC on board. Third-party clocks were available for the II Plus, IIe, and IIc, however.
ProDOS, unlike earlier Apple DOS versions, has its developmental roots in SOS, the operating system for the ill-fated Apple III computer released in 1980. Pre-release documentation for ProDOS (including early editions of Beneath Apple ProDOS) documented SOS error codes, notably one for switched disks, that ProDOS itself could never generate. Its disk format and programming interface are completely different from those of Apple DOS, and ProDOS cannot read or write DOS 3.3 disks except by means of a conversion utility; while the low-level track-and-sector format of DOS 3.3 disks was retained for 5.25-inch disks, the high-level arrangement of files and directories is completely different. For this reason, most machine-language programs that run under Apple DOS will not work under ProDOS. However, most BASIC programs work, though they sometimes require minor changes. A third-party program called DOS.MASTER enables users to have multiple virtual DOS 3.3 partitions on a larger ProDOS volume.
With the release of ProDOS came the end of support for Integer BASIC and the original Apple II model, which had long since been effectively supplanted by Applesoft BASIC and the Apple II Plus. Whereas DOS 3.3 always loads built-in support for BASIC pro |
https://en.wikipedia.org/wiki/Apple%20DOS | Apple DOS is the family of disk operating systems for the Apple II series of microcomputers from late 1978 through early 1983. It was superseded by ProDOS in 1983. Apple DOS has three major releases: DOS 3.1, DOS 3.2, and DOS 3.3; each one of these three releases was followed by a second, minor "bug-fix" release, but only in the case of Apple DOS 3.2 did that minor release receive its own version number, Apple DOS 3.2.1. The best-known and most-used version is Apple DOS 3.3 in the 1980 and 1983 releases. Prior to the release of Apple DOS 3.1, Apple users had to rely on audio cassette tapes for data storage and retrieval.
Version history
When Apple Computer introduced the Apple II in April 1977, the new computer had no disk drive or disk operating system (DOS). Although Apple co-founder Steve Wozniak designed the Disk II controller late that year, and believed that he could have written a DOS, his co-founder Steve Jobs decided to outsource the task. The company considered using Digital Research's CP/M, but Wozniak sought an operating system that was easier to use. On 10 April 1978 Apple signed a $13,000 contract with Shepardson Microsystems to write a DOS and deliver it within 35 days. Apple provided detailed specifications, and early Apple employee Randy Wigginton worked closely with Shepardson's Paul Laughton as the latter wrote the operating system with punched cards and a minicomputer.
There was no Apple DOS 1 or 2. Versions 0.1 through 2.8 were serially enumerated revisions during development, which might as well have been called builds 1 through 28. Apple DOS 3.0, a renamed issue of version 2.8, was never publicly released due to bugs. Apple published no official documentation until release 3.2.
Apple DOS 3.1 was publicly released in June 1978, slightly more than one year after the Apple II was introduced, becoming the first disk-based operating system for any Apple computer. A bug-fix release came later, addressing a problem in the utility used to create Apple DOS master (bootable) disks: the built-in initialization command created disks that could be booted only on machines with at least as much memory as the one that had created them, whereas the master-disk utility includes a self-relocating version of DOS that boots on Apples with any memory configuration.
Apple DOS 3.2 was released in 1979 to reflect changes in computer booting methods that were built into the successor of the Apple II, the Apple II Plus. New firmware included an auto-start feature which automatically found a disk controller and booted from it when the system was powered up—earning it the name "Autostart ROM". DOS 3.2.1 was then released in July 1979 with some minor bug fixes.
Apple DOS 3.3 was released in 1980. It improves various functions of release 3.2, while allowing for large gains in available floppy disk storage. The newer P5A/P6A PROMs in the disk controller enable the reading and writing of data at a higher density, so 16 sectors (4 KiB) can be stored per track inst |
https://en.wikipedia.org/wiki/Eye%20of%20the%20Beholder%20%28video%20game%29 | Eye of the Beholder is a role-playing video game for personal computers and video game consoles developed by Westwood Associates. It was published by Strategic Simulations, Inc. in 1991, for the MS-DOS operating system and later ported to the Amiga, the Sega CD and the SNES. The Sega CD version features a soundtrack composed by Yuzo Koshiro and Motohiro Kawashima. A port to the Atari Lynx handheld was developed by NuFX in 1993, but was not released. In 2002, an adaptation of the same name was developed by Pronto Games for the Game Boy Advance.
The game has two sequels, Eye of the Beholder II: The Legend of Darkmoon, also released in 1991, and Eye of the Beholder III: Assault on Myth Drannor, released in 1993. The third game, however, was not developed by Westwood, which had been acquired by Virgin Interactive in 1992 and created the Lands of Lore series instead.
Plot
The lords of the city of Waterdeep hire a team of adventurers to investigate an evil coming from beneath the city. The adventurers enter the city's sewer, but the entrance gets blocked by a collapse caused by Xanathar, the eponymous beholder. The team descends further beneath the city, going through Dwarf and Drow clans, to Xanathar's lair, where the final confrontation takes place.
Once the eponymous beholder is killed, the player is treated to a small blue text window stating that the beholder was killed and that the adventurers returned to the surface, where they were treated as heroes. Nothing else is mentioned in the ending, and there are no accompanying graphics. This was changed in the later Amiga release, which features an animated ending.
Gameplay
Eye of the Beholder features a first-person perspective in a three-dimensional dungeon, very similar to the earlier Dungeon Master. The player initially controls four characters using a point-and-click interface to fight monsters. The party can be increased to a maximum of six characters by resurrecting one or more skeletons of dead non-player characters (NPCs), or by recruiting NPCs found throughout the dungeons.
The ability to enlarge the player's party by recruiting NPCs became a tradition throughout the Eye of the Beholder series. It was also possible to import a party from Eye of the Beholder into The Legend of Darkmoon, or from The Legend of Darkmoon into Assault on Myth Drannor; thus, a player could play through all three games with the same party.
Development
The graphics for the MS-DOS version were created using Deluxe Paint. The game's audio includes over 150 AdLib sound effects.
Reception
Critical reception
Eye of the Beholder was reviewed in 1991 in Dragon #171 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column, who gave it 5 out of 5 stars. It was #1 on the Software Publishers Association's list of top MS-DOS games for April 1991, the last SSI D&D game to reach the rank. Dennis Owens of Computer Gaming World called it "a stunning, brilliantly graphi |
https://en.wikipedia.org/wiki/Oliver%20Twins | Andrew Nicholas Oliver and Philip Edward Oliver, together known as the Oliver Twins, are British twin brothers and video game designers.
They developed computer games while they were still at school, contributing their first type-in game to a magazine in 1983. They worked with publisher Codemasters for a number of years following their first collaboration, Super Robin Hood, creating the Dizzy series of games and many of Codemasters' Simulator Series games.
In 1990 they founded Interactive Studios, which later became Blitz Games Studios. In October 2013 they founded Radiant Worlds, based in Leamington Spa, with long-time friend and colleague Richard Smithies.
History
Philip and Andrew Oliver first began programming computer games while at school (Clarendon School in Trowbridge). They discovered their interest in computing at age 13, when their brother bought a used ZX81. In September 1982 they bought a faster Dragon 32 with more memory. They tried to improve the type-in games they found in magazines and eventually created their own game, Road Runner, which was published as a code listing in Computer and Video Games magazine in January 1984. The same year they won first prize in a national TV competition (The Saturday Show) to design a computer game. Their first successful game, Super Robin Hood for the Amstrad CPC, was published in 1986 by Codemasters.
Codemasters
The Codemasters publishing relationship led to the origin of the Dizzy series and the Simulator series.
Interactive Studios (Blitz Games Studios)
In 1990, at the age of 22, they started Interactive Studios, later called Blitz Games Studios. Apart from their own games, the Oliver Twins were also responsible for porting a number of other prominent games to the Sega platforms, including Theme Park and Syndicate.
After 23 years, Blitz Games folded in 2013, with the loss of 175 staff, and owing millions to creditors.
Radiant Worlds
In October 2013 they founded Radiant Worlds, based in Leamington Spa, UK, with long-time friend and colleague Richard Smithies, to develop SkySaga: Infinite Isles for Korean-based Smilegate. SkySaga was an ambitious online voxel-based game based on an original concept by members of the Blitz Games Studios team. In August 2017 Smilegate put SkySaga on hold, and the Olivers and Smithies put the company up for sale. In January 2018 Rebellion, a UK games developer and publisher, purchased the company and renamed it Rebellion (Warwick). The twins remained with Rebellion until February 2019, at which point they left to form a game consultancy business.
Dizzy revival
In 2015, while preparing for a talk the twins were due to give at that year's Play Blackpool event, Philip Oliver found a hand-drawn map titled Wonderland Dizzy. Searching further, they found a disk containing the full uncompiled source code of a game of the same name, which they had written 22 years earlier for the NES but had forgotten about. The twins came into contact wit
https://en.wikipedia.org/wiki/Speedball%20%28video%20game%29 | Speedball is a 1988 video game based on a violent futuristic cyberpunk sport that draws on elements of handball and ice hockey, and rewards violent play as well as goals.
Speedball was released in November 1988 for the Amiga and Atari ST and later ported to MS-DOS, Commodore 64, and the Master System. SOFEL released a port for the NES in 1991, as KlashBall. It was re-released in 2004 as one of the 30 games on the C64 Direct-to-TV.
Gameplay
The game is played by two teams on an enclosed court with a goal at each end, similar to that of ice hockey or five-a-side football. The court contains fixed bounce domes that modify the trajectory and speed of the ball, as well as one hole in the middle at each side where upon entering the ball will appear at the opposite side, keeping its momentum. The layout of the domes on the court changes as the player faces a different team, up to a maximum of 10 variations.
A player controls only one outfield player on a team at any one time. The game may be played by one or two players; two-player games are played competitively. Two game modes are supported: knockout (facing increasingly tougher computer-controlled teams in best-of-three matches) and league.
The game starts with the player(s) selecting a captain from three available choices, each starting with significantly more points than the other two in one of three stats: stamina, power and skill. All the members of a team start the game with the same stats. During a match, when a team member hits an opponent, the opponent loses part of his stamina; when stamina drops low enough, that individual player moves more slowly than the rest. The more powerful a team member is, the more damage he delivers with a direct hit. Extra skill, on the other hand, makes any team member controlled by the computer more aggressive towards the opposite team and improves his chances of a successful hit.
While in possession of the ball, the player can either press and immediately release the fire button to do a direct throw, or keep the button pressed to make the ball go higher. Players can then jump to try and catch it, but this makes them more vulnerable to being hit by the opposite team.
As the game progresses, coins and several power-ups appear randomly and can be collected. Power-ups include making the ball electrified (the opposite team cannot pick it up and will be harmed if they try) and making it teleport to one of the player's team members. Coins can be traded at the end of each game for different bonuses, such as extra time or various enhancements for all members of the player's team, including a permanent increase to any of their stats. Computer-controlled players (on either side) cannot collect coins, but the active player controlled by the computer can collect power-ups. The team that has scored the most goals at the end of the game is the winner.
Reception
Speedball received scores of 862 (DOS) and 834 (Atari ST) out |
https://en.wikipedia.org/wiki/OSF/1 | OSF/1 is a variant of the Unix operating system developed by the Open Software Foundation during the late 1980s and early 1990s. OSF/1 is one of the first operating systems to have used the Mach kernel developed at Carnegie Mellon University, and is probably best known as the native Unix operating system for DEC Alpha architecture systems.
In 1994, after AT&T had sold UNIX System V to Novell and the rival Unix International consortium had disbanded, the Open Software Foundation ceased funding of research and development of OSF/1. The Tru64 UNIX variant of OSF/1 was supported by HP until 2012.
Background
In 1988, during the so-called "Unix wars", Digital Equipment Corporation (DEC) joined with IBM, Hewlett-Packard, and others to form the Open Software Foundation (OSF) to develop a version of Unix named OSF/1. The aim was to compete with System V Release 4 from AT&T Corporation and Sun Microsystems, and it has been argued that a primary goal was for the operating system to be free of AT&T intellectual property. The fact that OSF/1 is one of the first operating systems to have used the Mach kernel is cited as support of this assertion. Digital also strongly promoted OSF/1 for real-time applications, and with traditional UNIX implementations at the time providing poor real-time support at best, the real-time and multi-threading support can be interpreted as having been heavily dependent on the Mach kernel. It also incorporates a large part of the BSD kernel (based on the 4.3-Reno release) to implement the UNIX API. At the time of its introduction, OSF/1 became the third major flavor of UNIX together with System V and BSD.
Vendor releases
DEC's first release of OSF/1 (OSF/1 Release 1.0), in January 1992, was for its line of MIPS-based DECstation workstations; however, this was never a fully supported product. DEC ported OSF/1 to their new Alpha AXP platform as DEC OSF/1 AXP Release 1.2, released in March 1993. OSF/1 AXP is a full 64-bit operating system. From OSF/1 AXP V2.0 onwards, UNIX System V compatibility was also integrated into the system. OSF/1 v2 was also released for DECstation MIPS systems the same year. Subsequent releases are named Digital UNIX and, later, Tru64 UNIX.
HP also released a port of OSF/1 to the early HP 9000/700 workstations based on the PA-RISC 1.1 architecture. This was withdrawn soon afterwards due to lack of software and hardware support compared to competing operating systems, specifically HP-UX.
As part of the AIM alliance and the resulting PowerOpen specification, Apple Computer intended to base A/UX 4.0 for its PowerPC-based Macintoshes upon OSF/1, but the project was cancelled and PowerOpen deprecated.
IBM used OSF/1 as the basis of the AIX/ESA operating system for System/370 and System/390 mainframes.
OSF/1 was also ported by Kendall Square Research to its proprietary microarchitecture used in the KSR1 supercomputer.
OSFMK
The Open Software Foundation created OSFMK which is a commercial version of the Mach |
https://en.wikipedia.org/wiki/OS/8 | OS/8 is the primary operating system used on the Digital Equipment Corporation's PDP-8 minicomputer.
PDP-8 operating systems which precede OS/8 include:
R-L Monitor, also referred to as MS/8.
P?S/8, requiring only 4K of memory.
PDP-8 4K Disk Monitor System
PS/8 ("Programming System/8"), requiring 8K. This is what became OS/8 in 1971.
Other/related DEC operating systems are OS/78, OS/278, and OS/12. The latter is a virtually identical version of OS/8, and runs on Digital's PDP-12 computer.
Digital released OS/8 images for non-commercial purposes which can be emulated through SIMH.
Overview
OS/8 provides a simple operating environment that is commensurate in complexity and scale with the PDP-8 computers on which it ran. I/O is supported via a series of supplied drivers which use polled (not interrupt-driven) techniques. The device drivers have to be written compactly, as each can occupy only one or two memory pages of 128 twelve-bit words and must be able to run from any page in field 0. This often requires considerable cleverness, such as using the OPR instruction (7XXX) to encode small negative constants.
The memory-resident "footprint" of OS/8 is only 256 words; 128 words at the top of Field 0 and 128 words at the top of Field 1. The rest of the operating system (the USR, "User Service Routines") swaps in and out of memory transparently (with regard to the user's program) as needed.
The Concise Command Language
Early versions of OS/8 have a very rudimentary command-line interpreter with only a handful of basic commands. Version 3 added a more sophisticated overlay called CCL (Concise Command Language) that implements many more commands. OS/8's CCL is directly patterned after the CCL found on Digital's PDP-10 systems running TOPS-10. In fact, much of the OS/8 software system is deliberately designed to mimic, as closely as possible, the TOPS-10 operating environment. (The CCL command language is also used on PDP-11 computers running RT-11, RSX-11, and RSTS/E, providing a similar user operating environment across all three architectures: PDP-8s, PDP-10s, and PDP-11s.)
The basic OS and CCL implement many rather sophisticated commands, many of which do not exist even in modern command languages such as MS-DOS, Windows, or Unix-like operating systems.
For example, a compile command automatically finds the right compiler or assembler for a given source file and starts the compile/assemble/link cycle.
Device-assignment commands permit the use of logical device names in a program instead of physical names (as required in MS-DOS). For example, a program can write to a logical output device; if that name has first been assigned to physical device RXA2 (the second floppy disk drive), the file is created there. VAX/VMS and the Amiga's operating system AmigaOS (and other OSes built around TRIPOS) make considerable use of this feature.
A configuration command is capable of setting many system options by patching locations in the system binary code. One of them, a command under OS/78, re-e
https://en.wikipedia.org/wiki/Binomial%20heap | In computer science, a binomial heap is a data structure that acts as a priority queue but also allows pairs of heaps to be merged.
It is important as an implementation of the mergeable heap abstract data type (also called a meldable heap), which is a priority queue supporting a merge operation. It is implemented as a heap, similar to a binary heap, but using a special tree structure different from the complete binary trees used by binary heaps. Binomial heaps were invented in 1978 by Jean Vuillemin.
Binomial heap
A binomial heap is implemented as a set of binomial trees (compare with a binary heap, which has a shape of a single binary tree), which are defined recursively as follows:
A binomial tree of order 0 is a single node
A binomial tree of order k has a root node whose children are roots of binomial trees of orders k−1, k−2, ..., 2, 1, 0 (in this order).
A binomial tree of order k has 2^k nodes and height k.
The name comes from the shape: a binomial tree of order n has C(n, d) nodes at depth d, a binomial coefficient.
Because of its structure, a binomial tree of order k can be constructed from two trees of order k−1 by attaching one of them as the leftmost child of the root of the other tree. This feature is central to the merge operation of a binomial heap, which is its major advantage over other conventional heaps.
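The linking step described above can be sketched with a minimal node structure (the class and function names here are illustrative, not part of any standard library):

```python
class BinomialTree:
    """A binomial tree node; `children` holds subtrees of orders k-1, ..., 1, 0."""
    def __init__(self, key):
        self.key = key
        self.order = 0
        self.children = []

def link(a, b):
    """Merge two binomial trees of equal order into one tree of the next order,
    keeping the minimum-heap property: the smaller key becomes the root."""
    assert a.order == b.order
    if b.key < a.key:
        a, b = b, a
    a.children.insert(0, b)  # b becomes the leftmost child of a
    a.order += 1
    return a

t1, t2 = BinomialTree(3), BinomialTree(7)  # two order-0 trees
t = link(t1, t2)                           # one order-1 tree rooted at key 3
print(t.key, t.order)  # 3 1
```

This constant-time linking of equal-order trees is the primitive from which the heap's merge operation is built.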
Structure of a binomial heap
A binomial heap is implemented as a set of binomial trees that satisfy the binomial heap properties:
Each binomial tree in a heap obeys the minimum-heap property: the key of a node is greater than or equal to the key of its parent.
There can be at most one binomial tree for each order, including order zero.
The first property ensures that the root of each binomial tree contains the smallest key in the tree. It follows that the smallest key in the entire heap is one of the roots.
The second property implies that a binomial heap with n nodes consists of at most ⌊log2 n⌋ + 1 binomial trees, where log2 is the binary logarithm. The number and orders of these trees are uniquely determined by the number of nodes n: there is one binomial tree for each nonzero bit in the binary representation of n. For example, the decimal number 13 is 1101 in binary, 2^3 + 2^2 + 2^0, and thus a binomial heap with 13 nodes will consist of three binomial trees of orders 3, 2, and 0 (see figure below).
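The correspondence between the binary representation of n and the tree orders can be illustrated with a few lines of Python (the function name is ours):

```python
def binomial_tree_orders(n):
    """Return the orders of the binomial trees in an n-node binomial heap.

    Each set bit in the binary representation of n corresponds to exactly
    one binomial tree whose order is that bit's position.
    """
    orders = []
    bit = 0
    while n:
        if n & 1:  # this bit is set -> one tree of order `bit`
            orders.append(bit)
        n >>= 1
        bit += 1
    return sorted(orders, reverse=True)

print(binomial_tree_orders(13))  # 13 = 0b1101 -> [3, 2, 0]
```

Since each order-k tree holds exactly 2^k nodes, the returned orders always sum (as powers of two) back to n.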
The number of different ways that n items with distinct keys can be arranged into a binomial heap equals the largest odd divisor of n!. For n = 1, 2, 3, ... these numbers are
1, 1, 3, 3, 15, 45, 315, 315, 2835, 14175, ...
If the items are inserted into a binomial heap in a uniformly random order, each of these arrangements is equally likely.
Implementation
Because no operation requires random access to the root nodes of the binomial trees, the roots of the binomial trees can be stored in a linked list, ordered by increasing order of the tree. Because the number of children for each node is variable, it does not work well for each node to have separate links to each of its chil |
https://en.wikipedia.org/wiki/Compatible%20Time-Sharing%20System | The Compatible Time-Sharing System (CTSS) was the first general purpose time-sharing operating system. Compatible Time Sharing referred to time sharing which was compatible with batch processing; it could offer both time sharing and batch processing concurrently.
CTSS was developed at the MIT Computation Center ("Comp Center"). CTSS was first demonstrated on MIT's modified IBM 709 in November 1961. The hardware was replaced with a modified IBM 7090 in 1962 and later a modified IBM 7094 called the "blue machine" to distinguish it from the Project MAC CTSS IBM 7094. Routine service to MIT Comp Center users began in the summer of 1963 and was operated there until 1968.
A second deployment of CTSS on a separate IBM 7094 that was received in October 1963 (the "red machine") was used early on in Project MAC until 1969 when the red machine was moved to the Information Processing Center and operated until July 20, 1973. CTSS ran on only those two machines; however, there were remote CTSS users outside of MIT including ones in California, South America, the University of Edinburgh and the University of Oxford.
History
John Backus said in the 1954 summer session at MIT that "By time sharing, a big computer could be used as several small ones; there would need to be a reading station for each user". Computers at that time, like the IBM 704, were not powerful enough to implement such a system, but at the end of 1958, MIT's Computation Center nevertheless added a typewriter input to its 704 with the intent that a programmer or operator could "obtain additional answers from the machine on a time-sharing basis with other programs using the machine simultaneously".
In June 1959, Christopher Strachey published a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris, where he envisaged a programmer debugging a program at a console (like a teletype) connected to the computer, while another program was running in the computer at the same time. Debugging programs was an important problem at that time, because with batch processing it often took a day from submitting changed code to getting the results. John McCarthy wrote a memo about that at MIT, after which a preliminary study committee and a working committee were established at MIT to develop time sharing. The committees envisaged many users using the computer at the same time, decided the details of implementing such a system at MIT, and started the development of the system.
Experimental Time Sharing System
By July, 1961 a few time sharing commands had become operational on the Computation Center's IBM 709, and in November 1961, Fernando J. Corbató demonstrated at MIT what was called the Experimental Time-Sharing System. On May 3, 1962, F. J. Corbató, M. M. Daggett and R. C. Daley published a paper about that system at the Spring Joint Computer Conference. Robert C. Daley, Peter R. Bos and at least 6 other programmers implemented the operating system, partl |
https://en.wikipedia.org/wiki/Fibonacci%20heap | In computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures including the binary heap and binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. Fibonacci heaps are named after the Fibonacci numbers, which are used in their running time analysis.
For the Fibonacci heap, the find-minimum operation takes constant (O(1)) amortized time. The insert and decrease key operations also work in constant amortized time. Deleting an element (most often used in the special case of deleting the minimum element) works in O(log n) amortized time, where n is the size of the heap. This means that starting from an empty data structure, any sequence of a insert and decrease key operations and b delete operations would take O(a + b log n) worst case time, where n is the maximum heap size. In a binary or binomial heap, such a sequence of operations would take O((a + b) log n) time. A Fibonacci heap is thus better than a binary or binomial heap when b is smaller than a by a non-constant factor. It is also possible to merge two Fibonacci heaps in constant amortized time, improving on the logarithmic merge time of a binomial heap, and improving on binary heaps which cannot handle merges efficiently.
Using Fibonacci heaps for priority queues improves the asymptotic running time of important algorithms, such as Dijkstra's algorithm for computing the shortest path between two nodes in a graph, compared to the same algorithm using other slower priority queue data structures.
Structure
A Fibonacci heap is a collection of trees satisfying the minimum-heap property, that is, the key of a child is always greater than or equal to the key of the parent. This implies that the minimum key is always at the root of one of the trees. Compared with binomial heaps, the structure of a Fibonacci heap is more flexible. The trees do not have a prescribed shape and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a lazy manner, postponing the work for later operations. For example, merging heaps is done simply by concatenating the two lists of trees, and operation decrease key sometimes cuts a node from its parent and forms a new tree.
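The lazy merge can be sketched as a simple concatenation of root lists (an illustrative Python fragment with names of our choosing; it deliberately omits consolidation, decrease-key, and delete-min):

```python
class FibHeap:
    """Minimal sketch: a Fibonacci heap is a list of heap-ordered tree roots
    plus a pointer to the root holding the minimum key."""
    def __init__(self):
        self.roots = []  # top-level trees, in no particular order
        self.min = None

    def insert(self, key):
        node = {"key": key, "children": []}  # lazily add a one-node tree
        self.roots.append(node)
        if self.min is None or key < self.min["key"]:
            self.min = node
        return node

    def merge(self, other):
        """O(1) amortized: just concatenate the root lists and
        keep whichever minimum pointer is smaller."""
        self.roots.extend(other.roots)
        if other.min is not None and (self.min is None
                                      or other.min["key"] < self.min["key"]):
            self.min = other.min

h1, h2 = FibHeap(), FibHeap()
h1.insert(4)
h2.insert(2)
h1.merge(h2)
print(h1.min["key"])  # 2
```

The real work of restructuring the trees is deferred to a later delete-min, which is what gives the heap its amortized bounds.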
However, at some point order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes (here degree means the number of direct children) are kept quite low: every node has degree at most log n and the size of a subtree rooted in a node of degree k is at least Fk+2, where Fk is the kth Fibonacci number. This is achieved by the rule that we can cut at most one child of each non-root node. When a second child is cut, the node itself needs to be cut from its parent and becomes the root o |
https://en.wikipedia.org/wiki/J.%20C.%20R.%20Licklider | Joseph Carl Robnett Licklider (; March 11, 1915 – June 26, 1990), known simply as J. C. R. or "Lick", was an American psychologist and computer scientist who is considered to be among the most prominent figures in computer science development and general computing history.
He is particularly remembered for being one of the first to foresee modern-style interactive computing and its application to all manner of activities, and also as an Internet pioneer with an early vision of a worldwide computer network long before it was built. He did much to initiate this by funding research that led to, among other things, today's canonical graphical user interface and the ARPANET, the direct predecessor of the Internet.
He has been called "computing's Johnny Appleseed", for planting the seeds of computing in the digital age. Robert Taylor, founder of Xerox PARC's Computer Science Laboratory and Digital Equipment Corporation's Systems Research Center, noted that "most of the significant advances in computer technology—including the work that my group did at Xerox PARC—were simply extrapolations of Lick's vision. They were not really new visions of their own. So he was really the father of it all".
Biography
Licklider was born on March 11, 1915, in St. Louis, Missouri. He was the only child of Joseph Parron Licklider, a Baptist minister, and Margaret Robnett Licklider. Despite his father's religious background, he was not religious in later life.
He studied at Washington University in St. Louis, where he received a B.A. with a triple major in physics, mathematics, and psychology in 1937 and an M.A. in psychology in 1938. He received a Ph.D. in psychoacoustics from the University of Rochester in 1942. Thereafter he worked at Harvard University as a research fellow and lecturer in the Psycho-Acoustic Laboratory from 1943 to 1950.
He became interested in information technology, and moved to MIT in 1950 as an associate professor, where he served on a committee that established MIT Lincoln Laboratory and a psychology program for engineering students. While at MIT, Licklider was involved in the SAGE project as head of the team concerned with human factors. In 1957, he received the Franklin V. Taylor Award from the Society of Engineering Psychologists. In 1958, he was elected President of the Acoustical Society of America, and in 1990 he received the Commonwealth Award for Distinguished Service.
Licklider left MIT to become a vice president at Bolt Beranek and Newman in 1957. He learned about time-sharing from Christopher Strachey at a UNESCO-sponsored conference on Information Processing in Paris in 1959. At BBN he developed the BBN Time-Sharing System and conducted the first public demonstration of time-sharing.
In October 1962, Licklider was appointed head of the Information Processing Techniques Office (IPTO) at ARPA, the United States Department of Defense Advanced Research Projects Agency, an appointment he kept through July 1964. In April 1 |
https://en.wikipedia.org/wiki/Data%20General%20RDOS | The Data General RDOS (Real-time Disk Operating System) is a real-time operating system released in 1970. The software was bundled with the company's popular Nova and Eclipse minicomputers.
Overview
RDOS is capable of multitasking, with the ability to run up to 32 tasks (similar to the current term threads) simultaneously on each of two grounds (foreground and background) within a 64 KB memory space. Later versions of RDOS are compatible with Data General's 16-bit Eclipse minicomputer line.
A cut-down version of RDOS, without real-time background and foreground capability but still capable of running multiple threads and multi-user Data General Business Basic, is called Data General Diskette Operating System (DG-DOS or now—somewhat confusingly—simply DOS); another related operating system is RTOS, a Real-Time Operating System for diskless environments. RDOS on microNOVA-based "Micro Products" micro-minicomputers is sometimes called DG/RDOS.
RDOS was superseded in the early 1980s by Data General's AOS family of operating systems, including AOS/VS and MP/AOS (MP/OS on smaller systems).
Commands
The following commands are supported by the RDOS/DOS CLI.
ALGOL
APPEND
ASM
BASIC
BATCH
BOOT
BPUNCH
BUILD
CCONT
CDIR
CHAIN
CHATR
CHLAT
CLEAR
CLG
COPY
CPART
CRAND
CREATE
DEB
DELETE
DIR
DISK
DUMP
EDIT
ENDLOG
ENPAT
EQUIV
EXFG
FDUMP
FGND
FILCOM
FLOAD
FORT
FORTRAN
FPRINT
GDIR
GMEM
GSYS
GTOD
INIT
LDIR
LFE
LINK
LIST
LOAD
LOG
MAC
MCABOOT
MDIR
MEDIT
MESSAGE
MKABS
MKSAVE
MOVE
NSPEED
OEDIT
OVLDR
PATCH
POP
PRINT
PUNCH
RDOSSORT
RELEASE
RENAME
REPLACE
REV
RLDR
SAVE
SDAY
SEDIT
SMEM
SPDIS
SPEBL
SPEED
SPKILL
STOD
SYSGEN
TPRINT
TUOFF
TUON
TYPE
VFU
XFER
Antitrust lawsuit
In the late 1970s, competitors sued Data General under the Sherman and Clayton antitrust acts over its practice of bundling RDOS with its Nova and Eclipse minicomputers. One of them, Digidyne, wanted to use RDOS on its own hardware clone of the Nova, but Data General refused to license the software, claiming its "bundling rights". In 1985, courts including the United States Court of Appeals for the Ninth Circuit ruled against Data General in Digidyne v. Data General. The Supreme Court of the United States declined to hear Data General's appeal, although Justices White and Blackmun would have heard it. The precedent set by the lower courts eventually forced Data General to license the operating system, because restricting the software to Data General's own hardware was an illegal tying arrangement.
In 1999, Data General was taken over by EMC Corporation.
References
External links
RDOS documentation at the Computer History Museum
RDOS 7.50 User Parameters definition
SimuLogics' ReNOVAte - Emulator to run NOVA/Eclipse Software on DOS / WindowsNT / UN*X / VMS
Data General
Disk operating systems
Real-time operating |
https://en.wikipedia.org/wiki/Curry%E2%80%93Howard%20correspondence | In programming language theory and proof theory, the Curry–Howard correspondence (also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation) is the direct relationship between computer programs and mathematical proofs.
It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and the logician William Alvin Howard. It is the link between logic and computation that is usually attributed to Curry and Howard, although the idea is related to the operational interpretation of intuitionistic logic given in various formulations by L. E. J. Brouwer, Arend Heyting and Andrey Kolmogorov (see Brouwer–Heyting–Kolmogorov interpretation) and Stephen Kleene (see Realizability). The relationship has been extended to include category theory as the three-way Curry–Howard–Lambek correspondence.
Origin, scope, and consequences
The beginnings of the Curry–Howard correspondence lie in several observations:
In 1934 Curry observes that the types of the combinators could be seen as axiom-schemes for intuitionistic implicational logic.
In 1958 he observes that a certain kind of proof system, referred to as Hilbert-style deduction systems, coincides on some fragment with the typed fragment of a standard model of computation known as combinatory logic.
In 1969 Howard observes that another, more "high-level" proof system, referred to as natural deduction, can be directly interpreted in its intuitionistic version as a typed variant of the model of computation known as lambda calculus.
The Curry–Howard correspondence is the observation that there is an isomorphism between the proof systems and the models of computation. It is the statement that these two families of formalisms can be considered as identical.
If one abstracts on the peculiarities of either formalism, the following generalization arises: a proof is a program, and the formula it proves is the type for the program. More informally, this can be seen as an analogy that states that the return type of a function (i.e., the type of values returned by a function) is analogous to a logical theorem, subject to hypotheses corresponding to the types of the argument values passed to the function; and that the program to compute that function is analogous to a proof of that theorem. This sets a form of logic programming on a rigorous foundation: proofs can be represented as programs, and especially as lambda terms, or proofs can be run.
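The proofs-as-programs reading can be illustrated in typed Python (an informal sketch; the type variables and function names below are mine, not part of any formal presentation of the correspondence). An implication A → B corresponds to the function type `Callable[[A], B]`, modus ponens corresponds to function application, and a proof of the theorem (A → B) → ((B → C) → (A → C)) corresponds to function composition.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")


def modus_ponens(f: Callable[[A], B], a: A) -> B:
    # From a proof of A -> B and a proof of A, obtain a proof of B:
    # logically modus ponens, computationally function application.
    return f(a)


def and_elim_left(p: Tuple[A, B]) -> A:
    # Conjunction corresponds to the pair type; projection proves (A and B) -> A.
    return p[0]


def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # A proof that implication is transitive: the program is composition.
    return lambda a: g(f(a))


print(modus_ponens(lambda x: x + 1, 4))               # 5
print(and_elim_left((1, "a")))                        # 1
print(compose(lambda x: x + 1, lambda x: x * 2)(3))   # 8
```

The point of the sketch is that each function's type annotation reads as a propositional tautology, and the body is its constructive proof.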
The correspondence has been the starting point of a large spectrum of new research after its discovery, leading in particular to a new class of formal systems designed to act both as a proof system and as a typed functional programming language. This includes Martin-Löf's intuitionistic type theory and Coquand's Calculus of Constructions, two calculi in which proofs are regular objects o |
https://en.wikipedia.org/wiki/Power%20Mac%20G5 | The Power Mac G5 is a series of personal computers designed, manufactured, and sold by Apple Computer, Inc. from 2003 to 2006 as part of the Power Mac series. When introduced, it was the most powerful computer in Apple's Macintosh lineup, and was marketed by the company as the world's first 64-bit desktop computer. It was also the first desktop computer from Apple to use an anodized aluminum alloy enclosure, and one of only three computers in Apple's lineup to utilize the PowerPC 970 CPU, the others being the iMac G5 and the Xserve G5.
Three generations of Power Mac G5 were released before it was discontinued as part of the Mac transition to Intel processors, making way for its replacement, the Mac Pro. The Mac Pro retained a variation of the G5's enclosure design for seven more years, making it among the longest-lived designs in Apple's history.
Introduction
Officially launched as part of Steve Jobs' keynote presentation at the Worldwide Developers Conference in June 2003, the Power Mac G5 was introduced with three models, sharing the same physical case, but differing in features and performance. Although somewhat larger than the G4 tower it replaced, the necessity for a complex cooling system meant that the G5 tower had room inside for only one optical drive and two hard drives.
Steve Jobs stated during his keynote presentation that the Power Mac G5 would reach 3 GHz "within 12 months." This would never come to pass; after three years, the G5 only reached 2.7 GHz before it was replaced by the Intel Xeon-based Mac Pro, which debuted with processors running at speeds of up to 3 GHz.
During the presentation, Apple also showed Virginia Tech's Mac OS X computer cluster supercomputer (a.k.a. supercluster) known as System X, consisting of 1,100 Power Mac G5 towers operating as processing nodes. The supercomputer managed to become one of the top five supercomputers that year. The computer was soon dismantled and replaced with a new cluster made of an equal number of Xserve G5 rack-mounted servers, which also used the G5 chip running at 2.3 GHz.
PowerPC G5 and the IBM partnership
The PowerPC G5 (called the PowerPC 970 by its manufacturer, IBM) is based upon IBM's 64-bit POWER4 microprocessor. At the Power Mac G5's introduction, Apple announced a partnership with IBM in which IBM would continue to produce PowerPC variants of their POWER processors. According to IBM's Dr. John E. Kelly, "The goal of this partnership is for Apple and IBM to come together so that Apple customers get the best of both worlds, the tremendous creativity from the Apple corporation and the tremendous technology from the IBM corporation. IBM invested over $3 billion US dollars in a new lab to produce these large, 300 mm wafers." This lab was a completely automated facility located in East Fishkill, New York, and figured heavily in IBM's larger microelectronics strategy.
The original PowerPC 970 had 50 million transistors and was manufactured using IBM CMOS 9S at 130 nm |
https://en.wikipedia.org/wiki/Presentation%20Manager | Presentation Manager (PM) is the graphical user interface (GUI) that IBM and Microsoft introduced in version 1.1 of their operating system OS/2 in late 1988.
History
Microsoft began developing a graphic user interface (GUI) in 1981. After it persuaded IBM that the latter also needed a GUI, Presentation Manager (PM; codenamed Winthorn) was co-developed by Microsoft and IBM's Hursley Lab in 1987-1988. It was a cross between Microsoft Windows and IBM's mainframe graphical system (GDDM). Like Windows, it was message based and many of the messages were even identical, but there were a number of significant differences as well. Although Presentation Manager was designed to be very similar to the upcoming Windows 2.0 from the user's point of view, and Presentation Manager application structure was nearly identical to Windows application structure, source compatibility with Windows was not an objective. For Microsoft, the development of Presentation Manager was an opportunity to clean up some of the design mistakes of Windows. The two companies stated that Presentation Manager and Windows 2.0 would remain almost identical.
One of the most significant differences between Windows and PM was the coordinate system. While in Windows the 0,0 coordinate was located in the upper left corner, in PM it was in the lower left corner. Another difference was that all drawing operations went to the Device Context (DC) in Windows. PM also used DCs but there was an added level of abstraction called Presentation Space (PS). OS/2 also had more powerful drawing functions in its Graphics Programming Interface (GPI). Some of the GPI concepts (like viewing transforms) were later incorporated into Windows NT. The OS/2 programming model was thought to be cleaner, since there was no need to explicitly export the window procedure, no WinMain, and no non-standard function prologs and epilogs.
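The origin difference described above amounts to a one-line conversion (illustrative Python; the function name is mine, not from either API): with a top-left origin (Windows), y grows downward, while with a bottom-left origin (PM), y grows upward.

```python
def windows_to_pm_y(y_win: int, window_height: int) -> int:
    """Map a top-left-origin y coordinate (Windows-style) to a
    bottom-left-origin y coordinate (PM-style) in a window of the
    given pixel height. The mapping is its own inverse."""
    return window_height - 1 - y_win


# In a 480-pixel-tall window, Windows' top row is PM's highest y value:
print(windows_to_pm_y(0, 480))    # 479
print(windows_to_pm_y(479, 480))  # 0
```

Because the mapping is symmetric, the same function converts in either direction, which is why mechanical source conversion between the two APIs was at least conceivable.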
Parting ways
One of the most-cited reasons for the IBM–Microsoft split was the divergence of the APIs between Presentation Manager and Windows, which was probably driven by IBM. Initially, Presentation Manager was based on Windows GUI code, and often received developments in advance, such as support for proportional fonts (which appeared in Windows only in 1990). One of the divergences concerned the position of the (0,0) coordinate, which was at the top left in Windows but at the bottom left (as in Cartesian coordinates) in Presentation Manager. In practice it became impossible to recompile a GUI program to run on the other system; an automated source-code conversion tool was promised at some point. Both companies were hoping that at some point users would migrate to OS/2.
In 1990, version 3.0 of Windows was beginning to sell in volume, and Microsoft began to lose interest in OS/2 especially since, even earlier, market interest in OS/2 was always much smaller than in Windows.
The companies parted ways, and IBM took over all of subsequent development. Microsoft took OS/2 3.0, which it ren |
https://en.wikipedia.org/wiki/Richard%20Joseph | Richard Joseph (23 April 1953 – 4 March 2007) was an English computer game composer, musician and sound specialist. He had a career spanning 20 years starting in the early days of gaming on the C64 and the Amiga and onto succeeding formats.
Biography
Prior to working in games, Richard Joseph had a fleeting career in the music industry, working with artists such as Trevor Horn and Hugh Padgham. He released one solo single on EMI and was part of the group CMU, which released two albums on Transatlantic (Joseph was involved only with the second, Space Cabaret) before evolving into the jazz-funk band Shakatak.
Joseph was noted in game audio for bringing "real" voice actors into a game for the first time (Mega Lo Mania), the earliest use of interactive music (Chaos Engine), working with established recording artists (Betty Boo on Magic Pockets, Captain Sensible on Sensible Soccer, Brian May on Rise of the Robots and John Foxx on Gods and Speedball 2), and featuring vocals in title tunes, which was revolutionary for the time.
In the late 1980s and early 1990s, he produced soundtracks for development teams Sensible Software and the Bitmap Brothers. He is also credited with the soundtrack to the C64 version of the hit Defender of the Crown.
He then went on to set up Audio Interactive at Pinewood Studios and, along with composer James Hannigan, helped Electronic Arts to win the BAFTA Award for best audio in 2000 for Theme Park World. From 1990 onwards Joseph was a frequent musical collaborator with Jon Hare, with whom he co-wrote and arranged all of Sensible Software's best-known musical tracks, including the soundtrack for Cannon Fodder, the GBC version of which was also nominated for a BAFTA in 2000 and is still the only small-format soundtrack to be recognised by BAFTA to this day. In 1995 Hare and Joseph embarked upon an epic 32-track soundtrack for the multimedia product Sex 'n' Drugs 'n' Rock 'n' Roll, signed to Warner Interactive; however, in 1998 Warner bowed out of the games market, and their magnum opus was only ever released as a limited-edition audio CD.
After working as Audio Director on Republic: The Revolution and Evil Genius for Elixir Studios (music composed by James Hannigan), both winning BAFTA nominations for Hannigan's scores, Joseph moved to France where he ran SoundTropez, a company offering next-technology soundtracks.
Joseph came from an entertainment family. Brother Eddy is a BAFTA-winning sound supervisor, working on films such as Harry Potter and James Bond. Brother Pat is a director of The Mill which won an Oscar for Gladiator. Nephew Alex is a foley supervisor. His father Teddy (1918–2006) was a production executive working on, amongst many others, films by John Schlesinger and Alfred Hitchcock.
After being diagnosed with lung cancer, he died on 4 March 2007 aged 53 years. Wacky Races: Mad Motors is dedicated to him.
Works
References
External links
Profiles
Richard Joseph at MobyGames
Richard Joseph at OverClocked R |
https://en.wikipedia.org/wiki/Database%20administrator | Database administrators (DBAs) use specialized software to store and organize data. The role may include capacity planning, installation, configuration, database design, migration, performance monitoring, security, troubleshooting, as well as backup and data recovery.
Skills
Some common and useful skills for database administrators are:
Knowledge of database queries
Knowledge of database theory
Knowledge of database design
Knowledge about the RDBMS itself, e.g. Microsoft SQL Server or MySQL
Knowledge of SQL, e.g. SQL/PSM or Transact-SQL
General understanding of distributed computing architectures, e.g. Client–server model
General understanding of operating system, e.g. Windows or Linux
General understanding of storage technologies and networking
General understanding of routine maintenance, recovery, and handling failover of a database
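As a toy illustration of the routine maintenance and recovery skills listed above, Python's built-in sqlite3 module can perform an online backup of a live database. This is a stand-in for the vendor-specific tools a DBA would actually use; the table and data below are invented for the example.

```python
import sqlite3

# A small "live" database to protect.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO accounts (name) VALUES ('alice'), ('bob')")
src.commit()

# Online backup: Connection.backup copies the database page by page into
# another connection without taking the source offline.
dest = sqlite3.connect(":memory:")
src.backup(dest)

print(dest.execute("SELECT COUNT(*) FROM accounts").fetchone()[0])  # 2
```

Production RDBMSs expose the same idea through their own tooling (scheduled full and incremental backups, plus tested restore procedures), which is why this skill appears alongside recovery and failover handling.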
Database administrators benefit from a bachelor's degree or master's degree in computer science. An associate degree or a certificate may be sufficient with work experience.
Certification
There are many certifications available for becoming a certified database administrator. Many of these certifications are offered by database vendors themselves. Database administrator certifications may be earned by passing a series of tests and sometimes other requirements. Schools offering Database Administration degrees can also be found.
For example:
IBM Certified Advanced Database Administrator – DB2 10.1 for Linux, Unix and Windows
IBM Certified Database Administrator – DB2 10.1 for Linux, Unix, and Windows
Oracle Database 12c Administrator Certified Professional
Oracle MySQL 5.6 Database Administrator Certified Professional
MCSA SQL Server 2012
MCSE Data Platform Solutions Expert
See also
Comparison of database administration tools
References
Computer occupations
Data management |
https://en.wikipedia.org/wiki/Rudy%20Rucker | Rudolf von Bitter Rucker (; born March 22, 1946) is an American mathematician, computer scientist, science fiction author, and one of the founders of the cyberpunk literary movement. The author of both fiction and non-fiction, he is best known for the novels in the Ware Tetralogy, the first two of which (Software and Wetware) both won Philip K. Dick Awards. Until its closure in 2014 he edited the science fiction webzine Flurb.
Early life
Rucker was born and raised in Louisville, Kentucky, son of Embry Cobb Rucker Sr (October 1, 1914 - August 1, 1994), who ran a small furniture-manufacture company and later became an Episcopal priest and community activist, and Marianne (née von Bitter). The Rucker family were of Huguenot descent. Through his mother, he is a great-great-great-grandson of Georg Wilhelm Friedrich Hegel.
Rucker attended St. Xavier High School before earning a BA in mathematics from Swarthmore College (1967) and MS (1969) and PhD (1973) degrees in mathematics from Rutgers University.
Career
Rucker taught mathematics at the State University of New York at Geneseo from 1972 to 1978. Although he was liked by his students and "published a book [Geometry, Relativity and the Fourth Dimension] and several papers," several colleagues took umbrage at his long hair and convivial relationships with English and philosophy professors amid looming budget shortfalls; as a result, he failed to attain tenure in the "dysfunctional" department.
Thanks to a grant from the Alexander von Humboldt Foundation, Rucker taught at the Ruprecht Karl University of Heidelberg from 1978 to 1980. He then taught at Randolph-Macon Women's College in Lynchburg, Virginia from 1980 to 1982, before trying his hand as a full-time author for four years.
Inspired by an interview with Stephen Wolfram, Rucker became a computer science professor at San José State University in 1986, from which he retired as professor emeritus in 2004.
From 1988 to 1992 he was hired by John Walker of Autodesk as a programmer of cellular automata, which inspired his book The Hacker and the Ants.
A mathematician with philosophical interests, he has written The Fourth Dimension and Infinity and the Mind. Princeton University Press published new editions of Infinity and the Mind in 1995 and in 2005, both with new prefaces; the first edition is cited with fair frequency in academic literature.
As his "own alternative to cyberpunk," Rucker developed a writing style he terms transrealism. Transrealism, as outlined in his 1983 essay The Transrealist Manifesto, is science fiction based on the author's own life and immediate perceptions, mixed with fantastic elements that symbolize psychological change. Many of Rucker's novels and short stories apply these ideas. One example of Rucker's transreal works is Saucer Wisdom, a novel in which the main character is abducted by aliens. Rucker and his publisher marketed the book, tongue in cheek, as non-fiction.
His earliest transreal novel, White Light, w |
https://en.wikipedia.org/wiki/Scott%20McNealy | Scott McNealy (born November 13, 1954) is an American businessman. He is most famous for co-founding the computer technology company Sun Microsystems in 1982 along with Vinod Khosla, Bill Joy, and Andy Bechtolsheim. In 2004, while still at Sun, McNealy founded Curriki, a free online education service. In 2011, he co-founded Wayin, a social intelligence and visualization company based in Denver. McNealy stepped down from his position as CEO of Wayin in 2016.
Career
Unlike most people who become involved in high technology industries, McNealy did not come from the world of amateur programmers or hardware hackers; instead, his background was in business, having earned a Bachelor of Arts in economics from Harvard and an MBA from the Stanford Graduate School of Business. McNealy has self-deprecatingly referred to himself as a "golf major" rather than a computer scientist.
McNealy started out working at American Motors, where his father was vice chairman and vice president of marketing. He later became manufacturing director at Onyx Systems, a vendor of microprocessor-based Unix systems.
In 1982, he was approached by fellow Stanford alumnus Vinod Khosla to help provide the necessary organizational and business leadership for Sun Microsystems. Sun, along with companies such as Apple Inc., Silicon Graphics, 3Com, and Oracle Corporation, was part of a wave of successful startup companies in California's Silicon Valley during the early and mid-1980s. The name "Sun" was derived from co-founder Andy Bechtolsheim's original Stanford University Network (SUN) computer project, the SUN workstation.
In 1984, McNealy took over the CEO role from Khosla, who ultimately would leave the company in 1985. On April 24, 2006, McNealy stepped down as CEO after serving in that position for 22 years, and turned the job over to Jonathan I. Schwartz. McNealy is one of the few CEOs of a major corporation to have had a tenure of over twenty years.
According to the book The Decline and Fall of Nokia, Scott McNealy was the "dream candidate" to become CEO of Nokia in 2010. However, McNealy said he was not offered the job.
In 2017, Scott joined the golf app startup 18Birdies as advisor and equity partner.
In early 2018, he joined the Redis Labs Advisory Board.
Wayin
In 2010, the same year Oracle Corporation purchased Sun, McNealy co-founded the social media intelligence company Wayin. The new venture was not widely covered in the media: the day he invited reporters to his home to launch Wayin was the same day Apple co-founder Steve Jobs died. Their product is an application store for brands to self-publish interactive advertising campaigns using reusable digital assets, removing the bulk of the cost involved in delivering multi-channel digital advertising.
Wayin sought out and merged with EngageSciences in 2016, to acquire senior staff and diversify their market. In May of that year, McNealy stepped down as CEO and EngageSciences head Richard Jones became CEO of the combined co |
https://en.wikipedia.org/wiki/Coherent%20%28operating%20system%29 | Coherent is a clone of the Unix operating system for IBM PC compatibles and other microcomputers, developed and sold by the now-defunct Mark Williams Company (MWC). Historically, the operating system was a proprietary product, but it became open source in 2015, released under the BSD-3-Clause license.
Development
Coherent was not Unix; the Mark Williams Company had no rights to either the Unix trademark or the AT&T/Bell Labs source code. In the early years of its existence, MWC received a visit from an AT&T delegation looking to determine whether MWC was infringing on AT&T Unix property. The delegation included Dennis Ritchie, who concluded that "it was very hard to believe that Coherent and its basic applications were not created without considerable study of the OS code and details of its applications." However, he also stated that:
Much of the operating system was written by alumni from the University of Waterloo: Tom Duff, Dave Conroy, Randall Howard, Johann George, and Trevor John Thompson. Significant contributions were also made by people such as Nigel Bree (from Auckland, New Zealand), the later author of Ghost.
Versions
Coherent was originally written for the PDP-11 range of minicomputers in 1980, then ported to various early-1980s microcomputer systems, including IBM PC compatibles and machines based on the Zilog Z8000 and Motorola 68000. Initially sold to OEMs, it became available on the consumer market directly from MWC starting in 1983. At this point, Coherent 2.3 offered roughly the functionality of Version 7 Unix on PC hardware, including the nroff formatter but not the BSD extensions offered by competing Unix/clone vendors; compared to its competitors, it was a small system, distributed on only seven double-sided floppy disks and costing only US$500 for a license.
BYTE in 1984 called Coherent a "highly compatible UNIX Version 7 lookalike". In 1985 it criticized the difficulty of installation, but stated that "as a UNIX clone, Coherent is amazingly complete ... it should be easy to port programs ... the price of $495 is a bargain". Early 1990s reviews of Coherent pointed out that the system was much smaller than other contemporary Unix offerings, as well as less expensive at US$99.95, but lacking in functionality and software support. PC Magazine called Coherent 3.0 a "time capsule" that captured the state of Unix in the late 1970s, without support for mice, LANs or SCSI disks, good for learning basic Unix programming but not for business automation. A review in the AUUG's newsletter was more positive, favorably comparing Coherent to MKS Toolkit, Minix and Xenix, and suggesting it might fill a niche as a low-end training platform.
Coherent was able to run on most Intel-based PCs with Intel 8088, 286, 386, and 486 processors. Coherent version 3 for Intel-based PCs required at least a 286, Coherent version 4 for Intel-based PCs required at least a 386. Like a true Unix, Coherent was able to multitask and support multiple users. From versio |
https://en.wikipedia.org/wiki/Voicemail | A voicemail system (also known as voice message or voice bank) is a computer-based system that allows users and subscribers to exchange personal voice messages; to select and deliver voice information; and to process transactions relating to individuals, organizations, products, and services, using an ordinary phone. The term is also used more broadly to denote any system of conveying a stored telecommunications voice messages, including using an answering machine. Most cell phone services offer voicemail as a basic feature; many corporate private branch exchanges include versatile internal voice-messaging services, and *98 vertical service code subscription is available to most individual and small business landline subscribers (in the US).
History
The term Voicemail was coined by Televoice International (later Voicemail International, or VMI) for their introduction of the first US-wide Voicemail service in 1980. Although VMI trademarked the term, it eventually became a generic term for automated voice services employing a telephone. Voicemail remains popular today with Internet telephone services such as Skype, Google Voice and AT&T that integrate voice, voicemail and text services for tablets and smartphones.
Voicemail systems were developed in the late 1970s by Voice Message Exchange (VMX). They became popular in the early 1980s when they were made available on PC-based boards. In September 2012 a report from USA Today and Vonage claimed that voicemail was in decline. The report said that the number of voicemail messages declined eight percent compared to 2011.
Features
Voicemail systems are designed to convey a caller's recorded audio message to a recipient. To do so they contain a user interface to select, play, and manage messages; a delivery method to either play or otherwise deliver the message; and a notification ability to inform the user of a waiting message. Most systems use phone networks, either cellular- or landline-based, as the conduit for all of these functions. Some systems may use multiple telecommunications methods, permitting recipients and callers to retrieve or leave messages through multiple devices such as PCs, PDAs, cell phones, or smartphones.
Simple voicemail systems function as a remote answering machine using touch-tones as the user interface. More complicated systems may use other input devices such as voice or a computer interface. Simpler voicemail systems may play the audio message through the phone, while more advanced systems may have alternative delivery methods, including email or text message delivery, message transfer and forwarding options, and multiple mailboxes.
Almost all modern voicemail systems use digital storage and are typically stored on computer data storage. Notification methods also vary based on the voicemail system. Simple systems may not provide active notification at all, instead requiring the recipient to check with the system, while others may provide an indication that message |
https://en.wikipedia.org/wiki/Mastering%20%28audio%29 | Mastering, a form of audio post production, is the process of preparing and transferring recorded audio from a source containing the final mix to a data storage device (the master), the source from which all copies will be produced (via methods such as pressing, duplication or replication). In recent years, digital masters have become usual, although analog masters—such as audio tapes—are still being used by the manufacturing industry, particularly by a few engineers who specialize in analog mastering.
Mastering requires critical listening; however, software tools exist to facilitate the process. Results depend upon the intent of the engineer, their skills, the accuracy of the speaker monitors, and the listening environment. Mastering engineers often apply equalization and dynamic range compression in order to optimize sound translation on all playback systems. It is standard practice to make a copy of a master recording—known as a safety copy—in case the master is lost, damaged or stolen.
History
Pre-1940s
In the earliest days of the recording industry, all phases of the recording and mastering process were entirely achieved by mechanical processes. Performers sang and/or played into a large acoustic horn and the master recording was created by the direct transfer of acoustic energy from the diaphragm of the recording horn to the mastering lathe, typically located in an adjoining room. The cutting head, driven by the energy transferred from the horn, inscribed a modulated groove into the surface of a rotating cylinder or disc. These masters were usually made from either a soft metal alloy or from wax; this gave rise to the colloquial term waxing, referring to the cutting of a record.
After the introduction of the microphone and electronic amplifier in the mid-1920s, the mastering process became electro-mechanical, and electrically driven mastering lathes came into use for cutting master discs (the cylinder format by then having been superseded). Until the introduction of tape recording, master recordings were almost always cut direct-to-disc. Only a small minority of recordings were mastered using previously recorded material sourced from other discs.
Emergence of magnetic tape
In the late 1940s, the recording industry was revolutionized by the introduction of magnetic tape. Magnetic tape was invented for recording sound by Fritz Pfleumer in 1928 in Germany, based on the invention of magnetic wire recording by Valdemar Poulsen in 1898. Not until the end of World War II could the technology be found outside Europe. The introduction of magnetic tape recording enabled master discs to be cut separately in time and space from the actual recording process.
Although tape and other technical advances dramatically improved the audio quality of commercial recordings in the post-war years, the basic constraints of the electro-mechanical mastering process remained, and the inherent physical limitations of the main commercial recording media—the 78 rp |
https://en.wikipedia.org/wiki/GMAT%20%28disambiguation%29 | GMAT can stand for:
Tan Tan Airport, a Moroccan airport whose ICAO code is GMAT
Graduate Management Admission Test
General Mission Analysis Tool, an open source astrodynamics computer program developed by NASA
Greenwich Mean Astronomical Time - see Greenwich Mean Time |
https://en.wikipedia.org/wiki/An%20Open%20Letter%20to%20Hobbyists | "An Open Letter to Hobbyists" is a 1976 open letter written by Bill Gates, the co-founder of Microsoft, to early personal computer hobbyists, in which Gates expresses dismay at the rampant software piracy taking place in the hobbyist community, particularly with regard to his company's software.
In the letter, Gates expressed frustration with most computer hobbyists, who were using his company's Altair BASIC software without having paid for it. He asserted that such widespread unauthorized copying in effect discouraged developers from investing time and money in creating high-quality software. He cited the unfairness of gaining the benefits of software authors' time, effort, and capital without paying them as a rationale for refusing to publish the machine code for his company's flagship product; publication would have made it available to lower-income hobbyists, who could have borrowed such program listings from their local library and typed the program into their hobby computers by hand.
Altair BASIC
In December 1974, Gates, a student at Harvard University, alongside Microsoft co-founder Paul Allen, who worked at Honeywell in Boston, both saw the Altair 8800 computer in the January 1975 issue of Popular Electronics for the first time. They had both written BASIC language programs since their days at Lakeside School in Seattle, and knew the Altair computer was powerful enough to support a BASIC interpreter. Both Gates and Allen wanted to be the first to offer BASIC for the Altair computer, and expected the software development tools they had previously created for their Intel 8008 microprocessor-based Traf-O-Data computer to give them a head start.
By early March of the following year, Allen, Gates and Monte Davidoff, a fellow Harvard student, had created a BASIC interpreter that worked under simulation on a PDP-10 mainframe computer at Harvard. Allen and Gates had been in contact with Ed Roberts of MITS, and in March 1975, Allen visited Albuquerque, New Mexico, to test the software on an actual machine. To both Allen and Roberts' surprise, the software worked. MITS agreed to license the software from Allen and Gates. Allen left his job at Honeywell, and became the Vice President and Director of Software at MITS with a salary of $30,000 () a year; Gates remained a student at Harvard, and worked under MITS as a contractor instead, with the October 1975 company newsletter giving his title at the company as "Software Specialist".
On July 22, 1975, MITS signed the contract with Allen and Gates, who would receive $3000 at the signing and a royalty for each copy of BASIC sold; $30 for the 4K version, $35 for the 8K version and $60 for the expanded version. The contract had a cap of $180,000, with MITS retaining an exclusive worldwide license to the program for 10 years. MITS would supply the computer time necessary for development on a PDP-10 owned by the Albuquerque school district.
The April 1975 issue of MITS's Computer Notes had the banner headline |
https://en.wikipedia.org/wiki/Graduate%20Management%20Admission%20Test | The Graduate Management Admission Test (GMAT) is a computer adaptive test (CAT) intended to assess certain analytical, writing, quantitative, verbal, and reading skills in written English for use in admission to a graduate management program, such as a Master of Business Administration (MBA) program. Answering the test questions requires knowledge of English grammatical rules, reading comprehension, and mathematical skills such as arithmetic, algebra, and geometry. The Graduate Management Admission Council (GMAC) owns and operates the test, and states that the GMAT assesses analytical writing and problem-solving abilities while also addressing data sufficiency, logic, and critical reasoning skills that it believes to be vital to real-world business and management success. It can be taken up to five times a year but no more than eight times total. Attempts must be at least 16 days apart.
GMAT is a registered trademark of the Graduate Management Admission Council. More than 7,000 programs at approximately 2,300 graduate business schools around the world accept the GMAT as part of the selection criteria for their programs. Business schools use the test as a criterion for admission into a wide range of graduate management programs, including MBA, Master of Accountancy, and Master of Finance programs, among others. The GMAT is administered online and in standardized test centers in 114 countries around the world. According to a survey conducted by Kaplan Test Prep, the GMAT is still the number one choice for MBA aspirants. According to GMAC, it has continually performed validity studies to statistically verify that the exam predicts success in business school programs. The number of GMAT test-takers plummeted from 2012 to 2021 as more students opted for MBA programs that did not require the GMAT.
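The "computer adaptive" aspect means that each response updates an estimate of the examinee's ability, and subsequent questions are drawn near that estimate. GMAC's actual item-selection and scoring algorithms are proprietary; the toy sketch below only illustrates the general principle, with an invented step-size rule:

```python
def adaptive_ability(responses, start=0.0, step=1.0, decay=0.5):
    """Toy CAT ability estimate: move the estimate up after a correct
    answer and down after an incorrect one, shrinking the step each
    time. Purely illustrative -- not GMAC's scoring model."""
    ability = start
    for correct in responses:
        ability += step if correct else -step
        step *= decay  # later items fine-tune rather than swing
    return ability

# correct, correct, incorrect -> 0 + 1 + 0.5 - 0.25 = 1.25
```

The shrinking step mirrors how adaptive tests converge: early items place the examinee coarsely, later items refine the estimate.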
History
In 1953, the organization now called the Graduate Management Admission Council (GMAC) began as an association of nine business schools, whose goal was to develop a standardized test to help business schools select qualified applicants. In the first year it was offered, the assessment (now known as the Graduate Management Admission Test) was taken just over 2,000 times; in recent years, it has been taken more than 230,000 times annually. Initially used in admissions by 54 schools, the test is now used by more than 7,000 programs at approximately 2,300 graduate business schools around the world. On June 5, 2012, GMAC introduced an integrated reasoning section to the exam that aims to measure a test taker's ability to evaluate information presented in multiple formats from multiple sources. In April 2020, when the COVID-19 pandemic resulted in the closing of in-person testing centers around the world, GMAC quickly moved to launch an online format of the GMAT exam.
Criticism
In 2013, an independent research study evaluated student performance at three full-time MBA programs and reported that the GMAT total score had a 0.29 statistical correlat |
https://en.wikipedia.org/wiki/Apple%20IIc | The Apple IIc, the fourth model in the Apple II series of personal computers, is Apple Computer's first endeavor to produce a portable computer. The result was a notebook-sized version of the Apple II that could be transported from place to place — a portable alternative and complement to the Apple IIe. The c in the name stood for compact, referring to the fact it was essentially a complete Apple II computer setup (minus display and power supply) squeezed into a small notebook-sized housing. While sporting a built-in floppy drive and new rear peripheral expansion ports integrated onto the main logic board, it lacks the internal expansion slots and direct motherboard access of earlier Apple II models, making it a closed system like the Macintosh. However, that was the intended direction for this model — a more appliance-like machine, ready to use out of the box, requiring no technical know-how or experience to hook up and therefore attractive to first-time users.
History
The Apple IIc was released on April 24, 1984, during an Apple-held event called Apple II Forever. With that motto, Apple proclaimed the new machine was proof of the company's long-term commitment to the Apple II series and its users, despite the recent introduction of the Macintosh. The IIc was also seen as the company's response to the new IBM PCjr, and Apple hoped to sell 400,000 by the end of 1984. While essentially an Apple IIe computer in a smaller case, it was not a successor, but rather a portable version to complement it. One Apple II machine would be sold for users who required the expandability of slots, and another for those wanting the simplicity of a plug and play machine with portability in mind.
The machine introduced Apple's Snow White design language, notable for its case styling and modern look, designed by Hartmut Esslinger, which became the standard for Apple equipment and computers for nearly a decade. The Apple IIc introduced a unique off-white coloring known as "Fog", chosen to enhance the Snow White design style; the IIc and some of its peripherals were the only Apple products to use the "Fog" coloring. While relatively lightweight and compact, the Apple IIc was not a true portable, as it lacked a built-in battery and display.
Codenames for the machine while under development included Lollie, ET, Yoda, Teddy, VLC, IIb, and IIp.
Overview of features
Improving the IIe
Technically, the Apple IIc was an Apple IIe in a smaller case: more portable and easier to use, but also less expandable. The IIc used the CMOS-based 65C02 microprocessor, which added 27 new instructions to the 6502 but was incompatible with programs that relied on the 6502's undocumented ("illegal") opcodes. (Apple stated that the Apple IIc was compatible with 90–95% of the 10,000 software packages available for the Apple II series.) The new ROM firmware allowed Applesoft BASIC to recognize lowercase characters and work better with an 80-column display, and fixed several bugs from the IIe RO
https://en.wikipedia.org/wiki/Apple%20SOS | The Sophisticated Operating System, or SOS (), is the primary operating system of the Apple III computer. SOS was developed by Apple Computer and released in October 1980.
In 1985, Steve Wozniak, while critical of the Apple III's hardware flaws, called SOS "the finest operating system on any microcomputer ever".
Technical details
SOS is a single-tasking, single-user operating system. It makes the resources of the Apple III available through a menu-driven utility program as well as an application programming interface (API). A single program is loaded at boot time, called the interpreter. Once loaded, the interpreter can then use the SOS API to make requests of the system. The SOS API is divided into four main areas:
File Calls: Create, destroy, rename, open, close, read, write files; set, get prefix (current working directory); set, get file information; get volume information; set, get mark, EOF, and level of files
Device Calls: Get status, device number, information of a device; send device control data
Memory Calls: Request, find, change, release memory segment; get segment information; set segment number
Utility Calls: Get, set fence (event threshold); get, set time; get analog (joystick) data; terminate.
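A rough sketch of what the File Calls group looks like in use, modeled against an in-memory volume. The class and method names below are paraphrases of the call list above, not actual SOS entry points (real SOS calls are issued from machine language via call blocks):

```python
# Sketch of the shape of an SOS-style file-call sequence.
# Names are illustrative only; the dict stands in for a volume.

class SosFileCalls:
    """Stand-in for the SOS 'File Calls' group."""
    def __init__(self):
        self.volume = {}              # filename -> file contents
    def create(self, name):
        self.volume[name] = b""       # create an empty file
    def write(self, name, data):
        self.volume[name] += data     # append to the open file
    def read(self, name):
        return self.volume[name]
    def rename(self, old, new):
        self.volume[new] = self.volume.pop(old)

fs = SosFileCalls()
fs.create("NOTES")
fs.write("NOTES", b"HELLO")
fs.rename("NOTES", "NOTES.OLD")
```

An interpreter would issue such calls through the API rather than touching the disk directly, which is what lets SOS mediate all device access.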
The Apple III System Utilities program shipped with each Apple III computer. It provides the user interface of the operating system itself, for system configuration and file management. The System Utilities program is menu-driven and performs tasks in three categories:
Device-handling commands: copy, rename, format, verify volumes (drives); list devices; set time and date
File-handling commands: list, copy, delete, rename files; create subdirectories; set file write protection; set prefix (current working directory)
System Configuration Program (SCP): configure device drivers.
SOS has two types of devices it communicates with via device drivers: character devices and block devices. Examples of SOS character devices are keyboards and serial ports. Disk drives are typical block devices. Block devices can read or write one or more 512-byte blocks at a time; character devices can read or write single characters at a time.
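The block/character distinction can be sketched directly: a block device transfers fixed 512-byte units addressed by block number, while a character device yields one byte at a time. A minimal illustration (function names are hypothetical):

```python
BLOCK_SIZE = 512  # SOS block devices transfer 512-byte blocks

def read_block(image: bytes, block_number: int) -> bytes:
    """Read one 512-byte block from a disk image (block-device style)."""
    start = block_number * BLOCK_SIZE
    return image[start:start + BLOCK_SIZE]

def read_chars(stream: bytes):
    """Yield one byte at a time (character-device style)."""
    for i in range(len(stream)):
        yield stream[i:i + 1]
```

The fixed block size is why file systems built on SOS (and later ProDOS) account for storage in 512-byte units.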
Boot sequence
When powered on, the Apple III runs through system diagnostics, then reads block number zero from the built-in diskette drive into memory and executes it. SOS-formatted diskettes place a loader program in block zero. That loader program searches for, loads, and executes a file named SOS.KERNEL, which is the kernel and API of the operating system. The kernel, in turn, searches for and loads a file named SOS.INTERP (the interpreter, or program, to run) and SOS.DRIVER, the set of device drivers to use. Once all files are loaded, control is passed to the SOS.INTERP program.
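The boot chain above can be summarized in pseudocode form, with a dictionary standing in for the SOS-formatted diskette (names other than the three SOS.* files are invented):

```python
# Sketch of the Apple III / SOS boot chain described above.

def boot(disk):
    """Return the sequence of components brought up during boot."""
    loaded = ["block0 loader"]        # block zero holds the loader
    for filename in ("SOS.KERNEL", "SOS.INTERP", "SOS.DRIVER"):
        if filename not in disk:
            raise RuntimeError(f"missing {filename}")
        loaded.append(filename)       # loader/kernel loads each file
    loaded.append("control -> SOS.INTERP")  # kernel hands off control
    return loaded

disk = {"SOS.KERNEL": ..., "SOS.INTERP": ..., "SOS.DRIVER": ...}
```

If any of the three files is absent, a real SOS boot fails at the corresponding stage, much as the sketch raises an error.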
Apple ProDOS uses the same file system as SOS. On a disk formatted by ProDOS, the ProDOS loader and SOS loader are written to blocks zero and one, respectively. The ProDOS loader includes code that can execute on an Apple |
https://en.wikipedia.org/wiki/Cheez%20TV | Cheez TV was an Australian children's cartoon show, hosted by Ryan Lappin and Jade Gatt, that aired on weekday mornings on Network Ten. It began broadcasting on 17 July 1995, and its hosted format ended on 31 December 2004 when the presenters left. After eight months without presenters, the show officially ended on 20 August 2005 and was replaced with Toasted TV.
In January 2016, Lappin and Gatt launched the official Cheez TV Facebook page with the help of friend and podcaster Brendan Dando. The page features full episodes of the show from its original run, behind the scenes photos and competitions.
In 1995, Cheez TV became the first Australian television show to have an internet address.
History
Initially competing against Agro's Cartoon Connection on the Seven Network, Cheez TV's younger hosts, 'edgier' feel, and larger focus on showing cartoons allowed Ten to take the early-morning children's TV crown from Seven.
During the show's run, Ryan Lappin was once nominated for Cleo magazine's Bachelor of the Year only to lose to Australian swimmer Geoff Huegill. Jade Gatt also hosted late night music show Ground Zero.
During the later years of the show, Lappin and Gatt's editorials were quickly becoming notorious for their use of more adult-oriented humour.
On 20 August 2005, the last episode of Cheez TV was broadcast after 10 years on the air, although Lappin and Gatt's final on-air appearance had taken place on 31 December 2004; in 2005, only cartoons were shown during Cheez TV. It was later replaced with Toasted TV in the same time slot, which continues to screen series such as Naruto, One Piece and Winx Club.
In 2010, a Facebook event scheduled for 7 October 2010 appeared, attempting to gain interest in a Cheez TV reunion show. It quickly attracted the attention of Cheez TV fans. Jade and Ryan both had an interview with the E Team from U20 Radio Station which aired on 7 August 2010, where they briefly spoke to Peter Styles about the Facebook group and the show. However, after the group sent hundreds of petitioned emails to the networks, third parties made complaints which led to Facebook closing the group. It is unknown whether any consideration has been taken in relation to a Cheez TV reunion by any network contacted.
On 28 June 2011, the Adelaide Anime and Videogame Convention AVCon announced Cheez TV presenters Ryan Lappin and Jade Gatt as special guests. The pair appeared in multiple guest panels throughout the weekend.
On 22 January 2016, Ryan and Jade launched the official Cheez TV Facebook page with the help of friend and podcaster Brendan Dando, co-host of the Simpsons podcast 'Four Finger Discount', after Lappin discovered all of the tapes containing the episodes in his garage. The page features old episodes of the show as well as some new programming from Lappin and Gatt.
On 30 July 2017, Ryan and Jade started doing livestreams on Twitch, performing an impromptu version of the show called Cheez Live which involved chatting with fans, reviewing movies, readin |
https://en.wikipedia.org/wiki/LDraw | LDraw is a system of freeware tools for modeling Lego creations in 3D on a computer. The LDraw file format and original program were written by James Jessiman, although the file format has since evolved and been extended. He also modeled many of the original parts in the parts library, which is under continuous maintenance and extension by the LDraw community. Following Jessiman's death in 1997, a variety of programs have been written that use the LDraw parts library and file format. LDraw models are frequently rendered in POV-Ray or Blender, free 3D ray tracers.
File format
The LDraw format can divide a model into steps so that the building instructions can be incorporated into the design, and also allows for steps that rotate the camera and even move parts around in an elementary fashion. It also allows for models to be incorporated in the construction of larger models to make design easier. This also makes the file format space efficient: instead of specifying the polygons of every single stud of a specific brick for example, a shared stud file is included multiple times with transformation applied.
Parts, models, sub-models and polygons are all treated the same and are not specific to Lego models (only the parts library is). The format could be used to store any type of 3D model. Some have created bricks of other building systems for use with LDraw.
The following three main filename extensions are used by LDraw:
files implementing a part, subpart or primitive use .dat
files describing a Lego model consisting of one or more bricks use .ldr
multiple .ldr files can be aggregated into files of type .mpd
The file format uses plain text data, and uses the charset UTF-8 without BOM.
Example File: 3003.dat, the Implementation of a 2 x 2 Brick
0 Brick 2 x 2
0 Name: 3003.dat
0 Author: James Jessiman
0 !LDRAW_ORG Part UPDATE 2002-03
0 !LICENSE Redistributable under CCAL version 2.0 : see CAreadme.txt
0 BFC CERTIFY CCW
0 !HISTORY 2001-10-26 [PTadmin] Official Update 2001-01
0 !HISTORY 2002-05-07 [unknown] BFC Certification
0 !HISTORY 2002-06-11 [PTadmin] Official Update 2002-03
0 !HISTORY 2007-05-07 [PTadmin] Header formatted for Contributor Agreement
0 !HISTORY 2008-07-01 [PTadmin] Official Update 2008-01
1 16 0 4 0 1 0 0 0 -5 0 0 0 1 stud4.dat
0 BFC INVERTNEXT
1 16 0 24 0 16 0 0 0 -20 0 0 0 16 box5.dat
4 16 20 24 20 16 24 16 -16 24 16 -20 24 20
4 16 -20 24 20 -16 24 16 -16 24 -16 -20 24 -20
4 16 -20 24 -20 -16 24 -16 16 24 -16 20 24 -20
4 16 20 24 -20 16 24 -16 16 24 16 20 24 20
1 16 0 24 0 20 0 0 0 -24 0 0 0 20 box5.dat
1 16 10 0 10 1 0 0 0 1 0 0 0 1 stud.dat
1 16 -10 0 10 1 0 0 0 1 0 0 0 1 stud.dat
1 16 10 0 -10 1 0 0 0 1 0 0 0 1 stud.dat
1 16 -10 0 -10 1 0 0 0 1 0 0 0 1 stud.dat
The above code defines the basic 2×2 brick. It consists of a five-sided box (box5.dat, the outside), an inverted five-sided box (the inside), the connection between the two (the four quads on the lines starting with 4), the four studs on top (stud.dat) and the
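In the listing above, each line's first token selects its type: 0 introduces a comment or meta-command, 1 references a sub-file with a colour code, a translation (x y z), a row-major 3×3 transformation matrix, and a filename, and 4 defines a quadrilateral. A minimal parser for type-1 lines, as a sketch:

```python
def parse_subfile_ref(line: str):
    """Parse an LDraw type-1 line: a sub-file reference with colour,
    translation (x y z), 3x3 transformation matrix, and filename."""
    tokens = line.split()
    assert tokens[0] == "1", "not a sub-file reference line"
    colour = int(tokens[1])
    nums = [float(t) for t in tokens[2:14]]
    translation = nums[0:3]                       # x, y, z
    matrix = [nums[3:6], nums[6:9], nums[9:12]]   # row-major 3x3
    filename = tokens[14]
    return colour, translation, matrix, filename

# One of the stud references from the 3003.dat listing above:
ref = parse_subfile_ref("1 16 10 0 10 1 0 0 0 1 0 0 0 1 stud.dat")
```

Here colour 16 is LDraw's "inherit the parent's colour" code, the stud is placed at (10, 0, 10), and the identity matrix leaves it unrotated.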
https://en.wikipedia.org/wiki/Portable%20Game%20Notation | Portable Game Notation (PGN) is a standard plain text format for recording chess games (both the moves and related data), which can be read by humans and is also supported by most chess software.
History
PGN was devised around 1993, by Steven J. Edwards, and was first popularized and specified via the Usenet newsgroup rec.games.chess.
Usage
PGN is structured "for easy reading and writing by human users and for easy parsing and generation by computer programs." The chess moves themselves are given in algebraic chess notation using English initials for the pieces. The filename extension is .pgn.
There are two formats in the PGN specification, the "import" format and the "export" format. The import format describes data that may have been prepared by hand, and is intentionally lax; a program that can read PGN data should be able to handle the somewhat lax import format. The export format is rather strict and describes data prepared under program control, similar to a pretty printed source program reformatted by a compiler. The export format representations generated by different programs on the same computer should be exactly equivalent, byte for byte.
PGN text begins with a set of "tag pairs" (a tag name and its value), followed by the "movetext" (chess moves with optional commentary).
Tag pairs
Tag pairs begin with an initial left bracket [, followed by the name of the tag in plain ASCII text. The tag value is enclosed in double quotes, and the tag is then terminated with a closing right bracket ]. A quote inside a tag value is represented by a backslash immediately followed by a quote, and a backslash inside a tag value is represented by two adjacent backslashes. There are no special control codes involving escape characters, carriage returns, or linefeeds to separate the fields, and superfluous embedded spaces are usually skipped when parsing.
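A tag-pair line can be parsed with a short regular expression that also undoes the two escape sequences described above. A sketch (not taken from any particular PGN library):

```python
import re

# Matches [TagName "value"], where value may contain \" and \\ escapes.
TAG_RE = re.compile(r'\[(\w+)\s+"((?:[^"\\]|\\.)*)"\]')

def parse_tag_pair(line: str):
    """Parse one PGN tag pair into (name, value), unescaping the value."""
    m = TAG_RE.fullmatch(line.strip())
    if m is None:
        raise ValueError(f"not a tag pair: {line!r}")
    name, raw = m.groups()
    value = raw.replace('\\"', '"').replace("\\\\", "\\")
    return name, value
```

For example, `parse_tag_pair('[White "Kasparov, Garry"]')` yields `("White", "Kasparov, Garry")`.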
Seven Tag Roster
PGN data for archival storage is required to provide seven tag pairs – together known as the "Seven Tag Roster". In export format, these tag pairs must appear before any other tag pairs and in this order:
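Per the PGN specification, the seven tags, in order, are Event, Site, Date, Round, White, Black, and Result. An export-format header therefore looks like the following (the values shown are illustrative):

```text
[Event "Casual game"]
[Site "London ENG"]
[Date "1851.06.21"]
[Round "?"]
[White "Anderssen, Adolf"]
[Black "Kieseritzky, Lionel"]
[Result "1-0"]
```

A question mark stands in for an unknown value, as in the Round tag above.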
Optional tag pairs
The standard allows for other optional tag pairs. The more common ones include:
Movetext
The movetext describes the actual moves of the game. This includes move number indicators (numbers followed by either one or three periods; one if the next move is White's move, three if the next move is Black's move) and movetext in Standard Algebraic Notation (SAN).
For most moves the SAN consists of the letter abbreviation for the piece, an x if there is a capture, and the two-character algebraic name of the final square the piece moved to. The letter abbreviations are K (king), Q (queen), R (rook), B (bishop), and N (knight). The pawn is given an empty abbreviation in SAN movetext, but in other contexts the abbreviation P is used. The algebraic name of any square is as per usual algebraic chess notation; from white's perspective, the leftmost square closest to whi |
https://en.wikipedia.org/wiki/Broadway%20Open%20House | Broadway Open House is network television's first late-night comedy-variety series. It was telecast live on NBC from May 29, 1950, to August 24, 1951, airing weeknights from 11pm to midnight. One of the pioneering TV creations of NBC president Pat Weaver, it demonstrated the potential for late-night programming and led to the later development of The Tonight Show.
Hosts
The show was originally planned to be hosted by comic Don "Creesh" Hornsby (so named because he yelled "Creesh" often). Hornsby had been brought into the variety show business by Bob Hope, whose topical humor would serve as the basis for most of the late-night talk shows that followed. One week before he was to begin hosting, the 26-year-old Hornsby suddenly contracted polio, from which he died on the day he was to host his first show, May 22, 1950. Hornsby's sudden death forced NBC to postpone the show and rush to find new hosts on short notice. For the first few weeks, there were different hosts, including Dean Martin and Jerry Lewis, Henny Youngman, and Robert Alda, among others, with Morey Amsterdam hosting Mondays and Wednesdays.
Broadway Open House was performed before a live studio audience, in the manner of a stage show. Pat Weaver, an NBC executive at the time, noticed the positive feedback that Jerry Lester (then hosting Cavalcade of Stars for DuMont) and his manic personality had received on a recent appearance on NBC and offered Lester the hosting position almost immediately. Lester initially hosted the Tuesday, Thursday and Friday episodes of Broadway Open House until Amsterdam exited the show, leaving Lester the sole host. Lester performed sketches with his crew of sidekicks (including some of the earliest TV appearances of brassy Barbara Nichols), running through standard nightclub comedy routines and introducing the show's vocal group, the Mello Larks. Lester's signature bit was to twist his eyeglasses at a 45-degree angle on his face. The show had occasional guests, including Lenny Bruce, who appeared in May 1950, and Charlie Parker, who appeared on October 31, 1950 (an audio recording exists of his appearance on the show), and there were also audience participation bits, such as having women from the audience join the female cast members in modeling fur coats. Lester's fondness for bean bags became a running gag on the series. The sponsors included Anchor Hocking glassware and Blatz Beer.
Cast and crew
Other Broadway Open House cast members were tap dancer Ray Malone, accordionist Milton DeLugg, announcer Wayne Howell and vocalists Jane Harvey, Andy Roberts and David Street. The show's opening theme music was "The Beanbag Song" by DeLugg, Lester and Willie Stein. A second theme was the song "It's Almost Like Being in Love." DeLugg often played a song he wrote with Stein, "Orange Colored Sky", which became a hit for both Lester and for Nat King Cole.
Vic McLeod, Paul Munro, Ray Buffum and Jac Hein were among the producers. Hein, Munro and Joseph C. Cavalier directe |
https://en.wikipedia.org/wiki/Adium | Adium is a free and open-source instant messaging client for macOS that supports multiple IM networks, including XMPP (Jabber), IRC and more. In the past, it has also supported AIM, ICQ, Windows Live Messenger and Yahoo! Messenger. Adium is written using macOS's Cocoa API, and it is released under the GNU GPL-2.0-or-later and many other licenses for components that are distributed with Adium.
History
Adium was created by college student Adam Iser, and the first version, "Adium 1.0", was released in September 2001 and supported only AIM. The version numbers of Adium since then have followed a somewhat unusual pattern. There were several upgrades to Adium 1.0, ending with Adium 1.6.2c.
At this point, the Adium team began a complete rewrite of the Adium code, expanding it into a multiprotocol messaging program. Pidgin's (formerly "Gaim") libpurple (then called "libgaim") library was implemented to add support for IM protocols other than AIM – since then the Adium team has mostly been working on the GUI. The Adium team originally intended to release these changes as "Adium 2.0". However, Adium was eventually renamed to "Adium X" and released at version 0.50, being considered "halfway to a 1.0 product". Adium X 0.88 was the first version compiled as a universal binary, allowing it to run natively on Intel-based Macs.
In 2005, Adium received a "Special Mention" at the Apple Design Awards.
After version Adium X 0.89.1, however, the team finally decided to change the name back to "Adium", and, as such, "Adium 1.0" was released on February 2, 2007.
Apple Inc. used Adium X 0.89.1's build time in Xcode 2.3 as a benchmark for comparing the performance of the Mac Pro and Power Mac G5 Quad, and Adium 1.2's build time in Xcode 3.0 as a benchmark for comparing the performance of the eight-core Mac Pro and Power Mac G5 Quad.
On November 4, 2014, Adium scored 6 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. It lost a point because there has not been a recent independent code audit.
Since March 2019, Adium has no longer been able to support the ICQ plugin.
Protocols
Adium supports a wide range of instant messaging networks through the libraries libezv (for Bonjour), STTwitterEngine (for Twitter), and libpurple (for all other protocols).
Adium supports the following protocols:
XMPP (including Google Talk, Facebook Chat, and LiveJournal services)
Twitter
Bonjour
Internet Relay Chat
Novell GroupWise
IBM Sametime
Gadu-Gadu
Skype with a plugin
Skype for Business Server (previously Microsoft Lync Server, Microsoft Office Communications Server) with a plugin
Telegram with a plugin
Tencent QQ with a plugin
Steam Chat with the "Steam IM" plugin
NateOn with a plugin
Plugins and customization
Adium makes use of a plug-in architecture; many of the program's essential features are actually provided by plugins bundled inside the application package. These plugins include functionality such as file transfer, support for the Growl notificati |
https://en.wikipedia.org/wiki/Quantum%20Link | Quantum Link (or Q-Link) was an American and Canadian online service for the Commodore 64 and 128 personal computers that operated starting November 5, 1985. It was operated by Quantum Computer Services of Vienna, Virginia, which later became America Online.
In October 1989 the service was renamed America Online, and made available to users of PC systems as well. The original Q-link service was terminated November 1, 1995 in favor of the America Online brand.
The original Q-Link was a modified version of the PlayNET system, which Control Video Corporation licensed. Q-Link featured electronic mail, online chat (in its People Connection department), public domain file sharing libraries, online news, and instant messaging using On Line Messages (OLMs). Other noteworthy features included multiplayer games like checkers, chess, backgammon, hangman, and a clone of the television game show Wheel of Fortune called "Puzzler"; and an interactive graphic resort island, called Habitat during beta-testing, then renamed Club Caribe.
In October 1986, QuantumLink expanded their services to include casino games such as bingo, slot machines, blackjack and poker in RabbitJack's Casino; and RockLink, a section about rock music. The software archives were also organized into hierarchical folders and expanded.
In November 1986, the service began offering to digitize users' photos to be included in their profiles, and started an online auction service.
Connections to Q-Link were typically made by dial-up modems with speeds from 300 to 2400 baud, with 1200 being the most common. The service was normally open weekday evenings and all day on weekends. Pricing was $9.95 per month, with additional fees of six cents per minute (later raised to eight) for so-called "plus" areas, including most of the aforementioned services. Users were given one free hour of "plus" usage per month. Hosts of forums and trivia games could also earn additional free "plus" time.
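The pricing described above reduces to a simple bill calculation; as a sketch (at the original six-cent rate, with the one free "plus" hour):

```python
def qlink_monthly_bill(plus_minutes: int, per_minute: float = 0.06) -> float:
    """Monthly Q-Link charge: $9.95 base, with the first 60 'plus'
    minutes free and the remainder billed per minute."""
    billable = max(0, plus_minutes - 60)   # one free hour per month
    return round(9.95 + billable * per_minute, 2)
```

For example, two hours of "plus" usage in a month comes to $9.95 plus 60 billable minutes at six cents, or $13.55.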
Q-Link competed with online services like CompuServe and The Source, and with bulletin board systems (single- and multiuser), including gaming systems such as Scepter of Goth and Swords of Chaos. Quantum Link's graphic display was better than many competing systems because they used specialized client software with a nonstandard protocol. However, this limited their market, because only the Commodore 64 and 128 could run the software necessary to access it.
Club Caribe / Habitat
One of the most influential Quantum Link games was Club Caribe, a predecessor to today's MMOGs.
Club Caribe was developed with Lucasfilm Games using software that later formed the basis of Lucasfilm's Maniac Mansion story system (SCUMM). Users controlled on-screen avatars that could chat with other users, carry and use objects and money (called tokens), and travel around the island one screen at a time. Club Caribe allowed users to take the heads off their characters, carry them around, and even set them down. However, other users cou
https://en.wikipedia.org/wiki/National%20Do%20Not%20Call%20Registry | The National Do Not Call Registry is a database maintained by the United States federal government, listing the telephone numbers of individuals and families who have requested that telemarketers not contact them. Certain callers are required by federal law to respect this request. Separate laws and regulations apply to robocalls in the United States.
The Federal Trade Commission (FTC) opened the National Do Not Call Registry in order to comply with the Do-Not-Call Implementation Act of 2003, sponsored by Representatives Billy Tauzin and John Dingell and signed into law by President George W. Bush on March 11, 2003. The law established the FTC's National Do Not Call Registry in order to facilitate compliance with the Telephone Consumer Protection Act of 1991. A guide published by the FTC addresses a number of cases.
Registration for the Do-Not-Call list began on June 27, 2003, and enforcement started on October 1, 2003. Since January 1, 2005, telemarketers covered by the registry have up to 31 days (initially the period was 90 days) from the date a number is registered to cease calling that number. Originally, phone numbers remained on the registry for a period of five years, but are now permanent because of the Do-Not-Call Improvement Act of 2007, effective February 2008.
Consumers may add landline or cellular numbers to the registry, but FCC regulations prohibit telemarketers from calling a cellular phone number with an automatic dialer under almost all circumstances. In 2005, a rumor began circulating via e-mail that cell phone providers were planning on making their number directories available to telemarketers. The FTC responded by clarifying that cell phones cannot legally be called by telemarketers. Similarly, fax numbers do not need to be included in the registry due to existing federal laws and regulations that prohibit the sending of unsolicited faxes.
If a person does not want to register a number on the national registry, they can still prohibit individual telemarketers from calling by asking the caller to put the called number on the company's do-not-call list.
Legal challenges
The do-not-call list was slated to take effect on October 1, 2003, but two federal district court decisions almost delayed it. One from Oklahoma was overcome by special legislation giving the FTC specific jurisdiction over the matter. The other from Colorado revolved around questions of regulation of commercial speech and threatened to delay implementation of the list. However, President Bush signed a bill authorizing the no-call list to go ahead in September 2003. Finally, the United States Court of Appeals for the Tenth Circuit on February 17, 2004, upheld the constitutionality of the law.
Exceptions to the do-not-call rule
Exceptions
Placing one's number on the National Do Not Call Registry will stop some, but not all, unsolicited calls. The following are exceptions granted by existing laws and regulations—and these types of |
https://en.wikipedia.org/wiki/John%20C.%20Dvorak | John C. Dvorak (; born 1952) is an American columnist and broadcaster in the areas of technology and computing. His writing extends back to the 1980s, when he was a regular columnist in a variety of magazines. He was vice president of Mevio, and has been a host on TechTV and TWiT.tv. He is currently a co-host of the No Agenda podcast.
Early life
Dvorak was born in 1952 in Los Angeles, California. He is a nephew of sociologist and creator of the Dvorak keyboard, August Dvorak.
Writing career
Periodicals
Dvorak started his career as a wine writer.
He has written for various publications, including InfoWorld, PC Magazine (two separate columns since 1986), MarketWatch, BUG Magazine (Croatia), and Info Exame (Brazil). He has been a columnist for Boardwatch, Forbes, Forbes.com, MacUser, MicroTimes, PC/Computing, Barron's Magazine, Smart Business, and The Vancouver Sun. (The MicroTimes column ran under the banner Dvorak's Last Column.) He has written for The New York Times, Los Angeles Times, MacMania Networks, International Herald Tribune, The San Francisco Examiner and The Philadelphia Inquirer, among numerous other publications.
Dvorak has created a few running tech jokes. In episode 18 of TWiT (This Week in Tech), he claimed that, thanks to his hosting provider, he "gets no spam."
Books
Dvorak has written or co-authored over a dozen books, including Hypergrowth: The Rise and Fall of the Osborne Computer Corporation (with Adam Osborne); Dvorak's Guide to Desktop Telecommunications (1990); Dvorak's Guide to PC Telecommunications (Osborne McGraw-Hill, Berkeley, California, 1992); Dvorak's Guide to OS/2 (Random House, New York, 1993), with co-authors Dave Whittle and Martin McElroy; Dvorak Predicts (Osborne McGraw-Hill, Berkeley, California, 1994); and Online! The Book (Prentice Hall PTR, October 2003), with co-authors Wendy Taylor and Chris Pirillo. His latest e-book is Inside Track 2013.
Awards and honors
The Computer Press Association presented Dvorak with the Best Columnist and Best Column awards. He was also the winner of the American Business Editors Association's national gold award in 2004 and 2005, for Best Online Columns of 2003 and 2004, respectively.
He was the creator and lead judge of the Dvorak Awards (1992–1997).
In 2001, he received the Telluride Tech Festival Award of Technology.
He has received the title of Kentucky Colonel, the highest title of honor awarded by the Commonwealth of Kentucky.
In July, 2016, Dvorak and co-host Adam Curry won the "Best Podcast" Podcast Award for No Agenda, in the News & Politics category.
TV and online media
Dvorak was on the start-up team for CNET Networks, appearing on the television show CNET Central. He also hosted a radio show called Real Computing, and later 'Technically Speaking' on NPR, as well as a television show on TechTV (formerly ZDTV) called Silicon Spin.
He appeared on Marketwatch TV and This Week in Tech, a podcast audio and now video program hosted by Leo Laporte and featuring o |
https://en.wikipedia.org/wiki/IBM%20System/34 | The IBM System/34 was an IBM midrange computer introduced in 1977. It was withdrawn from marketing in February 1985. It was a multi-user, multi-tasking successor to the single-user System/32. It included two processors, one based on the System/32 and the second based on the System/3. Like the System/32 and the System/3, the System/34 was primarily programmed in the RPG II language.
Hardware
The 5340 System Unit contained the processing unit, the disk storage, and the diskette drive. It had several access doors on both sides. Inside were swing-out assemblies holding the circuit boards and memory cards. It used 220 V power. The IBM 5250 series of terminals were the primary interface to the System/34.
Processors
S/34s had two processors: the Control Storage Processor (CSP) and the Main Storage Processor (MSP). The MSP was the workhorse, based on System/3 architecture; it performed the instructions in the computer programs. The CSP was the governor, a different processor with a different, RISC-like instruction set, based on System/32 architecture; it performed system functions in the background. The CSP also executed the optional Scientific Macroinstructions, a set of emulated floating-point operations used by the System/34 Fortran compiler and, optionally, in assembly code. The clock speed of the CPUs inside a System/34 was fixed at 1 MHz for the MSP and 4 MHz for the CSP. Special utility programs were able to make direct calls to the CSP to perform certain functions; these were usually system programs, such as $CNFIG, which was used to configure the computer system.
Memory and storage
The smallest S/34 had 48K of RAM and an 8.6 MB hard drive. The largest configured S/34 could support 256K of RAM and 256MB of disk space. S/34 hard drives contained a feature called "the extra cylinder," so that bad spots on the drive were detected and dynamically mapped out to good spots on the extra cylinder. Disk space on the System/34 was organized by blocks of 2560 bytes.
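The 2560-byte block granularity mentioned above lends itself to a small arithmetic sketch (illustrative only, not taken from IBM documentation; decimal megabytes are assumed for the drive size):

```python
# Illustrative arithmetic for the S/34's 2560-byte disk blocks.
import math

BLOCK_SIZE = 2560                      # bytes per S/34 disk block

def blocks_needed(file_bytes: int) -> int:
    """Blocks required to store a file, rounding up to whole blocks."""
    return math.ceil(file_bytes / BLOCK_SIZE)

drive_bytes = 8_600_000                # 8.6 MB entry-level drive (decimal MB assumed)
print(drive_bytes // BLOCK_SIZE)       # whole blocks on the smallest drive -> 3359
print(blocks_needed(4000))             # a 4 KB file still occupies 2 blocks
```

The round-up in `blocks_needed` reflects the usual cost of block-organized storage: any file smaller than a block still consumes a whole block.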
The System/34 supported memory paging, referred to as swapping. The System/34 could either swap out entire programs, or individual segments of a program, in order to free up memory for other programs to run.
One of the machine's most distinctive features was an off-line storage mechanism that used boxes of 8-inch floppies which the machine could load and eject in a nonsequential fashion.
Software
Operating System
The System Support Program (SSP) was the only operating system of the S/34. It contained support for multiprogramming, multiple processors, 36 devices, job queues, printer queues, security, and indexed files. Fully installed, it was about 5 MB. The Operational Control Language (OCL) was the control language of SSP.
Programming
The System/34's initial programming languages were limited to RPG II and Basic Assembler when introduced in 1977. FORTRAN was fully available six months after the 34's introduction, and COBOL was available as a PRPQ. B |
https://en.wikipedia.org/wiki/QuickDraw | QuickDraw was the 2D graphics library and associated application programming interface (API) which is a core part of classic Mac OS. It was initially written by Bill Atkinson and Andy Hertzfeld. QuickDraw still existed as part of the libraries of macOS, but had been largely superseded by the more modern Quartz graphics system. In Mac OS X Tiger, QuickDraw has been officially deprecated. In Mac OS X Leopard applications using QuickDraw cannot make use of the added 64-bit support. In OS X Mountain Lion, QuickDraw header support was removed from the operating system. Applications using QuickDraw still ran under OS X Mountain Lion to macOS High Sierra; however, the current versions of Xcode and the macOS SDK do not contain the header files to compile such programmes.
Principles of QuickDraw
QuickDraw was grounded in the Apple Lisa's LisaGraf of the early 1980s and was designed to fit well with the Pascal-based interfaces and development environments of the early Apple systems. In addition, QuickDraw was a raster graphics system, which defines the pixel as its basic unit of graphical information. This is in contrast to vector graphics systems, where graphics primitives are defined in mathematical terms and rasterized as required to the display resolution. A raster system, however, requires much less processing power, and raster graphics were the prevailing paradigm at the time QuickDraw was developed.
QuickDraw defined a key data structure, the graphics port, or GrafPort. This was a logical drawing area where graphics could be drawn. The most obvious on-screen "object" corresponding to a GrafPort was a window, but the entire desktop view could be a GrafPort, and off-screen ports could also exist.
The GrafPort defined a coordinate system. In QuickDraw, this had a resolution of 16 bits, giving 65,536 unique vertical and horizontal locations, numbered from −32,767 at the extreme left (or top) to +32,767 at the extreme right (or bottom). A window was usually set up so that the top-left corner of its content area was located at (0, 0) in the associated GrafPort. A window's content area did not include the window's frame, drop shadow, or title bar (if any).
QuickDraw coordinates referred to the infinitely thin lines between pixel locations. An actual pixel was drawn in the space to the immediate right and below the coordinate. This made it easier for programmers to avoid graphical glitches caused by off-by-one errors.
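The payoff of this "coordinates name the gridlines between pixels" convention is that widths and heights come out as plain differences. A small sketch of the arithmetic (illustrative only; these are not QuickDraw API names):

```python
# Coordinates name the infinitely thin gridlines between pixels; a pixel
# occupies the cell to the right of and below its coordinate.
# (Hypothetical helper, not a QuickDraw routine.)

def pixels_covered(left: int, right: int) -> int:
    """Pixel columns covered by a span from gridline `left` to gridline `right`."""
    return right - left   # no +1/-1 correction needed

# A rect from x=10 to x=20 covers exactly 10 pixel columns (10..19):
print(pixels_covered(10, 20))   # 10

# Under the alternative convention where coordinates name pixel centers,
# the same span would cover right - left + 1 pixels -- the classic
# off-by-one error this design avoids.
```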
On the Macintosh, pixels were square, and a GrafPort had a default resolution of 72 pixels per inch, chosen to match conventions established by the printing industry of having 72 points per inch.
QuickDraw also contained a number of scaling and mapping functions.
QuickDraw maintained a number of global variables per process, chief among these being the current port. This originally simplified the API, since all operations pertained to "the current port," but as the OS developed, this use of global state has also made QuickDraw much harder to |
https://en.wikipedia.org/wiki/List%20of%20rivers%20of%20Albania | Albania has an extensive hydrographic network of 152 rivers and streams, including 10 large rivers flowing from southeast to northwest, mainly discharging towards the Adriatic Sea.
In the mountainous regions, the rivers meander through narrow valleys with steep banks and great depth, collecting streams and silt during heavy rains. Their beds become erosive, causing frequent changes in their paths.
The rivers are mainly fed by atmospheric precipitation (65-92%) and underground water (8-35%), with an average rainfall layer of 1494 mm and a runoff layer of 945 mm. Water flow varies by season, with winter having the largest annual flow (40%), followed by spring, autumn, and summer. The rivers contain an average mineralization of 150 to 500 mg/L and an average annual volume of suspended solids of 60 million tons, with greater erosion occurring in the catchment basins of Osum, Devoll, and Erzen. Temperatures in the winter months fall to 3.5-8.9°C, and in the summer months reach 17.8-24.6°C.
List of rivers
Main rivers
Other rivers
{| class="wikitable sortable" width="75%"
|-
! No.
! class="unsortable"|Image
! River
! Length
! Basin
! Annual flow
! class="unsortable"|Map
|-
| align="center"|11
|
| Shushicë
|
|
|
|
|-bgcolor="#e5f7e9"
| align="center"|12
|
| Ishëm
|
|
|
|
|-
| align="center"|13
|
| Cem
|
|
|
|
|-
| align="center"|14
|
| Kir
|
|
|
|
|-bgcolor="#e5f7e9"
| align="center"|15
|
| Valbonë
|
|
|
|
|-
| align="center"|16
|
| Pavllë
|
|
|
|
|-
| align="center"|17
|
| Bunë
|
|
|
|
|-
| align="center"|18
|
| Lëngaricë
|
|
|
|
|-
| align="center"|19
|
| Shalë
|
|
|
|
|-
| align="center"|20
|
| Kalasë
|
|
|
|
|-
| align="center"|21
|
| Lanë
|
|
|
|
|-bgcolor="#f5f5e2"
| align="center"|22
|
| Gashi
|
|
|
|
|-
| align="center"|23
|
| Bistricë
|
|
|
|
|-
| align="center"|24
|
| Bënça
|
|
|
|
|-
|}
Basins
Organized by drainage basin. Albanian-language names are listed if different from English. Italics indicate that the body of water is not in or bordering Albania.
Adriatic Sea
Bojana –
Great Drin –
White Drin –
Black Drin –
Lake Ohrid –
Valbona
Gashi
Shala
Kir
Lake Skadar –
Morača –
Cem
Mareza
Gulf of Drin –
Small Drin; see Great Drin above
Aoös –
Drino
Shushicë
Sarantaporos
Seman
Osum
Devoll
Tomorrica
Shkumbin
Mat
Fan
Erzen
Ishëm
Gjole
Tërzukë
Tiranë
Lanë
Zezë
Ionian Sea
Pavllë
Bistricë
Blue Eye (spring) –
Black Sea
Danube –
Sava
Drina
Lim
See also
Protected areas of Albania
Geography of Albania
Climate of Albania
Biodiversity of Albania
References
Albania
Rivers
Landforms of Albania |
https://en.wikipedia.org/wiki/Mathematical%20problem | A mathematical problem is a problem that can be represented, analyzed, and possibly solved, with the methods of mathematics. This can be a real-world problem, such as computing the orbits of the planets in the solar system, or a problem of a more abstract nature, such as Hilbert's problems. It can also be a problem referring to the nature of mathematics itself, such as Russell's Paradox.
Real-world problems
Informal "real-world" mathematical problems are questions related to a concrete setting, such as "Adam has five apples and gives John three. How many has he left?". Such questions are usually more difficult to solve than regular mathematical exercises like "5 − 3", even if one knows the mathematics required to solve the problem. Known as word problems, they are used in mathematics education to teach students to connect real-world situations to the abstract language of mathematics.
In general, to use mathematics for solving a real-world problem, the first step is to construct a mathematical model of the problem. This involves abstraction from the details of the problem, and the modeller has to be careful not to lose essential aspects in translating the original problem into a mathematical one. After the problem has been solved in the world of mathematics, the solution must be translated back into the context of the original problem.
Abstract problems
Abstract mathematical problems arise in all fields of mathematics. While mathematicians usually study them for their own sake, by doing so, results may be obtained that find application outside the realm of mathematics. Theoretical physics has historically been a rich source of inspiration.
Some abstract problems have been rigorously proved to be unsolvable, such as squaring the circle and trisecting the angle using only the compass and straightedge constructions of classical geometry, and solving the general quintic equation algebraically. Also provably unsolvable are so-called undecidable problems, such as the halting problem for Turing machines.
Some well-known difficult abstract problems that have been solved relatively recently are the four-colour theorem, Fermat's Last Theorem, and the Poincaré conjecture.
Computers do not need to have a sense of the motivations of mathematicians in order to do what they do. Formal definitions and computer-checkable deductions are absolutely central to mathematical science.
Degradation of problems to exercises
Mathematics educators using problem solving for evaluation have an issue phrased by Alan H. Schoenfeld:
How can one compare test scores from year to year, when very different problems are used? (If similar problems are used year after year, teachers and students will learn what they are, students will practice them: problems become exercises, and the test no longer assesses problem solving).
The same issue was faced by Sylvestre Lacroix almost two centuries earlier:
... it is necessary to vary the questions that students might communicate with e |
https://en.wikipedia.org/wiki/A.%20K.%20Dewdney | Alexander Keewatin Dewdney (born August 5, 1941) is a Canadian mathematician, computer scientist, author, filmmaker, and conspiracy theorist. Dewdney is the son of Canadian artist and author Selwyn Dewdney, and brother of poet Christopher Dewdney.
He was born in London, Ontario.
Art and fiction
In his student days, Dewdney made a number of influential experimental films, including Malanga, on the poet Gerald Malanga, Four Girls, Scissors, and his most ambitious film, the pre-structural Maltese Cross Movement. Margaret Atwood wrote that a poetry scrapbook by Dewdney, based on the Maltese Cross Movement film, "raises scrapbooking to an art".
The Academy Film Archive has preserved two of Dewdney's films: The Maltese Cross Movement in 2009 and Wildwood Flower in 2011.
He has also written two novels, The Planiverse (about an imaginary two-dimensional world) and Hungry Hollow: The Story of a Natural Place. Dewdney lives in London, Ontario, Canada, where he holds the position of Professor Emeritus at the University of Western Ontario.
Computing, mathematics, and science
Dewdney has written a number of books on mathematics, computing, and bad science. He also founded and edited a magazine on recreational programming called Algorithm between 1989 and 1993.
Dewdney followed Martin Gardner and Douglas Hofstadter in authoring Scientific American magazine's recreational mathematics column, renamed to "Computer Recreations", then "Mathematical Recreations", from 1984 to 1991. He has published more than 10 books on scientific possibilities and puzzles. Dewdney was a co-inventor of programming game Core War.
Since the nineties, Dewdney has worked on biology, both as a field ecologist and as a mathematical biologist, contributing a solution to the problem of determining the underlying dynamics of species abundance in natural communities.
Conspiracy theories
Dewdney is a member of the 9/11 truth movement, and has theorized that the planes used in the September 11 attacks had been emptied of passengers and were flown by remote control.
He based these claims in part on a series of experiments (one with funding from Japan's TV Asahi) that, he claims, show that cell phones do not work on airplanes, from which he concludes that the phone calls received from hijacked passengers during the attacks must have been faked.
Works
The Planiverse: Computer Contact with a Two-Dimensional World (1984). .
The Armchair Universe: An Exploration of Computer Worlds (1988). . (collection of "Mathematical Recreations" columns)
The Magic Machine: A Handbook of Computer Sorcery (1990). . (collection of "Mathematical Recreations" columns)
The New Turing Omnibus: Sixty-Six Excursions in Computer Science (1993). .
The Tinkertoy Computer and Other Machinations (1993). . (collection of "Mathematical Recreations" columns)
Introductory Computer Science: Bits of Theory, Bytes of Practice (1996). .
200% of Nothing: An Eye Opening Tour Through the Twists and Turns of Math Abuse and Inn |
https://en.wikipedia.org/wiki/Globalstar | Globalstar, Inc. is an American satellite communications company that operates a low Earth orbit (LEO) satellite constellation for satellite phone and low-speed data communications. The Globalstar second-generation constellation consists of 25 low Earth orbiting (LEO) satellites.
History
The Globalstar project was launched in 1991 as a joint venture of Loral Corporation and Qualcomm. On March 24, 1994, the two sponsors announced the formation of Globalstar LP, a limited partnership established in the U.S., with financial participation from eight other companies, including Alcatel, AirTouch, Deutsche Aerospace, Hyundai, and Vodafone. At that time, the company predicted the system would launch in 1998, based on an investment of $1.8 billion.
Globalstar received its US spectrum allocation from the FCC in January 1995 and continued to negotiate with other nations for rights to use the same radio frequencies in their countries.
The first satellites were launched in February 1998, but system deployment was delayed due to a launch failure in September 1998 that resulted in the loss of 12 satellites in a launch by the Russian Space Agency.
The first call on the original Globalstar system was placed on November 1, 1998, from Qualcomm chairman Irwin Jacobs in San Diego to Loral Space & Communications CEO and chairman Bernard Schwartz in New York City.
In October 1999, the system began "friendly user" trials with 44 of 48 planned satellites. In December 1999, the system began limited commercial service for 200 users with the full 48 satellites (no spares in orbit). In February 2000, it began full commercial service with its 48 satellites and 4 spares in North America, Europe, and Brazil. Another eight satellites were maintained as ground spares. Initial prices were $1.79/minute for satellite phone calls.
On February 15, 2002, the predecessor company Globalstar (old Globalstar) and three of its subsidiaries filed voluntary petitions under Chapter 11 of the United States Bankruptcy Code.
In 2004, restructuring of the old Globalstar was completed. The first stage of the restructuring was completed on December 5, 2003, when Thermo Capital Partners LLC was deemed to obtain operational control of the business, as well as certain ownership rights and risks. Thermo Capital Partners became the principal owner.
Globalstar LLC was formed as a Delaware limited liability company in November 2003 and was converted into Globalstar, Inc., on March 17, 2006.
In 2007, Globalstar launched eight additional first-generation spare satellites into space to help compensate for the premature failure of their in-orbit satellites. Between 2010 and 2013, Globalstar launched 24 second-generation satellites in an effort to restore their system to full service.
Between 2010 and 2011, Globalstar moved its headquarters from Silicon Valley to Covington, Louisiana in part to take advantage of the state's tax breaks and low cost of living.
In April 2018, Globalstar announced it wou |
https://en.wikipedia.org/wiki/Oz%20%28programming%20language%29 | Oz is a multiparadigm programming language, developed in the Programming Systems Lab at Université catholique de Louvain, for programming language education. It has a canonical textbook: Concepts, Techniques, and Models of Computer Programming.
Oz was first designed by Gert Smolka and his students in 1991. In 1996, development of Oz continued in cooperation with the research group of Seif Haridi and Peter Van Roy at the Swedish Institute of Computer Science. Since 1999, Oz has been continually developed by an international group, the Mozart Consortium, which originally consisted of Saarland University, the Swedish Institute of Computer Science, and the Université catholique de Louvain. In 2005, the responsibility for managing Mozart development was transferred to a core group, the Mozart Board, with the express purpose of opening Mozart development to a larger community.
The Mozart Programming System is the primary implementation of Oz. It is released with an open source license by the Mozart Consortium. Mozart has been ported to Unix, FreeBSD, Linux, Windows, and macOS.
Language features
Oz contains most of the concepts of the major programming paradigms, including logic, functional (both lazy evaluation and eager evaluation), imperative, object-oriented, constraint, distributed, and concurrent programming. Oz has both a simple formal semantics (see chapter 13 of the textbook mentioned above) and an efficient implementation. Oz is a concurrency-oriented language, as the term was introduced by Joe Armstrong, the main designer of the Erlang language. A concurrency-oriented language makes concurrency easy to use and efficient. Oz supports a canonical graphical user interface (GUI) language, QTk.
In addition to multi-paradigm programming, the major strengths of Oz are in constraint programming and distributed programming. Due to its factored design, Oz is able to successfully implement a network-transparent distributed programming model. This model makes it easy to program open, fault-tolerant applications within the language. For constraint programming, Oz introduces the idea of computation spaces, which allow user-defined search and distribution strategies orthogonal to the constraint domain.
Language overview
Data structures
Oz is based on a core language with very few datatypes that can be extended into more practical ones through syntactic sugar.
Basic data structures:
Numbers: floating point or integer (real integer)
Records: for grouping data: circle(x:0 y:1 radius:3 color:blue style:dots). Here the terms x, y, radius etc. are called features, and the data associated with the features (in this case 0, 1, 3 etc.) are the values.
Tuples: Records with integer features in ascending order: circle(1:0 2:1 3:3 4:blue 5:dots) .
Lists: a simple linear structure
'|'(2 '|'(4 '|'(6 '|'(8 nil)))) % as a record.
2|(4|(6|(8|nil))) % with some syntactic sugar
2|4|6|8|nil % more syntactic sugar
[2 4 6 8] % even more syntactic sugar
Those data structures are values (cons |
https://en.wikipedia.org/wiki/National%20Highway%20%28Australia%29 | The National Highway (part of the National Land Transport Network) is a system of roads connecting all mainland states and territories of Australia, and is the major network of highways and motorways connecting Australia's capital cities and major regional centres.
History
Legislation
National funding for roads began in the 1920s, with the federal government contributing to major roads managed by the state and territory governments. However, the Federal Government did not completely fund any roads until 1974, when the Whitlam government introduced the National Roads Act 1974. Under the act, the states were still responsible for road construction and maintenance, but were fully compensated for money spent on approved projects.
In 1977, the 1974 Act was replaced by the State Grants (Roads) Act 1977, which contained similar provisions for the definition of "National Highways".
In 1988, the National Highway became redefined under the Australian Land Transport Development (ALTD) Act 1988, which had various amendments up to 2003. The 1988 Act was largely concerned with funding road development in cooperation with the state governments. The federal transport minister defined the components of the National Highway, and also a category of "Road of National Importance" (RONI), with federal funding implications. Section 10.5 of the Act required the state road authorities to place frequent, prominent, signs on the National Highways and RONI projects funded by the federal government.
In 2005, the National Highway became the National Land Transport Network, under the AusLink (National Land Transport) Act 2005. The criteria for inclusion in the network were similar to those in the previous legislation, but expanded to include connections to major commercial centres and inter-modal facilities. All of the roads included in the National Land Transport Network as of 2005 were formally defined by regulation in October 2005. The Minister for Transport may alter the list of roads included in the network. Three amendments to the scheduled list of roads have been made, in February 2007, September 2008 and February 2009. The third variation, published in February 2009, is current as of September 2012.
Under AusLink, a program that operated between July 2004 and 2009, the AusLink National Network had additional links, both road and rail. The Federal Government encouraged funding from state, territory and local governments and public–private partnerships to upgrade the network, and required state government funding contributions on parts of the network, especially for new links. For example, the Pacific Highway and the Calder Highway are part of the National Network, yet new projects are being funded 50/50 by federal and state governments. State contributions (generally 20%) are required on some sections of the old network near major cities.
Roads and routes
The various superseded Acts defined National Highways as roads, or a series of connected roads, that were the primary connec |
https://en.wikipedia.org/wiki/Primary%20key | In the relational model of databases, a primary key is a specific choice of a minimal set of attributes (columns) that uniquely specify a tuple (row) in a relation (table). Informally, a primary key is "which attributes identify a record," and in simple cases constitute a single attribute: a unique ID. More formally, a primary key is a choice of candidate key (a minimal superkey); any other candidate key is an alternate key.
A primary key may consist of real-world observables, in which case it is called a natural key, while an attribute created to function as a key and not used for identification outside the database is called a surrogate key. For example, for a database of people (of a given nationality), time and location of birth could be a natural key. National identification number is another example of an attribute that may be used as a natural key.
Design
In relational database terms, a primary key does not differ in form or function from a key that isn't primary. In practice, various motivations may determine the choice of any one key as primary over another. The designation of a primary key may indicate the "preferred" identifier for data in the table, or that the primary key is to be used for foreign key references from other tables, or it may indicate some other technical, rather than semantic, feature of the table. Some languages and software have special syntax features that can be used to identify a primary key as such (e.g. the PRIMARY KEY constraint in SQL).
The relational model, as expressed through relational calculus and relational algebra, does not distinguish between primary keys and other kinds of keys. Primary keys were added to the SQL standard mainly as a convenience to the application programmer.
A primary key can be an auto-incremented integer, a universally unique identifier (UUID), or a value generated with the Hi/Lo algorithm.
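These generation strategies can be sketched as follows (illustrative Python, not from the article; the Hi/Lo version is deliberately simplified, and block_size is an arbitrary choice):

```python
import itertools
import uuid

# Strategy 1: an auto-incrementing integer key.
_counter = itertools.count(start=1)

def next_int_key():
    return next(_counter)

# Strategy 2: a universally unique identifier (UUID) key.
def next_uuid_key():
    return str(uuid.uuid4())

# Strategy 3: a *simplified* Hi/Lo generator. The real algorithm fetches
# a "hi" block number from the database; fetch_hi stands in for that
# round trip here.
def make_hilo(fetch_hi, block_size=100):
    state = {"hi": 0, "lo": block_size}
    def next_key():
        if state["lo"] >= block_size:
            state["hi"] = fetch_hi()   # one round trip per block of keys
            state["lo"] = 0
        state["lo"] += 1
        return state["hi"] * block_size + state["lo"]
    return next_key
```

In the Hi/Lo scheme only one round trip to the database is needed per block of block_size keys, which is the algorithm's main appeal over a per-row sequence fetch.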
Defining primary keys in SQL
Primary keys are defined in the ISO SQL Standard, through the PRIMARY KEY constraint. The syntax to add such a constraint to an existing table is defined in SQL:2003 like this:
ALTER TABLE <table identifier>
ADD [ CONSTRAINT <constraint identifier> ]
PRIMARY KEY ( <column name> [ {, <column name> }... ] )
The primary key can also be specified directly during table creation. In the SQL Standard, primary keys may consist of one or multiple columns. Each column participating in the primary key is implicitly defined as NOT NULL. Note that some RDBMS require explicitly marking primary key columns as NOT NULL.
CREATE TABLE table_name (
   id_col INT,
   col2 CHARACTER VARYING(20),
   ...
   PRIMARY KEY (id_col)
)
If the primary key consists only of a single column, the column can be marked as such using the following syntax:
CREATE TABLE table_name (
id_col INT PRIMARY KEY,
col2 CHARACTER VARYING(20),
...
)
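The uniqueness guarantee behind a primary key can be observed directly; a minimal sketch using Python's built-in sqlite3 module (an illustration, not part of the standard's text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id_col INTEGER PRIMARY KEY, col2 TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'first')")

# A second row with the same key value violates the PRIMARY KEY constraint.
try:
    conn.execute("INSERT INTO t VALUES (1, 'duplicate')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The duplicate insert is refused, so a lookup by id_col can never return more than one row.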
Surrogate keys
In some circumstances the natural key that uniquely identifies a tuple in a relation may be cumbersome to use for s |
https://en.wikipedia.org/wiki/Foreign%20key | A foreign key is a set of attributes in a table that refers to the primary key of another table. The foreign key links these two tables. Another way to put it: In the context of relational databases, a foreign key is a set of attributes subject to a certain kind of inclusion dependency constraints, specifically a constraint that the tuples consisting of the foreign key attributes in one relation, R, must also exist in some other (not necessarily distinct) relation, S, and furthermore that those attributes must also be a candidate key in S. In simpler words, a foreign key is a set of attributes that references a candidate key. For example, a table called TEAM may have an attribute, MEMBER_NAME, which is a foreign key referencing a candidate key, PERSON_NAME, in the PERSON table. Since MEMBER_NAME is a foreign key, any value existing as the name of a member in TEAM must also exist as a person's name in the PERSON table; in other words, every member of a TEAM is also a PERSON.
Important points to note:
The referenced relation must already exist.
The referenced attribute must be a key (typically the primary key) of the referenced relation.
The data type and size of the referencing and referenced attributes must be the same.
Summary
The table containing the foreign key is called the child table, and the table containing the candidate key is called the referenced or parent table. In database relational modeling and implementation, a candidate key is a set of zero or more attributes, the values of which are guaranteed to be unique for each tuple (row) in a relation. The value or combination of values of candidate key attributes for any tuple cannot be duplicated for any other tuple in that relation.
Since the purpose of the foreign key is to identify a particular row of the referenced table, it is generally required that the foreign key either equals the candidate key in some row of the referenced table or has no value (the NULL value). This rule is called a referential integrity constraint between the two tables.
Because violations of these constraints can be the source of many database problems, most database management systems provide mechanisms to ensure that every non-null foreign key corresponds to a row of the referenced table.
For example, consider a database with two tables: a CUSTOMER table that includes all customer data and an ORDER table that includes all customer orders. Suppose the business requires that each order must refer to a single customer. To reflect this in the database, a foreign key column is added to the ORDER table (e.g., CUSTOMERID), which references the primary key of CUSTOMER (e.g. ID). Because the primary key of a table must be unique, and because CUSTOMERID only contains values from that primary key field, we may assume that, when it has a value, CUSTOMERID will identify the particular customer which placed the order. However, this can no longer be assumed if the ORDER table is not kept up to date when rows of the CUSTOMER table |
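The CUSTOMER/ORDER scenario can be sketched with Python's built-in sqlite3 module (an illustration; note that SQLite's foreign-key enforcement is opt-in via a PRAGMA):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforcement is opt-in
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
# "order" is a reserved word in SQL, so the table name is quoted.
conn.execute('CREATE TABLE "order" ('
             'id INTEGER PRIMARY KEY, '
             'customerid INTEGER REFERENCES customer(id))')

conn.execute("INSERT INTO customer VALUES (1, 'Alice')")
conn.execute('INSERT INTO "order" VALUES (10, 1)')    # OK: customer 1 exists

# An order referring to a non-existent customer violates the constraint.
try:
    conn.execute('INSERT INTO "order" VALUES (11, 99)')
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True
```

With the constraint enforced, every non-null CUSTOMERID in ORDER is guaranteed to identify an existing CUSTOMER row.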
https://en.wikipedia.org/wiki/Quadrics%20%28company%29 | Quadrics was a supercomputer company formed in 1996 as a joint venture between Alenia Spazio and the technical team from Meiko Scientific. They produced hardware and software for clustering commodity computer systems into massively parallel systems. Their highpoint was in June 2003 when six out of the ten fastest supercomputers in the world were based on Quadrics' interconnect. They officially closed on June 29, 2009.
Company history
The Quadrics name was first used in 1993 for a commercialized version of the APE100 SIMD parallel computer produced by Alenia Spazio and originally developed by INFN, the Italian National Institute of Nuclear Physics. In 1996, a new Alenia subsidiary, Quadrics Supercomputers World (QSW) was formed, based in Bristol, UK and Rome, Italy, inheriting the Quadrics SIMD product line and the Meiko CS-2 massively parallel supercomputer architecture. In 2002 the company name was shortened to be simply Quadrics.
Initially, the new company focussed on the development potential of the CS-2's processor interconnect technology. Their first design was the Elan2 network ASIC, intended for use with the UltraSPARC CPU, attached to it using the Ultra Port Architecture (UPA) system bus. Plans to introduce the Elan2 were later dropped, and a new Elan3 hosted on PCI introduced instead. By the time of its release Elan3 had been re-aimed at the Alpha/PCI market instead, after Quadrics had formed a relationship with Digital Equipment Corporation (DEC).
The combination of Quadrics and Alpha 21264 (EV6) microprocessors proved very successful, and Digital/Compaq rapidly became one of the world's largest suppliers of supercomputers. This culminated with the building the largest machine in the US, the 20 TFLOP ASCI Q, installed at Los Alamos National Laboratory during 2002 and 2003. The machine consisted of 2,048 AlphaServer SC nodes (which are based on AlphaServer ES45), each with four 1.25 GHz Alpha 21264A (EV67) microprocessors and two rails of the Quadrics QsNet network. Unfortunately this system failed in reliability and was never put into production use.
Quadrics also had success in selling Linux based systems. Quadrics' first Linux based system was installed in June/July 2001 at SHARCNET. It was the fastest civilian system in Canada at the time of installation. Another high-profile Quadrics system was the fastest Linux cluster in the world called Thunder installed at Lawrence Livermore National Laboratory in 2003/2004. Thunder consisted of 1024 Intel Tiger Quad Itanium II Processor servers to deliver 19.94 teraflops on parallel Linpack. Peak performance of the system was 22.9 teraflops, at a level of efficiency of 87%.
In 2004, Quadrics was selected by Bull for what was then the fastest supercomputer in Europe: TERA-10 at the French CEA: 544 Bull NovaScale 6160 computing nodes, each including eight Itanium 2 processors. The global configuration was to feature 8,704 processors with 27 terabytes of core memory. Each of these computing nod
https://en.wikipedia.org/wiki/Kenneth%20E.%20Iverson | Kenneth Eugene Iverson (17 December 1920 – 19 October 2004) was a Canadian computer scientist noted for the development of the programming language APL. He was honored with the Turing Award in 1979 "for his pioneering effort in programming languages and mathematical notation resulting in what the computing field now knows as APL; for his contributions to the implementation of interactive systems, to educational uses of APL, and to programming language theory and practice".
Life
Ken Iverson was born on 17 December 1920 near Camrose, a town in central Alberta, Canada. His parents were farmers who came to Alberta from North Dakota; his ancestors came from Trondheim, Norway.
During World War II, he served first in the Canadian Army and then in the Royal Canadian Air Force. He received a B.A. degree from Queen's University and the M.Sc. and Ph.D. degrees from Harvard University. In his career, he worked for Harvard, IBM, I. P. Sharp Associates, and Jsoftware Inc. (née Iverson Software Inc.).
Iverson suffered a stroke while working at the computer on a new J lab on 16 October 2004, and died in Toronto on 19 October 2004 at age 83.
Education
Iverson began school on 1 April 1926 in a one-room school, initially in Grade 1, promoted to Grade 2 after 3 months and to Grade 4 by the end of June 1927.
He left school after Grade 9 because it was the depths of the Great Depression and there was work to do on the family farm, and because he thought further schooling only led to becoming a schoolteacher and he had no desire to become one. At age 17, while still out of school, he enrolled in a correspondence course on radios with De Forest Training in Chicago, and learned calculus by self-study from a textbook.
During World War II, while serving in the Royal Canadian Air Force, he took correspondence courses toward a high school diploma.
After the war, Iverson enrolled in Queen's University in Kingston, Ontario, taking advantage of government support for ex-servicemen and under threat from an Air Force buddy who said he would "beat his brains out if he did not grasp the opportunity". He graduated in 1950 as the top student with a Bachelor's degree in mathematics and physics.
Continuing his education at Harvard University, he began in the Department of Mathematics and received a Master's degree in 1951. He then switched to the Department of Engineering and Applied Physics, working with Howard Aiken and Wassily Leontief.
Howard Aiken had developed the Harvard Mark I, one of the first large-scale digital computers, while Wassily Leontief was an economist who was developing the input–output model of economic analysis, work for which he would later receive the Nobel prize. Leontief's model required large matrices and Iverson worked on programs that could evaluate these matrices on the Harvard Mark IV computer. Iverson received a Ph.D. in applied mathematics in 1954 with a dissertation based on this work.
At Harvard, Iverson met Eoin Whitney, a 2-time Putnam |
https://en.wikipedia.org/wiki/Fork%20bomb | In computing, a fork bomb (also called rabbit virus or wabbit) is a denial-of-service attack wherein a process continually replicates itself to deplete available system resources, slowing down or crashing the system due to resource starvation.
History
Around 1978, an early variant of a fork bomb called wabbit was reported to run on a System/360. It may have descended from a similar attack called RABBITS reported from 1969 on a Burroughs 5500 at the University of Washington.
Implementation
Fork bombs operate both by consuming CPU time in the process of forking, and by saturating the operating system's process table. A basic implementation of a fork bomb is an infinite loop that repeatedly launches new copies of itself.
In Unix-like operating systems, fork bombs are generally written to use the fork system call. As forked processes are also copies of the first program, once they resume execution from the next address at the frame pointer, they continue forking endlessly within their own copy of the same infinite loop; this has the effect of causing an exponential growth in processes. As modern Unix systems generally use a copy-on-write resource management technique when forking new processes, a fork bomb generally will not saturate such a system's memory.
Microsoft Windows operating systems do not have an equivalent functionality to the Unix fork system call; a fork bomb on such an operating system must therefore create a new process instead of forking from an existing one.
A classic example of a fork bomb is one written in Unix shell :(){ :|:& };:, possibly dating back to 1999, which can be more easily understood as
fork() {
fork | fork &
}
fork
In it, a function is defined (fork()) as calling itself (fork), then piping (|) its result into itself, all in a background job (&).
The code using a colon : as the function name is not valid in a shell as defined by POSIX, which only permits alphanumeric characters and underscores in function names. However, its usage is allowed in GNU Bash as an extension.
Prevention
As a fork bomb's mode of operation is entirely encapsulated by creating new processes, one way of preventing a fork bomb from severely affecting the entire system is to limit the maximum number of processes that a single user may own. On Linux, this can be achieved by using the ulimit utility; for example, the command ulimit -u 30 would limit the affected user to a maximum of thirty owned processes.
On PAM-enabled systems, this limit can also be set in /etc/security/limits.conf,
and on FreeBSD, the system administrator can put limits in /etc/login.conf.
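The same per-user cap that ulimit -u adjusts is visible to a running process as RLIMIT_NPROC; a minimal Python sketch (the 4096 value is an arbitrary illustration):

```python
import resource

# Read the current per-user process limit (the knob `ulimit -u` adjusts).
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

# Lowering (or setting) the soft limit up to the hard limit needs no
# privileges; under it, a runaway fork() gets EAGAIN instead of
# exhausting the system's process table.
new_soft = 4096
if hard != resource.RLIM_INFINITY:
    new_soft = min(new_soft, hard)
resource.setrlimit(resource.RLIMIT_NPROC, (new_soft, hard))
```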
Modern Linux systems also allow finer-grained fork bomb prevention through cgroups and process number (PID) controllers.
See also
Deadlock
Logic bomb
Time bomb (software)
References
External links
Denial-of-service attacks
Process (computing) |
https://en.wikipedia.org/wiki/Transport%20finance | Transport finance is the subject that explores how transport networks are paid for.
The timing of the money required to finance transport is a principal issue. Many projects are "pay-as-you-go": infrastructure, which lasts many years, is expected to be paid for out of ongoing cash flow. Other projects are financed with bonds raised in capital markets. Bonds must be secured with an expected future cash flow.
The cash flow, required for either pay-as-you-go or for bonds, must be raised. Common sources are user fees, such as gas taxes, and tolls. Other sources are general revenue. This issue is related to who bears the burden: users or the general public. Even if users bear the burden, that class must be subdivided, e.g. users during peak times or off-peak, freight or passenger traffic, urban or rural users, residents or non-residents (many toll plazas are located on the state line to maximize revenue from non-residents).
A third issue concerns the full costs of transportation. There are monetary costs, which are financed with money, as considered above, but there are also non-monetary costs (sometimes called hidden costs), which are paid for by people's time, by clean air, by peace and quiet, etc. See the discussion of externalities for a fuller explication of non-monetary costs.
References
See also
Transport divide
Transport economics
Fields of finance
Transport economics |
https://en.wikipedia.org/wiki/Together%20We%20Stand | Together We Stand, also known as Nothing Is Easy, is an American sitcom that aired on the CBS network from 1986 to 1987. It was written by Stephen Sustarsic and directed by Andrew D. Weyman.
Together We Stand is about a married couple, David (Elliott Gould) and Lori Randall (Dee Wallace), and their array of adopted children from all walks of life. According to producer Sherwood Schwartz, the plot for this show was originally written as a spin-off of The Brady Bunch called Kelly's Kids. In the January 4, 1974, episode of The Brady Bunch, also titled "Kelly's Kids" (season 5, episode 14), which served as a backdoor pilot, the Bradys' neighbors plan to adopt one child but end up adopting three boys of different ethnicities.
Summary
David Randall (Elliott Gould) and his wife Lori (Dee Wallace) had two kids, adopted daughter Amy (Katie O'Neill) and biological son Jack (Scott Grimes). After seeing how well the Randall family did with an adopted child and a biological child, a pushy social worker (Edie McClurg) gives them two more children: an Asian-American boy named Sam (Ke Huy Quan) and a little African-American girl named Sally (Natasha Bobo). The story lines centered on the cultural differences and adjustments that had to be made by all: Sam and Sally having parents for the first time, and Jack and Amy competing with the new arrivals for their parents' time and affection. After six episodes, Gould's character was killed off, and the series focused on Lori's struggles as a single mother.
Cast
Elliott Gould as David Randall (in Together We Stand only)
Dee Wallace as Lori Randall
Scott Grimes as Jack Randall
Katie O'Neill as Amy Randall
Ke Huy Quan (credited as Jonathan Ke Quan) as Sam Randall
Natasha Bobo as Sally Randall
Julia Migenes as Marion Simmons (in Nothing Is Easy only)
Episodes
Together We Stand (1986)
Nothing Is Easy (1987)
Network run
Premiering on Monday, September 22, 1986, at 8:30 PM ET after Kate and Allie, where its ratings were initially strong, Together We Stand moved to Wednesdays at 8:00 PM ET beginning on October 1 to make room for My Sister Sam, putting it up against ABC's Perfect Strangers and NBC's Highway to Heaven instead. When the show's ratings plunged, CBS pulled the series after six episodes had aired.
The show returned three months later with a new title – Nothing Is Easy – new opening credits, a new time slot (Sundays at 9:30 PM ET, after Designing Women and up against movies on the other two networks), a new theme song, and a new cast member – Julia Migenes as bitter divorced neighbor Marion Simmons. Elliott Gould did not appear in the revamped series – his character was killed off in an automobile accident, and Dee Wallace-Stone continued on as a single mother. After two weeks in that time slot, it went on hiatus again for a month only to resurface on Fridays at 8:00 PM beginning on March 27, competing with ABC's The Charmings and NBC's Roomies. The revamp lasted only for a total of seven episodes befo |
https://en.wikipedia.org/wiki/Alphabet%20Synthesis%20Machine | The Alphabet Synthesis Machine (2002) is a work of interactive art which makes use of genetic algorithms to "evolve" a set of glyphs similar in appearance to a real-world alphabet. Users create initial glyphs and the program takes over. As the creators of the project put it, their goal was "to bring about the specific feeling of semi-sense one experiences when one recognizes—but cannot read—the unfamiliar writing of another culture." The project was developed by Golan Levin, a new-media artist, in collaboration with Cassidy Curtis and Jonathan Feinberg.
Notes
References
https://www.pbs.org/art21/series/seasonone/online.html (PBS and Art21 commissioned the work)
http://www.flong.com/storage/pdf/reports/alphabet_report.pdf (White paper by artists that describes the work in detail)
https://web.archive.org/web/20071027165630/http://www.ciac.ca/magazine/archives/no_19/en/entrevue.htm (Interview with Golan Levin by CIAC's Electronic Magazine)
External links
http://www.alphabetsynthesis.com/ (project page composed by Golan Levin; link no longer works)
https://web.archive.org/web/20080513044335/http://www.alphabetsynthesis.com/ (Archived version of previous link)
http://www.users.globalnet.co.uk/~ngo/font0000.htm (examples of fonts produced by the Machine)
http://www.tug.org/TUGboat/Articles/tb26-1/tb82beet.pdf (brief mention in TUGboat)
Computer art |
https://en.wikipedia.org/wiki/DICT | DICT is a dictionary network protocol created by the DICT Development Group in 1997, described by RFC 2229. Its goal is to surpass the Webster protocol to allow clients to access a variety of dictionaries via a uniform interface.
According to section 3.2 of the DICT protocol RFC, queries and definitions are sent in clear text, meaning that there is no encryption. Nevertheless, according to section 3.1 of the RFC, various forms of authentication (sans encryption) are supported, including Kerberos version 4.
The protocol consists of a few commands a server must recognize so a client can access the available data and look up word definitions. DICT servers and clients use TCP port 2628 by default. Queries are captured in the following URL scheme:
dict://<user>;<auth>@<host>:<port>/<c>:<word>:<database>:<strategy>:<n>
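As an illustration (a hypothetical helper, not from the RFC's text), a URL of that shape can be assembled from its components; here "d" is assumed to stand for a define-style lookup and "*" to ask the server to search all databases:

```python
def dict_url(host, word, port=2628, database="*", command="d"):
    """Build a dict:// URL for a define-style lookup.

    The defaults are assumptions for illustration: port 2628 is the
    protocol's registered port, and "*" requests all databases.
    """
    return f"dict://{host}:{port}/{command}:{word}:{database}"

url = dict_url("dict.org", "penguin")
```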
Resources for free dictionaries from DICT protocol servers
A repository of source files for the DICT Development group's dict protocol server (with a few sample dictionaries) is available online.
Dictionaries of English
Bouvier's Law Dictionary, Revised 6th Ed (1856)
CIA World Factbook
Easton's Bible Dictionary (1897)
Elements database
Free On-line Dictionary of Computing
Hitchcock's Bible Names Dictionary
Jargon File
Moby Thesaurus
Oxford Advanced Learner's Dictionary
The Devil's Dictionary (1911)
The U.S. Gazetteer (1990 Census)
V.E.R.A. – Virtual Entity of Relevant Acronyms which are used in the field of computing
Webster's Revised Unabridged Dictionary (1913)
WordNet
Bilingual dictionaries
Big English–Russian Dictionary
English–French dictionary
Freedict provides a collection of over 85 translating dictionaries, as XML source files with the data, mostly accompanied by databases generated from the XML files in the format used by DICT servers and clients. These are available from the Freedict project web site.
FREELANG Dictionary
Lingvo English–Russian and Russian–English dictionaries are not free, but when purchased, can easily be converted into DICT format
Mueller's English–Russian dictionary
Slovak-English legal dictionary
Slovak-Italian legal dictionary
DICT servers
dictd (the standard server made by the DICT Development Group)
DictD++ – modern powerful server written in C++ with heavy usage of STL and boost (abandoned)
GNU Dico
JDictd – a Java-based DICT server implementation (abandoned)
DICT clients
A dictd server can be used from Telnet. For example, to connect to the DICT server on localhost, on a Unix system one can normally type:
telnet localhost dict
and then enter the command "help" to see the available commands. The standard dictd package also provides a "dict" command for command-line use.
More sophisticated DICT clients include:
cURL
dictc (DICT Client), a client for Windows written in Delphi.
dict.org's own client (part of the dictd package)
dictem, for the Emacs text editor
Dictionary, an application included with Mac OS X. Online dictionaries can be accessed by setting it as the helper |
https://en.wikipedia.org/wiki/IBM%20System/36 | The IBM System/36 (often abbreviated as S/36) was a midrange computer marketed by IBM from 1983 to 2000, a multi-user, multi-tasking successor to the System/34.
Like the System/34 and the older System/32, the System/36 was primarily programmed in the RPG II language. One of the machine's optional features was an off-line storage mechanism (on the 5360 model) that utilized "magazines" – boxes of 8-inch floppies that the machine could load and eject in a nonsequential fashion. The System/36 also had many mainframe features such as programmable job queues and scheduling priority levels.
While these systems were similar to other manufacturers' minicomputers, IBM themselves described the System/32, System/34 and System/36 as "small systems" and later as midrange computers along with the System/38 and succeeding IBM AS/400 range.
The AS/400 series and IBM Power Systems running IBM i can run System/36 code in the System/36 Environment, although the code needs to be recompiled on IBM i first.
Overview of the IBM System/36
The IBM System/36 was a popular small business computer system, first announced on 16 May 1983 and shipped later that year. It had a 17-year product lifespan. The first model of the System/36 was the 5360.
In the 1970s, the US Department of Justice brought an antitrust lawsuit against IBM, claiming it was using unlawful practices to knock out competitors. At this time, IBM had been about to consolidate its entire line (System/370, 4300, System/32, System/34, System/38) into one "family" of computers with the same ISAM database technology, programming languages, and hardware architecture. After the lawsuit was filed, IBM decided it would have two families: the System/38 line, intended for large companies and representing IBM's future direction, and the System/36 line, intended for small companies who had used the company's legacy System/32/34 computers. In the late 1980s the lawsuit was dropped, and IBM decided to recombine the two product lines, creating the AS/400, which replaced both the System/36 and System/38.
The System/36 used virtually the same RPG II, Screen Design Aid, OCL, and other technologies that the System/34 used, though it was object-code incompatible. The S/36 was a small business computer; it had an 8-inch diskette drive, between one and four hard drives in sizes of 30 to 716 MB, and memory from 128K up to 7MB. Tape drives were available as backup devices; the 6157 QIC (quarter-inch cartridge) and the reel-to-reel 8809 both had capacities of roughly 60MB. The Advanced/36 9402 tape drive had a capacity of 2.5GB. The IBM 5250 series of terminals were the primary interface to the System/36.
System architecture
Processors
S/36s had two sixteen-bit processors, the CSP or Control Storage Processor, and the MSP or Main Storage Processor. The MSP was the workhorse; it performed the instructions in the computer programs. The CSP was the governor; it performed system functions in the background. Special utility pro |
https://en.wikipedia.org/wiki/TPN | TPN may refer to:
Science and Medicine
Total parenteral nutrition
Triphosphopyridine nucleotide, the previous name for nicotinamide adenine dinucleotide phosphate (NADP+)
Task Positive Network, see Dorsal attention network
Organisations
Towarzystwo Przyjaciół Nauk (Society of Friends of Science) in Warsaw
Other
Third-party note or Note verbale, a diplomatic document
Tupinambá language, by ISO 639 code
Tiputini Airport, Ecuador
Treaty on the Prohibition of Nuclear Weapons
"The Promised Neverland" (Manga / Anime Series) |
https://en.wikipedia.org/wiki/Colossal%20Cave%20Adventure | Colossal Cave Adventure (also known as Adventure or ADVENT) is a text-based adventure game, released in 1976 by developer Will Crowther for the PDP-10 mainframe computer. It was expanded upon in 1977 by Don Woods. In the game, the player explores a cave system rumored to be filled with treasure and gold. The game is composed of dozens of locations, and the player moves between these locations and interacts with objects in them by typing one- or two-word commands which are interpreted by the game's natural language input system. The program acts as a narrator, describing the player's location and the results of the player's attempted actions. It is the first well-known example of interactive fiction, as well as the first well-known adventure game, for which it was also the namesake.
The original game, written in 1975 and 1976, was based on Crowther's maps and experiences caving in Mammoth Cave in Kentucky, the longest cave system in the world; further, it was intended, in part, to be accessible to non-technical players, such as his two daughters. Woods's version expanded the game in size and increased the number of fantasy elements present in it, such as a dragon and magic spells. Both versions, typically played over teleprinters connected to mainframe computers, were spread around the nascent ARPANET, the precursor to the Internet, which Crowther was involved in developing.
Colossal Cave Adventure was one of the first teletype games and was massively popular in the computer community of the late 1970s, with numerous ports and modified versions being created based on Woods's source code. It directly inspired the creation of numerous games, including Zork (1977), Adventureland (1978), Mystery House (1980), Rogue (1980), and Adventure (1980), which went on to be the foundations of the interactive fiction, adventure, roguelike, and action-adventure genres. It also influenced the creation of the MUD and computer role-playing game genres. It has been noted as one of the most influential video games, and in 2019 was inducted into the World Video Game Hall of Fame by The Strong and the International Center for the History of Electronic Games.
Gameplay
Colossal Cave Adventure is a text-based adventure game wherein the player explores a mysterious cave that is rumored to be filled with treasure and gold. The player must explore the cave system and solve puzzles by using items that they find to obtain the treasures and leave the cave. The player types in one- or two-word commands to move their character through the cave system, interact with objects in the cave, pick up items to put into their inventory, and perform other actions. The allowable commands are contextual to the location, or room, the player is in; for example, "get lamp" only has an effect if there is a lamp present. There are dozens of rooms, each of which has a name such as "Debris Room" and a description, and may contain objects or obstacles. The program acts as a narrator, describing |
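The one- and two-word command style, with its room-contextual nouns, can be sketched as a toy parser (illustrative Python; the room contents here are invented, not taken from the game):

```python
def parse(command, room_objects):
    """Split a one- or two-word command into (verb, noun).

    Commands are contextual: a noun is only accepted if the object is
    present in the current room, echoing how "get lamp" only has an
    effect if there is a lamp present.
    """
    words = command.lower().split()
    if len(words) == 1:
        return words[0], None          # e.g. a bare direction like "east"
    verb, noun = words[0], words[1]
    if noun not in room_objects:
        return None, noun              # "I see no <noun> here."
    return verb, noun

# A hypothetical room inventory for the "Debris Room".
debris_room = {"lamp", "rod"}
```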
https://en.wikipedia.org/wiki/Polymorphism%20%28computer%20science%29 | In programming language theory and type theory, polymorphism is the provision of a single interface to entities of different types or the use of a single symbol to represent multiple different types. The concept is borrowed from a principle in biology where an organism or species can have many different forms or stages.
The most commonly recognized major forms of polymorphism are:
Ad hoc polymorphism: defines a common interface for an arbitrary set of individually specified types.
Parametric polymorphism: defining code without specifying concrete types, instead using abstract symbols that can substitute for any type.
Subtyping (also called subtype polymorphism or inclusion polymorphism): when a name denotes instances of many different classes related by some common superclass.
History
Interest in polymorphic type systems developed significantly in the 1960s, with practical implementations beginning to appear by the end of the decade. Ad hoc polymorphism and parametric polymorphism were originally described in Christopher Strachey's Fundamental Concepts in Programming Languages, where they are listed as "the two main classes" of polymorphism. Ad hoc polymorphism was a feature of Algol 68, while parametric polymorphism was the core feature of ML's type system.
In a 1985 paper, Peter Wegner and Luca Cardelli introduced the term inclusion polymorphism to model subtypes and inheritance, citing Simula as the first programming language to implement it.
Forms
Ad hoc polymorphism
Christopher Strachey chose the term ad hoc polymorphism to refer to polymorphic functions that can be applied to arguments of different types, but that behave differently depending on the type of the argument to which they are applied (also known as function overloading or operator overloading). The term "ad hoc" in this context is not intended to be pejorative; it refers simply to the fact that this form of polymorphism is not a fundamental feature of the type system. In the Pascal / Delphi example below, the Add functions seem to work generically over two types (integer and string) when looking at the invocations, but are considered to be two entirely distinct functions by the compiler for all intents and purposes:
program Adhoc;
function Add(x, y : Integer) : Integer;
begin
Add := x + y
end;
function Add(s, t : String) : String;
begin
Add := Concat(s, t)
end;
begin
Writeln(Add(1, 2)); (* Prints "3" *)
Writeln(Add('Hello, ', 'Mammals!')); (* Prints "Hello, Mammals!" *)
end.
In dynamically typed languages the situation can be more complex as the correct function that needs to be invoked might only be determinable at run time.
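In Python, for instance, this run-time selection can be made explicit with functools.singledispatch (an illustrative sketch, not from the article, mirroring the Pascal Add example):

```python
from functools import singledispatch

@singledispatch
def add(x, y):
    raise TypeError(f"no overload for {type(x).__name__}")

@add.register
def _(x: int, y):
    return x + y      # integer addition

@add.register
def _(x: str, y):
    return x + y      # string concatenation, like Concat in the Pascal code
```

Here the implementation is chosen by the run-time type of the first argument, rather than resolved statically by the compiler as in the Pascal version.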
Implicit type conversion has also been defined as a form of polymorphism, referred to as "coercion polymorphism".
Parametric polymorphism
Parametric polymorphism allows a function or a data type to be written generically, so that it can handle values uniformly without depending on their type. Parametric pol |
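A function written in this generic style might look like the following (illustrative Python using typing generics; first is a hypothetical helper, not from the article):

```python
from typing import TypeVar

T = TypeVar("T")

def first(items: list[T]) -> T:
    """Return the first element, working uniformly for any element type."""
    return items[0]
```

The body never inspects the element type, so the same definition serves lists of integers, strings, or anything else, which is the essence of parametric polymorphism.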
https://en.wikipedia.org/wiki/GAP%20%28computer%20algebra%20system%29 | GAP (Groups, Algorithms and Programming) is a computer algebra system for computational discrete algebra with particular emphasis on computational group theory.
History
GAP was developed at Lehrstuhl D für Mathematik (LDFM), Rheinisch-Westfälische Technische Hochschule Aachen, Germany from 1986 to 1997. After the retirement of Joachim Neubüser from the chair of LDFM, the development and maintenance of GAP was coordinated by the School of Mathematical and Computational Sciences at the University of St Andrews, Scotland. In the summer of 2005 coordination was transferred to an equal partnership of four 'GAP Centres', located at the University of St Andrews, RWTH Aachen, Technische Universität Braunschweig, and Colorado State University at Fort Collins; in April 2020, a fifth GAP Centre located at the TU Kaiserslautern was added.
Distribution
GAP and its sources, including packages (sets of user contributed programs), data library (including a list of small groups) and the manual, are distributed freely, subject to "copyleft" conditions. GAP runs on any Unix system, under Windows, and on Macintosh systems. The standard distribution requires about 300 MB (about 400 MB if all the packages are loaded).
The user contributed packages are an important feature of the system, adding a great deal of functionality. GAP offers package authors the opportunity to submit these packages for a process of peer review, with the aim of improving the quality of the final packages and providing recognition akin to an academic publication for their authors. There are 151 packages distributed with GAP, of which approximately 71 have been through this process.
An interface is available for using the SINGULAR computer algebra system from within GAP. GAP is also included in the mathematical software system SageMath.
Sample session
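The original sample session is not reproduced here; a minimal illustrative session in GAP's read-eval-print loop (not the article's original example; output shown is typical of recent GAP versions) might look like:

```
gap> G := SymmetricGroup(4);    # the symmetric group on 4 points
Sym( [ 1 .. 4 ] )
gap> Size(G);
24
gap> IsAbelian(G);
false
gap> Center(G);                 # S4 has trivial center
Group(())
```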
See also
Comparison of computer algebra systems
References
External links
Computer algebra system software for Linux
Computer algebra system software for macOS
Computer algebra system software for Windows
Free computer algebra systems |
https://en.wikipedia.org/wiki/IEEE%20802.6 | IEEE 802.6 is a standard governed by ANSI for Metropolitan Area Networks (MAN). It is an improvement of an older standard (also created by ANSI) which used the Fiber Distributed Data Interface (FDDI) network structure. The FDDI-based standard failed due to its expensive implementation and lack of compatibility with current LAN standards. The IEEE 802.6 standard uses the Distributed Queue Dual Bus (DQDB) network form. This form supports 150 Mbit/s transfer rates. It consists of two unconnected unidirectional buses. DQDB is rated for a maximum of 160 km before significant signal degradation over fiber-optic cable with an optical wavelength of 1310 nm.
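To put the 160 km limit in perspective, the one-way propagation delay across the full bus can be estimated, assuming a typical speed of light in fiber of roughly 2×10^8 m/s:

```python
# Rough one-way propagation delay across the 160 km DQDB limit,
# assuming light travels at ~2e8 m/s in fiber (an approximation).
distance_m = 160_000
speed_m_per_s = 2e8
delay_s = distance_m / speed_m_per_s
print(f"{delay_s * 1e3:.2f} ms")  # 0.80 ms
```

So even at the maximum rated span, end-to-end propagation is under a millisecond, which matters for the distributed queueing protocol's fairness across stations.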
This standard has also failed, mostly for the same reasons that the FDDI standard failed. MANs are traditionally designed using Synchronous Optical Network (SONET), Synchronous Digital Hierarchy (SDH) or Asynchronous Transfer Mode (ATM). Recent designs use native Ethernet or MPLS.
References
IEEE 802.06
Networking standards
Metropolitan area networks |
https://en.wikipedia.org/wiki/Open%20Grid%20Services%20Architecture | Open Grid Services Architecture (OGSA) describes a service-oriented architecture for a grid computing environment for business and scientific use.
It was developed within the Open Grid Forum, which was called the Global Grid Forum (GGF) at the time, around 2002 to 2006.
Description
OGSA is a distributed interaction and computing architecture based around services, assuring interoperability on heterogeneous systems so that different types of resources can communicate and share information. OGSA is based on several other Web service technologies, such as the Web Services Description Language (WSDL) and the Simple Object Access Protocol (SOAP), but it aims to be largely independent of transport-level handling of data.
OGSA has been described as a refinement of a Web services architecture, specifically designed to support grid requirements.
The concept of OGSA is derived from work presented in the 2002 Globus Alliance paper "The Physiology of the Grid" by Ian Foster, Carl Kesselman, Jeffrey M. Nick, and Steven Tuecke.
It was developed by GGF working groups which resulted in a document, entitled The Open Grid Services Architecture, Version 1.5 in 2006.
The GGF published some use case scenarios.
According to the "Defining the Grid: A Roadmap for OGSA Standards v 1.0", OGSA is:
An architectural process in which the GGF's OGSA Working Group collects requirements and maintains a set of informational documents that describe the architecture;
A set of normative specifications and profiles that document the precise requirements for a conforming hardware or software component;
Software components that adhere to the OGSA specifications and profiles, enabling deployment of grid solutions that are interoperable even though they may be based on implementations from multiple sources.
The Open Grid Services Architecture, Version 1.5 described these capabilities:
Infrastructure services
Execution Management services
Data services
Resource Management services
Security services
Self-management services
Information services
In late 2006 an updated version of OGSA and several associated documents were published, including the first of several planned normative documents, "Open Grid Services Architecture Glossary of Terms, Version 1.5".
The Open Grid Services Infrastructure (OGSI) is related to OGSA, as it was originally intended to form the basic “plumbing” layer for OGSA. It was superseded by Web Services Resource Framework (WSRF) and WS-Management.
References
External links
OGSA WSRF Basic Profile, Version 1.0
Grid computing |