source | text |
|---|---|
https://en.wikipedia.org/wiki/IJN | IJN may refer to:
International Justice Network, a human rights organization
Imperial Japanese Navy, the navy of Japan from 1868 until it was dissolved in 1945
Institut Jean Nicod, a French interdisciplinary research center
Institut Jantung Negara, National Heart Institute of Malaysia
Intermountain Jewish News, a weekly newspaper based in Denver, Colorado |
https://en.wikipedia.org/wiki/Coba | Coba is an ancient Maya city on the Yucatán Peninsula, located in the Mexican state of Quintana Roo. The site is the nexus of the largest network of stone causeways of the ancient Maya world, and it contains many engraved and sculpted stelae that document ceremonial life and important events of the Late Classic Period (AD 600–900) of Mesoamerican civilization. The adjacent modern village bearing the same name reported a population of 1,278 inhabitants in the 2010 Mexican federal census.
The ruins of Coba lie 47 km (approx. 29 mi) northwest of Tulum, in the State of Quintana Roo, Mexico. The geographical coordinates of Coba Group (main entrance for tourist area of the archaeological site) are North 19° 29.6’ and West 87° 43.7’. The archaeological zone is reached by a two-kilometer branch from the asphalt road connecting Tulum with Nuevo Xcán (a community of Lázaro Cárdenas, another municipality of Quintana Roo) on the Valladolid to Cancún highway.
Coba is located around two lagoons, Lake Coba and Lake Macanxoc. A series of elevated stone and plaster roads radiate from the central site to various smaller sites near and far. These are known by the Maya term sacbe (plural sacbeob) or white road. Some of these causeways go east, and the longest runs over 100 kilometres (62 mi) westward to the site of Yaxuna. The site contains a group of large temple pyramids known as the Nohoch Mul, the tallest of which, Ixmoja, is some 42 metres (138 ft) in height. Ixmoja is among the tallest pyramids on the Yucatán peninsula, exceeded by Calakmul at 45 metres (148 ft).
Coba was estimated to have had some 50,000 inhabitants (and possibly significantly more) at its peak of civilization, and the built-up area extends over some 80 km2. The site was occupied by a sizable agricultural population by the first century. The bulk of Coba's major construction seems to have been made in the middle and late Classic period, about 500 to 900 AD, with most of the dated hieroglyphic inscriptions from the 7th century (see Mesoamerican Long Count calendar). However, Coba remained an important site in the Post-Classic era and new temples were built and old ones kept in repair until at least the 14th century, possibly as late as the arrival of the Spanish.
Cobá lies in the tropics, subject to alternating wet and dry seasons which, on average, differ somewhat from those in the rest of the northern peninsula, where the rainy season generally runs from June through October and the dry season from November through May. At Cobá, rain can occur at almost any time of the year, but there is a short dry period in February and March, and a concentration of rain from September through November.
Sakbe'ob
Sakbe'ob (Maya plural of sacbe), or sacbes, are very common at Coba. They are raised pathways lined with stones on each side, filled with smaller stones, and topped with sand, shell, and/or plaster. These paths connected most areas of Coba. Although the Maya used wheels in artifacts such as toys, anthropologists not |
https://en.wikipedia.org/wiki/Computer-assisted%20language%20learning | Computer-assisted language learning (CALL), British, or Computer-Aided Instruction (CAI)/Computer-Aided Language Instruction (CALI), American, is briefly defined in a seminal work by Levy (1997: p. 1) as "the search for and study of applications of the computer in language teaching and learning". CALL embraces a wide range of information and communications technology applications and approaches to teaching and learning foreign languages, from the "traditional" drill-and-practice programs that characterised CALL in the 1960s and 1970s to more recent manifestations of CALL, e.g. as used in a virtual learning environment and Web-based distance learning. It also extends to the use of corpora and concordancers, interactive whiteboards, computer-mediated communication (CMC), language learning in virtual worlds, and mobile-assisted language learning (MALL).
The term CALI (computer-assisted language instruction) was in use before CALL, reflecting its origins as a subset of the general term CAI (computer-assisted instruction). CALI fell out of favour among language teachers, however, as it appeared to imply a teacher-centred approach (instructional), whereas language teachers are more inclined to prefer a student-centred approach, focusing on learning rather than instruction. CALL began to replace CALI in the early 1980s (Davies & Higgins 1982: p. 3) and it is now incorporated into the names of the growing number of professional associations worldwide.
An alternative term, technology-enhanced language learning (TELL), also emerged around the early 1990s: e.g. the TELL Consortium project, University of Hull.
The current philosophy of CALL puts a strong emphasis on student-centred materials that allow learners to work on their own. Such materials may be structured or unstructured, but they normally embody two important features: interactive learning and individualised learning. CALL uses tools that help teachers facilitate the language learning process. They can be used to reinforce what has already been learned in the classroom or help learners who require additional support.
The design of CALL materials generally takes into consideration principles of language pedagogy and methodology, which may be derived from different learning theories (e.g., behaviourist, cognitive, constructivist) and second-language learning theories such as Stephen Krashen's monitor hypothesis.
A combination of face-to-face teaching and CALL is usually referred to as blended learning. Blended learning is designed to increase learning potential and is more commonly found than pure CALL (Pegrum 2009: p. 27).
See Davies et al. (2011: Section 1.1, What is CALL?). See also Levy & Hubbard (2005), who raise the question Why call CALL "CALL"?
History
CALL dates back to the 1960s, when it was first introduced on university mainframe computers. The PLATO project, initiated at the University of Illinois in 1960, is an important landmark in the early development of CALL (Marty 1981). T |
https://en.wikipedia.org/wiki/Serial%20communication | In telecommunication and data transmission, serial communication is the process of sending data one bit at a time, sequentially, over a communication channel or computer bus. This is in contrast to parallel communication, where several bits are sent as a whole, on a link with several parallel channels.
Serial communication is used for all long-haul communication and most computer networks, where the cost of cable and synchronization difficulties make parallel communication impractical. Serial computer buses have become more common even at shorter distances, as improved signal integrity and transmission speeds in newer serial technologies have begun to outweigh the parallel bus's advantage of simplicity (no need for serializer and deserializer, or SerDes) and to outstrip its disadvantages (clock skew, interconnect density). The migration from PCI to PCI Express is an example.
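As a toy illustration of the serializer and deserializer (SerDes) functions mentioned above, the following Python sketch converts a byte into a stream of single bits and back; the function names and least-significant-bit-first order are assumptions made for illustration, not any particular standard.

```python
def serialize(byte: int) -> list[int]:
    """Send one byte as a stream of single bits (one channel, eight ticks),
    least significant bit first."""
    return [(byte >> i) & 1 for i in range(8)]

def deserialize(bits: list[int]) -> int:
    """Reassemble the received bit stream into a byte."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value

# Round trip over a notional 1-bit serial link.
assert deserialize(serialize(0xA5)) == 0xA5
```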
Cables
Many serial communication systems were originally designed to transfer data over relatively large distances through some sort of data cable.
Practically all long-distance communication transmits data one bit at a time, rather than in parallel, because it reduces the cost of the cable. The cables that carry this data (other than "the" serial cable) and the computer ports they plug into are usually referred to with a more specific name, to reduce confusion.
Keyboard and mouse cables and ports are almost invariably serial—such as PS/2 port, Apple Desktop Bus and USB.
The cables that carry digital video are also mostly serial—such as coax cable plugged into an HD-SDI port, a webcam plugged into a USB port or FireWire port, Ethernet cable connecting an IP camera to a Power over Ethernet port, FPD-Link, digital telephone lines (e.g. ISDN), etc.
Other such cables and ports, transmitting data one bit at a time, include Serial ATA, Serial SCSI, Ethernet cable plugged into Ethernet ports, the Display Data Channel using previously reserved pins of the VGA connector or the DVI port or the HDMI port.
Serial buses
Many serial communication systems were originally designed to connect two integrated circuits on the same printed circuit board, using signal traces on that board (rather than external cables).
Integrated circuits are more expensive when they have more pins. To reduce the number of pins in a package, many ICs use a serial bus to transfer data when speed is not important. Some examples of such low-cost serial buses include SPI, I²C, UNI/O, and 1-Wire.
Serial versus parallel
The communication links, across which computers (or parts of computers) talk to one another, may be either serial or parallel. A parallel link transmits several streams of data simultaneously along multiple channels (e.g., wires, printed circuit tracks, or optical fibers), whereas a serial link transmits only a single stream of data.
Although a serial link may seem inferior to a parallel one, since it can transmit less data per clock cycle, it is often the case that serial li |
https://en.wikipedia.org/wiki/Interplanetary%20Transport%20Network | The Interplanetary Transport Network (ITN) is a collection of gravitationally determined pathways through the Solar System that require very little energy for an object to follow. The ITN makes particular use of Lagrange points as locations where trajectories through space can be redirected using little or no energy. These points have the peculiar property of allowing objects to orbit around them, despite lacking an object to orbit. While it would use little energy, transport along the network would take a long time.
History
Interplanetary transfer orbits are solutions to the gravitational three-body problem, which, for the general case, does not have analytical solutions, and is addressed by numerical analysis approximations. However, a small number of exact solutions exist, most notably the five orbits referred to as "Lagrange points", which are orbital solutions for circular orbits in the case when one body is significantly more massive.
The key to discovering the Interplanetary Transport Network was the investigation of the nature of the winding paths near the Earth-Sun and Earth-Moon Lagrange points. They were first investigated by Henri Poincaré in the 1890s. He noticed that the paths leading to and from any of those points would almost always settle, for a time, on an orbit about that point. There are in fact an infinite number of paths taking one to the point and away from it, all of which require nearly zero change in energy to reach. When plotted, they form a tube with the orbit about the Lagrange point at one end.
The derivation of these paths traces back to mathematicians Charles C. Conley and Richard P. McGehee in 1968. Hiten, Japan's first lunar probe, was moved into lunar orbit using similar insight into the nature of paths between the Earth and the Moon. Beginning in 1997, Martin Lo, Shane D. Ross, and others wrote a series of papers identifying the mathematical basis of the technique and applied it to the Genesis solar wind sample return, and to lunar and Jovian missions. They referred to it as an Interplanetary Superhighway (IPS).
Paths
As it turns out, it is very easy to transit from a path leading to the point to one leading back out. This makes sense, since the orbit is unstable, which implies one will eventually end up on one of the outbound paths after spending no energy at all. Edward Belbruno coined the term "weak stability boundary" or "fuzzy boundary" for this effect.
With careful calculation, one can pick which outbound path one wants. This turns out to be useful, as many of these paths lead to some interesting points in space, such as the Earth's Moon or between the Galilean moons of Jupiter, within a few months or years.
For trips from Earth to other planets, they are not useful for crewed or uncrewed probes, as the trip would take many generations. Nevertheless, they have already been used to transfer spacecraft to the Earth–Sun L1 point, a useful point for studying the Sun that was employed in a number of re |
https://en.wikipedia.org/wiki/LexisNexis%20Risk%20Solutions | LexisNexis Risk Solutions is a global data and analytics company that provides data and technology services, analytics, predictive insights and fraud prevention for a wide range of industries. It is headquartered in Alpharetta, Georgia (part of the Atlanta metropolitan area), and has offices throughout the U.S. and in:
Australia;
Brazil;
China;
France;
Hong Kong SAR;
India;
Ireland;
Israel;
the Philippines;
and the U.K.
The company's customers include businesses within the insurance, financial services, healthcare and corporate sectors, as well as local, state and federal government, law enforcement and public safety agencies.
LexisNexis Risk Solutions operates within the Risk & Business Analytics market segment of RELX, a multinational information and analytics company based in London.
Overview
Market segments
LexisNexis Risk Solutions operates in four market segments:
Insurance Services;
Business Services;
Health Care Services;
and Government Services.
Technology
LexisNexis Risk Solutions makes extensive use of HPCC Systems, also known as DAS (Data Analytics Supercomputer), a software architecture that runs on commodity computing clusters to provide high-performance, data-parallel processing for big data applications. The HPCC Systems platform includes a data refinery (Thor) and a rapid data delivery engine (ROXIE) that utilize the Enterprise Control Language (ECL). LexisNexis Risk Solutions open-sourced the HPCC Systems platform in 2011, and has seen some success with the adoption of this platform by diverse entities, an example being GuardHat, which makes smart hard hats with embedded HPCC Systems technology.
History
A subsidiary of RELX (formerly Reed Elsevier), LexisNexis Risk Solutions began as the Risk & Information Analytics Group (RIAG) within LexisNexis, a corporation offering legal database services. In 2000, Reed Elsevier acquired RiskWise and PeopleWise, which together became the basis of RIAG. The creation of RIAG expanded LexisNexis' offerings to include public records collections. In 2000, LexisNexis also launched HPCC Systems, its data-intensive computing platform.
LexisNexis Risk Solutions moved into Collections after Reed Elsevier acquired the public records businesses of Dolan Media Company in 2003. That same year, LexisNexis Special Services Inc. (LNSSI) was founded to provide government agencies with data fusion technology and analytics drawing on global sources. LNSSI also granted Reed Elsevier the ability to participate in classified U.S. government programs as a foreign-owned entity.
In 2004, Reed Elsevier purchased Seisint Inc., based in Boca Raton, Florida. Seisint housed and operated the Multistate Anti-Terrorism Information Exchange (MATRIX).
In September 2008, Reed Elsevier purchased data aggregator ChoicePoint. This acquisition included an insurance business and the C.L.U.E. database, an underwriting database for the U.S. auto insurance market. LexisNexis completed the migration of public rec |
https://en.wikipedia.org/wiki/Internet%20Key%20Exchange | In computing, Internet Key Exchange (IKE, versioned as IKEv1 and IKEv2) is the protocol used to set up a security association (SA) in the IPsec protocol suite. IKE builds upon the Oakley protocol and ISAKMP. IKE uses X.509 certificates for authentication ‒ either pre-shared or distributed using DNS (preferably with DNSSEC) ‒ and a Diffie–Hellman key exchange to set up a shared session secret from which cryptographic keys are derived. In addition, a security policy for every peer which will connect must be manually maintained.
History
The Internet Engineering Task Force (IETF) originally defined IKE in November 1998 in a series of publications (Request for Comments) known as RFC 2407, RFC 2408 and RFC 2409:
RFC 2407 defined the Internet IP Security Domain of Interpretation for ISAKMP.
RFC 2408 defined the Internet Security Association and Key Management Protocol (ISAKMP).
RFC 2409 defined the Internet Key Exchange (IKE).
RFC 4306 updated IKE to version two (IKEv2) in December 2005. RFC 4718 clarified some open details in October 2006. RFC 5996 combined these two documents plus additional clarifications into the updated IKEv2, published in September 2010. A later update upgraded the document from Proposed Standard to Internet Standard, published as RFC 7296 in October 2014.
The parent organization of the IETF, the Internet Society (ISOC), has maintained the copyrights of these standards as freely available to the Internet community.
Architecture
Most IPsec implementations consist of an IKE daemon that runs in user space and an IPsec stack in the kernel that processes the actual IP packets.
User-space daemons have easy access to mass storage containing configuration information, such as the IPsec endpoint addresses, keys and certificates, as required. Kernel modules, on the other hand, can process packets efficiently and with minimum overhead—which is important for performance reasons.
The IKE protocol uses UDP packets, usually on port 500, and generally requires 4–6 packets with 2–3 round trips to create an ISAKMP security association (SA) on both sides. The negotiated key material is then given to the IPsec stack. For instance, this could be an AES key, information identifying the IP endpoints and ports that are to be protected, as well as what type of IPsec tunnel has been created. The IPsec stack, in turn, intercepts the relevant IP packets if and where appropriate and performs encryption/decryption as required. Implementations vary on how the interception of the packets is done—for example, some use virtual devices, others take a slice out of the firewall, etc.
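As a concrete illustration of the packets involved, the following Python sketch parses the fixed 28-byte ISAKMP header that begins each such UDP datagram. The field layout follows RFC 2408, section 3.1; the helper name and returned keys are hypothetical, for illustration only.

```python
import struct

# Fixed ISAKMP header (RFC 2408, section 3.1), network byte order:
# initiator cookie (8), responder cookie (8), next payload (1),
# version (1), exchange type (1), flags (1), message ID (4), length (4).
ISAKMP_HEADER = struct.Struct("!8s8sBBBBII")  # 28 bytes in total

def parse_isakmp_header(datagram: bytes) -> dict:
    """Split the fixed header off a UDP payload received on port 500."""
    (i_cookie, r_cookie, next_payload, version,
     exchange_type, flags, message_id, length) = ISAKMP_HEADER.unpack_from(datagram)
    return {
        "initiator_cookie": i_cookie.hex(),
        "responder_cookie": r_cookie.hex(),
        "next_payload": next_payload,
        "major_version": version >> 4,    # high nibble
        "minor_version": version & 0x0F,  # low nibble
        "exchange_type": exchange_type,
        "flags": flags,
        "message_id": message_id,
        "total_length": length,           # header plus all payloads, in bytes
    }
```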
IKEv1 consists of two phases: phase 1 and phase 2.
IKEv1 phases
IKE phase one's purpose is to establish a secure authenticated communication channel by using the Diffie–Hellman key exchange algorithm to generate a shared secret key to encrypt further IKE communications. This negotiation results in one single bi-directional ISAKMP security association. The authentication can be performed using either pre-shared key (shared se |
https://en.wikipedia.org/wiki/Code%20page | In computing, a code page is a character encoding and as such it is a specific association of a set of printable characters and control characters with unique numbers. Typically each number represents the binary value in a single byte. (In some contexts these terms are used more precisely.)
The term "code page" originated from IBM's EBCDIC-based mainframe systems, but Microsoft, SAP, and Oracle Corporation are among the vendors that use this term. The majority of vendors identify their own character sets by a name. In the case when there is a plethora of character sets (like in IBM), identifying character sets through a number is a convenient way to distinguish them. Originally, the code page numbers referred to the page numbers in the IBM standard character set manual, a condition which has not held for a long time. Vendors that use a code page system allocate their own code page number to a character encoding, even if it is better known by another name; for example, UTF-8 has been assigned page numbers 1208 at IBM, 65001 at Microsoft, and 4110 at SAP.
Hewlett-Packard uses a similar concept in its HP-UX operating system and its Printer Command Language (PCL) protocol for printers (whether for HP printers or not). The terminology, however, is different: What others call a character set, HP calls a symbol set, and what IBM or Microsoft call a code page, HP calls a symbol set code. HP developed a series of symbol sets, each with an associated symbol set code, to encode both its own character sets and other vendors' character sets.
The multitude of character sets leads many vendors to recommend Unicode.
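The ambiguity that motivates this recommendation is easy to demonstrate; in the illustrative Python snippet below, one and the same byte decodes to different characters under two common code pages, whereas UTF-8 gives each character a single unambiguous byte sequence.

```python
# The byte 0x82 means different things under different code pages.
raw = bytes([0x82])

print(raw.decode("cp437"))   # 'é' under IBM's original PC code page 437
print(raw.decode("cp1252"))  # '‚' (low-9 quote) under Windows-1252

# Unicode avoids the ambiguity: one code point, one UTF-8 byte sequence.
print("é".encode("utf-8"))   # b'\xc3\xa9'
```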
The code page numbering system
IBM introduced the concept of systematically assigning a small, but globally unique, 16-bit number to each character encoding that a computer system or collection of computer systems might encounter. The IBM origin of the numbering scheme is reflected in the fact that the smallest (first) numbers are assigned to variations of IBM's EBCDIC encoding and slightly larger numbers refer to variations of IBM's extended ASCII encoding as used in its PC hardware.
With the release of PC DOS version 3.3 (and the near identical MS-DOS 3.3) IBM introduced the code page numbering system to regular PC users, as the code page numbers (and the phrase "code page") were used in new commands to allow the character encoding used by all parts of the OS to be set in a systematic way.
After IBM and Microsoft ceased to cooperate in the 1990s, the two companies have maintained the list of assigned code page numbers independently from each other, resulting in some conflicting assignments. At least one third-party vendor (Oracle) also has its own different list of numeric assignments. IBM's current assignments are listed in their CCSID repository, while Microsoft's assignments are documented within the MSDN. Additionally, a list of the names and approximate IANA (Internet Assigned Numbers Authority) abbreviations for the installed code |
https://en.wikipedia.org/wiki/Kansas%20City%20standard | The Kansas City standard (KCS), or Byte standard, is a data storage protocol for standard cassette tapes at 300 bits per second. It originated in a symposium sponsored by Byte magazine in November 1975 in Kansas City, Missouri, to develop a standard for the storage of digital microcomputer data on inexpensive consumer-quality cassettes. The first systems based on the standard appeared in 1976.
One variation on the basic standard is CUTS, which is identical at 300 bit/s, but with an optional 1200 bit/s mode. CUTS is the default encoding used by several later machine families, including those from Acorn and the MSX. MSX added a higher 2400 bit/s mode that is otherwise similar. The 1200 bit/s mode of CUTS was used as the standard for cross-platform BASICODE distribution.
KCS originated in the earliest days of the microcomputer revolution, alongside a proliferation of other protocols. Most home computers of the era have unique formats that are incompatible with anything else.
History
Early microcomputers generally use punched tape for program storage, an expensive option. Computer consultant Jerry Ogdin conceived the use of audio tones on a cassette to replace the paper tapes. He took the idea to Les Solomon, editor of Popular Electronics magazine, who was similarly frustrated by punched tapes. In September 1975, the two co-authored an article on the HITS (Hobbyists' Interchange Tape System), using two tones to represent 1s and 0s. Soon after, several manufacturers started using similar approaches, all incompatible.
Wayne Green, who had just started Byte magazine, wanted all the manufacturers to collaborate on a single cassette standard. He organized a two-day meeting on 7–8 November 1975 in Kansas City, Missouri. The participants settled on a system based on Don Lancaster's design. After the meeting, Lee Felsenstein (of Processor Technology) and Harold Mauch (of Percom) wrote the standard, which was published in Byte magazine's first issue.
A KCS cassette interface is similar to a modem connected to a serial port. The 1s and 0s from the serial port are converted to audio tones using audio frequency-shift keying (AFSK). A "0" bit is represented as four cycles of a 1200 Hz sine wave, and a "1" bit as eight cycles of 2400 Hz. This gives a data rate of 300 baud. Each frame starts with one "0" start bit, followed by eight data bits (least significant bit first) followed by two "1" stop bits, so each frame is 11 bits, for an effective data rate of about 27 bytes per second.
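The frame format just described is simple enough to reproduce in a short sketch. The Python program below generates a playable KCS waveform; the sample rate, output file name, and helper names are assumptions made for illustration, not part of the standard.

```python
import math
import struct
import wave

RATE = 48_000  # audio sample rate in Hz (an assumption for this sketch)

def tone(freq_hz: float, cycles: int) -> list[float]:
    """Generate whole sine-wave cycles at the given frequency."""
    n = round(RATE * cycles / freq_hz)
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

def kcs_byte(value: int) -> list[float]:
    """Encode one byte as a KCS frame: a '0' start bit, eight data bits
    (least significant first), then two '1' stop bits."""
    bits = [0] + [(value >> i) & 1 for i in range(8)] + [1, 1]
    samples = []
    for bit in bits:
        # '0' is four cycles of 1200 Hz, '1' is eight cycles of 2400 Hz;
        # either way each bit lasts 1/300 s, giving the 300 baud rate.
        samples += tone(2400, 8) if bit else tone(1200, 4)
    return samples

# Write "HI" as a mono 16-bit WAV file, ready to play into a recorder.
samples = [s for b in b"HI" for s in kcs_byte(b)]
with wave.open("kcs.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```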
The February 1976 issue of Byte has a report on the symposium, and the March issue features two hardware examples by Don Lancaster and Harold Mauch. The 300 baud rate is reliable, but slow; a typical 8-kilobyte BASIC program takes five minutes to load. Most audio cassette circuits support higher speeds.
According to Solomon, the efforts were unsuccessful: "Unfortunately, it didn't last long; before the month ended, everyone went back to his own tape standard and the recording confusion got worse."
The participants of the Kansas Ci |
https://en.wikipedia.org/wiki/CUTS | CUTS may refer to
Computer Users' Tape Standard, a standard for storage of digital microcomputer data on consumer quality cassettes
CUTS International (Consumer Unity & Trust Society), a non-profit organisation committed to fulfilling the developmental aspirations of the poor
Compact utility tractors, tractors designed primarily for landscaping and estate management tasks
Central University for Tibetan Studies, in Sarnath, Varanasi, Uttar Pradesh, India
Cuts, Oise, a commune in France
See also
Cut (disambiguation) |
https://en.wikipedia.org/wiki/Ferguson%20Big%20Board | The Big Board (1980) and Big Board II (1982) were Z80 based single-board computers designed by Jim Ferguson. They provided a complete CP/M compatible computer system on a single printed circuit board, including CPU, memory, disk drive interface, keyboard and video monitor interface. The printed circuit board was sized to match the Shugart 801 or 851 floppy drive. This allowed attachment of up to two 8-inch or 5¼-inch floppy disk drives. The Big Board II added a SASI interface for hard disk drives, enhancements to system speed (4 MHz vs. 2.5 MHz) and enhancements to the terminal interface.
One version of the Big Board was used in the Xerox 820.
Hardware
The Big Board was sold as an unpopulated printed circuit board with sockets for integrated circuits, with documentation and options to purchase additional components.
The Big Board design was simple enough to build a system around that many people with no prior electronics experience were able to build and bring up a capable computer system of their own at a cost far less than that of a fully assembled system of the time. In this way, the Big Boards anticipated the DIY PC clones that became popular later. In its most popular form, the fully assembled and tested Big Board need only be connected to a power supply, one or two eight inch floppy disk drives, a composite monitor, and an ASCII encoded keyboard in order to provide a fully functioning system. A serial terminal could be used in place of the monitor and keyboard, further simplifying assembly. The only tool required for basic assembly was a screwdriver for the terminal block power connections.
The design was also simple to modify for the sake of system expansion and enhancement. Many different modifications to increase the system clock speed were possible, including some that required nothing more than jumpers (e.g. the 3.5 MHz speed upgrade obtained by jumpering the clock divider, with no software modifications or changes to the ICs on the board.) There was also a minor industry in user-installable system upgrades such as real time clocks, 4 MHz upgrades, double density floppy upgrades, character enhancements for the display (reverse video, blinking, etc.), and the addition of hard disk interfaces such as SASI and SCSI. Most of these upgrades were accomplished through the use of daughter boards that plugged into existing IC sockets on the board, with the original IC either replaced by a more capable IC or placed into a socket on the daughter board.
It was possible to upgrade the memory to 256 KB, which was extremely large for the time. While not directly supported by CP/M, the extra memory could be used to implement a RAM disk, caching of the operating system image (to greatly improve warm boot time), or a print spooler.
The Big Board II (1982) incorporated many of the most popular upgrades for the original Big Board into its design. It also featured a small breadboard area that allowed for many simple upgrades to be performed with |
https://en.wikipedia.org/wiki/Xerox%20820 | The Xerox 820 Information Processor is an 8-bit desktop computer sold by Xerox in the early 1980s. The computer runs under the CP/M operating system and uses floppy disk drives for mass storage. The microprocessor board is a licensed variant of the Big Board computer.
820
Xerox introduced the 820 in June 1981 for $2,995 with two 5¼-inch single-density disk drives with 81K of capacity per diskette, or $3,795 with two 8-inch drives with 241K capacity. To beat the IBM PC to market, Xerox created little of the computer's design; it is based on the Ferguson Big Board computer kit and other off-the-shelf components, including a Zilog Z80 processor clocked at 2.5 MHz, and 64 KB of RAM.
Xerox chose CP/M as its operating system because of the large software library—the 820 is compatible with all Big Board software—and sold a customized version of WordStar for $495, although by 1982 the company offered the standard version for the same price.
By 1984, surplus 820 mainboards were available from Xerox for about $50 each, and one of these could be combined with other surplus components to build a working system for a few hundred dollars.
820-II
Overview
The Xerox 820-II followed in 1982, featuring a Z80A processor clocked at 4.0 MHz. Pricing started at .
Hardware: The processor board is located inside the CRT unit, and includes the Z80A, 64 KB of RAM and a boot ROM which enables booting from any of the supported external drives in 8-bit mode.
Screen: The display is a 24-line, 80-character (7×10 dot matrix) white-on-black monochrome CRT, with software-selectable variations such as reverse video, blinking, low-intensity (equivalent to grey text), and 4×4-resolution graphics.
Communication ports: These include two 25-pin RS-232 serial ports (including one intended for a Xerox 620 or 630 printer or compatible, and one intended for a modem), and two optional parallel ports which can be added via an internal pin header, usable with a Xerox or other cable.
Keyboard: A bulky 96-character ASCII keyboard with a 10-key numeric keypad and a cursor diamond which otherwise defaults to Ctrl-A to Ctrl-D. It also includes and keys, and is attached to the back of the CRT unit by a thick cable.
Software: A typical 820-II comes with CP/M 2.2, diagnostic software, WordStar, and Microsoft's BASIC-80 programming language.
Expansion
The Xerox 820-II is different from the 820:
the 820 mainboard has a floppy disk controller (Western Digital FD1771) but no hard disk controller or any expansion bay capabilities, whereas
the 820-II mainboard has no built-in disk controller nor a built-in processor expansion capability (these are required to be on expansion bay cards; there are two different expansion bay connectors, one which accommodates one of several disk I/O boards, and one which accommodates a processor board—the processor board was the taller of the two).
The Xerox 820-II's disk I/O capability is on one of two different cards:
a floppy disk I/O card, which can con |
https://en.wikipedia.org/wiki/Single-board%20computer | A single-board computer (SBC) is a complete computer built on a single circuit board, with microprocessor(s), memory, input/output (I/O) and other features required of a functional computer. Single-board computers are commonly made as demonstration or development systems, for educational systems, or for use as embedded computer controllers. Many types of home computers or portable computers integrate all their functions onto a single printed circuit board.
Unlike a desktop personal computer, single board computers often do not rely on expansion slots for peripheral functions or expansion. Single board computers have been built using a wide range of microprocessors. Simple designs, such as those built by computer hobbyists, often use static RAM and low-cost 32- or 64-bit processors like ARM. Other types, such as blade servers, perform similarly to a server computer, only in a more compact format.
A computer-on-module is a type of single-board computer made to plug into a carrier board, baseboard, or backplane for system expansion.
History
The first true single-board computer was based on the Intel C8080A, also using Intel's first EPROM, the C1702A. Schematics for the machine, called the "dyna-micro", were published in Radio-Electronics magazine in May 1976. Later that year, E&L Instruments, a Derby, Connecticut-based computer manufacturer, began production of the system, branding it the "Mini Micro Designer 1" and intending it for use as a programmable microcontroller for prototyping electronic products. The MMD-1 was made famous as an example microcomputer in popular 8080 instructional series of the time.
Early SBCs figured heavily in the early history of home computers, such as the Acorn Electron and the BBC Micro, also developed by Acorn. Other typical early single board computers like the KIM-1 were often shipped without enclosure, which had to be added by the owner. Other early examples are the Ferguson Big Board, the Ampro Little Board, and the Nascom. Many home computers in the 1980s were single-board computers, with some even encouraging owners to solder upgraded components directly to pre-marked points on the board.
As the PC became more prevalent, SBCs decreased in market share due to their low extensibility. The rapid adoption of IBM's standards for peripherals and the standardization of the PCI bus in the 1990s made motherboards and compatible components and peripherals cheap and ubiquitous, while the development of multimedia platforms such as the CD-ROM and Sound Blaster cards had begun to rapidly outpace the rate at which users needed to replace their personal computers. These two trends disincentivized single-board computers, and instead encouraged the proliferation of motherboards, which typically housed the CPU and other core components, with peripheral components such as hard disk drive controllers and graphics processors, and even some core components such as RAM modules, located on daughterboards.
Com |
https://en.wikipedia.org/wiki/MYCRO-1 | The MYCRO-1 was a microcomputer manufactured and sold by Mycron of Oslo, Norway. Built around the Intel 8080 CPU, it was one of the first commercial single-board computers after the Intel SDK-80. One is currently displayed at the Norwegian Museum of Science and Technology.
When introduced, it sold for approximately $6,000.
MYCRO-1 is a microcomputer system based on the Intel 8080 microprocessor. Some models have a Z80 CPU; since the Z80 is backward compatible with the 8080, this was probably a cost-reduction measure. The MYCRO-1 system was designed by MYCRON Data Industri as an entry in the marketplace for higher-powered microcomputer systems.
A typical basic configuration of the system:
DIM-1001 CPU
DIM-1013 16K Byte Dynamic RAM
DIM-1090 Chassis with Motherboard
DIM-1091 Power Supply with Switch Panel
The modules available for the MYCRO-1 system:
Computer modules
DIM-1001 CPU (8080)
DIM-1003 CPU (Z80)
DIM-1027 High Speed Slave CPU
Memory modules
DIM-1010 4K Byte Static RAM
DIM-1012 4K Byte PROM
DIM-1013 16K Byte Dynamic RAM
DIM-1014 6K Byte EPROM
DIM-1015 16K Byte Static RAM
DIM-1016 64K Byte Dynamic RAM
Input/output modules
DIM-1019 Dual Serial I/O, Synchronous/Asynchronous
DIM-1020 4 Channel Serial I/O
DIM-1021 Two Input and Two Output Parallel I/O
DIM-1022 Triple Serial I/O Module
DIM-1030 Floppy Disc Controller
DIM-1031 Floppy Disc Controller
DIM-1035 12M Byte Disk Drive
Process control modules
DIM-1023 32 Channel Digital Input Module
DIM-1025 8 Channel Pulse Counter Module
DIM-1026 16 Channel Digital Output Module
DIM-1029 16 Channel Level Detector Input Module
DIM-1042 16 Channel Analog Input Modules
DIM-1043 8 Channel Analog Output Module
DIM-1044 16 Channel Analog Diff. Amp. Input
Data storage
DIP 11133 Single Floppy Disk Drive Unit
DIP 11134 Double Floppy Disk Drive Unit
Mycro-1 modules
DIM-1090 Chassis with Motherboard
DIM-1091 Power Supply with Switch Panel
Peripherals
DIP-1022 Display
DIP-1023 Teletype
Software
DIS 1001 PROCOM
DIS 1002/1022 MYCRA One-Pass Assembler
DIS 1003 Editor Text Page Editor
DIS 1007 Diskette Service Package
DIS 1010/1018 Floating Point Software Packages
DIS 1011 Basic Interpreter
DIS 1012 DDS Diskette Dataset Support
DIS 1014 Mycrop Diskette Based Monitor
DIS 1015 EPROM Programmer Routines
DIS 1018 Floating Point Software Package
DIS 1019 QEdit Text Editor
DIS 1020 Multi-Access Disk Driver
DIS 1021 Mycrop with Remote Disk Access
DIS 1024 Fast Floating Point Micro Program for DIM 1027
DIS 1024E Communication Package for Nord-10
DIS 1027 MyMon. A real-Time and Time-Sharing Monitor for Mycro-1
DIS 1029 PL/Mycro Resident One-Pass Compiler
DIS 1030 DINIT Diskette Initialization Program
DIS 1031 GRTS 115/Mycro-1 Emulator
DIS 1032 Trace - A Software Debugging Tool
DIS 1033 MyReg System for Data Registration on Diskette
DIS 1034 Utility Routines on the Mycron System Diskette
DIS 1035 Pas 80 Sequential Pascal Compiler
DIS 1036 DLSM Dataset Label Maintenance |
https://en.wikipedia.org/wiki/Mycron | Mycron was a pioneer manufacturer of microcomputers, located in Oslo, Norway.
Originally named Norsk Data Industri, the company was founded in 1975 by Lars Monrad Krohn, who was also one of the founding fathers of Norsk Data. Among the employees were Arne Maus (1986–89) and Gisle Hannemyr.
The company was renamed MySoft in 1999.
Computers manufactured by Mycron
MYCRO-1 was an Intel 8080 machine, running the MYCROP operating system. Afterwards the Mycron 3 was developed, running CP/M. The Mycron 1000 featured a Zilog Z80 processor and ran MP/M. Finally the Mycron 2000 was released, based on an Intel 8086 CPU, running the CP/M-86 and MP/M-86 operating systems.
|
https://en.wikipedia.org/wiki/Tiki%20100 | Tiki-100 was a desktop home/personal computer manufactured by Tiki Data of Oslo, Norway. The computer was launched in the spring of 1984 under the original name Kontiki-100, and was first and foremost intended for the emerging educational sector, especially for primary schools. Early prototypes had 4 KB ROM, and the '100' in the machine's name referred to its total memory in KB.
Development
It was decided by the Norwegian government that Norwegian schools should all use the same standardized computer in education. The Tiki-100 was developed as a direct response to this decision, and was as such greatly influenced by the specifications laid out by the government. One of the most influential of these specifications was compatibility with CP/M and the Z80 CPU.
Being designed as a computer intended for education, interactivity was prioritized. The machine was given good audiovisual capabilities for its time. While other educational computers at the time had a main focus on BASIC and simple computer science, the Tiki-100 had more focus on being a tool to aid in education and everyday-life situations. This drove the need, and the memory requirements, to run more complex applications.
The first prototype was built using wire-wrap and a bigger prototype case. Soon followed a prototype made on PCB, and there were very few changes from this prototype to the final product. The most significant changes were the change from Siemens keyboard switches to cheaper Sasse switches, along with the re-arranging of the analog video output connection. Very few, if any, revision A or B Tiki-100 computers ever hit the store shelves.
Tiki-100 was released under the original name 'Kontiki-100' in the spring of 1984. Thor Heyerdahl threatened to open a legal case on the use of the Kontiki name, with reference to the name of his famous raft. The name was changed to "Tiki-100" as a result. Around the same time, Computerworld magazine claimed the operating system "KP/M" was a direct copy of CP/M, due to KP/M being able to run CP/M software. As a response to these claims, KP/M was renamed "Tiko" to avoid direct association to CP/M and Digital Research.
Specifications
Specifications for the basic Tiki-100 model:
CPU: Zilog Z80 running at 4 MHz.
Memory: 64 KB of RAM (main memory), 32 KB of graphical memory and 8 KB of ROM.
Keyboard: An n-key rollover mechanical keyboard integrated into the computer case
Graphics: PAL-compatible, based on discrete TTL components. Bitmap graphics with a 256-colour palette, supporting 3 different resolutions: 256x256 with 16 colours, 512x256 with 4 colours, or 1024x256 with 2 simultaneous colours. The machine has no text mode, as it uses bitmapped graphics only. However, terminal emulators provided options of 40, 80 or 160 by 25 characters, each option using one of the three modes. All of the graphics modes have hardware vertical scroll.
Audio: An AY-3-8912 polyphonic sound generator
Storage: One or two integrated 5¼ inch floppy disk drives
Interfaces: Two |
https://en.wikipedia.org/wiki/SWTPC | Southwest Technical Products Corporation, or SWTPC, was an American producer of electronic kits, and later complete computer systems. It was incorporated in 1967 in San Antonio, Texas, succeeding the Daniel E. Meyer Company. In 1990, SWTPC became Point Systems, before ceasing a few years later.
History
In the 1960s, many hobbyist electronics magazines such as Popular Electronics and Radio-Electronics published construction articles, for many of which the author would arrange for a company to provide a kit of parts to build the project. Daniel Meyer published several popular projects and successfully sold parts kits. He soon started selling kits for other authors such as Don Lancaster and Louis Garner. Between 1967 and 1971, SWTPC sold kits for over 50 Popular Electronics articles. Most of these kits were intended for audio use, such as hi-fi, utility amplifiers, and test equipment such as a function generator based on the Intersil ICL8038.
Many of these early kits used analog electronics technology, since digital technology was not yet affordable for most hobbyists. Some of the kits took advantage of new integrated circuits to allow low-cost construction of projects. For example, the new Signetics NE565 phase-locked loop chip was the core of a subsidiary communications authority (SCA) decoder board, which could be built and added to an FM radio to demodulate special programming (often, background music) not previously available to the general public. FCC regulations did not ban reception or decoding of radio transmissions, but SCA demodulation had previously required complex and expensive circuitry. Another popular new integrated circuit was the Signetics NE555, a versatile and low-cost timing oscillator chip, which was used in signal generators and simple timers. In 1972, SWTPC had a large enough collection of kits to justify printing a 32-page catalog.
In January 1975, SWTPC introduced a computer terminal kit, the "TV Typewriter", or CT-1024. By November 1975, they were delivering complete computer kits based on Motorola MPUs. They were very successful for the next 5 or so years and grew to over 100 employees.
As the new market evolved rapidly, most of the companies that were selling a computer kit in 1975 were out of business by 1978. Around 1987, SWTPC moved to selling point of sale computer systems, eventually changing its name to Point Systems. This new company lasted only a few years.
Microcomputer pioneers
When microprocessors (CPU chips) became available, SWTPC became one of the first suppliers of microcomputers to the general public, focusing on designs using the Motorola 6800 and, later, the 6809 CPUs. The first such microcomputer introduced by the company, in November 1975, was the SWTPC 6800, which is also the progenitor of the widely used SS-50 bus.
Many of SWTPC's products, including the 6800 microcomputer, were available in kit form. SWTPC also designed and supplied computer terminals, chassis, processor cards, memory ca |
https://en.wikipedia.org/wiki/S100 | S100 or S-100 may refer to:
S-100 bus, an early computer bus
S-100, an International Hydrographic Organization standard
S100 protein, low-molecular-weight proteins in vertebrates
The road number used in the Netherlands for inner-city ring roads
AVE Class 100, or S100, a high speed train
Canon PowerShot S100, a camera
Colyaer Freedom S100, a Spanish amphibious ultralight aircraft
Colyaer Gannet S100, a Spanish ultralight flying boat
Colyaer Martin3 S100, a Spanish ultralight aircraft
S100 class, a class of World War II German E-boats
Guild S-100, an electric guitar
Lenovo IdeaPad S100, a netbook computer
Qtek S100, a mobile phone
Schiebel Camcopter S-100, an unmanned aerial vehicle
Škoda 100, a 1970s car
USATC S100 Class, a 1942 steam locomotive class
See also |
https://en.wikipedia.org/wiki/Centronics | Centronics Data Computer Corporation was an American manufacturer of computer printers, now remembered primarily for the parallel interface that bears its name, the Centronics connector.
History
Foundations
Centronics began as a division of Wang Laboratories. Founded and initially operated by Robert Howard (president) and Samuel Lang (vice president and owner of the well-known K & L Color Photo Service Lab in New York City), the group produced remote terminals and systems for the casino industry. Printers were developed to print receipts and transaction reports. Wang spun off the business in 1971 and Centronics was formed as a corporation in Hudson, New Hampshire with Howard as president and chairman.
The Centronics Model 101 was introduced at the 1970 National Computer Conference in May. The print head used an innovative seven-wire solenoid impact system. Based on this design, Centronics later developed what is often credited as the first dot matrix impact printer, although the OKI Wiredot had preceded it in 1968.
Howard developed a personal relationship with his neighbor, Max Hugel, the founder and president of Brother International, the United States arm of Brother Industries, Ltd., a manufacturer of sewing machines and typewriters. A business relationship developed when Centronics needed reliable manufacturing of the printer mechanisms—a relationship that would help propel Brother into the printer industry. Hugel would later become executive vice president of Centronics. Print heads and electronics were built in Centronics plants in New Hampshire and Ireland, mechanisms were built in Japan by Brother and the printers were assembled in New Hampshire.
In the 1970s, Centronics formed a relationship with Canon to develop non-impact printers. No products were ever produced, but Canon continued to work on laser printers, eventually developing a highly successful series of engines.
In 1977, Centronics sued competitor Mannesmann AG in a patent dispute regarding the return spring used in the print actuator.
In 1975, Centronics formed an OEM agreement with Tandy and produced DMP and LP series printers for several years. The 6000 series band printers were introduced in 1978. By 1979 company revenues were over $100 million.
In 1980, the Mini-Printer Model 770 was introduced—a small, low-cost desktop serial matrix printer. This was the first printer built completely in-house, and there were problems. Flaws in the microprocessor led to a recall and a stoppage of manufacturing for a year. During this period, Epson, Brother and others began to gain market share and Centronics never recovered. 1980 also saw the introduction of the E Series 900 and 1200 LPM band printers.
Change of ownership
In 1982, Control Data Corporation merged their current printer business unit, CPI, into Centronics and at the same time invested $25 million in the company, effectively taking control from Howard. During 1980-1985 the company lost $80 million.
Control Data contro |
https://en.wikipedia.org/wiki/IAS%20machine | The IAS machine was the first electronic computer built at the Institute for Advanced Study (IAS) in Princeton, New Jersey. It is sometimes called the von Neumann machine, since the paper describing its design was edited by John von Neumann, a mathematics professor at both Princeton University and IAS. The computer was built under his direction, starting in 1946 and finished in 1951.
The general organization is called von Neumann architecture, even though it was both conceived and implemented by others. The computer is in the collection of the Smithsonian National Museum of American History but is not currently on display.
History
Julian Bigelow was hired as chief engineer in May 1946.
Hewitt Crane, Herman Goldstine, Gerald Estrin, Arthur Burks, George W. Brown and Willis Ware also worked on the project.
The machine was in limited operation in the summer of 1951 and fully operational on June 10, 1952. It was in operation until July 15, 1958.
Description
The IAS machine was a binary computer with a 40-bit word, storing two 20-bit instructions in each word. The memory was 1,024 words (5 kilobytes in modern terminology). Negative numbers were represented in two's complement format. It had two general-purpose registers available: the Accumulator (AC) and Multiplier/Quotient (MQ). It used 1,700 vacuum tubes (triode types 6J6, 5670, and 5687; a few type 6AL5 diodes; 150 pentodes to drive the memory CRTs; and 41 CRTs of type 5CP1A: 40 used as Williams tubes for memory, plus one more to monitor the state of a memory tube). The memory was originally designed for about 2,300 RCA Selectron vacuum tubes. Problems with the development of these complex tubes forced the switch to Williams tubes.
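A quick Python sketch (illustrative only) confirms the memory arithmetic and shows how two 20-bit instructions can share one 40-bit word; which half holds which instruction is assumed here purely for demonstration.

```python
WORD_BITS = 40
MEMORY_WORDS = 1024

# 1,024 words of 40 bits each is 5,120 bytes: "5 kilobytes" as stated above.
print(MEMORY_WORDS * WORD_BITS // 8)  # -> 5120

def pack_pair(left: int, right: int) -> int:
    """Store two 20-bit instructions in a single 40-bit word."""
    assert 0 <= left < 2**20 and 0 <= right < 2**20
    return (left << 20) | right

def unpack_pair(word: int) -> tuple[int, int]:
    """Recover the two 20-bit halves of a 40-bit word."""
    return word >> 20, word & (2**20 - 1)

w = pack_pair(0x12345, 0xABCDE)
assert unpack_pair(w) == (0x12345, 0xABCDE)
```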
It weighed about 1,000 pounds (450 kg).
It was an asynchronous machine, meaning that there was no central clock regulating the timing of the instructions. One instruction started executing when the previous one finished. The addition time was 62 microseconds and the multiplication time was 713 microseconds.
Although some claim the IAS machine was the first design to mix programs and data in a single memory, that had been implemented four years earlier by the 1948 Manchester Baby. The Soviet MESM also became operational prior to the IAS machine.
Von Neumann showed how the combination of instructions and data in one memory could be used to implement loops, by modifying branch instructions when a loop was completed, for example. The requirement that instructions, data and input/output be accessed via the same bus later came to be known as the Von Neumann bottleneck.
IAS machine derivatives
Plans for the IAS machine were widely distributed to schools, businesses, and companies interested in computing machines, resulting in the construction of several derivative computers referred to as "IAS machines", although they were not software compatible.
Some of these "IAS machines" were:
AVIDAC (Argonne National Laboratory)
BESK (Stockholm)
BESM (Moscow)
Circle Computer (Hogan Laboratories, I |
https://en.wikipedia.org/wiki/Completeness%20%28statistics%29 | In statistics, completeness is a property of a statistic in relation to a parameterised model for a set of observed data.
A complete statistic T is one for which any proposed distribution on the domain of T is predicted by one or more prior distributions on the model parameter space. In other words, the model space is 'rich enough' that every possible distribution of T can be explained by some prior distribution on the model parameter space. In contrast, a sufficient statistic T is one for which any two distinct prior distributions will yield different distributions on T. (This last statement assumes that the model space is identifiable, i.e. that there are no 'duplicate' parameter values. This is a minor point.)
Put another way: assume that we have an identifiable model space parameterised by $\theta$, and a statistic $T$ (which is effectively just a function of one or more i.i.d. random variables drawn from the model). Then consider the map $\theta \mapsto P_\theta^{T}$ which takes each distribution $P_\theta$ on the model parameter space to its induced distribution on the statistic $T$. The statistic $T$ is said to be complete when this map is surjective, and sufficient when it is injective.
Definition
Consider a random variable X whose probability distribution belongs to a parametric model $P_\theta$ parametrized by $\theta$.
Say T is a statistic; that is, the composition of a measurable function with a random sample $X_1, \ldots, X_n$.
The statistic T is said to be complete for the distribution of X if, for every measurable function g,

$$\operatorname{E}_\theta[g(T)] = 0 \text{ for all } \theta \quad \Longrightarrow \quad P_\theta(g(T) = 0) = 1 \text{ for all } \theta.$$
The statistic T is said to be boundedly complete for the distribution of X if this implication holds for every measurable function g that is also bounded.
Example 1: Bernoulli model
The Bernoulli model admits a complete statistic. Let X be a random sample of size n such that each $X_i$ has the same Bernoulli distribution with parameter p. Let T be the number of 1s observed in the sample, i.e. $T = X_1 + \cdots + X_n$. T is a statistic of X which has a binomial distribution with parameters (n,p). If the parameter space for p is (0,1), then T is a complete statistic. To see this, note that

$$\operatorname{E}_p[g(T)] = \sum_{t=0}^{n} g(t) \binom{n}{t} p^t (1-p)^{n-t}.$$
Observe also that neither p nor 1 − p can be 0. Hence $\operatorname{E}_p[g(T)] = 0$ if and only if:

$$\sum_{t=0}^{n} g(t) \binom{n}{t} \left(\frac{p}{1-p}\right)^t = 0.$$
On denoting p/(1 − p) by r, one gets:

$$\sum_{t=0}^{n} g(t) \binom{n}{t} r^t = 0.$$
First, observe that the range of r is the positive reals. Also, E(g(T)) is a polynomial in r and, therefore, can only be identical to 0 if all coefficients are 0, that is, g(t) = 0 for all t.
It is important to notice that the result that all coefficients must be 0 was obtained because of the range of r. Had the parameter space been finite and with a number of elements less than or equal to n, it might be possible to solve the linear equations in g(t) obtained by substituting the values of r and get solutions different from 0. For example, if n = 1 and the parameter space is {0.5}, a single observation and a single parameter value, T is not complete. Observe that, with the definition:

$$g(t) = 2(t - 0.5) = 2t - 1,$$

then E(g(T)) = 0, although g(t) is not 0 for t = 0 nor for t = 1.
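A short Python check (illustrative only) verifies both claims by enumerating the binomial expectation directly.

```python
from fractions import Fraction
from math import comb

def expect_g(n, p, g):
    """E[g(T)] for T ~ Binomial(n, p), by direct enumeration."""
    return sum(g(t) * comb(n, t) * p**t * (1 - p)**(n - t) for t in range(n + 1))

# n = 1 with parameter space {0.5}: g(t) = 2t - 1 has expectation 0
# even though g is nonzero at t = 0 and t = 1, so T is not complete.
assert expect_g(1, Fraction(1, 2), lambda t: 2 * t - 1) == 0

# With p free to vary over (0, 1), E[g(T)] is a polynomial in r = p/(1-p)
# with coefficients g(t)*C(n,t); it cannot vanish for every p unless g = 0.
for p in (Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)):
    print(expect_g(1, p, lambda t: 2 * t - 1))  # -1/2, 0, 1/2
```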
Relation to sufficient statistics
For some parametric families, a complete sufficient stat |
https://en.wikipedia.org/wiki/CBS%20Evening%20News | The CBS Evening News is the flagship evening television news program of CBS News, the news division of the CBS television network in the United States. The CBS Evening News is a daily evening broadcast featuring news reports, feature stories and interviews by CBS News correspondents and reporters covering events around the world. The program has been broadcast since July 1, 1941, under the original title CBS Television News, eventually adopting its current title in 1963.
Since July 15, 2019, the nightly broadcast has been anchored by Norah O'Donnell and has been titled CBS Evening News with Norah O’Donnell; since December 2, 2019, the newscast has originated from CBS News’ bureau in Washington, D.C. Previous weeknight anchors have included Douglas Edwards, Walter Cronkite, Dan Rather, Connie Chung, Bob Schieffer, Katie Couric, Scott Pelley, and Jeff Glor.
Saturday and Sunday broadcasts of the CBS Evening News began in February 1966. On May 2, 2016, CBS announced that the weekend edition would be rebranded, effective May 7, 2016, as the CBS Weekend News. Weekend newscasts originate from the CBS Broadcast Center in New York City and were anchored by Reena Ninan on Saturday and Elaine Quijano on Sunday. By the summer of 2020 Ninan and Quijano were replaced by Major Garrett and Jamie Yuccas. In December 2020, it was announced that Adriana Diaz and Jericka Duncan would be the new weekend anchors.
The weeknight edition of the CBS Evening News airs live at 6:30 p.m. in the Eastern and 5:30 p.m. in the Central Time Zones and is tape delayed for the Mountain Time Zone. A "Western Edition", with updated segments covering breaking news stories, airs pre-recorded at 6:30 p.m. in the Pacific Time Zone and 5:30 p.m. in the Alaska time zone and on tape delay in the Hawaii–Aleutian Time Zone.
As of March 4, 2019, the CBS Evening News remains in third place among the three major television network evening news programs, with 6,309,000 total viewers.
History
Early years (1941–1948)
Upon becoming commercial station WCBW (channel 2, now WCBS-TV) on July 1, 1941, the pioneer CBS television station in New York City broadcast two daily news programs, at 2:30 p.m. and 8:00 p.m. weekdays, anchored by Richard Hubbell. Most of the newscasts featured Hubbell reading a script with only occasional cutaways to a map or still photograph. When Pearl Harbor was bombed on December 7, 1941, WCBW (which was usually off the air on Sunday to give the engineers a day off) took to the air at 8:45 p.m. with an extensive special report. The national emergency broke down the unspoken wall between CBS radio and television. WCBW executives convinced radio announcers and experts such as George Fielding Eliot and Linton Wells to come to the CBS television studios at Grand Central Station from the radio network's base at 485 Madison Avenue, to give information and commentary on the attack. The WCBW special report that night lasted less than 90 minutes, but it pushed the limits of live television in 1941 and o |
https://en.wikipedia.org/wiki/Tiki%20Data | Tiki Data was a manufacturer of microcomputers, located in Oslo, Norway. The company was founded in 1983 by Lars Monrad Krohn and Gro Jørgensen, and was targeting the then emerging computer market in the educational sector. Following the launch of the Tiki 100 computer, which was designed by Tiki Data from the bottom up, the company started publishing software for the educational sector. Following the impact of the IBM PC, the company switched to selling rebranded PC-compatible computers.
Tiki Data was bought by Merkantildata in 1996, and ceased to exist from that point on.
References
Companies established in 1983
Defunct companies of Norway
Computer companies of Norway
Defunct computer hardware companies
Home computer hardware companies
Software companies of Norway
Companies disestablished in 1996
1983 establishments in Norway
1996 disestablishments in Norway |
https://en.wikipedia.org/wiki/Kaypro | Kaypro Corporation was an American home and personal computer manufacturer based in San Diego in the 1980s. The company was founded by Non-Linear Systems (NLS) to compete with the popular Osborne 1 portable microcomputer. Kaypro produced a line of rugged, "luggable" CP/M-based computers sold with an extensive software bundle which supplanted its competitors and quickly became one of the top-selling personal computer lines of the early 1980s.
Kaypro was exceptionally loyal to its original customer base but slow to adapt to the changing computer market and the advent of IBM PC compatible technology. It faded from the mainstream before the end of the decade and was eventually forced into bankruptcy in 1992.
History
Kaypro began as Non-Linear Systems, a maker of electronic test equipment, founded in 1952 by Andrew Kay, the inventor of the digital voltmeter.
In the 1970s, NLS was an early adopter of microprocessor technology, which enhanced the flexibility of products such as production-line test sets. In 1981, Non-Linear Systems began designing a personal computer, called KayComp, that would compete with the popular Osborne 1 transportable microcomputer. In 1982, Non-Linear Systems organized a daughter company named the Kaypro Corporation.
Despite being the first model to be released commercially, the original system was branded as the Kaypro II (at a time when one of the most popular microcomputers was the Apple II). The Kaypro II was designed to be portable like the Osborne, contained in a single enclosure with a handle for carrying. Set in an aluminum case, with a keyboard that snapped onto the front, covering the 9" CRT display and drives, it weighed and was equipped with a Zilog Z80 microprocessor, 64 kilobytes of RAM, and two 5¼-inch double-density single-sided floppy disk drives. It ran Digital Research, Inc.'s CP/M operating system, the industry standard for 8-bit computers with 8080 or Z80 CPUs, and sold for about .
The company advertised the Kaypro II as "the computer that sells for ". Although some of the press mocked its design—one magazine described Kaypro as "producing computers packaged in tin cans"—others raved about its value, noting that the included software bundle had a retail value over by itself, and by mid-1983 the company was selling more than 10,000 units a month, briefly making it the fifth-largest computer maker in the world.
The Kaypro II was part of a new generation of consumer-friendly personal computers that were designed to appeal to novice users who wanted to perform basic productivity on a machine that was relatively easy to set up and use. It managed to correct most of the Osborne 1's deficiencies: the screen was larger and showed more characters at once, the floppy drives stored over twice as much data, and it was better-built and more reliable.
Computers such as the Kaypro II were widely referred to as "appliance" or "turnkey" machines; they offered little in the way of expandability or features that w |
https://en.wikipedia.org/wiki/OpenType | OpenType is a format for scalable computer fonts. Derived from TrueType, it retains TrueType's basic structure but adds many intricate data structures for describing typographic behavior. OpenType is a registered trademark of Microsoft Corporation.
The specification germinated at Microsoft, with Adobe Systems also contributing by the time of the public announcement in 1996.
Because of wide availability and typographic flexibility, including provisions for handling the diverse behaviors of all the world's writing systems, OpenType fonts are used commonly on major computer platforms.
History
OpenType's origins date to Microsoft's attempt to license Apple's advanced typography technology GX Typography in the early 1990s. Those negotiations failed, motivating Microsoft to forge ahead with its own technology, dubbed "TrueType Open" in 1994. Adobe joined Microsoft in those efforts in 1996, adding support for the glyph outline technology used in its Type 1 fonts.
The joint effort intended to supersede both Apple's TrueType and Adobe's PostScript Type 1 font format, and to create a more expressive system that handles fine typography and the complex behavior of many of the world's writing systems. The two companies combined the underlying technologies of both formats and added new extensions intended to address their limitations. The name OpenType was chosen for the joint technology, which they announced later that year.
Open Font Format
Adobe and Microsoft continued to develop and refine OpenType over the next decade. Then, in late 2005, OpenType began migrating to an open standard under the International Organization for Standardization (ISO) within the MPEG group, which had previously (in 2003) adopted OpenType 1.4 by reference for MPEG-4. The new standard reached formal approval in March 2007 as ISO/IEC 14496-22 (MPEG-4 Part 22), called Open Font Format (OFF, not to be confused with Web Open Font Format), sometimes referred to as the "Open Font Format Specification" (OFFS). The initial standard was technically equivalent to the OpenType 1.4 specification, with appropriate language changes for ISO. The second edition of OFF was published in 2009 (ISO/IEC 14496-22:2009) and was declared "technically equivalent" to the "OpenType font format specification". Since then, the OFF and OpenType specifications have been maintained in sync. OFF is a free, publicly available standard.
By 2001 hundreds of OpenType fonts were on the market. Adobe finished converting their entire font library to OpenType toward the end of 2002. Around 10,000 OpenType fonts had become available, with the Adobe library comprising about a third of the total. By 2006, every major font foundry and many minor ones were developing fonts in OpenType format.
Unicode Variation Sequences
Unicode version 3.2 (published in 2002) introduced variation selectors as an encoding mechanism to represent particular glyph forms for characters. Unicode did not, however, specify |
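As a rough illustration of the mechanism (an illustration of the Unicode encoding, not of any OpenType interface), a variation sequence is simply a base character followed by one of the variation selector code points U+FE00 through U+FE0F; a minimal Python sketch:

    # Build text-style and emoji-style variation sequences for U+2614
    # (UMBRELLA WITH RAIN DROPS). VS15 (U+FE0E) requests the text glyph,
    # VS16 (U+FE0F) the emoji glyph, in fonts that support the sequence.
    BASE = "\u2614"
    VS15 = "\ufe0e"
    VS16 = "\ufe0f"
    for label, seq in (("text", BASE + VS15), ("emoji", BASE + VS16)):
        print(label, seq, [hex(ord(ch)) for ch in seq])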
https://en.wikipedia.org/wiki/Norsk%20Data | Norsk Data was a minicomputer manufacturer located in Oslo, Norway. Existing from 1967 to 1998, it had its most active period from the early 1970s to the late 1980s. At the company's peak in 1987, it was the second largest company in Norway and employed over 4,500 people.
Throughout its history Norsk Data produced a long string of extremely innovative systems, with a disproportionately large number of world firsts. Some examples of this are the NORD-1, the first minicomputer to have memory paging as a standard option, and the first machine to have floating-point instructions standard, the NORD-5, the world's first 32-bit minicomputer (beating the VAX, often claimed the first, by 6 years).
Historical overview
The origins of Norsk Data go back to the development of digital computers at the Norwegian Defense Research Establishment at Kjeller, Norway, where several early computers had been designed, such as the SAM and the SAM 2, also known as the FLINK.
The success of this program resulted in the founding of A/S Nordata – Norsk Data Elektronikk on August 8, 1967, by Lars Monrad Krohn, Per Bjørge and Rolf Skår. The company became a significant supplier of minicomputers to many research projects, in particular to CERN in Geneva, Switzerland, where they were chosen to produce the computers for many projects, starting with the SPS Project, Norsk Data's international breakthrough contract. The other market segments Norsk Data succeeded in were process control, Norwegian municipal administration data centers, newspapers, as well as parts of the educational, health, and university sector.
For a period in 1987, Norsk Data was the second largest company by stock value in Norway, second only to Norsk Hydro, and employed over 4,500 people.
In March 1991, shortly after the January Events, Norsk Data donated the first computer to the Lithuanian Institute of Mathematics and Informatics. This donation started the development of LITNET, an academic and research network in Lithuania. Later that year, the network connection lines directly connecting Vilnius to Moscow were shut down. With the help of additional hardware donated by Norsk Data, Lithuania was able to use its first satellite-based Internet connection, which operated at 9.6 kbit/s. This was the first Lithuanian communications line that was totally independent of the former Soviet Union.
After a long period of exceptional success, the Norsk Data "empire" collapsed in the early 1990s, mostly due to not realizing the impact of the PC revolution (as well as the growing competition from Unix-based workstations). In 1989, alongside upgraded versions of the company's proprietary minicomputer range, notably the ND-5850, attempts were made to introduce Unix products such as the Uniline 33 range, based on Motorola system designs for the 68030 processor. Such conventional Unix systems were primarily aimed at international customers, whereas in Scandinavia the company reportedly sought to offer only its NDIX impl |
https://en.wikipedia.org/wiki/Dolphin%20Interconnect%20Solutions | Dolphin Interconnect Solutions is a privately held manufacturer of high-speed data communication systems headquartered in Oslo, Norway and Woodsville, New Hampshire, USA.
The technology of Dolphin was based on development work at Norsk Data during the late 1980s. Dolphin Interconnect Solutions was founded in 1992 as a spin-off from Dolphin Server Technology which was, in turn, a spin-off from Norsk Data in 1989. Dolphin Interconnect Solutions develops technology for low latency and high-speed communication between servers and/or embedded computer systems.
History
Dolphin Server Technology emerged from Norsk Data, "a formerly flourishing Norwegian minicomputer maker", with the aim of building a business developing systems based on the Motorola 88000 architecture; Norsk Data adopted these systems as the new company's initial customer, with the intention of gradually reducing Norsk Data's stake to less than 50 percent and thus increasing the new company's independence.
Following an initial product announcement in late 1989, by April 1990, Dolphin Server Technology had started shipping products in its Triton 88 series, based on the Motorola 88000 processor family, with these systems supporting up to four processors. Compliant with the 88open Consortium's standards, the Triton 88 series ran a Unix product developed by UniSoft, providing binary compatibility with contemporary 88000-based systems. Dolphin offered these products through value-added resellers in European, North American and South American markets, also cultivating business with original equipment manufacturers, resulting in the Triton 88 models appearing "under several different brand names" worldwide.
Having announced plans for an emitter-coupled logic (ECL) version of the Motorola 88000, projected to run at 125 MHz, executing up to eight instructions in parallel, and delivering a peak performance of 1000 MIPS, Dolphin Server Technology participated in the development and standardisation of Scalable Coherent Interface (SCI) technology, delivering the first prototype in 1992 for an implementation of the base SCI standard as a gate array fabricated by Vitesse Semiconductor. A CMOS implementation was demonstrated in 1994 in association with LSI Logic.
The company had announced its plans for the ECL variant of the 88000, named Orion and developed in conjunction with Motorola, in December 1989. This processor, employing a technique called "instruction folding" originating from research done within Norsk Data, involved "a mutual exchange of patented technology" between the companies. It was hoped that Orion would ship in the first half of 1992.
In 1993, Dolphin, described as a vendor of "RISC-based UNIX multiprocessor servers" specialising in solutions for the government and banking, announced a deal with NeXT to resell NeXT computer products and to license NeXT's software technology. Ultimately, Dolphin "abandoned the server market entirely".
Products
Dolphin started out c |
https://en.wikipedia.org/wiki/Ericsson%20Television | Ericsson Television, formerly Tandberg Television, is a company providing MPEG-4 AVC, MPEG-2 and HEVC encoding decoding and control solutions, plus stream processing, packaging, network adaption and related products, for Contribution & Distribution (C+D), IPTV, Cable, DTT, Satellite DTH and OTT.
The global headquarters are located in Southampton, England, with additional offices in Rennes, France.
The company was honored with its first Technology & Engineering Emmy Award in 2008 for the development of interactive Video-on-Demand infrastructure and signaling, leading to large scale VOD implementations.
It was also awarded another Emmy in 2009 for the pioneering development of MPEG-4 AVC systems for HDTV, and another in 2011 for the pioneering development and deployment of Active Format Description technology and systems. In 2013, the company received a further Emmy award for the pioneering development of video on demand (VOD) dynamic advertising insertion. In 2014, Ericsson received its fifth Emmy award, recognizing its work in developing pioneering JPEG 2000 interoperability technology.
History
Tandberg is a long-standing Norwegian company whose history goes back to the 1930s, when it supplied domestic radio equipment. It grew into other areas in the decades after World War II, becoming a well-respected audio equipment manufacturer whose reel-to-reel tape recorders were sought after by hi-fi enthusiasts.
The Kjelsas factory also started producing TV sets in 1960, and in 1966 a second TV plant was opened at Kjeller in Skedsmo. Color TVs were added to the lineup in 1969. In 1972, Tandberg purchased Radionette, another large Norwegian electronics firm that had just begun focusing on televisions. By 1976, TVs were Tandberg's major product and its factories employed 3,500 people. However, that same year a major economic downturn seriously disrupted the company, and by 1978 it was insolvent. A shareholder revolt removed Vebjørn Tandberg from control of the company, and he committed suicide in August. In December the company declared bankruptcy.
Tandberg Television, originally with headquarters in Lillestrom near Oslo, Norway, was formed in 1979 when the original Tandberg company split into Tandberg, Tandberg Data, and Tandberg Television.
After the breakup of 1979
In 1999, Tandberg Television entered into a £170 million agreement to acquire all the assets of NDS Group’s Digital-TV products business, the Digital Broadcasting Business (DBB), a subsidiary of The News Corporation group. After the acquisition, Tandberg Television could offer digital video compression encoders, multiplexers and modulation products for large satellite DTH systems, terrestrial networks and mobile news gathering solutions.
NDS Group itself was a merger in 1996 between News Corp's existing News Data Communications (NDC) based in Israel, a company that supplied smart cards to pay TV operators like Sky TV, and Digi-Media Vision (DMV) a video compression company that News had acquired |
https://en.wikipedia.org/wiki/Tandberg%20Data | Tandberg Data GmbH is a company focused on data storage products, especially streamers, headquartered in Dortmund, Germany. They are the only company still selling drives that use the QIC (also known as SLR) and VXA formats, but also produce LTO along with autoloaders, tape libraries, NAS devices, RDX Removable Disk Drives, Media and Virtual Tape Libraries.
Tandberg Data used to manufacture computer terminals (e.g. TDV 2200), keyboards, and other hardware.
They have offices in Dortmund, Germany; Tokyo, Japan; Singapore; Guangzhou, China and Westminster, Colorado, U.S.
History
Tandberg radio factory was founded in Oslo on January 25, 1933 by Vebjørn Tandberg.
In 1970, Tandberg produces its first data tape drives.
In December 1978, Tandbergs Radiofabrikk goes bankrupt.
In January 1979, Siemens and the state of Norway establish Tandberg Data, rescuing the data storage and computer terminal divisions from the ashes. Siemens holds 51% of the new company and controls it. The other divisions of Tandberg go to Norsk Data.
In 1981, Tandberg becomes a founding member of the QIC committee for standardising interfaces and recording formats, and produces its first streaming linear tape drive.
In 1984, Tandberg Data goes public.
In 1990, Siemens sells most of its shares when merging its computer business with Nixdorf.
In 1991, the terminal business is split off as Tandberg Data Display, which ends up in the Swedish company MultiQ.
In 2003, Tandberg Storage and its subsidiary O-Mass split and became separate companies, also listed on Oslo Stock Exchange. Tandberg Data is the largest owner of Tandberg Storage with a 33.48% stake.
On August 30, 2006, Tandberg Data purchased the assets of Exabyte. Combined revenue is expected to be USD 215 Million for 2006.
On May 15, 2007, Tandberg Data sold all remaining Tandberg Storage shares.
On January 9, 2008, Pat Clarke was promoted to CEO of Tandberg Data.
On September 12, 2008, Tandberg Data announced the reacquisition of Tandberg Storage.
On April 24, 2009, Tandberg Data ASA and Tandberg Storage ASA filed for bankruptcy.
On May 19, 2009, Tandberg Data announced that the new holding company, TAD Holding AS, was established, owning all global Tandberg Data subsidiaries, including Tandberg Storage ASA. Cyrus Capital is the majority shareholder and owner of the newly established company. Operations in Norway continue in the newly formed company Tandberg Data Norge AS.
On January 22, 2014, Tandberg Data was acquired by Overland Storage.
Tandberg Storage
Tandberg Storage ASA was a magnetic tape data storage company based in Lysaker, Norway. The company was a subsidiary of Tandberg Data. The company was spun off from Tandberg Data in 2003 to focus exclusively on tape drives. It was purchased by the same company in 2008. Tandberg Storage developed four drive series, all based on Linear Tape-Open (LTO) specifications. Manufacturing was outsourced to the Chinese-based Lafè Peripherals International. Tandberg Stora |
https://en.wikipedia.org/wiki/Superminicomputer | A superminicomputer, colloquially supermini, is a high-end minicomputer. The term is used to distinguish the emerging 32-bit-architecture midrange computers introduced in the mid-to-late 1970s from the classical 16-bit systems that preceded them. The development of these computers was driven by the need of applications to address larger memory. The term midicomputer had been used earlier to refer to these systems. Virtual memory was often an additional criterion considered for inclusion in this class of system. The computational speed of these machines was significantly greater than that of the 16-bit minicomputers and approached the performance of small mainframe computers. The name has at times been described as a "frivolous" term created by "marketeers" that lacks a specific definition. Describing a class of system has historically been seen as problematic: "In the computer kingdom, taxonomic classification of equipment is more of a black art than a science." There is some disagreement about which systems should be included in this class. The origin of the name is uncertain.
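The memory-addressing motivation is easy to quantify; a minimal sketch, assuming a byte-addressable flat address space (which not every machine of the era actually used):

    # Directly addressable locations for the two address widths at issue.
    for bits in (16, 32):
        print(f"{bits}-bit addresses: 2**{bits} = {2**bits:,} locations")
    # 16-bit: 65,536 (64 KiB byte-addressed); 32-bit: 4,294,967,296 (4 GiB).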
As technology improved rapidly the distinction between minicomputer and superminicomputer performance blurred. Companies that sold mainframe computers began to offer machines in the same price and performance range as superminicomputers. By the mid-1980s microprocessors with the hardware architecture of superminicomputers were used to produce scientific and engineering workstations. The minicomputer industry then declined through the early 1990s. The term is now considered obsolete but still remains of interest for students/researchers of computer history.
Notable companies
Notable manufacturers of superminicomputers in 1980 included: Digital Equipment Corporation, Perkin-Elmer, and Prime Computer. Other makers of systems included SEL/Gould and Data General. Four years later there were about a dozen companies producing a significant number of superminicomputers.
Perkin-Elmer spun off their Data Systems Group in 1985 to form Concurrent Computer Corporation which continued making these systems. Nixdorf Computer, Norsk Data, and Toshiba also produced systems.
Significant superminicomputers
Interdata 7/32, 1974
Digital Equipment Corporation VAX-11/780, 1978
Prime Computer 750, 1979
Data General Eclipse MV/8000, 1980
IBM 4361, 1983
IBM 9370, 1987
External links
References
Super
Classes of computers |
https://en.wikipedia.org/wiki/28-bit%20computing | The only significant 28-bit computer was the Norsk Data ND-505, which was essentially a 32-bit machine with four wires in its address bus removed. The reason for scaling down was to be able to sell it to Eastern Bloc countries, avoiding the then CoCom embargo on 32-bit machines.
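Removing four address lines cuts the addressable space by a factor of sixteen; a quick check of the arithmetic (flat address space assumed):

    # ND-505 address space versus the full 32-bit bus it was derived from.
    full, trimmed = 2**32, 2**28
    print(f"{trimmed:,} addresses, {full // trimmed}x fewer than 32-bit")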
Norway–Soviet Union relations
Norsk Data minicomputers
Foreign trade of the Soviet Union |
https://en.wikipedia.org/wiki/OPIC%20%28disambiguation%29 | OPIC may refer to:
Overseas Private Investment Corporation
Oral Proficiency Interview - computer (OPIc): a computerized test of English usage skills
On-line Page Importance Computation (Selection policy, fifth para) |
https://en.wikipedia.org/wiki/General%20Comprehensive%20Operating%20System | General Comprehensive Operating System (GCOS, ; originally GECOS, General Electric Comprehensive Operating Supervisor) is a family of operating systems oriented toward the 36-bit GE-600 series and Honeywell 6000 series mainframe computers.
The original version of GCOS was developed by General Electric beginning in 1962. The operating system is still used today in its most recent versions (GCOS 7 and GCOS 8) on servers and mainframes produced by Groupe Bull, primarily through emulation, to provide continuity with legacy mainframe environments. GCOS 7 and GCOS 8 are separate branches of the operating system and continue to be developed alongside each other.
History
GECOS
The GECOS operating system was developed by General Electric for the 36-bit GE-600 series in 1962–1964; GE released GECOS I (with a prototype 635) in April 1965, GECOS II in November 1965 and GECOS III (with time-sharing) in 1967. It bore a close resemblance architecturally to IBSYS on the IBM 7094 and less to DOS/360 on the IBM System/360. However, the GE 600 Series four processor architecture was very different from the System/360 and GECOS was more ambitious than DOS/360. GECOS-III supported both time-sharing (TSS) and batch processing, with dynamic allocation of memory (IBM had fixed partitions, at that time), making it a true second-generation operating system.
Honeywell GCOS 3
After Honeywell acquired GE's computer division, GECOS-III was renamed GCOS 3, and the hardware line was renamed to the Honeywell 6000 series, adding the EIS (enhanced instruction set, character oriented instead of word oriented).
GCOS 64
The name "GCOS" was extended to the operating systems for all Honeywell-marketed product lines. GCOS-64, a completely different 32-bit operating system for the Level 64 series, similar to a parallel development called Multics, was designed by Honeywell and Honeywell Bull developers in France and Boston.
GCOS 61/62
GCOS-62, the operating system for another 32-bit low-end line of machines, the Level 62 series, was designed in Italy. GCOS-61 was the operating system for a new version of a small system made in France (Model 58, later Level 61/58), and the operating system for a new 16-bit minicomputer line from Massachusetts (Billerica), the Level 6, got the name GCOS 6.
GCOS 7 and GCOS 8
Another renaming of the hardware product lines occurred in 1979, with the Level 6 becoming the DPS 6, the Level 62 becoming the DPS 4, the Level 64 becoming DPS 7, and Level 66 becoming DPS 8. Operating Systems retained the GCOS brand-name, with GCOS 6, GCOS 4, GCOS 7, and GCOS 8 being introduced. GCOS 8 was an extensive rewrite of GCOS 3, with changes made to support true virtual memory management and demand paging (these changes also required new hardware). GCOS 3 was supported in maintenance for several years after this announcement and renaming. Honeywell Bull published "Large Systems: GCOS 8 OS Time Sharing System User's Guide" in 1986.
Legacy
DPS 6 and DPS 4 (ex-Level 62) we |
https://en.wikipedia.org/wiki/Nightline | Nightline (or ABC News Nightline) is ABC News' late-night television news program broadcast on ABC in the United States with a franchised formula to other networks and stations elsewhere in the world. Created by Roone Arledge, the program featured Ted Koppel as its main anchor from March 1980 until his retirement in November 2005. Its ongoing rotating anchors are Byron Pitts and Juju Chang. Nightline airs weeknights from 12:37 to 1:07 a.m., Eastern Time, after Jimmy Kimmel Live!, which had served as the program's lead-out from 2003 to 2012.
In 2002, Nightline was ranked 23rd on TV Guide's 50 Greatest TV Shows of All Time. The program has won four Peabody Awards, one in 2001, two in 2002 for the reports "Heart of Darkness" and "The Survivors," and one in 2022 for "The Appointment".
Through a video-sharing agreement with the BBC, Nightline repackages some of the BBC's output for an American audience. Segments from Nightline are shown in a condensed form on ABC's overnight news program World News Now. There was also a version of Nightline for sister cable channel Fusion.
The Iran Crisis–America Held Hostage (1979)
The program began on November 8, 1979, four days after the start of the Iran hostage crisis. ABC News president Roone Arledge figured that the best way to compete against NBC's The Tonight Show Starring Johnny Carson was to update Americans on the latest news from Iran. At that time, the show was called The Iran Crisis–America Held Hostage: Day "xxx", where xxx represented the number of days that Iranians held the occupants of the U.S. Embassy in Tehran, Iran as hostages. At first, World News Tonight lead anchor Frank Reynolds hosted the 20-minute special reports.
Shortly after its creation, Reynolds stopped hosting the program. Ted Koppel, then ABC News's State Department Correspondent, took on the hosting duties. A few days later a producer had the idea of displaying the number of days on America Held Hostage (e.g., Day 15, Day 50, Day 150, etc.).
Ted Koppel's Nightline (1980–2005)
By the end of the hostage crisis in 1981 (after 444 days), the program – which had been retitled the previous year as Nightline – had entrenched itself on ABC's programming schedule, and made Koppel a national figure. ABC had previously used the title "Night Line" for a short-lived 1 a.m. talk show starring Les Crane that was broadcast over the network's New York City flagship station, WABC-TV, starting in 1963.
The program originally aired four nights a week (Monday through Thursday) until 1982, when the sketch comedy program Fridays was shifted to air after Nightline. By this time, the news program had expanded to 30 minutes. For much of its history, the program prided itself on providing a mix of investigative journalism and extended interviews (something that continues to be featured to this day, albeit to a reduced extent), which would look out of place on World News Tonight.
The format of the show featured an introduction by the host, then a t |
https://en.wikipedia.org/wiki/The%20Early%20Show | The Early Show was an American morning television show that aired on CBS from November 1, 1999 to January 7, 2012, and the ninth attempt at a morning news-talk program by the network since 1954. The program aired Monday through Friday from 7:00 to 9:00 a.m. (live in the Eastern Time Zone, and on tape delay in all other time zones), although a number of affiliates either pre-empted or tape-delayed the Saturday edition. The program originally broadcast from the General Motors Building in New York City.
The Early Show, like many of its predecessors, traditionally placed third in the ratings, behind NBC's Today and ABC's Good Morning America. Much like Today and its fellow NBC program The Tonight Show, the Early Show title was analogous to that of CBS's late-night talk show, The Late Show. Unlike CBS' other attempts at a morning news program (which emphasize hard news), The Early Show followed the format of its two other competitors, which have long used a lighter soft news, lifestyle and infotainment approach.
On November 15, 2011, CBS announced the cancellation of The Early Show, and replacement by a new morning program that CBS News chairman Jeff Fager and president David Rhodes stated would "redefine the morning television landscape." The Early Show ended its twelve-year run on January 7, 2012, replaced three days later on January 9 by the second version of CBS This Morning.
History of CBS’s morning news shows
The 1950s
CBS' first attempt at a morning program debuted on March 15, 1954, with The Morning Show, originally hosted by Walter Cronkite and very similar in format to Today (which also ran for two hours from 7:00 to 9:00 a.m. Eastern Time until it was reduced to one hour to accommodate the premiere of Captain Kangaroo in 1955). Additional hosts over the years included Jack Paar, John Henry Faulk and Dick Van Dyke. Paar, the most successful of them in drawing an audience, made significant changes in the tone of the program during his tenure as host, casting it into a talk program with some infotainment elements but featuring an emphasis on humor and conversation, reminiscent of the kind of morning radio show he had done prior to World War II. In 1956, Paar was moved from The Morning Show to his own late-morning talk program on the network, which aired after Captain Kangaroo. (Paar left CBS to take over NBC's The Tonight Show in 1957.)
Next came Good Morning! with Will Rogers Jr., which lasted for 14 months before being replaced in April 1957 by a different version of The Morning Show, a variety program hosted by country music singer Jimmy Dean, which ended that December after nine months. The 45-minute program aired at 7:00 a.m. Eastern Time; it was followed by a 15-minute news program, the CBS Morning News, anchored by Richard C. Hottelet, and later Stuart Novins, which led into Captain Kangaroo at 8:00 a.m.
The 1960s
The CBS Morning News (1963)
CBS did not make any serious attempt to program against Today for eight years. The CBS Mo |
https://en.wikipedia.org/wiki/CBS%20Morning%20News | The CBS Morning News is an American early-morning news broadcast presented weekdays on the CBS television network. The program features late-breaking news stories, national weather forecasts and sports highlights. Since 2013, it has been anchored by Anne-Marie Green, who concurrently anchored the CBS late-night news program Up to the Minute until its cancellation in September 2015.
The program is broadcast live at 4:00 a.m. in the Eastern Time Zone, preceding local news beginning at 4:30 a.m. on many CBS stations. It is transmitted in a continuous half-hour broadcast delay loop until 10:00 a.m. Eastern Time, when CBS Mornings begins in the Pacific Time Zone. In the few markets where the station does not produce a morning newscast, it may air in a two- to three-hour loop immediately before the start of CBS Mornings. The show is updated for any breaking news occurring before 7:00 a.m. Eastern Time, while stations throughout the network may join CBS Mornings in any time zone past that time, at their local discretion or on network orders for live coverage.
History
Background
The CBS Morning News title was originally used as the name of a conventional morning news program that served as a predecessor to the network's current CBS Mornings. For most of the 1960s and 1970s, the program aired as a 60-minute hard news broadcast at 7:00 a.m., preceding Captain Kangaroo and airing opposite the first hour of NBC's Today. Walter Cronkite and sportscaster Jim McKay both anchored the original CBS Morning News at one time. Joseph Benti became the anchor in 1969. Other anchors of the broadcast in this format included John Hart, Hughes Rudd, Sally Quinn, Richard Threlkeld, Lesley Stahl and Bruce Morton.
CBS Early Morning News/current Morning News format
The program first aired in its current format on October 4, 1982 as the CBS Early Morning News. It was a half-hour extension of the two-hour CBS Morning News which aired directly opposite Today. Bill Kurtis and Diane Sawyer originally anchored both the Early Morning News and the Morning News of that era. Sawyer departed both programs in mid-1984, to be named a correspondent for 60 Minutes later that year. In her absence, Kurtis was joined by a rotating series of co-hosts, principally Maria Shriver, Meredith Vieira and Jane Wallace.
Kurtis anchored the Early Morning News solo until March 1985, while co-anchoring the Morning News with Phyllis George until July of that year. Faith Daniels took over and would remain on the anchor desk, most of the time sharing the role with Forrest Sawyer (July to December 1985 and January to September 1987) and later Douglas Edwards and Charles Osgood, until Daniels left CBS to become anchor of competing early-morning newscast NBC News at Sunrise in 1990. Osgood would remain anchor of the CBS Morning News until June 1992, paired with Victoria Corderi from 1990 to 1991, Giselle Fernández through February 1992, and then with Meredith Vieira for the remainder of Osgood's run as co-anchor. Afte |
https://en.wikipedia.org/wiki/Zigbee | Zigbee is an IEEE 802.15.4-based specification for a suite of high-level communication protocols used to create personal area networks with small, low-power digital radios, such as for home automation, medical device data collection, and other low-power, low-bandwidth needs; it is designed for small-scale projects that need a wireless connection. Hence, Zigbee is a low-power, low-data-rate, close-proximity (i.e., personal area) wireless ad hoc network.
The technology defined by the Zigbee specification is intended to be simpler and less expensive than other wireless personal area networks (WPANs), such as Bluetooth or more general wireless networking such as Wi-Fi. Applications include wireless light switches, home energy monitors, traffic management systems, and other consumer and industrial equipment that requires short-range low-rate wireless data transfer.
Its low power consumption limits transmission distances to 10–100 meters (30' to 300') line-of-sight, depending on power output and environmental characteristics. Zigbee devices can transmit data over long distances by passing data through a mesh network of intermediate devices to reach more distant ones. Zigbee is typically used in low data rate applications that require long battery life and secure networking. (Zigbee networks are secured by 128 bit symmetric encryption keys.) Zigbee has a defined rate of up to 250 kbit/s, best suited for intermittent data transmissions from a sensor or input device.
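At that rate, even a maximum-size frame spends very little time on the air; a back-of-the-envelope sketch, ignoring MAC overhead, backoff, and acknowledgements:

    # Nominal airtime of one maximum-size IEEE 802.15.4 frame (127 bytes)
    # at the 250 kbit/s 2.4 GHz PHY rate; protocol overhead is ignored.
    frame_bits = 127 * 8
    rate_bps = 250_000
    print(f"{frame_bits / rate_bps * 1000:.2f} ms")   # ~4.06 ms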
Zigbee was conceived in 1998, standardized in 2003, and revised in 2006. The name refers to the waggle dance of honey bees after their return to the beehive.
Overview
Zigbee is a low-power wireless mesh network standard targeted at battery-powered devices in wireless control and monitoring applications. Zigbee delivers low-latency communication. Zigbee chips are typically integrated with radios and with microcontrollers. Zigbee operates in the industrial, scientific and medical (ISM) radio bands, including 2.4 GHz in most jurisdictions worldwide. Some devices also use 784 MHz in China, 868 MHz in Europe and 915 MHz in the US and Australia, although even in those regions and countries 2.4 GHz is used for most commercial home Zigbee devices. Data rates vary from 20 kbit/s (868 MHz band) to 250 kbit/s (2.4 GHz band).
Zigbee builds on the physical layer and media access control defined in IEEE standard 802.15.4 for low-rate wireless personal area networks (WPANs). The specification includes four additional key components: network layer, application layer, Zigbee Device Objects (ZDOs) and manufacturer-defined application objects. ZDOs are responsible for some tasks, including keeping track of device roles, managing requests to join a network, as well as device discovery and security.
The Zigbee network layer natively supports both star and tree networks, and generic mesh networking. Every network must have one coordinator device. Within star networks, the coordinator must be the central node. B |
https://en.wikipedia.org/wiki/ORDVAC | The ORDVAC (Ordnance Discrete Variable Automatic Computer) is an early computer built by the University of Illinois for the Ballistic Research Laboratory at Aberdeen Proving Ground. It was a successor to the ENIAC (along with the earlier EDVAC). It was based on the IAS architecture developed by John von Neumann, which came to be known as the von Neumann architecture. The ORDVAC was the first computer to have a compiler. ORDVAC passed its acceptance tests on March 6, 1952, at Aberdeen Proving Ground in Maryland. Its purpose was to perform ballistic trajectory calculations for the US military. In 1992, the Ballistic Research Laboratory became a part of the U.S. Army Research Laboratory.
Unlike the other computers of its era, the ORDVAC and ILLIAC I were twins and could exchange programs with each other. The later SILLIAC computer was a copy of the ORDVAC/ILLIAC series. J. P. Nash of the University of Illinois was a developer of both the ORDVAC and of the university's own identical copy, the ILLIAC, which was later renamed the ILLIAC I. Abe Taub, Sylvian Ray, and Donald B. Gillies assisted in the checkout of ORDVAC at Aberdeen Proving Ground. After ORDVAC was moved to Aberdeen, it was used remotely by telephone by the University of Illinois for up to eight hours per night. It was one of the first computers to be used remotely and probably the first to routinely be used remotely.
The ORDVAC used 2178 vacuum tubes. Its addition time was 72 microseconds and the multiplication time was 732 microseconds. Its main memory consisted of 1024 words of 40 bits each, stored using Williams tubes. It was a rare asynchronous machine, meaning that there was no central clock regulating the timing of the instructions. One instruction started executing when the previous one finished.
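Converted into modern figures (a rough conversion, assuming back-to-back instructions with no overlap), those timings imply:

    # Throughput and memory size implied by the quoted ORDVAC numbers.
    print(f"{1_000_000 / 72:,.0f} additions/s")         # ~13,900
    print(f"{1_000_000 / 732:,.0f} multiplications/s")  # ~1,370
    print(f"{1024 * 40 // 8:,} bytes of main memory")   # 5,120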
Among the ORDVAC programmers were Martin Davis and Elsie Shutt.
ORDVAC and its successor at Aberdeen Proving Ground, BRLESC, used their own unique notation for hexadecimal numbers. Instead of the sequence A B C D E F universally used today, the digits ten to fifteen were represented by the letters K S N J F L (King Sized Numbers Just For Laughs), corresponding to the teleprinter characters on five-track paper tape. The manual that was used by the military in 1958 used the name sexadecimal for the base 16 number system.
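A small sketch of that digit convention (a hypothetical helper written for illustration, not period software), mapping the digits ten to fifteen onto K S N J F L:

    ORDVAC_DIGITS = "0123456789KSNJFL"   # K S N J F L stand in for A..F

    def to_sexadecimal(n: int) -> str:
        """Render a non-negative integer in ORDVAC/BRLESC base-16 notation."""
        if n == 0:
            return "0"
        out = []
        while n:
            n, r = divmod(n, 16)
            out.append(ORDVAC_DIGITS[r])
        return "".join(reversed(out))

    print(to_sexadecimal(48879))   # 0xBEEF in today's notation -> "SFFL"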
Commissioning
When ORDVAC was completed, it was tested at the University of Illinois and then disassembled and shipped to Aberdeen Proving Ground in Maryland. Three faculty members including Sylvian Ray and Abe Taub drove to Maryland to help assemble the machine, which was reconstructed and passed its validation tests in just a week. It was expected that assembly and testing would take over a month. When some military officers came to check on the progress of Ordvac assembly, they asked, "Who is in charge here?", and were told, "It's the guy who is holding the broom!", as Abe Taub—the head of The University of Illinois Digital Computer Laboratory—was sweeping up af |
https://en.wikipedia.org/wiki/ILLIAC%20I | The ILLIAC I (Illinois Automatic Computer), a pioneering computer in the ILLIAC series of computers built in 1952 by the University of Illinois, was the first computer built and owned entirely by a United States educational institution.
Computer
The project was the brainchild of Ralph Meagher and Abraham H. Taub, who both were associated with Princeton's Institute for Advanced Study before coming to the University of Illinois. The ILLIAC I became operational on September 1, 1952. It was the second of two identical computers, the first of which was ORDVAC, also built at the University of Illinois. These two machines were the first pair of machines to run the same instruction set.
ILLIAC I was based on the IAS machine Von Neumann architecture as described by mathematician John von Neumann in his influential First Draft of a Report on the EDVAC. Unlike most computers of its era, the ILLIAC I and ORDVAC computers were twin copies of the same design, with software compatibility. The computer had 2,800 vacuum tubes, measured 10 ft (3 m) by 2 ft (0.6 m) by 8½ ft (2.6 m) (L×B×H), and weighed . ILLIAC I was very powerful for its time; in 1956, it had more computing power than all of Bell Telephone Laboratories.
Because the lifetime of the tubes within ILLIAC was about a year, the machine was shut down every day for "preventive maintenance" when older vacuum tubes would be replaced in order to increase reliability. Visiting scholars from Japan assisted in the design of the ILLIAC series of computers, and later developed the MUSASINO-1 computer in Japan. ILLIAC I was retired in 1962, when the ILLIAC II became operational.
Innovations
1955 – Lejaren Hiller and Leonard Isaacson used ILLIAC I to compose the Illiac Suite which was one of the first pieces of music to be written with the aid of a computer.
1957 – Mathematician Donald B. Gillies, physicist James E. Snyder, and astronomers George C. McVittie, S. P. Wyatt, Ivan R. King and George W. Swenson of the University of Illinois used the ILLIAC I computer to calculate the orbit of the Sputnik 1 satellite within two days of its launch.
1960 – The first version of the PLATO computer-based education system was implemented on the ILLIAC I by a team led by Donald Bitzer. It serviced a single user. In early 1961, version 2 of PLATO serviced two simultaneous users.
See also
ILLIAC II
ILLIAC III
ILLIAC IV
MISTIC – Similar computer specifically inspired by ILLIAC I
SILLIAC - Sydney version of the Illinois Automatic Computer, built by the University of Sydney
List of vacuum-tube computers
References
External links
ILLIAC I history including computer music
ILLIAC I documentation at bitsavers.org
I. R. King, G. C. McVittie, G. W. Swenson, Jr., and S. P. Wyatt, Jr., "Further observations of the first satellite," Nature, No. 4593, November 9, 1957, p. 943.
Digital Computer, 'electronic brain' at the University of Illinois. Digital Public Library of America
Photos from University of Illinois archive |
https://en.wikipedia.org/wiki/AVIDAC | The AVIDAC or Argonne Version of the Institute's Digital Automatic Computer, an early computer built by Argonne National Laboratory, was partially based on the IAS architecture developed by John von Neumann. It was built by the Laboratory's Physics Division for $250,000 and began operations on January 28, 1953.
As with almost all computers of its era, it was a one-of-a-kind machine that could not exchange programs with other computers (even other IAS machines).
See also
List of vacuum-tube computers
References
External links
IAS architecture computers |
https://en.wikipedia.org/wiki/Cyclone%20%28computer%29 | The Cyclone is a vacuum-tube computer, built by Iowa State College (later University) at Ames, Iowa. The computer was commissioned in July 1959. It was based on the IAS architecture developed by John von Neumann. The Cyclone was based on ILLIAC, the University of Illinois Automatic Computer. The Cyclone used 40-bit words with two 20-bit instructions per word; each instruction had an eight-bit op-code and a 12-bit operand or address field. In general, IAS-based computers were not code compatible with each other, although originally math routines which ran on the ILLIAC would also run on the Cyclone.
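The stated word layout is straightforward to model; a minimal decoding sketch (the left-to-right field order is an assumption here, not documented in this text):

    def decode_word(word: int):
        """Split a 40-bit word into two (op-code, address) pairs, assuming
        the left instruction occupies the high 20 bits and each instruction
        is an 8-bit op-code followed by a 12-bit address field."""
        left, right = (word >> 20) & 0xFFFFF, word & 0xFFFFF
        return [((half >> 12) & 0xFF, half & 0xFFF) for half in (left, right)]

    # Example word holding op 0x12 / addr 0x345 and op 0x67 / addr 0x89A.
    word = (0x12 << 32) | (0x345 << 20) | (0x67 << 12) | 0x89A
    print([(hex(op), hex(addr)) for op, addr in decode_word(word)])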
The Cyclone was completed just as the transistor was replacing the vacuum tube as an active computing element. The Cyclone had about 2,500 vacuum tubes, 1,521 of which were type 5844. (The IBM 1401 computer, announced the same year, was fully transistorized. About 15,000 IBM 1401 machines were produced.)
The supervisor of the Cyclone computer construction was Dr. R. M. Stewart, a professor of physics at ISC (now ISU). The paper-tape input was upgraded with an optical character reader using a high-speed stepper motor, again by a person from the Physics Department. Robert Asbury Sharpe organized and taught courses for interested faculty and wrote an assembler as well as an ALGOL compiler for the Cyclone.
The Cyclone solved 40 equations with 40 unknowns in less than four minutes. This was the same type of problem that the Atanasoff–Berry Computer was designed to solve twenty years earlier at the same college.
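The same job is a single library call today; a minimal sketch using NumPy with a random 40×40 system, just to show the scale:

    import numpy as np

    # A dense 40-equation, 40-unknown system, as in the Cyclone benchmark.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 40))
    b = rng.standard_normal(40)
    x = np.linalg.solve(A, b)      # runs in microseconds on modern hardware
    print(np.allclose(A @ x, b))   # True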
The Cyclone computer was 10 feet tall, 12 feet long, 3 feet wide, and contained over 2,700 vacuum tubes. It used 19 kW of electric power and weighed about . "Good time" was about 40 hours per week.
The original Cyclone had:
Input and output via five-hole paper tape.
A model 28 Teleprinter, 10 characters per second, was also available for output.
Memory was originally 1,024 40-bit words of Williams tube electrostatic memory.
The Cyclone had a major rebuild about 1961:
Five-hole paper tape was replaced by an eight-hole tape reader/punch.
The console printer was upgraded to an eight-hole Friden Flexowriter.
1024-word Williams memory was replaced by four banks of magnetic-core memory, 4096 words in each bank.
Both versions had features and limitations:
All IAS derivatives used an asynchronous CPU, with no clock. Each unit generated an "answer-back" or "I'm ready" signal, which permitted the output to be used or the next step taken. Most computers designed since then are "synchronous", meaning after a certain number of clock cycles the unit is finished with the pending operation, for example an addition.
There were no index registers. To access sequential data in a loop, programs used address modification in the instructions instead of incrementing or decrementing an index (a sketch of this technique follows the list).
The Cyclone had a loudspeaker system connected to the sign bit of the accumulator. Operators or monitors could listen for an infinite loop or particular program. W |
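A minimal sketch of the address-modification idiom mentioned above (a Python simulation for illustration, not period code): the loop rewrites the address field of its own load instruction on each pass.

    # Simulate index-register-free looping: instead of an index register,
    # the program increments the address field inside its load instruction.
    memory = list(range(100, 110))      # data at "addresses" 0..9
    load = {"op": "LOAD", "addr": 0}    # the instruction being modified
    total = 0
    for _ in range(len(memory)):
        total += memory[load["addr"]]   # execute the load
        load["addr"] += 1               # bump the instruction's address field
    print(total)                        # 1045, the sum of memory[0..9]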
https://en.wikipedia.org/wiki/MANIAC%20I | The MANIAC I (Mathematical Analyzer Numerical Integrator and Automatic Computer Model I) was an early computer built under the direction of Nicholas Metropolis at the Los Alamos Scientific Laboratory. It was based on the von Neumann architecture of the IAS, developed by John von Neumann. As with almost all computers of its era, it was a one-of-a-kind machine that could not exchange programs with other computers (even the several other machines based on the IAS). Metropolis chose the name MANIAC in the hope of stopping the rash of silly acronyms for machine names, although von Neumann may have suggested the name to him.
The MANIAC weighed about .
The first task assigned to the Los Alamos Maniac was to perform more precise and extensive calculations of the thermonuclear process. In 1953, the MANIAC obtained the first equation of state calculated by modified Monte Carlo integration over configuration space.
In 1956, MANIAC I became the first computer to defeat a human being in a chess-like game. The chess variant, called Los Alamos chess, was developed for a 6x6 chessboard (no bishops) due to the limited amount of memory and computing power of the machine.
The MANIAC ran successfully in March 1952 and was shut down on July 15, 1958. However, it was transferred to the University of New Mexico in bad condition, and was restored to full operation by Dale Sparks, PhD. It was featured in at least two UNM Maniac programming dissertations from 1963. It remained in operation until it was retired in 1965. It was succeeded by MANIAC II in 1957.
A third version MANIAC III was built at the Institute for Computer Research at the University of Chicago in 1964.
Notable MANIAC programmers
Mary Tsingou - developed the algorithm used in the Fermi-Pasta-Ulam-Tsingou problem
Klara Dan von Neumann - wrote the first programs for MANIAC I
Dana Scott - programmed the MANIAC to enumerate all solutions to a pentomino puzzle by backtracking in 1958.
Marjorie Devaney - one of the first MANIAC I programmers.
Arianna W. Rosenbluth - wrote the first full implementation of the widely used Markov chain Monte Carlo algorithm.
Paul Stein and Mark Wells - implemented Los Alamos chess.
See also
List of vacuum-tube computers
References
Brewster, Mike. John von Neumann: MANIAC's Father (archived) in BusinessWeek Online, April 8, 2004.
Harlow, Francis H. and N. Metropolis. Computing & Computers: Weapons Simulation Leads to the Computer Era, including photos of MANIAC I
External links
Photos:
IAS architecture computers
40-bit computers
Vacuum tube computers |
https://en.wikipedia.org/wiki/WEIZAC | WEIZAC (Weizmann Automatic Computer) was the first computer in Israel, and one of the first large-scale, stored-program, electronic computers in the world.
It was built at the Weizmann Institute during 1954–1955, based on the Institute for Advanced Study (IAS) architecture developed by John von Neumann and was operational until the end of 1963. WEIZAC was widely used by Israeli scientists and researchers and helped with the advancement of science and technology in the young nation.
As with all computers of its era, it was a one of a kind machine that could not exchange programs with other computers (even other IAS machines).
The beginning
The WEIZAC project was initiated by Prof. Chaim L. Pekeris, who worked at the IAS at the time von Neumann's IAS machine was being designed. Chaim Weizmann, Israel's future first president, asked Pekeris to establish the Department of Applied Mathematics at the Weizmann Institute, and Pekeris wanted to have a similar computer available there. Pekeris wanted it as a means to solve Laplace's tidal equations for the Earth's oceans, and also for the benefit of the entire scientific community of Israel, including the Defense Ministry.
In July 1947, an advisory committee for the Applied Mathematics Department discussed the plan to build the computer. Among the committee's members were Albert Einstein, who did not find the idea reasonable, and John von Neumann, who supported it. In one conversation, von Neumann was asked: "What will that tiny country do with an electric computer?" He responded: "Don’t worry about that problem. If nobody else uses the computer, Pekeris will use it full time!"
In the end, a decision was made to proceed with the plan. Chaim Weizmann assigned US$50,000 for the project, which was 20% of the Weizmann Institute's total budget.
In 1952, Gerald Estrin, a research engineer from the von Neumann project, was chosen to lead the project. He came to Israel along with his wife, Thelma, who was an electrical engineer and also involved in the project. They brought with them schematics, but no parts. Estrin later commented: "As I look back now, if we had systematically laid out a detailed plan of execution we would probably have aborted the project." After arriving, Estrin's impression was that, besides Pekeris, other Israeli scientists thought it was ridiculous to build a computer in Israel.
To recruit skilled staff for the project, a newspaper advertisement was posted. Most of the applicants had no records of prior education because those were lost in the Holocaust or during immigration, but in Israel's budding technical community everyone knew or knew about everybody else. The WEIZAC project also provided an opportunity for mathematicians and engineers to move to Israel without sacrificing their professional careers.
Specifications
WEIZAC was an asynchronous computer operating on 40-bit words. Instructions consisted of twenty bits: an eight-bit instruction code and twelve bits for addressing. Pun |
https://en.wikipedia.org/wiki/Artificial%20consciousness | Artificial consciousness (AC), also known as machine consciousness (MC), synthetic consciousness or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia).
Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness or NCC. Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious.
Philosophical views
As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of “raw feels”, “what it is like” or qualia.
Plausibility debate
Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution.
In his article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can no longer be reprogrammed, from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."
For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.
Computational Foundation argument
One of the most explicit arguments for the plausibility of artificial sentience comes from David Chalmers. His proposal is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. Chalmers proposes that a system implements a computation if "the causal structure of the system mirrors the formal structure of the computation", and that any system that implements certain computations is sentient.
The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and ph |
https://en.wikipedia.org/wiki/Leslie%20Lamport | Leslie B. Lamport (born February 7, 1941) is an American computer scientist and mathematician. Lamport is best known for his seminal work in distributed systems, and as the initial developer of the document preparation system LaTeX and the author of its first manual.
Lamport was the winner of the 2013 Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems.
Early life and education
Lamport was born into a Jewish family in Brooklyn, New York, the son of Benjamin and Hannah Lamport (née Lasser). His father was an immigrant from Volkovisk in the Russian Empire (now Vawkavysk, Belarus) and his mother was an immigrant from the Austro-Hungarian Empire, now southeastern Poland.
A graduate of Bronx High School of Science, Lamport received a B.S. in mathematics from the Massachusetts Institute of Technology in 1960, followed by M.A. (1963) and Ph.D. (1972) degrees in mathematics from Brandeis University. His dissertation, The analytic Cauchy problem with singular data, is about singularities in analytic partial differential equations.
Career and research
Lamport worked as a computer scientist at Massachusetts Computer Associates from 1970 to 1977, SRI International from 1977 to 1985, and Digital Equipment Corporation and Compaq from 1985 to 2001. In 2001 he joined Microsoft Research in California.
Distributed systems
Lamport's research contributions have laid the foundations of the theory of distributed systems. Among his most notable papers are
"Time, Clocks, and the Ordering of Events in a Distributed System", which received the Principles of Distributed Computing (PODC) Influential Paper Award in 2000,
"How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs", which defined the notion of sequential consistency,
"The Byzantine Generals' Problem",
"Distributed Snapshots: Determining Global States of a Distributed System" and
"The Part-Time Parliament".
These papers relate to such concepts as logical clocks (and the happened-before relationship; see the sketch after this list) and Byzantine failures. They are among the most cited papers in the field of computer science, and describe algorithms to solve many fundamental problems in distributed systems, including:
the Paxos algorithm for consensus,
the bakery algorithm for mutual exclusion of multiple threads in a computer system that require the same resources at the same time,
the Chandy–Lamport algorithm for the determination of consistent global states (snapshot), and
the Lamport signature, one of the prototypes of the digital signature.
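As an illustration of the logical-clock concept from "Time, Clocks, and the Ordering of Events in a Distributed System", the following is a minimal Python sketch (an informal teaching aid, not Lamport's own presentation): each process keeps a counter that ticks on every local event and send, and on receipt jumps past both the local counter and the message's timestamp, preserving the happened-before order.

    # Minimal sketch of a Lamport logical clock (illustrative only).
    class LamportClock:
        def __init__(self):
            self.time = 0

        def local_event(self):
            self.time += 1               # tick on every internal event
            return self.time

        def send(self):
            self.time += 1               # tick, then stamp the outgoing message
            return self.time

        def receive(self, msg_time):
            # Jump past both our own time and the sender's timestamp,
            # so the receive is ordered after the send (happened-before).
            self.time = max(self.time, msg_time) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    t_send = a.send()            # a.time == 1
    t_recv = b.receive(t_send)   # b.time == 2, ordered after the send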
LaTeX
When Donald Knuth began issuing the early releases |
https://en.wikipedia.org/wiki/List%20of%20electrical%20engineers | This is a list of electrical engineers (by no means exhaustive), people who have made notable contributions to electrical engineering or computer engineering.
See also
List of engineers - for lists of engineers from other disciplines
List of Russian electrical engineers |
https://en.wikipedia.org/wiki/IBM%20WebSphere | IBM WebSphere refers to a brand of proprietary computer software products in the genre of enterprise software known as "application and integration middleware". These software products are used by end-users to create and integrate applications with other applications. IBM WebSphere has been available to the general market since 1998.
History
In June 1998, IBM introduced the first product in this brand, IBM WebSphere Performance Pack. This first component later formed part of IBM WebSphere Application Server Network Deployment.
Products
The following products have been produced by IBM within the WebSphere brand:
IBM WebSphere Application Server - a web application server
IBM Workload Deployer - a hardware appliance that provides access to IBM middleware virtual images and patterns
IBM WebSphere eXtreme Scale - an in-memory data grid for use in high-performance computing
IBM HTTP Server
IBM WebSphere Adapters
IBM WebSphere Business Events
IBM WebSphere Edge Components
IBM WebSphere Host On-Demand (HOD)
IBM WebSphere Message Broker
Banking Transformation Toolkit
IBM MQ
IBM WebSphere Portlet Factory
IBM WebSphere Process Server
WebSphere Commerce (sold to HCL Technologies in 2019)
WebSphere Portal (sold to HCL Technologies in 2019) |
https://en.wikipedia.org/wiki/ILLIAC%20IV | The ILLIAC IV was the first massively parallel computer. The system was originally designed to have 256 64-bit floating point units (FPUs) and four central processing units (CPUs) able to process 1 billion operations per second. Due to budget constraints, only a single "quadrant" with 64 FPUs and a single CPU was built. Since the FPUs all had to process the same instruction – ADD, SUB etc. – in modern terminology the design would be considered to be single instruction, multiple data, or SIMD.
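In SIMD terms, one instruction drives many data elements at once. As a loose modern analogy (a Python/NumPy sketch for illustration; it is not ILLIAC IV code), the single logical operation below is applied across 64 elements, much as one ILLIAC instruction drove all 64 FPUs of a quadrant:

    import numpy as np

    # One "instruction" (an add) applied uniformly across 64 lanes,
    # loosely analogous to 64 FPUs executing the same ADD in lockstep.
    a = np.arange(64, dtype=np.float64)
    b = np.ones(64, dtype=np.float64)
    c = a + b   # one operation, 64 data elements: single instruction, multiple data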
The concept of building a computer using an array of processors came to Daniel Slotnick while working as a programmer on the IAS machine in 1952. A formal design did not start until 1960, when Slotnick was working at Westinghouse Electric and arranged development funding under a US Air Force contract. When that funding ended in 1964, Slotnick moved to the University of Illinois and joined the Illinois Automatic Computer (ILLIAC) team. With funding from Advanced Research Projects Agency (ARPA), they began the design of a newer concept with 256 64-bit processors instead of the original concept with 1,024 1-bit processors.
While the machine was being built at Burroughs, the university began building a new facility to house it. Political tension over the funding from the US Department of Defense led ARPA and the university to fear for the machine's safety. When the first 64-processor quadrant of the machine was completed in 1972, it was sent to the NASA Ames Research Center in California. After three years of thorough modification to fix various flaws, ILLIAC IV was connected to the ARPANET for distributed use in November 1975, becoming the first network-available supercomputer, beating the Cray-1 by nearly 12 months.
Running at half its design speed, the one-quadrant ILLIAC IV delivered a peak of 50 MFLOPS, making it the fastest computer in the world at that time. It is also credited with being the first large computer to use solid-state memory, as well as the most complex computer built to that date, with over 1 million gates. Generally considered a failure due to massive budget overruns, the design was instrumental in the development of new techniques and systems for programming parallel systems. In the 1980s, several machines based on ILLIAC IV concepts were successfully delivered.
History
Origins
In June 1952, Daniel Slotnick began working on the IAS machine at the Institute for Advanced Study (IAS) at Princeton University. The IAS machine featured a bit-parallel math unit that operated on 40-bit words. Originally equipped with Williams tube memory, a magnetic drum from Engineering Research Associates was later added. This drum had 80 tracks so two words could be read at a time, and each track stored 1,024 bits.
While contemplating the drum's mechanism, Slotnick began to wonder if that was the correct way to build a computer. If the bits of a word were written serially to a single track, instead of in parallel across 40 tracks, then the data could be |
https://en.wikipedia.org/wiki/ILLIAC%20II | The ILLIAC II was a revolutionary super-computer built by the University of Illinois that became operational in 1962.
Description
The concept, proposed in 1958, pioneered Emitter-coupled logic (ECL) circuitry, pipelining, and transistor memory with a design goal of 100x speedup compared to ILLIAC I.
ILLIAC II had 8,192 words of core memory, backed up by 65,536 words of storage on magnetic drums. The core memory access time was 1.8 to 2 µs. The magnetic drum access time was 8.5 ms. A "fast buffer" was also provided for storage of short loops and intermediate results (similar in concept to what is now called cache). The "fast buffer" access time was 0.25 µs.
The word size was 52 bits.
Floating point numbers used a format with seven bits of exponent (power of 4) and 45 bits of mantissa.
Instructions were either 26 bits or 13 bits long, allowing packing of up to four instructions per memory word.
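The arithmetic of that packing can be illustrated as follows (a hypothetical Python sketch; actual ILLIAC II instruction encodings are not modeled here):

    # Pack four 13-bit instruction fields into one 52-bit word (4 * 13 == 52).
    def pack(fields):
        assert len(fields) == 4 and all(0 <= f < (1 << 13) for f in fields)
        word = 0
        for f in fields:
            word = (word << 13) | f
        return word

    # Unpack by shifting each 13-bit field back out.
    def unpack(word):
        return [(word >> shift) & 0x1FFF for shift in (39, 26, 13, 0)]

    assert unpack(pack([1, 2, 3, 4])) == [1, 2, 3, 4]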
Rather than the "Fetch, Decode, and Execute" stage names used on Stretch, the pipeline stages were named "Advanced Control", "Delayed Control", and "Interplay".
Innovation
The ILLIAC II was one of the first transistorized computers. Like the IBM Stretch computer, ILLIAC II was designed using "future transistors" that had not yet been invented.
The ILLIAC II project was proposed before, and competed with IBM's Stretch project, and several ILLIAC designers felt that Stretch borrowed many of its ideas from ILLIAC II, whose design and documentation were published openly as University of Illinois Tech Reports. Members of the ILLIAC II team jokingly referred to the competing IBM Project as "St. Retch".
The ILLIAC II had a division unit designed by faculty member James E. Robertson, a co-inventor of the SRT Division algorithm.
The ILLIAC II was one of the first pipelined computers, along with IBM's Stretch Computer. The pipelined control was designed by faculty member Donald B. Gillies. The pipeline stages were named Advanced Control, Delayed Control, and Interplay.
The ILLIAC II was the first computer to incorporate Speed-Independent Circuitry, invented by faculty member David E. Muller. Speed-Independent Circuitry is a class of asynchronous digital logic based on the Muller C-element. This digital logic, being asynchronous, runs at full speed of transistor propagation and requires no clocks.
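The behavior of the Muller C-element can be sketched in a few lines (an illustrative Python model of the logic only, not of its speed-independent timing): the output copies the inputs when they agree and otherwise holds its previous value.

    # Behavioral model of a Muller C-element (illustration only).
    def c_element(a, b, prev_out):
        return a if a == b else prev_out

    # The output rises only once both inputs are 1, and falls only once both are 0:
    out = 0
    for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
        out = c_element(a, b, out)
        print(a, b, out)   # prints 0, then 1, then 1 (held), then 0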
Discoveries
During check-out of the ILLIAC II, before it became fully operational, faculty member Donald B. Gillies programmed ILLIAC II to search for Mersenne prime numbers. The check-out period took roughly 3 weeks, during which the computer verified all the previous Mersenne primes and found three new prime numbers. The results were immortalized for more than a decade on a UIUC Postal Annex cancellation stamp, and were discussed in the New York Times, recorded in the Guinness Book of World Records, and described in a journal paper in Mathematics of Computation.
End of life
The ILLIAC II computer was disassembled roughly a decade after its |
https://en.wikipedia.org/wiki/ILLIAC%20III | The ILLIAC III was a fine-grained SIMD pattern recognition computer built by the University of Illinois in 1966.
This ILLIAC's initial task was image processing of bubble chamber experiments used to detect nuclear particles. Later it was used on biological images.
The machine was destroyed in a fire, caused by a Variac shorting on one of the wooden-top benches, in 1968. It was rebuilt in the early 1970s, and the core parallel-processing element of the machine, the Pattern Articulation Unit, was successfully implemented. In spite of this and the productive exploration of other advanced concepts, such as multiple-radix arithmetic, the project was eventually abandoned.
Bruce H. McCormick was the leader of the project throughout its history. John P. Hayes was responsible for the logic design of the input-output channel control units.
See also
ORDVAC
ILLIAC I
ILLIAC II
ILLIAC IV
External links
Advanced Computer Architecture I - Dan Hammerstrom (PDF) See page 30.
SIMD - CS 433 - UC San Diego CS Dept. - Andrew A. Chien (PS) See page 5. |
https://en.wikipedia.org/wiki/Marketing%20research | Marketing research is the systematic gathering, recording, and analysis of qualitative and quantitative data about issues relating to marketing products and services. The goal is to identify and assess how changing elements of the marketing mix impacts customer behavior.
This involves specifying the data required to address these issues, then designing the method for collecting information, managing and implementing the data collection process. After analyzing the collected data, these results and findings, including their implications, are forwarded to those empowered to act on them.
Market research, marketing research, and marketing are a sequence of business activities; sometimes these are handled informally.
The field of marketing research is much older than that of market research. Although both involve consumers, marketing research is concerned specifically with marketing processes, such as advertising effectiveness and salesforce effectiveness, while market research is concerned specifically with markets and distribution. Two explanations given for confusing market research with marketing research are the similarity of the terms and also that market research is a subset of marketing research. Further confusion exists because of major companies with expertise and practices in both areas.
Overview
Marketing research is often partitioned into two sets of categorical pairs, either by target market:
Consumer marketing research, (B2C) and
Business-to-business (B2B) marketing research.
Or, alternatively, by methodological approach:
Qualitative marketing research, and
Quantitative marketing research.
Consumer marketing research is a form of applied sociology that concentrates on understanding the preferences, attitudes, and behaviors of consumers in a market-based economy, and it aims to understand the effects and comparative success of marketing campaigns.
Thus, marketing research may also be described as the systematic and objective identification, collection, analysis, and dissemination of information for the purpose of assisting management in decision making related to the identification and solution of problems and opportunities in marketing. The goal of market research is to obtain and provide management with viable information about the market (e.g. competitors), consumers, the product/service itself etc.
Role
The purpose of marketing research (MR) is to provide management with relevant, accurate, reliable, valid, and up to date market information. Competitive marketing environment and the ever-increasing costs attributed to poor decision making require that marketing research provide sound information. Sound decisions are not based on gut feeling, intuition, or even pure judgment.
Managers make numerous strategic and tactical decisions in the process of identifying and satisfying customer needs. They make decisions about potential opportunities, target market selection, market segmentation, planning and implementing mark |
https://en.wikipedia.org/wiki/E-government | E-government (short for electronic government) is the use of technological communications devices, such as computers and the Internet, to provide public services to citizens and other persons in a country or region. E-government offers new opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens.
The term consists of the digital interactions between a citizen and their government (C2G), between governments and other government agencies (G2G), between government and citizens (G2C), between government and employees (G2E), and between government and businesses/commerce (G2B); e-government delivery models are commonly broken down into these categories. This interaction consists of citizens communicating with all levels of government (city, state/province, national, and international), facilitating citizen involvement in governance using information and communication technology (ICT) (such as computers and websites) and business process re-engineering (BPR). Brabham and Guth (2017) interviewed the third-party designers of e-government tools in North America about the ideals of user interaction that they build into their technologies, which include progressive values, ubiquitous participation, geolocation, and education of the public.
Other definitions stray from the idea that technology is an object and defines e-government simply as facilitators or instruments and focus on specific changes in Public Administration issues. The internal transformation of a government is the definition that established the specialist technologist Mauro D. Ríos. In his paper "In Search of a Definition of Electronic Government", he says: "Digital government is a new way of organization and management of public affairs, introducing positive transformational processes in management and the structure itself of the organization chart, adding value to the procedures and services provided, all through the introduction and continued appropriation of information and communication technologies as a facilitator of these transformations."
Terminology
E-government is also known as e-gov, electronic government, Internet governance, digital government, online government, or connected government. As of 2014 the OECD still uses the term digital government, and distinguishes it from e-government in the recommendation produced there for the Network on E-Government of the Public Governance Committee. Several governments have started to use the term digital government to refer to a wide range of services involving contemporary technology, such as big data, automation or predictive analytics.
E-gov strategies (or digital government) is defined as "The employment of the Internet and the world-wide-web for delivering government information and services to the citizens." (United Nations, 2006; AOEMA, 2005). Electronic government (or e-government) essentially refers to "utilization of Information Technology (IT), Informatio |
https://en.wikipedia.org/wiki/Osborne%20Computer%20Corporation | The Osborne Computer Corporation (OCC) was an American computer company and pioneering maker of portable computers. It was located in the Silicon Valley of the southern San Francisco Bay Area in California.
Adam Osborne, the founder of the company, developed, with design work from Lee Felsenstein, the world's first mass-produced portable computer in 1981.
History
Osborne 1
After Adam Osborne sold his computer book-publishing company to McGraw-Hill in 1979, he decided to market an inexpensive portable computer with bundled software and hired Lee Felsenstein to design it. The resulting Osborne 1 featured a 5 inch (127 mm) 52-column display, two floppy-disk drives, a Z80 microprocessor, and 64 KB of RAM. It could fit under an airplane seat and survive being accidentally dropped. The bundled software package included the CP/M operating system, the MBASIC and CBASIC programming languages, the WordStar word processing package, and the SuperCalc spreadsheet program. It also included project management software with PERT and GANTT charts, and communications software for a 300 baud modem. Osborne obtained the software in part by offering stock in the new Osborne Computer Corporation, which he founded in January 1981. For example, MicroPro International received 75,000 shares and $4.60 for each copy of WordStar Osborne distributed with his computers.
Unlike other startup companies, Osborne Computer Corporation's first product was ready soon after its founding. The first Osborne 1 shipped in July 1981, and its low price set market expectations for bundled hardware and software packages for several years to come. The company sold 11,000 Osborne 1s in the eight months after its July 1981 debut, with 50,000 more on backorder, although the early units had a 10 to 15% failure rate. The peak sales per month for it over the course of the product lifetime was 10,000 units, despite the initial business plan for the computer predicting a total of only 10,000 units sold over the entire product lifecycle. Osborne had difficulty meeting demand, and the company grew from two employees, Osborne and Felsenstein, to 3,000 people and $73 million in revenue in 12 months. The growth was so rapid that, in one case, an executive who returned from a one-week trade show had to search two buildings to find her relocated staff. The company announced in October 1982 a temporary bundling of Ashton-Tate's dBase II, increasing demand so much that production reached 500 units a day and severely diminishing quality control.
In 1982, Osborne was originally represented in Australia exclusively by President Computers Pty Ltd, headed by Tom Cooper, a captain of industry in the emerging Australian PC era. With the outstanding success of Osborne 1 sales in Australia, President Computers was lauded at the time by Osborne Corp USA as the largest global distributor of Osborne 1 luggable computers outside of Computerland USA. However with success, Osborne's visiting CFO had his own sights on the Au |
https://en.wikipedia.org/wiki/Lionhead | Lionhead may refer to
Lionhead (goldfish), a variety of goldfish
Lionhead cichlid (Steatocranus casuarius), a fish
Lionhead rabbit, a breed of domestic rabbit
Lionhead Studios, a computer game development company
Lion Head (Alaska), a mountain in Alaska
Lionhead Unit, a campground at Priest Lake in Northern Idaho
The head of a lion
See also
Lion's Head (disambiguation) |
https://en.wikipedia.org/wiki/AWB | AWB may refer to:
.awb, a filename extension for Adaptive Multi-Rate Wideband computer files
Afrikaner Weerstandsbeweging, a South African neo-Nazi separatist political and paramilitary organisation
Air waybill, a receipt issued by an international courier company
Average White Band, a Scottish band
AWB (album), a 1974 album by Average White Band
Aviation Without Borders, a humanitarian organization
AWB Limited, the former Australian Wheat Board
Federal Assault Weapons Ban, a US law
Astronomers Without Borders, a US-based organization dedicated to astronomy
Aaron Wan-Bissaka (born 1997), an English professional footballer
Automatic White Balance in photography |
https://en.wikipedia.org/wiki/Snapshot | Snapshot, snapshots or snap shot may refer to:
Snapshot (photography), a photograph taken without preparation
Computing
Snapshot (computer storage), the state of a system at a particular point in time
Snapshot (file format) or SNP, a file format for reports from Microsoft Access
Film
Snapshot (film), a 1979 Australian film directed by Simon Wincer
Snapshots (2002 film), an Anglo-Dutch American film starring Burt Reynolds and Julie Christie
Snapshots (2018 film), an American film directed by Melanie Mayron
Snap Shot (film), an upcoming film
Music
"Snapshot" (Sylvia song), 1983
"Snapshot" (RuPaul song), 1996
"Snap Shot", a 1981 song by Slave
"SnapShot", a 2018 K-pop song by In2It
Albums
Snapshot (Daryl Braithwaite album), a 2005 album by Australian musician Daryl Braithwaite
Snapshot (Sylvia album), a 1983 album by American country music singer Sylvia
Snapshot (Mission of Burma album), a 2004 live album by American band Mission of Burma
Snapshot (Roger Glover album), a 2005 album by English musician Roger Glover
Snapshot (The Strypes album), a 2013 album by Irish band The Strypes
Snapshot, a 2000 album by Canadian band Knacker
Snapshots (Eleanor McEvoy album), a 1999 album by Eleanor McEvoy
Snapshots (Kim Wilde album), a 2011 covers album by Kim Wilde
Other uses
Snapshot (board game), a 1979 board wargame published by Game Designers' Workshop
Snapshot (video game), a 2012 platform indie game
Snapshot, a 2005 novel by Garry Disher
Snapshot, a 2017 novella by Brandon Sanderson
SNAPSHOT or SNAP-10A, a 1965 American nuclear-powered satellite
Snap shot (ice hockey), a fast shot made by snapping the wrists
Snapshots (TV series), a 2016 Canadian reality program for children
Snapshots, a 2005 jukebox musical of Stephen Schwartz songs |
https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s%20algorithm | Borůvka's algorithm is a greedy algorithm for finding a minimum spanning tree in a graph,
or a minimum spanning forest in the case of a graph that is not connected.
It was first published in 1926 by Otakar Borůvka as a method of constructing an efficient electricity network for Moravia.
The algorithm was rediscovered by Choquet in 1938; again by Florek, Łukasiewicz, Perkal, Steinhaus, and Zubrzycki in 1951; and again by Georges Sollin in 1965. This algorithm is frequently called Sollin's algorithm, especially in the parallel computing literature.
The algorithm begins by finding the minimum-weight edge incident to each vertex of the graph, and adding all of those edges to the forest.
Then, it repeats a similar process of finding the minimum-weight edge from each tree constructed so far to a different tree, and adding all of those edges to the forest.
Each repetition of this process reduces the number of trees, within each connected component of the graph, to at most half of its former value,
so after logarithmically many repetitions the process finishes. When it does, the set of edges it has added forms the minimum spanning forest.
Pseudocode
The following pseudocode illustrates a basic implementation of Borůvka's algorithm.
In the conditional clauses, every edge uv is considered cheaper than "None". The purpose of the completed variable is to determine whether the forest F is yet a spanning forest.
If edges do not have distinct weights, then a consistent tie-breaking rule must be used, e.g. based on some total order on vertices or edges.
This can be achieved by representing vertices as integers and comparing them directly; comparing their memory addresses; etc.
A tie-breaking rule is necessary to ensure that the created graph is indeed a forest, that is, it does not contain cycles. For example, consider a triangle graph with nodes {a,b,c} and all edges of weight 1. Then a cycle could be created if we select ab as the minimal weight edge for {a}, bc for {b}, and ca for {c}.
A tie-breaking rule which orders edges first by source, then by destination, will prevent creation of a cycle, resulting in the minimal spanning tree {ab, bc}.
algorithm Borůvka is
input: A weighted undirected graph G = (V, E).
output: F, a minimum spanning forest of G.
Initialize a forest F to (V, E') where E' = {}.
completed := false
while not completed do
Find the connected components of F and assign to each vertex its component
Initialize the cheapest edge for each component to "None"
for each edge uv in E, where u and v are in different components of F:
let wx be the cheapest edge for the component of u
if is-preferred-over(uv, wx) then
Set uv as the cheapest edge for the component of u
let yz be the cheapest edge for the component of v
if is-preferred-over(uv, yz) then
            Set uv as the cheapest edge for the component of v
        if all components have their cheapest edge set to "None" then
            // no more trees can be merged -- we are finished
            completed := true
        else
            completed := false
            for each component whose cheapest edge is not "None" do
                Add its cheapest edge to E' |
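For concreteness, the pseudocode can be rendered as a compact runnable Python sketch (assuming distinct edge weights, so the tie-breaking rule discussed above is unnecessary; edges are (weight, u, v) triples):

    # Borůvka's algorithm; assumes distinct edge weights (no tie-breaking needed).
    def boruvka(vertices, edges):
        parent = {v: v for v in vertices}

        def find(v):
            # Union-find root with path halving.
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        forest = []
        while True:
            cheapest = {}                      # component root -> best edge
            for w, u, v in edges:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue                   # u and v already in the same tree
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
            if not cheapest:
                break                          # no components can be merged: done
            for w, u, v in cheapest.values():
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv            # union the two components
                    forest.append((w, u, v))
        return forest

    # Example: triangle {a,b,c} plus a pendant vertex d.
    print(boruvka("abcd", [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d")]))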
https://en.wikipedia.org/wiki/E-GIF | An e-GIF, or eGovernment Interoperability Framework, is a scheme for ensuring the inter-operation of computer-based systems. It is intended to resolve and prevent (or at least minimise) problems arising from incompatible content of different computer systems. An e-GIF may aim to facilitate government processes at local, national or international levels.
International implementations
About 30 countries and international bodies are known to have implemented some form of e-GIF, most, but not all, using the "e-GIF" acronym. Within the EU many of these were supported by the European Interoperability Framework (EIF) of the IDABC. These included Denmark, Greece, the United Kingdom and the Reach "Public Services Broker" in Ireland.
In Africa COMESA and Ghana provide examples of similar initiatives, whilst Bhutan and Thailand are examples from Asia.
New Zealand and Australia, similarly, implemented their own frameworks. In the United States the National Information Exchange Model (NIEM) shared similar aims.
Aims of e-GIF
to enable the seamless flow of information across government / Public Service Organisations
to set practical standards using stable well supported products
to provide support, guidance and toolkits to enable the standards to be met
to provide a long-term strategy, able to accommodate and adapt.
Key e-GIF policies
alignment with the Internet: the universal adoption of common specifications used on the Internet and World Wide Web for public sector information systems
adoption of XML as the primary standard for data integration and presentation tools for government systems
adoption of the browser as the key interface; all public sector information systems are to be accessible through browser based technology; other interfaces are permitted but only in addition to browser based ones
the addition of metadata to government information resources
the development and adoption of Metadata standards such as the UK e-Government Metadata Standard (e-GMS) based on the international Dublin Core model
the development and maintenance of standard category lists such as the UK Government Category List (GCL) and IPSV.
adherence to a catalogue of Technical Standards such as the UK Technical Standards Catalogue.
adherence to the e-GIF was mandated throughout the public sector.
Drivers for selection of e-GIF specifications
interoperability - only specifications that are relevant to systems interconnectivity, data integration, e-services access and content management are specified
market support - the specifications selected are widely supported by the market, and likely to reduce the cost and risk of government information systems
scalability - specifications selected have capacity to be scaled to satisfy changed demands made on the system, such as changes in data volumes, number of transactions or number of users
openness - the specifications are documented and available to the public at large.
Establishing competence and capability in |
https://en.wikipedia.org/wiki/USRobotics | U.S. Robotics Corporation, often called USR, is a company that produces USRobotics computer modems and related products. Its initial marketing was aimed at bulletin board systems, where its high-speed HST protocol made FidoNet transfers much faster, and thus less costly. During the 1990s it became a major consumer brand with its Sportster line. The company had a reputation for high quality and support for the latest communications standards as they emerged, notably in its V.Everything line, released in 1996.
With the reduced usage of voiceband modems in North America in the early 21st century, USR began branching out into new markets. The company purchased Palm, Inc. for its Pilot PDA, but was itself purchased by 3Com soon after. 3Com spun off USR again in 2000, keeping Palm and returning USR to the now much smaller modem market. Since 2004 the company has been formally known as USR. USR is now a division of UNICOM Global, and is one of the few providers left in the modem market today. The division employs about 125 people worldwide.
History
USR was founded in 1976 in Chicago, Illinois (and later moved to Skokie, Illinois), by a group of entrepreneurs, including Casey Cowell, who served as CEO for most of the company's history, and Paul Collard who designed modems into the mid-1980s. The company name is a reference to the fictional company U.S. Robots and Mechanical Men which featured prominently in the works of Isaac Asimov. The company has stated it was named as an homage to Asimov because in his science fiction works U.S. Robots eventually became "the greatest company in the known galaxy", and USR appeared in I, Robot (2004) as the fictional company itself.
In its early years (circa 1980), USR was a reseller of computers, terminals and modems. At the time, commonly available modems ran at 300 bit/s, but 1200 bit/s using the mutually incompatible Bell 212A and V.22 standards were available at much higher price points. Even in 1983, 300 bit/s remained the most common speed. In 1984, the V.22bis standard provided 2400 bit/s service, but these remained high-cost devices.
USR sold its first modem, the Courier, to corporate customers starting in 1979. In 1984, the breakup of AT&T greatly lowered the cost of the testing needed for connection to the telephone network, which led to lower prices and wider use of modems. The company began offering the Courier to the public in 1984.
In 1986, USR introduced their Courier HST, short for "high speed transfer". Using trellis encoding, HST provided 9,600 bit/s speeds, leapfrogging the standards efforts and offering four times the performance for about twice the price of a 2400 bit/s model. In 1989 HST was expanded to 14.4 kbit/s, 16.8 kbit/s in 1992, and finally to 21 kbit/s and 24 kbit/s.
USR was not the only company making modems with proprietary protocols; Telebit's TrailBlazer series of 1985 offered speeds up to 19.2 kbit/s, and Hayes also introduced the 9600 bit/s Express 96 (or "Ping-Pong") system. However, USR |
https://en.wikipedia.org/wiki/Goa%20%28antelope%29 | The goa (Procapra picticaudata), also known as the Tibetan gazelle, is a species of antelope that inhabits the Tibetan plateau.
Description
The goa is a relatively small antelope, with a slender and graceful body. Both males and females stand tall at the shoulder, measure in head-body length and weigh . Males have long, tapering, ridged horns, reaching lengths of . The horns are positioned close together on the forehead, and rise more or less vertically until they suddenly diverge towards the tips. Females have no horns, and neither sex has distinct facial markings.
The goa is grayish brown over most of its body, with its summer coat being noticeably greyer in colour than its winter one. It has a short, black-tipped tail in the center of its heart-shaped white rump patches. Its fur lacks an undercoat, consisting of long guard hairs only, and is notably thicker in winter. It appears to have excellent senses, including keen eyesight and hearing. Its thin and long legs enhance its running skills, which are required to escape from predators.
Distribution and habitat
The goa is native to the Tibetan plateau, and is widespread throughout the region, inhabiting terrain between in elevation. It is almost restricted to the Chinese provinces of Gansu, Xinjiang, Tibet, Qinghai, and Sichuan, with tiny populations in the Ladakh and Sikkim regions of India. No distinct subspecies of goa have been reported.
Alpine meadow and high elevation steppe are the primary habitats of the goa. It is scattered widely across its range, being present in numerous small herds spread wide apart; estimates of population density vary from 2.8 individuals per sq km to less than 0.1, depending on the local environment.
Behaviour and ecology
Unlike some other ungulates, goas do not form large herds, and are typically found in small family groups. Although they occasionally gather into larger aggregations, most goa groups contain no more than 10 individuals, and many are solitary. They have been noted to give short cries and calls to alert the herd on approach of a predator or other perceived threat.
Goas feed on a range of local vegetation, primarily forbs and legumes, supplemented by relatively small amounts of grasses and sedges. Their main local predators are the Himalayan wolf and the snow leopard.
Reproduction
For much of the year, the sexes remain separate, with the females grazing in higher altitude terrain than the males. The females descend from their high pastures around September, prior to the mating season in December. During the rut, the males are largely solitary, scent marking their territories and sometimes butting or wrestling rival males with their horns.
Gestation lasts around six months, with the single young being born between July and August. The infants remain hidden with their mother for the first two weeks of life, before rejoining the herd. The age of sexual maturity in goas is unknown, but is probably around 18 months. Goas have lived for up t |
https://en.wikipedia.org/wiki/Delay-line%20memory | Delay-line memory is a form of computer memory, now obsolete, that was used on some of the earliest digital computers. Like many modern forms of electronic computer memory, delay-line memory was a refreshable memory, but as opposed to modern random-access memory, delay-line memory was sequential-access.
Analog delay line technology had been used since the 1920s to delay the propagation of analog signals. When a delay line is used as a memory device, an amplifier and a pulse shaper are connected between the output of the delay line and the input. These devices recirculate the signals from the output back into the input, creating a loop that maintains the signal as long as power is applied. The shaper ensures the pulses remain well-formed, removing any degradation due to losses in the medium.
The memory capacity equals the recirculation time divided by the time to transmit one bit. Early delay-line memory systems had capacities of a few thousand bits (although the term "bit" was not in popular use at the time), with recirculation times measured in microseconds. To read or write a particular memory address, it is necessary to wait for the signal representing its value to circulate through the delay line into the electronics. The latency to read or write any particular address is thus time and address dependent, but no longer than the recirculation time.
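A worked example of this relationship (the figures below are hypothetical, chosen only to illustrate the arithmetic):

    # Hypothetical delay line: 500 microsecond recirculation, 1 microsecond per bit.
    recirculation_time_us = 500.0   # time for a pulse to traverse the loop once
    bit_time_us = 1.0               # time to transmit one bit

    capacity_bits = recirculation_time_us / bit_time_us   # 500 bits "in flight"
    worst_case_read_latency_us = recirculation_time_us    # wait up to one full loop
    print(capacity_bits, worst_case_read_latency_us)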
Use of a delay line for a computer memory was invented by J. Presper Eckert in the mid-1940s for use in computers such as the EDVAC and the UNIVAC I. Eckert and John Mauchly applied for a patent for a delay-line memory system on October 31, 1947; the patent was issued in 1953. This patent focused on mercury delay lines, but it also discussed delay lines made of strings of inductors and capacitors, magnetostrictive delay lines, and delay lines built using rotating disks to transfer data to a read head at one point on the circumference from a write head elsewhere around the circumference.
Genesis in radar
The basic concept of the delay line originated with World War II radar research, as a system to reduce clutter from reflections from the ground and other "fixed" objects.
A radar system consists principally of an antenna, a transmitter, a receiver, and a display. The antenna is connected to the transmitter, which sends out a brief pulse of radio energy before being disconnected again. The antenna is then connected to the receiver, which amplifies any reflected signals and sends them to the display. Objects farther from the radar return echoes later than those closer to the radar, which the display indicates visually as a "blip", which can be measured against a scale.
Non-moving objects at a fixed distance from the antenna always return a signal after the same delay. This would appear as a fixed spot on the display, making detection of other targets in the area more difficult. Early radars simply aimed their beams away from the ground to avoid the majority of this "clutter". This was not an ideal |
https://en.wikipedia.org/wiki/COMAL | COMAL (Common Algorithmic Language) is a computer programming language developed in Denmark by Børge R. Christensen and Benedict Løfstedt and originally released in 1975. COMAL was one of the few structured programming languages that were available for and comfortably usable on 8-bit home computers. It was based on the BASIC programming language, adding multi-line statements and well-defined subroutines among other additions.
"COMAL Kernel Syntax & Semantics" contains the formal definition of the language. Further extensions are common to many implementations.
Design
COMAL was created as a mixture of the prevalent educational programming languages of the time, BASIC, Pascal, and, at least in the Commodore and Compis versions, the turtle graphics of Logo. The language was meant to introduce structured programming elements in an environment where BASIC would normally be used.
With the benefit of hindsight, COMAL looks like a Structured BASIC with reasonably well-written, vendor-neutral, free standards. It is never necessary to use GOTO, and line numbers are purely for editing purposes rather than flow control. Note, however, that the standardised language only supports control structuring, not data structuring such as records or structs (commercial implementations such as UniCOMAL 3 supported this as an extension).
History
COMAL was originally developed in Denmark by mathematics teacher Børge R. Christensen. The school in which he taught had received a Data General Nova 1200 minicomputer in 1972, with the expectation that the school would begin to teach computer science. Christensen, who had taken a short course on the subject at university, was expected to lead the program and to maintain the computer system. The NOVA 1200 was supplied with Data General Extended BASIC, and Christensen quickly became frustrated with the way in which the unstructured language led students to write low-quality code that was difficult to read and thus mark. Christensen met with computer scientist Benedict Løfstedt, who encouraged him to read Systematic Programming, the then-new book on programming language design by Niklaus Wirth, the creator of Pascal. Christensen was impressed, but found that he could not use Pascal directly, as it lacked the interactive shell that made BASIC so easy for students to develop with. Over the next six months Christensen and Løfstedt corresponded by mail to design an alternative to BASIC which retained its interactive elements but added structured elements from Pascal. By 1974 the language's definition was complete but Christensen was unsuccessful in attracting interest from software firms in developing an implementation. He therefore worked with two of his students, to whom he had taught NOVA 1200 machine language, to write an implementation themselves, over another six months. The first proof-of-concept implementation (running a five-line loop) was ready on 5 August 1974, and the first release (on paper tape, as this was wha |
https://en.wikipedia.org/wiki/Alternate%20reality%20game | An alternate reality game (ARG) is an interactive networked narrative that uses the real world as a platform and employs transmedia storytelling to deliver a story that may be altered by players' ideas or actions.
The form is defined by intense player involvement with a story that takes place in real time and evolves according to players' responses. It is shaped by characters that are actively controlled by the game's designers, as opposed to being controlled by an AI as in a computer or console video game. Players interact directly with characters in the game, solve plot-based challenges and puzzles, and collaborate as a community to analyze the story and coordinate real-life and online activities. ARGs generally use multimedia, such as telephones and mail, but rely on the Internet as the central binding medium.
ARGs tend to be free to play, with costs absorbed either through supporting products (e.g., collectible puzzle cards fund Perplex City) or through promotional relationships with existing products (for example, I Love Bees was a promotion for Halo 2, and the Lost Experience and Find 815 promoted the television show Lost). Pay-to-play models exist as well. Later games in the genre have shown an increasing amount of experimentation with new models and sub-genres.
Definition
There is a great deal of debate surrounding the characteristics by which the term "alternate reality game" should be defined. Sean Stacey, the founder of the website Unfiction, has suggested that the best way to define the genre was not to define it, and instead locate each game on three axes (ruleset, authorship and coherence) in a sphere of "chaotic fiction" that would include works such as the Uncyclopedia and street games like SF0 as well.
Several experts, though, point to the use of transmedia, "the aggregate effect of multiple texts/media artifacts," as the defining attribute of ARGs. This prompts the unique collaboration emanating from ARGs as well; Sean Stewart, founder of 42 Entertainment, which has produced various successful ARGs, speaks to how this occurs, noting that "the key thing about an ARG is the way it jumps off of all those platforms. It's a game that's social and comes at you across all the different ways that you connect to the world around you."
Unique terminology
Among the terms essential to understanding discussions about ARGs are:
Puppet-master – A puppet-master or "PM" is an individual involved in designing and/or running an ARG. Puppet-masters are simultaneously allies and adversaries to the player base, creating obstacles and providing resources for overcoming them in the course of telling the game's story. Puppet-masters generally remain behind the curtain while a game is running. The real identity of puppet-masters may or may not be known ahead of time.
The Curtain – The curtain, drawing from the phrase, "Pay no attention to the man behind the curtain," is generally a metaphor for the separation between the puppet-masters and t |
https://en.wikipedia.org/wiki/Government%20Category%20List | The United Kingdom Government Category List (GCL) was a type of controlled vocabulary called a taxonomy, for use in choosing Subject metadata and keywords, primarily for indexing government web pages. The use of GCL terms in the metadata of all government resources is intended to facilitate, encourage and simplify automatic categorisation. The Government Category list was superseded by the Integrated Public Sector Vocabulary (IPSV) during 2006, which incorporates terms from GCL as well as from other controlled vocabularies, and is designed to enable semantic interoperability of systems and web resources across the UK public sector.
External links
UK GovTalk – publishes the GCL documentation
Kablenet – a website that monitors UK eGovernment developments
Support Insight – a website particularly interested in international eGovernment and the adoption of GCL |
https://en.wikipedia.org/wiki/Probabilistic%20Turing%20machine | In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Turing machine can—unlike a deterministic Turing Machine—have stochastic results; that is, on a given input and instruction state machine, it may have different run times, or it may not halt at all; furthermore, it may accept an input in one execution and reject the same input in another execution.
In the case of equal probabilities for the transitions, probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed in the Turing Machine's alphabet (generally, an equal likelihood of writing a "1" or a "0" on to the tape). Another common reformulation is simply a deterministic Turing machine with an added tape full of random bits called the "random tape".
A quantum computer is another model of computation that is inherently probabilistic.
Description
A probabilistic Turing machine is a type of nondeterministic Turing machine in which each nondeterministic step is a "coin-flip", that is, at each step there are two possible next moves and the Turing machine probabilistically selects which move to take.
Formal definition
A probabilistic Turing machine can be formally defined as the 7-tuple M = (Q, Σ, Γ, q0, A, δ1, δ2), where
Q is a finite set of states
Σ is the input alphabet
Γ is a tape alphabet, which includes the blank symbol #
q0 ∈ Q is the initial state
A ⊆ Q is the set of accepting (final) states
δ1 : Q × Γ → Q × Γ × {L, R} is the first probabilistic transition function. L is a movement one cell to the left on the Turing machine's tape and R is a movement one cell to the right.
δ2 : Q × Γ → Q × Γ × {L, R} is the second probabilistic transition function.
At each step, the Turing machine probabilistically applies either the transition function δ1 or the transition function δ2. This choice is made independently of all prior choices. In this way, the process of selecting a transition function at each step of the computation resembles a coin flip.
The probabilistic selection of the transition function at each step introduces error into the Turing machine; that is, strings which the Turing machine is meant to accept may on some occasions be rejected and strings which the Turing machine is meant to reject may on some occasions be accepted. To accommodate this, a language L is said to be recognized with error probability ε by a probabilistic Turing machine M if:
a string w in L implies that Pr[M accepts w] ≥ 1 − ε
a string w not in L implies that Pr[M rejects w] ≥ 1 − ε
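A minimal Python sketch of the coin-flip step rule above (an illustration with hypothetical transition tables; not a complete Turing-machine simulator):

    import random

    # delta1 and delta2 map (state, symbol) -> (new_state, written_symbol, move).
    def step(state, tape, head, delta1, delta2):
        delta = random.choice((delta1, delta2))   # the fair coin flip
        state, write, move = delta[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        return state, head

    # Hypothetical machine that writes a random bit and moves right each step,
    # behaving like the "random tape" reformulation described earlier.
    d1 = {("q0", "#"): ("q0", "0", "R")}
    d2 = {("q0", "#"): ("q0", "1", "R")}
    state, tape, head = "q0", ["#"] * 8, 0
    for _ in range(7):
        state, head = step(state, tape, head, d1, d2)
    print("".join(tape))   # e.g. "0110100#"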
Complexity classes
As a result of the error introduced by utilizing probabilistic coin tosses, the notion of acceptance of a string by a probabilistic Turing machine can be defined in different ways. One such notion that includes several important complexity classes is allowing for an error probability of 1/3. For instance, the complexity class BPP is defined as the class of languages |
https://en.wikipedia.org/wiki/GP | Gp or GP may refer to:
Arts, entertainment, and media
Gaming
Gameplanet (New Zealand), a New Zealand video game community
GamePolitics.com, a blog about the politics of computer and video games
GamePro, a monthly video game magazine
Gold Piece, the currency unit in many role-playing games
Mario Kart Arcade GP, a 2005 arcade game
Music
GP (album), the first solo album by Gram Parsons
General Public, a UK band of the 1980s and 1990s
General pause (G.P.), a stave annotation denoting a rest for the entire orchestra
Government Plates, 2013 studio album by hip-hop band Death Grips
"On GP", a song on The Powers That B by hip-hop band Death Grips
General principle, a term used in hip hop
Other media
GP, a rating for films in the early 1970s, eventually changed to "PG" by the MPAA
G.P., an Australian television medical drama series
Göteborgs-Posten, a daily Swedish newspaper
In business and finance
Terminology
General Partner, one with equal responsibility and liability for an enterprise
Gross profit, an accounting term
General practice, a term used in construction surveying
Businesses and brands
Model GP, for General Purpose tractor, built by Deere & Company
Georgia-Pacific LLC, a manufacturer, and marketer of tissue, packaging, paper, pulp, and building products
Girard-Perregaux, a luxury brand of Swiss watches
Gold Peak, a manufacturer of batteries and portable solar chargers
Google+, a social media service by Google
Grameenphone, a telecommunications service provider in Bangladesh
Jeep, an automobile marque
Mathematics, science, and technology
Biology, biochemistry, and medicine
GP (journal), a journal now known as American Family Physician
Gastroparesis, a medical condition
General practitioner, in medicine, a doctor who treats acute and chronic illnesses and provides preventive care and health education to patients
Glans penis, the sensitive bulbous structure at the distal end of the penis
Globus pallidus, a subcortical structure of the brain
Glycerate 3-phosphate, a 3-carbon molecule
Glycoprotein, proteins that contain oligosaccharide chains (glycans) covalently attached to polypeptide side-chains
Gutta-percha, used in endodontic treatment to obturate root canals
Glecaprevir/pibrentasvir, medication used to treat hepatitis C
Computing
.gp, the Internet top-level domain for Guadeloupe
Genetic programming, an algorithmic technique in computer science
Geometric programming, an algorithmic technique in engineering and optimization
Gigapixel image, a unit of computer graphic resolution
Goal programming, a branch of multiple objective programming
Grandparent post, a reference to the message two levels up in a threaded message board
Guitar Pro, a music composing program
Gurupa, Amazon.com's content delivery infrastructure
Microsoft Dynamics GP, part of Microsoft Dynamics accounting software Great Plains
PARI/GP, a computer algebra system
Weapons
GP-25 or GP-30, two series of Russian under-barrel grenade launchers
Grand |
https://en.wikipedia.org/wiki/MUSIC/SP | MUSIC/SP (Multi-User System for Interactive Computing/System Product; originally McGill University System for Interactive Computing) was developed at McGill University in the 1970s from an early IBM time-sharing system called RAX (Remote Access Computing System).
The system ran on IBM S/360, S/370, and 4300-series mainframe hardware, and offered then-novel features such as file access control and data compression. It was designed to allow academics and students to create and run their programs interactively on terminals, in an era when most mainframe computing was still being done from punched cards. Over the years, development continued and the system evolved to embrace email, the Internet and eventually the World Wide Web. At its peak in the late 1980s, there were over 200 universities, colleges and high school districts that used the system in North and South America, Europe and Asia.
MUSIC was originally designed as a stand-alone operating system but with the advent of IBM's virtual machine facility, VM/370, it became more common to deploy MUSIC as a guest operating system running under VM/370.
History
1966 – IBM Remote Access Computing System (RAX) released.
1972 – McGill's RAX modifications accepted by IBM for distribution as "Installed User Program" under the name of "McGill University System for Interactive Computing" (MUSIC).
1978 – MUSIC 4.0 Major change to file system providing longer file names and advanced access control.
1981 – MUSIC 5.0 Support for IBM 4300 series CPUs and FBA disks.
1985 – MUSIC/SP 1.0 Adopted by IBM as "System Product". Support for virtual memory.
1990 – MUSIC/SP 2.2, described by IBM as having "significant enhancements."
1991 – MUSIC/SP 2.3 Internet support and tree-structured file system.
Over the years the following people contributed to the MUSIC and MUSIC/SP systems.
Roy Miller, Alan Greenberg, Wilf Mandel, Dave Edwards,
Kevin McNamee, Don Farnsworth (IBM), Dean Daniele (IBM), Glen Matthews, Linda Chernabrow,
Frank Pettinicchio, Earl Lindberg, Pierre Goyette, Kathy Wilmot,
Simon Fulleringer, David Thorpe, Gerald Ratzer, Harry Williams (Marist College),
Dave Juraschek (Northern Virginia Community Colleges), Christian Robert (Ecole Polytechnique),
Simone Spiller, Silvino Mezzari, and Mike Short.
Features
File system
The MUSIC/SP file system was unique in a number of respects. There was a single system-wide file index. The owner's userid and the file name were hashed to locate the file in this index, so any file on the system could be located with a single I/O operation. However, this presented a flat file system to the user. It lacked the directory structure commonly offered by DOS, Microsoft Windows and Unix systems. In 1990 a "tree-structured" directory view of the file system was overlaid on this, bringing the system more in line with the file systems that were then available. By default the information stored in the files was compressed. This offered considerable saving in disk space. The |
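The single-lookup index described above can be sketched as follows (a loose Python illustration with a hypothetical hash and slot layout; it does not reproduce MUSIC/SP's actual on-disk structures):

    # Hypothetical sketch: hash (userid, filename) to one slot in a single
    # system-wide index, so any file is found with one index read.
    NUM_SLOTS = 4096

    def index_slot(userid, filename):
        return hash((userid.upper(), filename.upper())) % NUM_SLOTS

    index = {}                                   # slot -> list of entries

    def store(userid, filename, disk_address):
        index.setdefault(index_slot(userid, filename), []).append(
            (userid, filename, disk_address))

    def locate(userid, filename):                # one "I/O": read one slot
        for u, f, addr in index.get(index_slot(userid, filename), ()):
            if (u, f) == (userid, filename):
                return addr
        return None

    store("SMITH", "THESIS", 1234)
    print(locate("SMITH", "THESIS"))             # -> 1234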
https://en.wikipedia.org/wiki/Immutable%20object | In object-oriented (OO) and functional programming, an immutable object (unchangeable object) is an object whose state cannot be modified after it is created. This is in contrast to a mutable object (changeable object), which can be modified after it is created. In some cases, an object is considered immutable even if some internally used attributes change, but the object's state appears unchanging from an external point of view. For example, an object that uses memoization to cache the results of expensive computations could still be considered an immutable object.
Strings and other concrete objects are typically expressed as immutable objects to improve readability and runtime efficiency in OO programming. Immutable objects are also useful because they are inherently thread-safe. Other benefits are that they are simpler to understand and reason about and offer higher security than mutable objects.
Concepts
Immutable variables
In imperative programming, values held in program variables whose content never changes are known as constants to differentiate them from variables that could be altered during execution. Examples include conversion factors from meters to feet, or the value of pi to several decimal places.
Read-only fields may be calculated when the program runs (unlike constants, which are known beforehand), but never change after they are initialized.
Weak vs strong immutability
Sometimes, one talks of certain fields of an object being immutable. This means that there is no way to change those parts of the object state, even though other parts of the object may be changeable (weakly immutable). If all fields are immutable, then the object is immutable. If the whole object cannot be extended by another class, the object is called strongly immutable. This might, for example, help to explicitly enforce certain invariants about certain data in the object staying the same through the lifetime of the object. In some languages, this is done with a keyword (e.g. const in C++, final in Java) that designates the field as immutable. Some languages reverse it: in OCaml, fields of an object or record are by default immutable, and must be explicitly marked with mutable to be so.
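For example, a minimal Python sketch of enforced immutability (using a frozen dataclass, a rough analogue of the const and final mechanisms mentioned above):

    from dataclasses import dataclass, FrozenInstanceError

    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    p = Point(1.0, 2.0)
    try:
        p.x = 3.0                     # any field assignment is rejected
    except FrozenInstanceError:
        print("Point is immutable")   # -> Point is immutable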
References to objects
In most object-oriented languages, objects can be referred to using references. Some examples of such languages are Java, C++, C#, VB.NET, and many scripting languages, such as Perl, Python, and Ruby. In this case, it matters whether the state of an object can vary when objects are shared via references.
Referencing vs copying objects
If an object is known to be immutable, it is preferable to share a reference to it rather than copy the entire object. This conserves memory by preventing data duplication and avoids calls to constructors and destructors; it also results in a potential boost in execution speed.
The reference copying technique is much more difficult to use for mutable objects, because if any user of a mutable object re |
https://en.wikipedia.org/wiki/PmWiki | PmWiki is wiki software written by Patrick R. Michaud in the PHP programming language,
and since January 2009 it has been actively maintained by Petko Yotov under the oversight of Dr. Michaud.
It is free software, licensed under the terms of the GNU General Public License.
Design focus
PmWiki software focuses on ease of use, so people with little IT or wiki experience can put it to use. The software is also designed to be extensible and customizable. The PmWiki philosophy favours writers over readers, doesn't try to replace HTML, and supports collaborative maintenance of public web pages.
Besides the usual collaborative features such as content management and knowledge base, PmWiki has been used by companies or groups as an internal communication platform with task management and meeting archives. It is also used by university and research teams.
The PmWiki wiki markup shares similarities with MediaWiki (used by Wikipedia) and has a large number of features not found in other wiki engines; however, its primary goal is to help with the collaborative maintenance of websites. The PmWiki markup engine is highly customizable, allowing markup rules to be added, modified or disabled, and it can support other markup languages. As an example, the Creole specifications can be enabled.
Features
Content storage
PmWiki uses regular files to store content. Each page of the wiki is stored in its own file on the web server. Pages are stored in ASCII or Unicode format and may be edited directly by the wiki administrator. According to the author, "For the standard operations (view, edit, page revisions), holding the information in flat files is clearly faster than accessing them in a database..."
PmWiki is designed to be able to store and retrieve the pages' text and metadata in various systems and formats. By default, it does not use a database. However, through plugins, PmWiki can utilize MySQL or SQLite databases for data storage.
PmWiki supports "attachments" (uploads: images or other files) to its wiki pages. The uploads can be attached to a group of pages (default), individually to each page, or to the whole wiki, depending on the content needs and structure. There are PmWiki recipes allowing easier management of the uploaded files, e.g. deletion or thumbnail/gallery creation.
Wiki structure
In PmWiki, wiki pages are contained within "wiki groups" (or "namespaces"). Each wiki group can have its own configuration options, plug-ins, access control, skin, sidebar (menu), and language of the content and interface.
By default, PmWiki allows exactly one hierarchical level of the pages ("WikiGroup/WikiPage"), but through recipes, it is possible to have a flat structure (no wiki groups), multiple nested groups, or sub-pages.
Special wiki groups are "PmWiki", "Site", "SiteAdmin", and "Category", which contain the documentation and some configuration templates.
Links to other pages on PmWiki are written normally, with double square brackets like MediaWiki, a |
https://en.wikipedia.org/wiki/Imperative%20programming | In computer science, imperative programming is a programming paradigm of software that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates step by step, rather than on high-level descriptions of its expected results.
The term is often used in contrast to declarative programming, which focuses on what the program should accomplish without specifying all the details of how the program should achieve the result.
Procedural programming
Procedural programming is a type of imperative programming in which the program is built from one or more procedures (also termed subroutines or functions). The terms are often used as synonyms, but the use of procedures has a dramatic effect on how imperative programs appear and how they are constructed. Heavy procedural programming, in which state changes are localized to procedures or restricted to explicit arguments and returns from procedures, is a form of structured programming. Since the 1960s, structured programming and modular programming in general have been promoted as techniques to improve the maintainability and overall quality of imperative programs. The concepts behind object-oriented programming attempt to extend this approach.
Procedural programming could be considered a step toward declarative programming. A programmer can often tell, simply by looking at the names, arguments, and return types of procedures (and related comments), what a particular procedure is supposed to do, without necessarily looking at the details of how it achieves its result. At the same time, a complete program is still imperative since it fixes the statements to be executed and their order of execution to a large extent.
Rationale and foundations of imperative programming
The programming paradigm used to build programs for almost all computers typically follows an imperative model. Digital computer hardware is designed to execute machine code, which is native to the computer and is usually written in the imperative style, although low-level compilers and interpreters using other paradigms exist for some architectures such as Lisp machines.
From this low-level perspective, the program state is defined by the contents of memory, and the statements are instructions in the native machine language of the computer. Higher-level imperative languages use variables and more complex statements, but still follow the same paradigm. Recipes and process checklists, while not computer programs, are also familiar concepts that are similar in style to imperative programming; each step is an instruction, and the physical world holds the state. Since the basic ideas of imperative programming are both conceptually familiar and directly embodied in the hardware, most computer languages are in the imperative style.
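As a small illustration (not from the article), the fragment below is imperative in exactly this sense: each statement is a command that updates state held in a variable.

#include <cstdio>

int main() {
    int total = 0;                       // state lives in a variable
    for (int i = 1; i <= 10; ++i) {
        total += i;                      // each statement mutates that state
    }
    std::printf("total = %d\n", total);  // prints total = 55
    return 0;
}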
Assign |
https://en.wikipedia.org/wiki/Imperative | Imperative may refer to:
Imperative mood, a grammatical mood (or mode) expressing commands, direct requests, and prohibitions
Imperative programming, a programming paradigm in computer science
Imperative logic
Imperative (film), a 1982 German drama film
In philosophy
Moral imperative, a philosophical concept relating to obligation
Categorical imperative, central philosophical concept in the moral philosophy of Immanuel Kant
Hypothetical imperative, introduced by Immanuel Kant as a commandment of reason that applies only conditionally |
https://en.wikipedia.org/wiki/Programming%20style | Programming style, also known as code style, is a set of rules or guidelines used when writing the source code for a computer program. It is often claimed that following a particular programming style will help programmers read and understand source code conforming to the style, and help to avoid introducing errors.
A classic work on the subject was The Elements of Programming Style, written in the 1970s, and illustrated with examples from the Fortran and PL/I languages prevalent at the time.
The programming style used in a particular program may be derived from the coding conventions of a company or other computing organization, as well as the preferences of the author of the code. Programming styles are often designed for a specific programming language (or language family): style considered good in C source code may not be appropriate for BASIC source code, etc. However, some rules are commonly applied to many languages.
Elements of good style
Good style is a subjective matter, and is difficult to define. However, there are several elements common to a large number of programming styles. The issues usually considered as part of programming style include the layout of the source code, including indentation; the use of white space around operators and keywords; the capitalization or otherwise of keywords and variable names; the style and spelling of user-defined identifiers, such as function, procedure and variable names; and the use and style of comments.
Code appearance
Programming styles commonly deal with the visual appearance of source code, with the goal of readability. Software has long been available that formats source code automatically, leaving coders to concentrate on naming, logic, and higher techniques. As a practical point, using a computer to format source code saves time, and it is possible to then enforce company-wide standards without debates.
Indentation
Indentation styles assist in identifying control flow and blocks of code. In some programming languages, indentation is used to delimit logical blocks of code; correct indentation in these cases is more than a matter of style. In other languages, indentation and white space do not affect function, although logical and consistent indentation makes code more readable. Compare:
if (hours < 24 && minutes < 60 && seconds < 60) {
return true;
} else {
return false;
}
or
if (hours < 24 && minutes < 60 && seconds < 60)
{
return true;
}
else
{
return false;
}
with something like
if ( hours < 24
&& minutes < 60
&& seconds < 60
)
{return true
;} else
{return false
;}
The first two examples are probably much easier to read because they are indented in an established way (a "hanging paragraph" style). This indentation style is especially useful when dealing with multiple nested constructs.
ModuLiq
The ModuLiq Zero Indentation Style groups with carriage returns rather than indentations. Compare all of the above to:
if (hours < 24 && minutes |
https://en.wikipedia.org/wiki/SILLIAC | The SILLIAC (Sydney version of the Illinois Automatic Computer, i.e. the Sydney ILLIAC), an early computer built by the University of Sydney, Australia, was based on the ILLIAC and ORDVAC computers developed at the University of Illinois.
Like other early computers, SILLIAC was physically large. The computer itself was a single large cabinet 2.5 m high, 3 m wide and 0.6 m deep in one room. Its power supply occupied a second room and air conditioning required an additional room in the basement.
It ran until May 17, 1968, when it was replaced by a faster and bigger machine. Although it was then broken up, some pieces of SILLIAC are at the Powerhouse Museum and others are displayed at Sydney University.
History
SILLIAC had its genesis in late 1953 when Harry Messel, the dynamic new head of the School of Physics, and John Blatt, a newly arrived researcher, both independently realised that the School needed an electronic computer as a tool for theoretical physics. Whilst the first computer in the southern hemisphere, the CSIR Mk 1, was already running elsewhere on the University of Sydney grounds, there were several serious impediments to its use by the School of Physics: the CSIR Mk 1 was fully occupied with CSIR research, John Blatt found its staff very unhelpful, and, as a serial-architecture computer, it was far too slow for the sort of problems that Blatt and Messel envisaged. The solution was for the School to build its own computer.
Rather than design a computer from scratch, Blatt and Messel chose to copy the design of the ILLIAC for which the University of Illinois were happy to provide plans and assistance. John Algie, then maintenance engineer for CSIRAC, estimated the cost at AU£35,200, which was approximately ten times the cost of a Sydney suburban house at the time. Based on this, a decision to proceed was made at the end of 1953. A mutual friend introduced Messel to Adolph Basser, who donated AU£50,000 towards the computer. SILLIAC's eventual cost was AU£75,000.
In July 1954, Standard Telephones and Cables was contracted to build the computer, with testing and installation performed by technicians within the School of Physics.
SILLIAC's first scientific computation was carried out by PhD student Bob May (later Robert May, Baron May of Oxford) in June 1956, after self tests had been completed successfully. Users were provided with regular access from July 9, with the official opening conducted on September 12.
Barry de Ferranti, a pioneer involved in the construction of SILLIAC described the main cabinet of the computer as about 2 metres high, 1 metre deep and 5 metres long with glass panels at the front and light switches that indicated what was going on inside.
SILLIAC has since been broken up into pieces, with parts of it placed on display in the Chau Chak Wing Museum, which opened in November 2020.
Hardware specifications
Parallel, asynchro |
https://en.wikipedia.org/wiki/Object%20slicing | In C++ programming, object slicing occurs when an object of a subclass type is copied to an object of superclass type: the superclass copy will not have any of the member variables or member functions defined in the subclass. These variables and functions have, in effect, been "sliced off".
More subtly, object slicing can likewise occur when an object of a subclass type is copied to an object of the same type by the superclass's assignment operator, in which case some of the target object's member variables will retain their original values instead of getting copied over from the source object.
This issue is not inherently unique to C++, but it does not occur naturally in most other object-oriented languages — not even in C++'s relatives such as D, Java, and C# — because copying of objects is not a basic operation in those languages.
Instead, those languages prefer to manipulate objects via implicit references, such that only copying the reference is a basic operation.
In C++, by contrast, objects are copied automatically whenever a function takes an object argument by value or returns an object by value.
Additionally, due to the lack of garbage collection in C++, programs will frequently copy an object whenever the ownership and lifetime of a single shared object would be unclear. For example, inserting an object into a standard library collection (such as a std::vector) actually involves making and inserting a copy into the collection.
Example
struct A
{
A(int a) : a_var(a) {}
int a_var;
};
struct B : public A
{
B(int a, int b) : A(a), b_var(b) {}
int b_var;
};
B &getB()
{
static B b(1, 2);
return b;
}
int main()
{
// Normal assignment by value to a
A a(3);
// a.a_var == 3
a = getB();
// a.a_var == 1, b.b_var not copied to a
B b2(3, 4);
// b2.a_var == 3, b2.b_var == 4
A &a2 = b2;
// Partial assignment by value through reference to b2
a2 = getB();
// b2.a_var == 1, b2.b_var == 4!
return 0;
}
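A common defensive idiom, sketched below with assumed names (it is not part of the example above), is to delete the base class's copy operations so that accidental slicing becomes a compile-time error, and to pass polymorphic objects by reference instead.

// Sketch: a non-copyable base class turns slicing copies into compile errors.
struct Base {
    Base() = default;
    Base(const Base&) = delete;             // no slicing copy-construction
    Base& operator=(const Base&) = delete;  // no slicing assignment
    virtual ~Base() = default;
    int base_var = 0;
};

struct Derived : Base {
    int derived_var = 0;
};

int inspect(const Base& b) { return b.base_var; }  // reference: no copy made

int main() {
    Derived d;
    // Base b = d;      // would not compile: copy constructor is deleted
    return inspect(d);  // a Base& binds to the Derived object intact
}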
See also
Diamond problem
External links
What is the slicing problem in C++?
geeksforgeeks.org/object-slicing-in-c/
learncpp.com/cpp-tutorial/121-pointers-and-references-to-the-base-class-of-derived-objects/
Object-oriented programming
Articles with example C++ code
C++ |
https://en.wikipedia.org/wiki/Template%20metaprogramming | Template metaprogramming (TMP) is a metaprogramming technique in which templates are used by a compiler to generate temporary source code, which is merged by the compiler with the rest of the source code and then compiled. The output of these templates can include compile-time constants, data structures, and complete functions. The use of templates can be thought of as compile-time polymorphism. The technique is used by a number of languages, the best-known being C++, but also Curl, D, Nim, and XL.
Template metaprogramming was, in a sense, discovered accidentally.
Some other languages support similar, if not more powerful, compile-time facilities (such as Lisp macros), but those are outside the scope of this article.
Components of template metaprogramming
The use of templates as a metaprogramming technique requires two distinct operations: a template must be defined, and a defined template must be instantiated. The generic form of the generated source code is described in the template definition, and when the template is instantiated, the generic form in the template is used to generate a specific set of source code.
Template metaprogramming is Turing-complete, meaning that any computation expressible by a computer program can be computed, in some form, by a template metaprogram.
Templates are different from macros. A macro is a piece of code that executes at compile time and either performs textual manipulation of code to be compiled (e.g. C++ macros) or manipulates the abstract syntax tree being produced by the compiler (e.g. Rust or Lisp macros). Textual macros are notably more independent of the syntax of the language being manipulated, as they merely change the in-memory text of the source code right before compilation.
Template metaprograms have no mutable variables—that is, no variable can change value once it has been initialized—so template metaprogramming can be seen as a form of functional programming. In fact, many template implementations implement flow control only through recursion, as seen in the example below.
Using template metaprogramming
Though the syntax of template metaprogramming is usually very different from the programming language it is used with, it has practical uses. Some common reasons to use templates are to implement generic programming (avoiding sections of code which are similar except for some minor variations) or to perform automatic compile-time optimization such as doing something once at compile time rather than every time the program is run — for instance, by having the compiler unroll loops to eliminate jumps and loop count decrements whenever the program is executed.
Compile-time class generation
What exactly "programming at compile-time" means can be illustrated with an example of a factorial function, which in non-template C++ can be written using recursion as follows:
unsigned factorial(unsigned n) {
return n == 0 ? 1 : n * factorial(n - 1);
}
// Usage examples:
// factorial(0) woul |
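The listing above is cut off; for comparison, the compile-time counterpart that the article builds toward is the classic template-recursion idiom, sketched here (it may differ in detail from the article's own version).

// Compile-time factorial via template recursion (a sketch of the classic idiom).
template <unsigned N>
struct Factorial {
    static constexpr unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {  // explicit specialization: the recursion's base case
    static constexpr unsigned value = 1;
};

static_assert(Factorial<5>::value == 120, "evaluated entirely at compile time");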
https://en.wikipedia.org/wiki/CinePaint | CinePaint is a free and open source computer program for painting and retouching bitmap frames of films. It is a fork of version 1.0.4 of the GNU Image Manipulation Program (GIMP). It enjoyed some success as one of the earliest open source tools developed for feature motion picture visual effects and animation work.
The main reason for this adoption over mainline GIMP was its support for high bit depths (greater than 8-bits per channel) which can be required for film work. The mainline GIMP project later added high bit depths in GIMP 2.9.2, released November 2015. It is free software under the GPL-2.0-or-later. In 2018, a post titled "CinePaint 2.0 Making Progress" announced progress, but version 2.0 has not been released as of 2022.
Main features
Features that set CinePaint apart from its photo-editing predecessor include the frame manager, onion skinning, and the ability to work with 16-bit and floating point pixels for high-dynamic-range imaging (HDR). CinePaint supports a 16-bit color managed workflow for photographers and printers, including CIE*Lab and CMYK editing. It supports the Cineon, DPX, and OpenEXR image file formats. HDR creation from bracketed exposures is easy.
CinePaint is a professional open-source raster graphics editor, not a video editor. Its per-channel color engine core supports 8-bit, 16-bit, and 32-bit depths. The image formats it supports include BMP, CIN, DPX, EXR, GIF, JPEG, OpenEXR, PNG, TIFF, and XCF.
CinePaint is available for UNIX and Unix-like OSes, including Linux, Mac OS X, IRIX, FreeBSD and NetBSD. Its main competitors are the mainline GIMP and Adobe Photoshop, although the latter is only available for Mac OS X and Microsoft Windows. Glasgow, a completely new code architecture for CinePaint, was intended to make a new Windows version possible; the FLTK-based Glasgow effort appears to have stalled.
CinePaint version 1.4.4 appeared on SourceForge on 6 May 2021, followed by CinePaint 1.4.5 on 30 May 2021.
Movies
Examples of the software's application in the movie industry include:
Elf (2003)
Looney Tunes: Back in Action (2003)
League of Extraordinary Gentlemen (2003)
Duplex (2003)
The Last Samurai (2003)
Showtime (2002)
Blue Crush (2002)
2 Fast 2 Furious (2003)
The Harry Potter series
Cats & Dogs (2001)
Dr. Dolittle 2 (2001)
Little Nicky (2000)
The Grinch (2000)
The 6th Day (2000)
Stuart Little (1999)
Planet of the Apes (2001)
Stuart Little 2 (2002)
Spider-Man (2002)
Under its former name Film Gimp, CinePaint was used for films such as Scooby-Doo (2002), Harry Potter and the Philosopher's Stone (2001), The Last Samurai (2003) and Stuart Little (1999).
See also
Comparison of raster graphics editors
References
External links
Sourceforge project site
CinePaint Wiki and downloads
16-bit imaging. From digital camera to print, a colour management tutorial
Basic color management for X (linux |
https://en.wikipedia.org/wiki/Markov%20algorithm | In theoretical computer science, a Markov algorithm is a string rewriting system that uses grammar-like rules to operate on strings of symbols. Markov algorithms have been shown to be Turing-complete, which means that they are suitable as a general model of computation and can represent any mathematical expression from its simple notation. Markov algorithms are named after the Soviet mathematician Andrey Markov, Jr.
Refal is a programming language based on Markov algorithms.
Description
Normal algorithms are verbal, that is, they are intended to be applied to strings over various alphabets.
The definition of any normal algorithm consists of two parts: an alphabet, which is a set of symbols, and a scheme. The algorithm is applied to strings of symbols of the alphabet. The scheme is a finite ordered set of substitution formulas. Each formula can be either simple or final. Simple substitution formulas are represented by strings of the form L → D, where L and D are two arbitrary strings in the alphabet. Similarly, final substitution formulas are represented by strings of the form L → ·D.
Here is an example of a normal algorithm scheme in the five-letter alphabet :
The process of applying the normal algorithm to an arbitrary string V in the alphabet of this algorithm is a discrete sequence of elementary steps, consisting of the following. Let us assume that V′ is the word obtained in the previous step of the algorithm (or the original word V, if the current step is the first). If among the substitution formulas there is no left-hand side that occurs in V′, then the algorithm terminates, and the result of its work is considered to be the string V′. Otherwise, the first substitution formula whose left side occurs in V′ is selected. If this formula is a final one, L → ·D, then out of all possible representations of V′ in the form RLS (where R and S are arbitrary, possibly empty, strings) the one with the shortest R is chosen; the algorithm then terminates, and the result of its work is considered to be RDS. If, however, the formula is a simple one, L → D, then out of all possible representations of V′ in the form RLS the one with the shortest R is likewise chosen, after which the string RDS is considered to be the result of the current step, subject to further processing in the next step.
For example, applying the algorithm described above to a word yields a sequence of intermediate words, each produced from the previous one by a single substitution, after which the algorithm stops with the final result.
For other examples, see below.
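The rewriting loop described above is short enough to sketch in code. The following is an illustrative interpreter (the names and the toy scheme are assumptions, not from the article): it repeatedly applies the first matching formula at its leftmost occurrence, and stops on a final formula or when no left side occurs.

#include <iostream>
#include <string>
#include <vector>

// Illustrative normal-algorithm interpreter (not from the article).
struct Rule {
    std::string lhs, rhs;
    bool final;  // a final formula (L -> .D) stops the algorithm after applying
};

std::string run(std::string word, const std::vector<Rule>& scheme) {
    for (;;) {
        bool applied = false;
        for (const Rule& r : scheme) {           // first matching formula wins
            std::size_t pos = word.find(r.lhs);  // leftmost occurrence = shortest R
            if (pos != std::string::npos) {
                word.replace(pos, r.lhs.size(), r.rhs);
                if (r.final) return word;        // final formula terminates the run
                applied = true;
                break;                           // restart scanning from formula 1
            }
        }
        if (!applied) return word;               // no left side occurs: stop
    }
}

int main() {
    // Toy scheme: "ab" -> "ba" moves every 'a' rightward past every 'b'.
    std::vector<Rule> scheme = {{"ab", "ba", false}};
    std::cout << run("babab", scheme) << '\n';   // prints "bbbaa"
}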
Any normal algorithm is equivalent to some Turing machine, and vice versa: any Turing machine is equivalent to some normal algorithm. A version of the Church–Turing thesis formulated in relation to the normal algorithm is called the "principle of normalization."
Normal algorithms have proved to be a convenient means for the construction of many sections of constructive mathematics. Moreover, inherent in the definition of a normal algorithm are a number |
https://en.wikipedia.org/wiki/IEEE%20802.1 | IEEE 802.1 is a working group of the IEEE 802 project of the IEEE Standards Association.
It is concerned with:
802 LAN/MAN architecture
internetworking among 802 LANs, MANs and wide area networks
802 Link Security
802 overall network management
protocol layers above the MAC and LLC layers
LAN/MAN bridging and management, covering management and the lower sub-layers of OSI Layer 2.
IEEE 802.1 standards
The IEEE 802 LAN/MAN Standards Committee makes current standards freely available, after a six-month delay, through their Get IEEE 802.1 program.
Other recent 802.1 standards are available through the IEEE for a fee.
References
External links
802.1 Working group web site
802.1 WG - Project Authorization Requests
IEEE 802
Working groups |
https://en.wikipedia.org/wiki/Maturity | Maturity or immaturity may refer to:
Adulthood or age of majority
Maturity model
Capability Maturity Model, in software engineering, a model representing the degree of formality and optimization of processes in an organization
Developmental age, the age of an embryo as measured from the point of fertilization
Mature technology, a technology that has been in use and development for long enough that most of its initial problems have been overcome
Maturity (finance), indicating the final date for payment of principal and interest
Maturity (geology), rock, source rock, and hydrocarbon generation
Maturity (psychological), the attainment of one's final level of psychological functioning and the integration of one's personality into an organized whole
Maturity (sedimentology), the proximity of a sedimentary deposit to its source
Sexual maturity, the stage when an organism can reproduce, though this is distinct from adulthood
See also
Evolution
Maturation (disambiguation)
Mature (disambiguation) |
https://en.wikipedia.org/wiki/Laptop | A laptop computer or notebook computer, also known as a laptop or notebook for short, is a small, portable personal computer (PC). Laptops typically have a clamshell form factor with a flat panel screen on the inside of the upper lid and an alphanumeric keyboard and pointing device (such as a trackpad and/or trackpoint) on the inside of the lower lid, although 2-in-1 PCs with a detachable keyboard are often marketed as laptops or as having a "laptop mode". Most of the computer's internal hardware is fitted inside the lower lid enclosure under the keyboard, although many laptops have a built-in webcam at the top of the screen and some modern ones even feature a touch-screen display. In most cases, unlike tablet computers which run on mobile operating systems, laptops tend to run on desktop operating systems which have been traditionally associated with desktop computers.
Laptops run on both an AC power supply and a rechargeable battery pack and can be folded shut for convenient storage and transportation, making them suitable for mobile use. Today, laptops are used in a variety of settings, such as at work (especially on business trips), in education, for playing games, web browsing, for personal multimedia, and for general home computer use.
The names "laptop" and "notebook" refer to the fact that the computer can be practically placed on (or on top of) the user's lap and can be used similarly to a notebook. As of 2022, in American English, the terms "laptop" and "notebook" are used interchangeably; in other dialects of English, one or the other may be preferred. Although the term "notebook" originally referred to a specific size of laptop (originally smaller and lighter than mainstream laptops of the time), the term has come to mean the same thing and no longer refers to any specific size.
Laptops combine many of the input/output components and capabilities of a desktop computer into a single unit, including a display screen, small speakers, a keyboard, and a pointing device (such as a touch pad or pointing stick). Most modern laptops include a built-in webcam and microphone, and many also have a touchscreen. Laptops can be powered by an internal battery or an external power supply by using an AC adapter. Hardware specifications may vary significantly between different types, models, and price points.
Design elements, form factors, and construction can also vary significantly between models depending on the intended use. Examples of specialized models of laptops include rugged notebooks for use in construction or military applications, as well as low-production-cost laptops such as those from the One Laptop per Child (OLPC) organization, which incorporate features like solar charging and semi-flexible components not found on most laptop computers. Portable computers, which later developed into modern laptops, were originally considered to be a small niche market, mostly for specialized field applications, such as |
https://en.wikipedia.org/wiki/Molecular%20dynamics | Molecular dynamics (MD) is a computer simulation method for analyzing the physical movements of atoms and molecules. The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamic "evolution" of the system. In the most common version, the trajectories of atoms and molecules are determined by numerically solving Newton's equations of motion for a system of interacting particles, where forces between the particles and their potential energies are often calculated using interatomic potentials or molecular mechanical force fields. The method is applied mostly in chemical physics, materials science, and biophysics.
Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically; MD simulation circumvents this problem by using numerical methods. However, long MD simulations are mathematically ill-conditioned, generating cumulative errors in numerical integration that can be minimized with proper selection of algorithms and parameters, but not eliminated.
For systems that obey the ergodic hypothesis, the evolution of one molecular dynamics simulation may be used to determine the macroscopic thermodynamic properties of the system: the time averages of an ergodic system correspond to microcanonical ensemble averages. MD has also been termed "statistical mechanics by numbers" and "Laplace's vision of Newtonian mechanics" of predicting the future by animating nature's forces and allowing insight into molecular motion on an atomic scale.
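As a concrete illustration of "numerically solving Newton's equations of motion", the sketch below integrates a single 1D particle in a harmonic potential with the velocity Verlet scheme; the potential and parameters are placeholder assumptions standing in for a real force field.

#include <cstdio>

// Sketch: velocity Verlet for one 1D particle in a harmonic potential
// (a stand-in for a real interatomic force field).
int main() {
    const double k = 1.0, m = 1.0, dt = 0.01;  // spring constant, mass, timestep
    double x = 1.0, v = 0.0;                   // initial position and velocity
    double a = -k * x / m;                     // acceleration a = F/m
    for (int step = 0; step < 1000; ++step) {
        x += v * dt + 0.5 * a * dt * dt;       // position update
        double a_new = -k * x / m;             // force at the new position
        v += 0.5 * (a + a_new) * dt;           // velocity update with averaged a
        a = a_new;
    }
    std::printf("x = %.4f  v = %.4f\n", x, v); // energy x*x + v*v stays near 1
    return 0;
}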
History
MD was originally developed in the early 1950s, following the earlier successes with Monte Carlo simulations, which themselves date back to the eighteenth century, in Buffon's needle problem for example, but was popularized for statistical mechanics at Los Alamos National Laboratory by Rosenbluth and Metropolis in what is known today as the Metropolis–Hastings algorithm. Interest in the time evolution of N-body systems dates much earlier, to the seventeenth century, beginning with Newton, and continued into the following century largely with a focus on celestial mechanics and issues such as the stability of the solar system. Many of the numerical methods used today were developed during this time period, which predates the use of computers; for example, the most common integration algorithm used today, the Verlet integration algorithm, was used as early as 1791 by Jean Baptiste Joseph Delambre. Numerical calculations with these algorithms can be considered to be MD "by hand."
As early as 1941, integration of the many-body equations of motion was carried out with analog computers. Some undertook the labor-intensive work of modeling atomic motion by constructing physical models, e.g., using macroscopic spheres. The aim was to arrange them in such a way as to replicate the structure of a liquid and use this to examine its behavior. J.D. Bernal said, in 1962: "... I took a number of r |
https://en.wikipedia.org/wiki/Ultrix | Ultrix (officially all-caps ULTRIX) is the brand name of Digital Equipment Corporation's (DEC) discontinued native Unix operating systems for the PDP-11, VAX, MicroVAX and DECstations.
History
The initial development of Unix occurred on DEC equipment, notably DEC PDP-7 and PDP-11 (Programmable Data Processor) systems. Later DEC computers, such as their VAX, also offered Unix. The first port to VAX, UNIX/32V, was finished in 1978, not long after the October 1977 announcement of the VAX, for which – at that time – DEC only supplied its own proprietary operating system, VMS.
DEC's Unix Engineering Group (UEG) was started by Bill Munson with Jerry Brenner and Fred Canter, both from DEC's Customer Service Engineering group, Bill Shannon (from Case Western Reserve University), and Armando Stettner (from Bell Labs). Other later members of UEG included Joel Magid, Bill Doll, and Jim Barclay recruited from DEC's marketing and product management groups.
Under Canter's direction, UEG released V7M, a modified version of Unix 7th Edition (q.v.).
In 1988 The New York Times reported that Ultrix was POSIX compliant.
BSD
Shannon and Stettner worked on low-level CPU and device driver support, initially on UNIX/32V but quickly moving to concentrate on working with the University of California, Berkeley's 4BSD. Berkeley's Bill Joy came to New Hampshire to work with Shannon and Stettner to wrap up a new BSD release. UEG's machine was the first to run the new Unix, labeled 4.5BSD, as was the tape Bill Joy took with him. The thinking was that 5BSD would be the next version, but university lawyers thought it would be better to call it 4.1BSD. After the completion of 4.1BSD, Bill Joy left Berkeley to work at Sun Microsystems. Shannon later moved from New Hampshire to join him. Stettner stayed at DEC and later conceived of and started the Ultrix project.
Shortly after IBM announced plans for a native UNIX product, Stettner and Bill Doll presented plans for DEC to make a native VAX Unix product available to its customers; DEC founder Ken Olsen agreed.
V7M
DEC's first native UNIX product was V7M (for modified) or V7M11 for the PDP-11 and was based on Version 7 Unix from Bell Labs. V7M was developed by DEC's original Unix Engineering Group (UEG); work was done primarily by Fred Canter and Jerry Brenner, with their teammates Stettner, Bill Burns, Mary Anne Cacciola, and Bill Munson. V7M contained many fixes to the kernel including support for separate instruction and data spaces, significant work for hardware error recovery, and many device drivers. Much work was put into producing a release that would reliably bootstrap from many tape drives or disk drives. V7M was well respected in the Unix community. UEG evolved into the group that later developed Ultrix.
First release of Ultrix
The first native VAX UNIX product from DEC was Ultrix-32, based on 4.2BSD with some non-kernel features from System V, and was released in June 1984. Ultrix-32 was primarily the brainchild of |
https://en.wikipedia.org/wiki/Bentley%20Systems | Bentley Systems, Incorporated is an American software development company that develops, manufactures, licenses, sells and supports computer software and services for the design, construction, and operation of infrastructure. The company's software serves the building, plant, civil, and geospatial markets in the areas of architecture, engineering, construction (AEC) and operations. Its software products are used to design, engineer, build, and operate large constructed assets such as roadways, railways, bridges, buildings, industrial plants, power plants, and utility networks. The company re-invests 20% of its revenues in research and development.
Bentley Systems is headquartered in Exton, Pennsylvania, United States, but has development, sales and other departments in over 50 countries. In 2021, the company generated revenue of $1 billion in 186 countries.
Software
Bentley has three principal software product lines: MicroStation, ProjectWise, and AssetWise. Since 2014, some products have been based on the Microsoft Azure cloud computing platform.
History
Keith A. Bentley and Barry J. Bentley founded Bentley Systems in 1984. They introduced the commercial version of PseudoStation in 1985, which allowed users of Intergraph's VAX systems to use low-cost graphics terminals to view and modify the designs on their Intergraph IGDS (Interactive Graphics Design System) installations. Their first product was shown to potential users who were polled as to what they would be willing to pay for it. They averaged the answers, arriving at a price of $7,943. A DOS-based version of MicroStation was introduced in 1986.
Acquisitions
On June 18, 1997, Bentley acquired IdeaGraphix, a developer of MicroStation-based application software for architecture, engineering, and facilities management.
On January 15, 1998, Bentley acquired Jacobus.
On January 2, 2001, Bentley acquired Intergraph's civil engineering, plot-services and raster conversion software businesses.
On October 17, 2001, Bentley Systems bought Geopak design software for road and rail infrastructure.
On July 30, 2002, Bentley Systems acquired Rebis.
On January 6, 2003, Bentley announced it would acquire Infrasoft Corporation.
On August 2, 2004, Bentley acquired Haestad Methods, Inc.
On August 31, 2005, Bentley agreed to acquire netGuru's Research Engineers International (REI) business which included its STAAD structural analysis and design product line.
On June 6, 2006, Bentley acquired GEF-RIS AG.
On January 29, 2007, Bentley acquired KIWI Software.
On May 9, 2007, Bentley acquired TDV GmbH, an analysis and design software provider for bridge engineering.
On May 12, 2007, Bentley acquired C.W. Beilfuss and Associates.
On January 22, 2008, Bentley acquired Hevacomp, Ltd.
On January 24, 2008, Bentley acquired LEAP Software, Inc.
On January 29, 2008, Bentley acquired promis•e product line from ECT International.
On May 28, 2008, Bentley Systems acquired Common Point for mainstream construc |
https://en.wikipedia.org/wiki/Mastercam | Mastercam is a suite of Computer-Aided Manufacturing (CAM) and CAD/CAM software applications developed by CNC Software, LLC. Founded in Massachusetts in 1983, CNC Software is headquartered in Tolland, Connecticut.
Mastercam is CNC Software's main product. It started as a 2D CAM system with CAD tools that let machinists design virtual parts on a computer screen and also guided computer numerical controlled (CNC) machine tools in the manufacture of parts. Mastercam has been ranked by CIMdata Inc. as the most widely used CAM package in the world since 1994.
References
External links
CNC Software/Mastercam
Companies based in Tolland County, Connecticut
Software companies based in Connecticut
Computer-aided manufacturing software
Technology companies of the United States
Tolland, Connecticut
Numerical control
1983 establishments in Massachusetts
Software companies of the United States |
https://en.wikipedia.org/wiki/McAfee | McAfee Corp., formerly known as McAfee Associates, Inc. from 1987 to 1997 and 2004 to 2014, Network Associates Inc. from 1997 to 2004, and Intel Security Group from 2014 to 2017, is an American global computer security software company headquartered in San Jose, California.
The company was purchased by Intel in February 2011, and became part of the Intel Security division. In 2017, Intel had a strategic deal with TPG Capital and converted Intel Security into a joint venture between both companies called McAfee. Thoma Bravo took a minority stake in the new company, and Intel retained a 49% stake. The owners took McAfee public on the NASDAQ in 2020, and in 2022 an investor group led by Advent International Corporation took it private again.
History
1987–1999
The company was founded in 1987 as McAfee Associates, named for its founder John McAfee, who resigned from the company in 1994. McAfee was incorporated in the state of Delaware in 1992. In 1993, McAfee stepped down as head of the company, taking the position of chief technology officer before his eventual resignation. Bill Larson was appointed CEO in his place. Network Associates was formed in 1997 as a merger of McAfee Associates, Network General, PGP Corporation and Helix Software.
In 1996, McAfee acquired Calgary, Alberta, Canada-based FSA Corporation, which helped the company diversify its security offerings away from just client-based antivirus software by bringing on board its own network and desktop encryption technologies.
The FSA team also oversaw the creation of a number of other technologies that were leading edge at the time, including firewall, file encryption, and public key infrastructure product lines. While those product lines had their own individual successes including PowerBroker (written by Dean Huxley and Dan Freedman and now sold by BeyondTrust), the growth of antivirus ware always outpaced the growth of the other security product lines. It is fair to say that McAfee remains best known for its anti-virus and anti-spam products.
Among other companies bought and sold by McAfee is Trusted Information Systems, which developed the Firewall Toolkit, the free software foundation for the commercial Gauntlet Firewall, which was later sold to Secure Computing Corporation. McAfee acquired Trusted Information Systems under the banner of Network Associates in 1998.
McAfee, as a result of brief ownership of TIS Labs/NAI Labs/Network Associates Laboratories/McAfee Research, was highly influential in the world of open-source software, as that organization produced portions of the Linux, FreeBSD, and Darwin operating systems, and developed portions of the BIND name server software and SNMP version 3.
2000–2009
In 2000, McAfee/Network Associates was the leading authority in educating and protecting people against the Love Bug or ILOVEYOU virus, one of the most destructive computer viruses in history.
At the end of 2000, CEO Bill Larson, President Peter Watkins, and CFO Prabh |
https://en.wikipedia.org/wiki/Sybase | Sybase, Inc. was an enterprise software and services company. The company produced software relating to relational databases, with facilities located in California and Massachusetts. Sybase was acquired by SAP in 2010; SAP ceased using the Sybase name in 2014.
History
1984: Robert Epstein, Mark Hoffman, Jane Doughty, and Tom Haggin found Sybase (initially trading as Systemware) in Epstein's home in Berkeley, California. Their first commercial location is half of an office suite at 2107 Dwight Way in Berkeley. They set out to create a relational database management system (RDBMS) that will organize information and make it available to computers within a network.
March 1986: Systemware enters into talks with Microsoft to license Data Server, a database product built to run on UNIX computers. Those talks lead to a product called Ashton-Tate/Microsoft SQL Server 1.0, shipping in May 1989.
May 1991: Systemware changes its name to Sybase.
January 1998: Sybase announces that it has found inconsistencies in profits reported from its Japanese division and will be restating company financial results for the last three quarters of 1997. Sybase determines that the inconsistencies are due to five executives in Sybase's Japanese subsidiary found to have used side letters to artificially inflate the profits from their operations. Following a class-action lawsuit, the five executives involved are fired.
November 1998: John S. Chen is appointed Chairman, CEO, and President.
2007: Sybase crosses the $1 billion revenue mark.
March 2009: Sybase and SAP partner to deliver the SAP Business Suite software to iPhone, Windows Mobile, BlackBerry, and other devices.
May 2009: Sybase begins packaging MicroStrategy business intelligence software with its Sybase IQ server.
September 2009: Sybase and Verizon partner to manage mobility services for enterprises worldwide through Verizon's Managed Mobility Solutions, which uses Sybase's enterprise device management platform.
May 2010: SAP and Sybase, Inc. announce that SAP America, Inc. has signed a definitive merger agreement to acquire Sybase, Inc. for all outstanding shares of Sybase common stock, representing an enterprise value of approximately $5.8 billion.
July 2010: SAP announces it has completed the acquisition of Sybase, Inc., the latter now a wholly owned subsidiary of SAP America.
October 2012: All of Sybase's employees are incorporated into SAP's workforce. On October 30, 2012, SAP announces that Sybase, Inc. President and CEO John S. Chen will be leaving Sybase effective the very next day (October 31, 2012) after leading Sybase for 15 years.
See also
Adaptive Server Enterprise
References
SAP SE acquisitions
Companies based in Alameda County, California
Software companies established in 1984
1984 establishments in California
Defunct software companies of the United States
Dublin, California
Defunct companies based in the San Francisco Bay Area
Software companies based in the San Francisco Bay Area
Data companies
2 |
https://en.wikipedia.org/wiki/Daniel%20J.%20Bernstein | Daniel Julius Bernstein (sometimes known as djb; born October 29, 1971) is an American-German mathematician, cryptologist, and computer scientist. He is a visiting professor at CASA at Ruhr University Bochum, as well as a research professor of Computer Science at the University of Illinois at Chicago. Before this, he was a visiting professor in the department of mathematics and computer science at the Eindhoven University of Technology.
Early life
Bernstein attended Bellport High School, a public high school on Long Island, graduating in 1987 at the age of 15. The same year, he ranked fifth in the Westinghouse Science Talent Search. In 1987 (at the age of 16), he achieved a Top 10 ranking in the William Lowell Putnam Mathematical Competition. Bernstein earned a B.A. in mathematics from New York University (1991) and a Ph.D. in mathematics from the University of California, Berkeley (1995), where he studied under Hendrik Lenstra.
Bernstein v. United States
The export of cryptography from the United States was controlled as a munition starting from the Cold War until recategorization in 1996, with further relaxation in the late 1990s. In 1995, Bernstein brought the court case Bernstein v. United States. The ruling in the case declared that software was protected speech under the First Amendment, which contributed to regulatory changes reducing controls on encryption. Bernstein was originally represented by the Electronic Frontier Foundation. He later represented himself.
Cryptography
Bernstein designed the Salsa20 stream cipher in 2005 and submitted it to eSTREAM for review and possible standardization. He later published the ChaCha20 variant of Salsa in 2008. In 2005, he proposed the elliptic curve Curve25519 as a basis for public-key schemes. He worked as the lead researcher on the Ed25519 version of EdDSA. The algorithms made their way into popular software. For example, since 2014, when OpenSSH is compiled without OpenSSL they power most of its operations, and OpenBSD package signing is based on Ed25519.
Nearly a decade later, Edward Snowden disclosed mass surveillance by the National Security Agency, and researchers discovered a backdoor in the Agency's Dual EC DRBG algorithm. These events raised suspicions of the elliptic curve parameters proposed by NSA and standardized by NIST. Many researchers feared that the NSA had chosen curves that gave them a cryptanalytic advantage. Google selected ChaCha20 along with Bernstein's Poly1305 message authentication code for use in TLS, which is widely used for Internet security. Many protocols based on his works have been adopted by various standards organizations and are used in a variety of applications, such as Apple iOS, the Linux kernel, OpenSSH, and Tor.
In spring 2005, Bernstein taught a course on "high speed cryptography." He introduced new cache attacks against implementations of AES in the same time period.
In April 2008, Bernstein's stream cipher "Salsa20" was selected as a member o |
https://en.wikipedia.org/wiki/Aegis%20Combat%20System | The Aegis Combat System is an American integrated naval weapons system, which uses computers and radars to track and guide weapons to destroy enemy targets. It was developed by the Missile and Surface Radar Division of RCA, and it is now produced by Lockheed Martin.
Initially used by the United States Navy, Aegis is now used also by the Japan Maritime Self-Defense Force, Spanish Navy, Royal Norwegian Navy, Republic of Korea Navy, and Royal Australian Navy, and is planned for use by the Royal Canadian Navy. As of 2022, a total of 110 Aegis-equipped ships have been deployed, and 71 more are planned (see operators).
Aegis BMD (Ballistic Missile Defense) capabilities are being developed as part of the NATO missile defense system.
Etymology
The word "Aegis" is a reference that dates back to Greek mythology, with connotations of a protective shield, as the Aegis was the buckler (shield) of Zeus, worn by Athena.
Overview
The Aegis Combat System (ACS) implements advanced command and control (command and decision, or C&D, in Aegis parlance). It is composed of the Aegis Weapon System (AWS), the fast-reaction component of the Aegis Anti-Aircraft Warfare (AAW) capability, along with the Phalanx Close In Weapon System (CIWS), and the Mark 41 Vertical Launch System. Mk 41 VLS is available in three versions (self-defense, tactical, and strike) that vary in length and in the empty weight of an 8-cell module; the longer strike version can incorporate anti-submarine warfare (ASW) systems and Tomahawk Land Attack Cruise Missiles (TLAM). Shipboard torpedo and naval gunnery systems are also integrated.
AWS, the heart of Aegis, comprises the AN/SPY-1 Radar, MK 99 Fire Control System, Weapon Control System (WCS), the Command and Decision Suite, and the Standard Missile family of weapons; these include the basic RIM-66 Standard, the RIM-156 Standard ER extended range missile, and the newer RIM-161 Standard Missile 3 designed to counter ballistic missile threats. A further SM-2 based weapon, the RIM-174 Standard ERAM (Standard Missile 6), was deployed in 2013. Individual ships may not carry all variants; weapons loads are adjusted to suit the assigned mission profile. The Aegis Combat System is controlled by an advanced, automatic detect-and-track, multi-function three-dimensional passive electronically scanned array radar, the AN/SPY-1. Known as "the Shield of the Fleet", the high-powered (6 megawatt) SPY radar is able to perform search, tracking, and missile guidance functions simultaneously, with a track capacity of well over 100 targets. However, the AN/SPY-1 Radar is mounted lower than the AN/SPS-49 radar system and so has a reduced radar horizon.
The Aegis system communicates with the Standard missiles through a radio frequency (RF) uplink using the AN/SPY-1 radar for mid-course update missile guidance durin |
https://en.wikipedia.org/wiki/ISPF | In computing, Interactive System Productivity Facility (ISPF) is a software product for many historic IBM mainframe operating systems and currently the z/OS and z/VM operating systems that run on IBM mainframes. It includes a screen editor, the user interface of which was emulated by some microcomputer editors sold commercially starting in the late 1980s, including SPF/PC.
ISPF primarily provides an IBM 3270 terminal interface with a set of panels. Each panel may include menus and dialogs to run tools on the underlying environment, e.g., Time Sharing Option (TSO). Generally, these panels just provide a convenient interface to do tasks—most of them execute modules of IBM mainframe utility programs to do the actual work. ISPF is frequently used to manipulate z/OS data sets via its Program Development Facility (ISPF/PDF).
ISPF is user-extensible and it is often used as an application programming interface. Many vendors have created products for z/OS that use the ISPF interface.
An early version was called Structured Programming Facility (SPF) and introduced in SVS and MVS systems in 1974. IBM chose the name because SPF was introduced about the same time as structured programming concepts. In 1979 IBM introduced a new version and a compatible product for CMS under Virtual Machine Facility/370 Release 5.
In 1980 IBM changed its name to System Productivity Facility and offered a version for CMS under VM/SP.
In 1982 IBM changed the name to Interactive System Productivity Facility, split off some facilities into Interactive System Productivity Facility/Program Development Facility (ISPF/PDF) and offered a version for VSE/AF.
In 1984 IBM released ISPF Version 2 and ISPF/PDF Version 2; the VM versions allowed the user to select either the PDF editor or XEDIT.
IBM eventually merged PDF back into the base product.
ISPF can also be run from a z/OS batch job.
ISPF/PDF interactive tools
When a foreground (interactive) TSO user invokes ISPF, it provides a menuing system, normally with an initial display of a Primary Option Menu. This provides access to many useful tools for application development and for administering the z/OS operating system.
Such tools include
Browse - for viewing data sets, partitioned data set (PDS) members, and Unix System Services files.
Edit - for editing data sets, PDS members, and Unix System Services files.
Utilities - for performing data manipulation operations, such as:
Data Set List - which allows the user to list and manipulate (copy, move, rename, print, catalog, delete, etc.) files (termed "data sets" in the z/OS environment).
Member List - for similar manipulations of members of PDSs.
Search facilities for finding modules or text within members or data sets.
Compare facilities for comparing members or data sets.
Library Management, including promoting and demoting program modules.
ISPF as a user interface development environment
Underlying ISPF/PDF is an extensive set of tools that allow application develo |
https://en.wikipedia.org/wiki/Predication%20%28computer%20architecture%29 | In computer architecture, predication is a feature that provides an alternative to conditional transfer of control, as implemented by conditional branch machine instructions. Predication works by having conditional (predicated) non-branch instructions associated with a predicate, a Boolean value used by the instruction to control whether the instruction is allowed to modify the architectural state or not. If the predicate specified in the instruction is true, the instruction modifies the architectural state; otherwise, the architectural state is unchanged. For example, a predicated move instruction (a conditional move) will only modify the destination if the predicate is true. Thus, instead of using a conditional branch to select an instruction or a sequence of instructions to execute based on the predicate that controls whether the branch occurs, the instructions to be executed are associated with that predicate, so that they will be executed, or not executed, based on whether that predicate is true or false.
Vector processors, some SIMD ISAs (such as AVX2 and AVX-512) and GPUs in general make heavy use of predication, applying one bit of a conditional mask vector to the corresponding elements in the vector registers being processed, whereas scalar predication in scalar instruction sets only need the one predicate bit. Where predicate masks become particularly powerful in vector processing is if an array of condition codes, one per vector element, may feed back into predicate masks that are then applied to subsequent vector instructions.
Overview
Most computer programs contain conditional code, which will be executed only under specific conditions depending on factors that cannot be determined beforehand, for example depending on user input. As the majority of processors simply execute the next instruction in a sequence, the traditional solution is to insert branch instructions that allow a program to conditionally branch to a different section of code, thus changing the next step in the sequence. This was sufficient until designers began improving performance by implementing instruction pipelining, a method which is slowed down by branches. For a more thorough description of the problems which arose, and a popular solution, see branch predictor.
Luckily, one of the more common patterns of code that normally relies on branching has a more elegant solution. Consider the following pseudocode:
if condition
{dosomething}
else
{dosomethingelse}
On a system that uses conditional branching, this might translate to machine instructions looking similar to:
branch-if-condition to label1
dosomethingelse
branch-to label2
label1:
dosomething
label2:
...
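For comparison, under a predicated encoding the same logic needs no branches at all. A sketch in the style of the example above (real predicated ISAs, such as ARM or IA-64, each have their own syntax):

(condition)      dosomething
(not condition)  dosomethingelse

Both instructions are fetched, but only the one whose predicate evaluates to true is allowed to modify the architectural state.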
With predication, all possible branch paths are coded inline, but some instructions execute while others do not. The basic idea is that each instruction is associated with a predicate (the word here used similarly to its usage in predicate logic) and that the instruction will only be execut |
https://en.wikipedia.org/wiki/GeoPort | GeoPort is a serial data system used on some models of the Apple Macintosh that could be externally clocked to run at a 2 megabit per second data rate. GeoPort slightly modified the existing Mac serial port pins to allow the computer's internal DSP hardware or software to send data that, when passed to a digital-to-analog converter, emulated various devices such as modems and fax machines. GeoPort could be found on late-model 68K-based machines (the AV series) as well as many pre-USB Power Macintosh models and PiPPiN. Some later Macintosh models also included an internal GeoPort via an internal connector on the Communications Slot. Apple GeoPort technology is now obsolete, and modem support is typically offered through USB.
Background
AppleBus and LocalTalk
Early during the development of the Apple Macintosh, Apple engineers decided to use the Zilog 8530 "Serial Communications Controller" (SCC) for most input/output tasks. The SCC was relatively advanced compared to the more common UARTs of the era, offering a number of high-speed modes and built-in support for error checking and similar duties. The speed of the system was based on an external clock signal sent to it by the host platform, normally up to about 1 Mbit/s, which could be "divided down" to run at slower speeds as low as 300 bit/s. The SCC had two channels, which could be run at different speeds, and even different voltages, to allow communications with a wide variety of devices and interfaces.
Initially the engineers had envisioned using the SCC to support a packet-based protocol known as "AppleBus". AppleBus would allow peripheral devices to be plugged into a daisy-chain configuration in a manner surprisingly similar to the modern Universal Serial Bus. However, as development continued, Apple's networking project, AppleNet, was being canceled due to high costs and a rapidly changing marketplace. Team members working on AppleBus quickly shifted gears, producing the LocalTalk system running on the SCC ports rather than AppleNet's plug-in expansion card.
LocalTalk relied on clocking from the CPU that was divided down to produce an output at roughly 230.4 kbit/s. Nodes on the network remained in sync using clock recovery. This allowed the entire system to be run over a simple three-wire connection, or two wires in the case of PhoneNet. As the ports also include the clock pins, it was possible to override the internal clock signal and run the system at much higher speeds, as was the case for Dayna and Centram products that ran between 750 and 850 kbit/s.
However, as the SCC had only three bytes of buffer space, it was critical that the ports be read as quickly as possible to prevent a buffer overflow and loss of data. This was not an issue for networking protocols, where lost packets are assumed and dealt with in the network stack, but represented a serious problem for RS-232 data which had no internal form of flow control in the data stream. As a result, performance on a Mac Plus w |
https://en.wikipedia.org/wiki/Centrino | Centrino is a brand name of Intel Corporation which represents its Wi-Fi and WiMAX wireless computer networking adapters. Previously the same brand name was used by the company as a platform-marketing initiative. The meaning of the brand name changed on January 7, 2010, and Centrino as a platform brand was succeeded by the Ultrabook.
The old platform-marketing brand name covered a particular combination of mainboard chipset, mobile CPU and wireless network interface in the design of a laptop. Intel claimed that systems equipped with these technologies delivered better performance, longer battery life and broader wireless network interoperability than non-Centrino systems.
The new product line name for Intel wireless products is Intel Centrino Wireless.
Intel Centrino
Notebook implementations
Carmel platform (2003)
Intel used "Carmel" as the codename for the first-generation Centrino platform, introduced in March 2003.
Industry-watchers initially criticized the Carmel platform for its lack of support for IEEE 802.11g, because many independent Wi-Fi chip-makers like Broadcom and Atheros had already started shipping 802.11g products. Intel responded that the IEEE had not finalized the 802.11g standard at the time of Carmel's announcement.
In early 2004, after the finalization of the 802.11g standard, Intel permitted an Intel PRO/Wireless 2200BG to substitute for the 2100. At the same time, they permitted the new Dothan Pentium M to substitute for the Banias Pentium M. Initially, Intel permitted only the 855GM chipset, which did not support external graphics. Later, Intel allowed the 855GME and 855PM chips, which did support external graphics, in Centrino laptops.
Despite criticisms, the Carmel platform won quick acceptance among OEMs and consumers. Carmel could attain or exceed the performance of older Pentium 4-M platforms, while allowing for laptops to operate for 4 to 5 hours on a 48 W-h battery. Carmel also allowed laptop manufacturers to create thinner and lighter laptops because its components did not dissipate much heat, and thus did not require large cooling systems.
Sonoma platform (2005)
Intel used Sonoma as the codename for the second-generation Centrino platform, introduced in January 2005.
The Mobile 915 Express chipset, like its desktop version, supports many new features such as DDR2, PCI Express, Intel High Definition Audio, and SATA. Unfortunately, the introduction of PCI Express and faster Pentium M processors caused laptops built around the Sonoma platform to have a shorter battery life than their Carmel counterparts; Sonoma laptops typically achieve between 3.5 and 4.6 hours of battery life on a 53 W-h battery.
Napa platform (2006)
The codename Napa designates the third-generation Centrino platform, introduced in January 2006 at the Winter Consumer Electronics Show. The platform initially supported Intel Core Duo processors; the newer Core 2 Duo processors were launched and supported on this platform from July 27, 2006 onward |
https://en.wikipedia.org/wiki/AN/USQ-17 | The AN/USQ-17 or Naval Tactical Data System (NTDS) computer, referred to in Sperry Rand documents as the Univac M-460, was Seymour Cray's last design for UNIVAC. UNIVAC later released a commercial version, the UNIVAC 490. That system was later upgraded to a multiprocessor configuration as the 494.
Overview
The machine was the size and shape of a refrigerator, about four feet high (roughly 1.20 meters), with a hinged lid for access. Shortly after completing the prototype design, Cray left to join Control Data Corporation. When the Navy awarded Sperry Rand a US$50 million contract to build the AN/USQ-17, Univac engineers redesigned the entire machine from scratch using silicon transistors. They retained the instruction set, so that programs developed for the original machine would still run on the new one.
As part of the redesign it was decided to improve access, and the second version was designed to stand upright, like an old-fashioned double-door refrigerator, about six feet tall (roughly 1.80 m). This new design was designated the AN/USQ-20.
Instructions were represented as 30-bit words, in the following format:
f 6 bits function code
j 3 bits jump condition designator
k 3 bits partial word designator
b 3 bits which index register to use
y 15 bits operand address in memory
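A sketch in C of unpacking these fields from a word (the bit ordering, with f in the most significant bits and y in the least, is an assumption for illustration; the description above lists the fields but does not state their positions):

#include <stdint.h>

typedef struct {
    unsigned f;  /* 6-bit function code */
    unsigned j;  /* 3-bit jump condition designator */
    unsigned k;  /* 3-bit partial word designator */
    unsigned b;  /* 3-bit index register selector */
    unsigned y;  /* 15-bit operand address */
} Instruction;

Instruction decode(uint32_t word)   /* low 30 bits hold the instruction */
{
    Instruction in;
    in.f = (word >> 24) & 0x3F;     /* bits 29..24 */
    in.j = (word >> 21) & 0x7;      /* bits 23..21 */
    in.k = (word >> 18) & 0x7;      /* bits 20..18 */
    in.b = (word >> 15) & 0x7;      /* bits 17..15 */
    in.y = word & 0x7FFF;           /* bits 14..0  */
    return in;
}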
Numbers were represented as 30-bit words; this word size also allowed for five 6-bit alphanumeric characters per word.
The main memory was 32,768 = 32K words of core memory.
The available processor registers were:
One 30-bit accumulator (A).
One 30-bit Q register (combined with A to give a total of 60 bits for the result of multiplication or the dividend in division).
Seven 15-bit index registers (B1–B7).
The instruction format defined for the AN/USQ-17 marked the beginning of an instruction set which would be carried on, with many changes along the way, into later UNIVAC computers including the UNIVAC 1100/2200 series, which is still in use.
First delivery of NTDS and related U.S. Navy computers
AN/USQ-17, 30 bit, March 1958
CP-642 (also known as AN/USQ-20), 30 bit, 1960
AN/UYK-8, 30 bit, 1967
AN/UYK-7, 32 bit, 1971
AN/UYK-43, 32 bit, 1984
AN/UYK-20, 16 bit, 1973
AN/AYK-14, 16 bit, 1980
AN/UYK-44, 16 bit, 1984
See also
List of UNIVAC products
History of computing hardware
References
External links
“M-460 Computer Characteristics” – PDF ... 32pp (1956)
The Univac M-460 Computer – Paper by J. E. Thornton, M. Macaulay, and D. H. Toth, Remington Rand Univac Division of Sperry Rand (on-line version from Ed Thelen's Antique Computer Home Page)
UNIVAC hardware
Transistorized computers
Military computers
Military electronics of the United States |
https://en.wikipedia.org/wiki/Privoxy | Privoxy is a free non-caching web proxy with filtering capabilities for enhancing privacy, manipulating cookies and modifying web page data and HTTP headers before the page is rendered by the browser. Privoxy is a "privacy enhancing proxy", filtering web pages and removing advertisements. Privoxy can be customized by users, for both stand-alone systems and multi-user networks. It can be chained to other proxies (it is frequently used in combination with Squid, among others) and can be used to bypass Internet censorship.
History
Privoxy is based on the Internet Junkbuster and is released under the GNU General Public License. It runs on Linux, OpenWrt, DD-WRT, Windows, macOS, OS/2, AmigaOS, BeOS, and most flavors of Unix. Almost any Web browser can use it. The software is hosted at SourceForge. Historically, the Tor Project bundled Privoxy with Tor, but this was discontinued in 2010 as the project moved to its own Tor Browser and recommended against external third-party proxies. Privoxy still works if manually configured and is still recommended for third-party non-browser applications which do not natively support SOCKS.
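Chaining is configured with forward rules in Privoxy's main configuration file. A sketch (assuming Tor's SOCKS listener at its default 127.0.0.1:9050, and a hypothetical upstream Squid host; in practice only one rule would apply to a given URL pattern):

# route all requests through a local Tor SOCKS5 listener
forward-socks5t / 127.0.0.1:9050 .
# alternatively, chain to an upstream HTTP proxy such as Squid
forward / squid.internal.example:3128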
Reception
Shashank Sharma of Linux Format rated it 9/10 stars and wrote, "Privoxy is highly customisable, easy to set up, has good documentation and is fun to work with. Use it!" Erez Zukerman of PC World rated it 4/5 stars and called it complicated but powerful. Michelle Delio of Wired.com called it "an outstanding way to protect one's privacy".
See also
Content-control software
Web accelerator which discusses host-based HTTP acceleration
Proxy server which discusses client-side proxies
Reverse proxy which discusses origin-side proxies
Internet Cache Protocol
Polipo, a caching web proxy server
Proxomitron, a similar content-filtering proxy for Windows
References
External links
Internet privacy
Proxy servers
Free network-related software
Free software programmed in C
Cross-platform software
Forward proxy
Reverse proxy
Proxy server software for Linux
Unix network-related software |
https://en.wikipedia.org/wiki/Type%20system | In computer programming, a type system is a logical system comprising a set of rules that assigns a property called a type (for example, integer, floating point, string) to every term (a word, phrase, or other set of symbols). Usually the terms are various language constructs of a computer program, such as variables, expressions, functions, or modules. A type system dictates the operations that can be performed on a term. For variables, the type system determines the allowed values of that term. Type systems formalize and enforce the otherwise implicit categories the programmer uses for algebraic data types, data structures, or other components (e.g. "string", "array of float", "function returning boolean").
Type systems are often specified as part of programming languages and built into interpreters and compilers, although the type system of a language can be extended by optional tools that perform added checks using the language's original type syntax and grammar. The main purpose of a type system in a programming language is to reduce possibilities for bugs in computer programs due to type errors. The type system in question determines what constitutes a type error, but in general, the aim is to prevent operations expecting a certain kind of value from being used with values for which that operation does not make sense (validity errors). Type systems allow defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of both. Type systems have other purposes as well, such as expressing business rules, enabling certain compiler optimizations, allowing for multiple dispatch, and providing a form of documentation.
Usage overview
An example of a simple type system is that of the C language. The portions of a C program are the function definitions. One function is invoked by another function. The interface of a function states the name of the function and a list of parameters that are passed to the function's code. The code of an invoking function states the name of the invoked function, along with the names of variables that hold values to pass to it. During execution, the values are placed into temporary storage, then execution jumps to the code of the invoked function. The invoked function's code accesses the values and makes use of them. If the instructions inside the function are written with the assumption of receiving an integer value, but the calling code passed a floating-point value, then the wrong result will be computed by the invoked function. The C compiler checks the types of the arguments passed to a function when it is called against the types of the parameters declared in the function's definition. If the types do not match, the compiler throws a compile-time error or warning.
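A minimal sketch of that check (assuming a modern C compiler; whether the diagnostic is a warning or a hard error depends on the compiler and its settings):

int square(int n)          /* the prototype declares an int parameter */
{
    return n * n;
}

int main(void)
{
    const char *s = "5";
    /* The argument type (char *) is checked against the declared
       parameter type (int); the mismatch is diagnosed at compile time. */
    return square(s);
}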
A compiler may also use the static type of a value to optimize the storage it |
https://en.wikipedia.org/wiki/String%20literal | A string literal or anonymous string is a literal for a string value in the source code of a computer program. Modern programming languages commonly use a quoted sequence of characters, formally "bracketed delimiters", as in x = "foo", where "foo" is a string literal with value foo. Methods such as escape sequences can be used to avoid the problem of delimiter collision (issues with brackets) and allow the delimiters to be embedded in a string. There are many alternate notations for specifying string literals especially in complicated cases. The exact notation depends on the programming language in question. Nevertheless, there are general guidelines that most modern programming languages follow.
Syntax
Bracketed delimiters
Most modern programming languages use bracket delimiters (also balanced delimiters) to specify string literals. Double quotations are the most common quoting delimiters used:
"Hi There!"
An empty string is written as a pair of quotes with no characters at all in between:
""
Some languages either allow or mandate the use of single quotations instead of double quotations (the string must begin and end with the same kind of quotation mark and the type of quotation mark may or may not give slightly different semantics):
'Hi There!'
These quotation marks are unpaired (the same character is used as an opener and a closer), which is a hangover from typewriter technology, the precursor of the earliest computer input and output devices.
In terms of regular expressions, a basic quoted string literal is given as:
"[^"]*"
This means that a string literal is written as: a quote, followed by zero, one, or more non-quote characters, followed by a quote. In practice this is often complicated by escaping, other delimiters, and excluding newlines.
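A common refinement that accounts for backslash escapes (a sketch; regular-expression dialects differ in detail):

"(\\.|[^"\\])*"

That is: an opening quote; then any number of either an escaped character (a backslash followed by any character) or a character that is neither a quote nor a backslash; then a closing quote.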
Paired delimiters
A number of languages provide for paired delimiters, where the opening and closing delimiters are different. These often allow nested strings, so delimiters can be embedded so long as they are paired, but they still suffer delimiter collision when an unpaired closing delimiter is embedded. Examples include PostScript, which uses parentheses, as in (The quick (brown fox)), and m4, which uses the backtick (`) as the starting delimiter and the apostrophe (') as the ending delimiter. Tcl allows both quotes (for interpolated strings) and braces (for raw strings), as in "The quick brown fox" or {The quick {brown fox}}; this derives from the single quotations in Unix shells and the use of braces in C for compound statements, since blocks of code are in Tcl syntactically the same thing as string literals – that the delimiters are paired is essential for making this feasible.
The Unicode character set includes paired (separate opening and closing) versions of both single and double quotations:
“Hi There!”
‘Hi There!’
„Hi There!“
«Hi There!»
These, however, are rarely used, as many programming languages will not register them (one exception is |
https://en.wikipedia.org/wiki/Kahlua%20%28disambiguation%29 | Kahlua may refer to:
Kahlúa, a Mexican coffee-flavored liqueur
Kahlua (software), an implementation of the Lua programming language for Java ME
See also
Kailua (disambiguation) |
https://en.wikipedia.org/wiki/Type | Type may refer to:
Science and technology
Computing
Typing, producing text via a keyboard, typewriter, etc.
Data type, collection of values used for computations.
File type
TYPE (DOS command), a command to display contents of a file.
Type (Unix), a command in POSIX shells that gives information about commands.
Type safety, the extent to which a programming language discourages or prevents type errors.
Type system, defines a programming language's response to data types.
Mathematics
Type (model theory)
Type theory, basis for the study of type systems
Arity or type, the number of operands a function takes
Type, any proposition or set in the intuitionistic type theory
Type, of an entire function
Exponential type
Biology
Type (biology), which fixes a scientific name to a taxon
Dog type, categorization by use or function of domestic dogs
Lettering
Type is a design concept for lettering used in typography which helped bring about modern textual printing in the publishing industry
Type can refer to a font style, e.g., "italic type"
Movable type, in letterpress printing
Sort (typesetting), in letterpress printing
Typesetting, the composition of text by means of arranging types
Typeface, the overall design of lettering used in a collection of related fonts
Type design, the art and process of designing typefaces
Type foundry, a company that designs or distributes typefaces
Typewriter, a mechanical or electromechanical machine for writing characters similar to those produced by a printer's movable type
Sociology
Ideal type
Normal type
Typification
Other uses
Type (acting), a way of characterizing an actor by the sort of role they are well-suited for or fit into easily, or by their performance style
Type & antitype, in Typology, in Christian theology and Biblical exegesis
"Type" (song), a 1990 song by the band Living Colour
Type (designation), a model numbering system used for vehicles or military equipment
Type Museum, museum about the above
Architectural type, classification of architecture by functional types (houses, institutions), morphological types, or historical types (see Architectural style subcategories)
U.S. Navy type commands, senior commands for the specific "type" of weapon system (i.e., naval aviation, submarine warfare, surface warships) employed
Type of Constans, a 648 edict issued by Byzantine Emperor Constans II
Type-token distinction, in logic, linguistics, and computer programming
See also
Typology (disambiguation), the study of types
Categorization
Kind (disambiguation) |
https://en.wikipedia.org/wiki/KDevelop | KDevelop is a free and open-source integrated development environment (IDE) for Unix-like computer operating systems and Windows. It provides editing, navigation and debugging features for several programming languages, and integration with build automation and version-control systems, using a plugin-based architecture.
KDevelop 5 has parser backends for C, C++, Objective-C, OpenCL and JavaScript/QML, with plugins supporting PHP, Python 3 and Ruby. Basic syntax highlighting and code folding are available for dozens of other source-code and markup formats, but without semantic analysis.
KDevelop is part of the KDE project, and is based on KDE Frameworks and Qt. The C/C++ backend uses Clang to provide accurate information even for very complex codebases.
History
KDevelop 0.1 was released in 1998, with 1.0 following in late 1999. 1.x and 2.x were developed over a period of four years from the original codebase.
Sandy Meier is believed to have originated KDevelop, starting the project in 1998 and working alone on it for its first eight weeks; Ralf Nolden is also known as an early developer. Since then, the KDevelop IDE has been publicly available under the GPL and supports many programming languages.
Bernd Gehrmann started a complete rewrite and announced KDevelop 3.x in March 2001. Its first release was together with K Desktop Environment 3.2 in February 2004, and development of KDevelop 3.x continued until 2008.
KDevelop 4.x, another complete rewrite with a more object-oriented programming model, was developed from August 2005 and released as KDevelop 4.0.0 in May 2010. The last feature update of this branch was version 4.7.0 in September 2014, with bugfix releases continuing until KDevelop 4.7.4 in December 2016.
KDevelop 5 development began in August 2014 as a continuation of the 4.x codebase, ported to Qt5 and KDE Frameworks 5. The custom C++ parser used in earlier versions, which had poor support for C++11 syntax, was replaced by a new Clang-based backend. The integrated CMakeFile interpreter was also removed in favour of JSON metadata produced by the upstream CMake tool.
Semantic language support was added for QML and JavaScript, using the parser from Qt Creator, alongside a new QMake project-manager backend.
The first stable 5.x release was KDevelop 5.0.0 in August 2016. In October 2016, official Microsoft Windows builds were released for the first time.
Features
KDevelop uses an embedded text editor component through the KParts framework. The default editor is KDE Advanced Text Editor, which can optionally be replaced with a Qt Designer-based editor. This list focuses on the features of KDevelop itself. For features specific to the editor component, see the article on Kate.
Source code editor with syntax highlighting and automatic indentation (Kate).
C/C++ is supported with a Clang-based backend (as of KDevelop 5.0)
Project management for different project types, such as Automake, CMake, qmake for Qt based pr |
https://en.wikipedia.org/wiki/List%20of%20people%20on%20the%20postage%20stamps%20of%20New%20Zealand | This is a list of people on stamps of New Zealand.
The year given is the year of issue of the first stamp depicting that person.
Data has been entered up to the end of 2002.
A
The Prince Andrew (1963)
The Princess Anne (1952)
Sean Astin (2001)
B
Brian Barratt-Boyes (1995)
Joseph Banks (1969)
Aunt Daisy (Maud Ruby Basham) (1994)
Albert Henry Baskerville (1995)
Jean Batten (1990)
James K. Baxter (1989)
Sean Bean (2001)
Princess Beatrice of York (1989)
Cate Blanchett (2001)
Evelyn Brooke (2015)
Peter Buck (1990)
C
Prince William (now Duke of Cambridge) (1985)
Sir Winston Churchill (1965)
James Cook (1940)
Whina Cooper (1995)
Barry Crump (1995)
D
Stewie Dempster (1992)
Jules Dumont d'Urville (1997)
E
The Duke of Edinburgh (1953)
Edward VII (1909)
Edward VIII (1940)
The Prince Edward (1973)
The Princess Elizabeth (now The Queen) (1943)
Queen Elizabeth (later The Queen Mother) (1937)
F
W. H. A. Feilding (1981)
Bernard Freyberg (1990)
G
George V (1915)
George VI (1937)
John Robert Godley (1950)
George Grey (1979)
Elizabeth Gunn (1969)
H
Richard Hadlee (1995)
Prince Harry (1985)
James Hector (1967)
Hone Heke (1990)
Sir Edmund Hillary (1994)
William Hobson (1990)
J
Robert Jack (1992)
K
Te Ruki Kawiti (1990)
Truby King (1957)
Charles Kingsford Smith (1958)
L
Christopher Lee (2001)
Danyon Loader (1996)
Jack Lovelock (1990)
M
Ian McKellen (2001)
Katherine Mansfield (1989)
The Princess Margaret (1943)
Samuel Marsden (1964)
Ngaio Marsh (1989)
Queen Mary (1935)
Bruce Mason (1989)
Viggo Mortensen (2001)
N
Grace Neill (1990)
George Nēpia (1990)
Āpirana Ngata (1980)
O
Miranda Otto (2002)
P
Richard Pearse (1990)
R
Ernest Rutherford, physicist (1971)
S
Richard John Seddon (1979)
Katherine Sheppard (1990)
George Smith (1995)
Peter Snell (2004)
Daniel Solander (1969)
Tommy Solomon (1991)
T
Blyth Tait (1996)
Abel Tasman (1940)
Hakopa Te Ata-o-tu (1980)
Te Hau-Takiri Wharepapa (1980)
Te Heu Heu Tukino IV (1980)
Kiri Te Kanawa (1995)
Te Puea Herangi (1980)
U
Charles Upham (1995)
V
Queen Victoria (1855)
Julius Vogel (1979)
W
John Walker (2004)
Anthony Wilding (1992)
The Prince of Wales (1950)
The Princess of Wales (1981)
Yvette Williams (2004)
Elijah Wood (2001)
Y
The Duchess of York (1989)
Notes
References
External links
New Zealand, List of people on stamps of
Stamps
Philately of New Zealand |