source | text |
|---|---|
https://en.wikipedia.org/wiki/Motorola%2068008 | The Motorola 68008 is an 8/32-bit microprocessor introduced by Motorola in 1982. It is a version of 1979's Motorola 68000 with an 8-bit external data bus, as well as a smaller address bus. The 68008 was available with 20 or 22 address lines (in the 48-pin and 52-pin packages, respectively), which allowed a 1 MB or 4 MB address space versus the 16 MB addressable on the 68000. The 68008 was designed to work with lower-cost and simpler 8-bit memory systems. Because of its smaller data bus, it was roughly half as fast as a 68000 of the same clock speed. It was still faster than competing 8-bit microprocessors, because internally the 68008 behaves identically to the 68000 and has the same microarchitecture.
Motorola ended production of the 68008 in 1996.
Details
The 68008 is an HMOS chip. There are two versions of the chip. The original is in a 48-pin dual in-line package with a 20-bit address bus, allowing it to use up to 1 megabyte of memory. A later version is in a 52-pin plastic leaded chip carrier; this version has a 22-bit address bus and can support 4 MB of RAM.
Usages
The Sinclair QL microcomputer and Luxor ABC 1600 use the 68008 as their main processor.
References
External links
A small 68008 design
Kiwi - a 68k Homebrew Computer
68k microprocessors |
https://en.wikipedia.org/wiki/Integrated%20services | In computer networking, integrated services or IntServ is an architecture that specifies the elements to guarantee quality of service (QoS) on networks. IntServ can for example be used to allow video and sound to reach the receiver without interruption.
IntServ specifies a fine-grained QoS system, which is often contrasted with DiffServ's coarse-grained control system.
Under IntServ, every router in the system implements IntServ, and every application that requires some kind of QoS guarantee has to make an individual reservation. Flow specs describe what the reservation is for, while RSVP is the underlying mechanism to signal it across the network.
Flow specs
There are two parts to a flow spec:
What does the traffic look like? Done in the Traffic SPECification part, also known as TSPEC.
What guarantees does it need? Done in the service Request SPECification part, also known as RSPEC.
TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills up with tokens, arriving at a constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how 'bursty' the traffic is allowed to be.
TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750 Hz and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the 'burst' associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate, but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. However, this means the bucket depth needs to be increased to compensate for the traffic being burstier.
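As a concrete illustration, the token-bucket policing described above can be sketched in a few lines of Python (an illustrative sketch, not part of any IntServ implementation; the class and parameter names are invented):

```python
class TokenBucket:
    """TSPEC-style token bucket: `rate` tokens/s arrive, at most `depth` accumulate."""
    def __init__(self, rate, depth, now=0.0):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, now   # bucket starts full

    def allow(self, now):
        """Consume one token for a packet; return False if it is non-conformant."""
        # Tokens arrive at a constant rate; the depth caps how bursty traffic may be.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# The video TSPEC from the text: token rate 750 Hz, bucket depth 10.
bucket = TokenBucket(rate=750, depth=10)
frame = [bucket.allow(now=0.0) for _ in range(10)]  # one 10-packet frame as a burst
extra = bucket.allow(now=0.0)                       # an 11th back-to-back packet
```

The whole 10-packet frame fits the burst allowance, but an eleventh packet at the same instant does not; one frame interval (1/75 s) later the bucket has refilled with exactly 10 tokens, so the next frame is again conformant.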
RSPECs specify what requirements there are for the flow: it can be normal internet 'best effort', in which case no reservation is needed. This setting is likely to be used for webpages, FTP, and similar applications. The 'Controlled Load' setting mirrors the performance of a lightly loaded network: there may be occasional glitches when two people access the same resource by chance, but generally both delay and drop rate are fairly constant at the desired rate. This setting is likely to be used by soft QoS applications. The 'Guaranteed' setting gives an absolutely bounded service, where the delay is promised to never go above a desired amount, and packets never dropped, provided the traffic stays within spec.
RSVP
The Resource Reservation Protocol (RSVP) is described in RFC 2205. All machines on the network capable of sending QoS data send a PATH message every 30 seconds, which spreads out through the networks. Those who want to listen to them send a correspo |
https://en.wikipedia.org/wiki/Yorick%20%28programming%20language%29 | Yorick is an interpreted programming language designed for numerics, graph plotting, and steering large scientific simulation codes. It is quite fast due to array syntax, and extensible via C or Fortran routines. It was created in 1996 by David H. Munro of Lawrence Livermore National Laboratory.
Features
Indexing
Yorick is good at manipulating elements in N-dimensional arrays conveniently with its powerful syntax.
Several elements can be accessed all at once:
> x=[1,2,3,4,5,6];
> x
[1,2,3,4,5,6]
> x(3:6)
[3,4,5,6]
> x(3:6:2)
[3,5]
> x(6:3:-2)
[6,4]
Arbitrary elements
> x=[[1,2,3],[4,5,6]]
> x
[[1,2,3],[4,5,6]]
> x([2,1],[1,2])
[[2,1],[5,4]]
> list=where(1<x)
> list
[2,3,4,5,6]
> y=x(list)
> y
[2,3,4,5,6]
Pseudo-index
Like "threading" in PDL and "broadcasting" in NumPy, Yorick has a pseudo-index mechanism for inserting a unit-length dimension so that arrays of different shapes can be combined element-wise:
> x=[1,2,3]
> x
[1,2,3]
> y=[[1,2,3],[4,5,6]]
> y
[[1,2,3],[4,5,6]]
> y(-,)
[[[1],[2],[3]],[[4],[5],[6]]]
> x(-,)
[[1],[2],[3]]
> x(,-)
[[1,2,3]]
> x(,-)/y
[[1,1,1],[0,0,0]]
> y=[[1.,2,3],[4,5,6]]
> x(,-)/y
[[1,1,1],[0.25,0.4,0.5]]
Rubber index
".." is a rubber-index to represent zero or more dimensions of the array.
> x=[[1,2,3],[4,5,6]]
> x
[[1,2,3],[4,5,6]]
> x(..,1)
[1,2,3]
> x(1,..)
[1,4]
> x(2,..,2)
5
"*" is a kind of rubber-index that reshapes a slice (sub-array) of an array into a vector.
> x(*)
[1,2,3,4,5,6]
Tensor multiplication
Tensor multiplication is done as follows in Yorick:
P(,+)*Q(+,)
means the contraction (sum) over the indices marked with +, i.e. the element (i,k) of the result is the sum over j of P(i,j)*Q(j,k):
> x=[[1,2,3],[4,5,6]]
> x
[[1,2,3],[4,5,6]]
> y=[[7,8],[9,10],[11,12]]
> x(,+)*y(+,)
[[39,54,69],[49,68,87],[59,82,105]]
> x(+,)*y(,+)
[[58,139],[64,154]]
External links
Linux Journal Review
Yorick tutorial on JehTech
Array programming languages
Free compilers and interpreters
Lawrence Livermore National Laboratory
Programming languages created in 1996 |
https://en.wikipedia.org/wiki/Turing%20%28disambiguation%29 | Alan Turing (1912–1954) was a British mathematician, logician, cryptanalyst and computer scientist.
Turing may also refer to:
People
Turing baronets, a title in the Baronetage of Nova Scotia, including a list of baronets
Dermot Turing (born 1961), British solicitor and author
Turing (drag queen), Filipino drag queen
Fictional characters
Angelica Turing, in Sense8
Turing, in video game 2064: Read Only Memories
Other uses
Turing (cipher), a cryptographic stream cipher
Turing (microarchitecture), by Nvidia
Turing (programming language)
Turing Award, the annual award by the Association for Computing Machinery
See also
List of things named after Alan Turing
Turing machine (disambiguation)
Turing test (disambiguation)
Turing completeness, ability of a computing system to simulate Turing machines |
https://en.wikipedia.org/wiki/Relational%20algebra | In database theory, relational algebra is a theory that uses algebraic structures for modeling data and defining queries on it, with well-founded semantics. The theory was introduced by Edgar F. Codd.
The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Relational databases store tabular data represented as relations. Queries over relational databases often likewise return tabular data represented as relations.
The main purpose of relational algebra is to define operators that transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results).
Unary operators accept a single relation as input. Examples include operators to filter certain attributes (columns) or tuples (rows) from an input relation. Binary operators accept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth.
Other more advanced operators can also be included, where the inclusion or exclusion of certain operators gives rise to a family of algebras.
Introduction
Relational algebra received little attention outside of pure mathematics until the publication of E.F. Codd's relational model of data in 1970. Codd proposed such an algebra as a basis for database query languages. (See section Implementations.)
Relational algebra operates on homogeneous sets of tuples R = {(a11, a12, …, a1n), (a21, a22, …, a2n), …, (am1, am2, …, amn)}, where we commonly interpret m to be the number of rows in a table and n to be the number of columns. All entries in each column have the same type.
Five primitive operators of Codd's algebra are the selection, the projection, the Cartesian product (also called the cross product or cross join), the set union, and the set difference.
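A minimal sketch of these five primitive operators, modeling a relation as a Python set of hashable tuples (an illustrative sketch, not a database engine; `rel`, `select`, `project`, and `product` are invented helper names):

```python
def rel(*rows):
    """Build a relation from dicts; each tuple becomes a hashable frozenset of pairs."""
    return {frozenset(row.items()) for row in rows}

def select(r, pred):
    """Selection: keep only the tuples for which pred holds."""
    return {t for t in r if pred(dict(t))}

def project(r, attrs):
    """Projection: restrict every tuple to the given attributes."""
    return {frozenset((a, v) for a, v in t if a in attrs) for t in r}

def product(r, s):
    """Cartesian product: assumes disjoint headers, as the text requires."""
    return {t | u for t in r for u in s}

# Set union and set difference are the plain Python set operators | and -
# on union-compatible relations.
employees = rel({"name": "Ann", "dept": 1}, {"name": "Bob", "dept": 2})
depts     = rel({"dept_id": 1, "dname": "R&D"})

dept1  = select(employees, lambda t: t["dept"] == 1)
names  = project(dept1, {"name"})
pairs  = product(employees, depts)   # 2 x 1 = 2 combined tuples
```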
Set operators
The relational algebra uses set union, set difference, and Cartesian product from set theory, but adds additional constraints to these operators.
For set union and set difference, the two relations involved must be union-compatible—that is, the two relations must have the same set of attributes. Because set intersection is defined in terms of set union and set difference, the two relations involved in set intersection must also be union-compatible.
For the Cartesian product to be defined, the two relations involved must have disjoint headers—that is, they must not have a common attribute name.
In addition, the Cartesian product is defined differently from the one in set theory in the sense that t |
https://en.wikipedia.org/wiki/Tuple%20relational%20calculus | Tuple calculus is a calculus that was created and introduced by Edgar F. Codd as part of the relational model, in order to provide a declarative database-query language for data manipulation in this data model. It formed the inspiration for the database-query languages QUEL and SQL, of which the latter, although far less faithful to the original relational model and calculus, is now the de facto standard database-query language; a dialect of SQL is used by nearly every relational-database-management system. Michel Lacroix and Alain Pirotte proposed domain calculus, which is closer to first-order logic and together with Codd showed that both of these calculi (as well as relational algebra) are equivalent in expressive power. Subsequently, query languages for the relational model were called relationally complete if they could express at least all of these queries.
Definition of the calculus
Relational database
Since the calculus is a query language for relational databases we first have to define a relational database. The basic relational building block is the domain (somewhat similar, but not equal to, a data type). A tuple is a finite sequence of attributes, which are ordered pairs of domains and values. A relation is a set of (compatible) tuples. Although these relational concepts are mathematically defined, those definitions map loosely to traditional database concepts. A table is an accepted visual representation of a relation; a tuple is similar to the concept of a row.
We first assume the existence of a set C of column names, examples of which are "name", "author", "address", etcetera. We define headers as finite subsets of C. A relational database schema is defined as a tuple S = (D, R, h) where D is the domain of atomic values (see relational model for more on the notions of domain and atomic value), R is a finite set of relation names, and
h : R → 2^C
a function that associates a header with each relation name in R. (Note that this is a simplification from the full relational model where there is more than one domain and a header is not just a set of column names but also maps these column names to a domain.) Given a domain D we define a tuple over D as a partial function
t : C ⇸ D
that maps some column names to an atomic value in D. An example would be (name : "Harry", age : 25).
The set of all tuples over D is denoted as TD. The subset of C for which a tuple t is defined is called the domain of t (not to be confused with the domain in the schema) and denoted as dom(t).
Finally we define a relational database given a schema S = (D, R, h) as a function
db : R → 2^TD
that maps the relation names in R to finite subsets of TD, such that for every relation name r in R and tuple t in db(r) it holds that
dom(t) = h(r).
The latter requirement simply says that all the tuples in a relation should contain the same column names, namely those defined for it in the schema.
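The definitions above translate almost directly into code. A hedged Python sketch (the relation and column names are invented) of a schema header function h and the requirement dom(t) = h(r):

```python
# A tuple is a partial function from column names to atomic values,
# modeled here as a dict; dom(t) is then simply its key set.
h = {"books": {"title", "author"}}            # header function h : R -> 2^C

db = {                                        # db maps relation names to finite tuple sets
    "books": [
        {"title": "Some Sample Book", "author": "A. Writer"},
        {"title": "Another Book", "author": "B. Scribe"},
    ],
}

def well_formed(db, h):
    """Check dom(t) = h(r) for every tuple t of every relation r."""
    return all(set(t) == h[r] for r, ts in db.items() for t in ts)

ok = well_formed(db, h)   # every tuple uses exactly the schema's columns
```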
Atoms
For the construction of the formulas we will assu |
https://en.wikipedia.org/wiki/Dynamic%20recompilation | In computer science, dynamic recompilation is a feature of some emulators and virtual machines, where the system may recompile some part of a program during execution. By compiling during execution, the system can tailor the generated code to reflect the program's run-time environment, and potentially produce more efficient code by exploiting information that is not available to a traditional static compiler.
Uses
Most dynamic recompilers are used to convert machine code between architectures at runtime. This is a task often needed in the emulation of legacy gaming platforms. In other cases, a system may employ dynamic recompilation as part of an adaptive optimization strategy to execute a portable program representation such as Java or .NET Common Language Runtime bytecodes. Full-speed debuggers also utilize dynamic recompilation to reduce the space overhead incurred in most deoptimization techniques, and other features such as dynamic thread migration.
Tasks
The main tasks a dynamic recompiler has to perform are:
Reading in machine code from the source platform
Emitting machine code for the target platform
A dynamic recompiler may also perform some auxiliary tasks:
Managing a cache of recompiled code
Updating elapsed cycle counts on platforms with cycle count registers
Managing interrupt checking
Providing an interface to virtualized support hardware, for example a GPU
Optimizing higher-level code structures to run efficiently on the target hardware (see below)
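A toy sketch of the recompile-cache-dispatch cycle behind these tasks (illustrative only, modeling no real emulator; "translated blocks" are plain Python callables that return the next guest program counter):

```python
class Dynarec:
    """Translate a guest basic block on first use, cache it, then dispatch."""
    def __init__(self, translate):
        self.translate = translate   # hypothetical: guest PC -> host callable
        self.cache = {}              # code cache, keyed by guest PC

    def run(self, pc, state, steps):
        for _ in range(steps):
            block = self.cache.get(pc)
            if block is None:        # cache miss: recompile this block
                block = self.translate(pc)
                self.cache[pc] = block
            pc = block(state)        # the "emitted code" returns the next PC

# Toy guest program: block 0 increments a register, block 1 jumps back.
def block0(s):
    s["r0"] += 1    # stand-in for the recompiled body of guest block 0
    return 1        # next guest PC

def block1(s):
    return 0

def toy_translate(pc):
    return block0 if pc == 0 else block1

state = {"r0": 0}
d = Dynarec(toy_translate)
d.run(pc=0, state=state, steps=6)   # 6 blocks executed, only 2 translations
```

After the run, the cache holds two translated blocks and every later visit dispatches straight through it, which is exactly the saving a real recompiler relies on.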
Applications
Many Java virtual machines feature dynamic recompilation.
Apple's Rosetta for Mac OS X on x86 allows PowerPC code to be run on the x86 architecture.
Later versions of the Mac 68K emulator used in classic Mac OS to run 680x0 code on the PowerPC hardware.
Psyco, a specializing compiler for Python.
The HP Dynamo project, an example of a transparent binary dynamic optimizer.
DynamoRIO, an open-source successor to Dynamo that works with the ARM, x86-64 and IA-64 (Itanium) instruction sets.
The Vx32 virtual machine employs dynamic recompilation to create OS-independent x86 architecture sandboxes for safe application plugins.
Microsoft Virtual PC for Mac, used to run x86 code on PowerPC.
FreeKEYB, an international DOS keyboard and console driver with many usability enhancements, used self-modifying code and dynamic dead-code elimination to minimize its in-memory image based on the user configuration (selected features, languages, layouts) and the actual runtime environment (OS variant and version, loaded drivers, underlying hardware). It automatically resolved dependencies, dynamically relocated and recombined code sections at byte-level granularity, and optimized opstrings based on semantic information provided in the source code, relocation information generated by special tools during assembly, and profile information obtained at load time.
The backwards compatibility functionality of the Xbox 360 (i.e. running games written for the original Xbox) is |
https://en.wikipedia.org/wiki/Ciphertext | In cryptography, ciphertext or cyphertext is the result of encryption performed on plaintext using an algorithm, called a cipher. Ciphertext is also known as encrypted or encoded information because it contains a form of the original plaintext that is unreadable by a human or computer without the proper cipher to decrypt it. This process prevents the loss of sensitive information via hacking. Decryption, the inverse of encryption, is the process of turning ciphertext into readable plaintext. Ciphertext is not to be confused with codetext because the latter is a result of a code, not a cipher.
Conceptual underpinnings
Let m be the plaintext message that Alice wants to secretly transmit to Bob and let E_k be the encryption cipher, where k is a cryptographic key. Alice must first transform the plaintext into ciphertext, c, in order to securely send the message to Bob, as follows:
c = E_k(m)
In a symmetric-key system, Bob knows Alice's encryption key. Once the message is encrypted, Alice can safely transmit it to Bob (assuming no one else knows the key). In order to read Alice's message, Bob must decrypt the ciphertext using the decryption cipher, D_k:
m = D_k(c)
Alternatively, in a non-symmetric key system, everyone, not just Alice and Bob, knows the encryption key; but the decryption key cannot be inferred from the encryption key. Only Bob knows the decryption key d, and decryption proceeds as
m = D_d(c)
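A toy symmetric example in Python makes the notation concrete: with a repeating-key XOR cipher (illustrative only and cryptographically worthless), the decryption cipher is the encryption cipher itself:

```python
from itertools import cycle

def E(k: bytes, m: bytes) -> bytes:
    """c = E_k(m): XOR each plaintext byte with the repeating keystream."""
    return bytes(a ^ b for a, b in zip(m, cycle(k)))

D = E   # for XOR, D_k = E_k: applying the same keystream twice cancels out

key = b"k3y"
plaintext = b"MEET AT DAWN"
ciphertext = E(key, plaintext)    # what Alice transmits
recovered = D(key, ciphertext)    # what Bob reads back
```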
Types of ciphers
The history of cryptography began thousands of years ago. Cryptography uses a variety of different types of encryption. Earlier algorithms were performed by hand and are substantially different from modern algorithms, which are generally executed by a machine.
Historical ciphers
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include:
Substitution cipher: the units of plaintext are replaced with ciphertext (e.g., Caesar cipher and one-time pad)
Polyalphabetic substitution cipher: a substitution cipher using multiple substitution alphabets (e.g., Vigenère cipher and Enigma machine)
Polygraphic substitution cipher: the unit of substitution is a sequence of two or more letters rather than just one (e.g., Playfair cipher)
Transposition cipher: the ciphertext is a permutation of the plaintext (e.g., rail fence cipher)
Historical ciphers are not generally used as a standalone encryption technique because they are quite easy to crack. Many of the classical ciphers, with the exception of the one-time pad, can be cracked using brute force.
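A brute-force attack on a classical cipher is easy to demonstrate: a Caesar cipher has only 26 possible keys, so an attacker can simply enumerate them all (illustrative sketch; the function name is ours):

```python
import string

def caesar(text: str, shift: int) -> str:
    """Shift letters by `shift` positions - a simple substitution cipher."""
    up, lo = string.ascii_uppercase, string.ascii_lowercase
    table = str.maketrans(up + lo,
                          up[shift:] + up[:shift] + lo[shift:] + lo[:shift])
    return text.translate(table)

ciphertext = caesar("ATTACK AT DAWN", 3)   # encrypt with key 3
# Brute force: try every key and undo the shift; one candidate will be
# readable English, which is why such ciphers are easy to crack.
candidates = [caesar(ciphertext, -k % 26) for k in range(26)]
```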
Modern ciphers
Modern ciphers are more secure than classical ciphers and are designed to withstand a wide range of attacks. An attacker should not be able to find the key used in a modern cipher, even if they know any amount of plaintext and corresponding ciphertext. Modern encryption methods can be divided into the following categories:
Private-key cryptography (symmetric key algorithm): the same key is used for encryption and decryption
Public-key crypto |
https://en.wikipedia.org/wiki/Melissa%20%28computer%20virus%29 | The Melissa virus is a mass-mailing macro virus released on or around March 26, 1999. It targets Microsoft Word and Outlook-based systems and created considerable network traffic. The virus infects computers via email; the email is titled "Important Message From," followed by the current username. Upon clicking the message, the body reads, "Here's that document you asked for. Don't show anyone else ;)." Attached is a Word document titled "list.doc," containing a list of pornographic sites and accompanying logins for each. It then mass-mails itself to the first fifty people in the user's contact list and disables multiple safeguard features on Microsoft Word and Microsoft Outlook.
Description
The virus was released on March 26, 1999, by David L. Smith.
Smith used a hijacked AOL account to post the virus to an Internet newsgroup called "alt.sex", and it soon ended up on similar sex groups and pornographic sites before spreading to corporate networks. However, the virus itself was credited to Kwyjibo, the macro virus writer of VicodinS and ALT-F11, by comparing Microsoft Word documents with the same globally unique identifier. This method was also used to trace the virus back to Smith.
The "list.doc" file contains a Visual Basic script that copies the infected file into a template file used by Word for custom settings and default macros. If the recipient opens the attachment, the infecting file is read into computer storage. The virus then creates an Outlook object, reads the first 50 names in each Outlook Global Address Book, and sends a copy of itself to the addresses read.
Melissa works on Microsoft Word 97, Microsoft Word 2000, and Microsoft Outlook 97 or 98 email clients. Microsoft Outlook is not needed to receive the virus in email, but without Outlook the virus cannot mail itself onward.
Impact
The virus slowed down email systems due to overloading Microsoft Outlook and Microsoft Exchange servers with emails. Major organizations impacted included Microsoft, Intel Corp, and the United States Marine Corps. The Computer Emergency Response Team, a Pentagon-financed security service at Carnegie Mellon University, reported 250 organizations called regarding the virus, indicating at least 100,000 workplace computers were infected, although the number is believed to be higher. An estimated one million email accounts were hijacked by the virus. The virus was able to be contained within a few days, although it took longer to remove it from infected systems entirely. At the time it was the fastest spreading email worm.
Arrest
On April 1, 1999, Smith was arrested in New Jersey due to a tip from AOL and a collaborative effort involving the FBI, the New Jersey State Police, Monmouth Internet, a Swedish computer scientist, and others. Smith was accused of causing US$80 million worth of damages by disrupting personal computers and computer networks in business and government.
On December 10, 1999, Smith pleaded guilty to a second-degree charge of co |
https://en.wikipedia.org/wiki/Photon%20mapping | In computer graphics, photon mapping is a two-pass global illumination rendering algorithm developed by Henrik Wann Jensen between 1995 and 2001 that approximately solves the rendering equation for integrating light radiance at a given point in space. Rays from the light source (like photons) and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. The algorithm is used to realistically simulate the interaction of light with different types of objects (similar to other photorealistic rendering techniques). Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water (including caustics), diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. Photon mapping can also be extended to more accurate simulations of light, such as spectral rendering. Progressive photon mapping (PPM) starts with ray tracing and then adds more and more photon mapping passes to provide a progressively more accurate render.
Unlike path tracing, bidirectional path tracing, volumetric path tracing, and Metropolis light transport, photon mapping is a "biased" rendering algorithm, which means that averaging infinitely many renders of the same scene using this method does not converge to a correct solution to the rendering equation. However, it is a consistent method, and the accuracy of a render can be increased by increasing the number of photons. As the number of photons approaches infinity, a render will get closer and closer to the solution of the rendering equation.
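The second pass's density estimate shows why the method is consistent: gather the k nearest stored photons around a point and divide their summed power by the area of the gather disc; as the photon count grows, the gather radius shrinks and the estimate converges. The following is a heavily simplified sketch (diffuse surface, BRDF folded into the photon powers; the function names are ours), loosely following Jensen's estimate L ≈ Σ power / (π r²):

```python
import math

def dist2(a, b):
    """Squared Euclidean distance between two 3D points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def radiance_estimate(photons, x, k):
    """Density estimate at point x from the k nearest photons in the map.

    Each photon is (position, power). Assumes a diffuse surface with the
    BRDF already folded into the photon powers - a simplification of the
    full estimate, which weights each photon by the surface BRDF."""
    nearest = sorted(photons, key=lambda p: dist2(p[0], x))[:k]
    r2 = dist2(nearest[-1][0], x)          # squared radius of the gather disc
    return sum(power for _, power in nearest) / (math.pi * r2)

# Four photons of power 0.25 stored within radius 1 of the origin,
# plus one distant photon that the k-nearest gather ignores:
photons = [((1, 0, 0), 0.25), ((0, 1, 0), 0.25),
           ((0, 0, 1), 0.25), ((0.5, 0, 0), 0.25), ((5, 5, 5), 0.25)]
L = radiance_estimate(photons, (0, 0, 0), k=4)   # = 1.0 / pi
```

A real renderer would store the photons in a kd-tree so the k-nearest query is fast; the linear `sorted` here is only for clarity.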
Effects
Caustics
Light refracted or reflected causes patterns called caustics, usually visible as concentrated patches of light on nearby surfaces. For example, as light rays pass through a wine glass sitting on a table, they are refracted and patterns of light are visible on the table. Photon mapping can trace the paths of individual photons to model where these concentrated patches of light will appear.
Diffuse interreflection
Diffuse interreflection is apparent when light from one diffuse object is reflected onto another. Photon mapping is particularly adept at handling this effect because the algorithm reflects photons from one surface to another based on that surface's bidirectional reflectance distribution function (BRDF), and thus light from one object striking another is a natural result of the method. Diffuse interreflection was first modeled using radiosity solutions. Photon mapping differs though in that it separates the light transport from the nature of the geometry in the scene. Color bleed is an example of diffuse interreflection.
Subsurface scattering
Subsurface scattering is the effect evident when light enters a material and is scattered before being absorbed or reflected in a dif |
https://en.wikipedia.org/wiki/Socket%207 | Socket 7 is a physical and electrical specification for an x86-style CPU socket on a personal computer motherboard. It was released in June 1995. The socket supersedes the earlier Socket 5, and accepts P5 Pentium microprocessors manufactured by Intel, as well as compatibles made by Cyrix/IBM, AMD, IDT and others. Socket 7 was the only socket that supported a wide range of CPUs from different manufacturers and a wide range of speeds.
Differences between Socket 5 and Socket 7 are that Socket 7 has an extra pin and is designed to provide dual split rail voltage, as opposed to Socket 5's single voltage. However, not all motherboard manufacturers supported the dual voltage on their boards initially. Socket 7 is backwards compatible; a Socket 5 CPU can be inserted and used on a Socket 7 motherboard.
Processors that used Socket 7 are the AMD K5 and K6, the Cyrix 6x86 and 6x86MX, the IDT WinChip, the Intel P5 Pentium (2.5–3.5 V, 75–200 MHz), the Pentium MMX (166–233 MHz), and the Rise Technology mP6.
Socket 7 typically uses a 321-pin (arranged as 19 by 19 pins) SPGA ZIF socket or the very rare 296-pin (arranged as 37 by 37 pins) SPGA LIF socket. The size is 1.95" x 1.95" (4.95 cm x 4.95 cm).
An extension of Socket 7, Super Socket 7, was developed by AMD for their K6-2 and K6-III processors to operate at a higher clock rate and use AGP.
Socket 7 and Socket 8 were replaced by Slot 1 and Slot 2 in 1999.
See also
List of Intel microprocessors
List of AMD microprocessors
References
Socket 007 |
https://en.wikipedia.org/wiki/Relational%20calculus | The relational calculus consists of two calculi, the tuple relational calculus and the domain relational calculus, which are part of the relational model for databases and provide a declarative way to specify database queries. The raison d'être of relational calculus is the formalization of query optimization, which is finding more efficient manners to execute the same query in a database.
The relational calculus is similar to the relational algebra, which is also part of the relational model: While the relational calculus is meant as a declarative language that prescribes no execution order on the subexpressions of a relational calculus expression, the relational algebra is meant as an imperative language: the sub-expressions of a relational algebraic expression are meant to be executed from left-to-right and inside-out following their nesting.
Per Codd's theorem, the relational algebra and the domain-independent relational calculus are logically equivalent.
Example
A relational algebra expression might prescribe the following steps to retrieve the phone numbers and names of book stores that supply Some Sample Book:
Join book stores and titles over the BookstoreID.
Restrict the result of that join to tuples for the book Some Sample Book.
Project the result of that restriction over StoreName and StorePhone.
A relational calculus expression would formulate this query in the following descriptive or declarative manner:
Get StoreName and StorePhone for book stores such that there exists a title BK with the same BookstoreID value and with a BookTitle value of Some Sample Book.
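The declarative reading above maps naturally onto a set comprehension: only the membership condition is stated, and no execution order is prescribed. A hedged Python sketch using the hypothetical relation and attribute names from the example:

```python
# Sample relations with invented contents, matching the example's schema.
book_stores = [
    {"BookstoreID": 1, "StoreName": "Readmore", "StorePhone": "555-0101"},
    {"BookstoreID": 2, "StoreName": "Pagehouse", "StorePhone": "555-0202"},
]
titles = [
    {"BookstoreID": 1, "BookTitle": "Some Sample Book"},
    {"BookstoreID": 2, "BookTitle": "Another Book"},
]

# "Get StoreName and StorePhone for book stores such that there exists
# a title BK with the same BookstoreID value and with a BookTitle value
# of Some Sample Book."
answer = {
    (s["StoreName"], s["StorePhone"])
    for s in book_stores
    if any(bk["BookstoreID"] == s["BookstoreID"]
           and bk["BookTitle"] == "Some Sample Book"
           for bk in titles)
}
```

A query planner is free to evaluate this condition in any order (index lookup, join, scan); the comprehension states only what must be true of the result.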
Mathematical properties
The relational algebra and the domain-independent relational calculus are logically equivalent: for any algebraic expression, there is an equivalent expression in the calculus, and vice versa. This result is known as Codd's theorem.
Purpose
The raison d'être of the relational calculus is the formalization of query optimization. Query optimization consists in determining from a query the most efficient manner (or manners) to execute it. Query optimization can be formalized as translating a relational calculus expression delivering an answer A into efficient relational algebraic expressions delivering the same answer A.
See also
Calculus of relations
References
Logical calculi
Relational model |
https://en.wikipedia.org/wiki/Brain%20%28computer%20virus%29 | Brain is the industry standard name for a computer virus that was released in its first form on 19 January 1986, and is considered to be the first computer virus for the IBM Personal Computer (IBM PC) and compatibles.
Description
Brain affects the PC by replacing the boot sector of a floppy disk with a copy of the virus. The real boot sector is moved to another sector and marked as bad. Infected disks usually have five kilobytes of bad sectors. The disk label is usually changed to ©Brain, and the following text can be seen in infected boot sectors:
Welcome to the Dungeon (c) 1986 Amjads (pvt) Ltd VIRUS_SHOE RECORD V9.0 Dedicated to the dynamic memories of millions of viruses who are no longer with us today - Thanks GOODNESS!!! BEWARE OF THE er..VIRUS : this program is catching program follows after these ....$#@%$@!!
There are many minor and major variations to that version of the text. The virus slows down the floppy disk drive and makes seven kilobytes of memory unavailable to DOS. Brain was written by the brothers Basit Farooq Alvi and Amjad Farooq Alvi, who at the time lived in Chah Miran, near Lahore Railway Station, in Lahore, Pakistan. The Alvi brothers told Time magazine they had written it to protect their medical software from illegal copying, and it was supposed to target copyright infringement only. The cryptic message "Welcome to the Dungeon", a safeguard and reference to an early programming forum on Dungeon BBS, appeared after a year because the brothers licensed a beta version of the code. The brothers could not be contacted to receive the final release of this version of the program.
Brain lacks code for dealing with hard disk partitioning, and avoids infecting hard disks by checking the most significant bit of the BIOS drive number being accessed. Brain does not infect the disk if the bit is set, unlike other viruses at the time, which paid no attention to disk partitioning and consequently destroyed data stored on hard disks by treating them in the same way as floppy disks. Brain often went undetected, partially due to this deliberate non-destructiveness, especially when the user paid little to no attention to the low speed of floppy disk access.
The virus came complete with address and three phone numbers, and a message that told the user that their machine was infected and to call them for inoculation:
This program was originally used to track a heart monitoring program for the IBM PC, and people were distributing illicit copies of the disks. This tracking program was supposed to stop and track illegal copies of the disk, however the program also sometimes used the last five kilobytes on an Apple floppy, making additional saves to the disk by other programs impossible.
Author response
When the brothers began to receive a large number of phone calls from people in the United Kingdom, the United States, and elsewhere, demanding that they disinfect their machines, they were stunned and tried to explain to the outraged callers that their motivation had not |
https://en.wikipedia.org/wiki/Active-matrix%20liquid-crystal%20display | An active-matrix liquid-crystal display (AMLCD) is a type of flat-panel display used in high-resolution TVs, computer monitors, notebook computers, tablet computers and smartphones with an LCD screen, due to low weight, very good image quality, wide color gamut and fast response time.
The concept of active-matrix LCDs was proposed by Bernard J. Lechner at the RCA Laboratories in 1968. The first functional AMLCD with thin-film transistors was made by T. Peter Brody, Fang-Chen Luo and their team at Westinghouse Electric Corporation in 1972. However, it took years of additional research and development by others to launch successful products.
Introduction
The most common type of AMLCD contains, besides the polarizing sheets and cells of liquid crystal, a matrix of thin-film transistors to make a thin-film-transistor liquid-crystal display. These devices store the electrical state of each pixel on the display while all the other pixels are being updated. This method provides a much brighter, sharper display than a passive matrix of the same size. An important specification for these displays is their viewing angle.
Thin-film transistors are usually used to construct an active matrix, so the two terms are often used interchangeably, even though a thin-film transistor is just one component in an active matrix and some active-matrix designs have used other components such as diodes. Whereas a passive matrix display uses a simple conductive grid to apply a voltage to the liquid crystals in the target area, an active-matrix display uses a grid of transistors and capacitors with the ability to hold a charge for a limited period of time. Because of the switching action of transistors, only the desired pixel receives a charge, and the pixel acts as a capacitor to hold the charge until the next refresh cycle, improving image quality over a passive matrix. This is a special version of a sample-and-hold circuit.
See also
Organic light-emitting diode
Active-matrix organic light-emitting diode
Display resolution
References
External links
Eduard Rhein Stiftung 1988 Technology Award Dr. T. Peter Brody: Basic development of TFT liquid crystal display
Liquid crystal displays |
https://en.wikipedia.org/wiki/Channel%205 | Channel 5 may refer to:
Americas
Canal 5 (Mexico), a Mexican television network owned by Televisa
XHGC-TDT, a television station in Mexico City, flagship of the Canal 5 network
Canal 5 Noticias, a news channel in Buenos Aires, Argentina
Canal 5 (Uruguay), a government-owned Uruguayan television network
Tonis (Canada), a former Ukrainian-language digital cable specialty television channel
Telefe Rosario, Argentine television station which broadcasts from the city of Rosario
Great Belize Television, Belize television station, known as "Channel 5", founded in 1991 and broadcasting from Belize City
Panamericana Televisión, a Peruvian free-to-air television channel broadcasting on Channel 5 in Lima, Peru
Paravisión, a Paraguayan television network broadcasting on Channel 5 in Asunción
TV+ (Chile), formerly UCV Televisión, a Chilean free-to-air television channel broadcasting on Channel 5 in Santiago de Chile
WNYW-TV Channel 5, a Fox-affiliated television station in New York City, United States
Channel 5 (web channel), an American web channel led by Andrew Callaghan
Asia
TV5 (Philippine TV network), a Filipino commercial television network formerly known as "ABC 5" and "5"
DWET-TV, the flagship television station of TV5 in Metro Manila, Philippines
IRIB TV5, operated by Islamic Republic of Iran Broadcasting
Channel 5 (Pakistani TV channel), Pakistani Entertainment and News Channel
Channel 5 (Thai TV channel), Thai television broadcaster, founded in 1958 and owned by the Royal Thai Army
Channel 5 (Singaporean TV channel), English-language Singapore television broadcaster
Sport 5, Israeli cable and satellite TV station
Canal 5 Creative Campus, a tourist attraction in Changzhou, China
Europe
5 Kanal, Ukrainian television channel
Canale 5, Italian television broadcaster
Channel 5 (British TV channel), British commercial public broadcast network
Channel 5 Broadcasting Limited, parent company of the British TV channel
Channel 5 Lithuania, the largest regional TV channel in Lithuania
France 5, French public television network
Kanal 5 (Sweden), Swedish commercial channel
Kanal 5 (Denmark), Danish television channel
Petersburg – Channel 5, Russian broadcaster, seen nationally, with regional channels
Telecinco, Spain's second private television station
Other uses
Channel 5 (Fear the Walking Dead), an episode of the television series Fear the Walking Dead
See also
CH5 (disambiguation)
C5 (disambiguation)
Kanal 5 (disambiguation)
TV5 (disambiguation)
Chanel No. 5, French perfume produced by the Parisian fashion house of Chanel
Channel 5 Video Distribution, a defunct home video brand created by Polygram, now owned by Universal Pictures
Channel 5 branded TV stations in the United States
Channel 5 virtual TV stations in Canada
Channel 5 virtual TV stations in Mexico
Channel 5 virtual TV stations in the United States
Channel 5 TV stations in Canada
Channel 5 TV stations in Mexico
Channel 5 digital TV stations in the U |
https://en.wikipedia.org/wiki/Overclocking | In computing, overclocking is the practice of increasing the clock rate of a computer to exceed that certified by the manufacturer. Commonly, operating voltage is also increased to maintain a component's operational stability at accelerated speeds. Semiconductor devices operated at higher frequencies and voltages increase power consumption and heat. An overclocked device may be unreliable or fail completely if the additional heat load is not removed or power delivery components cannot meet increased power demands. Many device warranties state that overclocking or over-specification voids any warranty, but some manufacturers allow overclocking as long as it is done (relatively) safely.
Overview
The purpose of overclocking is to increase the operating speed of a given component. Normally, on modern systems, the target of overclocking is to increase the performance of a major chip or subsystem, such as the main processor or graphics controller, but other components, such as system memory (RAM) or system buses (generally on the motherboard), are commonly involved. The trade-offs are an increase in power consumption (heat), fan noise (cooling), and a shortened lifespan for the targeted components. Most components are designed with a margin of safety to deal with operating conditions outside the manufacturer's control, such as ambient temperature and fluctuations in operating voltage. Overclocking techniques in general aim to trade away this safety margin by setting the device to run at the higher end of the margin, with the understanding that temperature and voltage must be more strictly monitored and controlled by the user. For example, operating temperature would need to be more strictly controlled with increased cooling, as the part will be less tolerant of increased temperatures at higher speeds. Also, base operating voltage may be increased to compensate for unexpected voltage drops and to strengthen signalling and timing margins, as low-voltage excursions are more likely to cause malfunctions at higher operating speeds.
While most modern devices are fairly tolerant of overclocking, all devices have finite limits. Generally, for any given voltage most parts will have a maximum "stable" speed at which they still operate correctly. Past this speed, the device starts giving incorrect results, which can cause malfunctions and sporadic behavior in any system depending on it. While in a PC context the usual result is a system crash, more subtle errors can go undetected, which over a long enough time can give unpleasant surprises such as data corruption (incorrectly calculated results or, worse, incorrect writes to storage) or the system failing only during certain specific tasks (general usage such as internet browsing and word processing appears fine, but any application demanding advanced graphics crashes the system).
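Silent miscalculation of this kind is exactly what stability-testing utilities look for: run a deterministic workload whose result is repeatable, and flag any run that disagrees. The following is a minimal Python sketch of that idea, not a real stress test (utilities such as Prime95 use sustained heavy floating-point workloads; the function names and workload here are illustrative):

```python
import hashlib

def stress_iteration(seed: int) -> str:
    # One deterministic unit of work: hash a block of data derived
    # from the seed. On a stable machine the digest depends only on
    # the seed; a computation error shows up as a changed digest.
    data = seed.to_bytes(8, "little") * 4096
    return hashlib.sha256(data).hexdigest()

def stability_check(iterations: int = 1000) -> bool:
    # Run the same workload twice and compare digests; any mismatch
    # indicates the kind of silent miscalculation described above.
    first = [stress_iteration(i) for i in range(iterations)]
    second = [stress_iteration(i) for i in range(iterations)]
    return first == second
```

On correct hardware `stability_check()` always returns True; an overclocked machine producing sporadic arithmetic errors would eventually return False under a workload like this.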
At this point, an increase in operating voltage of a part may allow more headroom for further increases in clock speed, but the |
https://en.wikipedia.org/wiki/Nine%20Network | The Nine Network (stylised 9Network, commonly known as Channel Nine or simply Nine) is an Australian commercial free-to-air television network. It is owned by parent company Nine Entertainment and is one of five main free-to-air television networks in Australia.
From 2017 to 2021, the network's slogan was "We Are the One". Since 2021, the network has changed its slogan back to the iconic Golden Era slogan "Still the One".
As of 2022, the Nine Network is the second-rated television network in Australia, behind the Seven Network, and ahead of the ABC TV, Network 10 and SBS.
History
Origins
The Nine Network's first broadcasting station was launched in Sydney, New South Wales, as TCN-9 on 16 September 1956 by The Daily Telegraph owner Frank Packer.
John Godson introduced the station and former advertising executive Bruce Gyngell presented the first programme, This Is Television (thereby becoming the first person to appear on Australian television). Later that year, GTV-9 in Melbourne commenced transmissions to broadcast the 1956 Summer Olympics; it later formed the National Television Network, the basis of the current Nine Network, alongside QTQ-9 in Brisbane and NWS-9 in Adelaide, both launched in 1959. Before its formation, TCN-9 was affiliated with HSV-7 (alongside the Seven Network, they were Australia's first television stations, having opened in 1956), and GTV-9's sister affiliate was ATN-7.
The network, by 1967, had begun calling itself the National Nine Network, and became simply Nine Network Australia in 1987. Kerry Packer inherited the company after his father's death in 1974. Before Australia's official conversion to colour on 1 March 1975, it was the first Australian television station to regularly screen programmes in colour, with its first colour programme premiering in 1971, the same year NTD-8 in Darwin commenced broadcasting.
The New South Wales Rugby Football League grand final of 1967 became the first football grand final of any code to be televised live nationally. The Nine Network paid $5,000 to attain the broadcasting rights.
Nine Network station STW-9 Perth, which opened in 1965, became an owned-and-operated station when Alan Bond purchased the network for one billion dollars in 1987, a deal that became effective after government approvals in 1988. However, in 1989, Bond Media sold the station to Sunraysia Television for A$95 million, due to federal cross-media ownership laws which restricted the level of national reach for media owners. Nine, which then also included Channel 9 in Brisbane, fell back into the hands of Kerry Packer after Alan Bond's bankruptcy in 1992.
In 2011, GTV 9 Melbourne moved from 22 Bendigo Street, Richmond, to 717 Bourke Street, Docklands. 22 Bendigo Street started out as the Wertheim Piano Factory, then became the Heinz Soup Factory, then GTV9. The building in Bendigo Street still stands, now as luxury apartments.
The "Golden Era" (1977–2006)
Nine began using the slogan "Let Us Be The One" (based o |
https://en.wikipedia.org/wiki/Assurance | Assurance may refer to:
Assurance (computer networking)
Assurance (theology), a Protestant Christian doctrine
Assurance services, offered by accountancy firms
Life assurance, an insurance on human life
Quality assurance
Assurance IQ, Inc., a subsidiary of Prudential Financial
Places
Assurance, West Virginia, an unincorporated community in the United States
Mount Assurance, New Hampshire, United States
See also
Insurance |
https://en.wikipedia.org/wiki/PIM | PIM or Pim may refer to:
Computing
Parallel inference machine, an intended fifth generation computer
Personal information management
Personal information manager software
Personal Information Module for PalmDOS
Personal Iterations Multiplier for VeraCrypt
Platform-independent model in software engineering
Protocol Independent Multicast, Internet protocols
Processor-in-memory, CPU and memory on the same chip
Process-in-memory, performing calculations (e.g. multiply–accumulate operations) in memory rather than in a CPU
Engineering, science, and mathematics
Passive intermodulation of signals
Phosphatidylmyo-inositol mannosides, a glycolipid component of the cell wall of Mycobacterium tuberculosis
Principal indecomposable module in mathematical module theory
Business
Product information management
Partnerized Inventory Management
Pim Brothers & Co., large Irish family business founded in the nineteenth century
People
Given name
Pim (name)
Surname
Bedford Clapperton Trevelyan Pim (1826–1886), Royal Navy officer
Jonathan Pim (1806–1885), Irish politician
Jonathan Pim (1858–1949), Irish lawyer and politician
Joshua Pim (1869–1942), Irish doctor and tennis player
Raymond Pim (1897–1993), American politician
Fictional
Pim Diffy, a character in TV series Phil of the Future
Places
Pimhill or Pim Hill, England
Pim Island, Canada
Pim (river), Russia
Pondok Indah Mall, Indonesia
Other uses
Pacific Islands Monthly, a news magazine, discontinued 2000
Penalty (ice hockey) (Penalties infraction minutes)
Pim Fortuyn List, Dutch political party
Pim weight in ancient Israel
Prague International Marathon
Providence Industrial Mission
Public Illumination Magazine
See also
Pimm's, alcoholic beverages
Pym (disambiguation) |
https://en.wikipedia.org/wiki/Virtual%20community | A virtual community is a social network of individuals who connect through specific social media, potentially crossing geographical and political boundaries in order to pursue mutual interests or goals. Some of the most pervasive virtual communities are online communities operating under social networking services.
Howard Rheingold discussed virtual communities in his book, The Virtual Community, published in 1993. The book's discussion ranges from Rheingold's adventures on The WELL, computer-mediated communication, social groups and information science. Technologies cited include Usenet, MUDs (Multi-User Dungeon) and their derivatives MUSHes and MOOs, Internet Relay Chat (IRC), chat rooms and electronic mailing lists. Rheingold also points out the potential benefits for personal psychological well-being, as well as for society at large, of belonging to a virtual community. Research has also shown that job engagement positively influences engagement in virtual communities of practice.
Virtual communities all encourage interaction, sometimes focusing around a particular interest or just to communicate. Some virtual communities do both. Community members are allowed to interact over a shared passion through various means: message boards, chat rooms, social networking World Wide Web sites, or virtual worlds. Members usually become attached to the community world, logging in and out on sites all day every day, which can certainly become an addiction.
Introduction
The traditional definition of a community is of geographically circumscribed entity (neighborhoods, villages, etc.). Virtual communities are usually dispersed geographically, and therefore are not communities under the original definition. Some online communities are linked geographically, and are known as community websites. However, if one considers communities to simply possess boundaries of some sort between their members and non-members, then a virtual community is certainly a community. Virtual communities resemble real life communities in the sense that they both provide support, information, friendship and acceptance between strangers. Being in a virtual community space you may be expected to feel a sense of belonging and a mutual attachment among the members that are in your space.
One of the most influential aspects of virtual communities is the opportunity to communicate through several media platforms or networks. Virtual communities have in part displaced the channels we once relied on, such as postal services, fax machines, and even speaking on the telephone. Early research into the existence of media-based communities was concerned with the nature of reality, that is, whether communities actually could exist through the media; this could place virtual community research within the social sciences definition of ontology. In the seventeenth century, scholars associated with the Royal Society of London formed a community through the exchange of l
https://en.wikipedia.org/wiki/Blue%20box | A blue box is an electronic device that produces tones used to generate the in-band signaling tones formerly used within the North American long-distance telephone network to send line status and called number information over voice circuits. This allowed an illicit user, referred to as a "phreaker", to place long-distance calls, without using the network's user facilities, that would be billed to another number or dismissed entirely as an incomplete call. A number of similar "color boxes" were also created to control other aspects of the phone network.
First developed in the 1960s and used by a small phreaker community, the introduction of low-cost microelectronics in the early 1970s greatly simplified these devices to the point where they could be constructed by anyone reasonably competent with a soldering iron or breadboard construction. Soon after, models of relatively low quality were being offered fully assembled, but these often required tinkering by the user to remain operational.
The long-distance network became digitized, replacing the audio call-control tones with out-of-band signaling methods in the form of common-channel signaling (CCS) carried digitally on a separate channel inaccessible to the telephone user. The audio-tone-based blue boxes were of limited use by the 1980s, and of little use today.
History
Automated dialing
Local calling had been increasingly automated through the first half of the 20th century, but long-distance calling still required operator intervention. Automation was deemed essential by AT&T. By the 1940s they had developed a system that used audible tones played over the long-distance lines to control network connections. Tone pairs, referred to as multi-frequency (MF) signals, were assigned to the digits used for telephone numbers. A different, single tone, referred to as single frequency (SF), was used as a line status signal.
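The MF scheme assigned each digit the sum of two of six frequencies, 700 through 1700 Hz in 200 Hz steps, with the KP and ST signals framing the dialed number. A Python sketch of a tone generator follows, using the documented Bell MF frequency pairs; the sample rate, tone duration, and function names are our own illustrative choices:

```python
import math

# Bell System MF signaling: each symbol is the sum of two of six
# frequencies (700-1700 Hz in 200 Hz steps); KP and ST frame the
# dialed digits.
MF_PAIRS = {
    "1": (700, 900),   "2": (700, 1100),  "3": (900, 1100),
    "4": (700, 1300),  "5": (900, 1300),  "6": (1100, 1300),
    "7": (700, 1500),  "8": (900, 1500),  "9": (1100, 1500),
    "0": (1300, 1500), "KP": (1100, 1700), "ST": (1500, 1700),
}

def mf_tone(symbol: str, duration_s: float = 0.068, rate: int = 8000):
    # Return audio samples (floats in [-1, 1]) for one MF symbol:
    # two sine waves of equal amplitude, summed.
    f1, f2 = MF_PAIRS[symbol]
    n = int(duration_s * rate)
    return [
        0.5 * (math.sin(2 * math.pi * f1 * t / rate)
               + math.sin(2 * math.pi * f2 * t / rate))
        for t in range(n)
    ]
```

Playing `mf_tone("KP")`, the digit tones, and `mf_tone("ST")` in sequence into a trunk is, in essence, what a blue box did.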
This new system allowed the telephone network to be increasingly automated by deploying the dialers and tone generators on an as-required basis, starting with the busier exchanges. Bell Labs was happy to advertise their success in creating this system, and repeatedly revealed details of its inner workings. In the February 1950 issue of Popular Electronics, they published an advertisement, Playing a Tune for a Telephone Number, which showed the musical notes for the digits on a staff and described the telephone operator's pushbuttons as a "musical keyboard". Two keys on a piano would need to be pushed simultaneously to play the tones for each digit. The illustration did not include the tone pairs for the special control signals KP and ST, although in the picture the operator's finger is on the KP key and the ST key is visible. In the 1950s, AT&T released a public relations film, "Speeding Speech", which described the operation of the system. In the film, the tone sequence for sending a complete telephone number is heard through a loudspeaker as a technician presses the keys for d |
https://en.wikipedia.org/wiki/Linux%20Router%20Project | The Linux Router Project (LRP) is a now defunct networking-centric micro Linux distribution. The released versions of LRP were small enough to fit on a single 1.44MB floppy disk, and made building and maintaining routers, access servers, thin servers, thin clients, network appliances, and typically embedded systems next to trivial.
History
LRP was conceived and primarily developed by Dave Cinege from 1997 until 2002. It began originally as a 'router on a floppy' and evolved into a streamlined general purpose network operating system.
As LRP is the oldest embedded Linux distribution, it formed (in whole or in part) the basis of many other embedded system distributions and commercial products which followed it. Several parts developed or specifically enhanced for LRP are still found in common usage today such as POSIXness and BusyBox.
Pioneering Features
Small base OS footprint
A simplified packaging system
Menu based system and package configuration
Strict separation of volatile, non-volatile, Read Only, and Read/Write areas of the root hierarchy
Unpacked and run from ramdisk or run directly from flash
A system to commit configuration changes to a non-volatile medium (Disk/Flash)
Unreleased Work
Dave Cinege worked on a version 4.0 rewrite of LRP from late 2000 into January 2001. He then began testing some ideas he had with proof of concept code, which he claimed was a radical departure from the status quo. To his surprise, this new direction seemed ideal, prompting him to abandon all work done on LRP 4.0 and begin from scratch on a new OS named LRP 5.0.
LRP 5.0 development was headed towards a complete rewrite and reimplementation of Linux userland with a new standard design outside of the POSIX specification. The stated purpose of this was to provide a modern standard base operating system suitable for any application including embedded systems, appliances, servers, and desktop computers.
Cinege however stopped work several months later due to financial reasons. He refused to release any further work, or even the name of this OS, due to animosity towards the computer industry and what he perceived as the plundering of open source authors' work by large corporations.
On May 6, 2003 Cinege updated the LRP website to reflect that the project was being abandoned.
LRP 5.0 Proposed Features
A base OS size of 8MB
A new shell and scripting language unrelated to the Bourne shell
A new packaging scheme that would retrofit other OSes
An application management system
A core process management system
References
External links
Linux Journal Article on Linux Router and its comparison with Cisco Routers
Discontinued Linux distributions
Floppy-based Linux distributions
Free routing software
Gateway/routing/firewall distribution
Light-weight Linux distributions
Linux distributions |
https://en.wikipedia.org/wiki/Cache%20coherence | In computer architecture, cache coherence is the uniformity of shared resource data that ends up stored in multiple local caches. When clients in a system maintain caches of a common memory resource, problems may arise with incoherent data, which is particularly the case with CPUs in a multiprocessing system.
In the illustration on the right, consider that both clients have a cached copy of a particular memory block from a previous read. If the client on the bottom updates that memory block, the client on the top could be left with an invalid cache of memory without any notification of the change. Cache coherence is intended to manage such conflicts by maintaining a coherent view of the data values in multiple caches.
Overview
In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.
The following are the requirements for cache coherence:
Write Propagation Changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches.
Transaction Serialization Reads/Writes to a single memory location must be seen by all processors in the same order.
Theoretically, coherence can be performed at the load/store granularity. However, in practice it is generally performed at the granularity of cache blocks.
Definition
Coherence defines the behavior of reads and writes to a single address location.
The situation in which copies of the same data exist simultaneously in different caches and are kept consistent is called cache coherence or, in some systems, global memory.
In a multiprocessor system, consider that more than one processor has cached a copy of the memory location X. The following conditions are necessary to achieve cache coherence:
In a read made by a processor P to a location X that follows a write by the same processor P to X, with no writes to X by another processor occurring between the write and the read instructions made by P, X must always return the value written by P.
In a read made by a processor P1 to location X that follows a write by another processor P2 to X, with no other writes to X made by any processor occurring between the two accesses and with the read and write being sufficiently separated, X must always return the value written by P2. This condition defines the concept of coherent view of memory. Propagating the writes to the shared memory location ensures that all the caches have a coherent view of the memory. If processor P1 reads the old value of X, even after the write by P2, we can say that the memory is incoherent.
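These conditions can be illustrated with a toy write-through, invalidate-on-write cache model. This is a deliberate simplification of real protocols such as MSI/MESI, and all names here are illustrative:

```python
class Cache:
    """Toy write-through cache that invalidates peer copies on write."""
    all_caches = []

    def __init__(self, memory):
        self.memory = memory
        self.lines = {}                      # address -> cached value
        Cache.all_caches.append(self)

    def read(self, addr):
        if addr not in self.lines:           # miss: fetch from memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value            # write through to memory
        for peer in Cache.all_caches:        # write propagation:
            if peer is not self:
                peer.lines.pop(addr, None)   # invalidate stale copies

memory = {0x10: 1}
p1, p2 = Cache(memory), Cache(memory)

assert p1.read(0x10) == 1 and p2.read(0x10) == 1   # both cache X
p2.write(0x10, 42)                                  # P2 writes X
# Second condition: P1's subsequent read returns the value written by
# P2, because P2's write invalidated P1's stale copy.
assert p1.read(0x10) == 42
```

A passing run shows both conditions holding: the write by P2 is propagated (P1's stale line is invalidated), and every cache subsequently observes the same value for location X.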
The above conditions satisfy the Write Propagation criteria required for cache |
https://en.wikipedia.org/wiki/Appliance | Appliance may refer to:
Electrical equipment and machinery
Computer appliance, a computing device with a specific function and limited configuration ability, e.g.:
Storage appliance, provides storage functionality for multiple attached systems using the transparent local storage area networks paradigm
Anti-spam appliances, detect and eliminate e-mail spam
Firewall (computing), a computer appliance designed to protect computer networks from unwanted traffic
Network appliance, a general-purpose router
Security appliance, a computer appliance designed to protect computer networks from unwanted traffic
Software appliance, a software application that might be combined with just enough operating system (JeOS) for it to run optimally on industry standard hardware
Virtual appliance, a pre-configured virtual machine image, ready to run on a hypervisor
Home appliance, a household machine that uses electricity or some other energy input which includes MVHR units
Small appliance, also called a small domestic appliance or small electric, a portable or semi-portable machine, generally used on a table top, counter top, or other platform, to accomplish a household task
Major appliance, or domestic appliance, a large machine used for a routine housekeeping task
Arts, entertainment, and media
Appliance (band), a British musical group
Appliance, a motion pictures industry term for a latex piece, such as false ears or other features, used by make-up artists
Fire safety
Fire alarm notification appliance, an active fire protection component of a fire alarm system
Fire apparatus, a fire engine or fire truck in British English
Healthcare
Appliance, in medicine and dentistry, a device custom-fitted to an individual for the purpose of correction of a physical or dental problem, e.g.:
Dental braces
Orthotics, an orthotic appliance
Prosthesis |
https://en.wikipedia.org/wiki/Computational%20geometry | Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing with a history stretching back to antiquity.
Computational complexity is central to computational geometry, with great practical significance if algorithms are used on very large datasets containing tens or hundreds of millions of points. For such sets, the difference between O(n²) and O(n log n) may be the difference between days and seconds of computation.
The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.
Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction).
The main branches of computational geometry are:
Combinatorial computational geometry, also called algorithmic geometry, which deals with geometric objects as discrete entities. A groundlaying book in the subject by Preparata and Shamos dates the first use of the term "computational geometry" in this sense to 1975.
Numerical computational geometry, also called machine geometry, computer-aided geometric design (CAGD), or geometric modeling, which deals primarily with representing real-world objects in forms suitable for computer computations in CAD/CAM systems. This branch may be seen as a further development of descriptive geometry and is often considered a branch of computer graphics or CAD. The term "computational geometry" in this meaning has been in use since 1971.
Although most algorithms of computational geometry have been developed (and are being developed) for electronic computers, some algorithms were developed for unconventional computers (e.g. optical computers )
Combinatorial computational geometry
The primary goal of research in combinatorial computational geometry is to develop efficient algorithms and data structures for solving problems stated in terms of basic geometrical objects: points, line segments, polygons, polyhedra, etc.
Some of these problems seem so simple that they were not regarded as problems at all until the advent of computers. Consider, for example, the Closest pair problem:
Given n points in the plane, find the two with the smallest distance from each other.
One could compute the distances between all the pairs of points, of which there are n(n−1)/2, and pick the pair with the smallest distance.
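The brute-force approach can be sketched in a few lines of Python, assuming points are given as 2-D tuples (the function name is illustrative; the classic divide-and-conquer algorithm improves the O(n²) bound to O(n log n)):

```python
import math
from itertools import combinations

def closest_pair(points):
    # Brute force: examine all n(n-1)/2 pairs and keep the pair at
    # minimum Euclidean distance -- O(n^2) comparisons.
    return min(
        combinations(points, 2),
        key=lambda pair: math.dist(pair[0], pair[1]),
    )

pts = [(0, 0), (5, 4), (1, 1), (9, 9)]
# (0, 0) and (1, 1) are sqrt(2) apart, closer than any other pair.
assert closest_pair(pts) == ((0, 0), (1, 1))
```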
https://en.wikipedia.org/wiki/Tommy%20Mu%C3%B1iz | Lucas Tomás Muñiz Ramírez (4 February 1922 – 15 January 2009), better known as Tommy Muñiz, was a Puerto Rican comedy and drama actor, media producer, businessman and network owner. He is considered to be one of the pioneering figures of the television business in Puerto Rico. Although Muñíz was born in Ponce, he was raised in the capital city of San Juan where he studied. Muñíz developed an interest in the entertainment business thanks to his father Tomas and to his uncle and godfather Félix Muñíz, who also produced radio programs. Muñiz was a successful radio producer in Puerto Rico during the mid- to late 1940s. Five of his radio programs -comedies for which he was often the scriptwriter, sometimes with the assistance of Sylvia Rexach- would consistently earn a strong following, as judged by the attendance to personal presentations of the artists featured in them. He was responsible for introducing more than a dozen new artists to the media. He bought Radio Luz 1600 (WLUZ-AM) a radio station in Bayamón, Puerto Rico During the first years of commercial television in Puerto Rico, and after a brief period during which revenues from his radio productions trickled down, Muñiz opted to start producing television programs as well. During the 1940s, when radios where ubiquitous in Puerto Rican households, Muñiz's radio scripts then became increasingly successful, beginning with El colegio de la alegría, in which he performed along José Miguel Agrelot. This was followed by La familia Pérez, Adelita, la secretaria, Gloria y Miguel and ¡Qué sirvienta!, all of which featured him in some function. He was producer or executive producer for dozens of television programs and specials between 1955 and 1995. At one time in the early 1960s, five programs produced by Muñiz were in the top five television rankings in local audience surveys. 
One of the programs even spawned a 1967 film, "La Criada Malcriada", starring Velda González, Shorty Castro and Muñiz, among others. He is credited for producing most of José Miguel Agrelot's television programs during his career. He is also credited with discovering and promoting other television artists as well, particularly Otilio Warrington. In the 1970s he was the owner of WRIK-TV Channel 7 in Ponce.
In the late 1970s, Muñiz revived a comedy format that he had successfully used in three previous radio and television productions, the family sitcom. He produced and acted in a comedy series named Los García together with his real-life son Rafo Muñiz, and with longtime friend Gladys Rodríguez. Also starring were William Gracia as Pepín, Gina Beveraggi as Gini, Edgardo Rubio as Junito, Manela Bustamante as Doña Tony, Emma Rosa Vincenty as Doña Cayetana, and a number of additional actors in various roles. The show became the most successful television show in Puerto Rican history, having a mostly successful six-year run and staying for three of those years at the top of local television ratings. During the late 1970s and earl |
https://en.wikipedia.org/wiki/Polyglot%20%28disambiguation%29 | A polyglot is someone who speaks multiple languages.
Polyglot may also refer to:
Polyglot (book), a book that contains the same text in more than one language
Polyglot (computing), a computer program that is valid in more than one programming language
Polyglot (webzine), a biweekly game industry webzine published by Polymancer Studios
Polyglot markup, HTML markup that conforms to both the HTML and XHTML specifications
Polyglot Petition, a global call for a common cause, such as prohibitionism
The Polyglots, a 1925 novel by Anglo-Russian William Gerhardie
See also
List of polyglots
Multilingualism, the use of multiple languages, either by an individual or by a community
Pidgin, a language that develops between groups who do not share a common language
Mixed language
|
https://en.wikipedia.org/wiki/Memory%20management%20unit | A memory management unit (MMU), sometimes called paged memory management unit (PMMU), is a computer hardware unit that examines all memory references on the memory bus, translating these requests, known as virtual memory addresses, into physical addresses in main memory.
In modern systems, programs generally have addresses that access the theoretical maximum memory of the computer architecture, 32 or 64 bits. The MMU maps the addresses from each program into separate areas in physical memory, which is generally much smaller than the theoretical maximum. This is possible because programs rarely use large amounts of memory at any one time.
Most modern operating systems (OS) work in concert with the MMU to provide virtual memory (VM) support. The MMU tracks memory use in fixed-size blocks known as pages, and if a program refers to a location in a page that is not in physical memory, the MMU will cause an interrupt to the operating system. The OS will then select a lesser-used block in memory, write it to backing storage such as a hard drive if it's been modified since it was read in, read the page from backing storage into that block, and set up the MMU to map the block to the originally requested page so the program can use it. This is known as demand paging.
Modern MMUs generally perform additional memory-related tasks as well. Memory protection blocks attempts by a program to access memory it has not previously requested, which prevents a misbehaving program from using up all memory or malicious code from reading data from another program. They also often manage a processor cache, which stores recently accessed data in a very fast memory and thus reduces the need to talk to the slower main memory. In some implementations, they are also responsible for bus arbitration, controlling access to the memory bus among the many parts of the computer that desire access.
Prior to VM systems becoming widespread in the 1990s, earlier MMU designs were more varied. Common among these was paged translation, which was similar to modern demand paging in that it used fixed-size blocks, but had a fixed-size list of pages that divided up memory; this meant that the block size was a function of the number of pages and the installed memory. Another common technique, found mostly on larger machines, was segmented translation, which allowed for variable-size blocks of memory that better mapped onto program requests. This was efficient but did not map as well onto virtual memory. Some early systems, especially 8-bit systems, used very simple MMUs to perform bank switching.
Overview
Modern MMUs typically divide the virtual address space (the range of addresses used by the processor) into pages, each having a size which is a power of 2, usually a few kilobytes, but they may be much larger. Programs reference memory using the natural address size of the machine, typically 32 or 64-bits in modern systems. The bottom bits of the address (the offset within a page) are le |
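The page-number/offset split described above can be illustrated in a few lines. This is a hedged sketch, assuming 4 KB pages and a made-up page table; it models no specific MMU:

```python
PAGE_SHIFT = 12                      # assume 4 KB pages (2**12 bytes)
OFFSET_MASK = (1 << PAGE_SHIFT) - 1

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0x1: 0x0A3, 0x2: 0x1FF}

def translate(vaddr):
    """Split a virtual address into page number and offset, then remap."""
    vpn = vaddr >> PAGE_SHIFT        # top bits select the page
    offset = vaddr & OFFSET_MASK     # bottom bits pass through unchanged
    if vpn not in page_table:
        # A real MMU raises a page fault here for the OS to service
        # (demand paging, as described above).
        raise LookupError("page fault: vpn 0x%x" % vpn)
    return (page_table[vpn] << PAGE_SHIFT) | offset

paddr = translate(0x1ABC)            # page 0x1, offset 0xABC -> 0xA3ABC
```

Real MMUs perform this lookup in hardware, usually through a multi-level page table cached by a translation lookaside buffer (TLB).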
https://en.wikipedia.org/wiki/Macintosh%20II | The Macintosh II is a personal computer designed, manufactured, and sold by Apple Computer from March 1987 to January 1990. Based on the Motorola 68020 32-bit CPU, it is the first Macintosh supporting color graphics. When introduced, a basic system with monitor and 20 MB hard drive cost . With a 13-inch color monitor and 8-bit display card the price was around . This placed it in competition with workstations from Silicon Graphics, Sun Microsystems, and Hewlett-Packard.
The Macintosh II was the first computer in the Macintosh line without a built-in display; a monitor rested on top of the case like the IBM Personal Computer and Amiga 1000. It was designed by hardware engineers Michael Dhuey (computer) and Brian Berkeley (monitor) and industrial designer Hartmut Esslinger (case).
Eighteen months after its introduction, the Macintosh II was updated with a more powerful CPU and sold as the Macintosh IIx. In early 1989, the more compact Macintosh IIcx was introduced at a price similar to the original Macintosh II, and by the beginning of 1990 sales stopped altogether. Motherboard upgrades to turn a Macintosh II into a IIx or Macintosh IIfx were offered by Apple.
Development
Two common criticisms of the Macintosh from its introduction in 1984 were the closed architecture and lack of color; rumors of a color Macintosh began almost immediately.
The Macintosh II project was begun by Dhuey and Berkeley during 1985 without the knowledge of Apple co-founder and Macintosh division head Steve Jobs, who opposed expansion slots and color, on the basis that the former complicated the user experience and the latter did not conform to WYSIWYG—color printers were not common. Jobs instead wanted higher-resolution monochrome displays, such as the ones chosen for his own "BigMac" project begun in 1984 to develop a Macintosh successor.
Initially referred to as "Little Big Mac", the Macintosh II was codenamed "Milwaukee" after Dhuey's hometown, and later went through a series of new names. After Jobs was fired from Apple in September 1985, the Milwaukee project could proceed openly (while Jobs' own BigMac project was finally cancelled).
The Macintosh II was introduced at the AppleWorld 1987 conference in Los Angeles, with low-volume initial shipments starting two months later. Retailing for US $5,498, the Macintosh II was the first modular Macintosh model, so called because it came in a horizontal desktop case like many IBM PC compatibles of the time. Previous Macintosh computers use an all-in-one design with a built-in black-and-white CRT.
The Macintosh II has drive bays for an internal hard disk (originally 40 MB or 80 MB) and an optional second floppy disk drive. It, along with the Macintosh SE, was the first Macintosh to use the Apple Desktop Bus (ADB) introduced with the Apple IIGS for keyboard and mouse interface.
The primary improvement in the Macintosh II was Color QuickDraw in ROM, a color version of the graphics routines. Color QuickDraw can handle |
https://en.wikipedia.org/wiki/Microsoft%20FrontPage | Microsoft FrontPage (full name Microsoft Office FrontPage) is a discontinued WYSIWYG HTML editor and website administration tool from Microsoft for the Microsoft Windows line of operating systems. It was branded as part of the Microsoft Office suite from 1997 to 2003. Microsoft FrontPage has since been replaced by Microsoft Expression Web and SharePoint Designer, which were first released in December 2006 alongside Microsoft Office 2007. These two successors were themselves later discontinued in favor of a web-based version of SharePoint Designer, as all three HTML editors were desktop applications.
History
FrontPage was initially created by Cambridge, Massachusetts company Vermeer Technologies, Incorporated, evidence of which can be easily spotted in file names and directories prefixed _vti_ in web sites created using FrontPage. Vermeer was acquired by Microsoft in January 1996 specifically so that Microsoft could add FrontPage to its product line-up, allowing them to gain an advantage in the browser wars, as FrontPage was designed to create web pages for their own browser, Internet Explorer.
As a "WYSIWYG" (What You See Is What You Get) editor, FrontPage is designed to hide the details of pages' HTML code from the user, making it possible for novices to create web pages and web sites easily.
FrontPage's initial outing under the Microsoft name came in 1996 with the release of Windows NT 4.0 Server and its constituent Web server Internet Information Services 2.0. Bundled on CD with the NT 4.0 Server release, FrontPage 1.1 would run under NT 4.0 (Server or Workstation) or Windows 95. Up to FrontPage 98, the FrontPage Editor, which was used for designing pages, was a separate application from the FrontPage Explorer which was used to manage web site folders. With FrontPage 2000, both programs were merged into the Editor.
FrontPage used to require a set of server-side plugins originally known as IIS Extensions. The extension set was significantly enhanced for Microsoft inclusion of FrontPage into the Microsoft Office line-up with Office 97 and subsequently renamed FrontPage Server Extensions (FPSE). Both sets of extensions needed to be installed on the target web server for its content and publishing features to work. Microsoft offered both Windows and Unix-based versions of FPSE. FrontPage 2000 Server Extensions worked with earlier versions of FrontPage as well. FPSE 2002 was the last released version which also works with FrontPage 2003 and was later updated for IIS 6.0 as well. However, with FrontPage 2003, Microsoft began moving away from proprietary Server Extensions to standard protocols like FTP and WebDAV for remote web publishing and authoring. FrontPage 2003 can also be used with Windows SharePoint Services.
A version for the classic Mac OS was released in 1998; however, it had fewer features than the Windows product and Microsoft has never updated it.
In 2006, Microsoft announced that FrontPage would eventually be superseded by two products |
https://en.wikipedia.org/wiki/SSC | SSC may refer to:
Businesses
Shanghai Supercomputer Center, a high-performance computing service provider
Shared services center, outsourcing
SSC North America, an automobile manufacturer
Specialized System Consultants, a private media company
Swedish Space Corporation, a Swedish government owned company
Southern Star Central Gas Pipeline, Inc, based in Owensboro, KY
Syrian Satellite Channel, a satellite television channel owned by RTV Syria
Education
Student selected components, optional elements in the syllabus of UK medical schools
Secondary School Certificate, the certificate given to students graduating from a secondary school in India, Pakistan or Bangladesh
Secondary School Leaving Certificate examination
Educational institutions
Hong Kong
St Stephen's College, Hong Kong, in Stanley, Hong Kong Island
St. Stephen's Girls' College, Hong Kong, in Pok Fu Lam, Hong Kong Island
Australia
Sydney Secondary College, a public school in Sydney, Australia
St Stanislaus' College, Bathurst, New South Wales
Santa Sabina College, Strathfield, New South Wales
Saint Stephen's College, Coomera, Queensland
Saint Scholastica's College, Australia
Philippines
St. Scholastica's College, Manila, Philippines
San Sebastian College – Recoletos in Manila, Philippines
United States
Saint Stanislaus College, a high school in Bay St. Louis, Mississippi
Salem State University, a public college in Salem, Massachusetts
Seminole State College (Oklahoma), a public college in Seminole, Oklahoma
Seminole State College of Florida, a public college in Seminole County, Florida
South Seattle College, a two-year public college in Seattle, Washington
South Suburban College, a community college in South Holland, Illinois
Groups and organizations
Sangha Supreme Council, governs Buddhism in Thailand
Saudi Space Commission, government agency of Saudi Arabia
Sector skills councils in the UK, employers' organisations for reducing skills gaps
Shared Services Canada
Sierra Student Coalition, a student-run arm of the Sierra Club, an environmental organization in the United States
Singapore Symphony Chorus, choir of the Singapore Symphony Orchestra
SkyscraperCity, an online forum for urban discussion
Society of the Holy Cross (Societas Sanctae Crucis), traditionalist Anglo-Catholic society of male priests
Society of Solicitors in the Supreme Courts of Scotland, a professional association of solicitors
Space Systems Command, the acquisition, research and development, and launch command of the United States Space Force
Species Survival Commission of the International Union for Conservation of Nature
Staff Selection Commission, conducts entry exams for Indian Government staff
State Security Council of apartheid South Africa
State Services Commission of New Zealand, oversees NZ public sector performance
Statistical Society of Canada, promotes use and understanding of statistical methods
Sunni Students Council, council for Muslim students headquarte |
https://en.wikipedia.org/wiki/VoiceXML | VoiceXML (VXML) is a digital document standard for specifying interactive media and voice dialogs between humans and computers. It is used for developing audio and voice response applications, such as banking systems and automated customer service portals. VoiceXML applications are developed and deployed in a manner analogous to how a web browser interprets and visually renders the Hypertext Markup Language (HTML) it receives from a web server. VoiceXML documents are interpreted by a voice browser and in common deployment architectures, users interact with voice browsers via the public switched telephone network (PSTN).
The VoiceXML document format is based on Extensible Markup Language (XML). It is a standard developed by the World Wide Web Consortium (W3C).
Usage
VoiceXML applications are commonly used in many industries and segments of commerce. These applications include order inquiry, package tracking, driving directions, emergency notification, wake-up, flight tracking, voice access to email, customer relationship management, prescription refilling, audio news magazines, voice dialing, real-estate information and national directory assistance applications.
VoiceXML has tags that instruct the voice browser to provide speech synthesis, automatic speech recognition, dialog management, and audio playback. The following is an example of a VoiceXML document:
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
<form>
<block>
<prompt>
Hello world!
</prompt>
</block>
</form>
</vxml>
When interpreted by a VoiceXML interpreter this will output "Hello world" with synthesized speech.
Typically, HTTP is used as the transport protocol for fetching VoiceXML pages. Some applications may use static VoiceXML pages, while others rely on dynamic VoiceXML page generation using an application server like Tomcat, WebLogic, IIS, or WebSphere.
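Dynamic generation can be as simple as templating the document shown earlier. A minimal illustrative sketch (the helper name is hypothetical and not tied to any particular application server):

```python
from xml.sax.saxutils import escape

def vxml_prompt(text: str) -> str:
    """Build a minimal VoiceXML document that speaks `text`."""
    return (
        '<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">'
        "<form><block><prompt>"
        + escape(text)               # XML-escape user-supplied text
        + "</prompt></block></form></vxml>"
    )

doc = vxml_prompt("Hello world!")
```

A server would return such a document with an appropriate content type so the voice browser can interpret it, just as a web server returns HTML to a visual browser.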
Historically, VoiceXML platform vendors have implemented the standard in different ways, and added proprietary features. But the VoiceXML 2.0 standard, adopted as a W3C Recommendation on 16 March 2004, clarified most areas of difference. The VoiceXML Forum, an industry group promoting the use of the standard, provides a conformance testing process that certifies vendors' implementations as conformant.
History
AT&T Corporation, IBM, Lucent, and Motorola formed the VoiceXML Forum in March 1999, in order to develop a standard markup language for specifying voice dialogs. By September 1999 the Forum released VoiceXML 0.9 for member comment, and in March 2000 they published VoiceXML 1.0. Soon afterwards, the Forum turned over the control of the standard to the W3C. The W3C produced several intermediate versions of VoiceXML 2.0, which reached the final "Recommendation" stage in March 2004.
VoiceXML 2.1 added a relatively small set of additional features to VoiceXML 2.0, based on feedback from implementations of the 2.0 standard. It is backward compatible with VoiceXML 2.0 and reache |
https://en.wikipedia.org/wiki/List%20of%20data%20structures | This is a list of well-known data structures. For a wider list of terms, see list of terms relating to algorithms and data structures. For a comparison of running times for a subset of this list see comparison of data structures.
Data types
Primitive types
Boolean, true or false.
Character
Floating-point representation of a finite subset of the rationals.
Including single-precision and double-precision IEEE 754 floats, among others
Fixed-point representation of the rationals
Integer, a direct representation of either the integers or the non-negative integers
Reference, sometimes erroneously referred to as a pointer or handle, is a value that refers to another value, possibly including itself
Symbol, a unique identifier
Enumerated type, a set of symbols
Composite types or non-primitive types
Array, a sequence of elements of the same type stored contiguously in memory
Record (also called a structure or struct), a collection of fields
Product type (also called a tuple), a record in which the fields are not named
String, a sequence of characters representing text
Union, a datum which may be one of a set of types
Tagged union (also called a variant, discriminated union or sum type), a union with a tag specifying which type the data is
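The tagged-union entry above can be made concrete with a short sketch. In a language without built-in sum types, the tag can simply be the runtime class of the value (the shape classes here are invented for illustration):

```python
from dataclasses import dataclass
from typing import Union

# Each variant class serves as the "tag" identifying which kind of
# value the union currently holds.
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

Shape = Union[Circle, Rect]          # the tagged union ("sum type")

def area(s: Shape) -> float:
    if isinstance(s, Circle):        # dispatch on the tag
        return 3.141592653589793 * s.radius ** 2
    return s.width * s.height
```

An untagged union, by contrast, stores one of several types with no record of which one, so the consumer must know the type by some outside convention.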
Abstract data types
Container
List
Tuple
Associative array, Map
Multimap
Set
Multiset (bag)
Stack
Queue (example Priority queue)
Double-ended queue
Graph (example Tree, Heap)
Some properties of abstract data types:
"Ordered" means that the elements of the data type have some kind of explicit order to them, where an element can be considered "before" or "after" another element. This order is usually determined by the order in which the elements are added to the structure, but the elements can be rearranged in some contexts, such as sorting a list. For a structure that isn't ordered, on the other hand, no assumptions can be made about the ordering of the elements (although a physical implementation of these data types will often apply some kind of arbitrary ordering).
"Uniqueness" means that duplicate elements are not allowed. Depending on the implementation of the data type, an attempt to add a duplicate element may be ignored, may overwrite the existing element, or may raise an error. The detection of duplicates is based on some inbuilt (or alternatively, user-defined) rule for comparing elements.
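The ordered/uniqueness distinction above is easy to see by contrasting a list with a set (illustrative example, not from the source):

```python
items = ["b", "a", "b", "c"]

as_list = list(items)    # ordered: insertion order kept, duplicates allowed
as_set = set(items)      # unique: the duplicate "b" is silently ignored

assert as_list == ["b", "a", "b", "c"]
assert as_set == {"a", "b", "c"}
assert len(as_set) == 3  # adding the duplicate was a no-op, not an error
```

Here the set takes the "ignore duplicates" option; other implementations of a unique collection might overwrite or raise an error instead.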
Linear data structures
A data structure is said to be linear if its elements form a sequence.
Arrays
Array
Bit array
Bit field
Bitboard
Bitmap
Circular buffer
Control table
Image
Dope vector
Dynamic array
Gap buffer
Hashed array tree
Lookup table
Matrix
Parallel array
Sorted array
Sparse matrix
Iliffe vector
Variable-length array
Lists
Doubly linked list
Array list
Linked list, also known as a singly linked list
Association list
Self-organizing list
Skip list
Unrolled linked list
VList
Conc-tree list
Xor linked list
Zipper
Doubly connected edge list, also known as half-edge
Difference list
Free list
Trees
Tre |
https://en.wikipedia.org/wiki/Equational%20prover | EQP, an abbreviation for equational prover, is an automated theorem proving program for equational logic, developed by the Mathematics and Computer Science Division of the Argonne National Laboratory. It was one of the provers used for solving a longstanding problem posed by Herbert Robbins, namely, whether all Robbins algebras are Boolean algebras.
External links
EQP project.
Robbins Algebras Are Boolean.
Argonne National Laboratory, Mathematics and Computer Science Division.
Theorem proving software systems |
https://en.wikipedia.org/wiki/Warhol%20worm | A Warhol worm is a computer worm that spreads as fast as physically possible, infecting all vulnerable machines on the entire Internet in 15 minutes or less. The term is based on the claim that "in the future, everyone will have 15 minutes of fame", which has been misattributed to Andy Warhol. A 2002 paper presented at the 11th USENIX Security Symposium proposed designs for better worms, such as a "flash worm" that identifies a hit-list of vulnerable targets before attacking.
In 2003, SQL Slammer became the first observed example of a Warhol worm. The mechanism of SQL Slammer's spread used a pseudo-random number generator seeded from a system variable to determine which IP addresses to attack next for a rapid, unpredictable spread.
According to an analysis of the SQL Slammer outbreak by the Center for Applied Internet Data Analysis (CAIDA), its growth followed an exponential curve with a doubling time of 8.5 seconds in the early phases of the attack, which was only slowed by the collapse of many networks because of the denial of service attack caused by SQL Slammer's traffic. 90% of all vulnerable machines were infected within 10 minutes, showing that the original estimate for infection speed was roughly correct.
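The doubling-time figure above implies strikingly short infection timescales under an idealized exponential model. A hedged back-of-the-envelope sketch (the 75,000-host figure is an approximate vulnerable population used for illustration, and the model ignores the network saturation that slowed the real outbreak):

```python
import math

DOUBLING_TIME_S = 8.5    # early-phase doubling time reported by CAIDA

def infected(t_seconds, initial=1):
    """Idealized exponential spread, ignoring network saturation."""
    return initial * 2 ** (t_seconds / DOUBLING_TIME_S)

# Doublings needed to go from one host to roughly 75,000
# (an illustrative figure for the vulnerable population).
t_total = DOUBLING_TIME_S * math.log2(75_000)   # about 138 seconds
```

Under these assumptions the whole vulnerable population is reached in a little over two minutes, consistent with the observation that 90% of vulnerable machines were infected within ten minutes.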
References
Computer worms |
https://en.wikipedia.org/wiki/NetBIOS | NetBIOS is an acronym for Network Basic Input/Output System. It provides services related to the session layer of the OSI model allowing applications on separate computers to communicate over a local area network. As strictly an API, NetBIOS is not a networking protocol. Operating systems of the 1980s (DOS and Novell NetWare primarily) ran NetBIOS over IEEE 802.2 and IPX/SPX using the NetBIOS Frames (NBF) and NetBIOS over IPX/SPX (NBX) protocols, respectively. In modern networks, NetBIOS normally runs over TCP/IP via the NetBIOS over TCP/IP (NBT) protocol. NetBIOS is also used for identifying system names in TCP/IP (Windows). Simply stated, it allows communication of data for files and printers through the session layer of the OSI model in a LAN.
History and terminology
NetBIOS is an operating system-level API that allows applications on computers to communicate with one another over a local area network (LAN). The API was created in 1983 by Sytek Inc. for software communication over IBM PC Network LAN technology. On IBM PC Network, as an API alone, NetBIOS relied on proprietary Sytek networking protocols for communication over the wire.
In 1985, IBM went forward with the Token Ring network scheme and produced an emulator of Sytek's NetBIOS API to allow NetBIOS-aware applications from the PC-Network era to work over IBM's new Token Ring hardware. This IBM emulator, named NetBIOS Extended User Interface (NetBEUI), expanded the base NetBIOS API created by Sytek with, among other things, the ability to deal with the greater node capacity of Token Ring. A new networking protocol, NBF, was simultaneously produced by IBM to allow its NetBEUI API (their enhanced NetBIOS API) to provide its services over Token Ring – specifically, at the IEEE 802.2 Logical Link Control layer.
In 1985, Microsoft created its own implementation of the NetBIOS API for its MS-Net networking technology. As in the case of IBM's Token Ring, the services of Microsoft's NetBIOS implementation were provided over the IEEE 802.2 Logical Link Control layer by the NBF protocol.
In 1986, Novell released Advanced Novell NetWare 2.0 featuring the company's own emulation of the NetBIOS API. Its services were encapsulated within NetWare's IPX/SPX protocol using the NetBIOS over IPX/SPX (NBX) protocol.
In 1987, a method of encapsulating NetBIOS in TCP and UDP packets, NetBIOS over TCP/IP (NBT), was published. It was described in RFC 1001 ("Protocol Standard for a NetBIOS Service on a TCP/UDP Transport: Concepts and Methods") and RFC 1002 ("Protocol Standard for a NetBIOS Service on a TCP/UDP Transport: Detailed Specifications"). The NBT protocol was developed in order to "allow an implementation [of NetBIOS applications] to be built on virtually any type of system where the TCP/IP protocol suite is available," and to "allow NetBIOS interoperation in the Internet."
After the PS/2 computer hit the market in 1987, IBM released the PC LAN Support Program, which inclu |
https://en.wikipedia.org/wiki/Jakarta%20XML%20Registries | Jakarta XML Registries (JAXR; formerly Java API for XML Registries) defines a standard API for Jakarta EE applications to access and programmatically interact with various kinds of metadata registries. JAXR is one of the Java XML programming APIs. The JAXR API was developed under the Java Community Process as JSR 93.
JAXR provides a uniform and standard Java API for accessing different kinds of XML-based metadata registry. Current implementations of JAXR support ebXML Registry version 2.0, and UDDI version 2.0. More such registries could be defined in the future. JAXR provides an API for the clients to interact with XML registries and a service provider interface (SPI) for the registry providers so they can plug in their registry implementations. The JAXR API insulates application code from the underlying registry mechanism. When writing a JAXR based client to browse or populate a registry, the code does not have to change if the registry changes, for instance from UDDI to ebXML.
Jakarta XML Registries (JAXR) was removed from Jakarta EE 9.
References
External links
Apache Scout is an open source implementation of the JSR 93
JAXR home page
freebXML Registry Provides a royalty-free open source JAXR implementation
XML-based standards
Java API for XML
Java specification requests
Java enterprise platform |
https://en.wikipedia.org/wiki/Saturday-morning%20cartoon | "Saturday-morning cartoon" is a colloquial term for the original animated series and live-action programming that was typically scheduled on Saturday and Sunday mornings in the United States on the "Big Three" television networks. The genre's popularity had a broad peak from the mid-1960s through the mid-2000s; over time it declined, in the face of changing cultural norms, increased competition from formats available at all times, and heavier regulations. In the last two decades of the genre's existence, Saturday-morning and Sunday-morning cartoons were primarily created and aired to meet regulations on children's television programming in the United States, or E/I. Minor television networks, in addition to the non-commercial PBS in some markets, continue to air animated programming on Saturday and Sunday while partially meeting those mandates.
In the United States, the generally accepted times for these and other children's programs to air on Saturday mornings were from 8:00 a.m. to approximately 1:00 p.m. Eastern Time Zone. Until the late 1970s, American networks also had a schedule of children's programming on Sunday mornings, though most programs at this time were repeats of Saturday-morning shows that were already out of production. In some markets, some shows were pre-empted in favor of syndicated or other types of local programming. Saturday-morning and Sunday-morning cartoons were largely discontinued in Canada by 2002. In the United States, The CW continued to air non-E/I cartoons as late as 2014; among the "Big Three" traditional major networks, the final non-E/I cartoon to date (Kim Possible) was last aired in 2006. Cable television networks have since then revived the practice of debuting their most popular animated programming on Saturday and Sunday mornings on a sporadic basis.
History
Early cartoons
Although the Saturday-morning timeslot had always featured a great deal of children's television series beginning in the early 1950s, the idea of commissioning new animated series for broadcast on Saturday mornings caught on in the mid-1960s, when the networks realized that they could concentrate kids' viewing on that one morning to appeal to advertisers, notably manufacturers of toys and breakfast cereals. Furthermore, limited animation, such as that produced by such studios as Filmation, DePatie–Freleng Enterprises, Total Television, Jay Ward Productions and Hanna-Barbera, was economical enough to produce in sufficient quantity to fill the five-hour block of time, as compared to live-action programming. While production times and costs were undeniably higher with animated programming, the cost of talent was far less (voice actors became known for their ability to perform several characters at once, sometimes even on the same show) and networks could rerun children's animated programming more frequently than most live-action series, due to the belief that children would not remember the original airings enough to lose interest, neg |
https://en.wikipedia.org/wiki/OASIS%20%28organization%29 | The Organization for the Advancement of Structured Information Standards (OASIS) is a nonprofit consortium that works on the development, convergence, and adoption of open standards for cybersecurity, blockchain, Internet of things (IoT), emergency management, cloud computing, legal data exchange, energy, content technologies, and other areas.
History
OASIS was founded under the name "SGML Open" in 1993. It began as a trade association of Standard Generalized Markup Language (SGML) tool vendors to cooperatively promote the adoption of SGML through mainly educational activities, though some amount of technical activity was also pursued including an update of the CALS Table Model specification and specifications for fragment interchange and entity management.
In 1998, with the movement of the industry to XML, SGML Open changed its emphasis from SGML to XML, and changed its name to OASIS Open to be inclusive of XML and reflect an expanded scope of technical work and standards. The focus of the consortium's activities also moved from promoting adoption (as XML was getting much attention on its own) to developing technical specifications. In July 2000 a new technical committee process was approved. With the adoption of the process the manner in which technical committees were created, operated, and progressed their work was regularized. At the adoption of the process there were five technical committees; by 2004 there were nearly 70.
During 1999, OASIS was approached by UN/CEFACT, the committee of the United Nations dealing with standards for business, to jointly develop a new set of specifications for electronic business. The joint initiative, called "ebXML" and which first met in November 1999, was chartered for a three-year period. At the final meeting under the original charter, in Vienna, UN/CEFACT and OASIS agreed to divide the remaining work between the two organizations and to coordinate the completion of the work through a coordinating committee. In 2004 OASIS submitted its completed ebXML specifications to ISO TC154 where they were approved as ISO 15000.
The consortium has its headquarters in Burlington, Massachusetts, shared with other companies. On September 4, 2014, the consortium moved from 25 Corporate Drive Suite 103 to 35 Corporate Dr Suite 150, still on the same loop route.
Standards development
The following standards are under development or maintained by OASIS technical committees:
AMQP — Advanced Message Queuing Protocol, an application layer protocol for message-oriented middleware.
BCM — Business-Centric Methodology, a comprehensive approach and set of proven techniques that enable a service-oriented architecture (SOA) and support enterprise agility and interoperability.
CAM — Content Assembly Mechanism, a generalized assembly mechanism for using templates of XML business transaction content and the associated rules. CAM templates augment schema syntax and provide implementers with the means to specify interopera |
https://en.wikipedia.org/wiki/Sealab%202021 | Sealab 2021 is an American adult animated television series created by Adam Reed and Matt Thompson for Cartoon Network's late-night programming block, Adult Swim. Cartoon Network aired the show's first three episodes in December 2000 before the official inception of the Adult Swim block on September 2, 2001, with the final episode airing on April 24, 2005. Sealab 2021 is one of only four original Williams Street series that premiered in 2000 before Adult Swim officially launched, the others being Aqua Teen Hunger Force, The Brak Show, and Harvey Birdman, Attorney at Law.
Much like Adult Swim's Space Ghost Coast to Coast, the animation used stock footage from a 1970s Hanna-Barbera cartoon, in this case the short-lived, environmentally-themed Sealab 2020, along with original animation. The show was a satirical parody of both the original Sealab series and the general conventions of the 1970s animated children's series. While there was initial resistance from several of the original series' creators to the reuse of their characters, production moved forward on the series. Sealab 2021 was produced by 70/30 Productions.
Episodes
Production
Adam Reed and Matt Thompson, the creators and writers of Sealab 2021, came up with the idea for the show in 1995 while they were production assistants for Cartoon Network. The duo created High Noon Toons in the mid-1990s; this was a three-hour programming block of cartoons hosted by cowboy hand puppets. Thompson and Reed were usually heavily intoxicated while working on the show, and were reprimanded at one point for lighting one of the prop sets on fire. They stumbled on a tape of the show Sealab 2020, and wrote replacement dialogue. Cartoon Network passed on the show because it did not believe it was funny. Five years after quitting Cartoon Network, the two went back to the original tape, this time making the characters do what they wanted. Cartoon Network bought the show, coincidentally around the same time that Adult Swim was created. The original "pitch pilot" is available on the Season 1 DVD as a special feature.
Very few of the episodes of the series share any continuity or ongoing plot. For instance, the entire installation is destroyed at the end of many episodes, and crew members are often killed in horrible ways, only to return in the following episode. There are occasional running gags, such as the "Grizzlebee's" restaurant chain, a parody of Applebee's and Bennigan's, the character of Sharko, and Prescott, the half-man, half-tentacle monster "from the network". It contains many references to the pop culture of the 1980s–2000s and makes use of other cartoons from the 1970s besides that on which it is based, such as 1973's Butch Cassidy for the on-screen appearances of the Sealab writers, and various one-off appearances of other characters.
Characters
Captain Hazel "Hank" Murphy (Harry Goz) is the ostensible leader of the crew, though his qualifications, and even his grasp on reality, are quest |
https://en.wikipedia.org/wiki/Marker | The term Marker may refer to:
Common uses
Marker (linguistics), a morpheme that indicates some grammatical function
Marker (telecommunications), a special-purpose computer
Boundary marker, an object that identifies a land boundary
Marker or Clapperboard, equipment used during filming
Marker, a set of sewing patterns placed over cloth to be cut
Historical marker, a plaque erected at historically significant locations
Marker pen, a felt-tipped pen
Paintball marker, or paintball gun, an air gun
Survey marker, an object placed to mark a point
Places
4253 Märker, a main belt asteroid
Marker, Norway, a municipality in Østfold county, Norway
People
Chris Marker (1921–2012), French film maker and director of La jetée
Cliff Marker (1903–1972), American football player
Friedrich Märker (1893–1985), German writer, essayist, theatre critic and publicist
Gary Marker, American bass guitarist and recording engineer
Gus Marker (1905–1997), Canadian ice hockey player
Harry Marker (1899–1990), American filmmaker
James Marker (c. 1920–2012), American-born Canadian businessman who invented Cheezies
Jamsheed Marker (born 1922), Pakistani diplomat
Nicky Marker (born 1965), English footballer and coach
Peter Marker, Australian rules footballer and media personality
Russell Earl Marker (1902–1995), American chemist
Steve Marker (born 1959), American musician and record producer, guitarist for Garbage
Vic Marker (fl. 1937–1939), American boxer
Voris (designer), née Voris Marker (1908–1973), American fashion designer and sculptor
Arts, entertainment, and media
Marker (band), a band formed in 2017 by Ken Vandermark
Marker (novel), a 2005 novel by Robin Cook
Marker (TV series), a 1995 American drama series
The Marker (film), a 2017 British crime film
TheMarker, an Israeli Hebrew-language daily business newspaper
Pistol Whipped (working title: Marker), a 2008 film starring Steven Seagal
Marker, a business website published by Medium
Companies
Marker (ski bindings), a company specializing in ski bindings
Science
Biological marker, or biomarker, a substance used as an indicator of a biological state
Genetic marker, a DNA sequence with a known location associated with a particular gene or trait
See also
Mark (disambiguation)
Markup (disambiguation)
Marker, Russian combat UGV |
https://en.wikipedia.org/wiki/The%20WB | The WB Television Network (shortened to The WB, and nicknamed the "Frog Network" for its former mascot Michigan J. Frog) was an American television network launched on broadcast television on January 11, 1995, as a joint venture between the Warner Bros. Entertainment division of Time Warner and the Tribune Broadcasting subsidiary of the Tribune Company, with the former acting as controlling partner (and from which The WB received its name). The network aired programs targeting teenagers and young adults between the ages of 13 and 34, while its children's division, Kids' WB, targeted children between the ages of 6 and 12.
On January 24, 2006, Warner Bros. and CBS Corporation announced plans to replace their respective subsidiary networks, The WB and UPN, with The CW later that same year. The WB ended its operations on September 17, 2006, with some programs from both it and competitor UPN (which had shut down on September 15) moving to The CW when it launched the following day, September 18.
Time Warner re-used the WB brand for an online network that launched on April 28, 2008. Until it was closed in December 2013, the website allowed users to watch shows aired on the former television network, as well as programming from the defunct In2TV service created prior to Time Warner's spinoff of AOL. The website could only be accessed within the United States.
History
1993–1995: Origins
Much like its competitor UPN, The WB was created primarily in reaction to the Federal Communications Commission (FCC)'s then-recent deregulation of media ownership rules that repealed the Financial Interest and Syndication Rules, and partly due to the success of the Fox network (which debuted in October 1986, nine years before The WB launched) and first-run syndicated programming during the late 1980s and early 1990s (such as Baywatch, Star Trek: The Next Generation, and War of the Worlds), as well as the erosion in ratings suffered by independent television stations due to the growth of cable television and movie rentals. The network can also trace its beginnings to the Prime Time Entertainment Network (PTEN), a programming service operated as a joint venture between Time Warner and the Chris-Craft Industries group of stations, and launched in January 1993.
On November 2, 1993, the Warner Bros. division of Time Warner announced the formation of The WB Television Network, with the Tribune Company holding a minority interest; as such, Tribune Broadcasting signed agreements to affiliate six of its seven television stations at the time – all of which were independent stations, including the television group's two largest stations, WPIX in New York City and KTLA in Los Angeles – with the network. Only five of these stations – along with a sixth that Tribune acquired the following year – would join The WB at launch (the company's Atlanta independent WGNX would instead agree to affiliate with CBS in September 1994, as a result of Fox's affiliation deal with New World Commu |
https://en.wikipedia.org/wiki/King%20Biscuit%20Flower%20Hour | The King Biscuit Flower Hour was an American syndicated radio show presented by the D.I.R. Radio Network that featured concert performances by various rock music recording artists.
History
The program was broadcast on Sunday nights from 1973 until 1993. Following the end of original programming, the program continued, featuring material from previously broadcast shows, until 2005. During its prime, the program was carried by more than 300 radio stations throughout the United States. The show's name was derived from the influential blues radio show King Biscuit Time, which was sponsored by the King Biscuit Flour Co., combined with the hippie phrase "flower power". The first show was broadcast on February 18, 1973, and featured Blood, Sweat & Tears, the Mahavishnu Orchestra, and Bruce Springsteen. The long-time host of the show until the mid-1990s was Bill Minkin, whose voice has been described as "the perfect blend of hipster enthusiasm and stoner casualness."
The concerts were usually recorded with a mobile recording truck, then mixed and edited for broadcast on the show within a few weeks. In the 1970s, the show was sent to participating radio stations on reel-to-reel tape. Some shows were recorded and mixed in both stereo and quadraphonic. In 1980, D.I.R. began using the LP format, producing the show on a three-sided, two-record set. The first show on compact disc was a live retrospective of the Rolling Stones broadcast on September 27, 1987. By 2000, King Biscuit was using CD-R media to distribute the show. These tapes, records or compact discs were accompanied by a cue sheet which gave the disc jockey a written guideline of the content and length of each segment of the program.
Although closely associated with classic rock in its later years, the King Biscuit Flower Hour dedicated much air time to new and emerging artists, including new wave and modern rock artists in the late 1970s and early 1980s.
Archives
In 1982, a three-alarm fire damaged the Manhattan office tower that housed D.I.R. Broadcasting. Reportedly, many of the King Biscuit Flower Hour recordings were lost in the fire.
In 2006, the remaining King Biscuit tape archives were acquired by Wolfgang's Vault which began streaming concerts online and has made some available for download.
King Biscuit Flower Hour Records
After founder Bob Meyrowitz sold his interest to new ownership, King Biscuit Flower Hour Records was formed in 1992 with the intention of releasing live albums from the archives. Licensing issues prevented the release of the most popular artists featured on the program, although dozens of recordings did see commercial release.
References
External links
DIR Broadcasting at the British Film Institute
Fred Jacobs Could The King Biscuit Flower Hour Survive Today ? · Jacobs Media Strategies
American music radio programs
Rock music radio programs
1973 radio programme debuts
2005 radio programme endings |
https://en.wikipedia.org/wiki/Death%20of%20Brandon%20Vedas | Brandon Carl Vedas (April 21, 1981 – January 12, 2003), also known by his nickname ripper on IRC, was an American computer enthusiast, recreational drug user and member of the Shroomery.org community who died of a multiple drug overdose while discussing what he was doing via chat and webcam. His death led to debate about the responsibilities and roles of online communities in life-threatening situations.
Overview
The video chat session began when Vedas logged into the IRC channel and announced "i got a grip of drugs", inviting other users to access his webcam feed and watch him take them. While some of the substances were illicit, most of them had apparently been obtained through legitimate prescriptions for treatment of various illnesses from which Vedas was said to have suffered.
Vedas then began consuming psilocybe mushrooms, which had been stored in a prescription medication bottle. As the chat session progressed, one of the users in the channel, grphish, noted "that's a lot of klonopin" and this is thought to be when Vedas consumed 8 mg of clonazepam. Vedas continued by showing the webcam viewers what would be one of four bottles of methadone that he would consume over the course of the session, and, after noting this on the channel, proceeded to consume an entire bottle (reportedly 80 mg of methadone). After a brief respite, Vedas then consumed 110 mg of propranolol (Inderal), two Vicodin tablets, and 120 mg temazepam, which seem to have been taken in between descriptions given on the IRC channel.
During this process, Vedas maintained that this was "usual weekend behavior" for him and that he had consumed similar quantities of the same substances on previous occasions. "I told u I was hardcore" was one of the last things Vedas typed, and he was found dead by his mother the next day.
See also
Drug overdose
Social media and suicide
Suicide of Kevin Whitrick
References
External links
Memorial site created by his brother
1981 births
2003 suicides
Deaths by person in Arizona
Drug-related suicides in Arizona
Filmed deaths in the United States
Filmed suicides
People from Phoenix, Arizona
Suicide and the Internet
2003 deaths |
https://en.wikipedia.org/wiki/Independent%20Television%20Authority | The Independent Television Authority (ITA) was an agency created by the Television Act 1954 to supervise the creation of "Independent Television" (ITV), the first commercial television network in the United Kingdom. The ITA existed from 1954 until 1972. It was responsible for determining the location of, building, and operating the transmission stations used by the ITV network, as well as determining the franchise areas and awarding the franchises for each regional commercial broadcaster. The Authority began its operations on 4 August 1954, a mere four days after the Television Act received Royal Assent, under the chairmanship of Sir Kenneth Clark. The Authority's first Director General, Sir Robert Fraser, was appointed by Clark a month later on 14 September.
The physics of VHF broadcasting meant that a comparatively small number of transmitters could cover the majority of the population of Britain, if not the bulk of the area of the country. The ITA determined that the first three franchise areas would cover the London area, the English Midlands, and Northern England (the Lancashire/Yorkshire belt of industrial cities from Liverpool to Hull and the surrounding countryside). All three franchise areas would be awarded on a divided weekday/weekend basis, and it was planned that the franchise holders for these areas would produce the great bulk of network programmes, while the companies given the smaller franchises would produce mainly local programmes for their area only.
Franchises
The ITA awarded franchises to applicant companies, selecting between applicants on the basis of the financial soundness and structure of the company, the proposals for the service to be offered, and often on connections between the applicant company and the area to be served.
Franchises were awarded initially between 1954 and 1961, with the new television stations usually beginning their broadcasting one-to-two years later. During September 1963 the ITA invited new applications for franchises to operate from July 1964 for three years or until the arrival of a second commercial channel, whichever came first, but in fact no changes were made to any franchise holders at that time, except for confirming the merger of the South Wales and the West franchise held by TWW and the Wales West and North franchise held by WWN following the financial collapse of WWN. (In the event, a second commercial channel did not begin until 1982, under the guise of Channel 4.)
Initial franchises awarded in 1954
The London (Monday to Friday) franchise was awarded to Associated-Rediffusion.
The London (Saturday and Sunday) and Midlands (Monday to Friday) franchises were awarded to the Associated Broadcasting Company, later renamed Associated TeleVision (ATV).
The Midlands (Saturday and Sunday) and North of England (Saturday and Sunday) franchises were awarded to Kemsley-Winnick Television (which subsequently collapsed; the franchise was reallocated in 1955 to the Associated Brit |
https://en.wikipedia.org/wiki/Acorn%20Atom | The Acorn Atom is a home computer made by Acorn Computers Ltd from 1979 to 1982, when it was replaced by the BBC Micro. The BBC Micro began life as an upgrade to the Atom, originally known as the Proton.
The Atom was a progression of the MOS Technology 6502-based machines that the company had been making from 1979: essentially a cut-down Acorn System 3 without a disk drive but with an integral keyboard and cassette tape interface, sold in either kit or complete form. In 1979 it was priced from £120 in kit form and £170 ready assembled, to over £200 for the fully expanded version with 12 KB of RAM and the floating-point extension ROM.
Hardware
The minimum Atom had 2 KB of RAM and 8 KB of ROM, while the maximum-specification machine had 12 KB of each; an additional floating-point ROM was also available. The 2 KB of RAM was divided between 1 KB of Block Zero RAM (including the 256 bytes of "zero page"), 512 bytes for the text-mode screen, and only 512 bytes for programs written in the BASIC language (with only text mode, mode 0, available and no graphics). When expanded to a total of 12 KB of RAM, the split was 1 KB of Block Zero RAM, 5 KB for programs, and up to 6 KB for high-resolution graphics (the screen memory could be expanded independently of the lower part of the address space). If high-resolution graphics were not required, up to 5½ KB of the upper memory could additionally be used for program storage. The first 1 KB, i.e. Block Zero, was used by the CPU for stack storage, by the OS, and by Atom BASIC for storage of its 27 variables.
It had an MC6847 Video Display Generator (VDG) video chip, allowing for both text and graphics modes. It could be connected to a TV or modified to output to a video monitor. Basic video memory was 1 KB but could be expanded to 6 KB. Because the MC6847 could only output a 60 Hz signal, which could not be resolved by a large proportion of European TV sets, a 50 Hz PAL colour card was later made available. Six video modes were available, with resolutions from 64×64 in 4 colours up to 256×192 in monochrome. At the time, 256×192 was considered to be high resolution.
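The mode arithmetic above can be checked directly. The sketch below is an illustration (not from the source) of the display memory required by the two extremes of the Atom's graphics modes, assuming simple packed-pixel storage:

```python
def video_ram_bytes(width, height, colours):
    """Bytes of display RAM for a packed-pixel mode of the MC6847."""
    bits_per_pixel = (colours - 1).bit_length()  # 4 colours -> 2 bpp, 2 -> 1 bpp
    return width * height * bits_per_pixel // 8

# 64x64 in 4 colours fits the basic 1 KB of video memory...
assert video_ram_bytes(64, 64, 4) == 1024
# ...while 256x192 monochrome needs the full 6 KB expansion.
assert video_ram_bytes(256, 192, 2) == 6144
```

This also shows why the text-mode screen needed only 512 bytes: far fewer pixels are stored per character cell than in the bitmapped modes.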
The case was designed by industrial designer Allen Boothroyd of Cambridge Product Design Ltd.
Software
It had a built-in minor variation of Acorn System BASIC, a fast but idiosyncratic version of the BASIC programming language developed by Sophie Wilson, which included indirection operators (similar to PEEK and POKE) for bytes and words (of 4 bytes each); the use of a semi-colon to separate statements on the same line of code (instead of the colon used by most if not all other versions of BASIC); and the option of labels rather than line numbers for GOTO and GOSUB commands. Assembly code could be included within a BASIC program, because the BASIC interpreter also contained an assembler for the 6502 assembly language which assembled the inline code during program execution and then executed it. This was unusual. |
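As a rough illustration of the indirection operators described above, the following Python sketch models their semantics over a byte array standing in for the Atom's RAM (this is an interpretation for illustration, not Atom BASIC itself; `?` reads or writes a single byte, `!` a four-byte word, and the little-endian byte order is an assumption based on the 6502):

```python
ram = bytearray(64 * 1024)  # the 6502's 64 KB address space

def peek_byte(addr):                 # Atom BASIC:  X = ?A
    return ram[addr]

def poke_byte(addr, value):          # Atom BASIC:  ?A = X
    ram[addr] = value & 0xFF

def peek_word(addr):                 # Atom BASIC:  X = !A  (4-byte word)
    return int.from_bytes(ram[addr:addr + 4], "little")

def poke_word(addr, value):          # Atom BASIC:  !A = X
    ram[addr:addr + 4] = (value & 0xFFFFFFFF).to_bytes(4, "little")

poke_word(0x2800, 0x12345678)
assert peek_byte(0x2800) == 0x78     # low byte stored first, as on the 6502
assert peek_word(0x2800) == 0x12345678
```

Because the operators address memory directly, they served the same role as PEEK and POKE in other BASICs while composing naturally with expressions.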
https://en.wikipedia.org/wiki/Jupiter%20Cantab | Jupiter Cantab Limited was a Cambridge based home computer company. Its main product was the 1983 Forth-based Jupiter Ace.
The company was founded in 1982 by two ex-Sinclair Research staffers, Richard Altwasser and Steven Vickers. Their machine was, externally, remarkably similar to the ZX Spectrum, with a copycat rubber keyboard, and used the same Z80 processor. The Ace's video output was limited to monochrome, like that of the ZX81.
The £90 Ace was a flop in both the UK and US markets. In the US it was intended to be sold as the Ace 4000, although only 800 were ever made.
The Forth language, although considered powerful, was not as popular or accessible as the already well-established BASIC language featured in competing microcomputers. However, the Ace's £89.95 price, at a time when Sinclair's ZX81 (the successor to the ZX80) cost £39.95, was more likely the primary reason for slow sales.
The company went bankrupt in November 1983 and its assets were sold to Boldfield Computing Ltd in 1984. The remaining hardware was sold off into 1985. Boldfield Computing Ltd also commissioned some software for it, including various games, database, and spreadsheet software. Documentation of this software exists and is held by the current owners of the brand.
The 2006 sale of Boldfield's IT solutions business excluded the rights to the Jupiter Ace IP and brand, which were later sold to Andrews UK Limited in 2015.
References
External links
Boldfield Computing Ltd
Official Website of Jupiter Ace
Computer companies established in 1982
Home computer hardware companies
Computer companies disestablished in 1983
Companies based in Cambridge
1982 establishments in England |
https://en.wikipedia.org/wiki/Kruithof%20curve | The Kruithof curve describes a region of illuminance levels and color temperatures that are often viewed as comfortable or pleasing to an observer. The curve was constructed from psychophysical data collected by Dutch physicist Arie Andries Kruithof, though the original experimental data is not present on the curve itself. Lighting conditions within the bounded region were empirically assessed as being pleasing or natural, whereas conditions outside the region were considered uncomfortable, displeasing or unnatural. The curve remains a guide for choosing sources that are considered natural or that closely resemble Planckian black bodies, but its value in describing human preference has been consistently questioned by further studies on interior lighting.
For example, natural daylight has a color temperature of 6500 K and an illuminance of about 10⁴ to 10⁵ lux. This color temperature–illuminance pair results in natural color rendition, but if viewed at a low illuminance, would appear bluish. At typical indoor office illuminance levels of about 400 lux, pleasing color temperatures are lower (between 3000 and 6000 K), and at typical home illuminance levels of about 75 lux, pleasing color temperatures are even lower (between 2400 and 2700 K). These color temperature–illuminance pairs are often achieved with fluorescent and incandescent sources, respectively. The pleasing region of the curve contains color temperatures and illuminance levels comparable to naturally lit environments.
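The three illuminance/colour-temperature pairs quoted above can be turned into a toy lookup. This is a simplification for illustration only: the real Kruithof region is a hand-drawn curve, and both the interpolation scheme and the choice of 10⁴ lux for daylight are assumptions:

```python
import bisect
import math

# (illuminance in lux, (lower, upper) pleasing colour temperature in K),
# taken from the example figures in the text; daylight is placed at the
# low end of the quoted 10^4 to 10^5 lux range.
POINTS = [(75, (2400, 2700)), (400, (3000, 6000)), (10000, (6500, 6500))]

def pleasing_cct_range(lux):
    """Interpolate the quoted bounds linearly on a log-illuminance axis."""
    xs = [math.log10(l) for l, _ in POINTS]
    x = math.log10(lux)
    if x <= xs[0]:
        return POINTS[0][1]
    if x >= xs[-1]:
        return POINTS[-1][1]
    i = bisect.bisect_right(xs, x)
    (lo0, hi0), (lo1, hi1) = POINTS[i - 1][1], POINTS[i][1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return (lo0 + t * (lo1 - lo0), hi0 + t * (hi1 - hi0))

assert pleasing_cct_range(400) == (3000, 6000)   # typical office lighting
assert pleasing_cct_range(75) == (2400, 2700)    # typical home lighting
```

The log-illuminance axis mirrors the way the curve is conventionally plotted, with illuminance on a logarithmic scale.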
History
At the emergence of fluorescent lighting in 1941, Kruithof conducted psychophysical experiments to provide a technical guide for the design of artificial lighting. Using gas-discharge fluorescent lamps, Kruithof was able to manipulate the color of the emitted light and asked observers to report whether the source was pleasing to them. The sketch of his curve as presented consists of three major regions: the middle region, which corresponds to light sources considered pleasing; the lower region, which corresponds to colors that are considered cold and dim; and the upper region, which corresponds to colors that are warm and unnaturally colorful. These regions, while approximate, are still used to determine appropriate lighting configurations for homes or offices.
Perception and adaptation
Kruithof's findings are directly related to human adaptation to changes in illumination. As illuminance decreases, human sensitivity to blue light increases. This is known as the Purkinje effect. The human visual system switches from photopic (cone-dominated) vision to scotopic (rod-dominated) vision when luminance levels decrease. Rods have a very high spectral sensitivity to blue energy, whereas cones have varying spectral sensitivities to reds, greens and blues. Since the dominating photoreceptor in scotopic vision is most sensitive to blue, human sensitivity to blue light is therefore increased. Because of this, intense sources of higher (bluer) color temperatures are all generally considered to be displeasing at low |
https://en.wikipedia.org/wiki/SPSS | SPSS Statistics is a statistical software suite developed by IBM for data management, advanced analytics, multivariate analysis, business intelligence, and criminal investigation. Long produced by SPSS Inc., it was acquired by IBM in 2009. Versions of the software released since 2015 have the brand name IBM SPSS Statistics.
The software name originally stood for Statistical Package for the Social Sciences (SPSS), reflecting the original market, then later changed to Statistical Product and Service Solutions.
Overview
SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping and creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
The many features of SPSS Statistics are accessible via pull-down menus or can be programmed with a proprietary 4GL command syntax language. Command syntax programming has the benefits of reproducible output, simplifying repetitive tasks, and handling complex data manipulations and analyses. Additionally, some complex applications can only be programmed in syntax and are not accessible through the menu structure. The pull-down menu interface also generates command syntax: this can be displayed in the output, although the default settings have to be changed to make the syntax visible to the user. They can also be pasted into a syntax file using the "paste" button present in each menu. Programs can be run interactively or unattended, using the supplied Production Job Facility.
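As a sketch of what such command syntax looks like, a short reproducible job might read as follows (the file and variable names here are invented for illustration):

```
* Hypothetical SPSS command syntax example.
* 'survey.sav' and the variable names are made up for illustration.
GET FILE='survey.sav'.
FREQUENCIES VARIABLES=gender region.
DESCRIPTIVES VARIABLES=age income
  /STATISTICS=MEAN STDDEV.
SAVE OUTFILE='survey_clean.sav'.
```

Saved as a syntax file, the same job can be re-run interactively or unattended, which is the reproducibility benefit described above.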
A "macro" language can be used to write command language subroutines. A Python programmability extension can access the information in the data dictionary and data and dynamically build command syntax programs. This extension, introduced in SPSS 14, replaced the less functional SAX Basic "scripts" for most purposes, although SAX Basic remains available. In addition, the Python extension allows SPSS to run any of the statistics in the free software package R. From version 14 onwards, SPSS can be driven externally by a Python or a VB.NET program using supplied "plug-ins". (From version 20 onwards, these two scripting facilities, as well as many scripts, are included on the installation media and are normally installed by default.)
SPSS Statistics places constraints on internal file structure, data types, data processing, and matching files, which together considerably simplify programming. SPSS datasets have a two-dimensional table structure, where the rows typically represent cases (such as individuals or households) and the col |
https://en.wikipedia.org/wiki/Cinemax | Cinemax (alternatively shortened to Max) is an American pay television, cable, and satellite television network owned by the Home Box Office, Inc. subsidiary of Warner Bros. Discovery. Developed as a companion "maxi-pay" service complementing the offerings shown on parent network Home Box Office (HBO) and initially focusing on recent and classic films upon its launch on August 1, 1980, programming featured on Cinemax currently consists primarily of recent and older theatrically released motion pictures, and original action series, as well as documentaries and special behind-the-scenes featurettes.
Cinemax—which, in conjunction with HBO, was among the first two American pay television services to offer complementary multiplexed channels in August 1991—operates eight 24-hour, linear multiplex channels; a traditional subscription video-on-demand platform (Cinemax On Demand); and formerly a TV Everywhere streaming platform for Cinemax's linear television subscribers (Cinemax Go). On digital platforms, the Cinemax linear channels were not accessible on Cinemax Go in its final years, but were available to subscribers of over-the-top multichannel video programming distributors, and as live streams included in a la carte subscription channels sold through Apple TV Channels, Amazon Video Channels and Roku, which primarily feature VOD library content. (The live feeds on the OTT subscription channels consist of the primary channel's East and West Coast feeds and, for Amazon Video customers, the East Coast feeds of its seven multiplex channels.)
Cinemax's operations are based alongside HBO inside Warner Bros. Discovery's secondary corporate headquarters at 30 Hudson Yards in Manhattan's West Side district.
History
1980–1989
In an effort to capitalize on the swift national growth that Home Box Office (HBO) had experienced since it began transmitting via satellite in September 1975, Home Box Office, Inc.—then owned by the Time-Life Broadcasting unit of Time Inc.—experimented with a companion pay service to sell to prospective subscribers—including existing HBO customers—with Take 2, a movie-centered premium channel marketed at a family audience that launched on April 1, 1979. The "mini-pay" service (a smaller-scale pay television channel sold at a discounted rate) tried to cater to cable subscribers reluctant to subscribe to HBO because of its cost and potentially objectionable content in some programs. Take 2, however, was hampered by a slow subscriber and carriage growth throughout its just-under-two-year history. By the Spring of 1980, HBO executives began developing plans for a tertiary, lower-cost "maxi-pay" service (a full-service pay channel sold at a premium or slightly lower rate) to better complement HBO. On May 18 of that year, during the 1980 National Cable Television Association Convention, Home Box Office announced that it would launch a companion movie channel, to be named Cinemax. Billed as the cable industry's "first true tier," Cinemax wa |
https://en.wikipedia.org/wiki/VDSL | Very high-speed digital subscriber line (VDSL) and very high-speed digital subscriber line 2 (VDSL2) are digital subscriber line (DSL) technologies providing data transmission faster than the earlier standards of asymmetric digital subscriber line (ADSL) G.992.1, G.992.3 (ADSL2) and G.992.5 (ADSL2+).
VDSL offers speeds of up to 52 Mbit/s downstream and 16 Mbit/s upstream, over a single twisted pair of copper wires using the frequency band from 25 kHz to 12 MHz. These rates mean that VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single connection. VDSL is deployed over existing wiring used for analog telephone service and lower-speed DSL connections. This standard was approved by the International Telecommunication Union (ITU) in November 2001.
Second-generation systems (VDSL2; ITU-T G.993.2, approved in February 2006) use frequencies of up to 30 MHz to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. The maximum available bit rate is achieved over short loops; performance degrades as the local loop attenuation increases.
Conceptual development
The concept of VDSL was first published in 1991 through a joint Bellcore-Stanford research study. The study searched for potential successors to the then-prevalent HDSL and relatively new ADSL, which were both 1.5 Mbit/s. Specifically, it explored the feasibility of symmetric and asymmetric data rates exceeding 10 Mbit/s on short phone lines.
The VDSL2 standard is an enhancement to ITU-T G.993.1 that supports asymmetric and symmetric transmission at a bidirectional net data rate of up to 400 Mbit/s on twisted pairs, using a bandwidth of up to 35 MHz.
VDSL standards
A VDSL connection uses up to seven frequency bands, so one can allocate the data rate between upstream and downstream differently depending on the service offering and spectrum regulations. The first-generation VDSL standard specified both quadrature amplitude modulation (QAM) and discrete multi-tone modulation (DMT). In 2006, ITU-T standardized VDSL in recommendation G.993.2 which specified only DMT modulation for VDSL2.
VDSL2
VDSL2 is an enhancement to VDSL designed to support the wide deployment of triple play services such as voice, video, data and high-definition television (HDTV). VDSL2 is intended to enable operators and carriers to gradually, flexibly, and cost-efficiently upgrade existing xDSL infrastructure.
The protocol is standardized in the International Telecommunication Union telecommunications sector (ITU-T) as Recommendation G.993.2. It was announced as finalized on 27 May 2005, and first published on 17 February 2006. Several corrections and amendments were published from 2007 to 2011.
VDSL2 permits the transmission of asymmetric and symmetric aggregate data rates up to 300+ Mbit/s downstream and upstream on twisted pairs using a bandwidth up to 35 MHz on its l |
https://en.wikipedia.org/wiki/Andy%20Looney | Andrew J. Looney (born November 5, 1963) is a game designer and computer programmer. He is also a photographer, a cartoonist, a video-blogger, and a marijuana-legalization advocate.
Andrew and Kristin Looney together founded the games company Looney Labs, where Andrew is the chief creative officer. Looney Labs has published most of his game designs, such as Fluxx, Chrononauts, and the Icehouse game system. His other game designs include Aquarius, Nanofictionary, IceTowers, Treehouse, and Martian Coasters.
Biography
Andrew Looney became an Eagle Scout as a youth. He entered the University of Maryland at College Park in 1981 as a freshman with his major undecided between English and computer science. He eventually selected computer science.
He and Kristin, his future spouse, met in 1986 when he started at NASA's Goddard Space Flight Center as a software programmer. Kristin was a computer engineer designing computer chips. Keeping English as a side interest, he wrote "The Empty City", a science-fiction short story. Wanting a game in the story but feeling that a card game would be too boring, he created a fictional game, Icehouse, that used pyramids. Readers of the short story asked to learn how to play the game. Thus actual rules were invented for Icehouse, and then plastic pyramid pieces were made to play the game. The pieces were made from resin in his apartment, which upset the landlord due to the smell. This led them to launch their own game company to sell the Icehouse game. After several years, Looney shut down Icehouse Games, Inc.
He and his wife launched Looney Laboratories in 1996 as a part-time, home-based design company. Andrew soon designed the Fluxx card game. He then went on to a brief career as a game programmer at Magnet Interactive Studios, where he created that company's only entry to the market, Icebreaker. Aquarius was Andy's and Looney Labs' next game, launched in 1998. In 2002, a few years after Kristin went full-time with their company, Andy followed.
Patents & awards
Andy has three U.S. patents and five Origins Awards.
Looney holds patents on the game mechanics for:
Icehouse – U.S. Patent 4,936,585 - Method of manipulating and interpreting playing pieces
https://patents.google.com/patent/US4936585A
IceTowers – U.S. Patent 6,352,262 - Method of conducting simultaneous gameplay using stackable game pieces
https://patents.google.com/patent/US6352262B1
Chrononauts – U.S. Patent 6,474,650 - Method of simulation time travel in a card game
https://patents.google.com/patent/US6474650B1
Looney has won the following game design awards:
1999 – Mensa Mind Games: Mensa Select Award for Fluxx
2000 – Origins Award: Best Abstract Board Game for Icehouse: The Martian Chess Set
2000 – Origins Award: Best Traditional Card Game for Chrononauts
2001 – Parents Choice Silver Honors for Chrononauts
2001 – Origins Award: Best Abstract Board Game for Cosmic Coasters
2003 – Parents Choice Silver Honors for Nanofictionary
2007 – Origins Award: Best Board Game or Expansion o |
https://en.wikipedia.org/wiki/DMZ%20%28computing%29 | In computer security, a DMZ or demilitarized zone (sometimes referred to as a perimeter network or screened subnet) is a physical or logical subnetwork that contains and exposes an organization's external-facing services to an untrusted, usually larger, network such as the Internet. The purpose of a DMZ is to add an additional layer of security to an organization's local area network (LAN): an external network node can access only what is exposed in the DMZ, while the rest of the organization's network is protected behind a firewall. The DMZ functions as a small, isolated network positioned between the Internet and the private network.
This is not to be confused with a DMZ host, a feature present in some home routers which frequently differs greatly from an ordinary DMZ.
The name is from the term demilitarized zone, an area between states in which military operations are not permitted.
Rationale
The DMZ is seen as not belonging to either network bordering it. This metaphor applies to the computing use as the DMZ acts as a gateway to the public Internet. It is neither as secure as the internal network, nor as insecure as the public internet.
In this case, the hosts most vulnerable to attack are those that provide services to users outside of the local area network, such as e-mail, Web and Domain Name System (DNS) servers. Because of the increased potential of these hosts suffering an attack, they are placed into this specific subnetwork in order to protect the rest of the network in case any of them become compromised.
Hosts in the DMZ are permitted to have only limited connectivity to specific hosts in the internal network, as the content of DMZ is not as secure as the internal network. Similarly, communication between hosts in the DMZ and to the external network is also restricted to make the DMZ more secure than the Internet and suitable for housing these special purpose services. This allows hosts in the DMZ to communicate with both the internal and external network, while an intervening firewall controls the traffic between the DMZ servers and the internal network clients, and another firewall would perform some level of control to protect the DMZ from the external network.
A DMZ configuration provides additional security from external attacks, but it typically has no bearing on internal attacks such as sniffing communication via a packet analyzer or spoofing such as e-mail spoofing.
It is also sometimes good practice to configure a separate classified militarized zone (CMZ), a highly monitored militarized zone comprising mostly Web servers (and similar servers that interface to the external world, i.e. the Internet) that are not in the DMZ but contain sensitive information about accessing servers within the LAN (like database servers). In such an architecture, the DMZ usually has the application firewall and the FTP server, while the CMZ hosts the Web servers. (The database servers could be in the CMZ, in the LAN, or in a separate VLAN altogether |
https://en.wikipedia.org/wiki/Developer | Developer may refer to:
Computers
Software developer, a person or organization that develops programs/applications
Video game developer, a person or business involved in video game development, the process of designing and creating games
Web developer, a programmer who specializes in, or is specifically engaged in, the development of World Wide Web applications
Other uses
Developer (album), the fifth album by indie rock band Silkworm
Photographic developer, chemicals that convert the latent image to a visible image
In real estate development, one who builds on land or alters the use of an existing building for some new purpose
See also
Game designer
Developer! Developer! Developer!, a series of community conferences aimed at software developers |
https://en.wikipedia.org/wiki/Mobile%20workstation | A mobile workstation, also known as a desktop replacement computer (DTR) or workstation laptop, is a personal computer that provides the full capabilities of a workstation-class desktop computer while remaining mobile. They are often larger, bulkier laptops or in some cases 2-in-1 PCs with a tablet-like form factor and interface. Because of their increased size, this class of computer usually includes more powerful components and a larger display than generally used in smaller portable computers and can have a relatively limited battery capacity (or none at all). Some use a limited range of desktop components to provide better performance at the expense of battery life. These are sometimes called desknotes, a blend of "desktop" and "notebook", though the term is also applied to desktop replacement computers in general. Other names include monster notebooks or musclebooks, in reference to muscle cars.
Origins
The forerunners of the mobile workstation were the portable computers of the early to mid-1980s, such as the Portal R2E CCMC, the Osborne 1, Kaypro II, the Compaq Portable and the Commodore Executive 64 (SX-64) computers. These computers contained the CPU, display, floppy disk drive and power supply all in a single briefcase-like enclosure. Similar in performance to the desktop computers of the era, they were easily transported and came with an attached keyboard that doubled as a protective cover when not in use. They could be used wherever space and an electrical outlet were available, as they had no battery.
The development of the laptop form factor gave new impetus to portable computer development. Many early laptops were feature-limited in the interest of portability, requiring such mobility-limiting accessories as external floppy drives or clip-on trackball pointing devices. One of the first laptops that could be used as a standalone computer was the EUROCOM 2100, based on Intel's 8088 CPU architecture; it duplicated the functionality of the desktop models without requiring an external docking station.
The development of the modern mobile workstation came with the realization that many laptops were used in a semi-permanent location, often remaining connected to an external power source at all times. This suggested that a market existed for a laptop-style computer that would take advantage of the user's reduced need for portability, allowing for higher-performance components, greater expandability, and higher-quality displays. Mobile workstations are also often used with a port replicator to provide the full comfort of a desktop setup.
Design features
Modern mobile workstations generally perform better than traditional laptop-style computers as their size allows the inclusion of more powerful components. The larger body means more efficient heat dissipation, allowing manufacturers to use components that would otherwise overheat during normal use. Furthermore, their increased size allows for more modularity, which allows for a greater expandabilit |
https://en.wikipedia.org/wiki/WWN | WWN may refer to:
Wales West and North Television
World Wide Name, a Fibre Channel, Serial ATA, and Serial Attached SCSI term
World Without Nazism
World Wrestling Network
Weekly World News, a tabloid newspaper
W. W. Norton, a book publishing company
WWNLive
Waterford Whispers News, an Irish satirical online news website
World War N, also known as World War 0 |
https://en.wikipedia.org/wiki/Keygen | A key generator (key-gen) is a computer program that generates a product licensing key, such as a serial number, necessary to activate a software application for use. Keygens may be legitimately distributed by software manufacturers for licensing software in commercial environments where software has been licensed in bulk for an entire site or enterprise, or they may be developed and distributed illegitimately in circumstances of copyright infringement or software piracy.
Illegitimate key generators are typically programmed and distributed by software crackers in the warez scene. These keygens often play music, which may include the genres dubstep, chiptunes, sampled loops or anything else the programmer desires. Chiptunes are often preferred due to their small size. Keygens can have artistic user interfaces or be kept simple, displaying only a cracking group's or cracker's logo.
Software licensing
A software license is a legal instrument that governs the usage and distribution of computer software. Often, such licenses are enforced by implementing in the software a product activation or digital rights management (DRM) mechanism, seeking to prevent unauthorized use of the software by issuing a code sequence that must be entered into the application when prompted or stored in its configuration.
Key verification
Many programs attempt to verify or validate licensing keys over the Internet by establishing a session with a licensing application of the software publisher. Advanced keygens bypass this mechanism, and include additional features for key verification, for example by generating the validation data which would otherwise be returned by an activation server. If the software offers phone activation then the keygen could generate the correct activation code to finish activation. Another method that has been used is activation server emulation, which patches the program memory to "see" the keygen as the de facto activation server.
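The symmetry described above, where a keygen runs the same derivation "forward" that the verifier runs as a check, can be illustrated with an entirely made-up toy licensing scheme (a short SHA-256-derived check code appended to the key body). No real product works exactly this way; the scheme, key format, and seed are invented for the sketch.

```python
# Entirely hypothetical licensing scheme for illustration only: a key is
# "BODY-CHECK" where CHECK is the first four hex digits of SHA-256(BODY).
import hashlib

def derive_check(body: str) -> str:
    return hashlib.sha256(body.encode()).hexdigest()[:4].upper()

def verify_key(key: str) -> bool:
    # What the protected program does: recompute the check and compare.
    body, _, check = key.rpartition("-")
    return bool(body) and derive_check(body) == check

def generate_key(seed: str) -> str:
    # What a keygen does: run the same derivation forward.
    body = seed.upper()
    return f"{body}-{derive_check(body)}"

key = generate_key("ABCD1234")
print(verify_key(key))              # True
print(verify_key("ABCD1234-ZZZZ"))  # False: hex check digits are never 'Z'
```

A real scheme would use a far less guessable derivation, but the structural point stands: any check that the program can evaluate offline can, once reverse-engineered, be run forward by a keygen.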
Multi-keygen
A multi-keygen is a keygen that offers key generation for multiple software applications. Multi-keygens are sometimes released over singular keygens if a series of products requires the same algorithm for generating product keys.
Authors and distribution
Unauthorized keygens that typically violate software licensing terms are written by programmers who engage in reverse engineering and software cracking, often called crackers, to circumvent copy protection of software or digital rights management for multimedia.
Keygens are available for download on warez sites or through peer-to-peer (P2P) networks.
Malware keygens
Unauthorized keygens, available through P2P networks or otherwise, often contain malicious payloads. These key generators may or may not generate a valid key, but the embedded malware loaded invisibly at the same time may, for example, be a version of CryptoLocker (ransomware).
Antivirus software may discover malware embedded in keygens; such software often also identifies unauthorized keygens w |
https://en.wikipedia.org/wiki/Symbolic%20link | In computing, a symbolic link (also symlink or soft link) is a file whose purpose is to point to a file or directory (called the "target") by specifying a path thereto.
Symbolic links are supported by POSIX and by most Unix-like operating systems, such as FreeBSD, Linux, and macOS. Limited support also exists in Windows 7 and Windows Vista, and to some degree in Windows 2000 and Windows XP in the form of shortcut files. CTSS on the IBM 7090 had files linked by name in 1963. By 1978, minicomputer operating systems from DEC, as well as Data General's RDOS, included symbolic links.
Overview
A symbolic link contains a text string that is automatically interpreted and followed by the operating system as a path to another file or directory. This other file or directory is called the "target". The symbolic link is a second file that exists independently of its target. If a symbolic link is deleted, its target remains unaffected. If a symbolic link points to a target, and sometime later that target is moved, renamed or deleted, the symbolic link is not automatically updated or deleted, but continues to exist and still points to the old target, now a non-existing location or file. Symbolic links pointing to moved or non-existing targets are sometimes called broken, orphaned, dead, or dangling.
Symbolic links are different from hard links. Hard links do not link paths on different volumes or file systems, whereas symbolic links may point to any file or directory irrespective of the volumes on which the link and target reside.
Hard links always refer to an existing file, whereas symbolic links may contain an arbitrary path that does not point to anything.
Symbolic links operate transparently for many operations: programs that read or write to files named by a symbolic link will behave as if operating directly on the target file. However, they have the effect of changing an otherwise hierarchic filesystem from a tree into a directed graph, which can have consequences for such simple operations as determining the current directory of a process. Even the Unix standard for navigating to a directory's parent directory no longer works reliably in the face of symlinks. Some shells heuristically try to uphold the illusion of a tree-shaped hierarchy, but when they do, this causes them to produce different results from other programs that manipulate pathnames without such heuristic, relying on the operating system instead.
Programs that need to handle symbolic links specially (e.g., shells and backup utilities) thus need to identify and manipulate them directly.
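The behaviors described above, including transparent following, link-aware inspection, and dangling links, can be demonstrated in a short sketch (POSIX assumed; the file names are hypothetical):

```python
# Demonstrates creating a symlink, inspecting it without following it,
# and what remains after the target is deleted (a dangling link).
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "target.txt")
link = os.path.join(d, "link.txt")

with open(target, "w") as f:
    f.write("hello")
os.symlink(target, link)      # link.txt stores the path to target.txt

print(os.path.islink(link))   # True: lstat-based check, does not follow
print(os.readlink(link))      # the stored path string, not file contents

os.remove(target)             # delete the target; the link file remains
print(os.path.islink(link))   # True: the symlink still exists...
print(os.path.exists(link))   # False: ...but it now dangles
```

Note the contrast between `os.path.islink` (which uses `lstat` and examines the link itself) and `os.path.exists` (which follows the link); backup tools and shells rely on exactly this distinction.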
Some Unix and Linux distributions use symbolic links extensively in an effort to reorder the file system hierarchy. This is accomplished with several mechanisms, such as variant, context-dependent symbolic links. This offers the opportunity to create a more intuitive or application-specific directory tree and to reorganize the system without having to redesign the core set of system functions and utilities.
https://en.wikipedia.org/wiki/Television%20Act%201954 | The Television Act 1954 (2 & 3 Eliz. 2. c. 55) was a British law which permitted the creation of the first commercial television network in the United Kingdom, ITV.
Until the early 1950s, the only television service in Britain was operated as a monopoly by the British Broadcasting Corporation, and financed by the annual television licence fee payable by each household which contained one or more television sets. The new Conservative government elected in 1951 wanted to create a commercial television channel, but this was a controversial subject—the only other examples of commercial television were to be found in the United States, and it was widely considered that the commercial television found there was "vulgar".
The solution to the problem was to create the Independent Television Authority which would closely regulate the new commercial channel in the interests of good taste, and award franchises to commercial companies for fixed terms.
The first commercial franchises were awarded in 1954, and commercial television started broadcasting in stages between 1955 and 1962. The first advertisement aired by ITV promoted Gibbs SR toothpaste at 8:12pm on 22 September 1955. Household cleaners were the most frequently advertised products over the 1955–1960 period.
References
United Kingdom Acts of Parliament 1954
ITV (TV network)
1954 in British television
Media legislation
BBC
History of television in the United Kingdom |
https://en.wikipedia.org/wiki/Kalman%20filter | In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory.
This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1961, when Kalman met with Stratonovich during a conference in Moscow.
Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and ships positioned dynamically. Furthermore, Kalman filtering is a concept much applied in time series analysis used for topics such as signal processing and econometrics. Kalman filtering is also one of the main topics of robotic motion planning and control and can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.
The algorithm works in two phases. In the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with greater certainty. The algorithm is recursive: it can operate in real time, using only the present input measurements, the previously calculated state, and its uncertainty matrix; no additional past information is required.
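The two-phase recursion can be sketched for the simplest possible case: a one-dimensional random-walk model, where the state transition and observation models are both 1. The noise variances and the measurement list below are illustrative values, not from any real system.

```python
# Minimal 1-D Kalman filter sketch (random-walk model, F = H = 1).
# q: process noise variance, r: measurement noise variance,
# x0/p0: initial state estimate and its variance (illustrative values).
def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Prediction phase: project state and uncertainty forward.
        p = p + q                 # uncertainty grows by process noise
        # Update phase: weighted average of prediction and measurement.
        k = p / (p + r)           # Kalman gain: weight on the measurement
        x = x + k * (z - x)       # corrected state estimate
        p = (1 - k) * p           # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings of a true value of 1.25 (made-up numbers).
zs = [1.36, 1.19, 1.30, 1.21, 1.27, 1.18, 1.33, 1.24, 1.26, 1.22]
est = kalman_1d(zs)
print(est[-1])   # final estimate, close to the true value 1.25
```

Note the recursive structure: each step needs only the current measurement `z` and the previous `x` and `p`, matching the "no additional past information" property of the filter.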
Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Though regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.
It is a common misconcept |
https://en.wikipedia.org/wiki/List%20of%20Nintendo%20Entertainment%20System%20accessories | This is a list of accessories released for the Nintendo Entertainment System (known in Japan as the Family Computer, or Famicom) by Nintendo and other various third party manufacturers.
Family Computer
Since the Famicom lacked traditional game controller ports, third-party controllers were designed for use with the console's expansion slot.
Nintendo Entertainment System
See also
History of the Nintendo Entertainment System
Nintendo Entertainment System hardware clone
List of Super Nintendo Entertainment System accessories
References
Video game lists by platform |
https://en.wikipedia.org/wiki/Apollo/Domain | Apollo/Domain was a range of workstations developed and produced by Apollo Computer from circa 1980 to 1989. The machines were built around the Motorola 68k family of processors, except for the DN10000, which had from one to four of Apollo's RISC processors, named PRISM.
Operating system
The original operating system was Apollo's own product called Aegis, which was later renamed Domain/OS. The Aegis and Domain/OS system offered advanced features for the time, for example an object-oriented filesystem, network transparency, diskless booting, a graphical user interface and, in Domain/OS, interoperability with BSD, System V and POSIX.
Hardware
An Apollo workstation resembled a modern PC, with base unit, keyboard, mouse, and screen. Early models were housed in short (about 2 ft high) 19" rack cabinets that would be set beside a desk or under a table. The DN300 and later DN330 were designed as integrated units with system and monitor in one unit and fit easily on a desk. Every Apollo system (even standalones) had to include at least one network interface. Originally the only option was the 12 Mbit/s Apollo Token Ring (ATR). Over time, 10 Mbit/s Ethernet was added as an option. It has sometimes been stated that IBM Token Ring was an option, but this was never available. The ATR was generally the best choice, since it was extremely scalable; whilst the Ethernet of the time suffered serious performance loss as extra machines were added to the network, this was not true of ATR, which could easily have over a hundred machines on one network. One drawback was that, unlike Ethernet, one machine failure (which could easily happen given a single faulty connector) stopped the entire network. For this reason, Apollo provided an optional (but strongly recommended) network cabling system of bypass switches and quick connect boxes which allowed machines to be disconnected and moved without problems. Apollo Token Ring networks used 75 ohm RG-6U coaxial cabling.
Networking
The network orientation of the systems, together with the ATR functionality, made it easy and practicable to boot and run diskless machines using another machine's OS. In principle, as many machines could be booted from one host as it could cope with; in practice, four diskless machines from one host was about the limit. Provided the correct machine-specific software was installed on the host (again, very easy), any type of machine could be booted from any other (one complication being that a DN10000 could only be booted from another DN10000 or a 68K-based system which had "cmpexe" compound executables installed).
Some systems could have the graphics card removed so that they could be used as servers; in such a case the keyboard and mouse were automatically ignored, and the system was accessed either across the network or via a dumb terminal plugged into the machine's serial port. Such a system was designated "DSP" instead of "DN".
Models
The model naming convention was DN (for Domain Node) with |
https://en.wikipedia.org/wiki/Apollo%20Computer | Apollo Computer Inc., founded in 1980 in Chelmsford, Massachusetts, by William Poduska (a founder of Prime Computer) and others, developed and produced Apollo/Domain workstations in the 1980s. Along with Symbolics and Sun Microsystems, Apollo was one of the first vendors of graphical workstations in the 1980s. Like computer companies at the time and unlike manufacturers of IBM PC compatibles, Apollo produced much of its own hardware and software.
Apollo was acquired by Hewlett-Packard in 1989 for US$476 million, and gradually closed down over the period of 1990–1997. The brand (as "HP Apollo") was resurrected in 2014 as part of HP's high-performance computing portfolio.
History
Apollo was started in 1980, two years before Sun Microsystems.
In addition to Poduska, the founders included Dave Nelson (Engineering), Mike Greata (Engineering), Charlie Spector (COO), Bob Antonuccio (Manufacturing), Gerry Stanley (Sales and Marketing), and Dave Lubrano (Finance). The founding engineering team included Mike Sporer, Bernie Stumpf, Russ Barbour, Paul Leach, and Andy Marcuvitz.
In 1981, the company unveiled the DN100 workstation, which used the Motorola 68000 microprocessor. Apollo workstations ran Aegis (later replaced by Domain/OS), a proprietary operating system with a Unix alternative shell. Apollo's networking was particularly elegant, among the first to allow demand paging over the network, and allowing a degree of network transparency and low sysadmin-to-machine ratio.
From 1980 to 1987, Apollo was the largest manufacturer of network workstations. Its quarterly sales exceeded $100 million for the first time in late 1986, and by the end of that year, it had the largest worldwide share of the engineering workstations market, at twice the market share of the number two, Sun Microsystems. At the end of 1987, it was third in market share after Digital Equipment Corporation and Sun, but ahead of Hewlett-Packard and IBM. Apollo's largest customers were Mentor Graphics (electronic design), General Motors, Ford, Chrysler, Chicago Research and Trading (Options and Futures) and Boeing.
Apollo was acquired by Hewlett-Packard in 1989 for US$476 million, and gradually closed down over the period 1990–1997. After the acquisition, HP integrated much of Apollo's technology into its own HP 9000 series of workstations and servers. The Apollo engineering center took over PA-RISC workstation development, and Apollo became an HP workstation brand name (HP Apollo 9000) for a while. Apollo also invented the revision control system DSEE (Domain Software Engineering Environment), which inspired IBM Rational ClearCase. DSEE was pronounced "dizzy".
Apollo machines used a proprietary operating system, Aegis, because of the excessive cost of single-CPU Unix licenses at the time of system definition. Aegis, like Unix, was based on concepts from the Multics time-sharing operating system. It used the concepts of shell progra |
https://en.wikipedia.org/wiki/NMEA%200183 | NMEA 0183 is a combined electrical and data specification for communication between marine electronics such as echo sounder, sonars, anemometer, gyrocompass, autopilot, GPS receivers and many other types of instruments. It has been defined and is controlled by the National Marine Electronics Association (NMEA). It replaces the earlier NMEA 0180 and NMEA 0182 standards. In leisure marine applications it is slowly being phased out in favor of the newer NMEA 2000 standard, though NMEA 0183 remains the norm in commercial shipping.
Details
The electrical standard used is EIA-422, also known as RS-422, although most hardware with NMEA-0183 outputs is also able to drive a single EIA-232 port. Although the standard calls for isolated inputs and outputs, various series of hardware do not adhere to this requirement.
The NMEA 0183 standard uses a simple ASCII, serial communications protocol that defines how data are transmitted in a "sentence" from one "talker" to multiple "listeners" at a time. Through the use of intermediate expanders, a talker can have a unidirectional conversation with a nearly unlimited number of listeners, and using multiplexers, multiple sensors can talk to a single computer port.
At the application layer, the standard also defines the contents of each sentence (message) type, so that all listeners can parse messages accurately.
While NMEA 0183 only defines an RS-422 transport, there also exists a de facto standard in which the sentences from NMEA0183 are placed in UDP datagrams (one sentence per packet) and sent over an IP network.
The NMEA standard is proprietary and sells for at least US$2000 (except for members of the NMEA) as of September 2020. However, much of it has been reverse-engineered from public sources.
UART settings
The standard specifies a baud rate of 4,800 with 8 data bits, no parity, and one stop bit. There is a variation of the standard called NMEA-0183HS that specifies a baud rate of 38,400. This is in general use by AIS devices.
Message structure
All transmitted data are printable ASCII characters between 0x20 (space) to 0x7e (~)
Data characters are all the above characters except the reserved characters (See next line)
Reserved characters are used by NMEA0183 for the following uses:
Messages have a maximum length of 82 characters, including the $ or ! starting character and the ending <LF>
The start character for each message can be either a $ (For conventional field delimited messages) or ! (for messages that have special encapsulation in them)
The next five characters identify the talker (two characters) and the type of message (three characters).
All data fields that follow are comma-delimited.
Where data is unavailable, the corresponding field remains blank (it contains no character before the next delimiter – see Sample file section below).
The first character that immediately follows the last data field character is an asterisk, but it is only included if a checksum is supplied.
The asterisk is immediately followed by a checksum represented as a tw |
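The checksum referred to above is conventionally the bitwise XOR of every character between the leading $ or ! and the *, reported as two hexadecimal digits. A minimal sketch under that reading (the function names are ours, not from the standard):

```python
from functools import reduce

def nmea_checksum(payload: str) -> str:
    # XOR of every character between the leading '$' or '!' and the '*',
    # rendered as two uppercase hexadecimal digits.
    return format(reduce(lambda c, ch: c ^ ord(ch), payload, 0), "02X")

def frame(payload: str) -> str:
    # Build a complete conventional sentence: start character, comma-delimited
    # payload, '*', checksum, and the terminating <CR><LF>.
    return f"${payload}*{nmea_checksum(payload)}\r\n"

print(frame("GPGLL,4916.45,N,12311.12,W,225444,A"))
```

A receiver can validate a sentence by recomputing the XOR over the span between the start character and the asterisk and comparing it with the two transmitted hex digits.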
https://en.wikipedia.org/wiki/0X | 0X or 0-X ("zero/oh ex") may refer to:
Computing
0x, prefix for a hexadecimal numeric constant
0x (decentralized exchange infrastructure), a blockchain protocol
C++11, standard for the C++ programming language (previously C++0x)
In fiction
Zero-X, a spacecraft from the Thunderbirds and Captain Scarlet puppet series
, a living cellular automaton from the Of Man and Manta novels
Vehicle
Zero X, an electric motorcycle model
See also
Ox (disambiguation)
X0 (disambiguation)
Zerox, in DC Comics |
https://en.wikipedia.org/wiki/Mimer%20SQL | Mimer SQL is a proprietary SQL-based relational database management system produced by the Swedish company Mimer Information Technology AB (Mimer AB), formerly known as Upright Database Technology AB. It was originally developed as a research project at Uppsala University in Uppsala, Sweden, in the 1970s before being developed into a commercial product.
The database has been deployed in a wide range of application situations, including the National Health Service Pulse blood transfusion service in the UK, Volvo Cars production line in Sweden and automotive dealers in Australia. It has sometimes been one of the limited options available in realtime critical applications and resource restricted situations such as mobile devices.
History
Mimer SQL originated from a project at the ITC service center supporting Uppsala University and some other institutions to leverage the relational database capabilities proposed by Codd and others. The initial release in about 1975 was designated RAPID and was written in IBM assembly language. The name was changed to Mimer in 1977 to avoid a trademark issue. Other universities were interested in running the project on a number of machine architectures, and Mimer was rewritten in Fortran to achieve portability. Further products were developed for Mimer, with Mimer/QL implementing the QUEL query language.
The emergence of SQL in the 1980s as the standard query language resulted in Mimer's developers choosing to adopt it, with the product becoming Mimer SQL.
In 1984 Mimer was transferred to the newly established company Mimer Information Systems.
Versions
The Mimer SQL database server is currently supported on the main platforms of Windows, macOS, Linux, and OpenVMS (Itanium and x86-64). Previous versions of the database engine were supported on other operating systems, including Solaris, AIX, HP-UX, Tru64, SCO and DNIX. Versions of Mimer SQL are available for download and are free for development.
The Enterprise product is a standards-based SQL database server built upon the Mimer SQL Experience database server. This product is highly configurable, and components can be added, removed or replaced in the foundation product to derive a product suitable for embedded, real-time or small-footprint applications.
The Mimer SQL Realtime database server is a replacement database engine specifically designed for applications where real-time aspects are paramount. This is sometimes marketed as the Automotive approach. For resource limited environments the Mimer SQL Mobile database server is a replacement runtime environment without a SQL compiler. This is used for portable and certain custom devices and is termed the Mobile Approach.
Custom embedded approaches can be applied to multiple hardware and operating system combinations.
These options enable Mimer SQL to be deployed to a wide variety of additional target platforms, such as Android, and real-time operating systems including VxWorks.
The database is |
https://en.wikipedia.org/wiki/Bluefish%20%28software%29 | Bluefish is a free and open-source advanced text editor with a variety of tools for programming and website development. It supports coding languages including HTML, XHTML, CSS, XML, PHP, C, C++, JavaScript, Java, Go, Vala, Ada, D, SQL, Perl, ColdFusion, JSP, Python, Ruby, and shell. It is available for many platforms, including Linux, macOS and Windows, and can be used via integration with GNOME or run as a stand-alone application. Designed as a compromise between plain text editors and full programming IDEs, Bluefish is lightweight, fast and easy to learn, while providing many IDE features. It has been translated into 17 languages.
Features
Bluefish's wizards can be used to assist in task completion. Its other features include syntax highlighting, auto-completion, code folding, auto-recovery, upload/download functionality, a code-aware spell-checker, a Unicode character browser, code navigation, and bookmarks. It has a multiple document interface that can quickly load codebases or websites, and it has many search-and-replace tools that can be used with scripts and regular expressions. It can store the current state of projects to reopen them in that state. Zen Coding/Emmet is supported for web development.
Bluefish is extensible via plugins and scripts. Many scripts come preconfigured, including static code analysis and syntax and markup checks for many different markup and programming languages.
History
Bluefish was started by Chris Mazuc and Olivier Sessink in 1997 to serve web development professionals on Linux desktop platforms. Its development has been continued by a changing group of professional web developers under project organizer Olivier Sessink. It was originally called Thtml editor, which was considered too cryptic; then Prosite, which was abandoned to avoid clashes with web-development companies already using that name. The name Bluefish was chosen after a logo (a child's drawing of a blue fish) was proposed on its mailing list. Since version 1.0, the original logo has been replaced with a new, more polished one.
Source code and development
Bluefish is written in C and uses the cross-platform GTK library for its GUI widgets. Markup and programming language support is defined in XML files. Bluefish has a plugin API in C, but it has been used mainly to separate non-maintained parts (such as the infobrowser plugin) from maintained parts. A few Python plugins exist as well, but they need a C plugin to interact with the main program. Bluefish also supports very loosely coupled plugins: external scripts that read standard input and return their results via standard output can be configured by the user in the preferences panel. It uses autoconf/automake to configure and set up its build environment. Both LLVM and GCC can be used to compile Bluefish. On Windows, MinGW is used to build the binaries.
Reception
A Softpedia review found the software powerful, feature-rich and easy to use.
See also
Comparison of |
https://en.wikipedia.org/wiki/Godel%20%28disambiguation%29 | Kurt Gödel (28 April 1906 – 14 January 1978) was an Austrian (later American) logician, mathematician and philosopher.
Godel or similar may also refer to:
Gödel (programming language)
3366 Gödel, a main belt asteroid discovered in 1985
Gödel, Kastamonu, a village in the Kastamonu Province, Turkey
Godel (river), at Föhr, Schleswig-Holstein, Germany
Godel Iceport, at the coast of Queen Maud Land, Antarctica
Other people with the surname Godel include:
Gaston Godel (1914–2004), Swiss race walker
Vahé Godel (born 1931), Swiss writer
Arkadiusz Godel (born 1952), Polish fencer
Jon Godel, 21st-century British journalist
See also
Godel mouthpiece, a scuba mouthpiece with a snorkel attached |
https://en.wikipedia.org/wiki/Sidney%20Darlington | Sidney Darlington (July 18, 1906 – October 31, 1997) was an American electrical engineer and inventor of a transistor configuration in 1953, the Darlington pair. He advanced the state of network theory, developing the insertion-loss synthesis approach, and invented chirp radar, bombsights, and gun and rocket guidance.
Darlington was awarded a B.S. in physics, magna cum laude, from Harvard in 1928, where he was elected to Phi Beta Kappa. He also received a B.S. in E.E. from MIT in 1929, and a Ph.D. in physics from Columbia in 1940.
In 1945, he was awarded the Presidential Medal of Freedom, the United States' highest civilian honor, for his contributions during World War II. He was an elected member of the National Academy of Engineering, which cited his contributions to electrical network theory, radar, and guidance systems. In 1975, he received IEEE's Edison Medal "For basic contributions to network theory and for important inventions in radar systems and electronic circuits" and the IEEE Medal of Honor in 1981 "For fundamental contributions to filtering and signal processing leading to chirp radar".
He died at his home in Exeter, New Hampshire, USA, at the age of 91.
Patents
— Wave Transmission Network
— Semiconductor signal translating devices.(ed., "Darlington Transistor")
— Bombsight Computer
— Tracking Device
— Fire Control Computer
— Pulse Transmission(Chirp)
— Rocket Guidance
— Two-Port Network Synthesis
— Chirp Pulse Equalizer
References
External links
IEEE Biography
Darlington’s Contributions to Transistor Circuit Design
Irwin W. Sandberg and Ernest S. Kuh, "Sidney Darlington", Biographical Memoirs of the National Academy of Sciences (2004)
1906 births
1997 deaths
IEEE Medal of Honor recipients
IEEE Edison Medal recipients
American electrical engineers
Members of the United States National Academy of Engineering
Scientists at Bell Labs
People associated with radar
Scientists from Pittsburgh
Engineers from Pennsylvania
20th-century American engineers
MIT School of Engineering alumni
Columbia Graduate School of Arts and Sciences alumni
Harvard University alumni |
https://en.wikipedia.org/wiki/Installation | Installation may refer to:
Installation (computer programs)
Installation, work of installation art
Installation, military base
Installation, into an office, especially a religious (Installation (Christianity)) or political one |
https://en.wikipedia.org/wiki/Voluntary%20Agency%20Network%20of%20Korea | The Voluntary Agency Network of Korea (), abbreviated VANK (), is an Internet-based South Korean organization funded by the Korean government and established in 1999, consisting of 120,000 South Korean members and 30,000 international members. They refer to themselves as the "Cyber Diplomatic Delegation Group", and are mainly involved in spreading information about Korea to the world. They are politically motivated in their activities and frequently promote the Korean government's claims in various Japan-Korea and China-Korea disputes. Park Ki-Tae, founder of VANK, has said "the project is aimed at isolating Japan". VANK's membership consists mainly of junior high and high school students, although university students also participate.
Activities
Examples of campaigns they have conducted include organizing a protest movement to pressure Google and Apple to label the Liancourt Rocks as Dokdo on their maps, spreading the story of the ancient kingdom of Goguryeo, and publicizing Jikji, the world's oldest extant book printed using movable metal type.
VANK publishes reading materials, postcards, maps, and videos. VANK's self-built online database and published books with information about Korea are acknowledged by overseas universities as recommended learning resources about Korea. As a way to exchange cultures and connect with foreigners, VANK also conducts surveys about their opinions of Korea, such as a notable survey about what aspects of Korea interest foreigners the most.
VANK disputes certain terms and information regarding Asian geographic names or about East Asian history. The head of the Voluntary Agency Network of Korea said the organization has corrected hundreds of mistaken statements by foreign governments about South Korea. VANK also raises awareness for Japanese war crimes and promotes the banning and removal of symbolism they associate with Imperial Japan.
In 2013, VANK launched a campaign against the Tokyo Olympic and Paralympic Games. The campaign included a letter to the International Olympic Committee (IOC) opposing the games because "Japan has no remorse for war crimes"; the letter was also sent to major foreign media such as CNN and The New York Times. On January 6, 2020, posters were put up on a temporary fence at the site of the new Japanese embassy in Seoul. The posters depict the Tokyo Games as contaminated by nuclear radiation; in one scene the Olympic torch relay is shown with a man in a hazmat suit transporting radioactive material. VANK also produced stamps and coins with similar imagery.
In 2019, VANK launched a campaign against the expression Chinese New Year, recommending the term "lunar new year" instead.
In 2020, VANK urged Chinese netizens to stop cyberbullying Korean celebrity singer Lee Hyo-ri after her Instagram account received several complaints and criticisms. VANK posted an online petition titled "Stop China's cyber chauvinism which lynched a Korean celebrity!". It justified the |
https://en.wikipedia.org/wiki/STS-6 | STS-6 was the sixth NASA Space Shuttle mission and the maiden flight of the Space Shuttle Challenger. Launched from Kennedy Space Center on April 4, 1983, the mission deployed the first Tracking and Data Relay Satellite, TDRS-1, into orbit, before landing at Edwards Air Force Base on April 9, 1983. STS-6 was the first Space Shuttle mission during which an extravehicular activity (EVA) was conducted, and hence the first in which the Extravehicular Mobility Unit (EMU) was used.
Crew
STS-6 was the last shuttle mission with a four-person crew until STS-135, the final shuttle mission, which launched on July 8, 2011. Commander Paul Weitz had previously served as Pilot on the first Skylab crewed mission (Skylab-2), where he lived and worked in Skylab for nearly a month from May to June 1973. After Skylab, Weitz became the Deputy Chief of the Astronaut Office under Chief Astronaut John Young. Bobko originally became an astronaut for the Air Force's Manned Orbiting Laboratory (MOL) program but later joined NASA in 1969 after the MOL program's cancellation. Prior to STS-6 he participated in the Skylab Medical Experiment Altitude Test (SMEAT) and worked as a member of the support crew for the Apollo-Soyuz Test Project (ASTP).
Peterson was also a transfer from the MOL program, and was a member of the support crew for Apollo 16. Musgrave joined NASA in 1967 as part of the second scientist-astronaut group, and was the backup Science Pilot for the first Skylab mission. He also participated in the design of the equipment that he and Peterson used during their EVA on the STS-6 mission.
Support crew
Roy D. Bridges Jr. (entry CAPCOM)
Mary L. Cleave
Richard O. Covey (ascent CAPCOM)
Guy Gardner
Jon McBride
Bryan D. O'Connor
Spacewalks
Musgrave and Peterson
EVA Start: April 7, 1983
EVA End: April 8, 1983
Duration: 4 hours, 17 minutes
Crew seating arrangements
Mission background
The new orbiter was rolled out to LC-39A in November 1982. On December 18, 1982, Challenger was given a PFRF (Pre Flight Readiness Firing) to verify the operation of the main engines. The PFRF lasted for 16 seconds. Although engine operation was generally satisfactory, telemetry data indicated significant leakage of liquid hydrogen in the thrust section. However, it was not possible to determine the location of the leak with certainty, so program directors decided on a second PFRF with added telemetry probes. It was known that during the test run on December 18, 1982, recirculated exhaust gases and vibration had leaked into the thrust section, and this was considered a potential cause of the leak. Therefore, the originally planned launch in late January 1983 had to be postponed.
On January 25, 1983, a second PFRF was conducted which lasted 23 seconds and exhibited more hydrogen leaks. Eventually, it was found that low pressure ducting in the No. 1 engine was cracked. The engine was replaced by a spare, which was found to also have leaks. A third engine had to be ordered from Rocketdyne, and after thoro |
https://en.wikipedia.org/wiki/Euston%20railway%20station | Euston railway station ( ; or London Euston) is a central London railway terminus managed by Network Rail in the London Borough of Camden. It is the southern terminus of the West Coast Main Line, the UK's busiest inter-city railway. Euston is the eleventh-busiest station in Britain and the country's busiest inter-city passenger terminal, being the gateway from London to the West Midlands, North West England, North Wales and Scotland.
Intercity express passenger services to the major cities of Birmingham, Manchester, Liverpool, Glasgow and Edinburgh, and through services to for connecting ferries to Dublin are operated by Avanti West Coast. Overnight sleeper services to Scotland are provided by the Caledonian Sleeper. London Northwestern Railway provide commuter and regional services to the West Midlands, whilst London Overground provide local suburban services in the London area via the Watford DC Line which runs parallel to the West Coast Main Line as far as . Euston tube station is connected to the main concourse and Euston Square tube station is nearby. King's Cross and St Pancras railway stations are about east along Euston Road.
Euston, the first inter-city railway terminal in London, was planned by George and Robert Stephenson. It was designed by Philip Hardwick and built by William Cubitt, with a distinctive arch over the station entrance. The station opened as the terminus of the London and Birmingham Railway (L&BR) on 20 July 1837. Euston was expanded after the L&BR was amalgamated with other companies to form the London and North Western Railway, and the original sheds were replaced by the Great Hall in 1849. Capacity was increased throughout the 19th century from two platforms to fifteen. The station was controversially rebuilt in the mid-1960s when the Arch and the Great Hall were demolished to accommodate the electrified West Coast Main Line, and the revamped station still attracts criticism over its architecture. Euston is to be the London terminus for the planned High Speed 2 railway and the station is being redeveloped to accommodate it.
Name and location
The station is named after Euston Hall in Suffolk, the ancestral home of the Dukes of Grafton, the main landowners in the area during the mid-19th century. It is set back from Euston Square and Euston Road on the London Inner Ring Road, between Cardington Street and Eversholt Street in the London Borough of Camden. It is one of 19 stations managed by Network Rail. As of 2016, it is the fifth-busiest station in Britain and the busiest inter-city passenger terminal in the country. It is the sixth-busiest terminus in London by entries and exits. Euston bus station is in front of the main entrance.
History
Euston was the first inter-city railway station in London. It opened on 20 July 1837 as the terminus of the London and Birmingham Railway (L&BR). The old station building was demolished in the 1960s and replaced with the present building in the international modern style.
|
https://en.wikipedia.org/wiki/Hoare%20logic | Hoare logic (also known as Floyd–Hoare logic or Hoare rules) is a formal system with a set of logical rules for reasoning rigorously about the correctness of computer programs. It was proposed in 1969 by the British computer scientist and logician Tony Hoare, and subsequently refined by Hoare and other researchers. The original ideas were seeded by the work of Robert W. Floyd, who had published a similar system for flowcharts.
Hoare triple
The central feature of Hoare logic is the Hoare triple. A triple describes how the execution of a piece of code changes the state of the computation. A Hoare triple is of the form
{P} C {Q}
where P and Q are assertions and C is a command. P is named the precondition and Q the postcondition: when the precondition is met, executing the command establishes the postcondition. Assertions are formulae in predicate logic.
Hoare logic provides axioms and inference rules for all the constructs of a simple imperative programming language. In addition to the rules for the simple language in Hoare's original paper, rules for other language constructs have been developed since then by Hoare and many other researchers. There are rules for concurrency, procedures, jumps, and pointers.
Partial and total correctness
Using standard Hoare logic, only partial correctness can be proven. Total correctness additionally requires termination, which can be proven separately or with an extended version of the While rule. Thus the intuitive reading of a Hoare triple is: Whenever P holds of the state before the execution of C, then Q will hold afterwards, or C does not terminate. In the latter case, there is no "after", so Q can be any statement at all. Indeed, one can choose Q to be false to express that C does not terminate.
"Termination" here and in the rest of this article is meant in the broader sense that computation will eventually be finished, that is it implies the absence of infinite loops; it does not imply the absence of implementation limit violations (e.g. division by zero) stopping the program prematurely. In his 1969 paper, Hoare used a narrower notion of termination which also entailed the absence of implementation limit violations, and expressed his preference for the broader notion of termination as it keeps assertions implementation-independent:
Rules
Empty statement axiom schema
The empty statement rule asserts that the skip statement does not change the state of the program; thus whatever holds true before skip also holds true afterwards: {P} skip {P}.
Assignment axiom schema
The assignment axiom states that, after the assignment, any predicate that was previously true for the right-hand side of the assignment now holds for the variable. Formally, let P be an assertion in which the variable x is free. Then:
{P[E/x]} x := E {P}
where P[E/x] denotes the assertion P in which each free occurrence of x has been replaced by the expression E.
The assignment axiom scheme means that the truth of P[E/x] is equivalent to the after-assignment truth of P. Thus were P[E/x] true prior to the assignm
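As an informal illustration of the assignment axiom, the triple {x + 1 <= 10} x := x + 1 {x <= 10} is an instance of the schema with P being x <= 10 and E being x + 1. The following Python sketch checks this instance at runtime; it is a testing aid, not a proof system, and all names in it are ours.

```python
def holds_assignment_axiom(pre, command, post, state):
    # Check one concrete run of a Hoare triple {pre} command {post}:
    # vacuously true when the precondition does not hold.
    if not pre(state):
        return True
    return post(command(state))

# Instance of the assignment axiom {P[E/x]} x := E {P}
# with P: x <= 10 and E: x + 1.
pre  = lambda s: s["x"] + 1 <= 10        # P[E/x]
cmd  = lambda s: {**s, "x": s["x"] + 1}  # x := x + 1
post = lambda s: s["x"] <= 10            # P

# The axiom instance holds for every starting value of x.
assert all(holds_assignment_axiom(pre, cmd, post, {"x": x})
           for x in range(-5, 20))
```

Note that this only samples states; the axiom itself guarantees the triple for all states, which is what makes it an axiom schema rather than a test.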
https://en.wikipedia.org/wiki/Macintosh%20IIfx | The Macintosh IIfx is a personal computer designed, manufactured and sold by Apple Computer from March 1990 to April 1992. At introduction it cost from to , depending on configuration, and it was the fastest Macintosh available at the time.
The IIfx is the most powerful of the 68030-based Macintosh II family and was replaced at the top of Apple's lineup by the Macintosh Quadra in 1991. It is the last Apple computer released that was designed using the Snow White design language.
Overview
Dubbed "Wicked Fast" by its Product Manager, Frank Casanova – who came to Apple from Apollo Computer in Boston, Massachusetts, where the Boston term "wicked" is commonly used to denote anything extreme – the IIfx runs at a clock rate of 40 megahertz, has 32 KB of Level 2 cache, six NuBus slots, and includes a number of proprietary ASICs and coprocessors. Designed to speed up the machine even further, these chips require system-specific drivers. The 40 MHz speed refers to the main logic board clock (the bus), the Motorola 68030 CPU, and the computer's Motorola 68882 FPU. The machine has eight RAM slots, for a maximum of 128 MB RAM, an enormous amount at the time.
The IIfx features specialized high-speed (80 ns) RAM on 64-pin dual-ported SIMMs, while all other contemporary Macintosh models use 30-pin SIMMs. The extra pins provide a separate path to allow latched read and write operations. It is also possible to use parity memory modules; the IIfx is the only stock 68K Macintosh to support them, along with special versions of the Macintosh IIci. The eight RAM slots must be populated four at a time with 1, 4, or 16 MB modules, giving the 128 MB maximum.
The IIfx includes two dedicated I/O processors that handle floppy disk operations, sound, ADB, and serial communications. These I/O chips each contain a 10 MHz embedded 6502 CPU, the same CPU family used in Apple II machines.
The IIfx uses SCSI as its hard disk interface, as had all previous Macintosh models since the Macintosh Plus. The IIfx requires a special black-colored SCSI terminator for external drives.
Models
When first introduced, the IIfx was offered in the following configurations:
Macintosh IIfx: 4 MB memory, 1.44 MB SuperDrive. US$8,969.
Macintosh IIfx 4/80: 4 MB memory, 80 MB HDD. US$9,869.
Macintosh IIfx 4/160: 4 MB memory, 160 MB HDD. US$10,969.
Macintosh IIfx 4/80 with Parity Support: 4 MB of parity error-checking RAM, 80 MB HDD.
Introduced May 15, 1990:
Macintosh IIfx 4/80 with A/UX: 4 MB memory, 160 MB HDD, A/UX 2.0 preinstalled. US$10,469. Shipments began in June.
Timeline
References
External links
Macintosh IIfx profile on Low End Mac
Apple-History: Macintosh IIfx
EveryMac: Macintosh IIfx
Apple Technical Note DV15 (Mirror)
Computer-related introductions in 1990
Products and services discontinued in 1992 |
https://en.wikipedia.org/wiki/MMP | MMP may refer to:
Computing and video games
Massively multi-player, a type of online game
Massively multiprocessing, large symmetric multiprocessing (SMP) computer systems
Measure Map Pro format, a GIS format
Science and mathematics
Matrix metalloproteinase enzymes
Methuselah Mouse Prize, for research into slowing cellular ageing
Millennium Mathematics Project, of the University of Cambridge
Moscow Mathematical Papyrus, an ancient Egyptian mathematical papyrus
Matrilysin, an enzyme
Minimal model program, a branch of birational geometry
Million progressive motile (million motile sperm cells per milliliter), a measure of male fertility used in semen analysis
Politics
Mixed-member proportional representation, a voting system used in Germany, New Zealand and other countries
Manitoba Marijuana Party, now Freedom Party of Manitoba, a Canadian political party
Minuteman Project, 2005 action to deter illegal immigration
Molotov–Ribbentrop Pact, a non-aggression pact signed in Moscow in the late hours of 23 August 1939
Industry and labor
International Organization of Masters, Mates & Pilots (MM&P), a maritime labor union
Moldova Metallurgical Plant, see Moldova Steel Works
Maintenance management professional, a Canadian job qualification
Sport
Memphis Motorsports Park, a race track in Millington, Tennessee, United States
Minute Maid Park, a ballpark in Houston, Texas, United States
Miller Motorsports Park, a race track in Tooele County, Utah, United States
Fiction
Mass market paperback, a bookbinding format
Tokyo Mew Mew, also known as Mew Mew Power, a Japanese cartoon
Miss Moneypenny, secretary to James Bond's boss
Mighty Math Powers, a method that Team Umizoomi uses
Others
Magellan Midstream Partners, a publicly traded partnership
Marian Movement of Priests, a Catholic organization
Multi-Man Publishing, a wargame company
Metal Mind Productions, a Polish music label
Monday Morning Podcast, a weekly comedy podcast by Bill Burr
Marchwood Military Port, a Military Port in Marchwood, England
Missile Moyenne Portée (MMP), a French anti-tank missile
Mucous membrane pemphigoid
Marilyn Monroe Productions |
https://en.wikipedia.org/wiki/Marylebone%20station | Marylebone station ( ) is a Central London railway terminus and connected London Underground station in the Marylebone area of the City of Westminster. On the National Rail network it is also known as London Marylebone and is the southern terminus of the Chiltern Main Line to Birmingham. An accompanying Underground station is on the Bakerloo line between Edgware Road and in Transport for London's fare zone 1.
The station opened on 15 March 1899 as the London terminus of the Great Central Main Line (GCML), the last major railway to open in Britain for 100 years, linking the capital to the cities of Leicester, Sheffield and Manchester. Marylebone was the last of London's main line termini to be built and is one of the smallest, opening with half of the platforms originally planned. There has been an interchange with the Bakerloo line since 1907, but not with any other lines.
Traffic declined at Marylebone station from the mid-20th century, particularly after the GCML closed. By the 1980s, it was threatened with closure, but was reprieved because of commuter traffic on the London to Aylesbury Line (a remaining part of the GCML) and from . In 1993 the station found a new role as the terminus of the Chiltern Main Line. Following the privatisation of British Rail, the station was expanded with two additional platforms in 2006 and improved services to . In 2015 services began between Marylebone and via a new chord connecting the main line to the Oxford to Bicester Line and an extension to following in 2016. As of 2020, it is the only main London terminus to host only diesel trains, as none of the National Rail lines into it are electrified.
Marylebone is one of the squares on the British Monopoly board, and is popular for filming because of its relative quietness compared to other London termini.
Location
The station stands on Melcombe Place just north of Marylebone Road, a straight west-to-east thoroughfare through Marylebone in Central London; Baker Street is close by to the east and south-east. It is in the northern, Lisson Grove, neighbourhood of the district, in a northern projection of the Bryanston and Dorset Square ward immediately south of St John's Wood. North-east is Regent's Park, north in a network of mostly residential streets is Lord's Cricket Ground and south, south-west and south-east are a mixed-use network of streets. Other nearby London termini are and .
A number of TfL bus routes serve the station.
National Rail
The main line station has six platforms; two built in 1899, two inserted into the former carriage road in the 1980s, and two built in September 2006. It is the only non-electrified terminal in London. Marylebone is operated by Chiltern Railways (part of Deutsche Bahn), making it one of the few London terminal stations not to be managed by Network Rail.
Chiltern Railways operates all services at the station, accessing the Chiltern Main Line and London to Aylesbury Line routes which serve , , Bicester, , , , , Birm |
https://en.wikipedia.org/wiki/List%20of%20Canadian%20television%20channels | Television in Canada has many individual stations, networks, and systems.
National broadcast television networks
English
CBC Television, a national public network owned by the Canadian Broadcasting Corporation (CBC).
Citytv, a privately owned television network owned by Rogers Media, with stations in Quebec, Ontario, Manitoba, Saskatchewan, Alberta and British Columbia.
CTV Television Network, a national private network (except for Newfoundland and Labrador and the territories) owned by Bell Media.
Global Television Network, a national private network (except for Newfoundland and Labrador and the territories) owned by Corus Entertainment.
APTN, the first national Indigenous broadcaster in the world.
French
Ici Radio-Canada Télé, a national public network owned by the CBC's French-language division Société Radio-Canada.
TVA, a privately owned television network owned by Groupe TVA.
Multilingual
Aboriginal Peoples Television Network, a broadcast television network with television stations in the three territories and cable network carried nationwide on cable and satellite. Programming focuses on Indigenous Peoples. It operates in English, French and various Aboriginal languages.
Regional broadcast television systems
English
CTV 2, a privately owned television system with stations in Ontario, Alberta, British Columbia, and Atlantic Canada. It is owned by Bell Media.
Great West Television, a privately owned group of stations affiliated with CTV Two and Citytv in British Columbia.
Yes TV, a group of three religious stations in Ontario and Alberta owned by Crossroads Christian Communications.
indieNET, an arrangement under which three independent broadcasters, CHCH (in Ontario), CHEK (in British Columbia) and CJON (in Newfoundland and Labrador), sub-license some of Yes TV's programming.
French
Noovo, a privately owned television system based in Quebec owned by Bell Media.
Multilingual
CFHD-DT, a privately owned multicultural station based in Montreal, using the on-air brand ICI (International Channel).
OMNI, a group of five privately owned multicultural television stations in Ontario, Alberta and British Columbia owned by Rogers Media.
Defunct regional broadcast television systems
A-Channel, a privately owned television system based in Alberta, Manitoba, and Toronto and owned by Craig Media.
Baton Broadcast System, or BBS, a privately owned television system based in Ontario and Saskatchewan and owned by Baton Broadcasting.
E!, a privately owned television system based in Quebec, Ontario, Alberta, and British Columbia and owned by Canwest.
Joytv, a privately owned television system based in British Columbia and Manitoba and was owned by ZoomerMedia.
Northern Television, a system similar in fashion to Great West Television, also in British Columbia, shared by two northern BC CBC Television affiliates.
Regional broadcast television stations
English
CHCH-DT, using the on-air brand CHCH - a privately owned television station in Hamilton owne |
https://en.wikipedia.org/wiki/Eyewitness%20to%20History | Eyewitness to History was a Friday night CBS Television Network public affairs program. It was initially hosted by veteran broadcaster Charles Kuralt (1960–61), followed by Walter Cronkite (1961–62), and then Charles Collingwood (1962–63). It aired from September 30, 1960 through July 26, 1963 in the 10:30 pm time slot. Sponsored by the Firestone Tire and Rubber Company, the series concentrated on the most significant news story or stories of the previous week. Major events reviewed included the Kennedy-Nixon 1960 presidential campaign, highlights of the Kennedy administration, the Bay of Pigs invasion, the space race, the Cuban Missile Crisis and the civil rights movement.
The show's title was shortened to Eyewitness in 1961. Coincidentally, many local CBS affiliates adopted the branding "Eyewitness News" for their local newscasts in the 1960s.
One of the show's producers, Av Westin, went on to become executive producer of ABC Evening News and, later, 20/20.
References
http://www.museum.tv/eotvsection.php?entrycode=eyewitnessto
CBS original programming
1960s American television news shows
1960 American television series debuts
1963 American television series endings |
https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20mathematics | Many mathematical problems have been stated but not yet solved. These problems come from many areas of mathematics, such as theoretical physics, computer science, algebra, analysis, combinatorics, algebraic, differential, discrete and Euclidean geometries, graph theory, group theory, model theory, number theory, set theory, Ramsey theory, dynamical systems, and partial differential equations. Some problems belong to more than one discipline and are studied using techniques from different areas. Prizes are often awarded for the solution to a long-standing problem, and some lists of unsolved problems, such as the Millennium Prize Problems, receive considerable attention.
This list is a composite of notable unsolved problems mentioned in previously published lists, including but not limited to lists considered authoritative. Although this list may never be comprehensive, the problems listed here vary widely in both difficulty and importance.
Lists of unsolved problems in mathematics
Various mathematicians and organizations have published and promoted lists of unsolved mathematical problems. In some cases, the lists have been associated with prizes for the discoverers of solutions.
Millennium Prize Problems
Of the original seven Millennium Prize Problems listed by the Clay Mathematics Institute in 2000, six remain unsolved to date:
Birch and Swinnerton-Dyer conjecture
Hodge conjecture
Navier–Stokes existence and smoothness
P versus NP
Riemann hypothesis
Yang–Mills existence and mass gap
The seventh problem, the Poincaré conjecture, was solved by Grigori Perelman in 2003. However, a generalization called the smooth four-dimensional Poincaré conjecture—that is, whether a four-dimensional topological sphere can have two or more inequivalent smooth structures—is unsolved.
Notebooks
The Kourovka Notebook () is a collection of unsolved problems in group theory, first published in 1965 and updated many times since.
The Sverdlovsk Notebook () is a collection of unsolved problems in semigroup theory, first published in 1969 and updated many times since.
The Dniester Notebook () lists several hundred unsolved problems in algebra, particularly ring theory and module theory.
The Erlagol Notebook () lists unsolved problems in algebra and model theory.
Unsolved problems
Algebra
Birch–Tate conjecture on the relation between the order of the center of the Steinberg group of the ring of integers of a number field and the field's Dedekind zeta function.
Bombieri–Lang conjectures on densities of rational points of algebraic surfaces and algebraic varieties defined on number fields and their field extensions.
Connes embedding problem in von Neumann algebra theory
Crouzeix's conjecture: the matrix norm of a complex function f applied to a complex matrix A is at most twice the supremum of |f(z)| over the field of values of A.
Determinantal conjecture on the determinant of the sum of two normal matrices.
Eilenberg–Ganea conjecture: a group with cohomolo |
https://en.wikipedia.org/wiki/Front-side%20bus | The front-side bus (FSB) is a computer communication interface (bus) that was often used in Intel-chip-based computers during the 1990s and 2000s. The EV6 bus served the same function for competing AMD CPUs. Both typically carry data between the central processing unit (CPU) and a memory controller hub, known as the northbridge.
Depending on the implementation, some computers may also have a back-side bus that connects the CPU to the cache. This bus and the cache connected to it are faster than accessing the system memory (or RAM) via the front-side bus. The speed of the front-side bus is often used as an important measure of the performance of a computer.
The original front-side bus architecture has been replaced by HyperTransport, Intel QuickPath Interconnect or Direct Media Interface in modern CPUs in personal computers.
History
The term came into use by Intel Corporation about the time the Pentium Pro and Pentium II products were announced, in the 1990s.
"Front side" refers to the external interface from the processor to the rest of the computer system, as opposed to the back side, where the back-side bus connects the cache (and potentially other CPUs).
A front-side bus (FSB) is mostly used on PC-related motherboards (including personal computers and servers); it is seldom used in embedded systems or similar small computers. The FSB design was a performance improvement over the single-system-bus designs of the previous decades, although front-side buses are themselves sometimes referred to as the "system bus".
Front-side buses usually connect the CPU and the rest of the hardware via a chipset, which Intel implemented as a northbridge and a southbridge. Other buses like the Peripheral Component Interconnect (PCI), Accelerated Graphics Port (AGP), and memory buses all connect to the chipset in order for data to flow between the connected devices. These secondary system buses usually run at speeds derived from the front-side bus clock, but are not necessarily synchronized to it.
In response to AMD's Torrenza initiative, Intel opened its FSB CPU socket to third party devices.
Prior to this announcement, made in Spring 2007 at Intel Developer Forum in Beijing, Intel had very closely guarded who had access to the FSB, only allowing Intel processors in the CPU socket. The first example was field-programmable gate array (FPGA) co-processors, a result of collaboration between Intel-Xilinx-Nallatech and Intel-Altera-XtremeData (which shipped in 2008).
Related component speeds
CPU
The frequency at which a processor (CPU) operates is determined by applying a clock multiplier to the front-side bus (FSB) speed in some cases. For example, a processor running at 3200 MHz might be using a 400 MHz FSB. This means there is an internal clock multiplier setting (also called bus/core ratio) of 8. That is, the CPU is set to run at 8 times the frequency of the front-side bus: 400 MHz × 8 = 3200 MHz. Different CPU speeds are achieved by varying either the FSB fre |
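The bus/core ratio arithmetic above can be sketched as a one-line helper (an illustrative snippet; the function name is ours):

```python
def cpu_frequency(fsb_mhz, multiplier):
    """Core clock = front-side bus clock x bus/core ratio."""
    return fsb_mhz * multiplier

# The worked example from the text: a 400 MHz FSB with an 8x multiplier.
print(cpu_frequency(400, 8))  # prints 3200
```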
https://en.wikipedia.org/wiki/Satellite%20modem | A satellite modem or satmodem is a modem used to establish data transfers using a communications satellite as a relay. A satellite modem's main function is to transform an input bitstream to a radio signal and vice versa.
There are some devices that include only a demodulator (and no modulator, thus only allowing data to be downloaded by satellite) that are also referred to as "satellite modems." These devices are used in satellite Internet access (in this case uploaded data is transferred through a conventional PSTN modem or an ADSL modem).
Satellite link
A satellite modem is not the only device needed to establish a communication channel. Other equipment that is essential for creating a satellite link include satellite antennas and frequency converters.
Data to be transmitted are transferred to a modem from data terminal equipment (e.g. a computer). The modem usually has an intermediate frequency (IF) output (that is, 50–200 MHz); sometimes, however, the signal is modulated directly to L band. In most cases, the frequency has to be converted using an upconverter before amplification and transmission.
A modulated signal is a sequence of symbols, pieces of data represented by a corresponding signal state, e.g. a bit or a few bits, depending upon the modulation scheme being used. Recovering a symbol clock (making a local symbol clock generator synchronous with the remote one) is one of the most important tasks of a demodulator.
Similarly, a signal received from a satellite is firstly downconverted (this is done by a Low-noise block converter - LNB), then demodulated by a modem, and at last handled by data terminal equipment. The LNB is usually powered by the modem through the signal cable with 13 or 18 V DC.
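The conversion arithmetic is a simple addition or subtraction of the local-oscillator frequency; the figures below are illustrative assumptions, not values from the text:

```python
def downconvert(rf_mhz, lo_mhz):
    """IF = RF - LO: how an LNB shifts a satellite band down to L band."""
    return rf_mhz - lo_mhz

def upconvert(if_mhz, lo_mhz):
    """RF = IF + LO: the upconverter's job on the transmit side."""
    return if_mhz + lo_mhz

# Illustrative figures only: an 11700 MHz Ku-band downlink mixed with a
# 10750 MHz LNB local oscillator yields a 950 MHz L-band IF.
print(downconvert(11700, 10750))  # prints 950
```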
Features
The main functions of a satellite modem are modulation and demodulation. Satellite communication standards also define error correction codes and framing formats.
Popular modulation types being used for satellite communications:
Binary phase-shift keying (BPSK);
Quadrature phase-shift keying (QPSK);
Offset quadrature phase-shift keying (OQPSK);
8PSK;
Quadrature amplitude modulation (QAM), especially 16QAM.
The popular satellite error correction codes include:
Convolutional codes:
with constraint length less than 10, usually decoded using a Viterbi algorithm (see Viterbi decoder);
with constraint length more than 10, usually decoded using a Fano algorithm (see Sequential decoder);
Reed–Solomon codes usually concatenated with convolutional codes with an interleaving;
New modems support superior error correction codes (turbo codes and LDPC codes).
Frame formats that are supported by various satellite modems include:
Intelsat business service (IBS) framing
Intermediate data rate (IDR) framing
MPEG-2 transport framing (used in DVB)
E1 and T1 framing
High-end modems also incorporate some additional features:
Multiple data interfaces (like RS-232, RS-422, V.35, G.703, LVDS, Ethernet);
Embedded Distant-end Monitor and |
https://en.wikipedia.org/wiki/Universal%20Plug%20and%20Play | Universal Plug and Play (UPnP) is a set of networking protocols on the Internet Protocol (IP) that permits networked devices, such as personal computers, printers, Internet gateways, Wi-Fi access points and mobile devices, to seamlessly discover each other's presence on the network and establish functional network services. UPnP is intended primarily for residential networks without enterprise-class devices.
UPnP assumes the network runs IP and then leverages HTTP, on top of IP, in order to provide device/service description, actions, data transfer and event notification. Device search requests and advertisements are supported by running HTTP on top of UDP (port 1900) using multicast (known as HTTPMU). Responses to search requests are also sent over UDP, but are instead sent using unicast (known as HTTPU).
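A minimal sketch of that discovery exchange, assuming only the Python standard library (function names are ours; a real control point would go on to fetch each device description from the LOCATION header of the replies):

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast group and port

def build_msearch(search_target="ssdp:all", mx=2):
    """Build an SSDP M-SEARCH request (HTTP-formatted, carried over UDP)."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",              # seconds a device may delay its reply
        f"ST: {search_target}",   # search target: everything, or a device/service type
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=3.0):
    """Multicast the request (HTTPMU) and collect unicast replies (HTTPU)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), SSDP_ADDR)
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            replies.append((addr[0], data.decode(errors="replace")))
    except socket.timeout:
        pass
    return replies
```

Calling `discover()` on a home network typically returns one response per advertised service, each an HTTP/1.1 200 OK message sent back over unicast UDP.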
Conceptually, UPnP extends plug and play—a technology for dynamically attaching devices directly to a computer—to zero-configuration networking for residential and SOHO wireless networks. UPnP devices are plug and play in that, when connected to a network, they automatically establish working configurations with other devices, removing the need for users to manually configure and add devices through IP addresses.
UPnP is generally regarded as unsuitable for deployment in business settings for reasons of economy, complexity, and consistency: the multicast foundation makes it chatty, consuming too many network resources on networks with a large population of devices; the simplified access controls do not map well to complex environments; and it does not provide a uniform configuration syntax such as the CLI environments of Cisco IOS or JUNOS.
Overview
The UPnP architecture allows device-to-device networking of consumer electronics, mobile devices, personal computers, and networked home appliances. It is a distributed, open architecture protocol based on established standards such as the Internet Protocol Suite (TCP/IP), HTTP, XML, and SOAP. UPnP control points (CPs) are devices which use UPnP protocols to control UPnP controlled devices (CDs).
The UPnP architecture supports zero-configuration networking. A UPnP-compatible device from any vendor can dynamically join a network, obtain an IP address, announce its name, advertise or convey its capabilities upon request, and learn about the presence and capabilities of other devices. Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) servers are optional and are only used if they are available on the network. Devices can disconnect from the network automatically without leaving state information.
UPnP was published as a 73-part international standard, ISO/IEC 29341, in December 2008.
Other UPnP features include:
Media and device independence: UPnP technology can run on many media that support IP, including Ethernet, FireWire, IR (IrDA), home wiring (G.hn) and RF (Bluetooth, Wi-Fi). No special device driver support is necessary; common network protocols are used instead.
Us |
https://en.wikipedia.org/wiki/Epistemic%20community | An epistemic community is a network of knowledge-based experts who help decision-makers to define the problems they face, identify various policy solutions and assess the policy outcomes. The definitive conceptual framework of an epistemic community is widely accepted as that of Peter M. Haas. He describes them as "...a network of professionals with recognised expertise and competence in a particular domain and an authoritative claim to policy relevant knowledge within that domain or issue-area."
Although the members of an epistemic community may originate from a variety of academic or professional backgrounds, they are linked by a set of unifying characteristics for the promotion of collective amelioration and not collective gain. This is termed their "normative component". In the big picture, epistemic communities are socio-psychological entities that create and justify knowledge. Such communities can consist of as few as two persons and yet gain an important role in building knowledge on any specific subject. Miika Vähämaa has recently suggested that epistemic communities consist of persons who are able to understand, discuss and gain self-esteem concerning the matters being discussed.
Some theorists argue that an epistemic community may consist of those who accept one version of a story, or one version of validating a story. Michel Foucault referred more elaborately to mathesis as a rigorous episteme suitable for enabling cohesion of a discourse and thus uniting a community of its followers. In philosophy of science and systems science the process of forming a self-maintaining epistemic community is sometimes called a mindset. In politics, a tendency or faction is usually described in very similar terms.
Most researchers carefully distinguish between epistemic forms of community and "real" or "bodily" community which consists of people sharing risk, especially bodily risk.
It is also problematic to draw the line between modern ideas and more ancient ones, for example, Joseph Campbell's concept of myth from cultural anthropology, and Carl Jung's concept of archetype in psychology. Some consider forming an epistemic community a deep human need, and ultimately a mythical or even religious obligation. Among these very notably are E. O. Wilson, as well as Ellen Dissanayake, an American historian of aesthetics who famously argued that almost all of our broadly shared conceptual metaphors centre on one basic idea of safety: that of "home".
From this view, an epistemic community may be seen as a group of people who do not have any specific history together, but search for a common idea of home as if forming an intentional community. For example, an epistemic community can be found in a network of professionals from a wide variety of disciplines and backgrounds.
As discussed in Peter M. Haas's definitive text, an epistemic community is made up of a diverse range of academic and professional experts, who are allied on the basis of four unifying ch |
https://en.wikipedia.org/wiki/Proof%20theory | Proof theory is a major branch of mathematical logic and theoretical computer science within which proofs are treated as formal mathematical objects, facilitating their analysis by mathematical techniques. Proofs are typically presented as inductively-defined data structures such as lists, boxed lists, or trees, which are constructed according to the axioms and rules of inference of a given logical system. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature.
Some of the major areas of proof theory include structural proof theory, ordinal analysis, provability logic, reverse mathematics, proof mining, automated theorem proving, and proof complexity. Much research also focuses on applications in computer science, linguistics, and philosophy.
History
Although the formalisation of logic was much advanced by the work of such figures as Gottlob Frege, Giuseppe Peano, Bertrand Russell, and Richard Dedekind, the story of modern proof theory is often seen as being established by David Hilbert, who initiated what is called Hilbert's program in the Foundations of Mathematics. The central idea of this program was that if we could give finitary proofs of consistency for all the sophisticated formal theories needed by mathematicians, then we could ground these theories by means of a metamathematical argument, which shows that all of their purely universal assertions (more technically their provable sentences) are finitarily true; once so grounded we do not care about the non-finitary meaning of their existential theorems, regarding these as pseudo-meaningful stipulations of the existence of ideal entities.
The failure of the program was induced by Kurt Gödel's incompleteness theorems, which showed that any ω-consistent theory that is sufficiently strong to express certain simple arithmetic truths cannot prove its own consistency, which on Gödel's formulation is a Π⁰₁ sentence. However, modified versions of Hilbert's program emerged and research has been carried out on related topics. This has led, in particular, to:
Refinement of Gödel's result, particularly J. Barkley Rosser's refinement, weakening the above requirement of ω-consistency to simple consistency;
Axiomatisation of the core of Gödel's result in terms of a modal language, provability logic;
Transfinite iteration of theories, due to Alan Turing and Solomon Feferman;
The discovery of self-verifying theories, systems strong enough to talk about themselves, but too weak to carry out the diagonal argument that is the key to Gödel's unprovability argument.
In parallel to the rise and fall of Hilbert's program, the foundations of structural proof theory were being laid. Jan Łukasiewicz suggested in 1926 that one could improve on Hilbert systems as a basis for the axiomatic presentation of logic if one allowed the drawing of conclusions from assumptions in the inference rules of the logic. In response to this, Stanisław Jaśkowski (1929) and G |
https://en.wikipedia.org/wiki/Trans-European%20road%20network | The Trans-European road network (TERN) was defined by Council Decision 93/629/EEC of 29 October 1993, and is a project to improve the internal road infrastructure of the European Union (EU). The TERN project is one of several Trans-European Transport Networks.
Decision 93/629/EEC expired on 30 June 1995 so it was further expanded by the Decision No 1692/96/EC of the European Parliament and of the Council of 23 July 1996 on Community guidelines for the development of the trans-European transport network, which added definition not only to the proposed road network, but to other Trans-European Transport Networks (TEN-T), as they came to be called.
This Decision is no longer in force either since it was replaced by Decision No 661/2010/EU of the European Parliament and of the Council of 7 July 2010 on Union guidelines for the development of the trans-European transport network.
Details of the road network
The trans-European road network, as laid out by Article 9 of Decision 661/2010/EU, is to include motorways and high-quality roads, whether existing, new or to be adapted, which:
play an important role in long-distance traffic; or
bypass the main urban centres on the routes identified by the network; or
provide interconnection with other modes of transport; or
link landlocked and peripheral regions to central regions of the Union.
Beyond these, the network should guarantee users a high, uniform and continuous level of services, comfort and safety.
It is also to include infrastructure for traffic management, user information, the handling of incidents and emergencies, and electronic fee collection. Such infrastructure is to be based on active cooperation between traffic management systems at European, national and regional level and providers of travel and traffic information and value-added services, which will ensure the necessary complementarity with applications whose deployment is facilitated under the trans-European telecommunications networks programme.
Selected TERN projects
Øresund Bridge, Denmark & Sweden (1992-1994)
Sidcup Bypass, London, United Kingdom (1985)
M25 motorway upgrades, UK (1985)
M20 motorway upgrades, UK (1986, 1989)
E-18 (Nordic Triangle) route in Finland (1995-2001)
A6 extensions in Germany (1997)
A43 improvements in Maurienne, France (1998)
A8 autobahn, Germany (2000)
N-340 from Cádiz to Barcelona via Málaga, Spain (2001)
A.Th.E. (north-south) and Egnatia Odos (east-west) motorways, Greece (1990-2004)
Ireland-UK-Benelux Link
Projects of common interest
In addition to specific priority axes and projects, projects of common interest form a common objective, the implementation of which depends on their degree of maturity and the availability of financial resources. Any project is of common interest which fulfils the criteria established in Article 7 of Decision 661/2010/EU.
References
See also
European routes
Trans-European Transport Network
International road networks
Road transport in Europe |
https://en.wikipedia.org/wiki/Primality%20test | A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input). Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might more accurately be called compositeness tests instead of primality tests.
Simple methods
The simplest primality test is trial division: given an input number, n, check whether it is divisible by any prime number between 2 and √n (i.e., whether the division leaves no remainder). If so, then n is composite. Otherwise, it is prime. In fact, for any divisor p ≥ √n, there must be another divisor n/p ≤ √n, and a prime divisor q of n/p, and therefore looking for prime divisors at most √n is sufficient.
For example, consider the number 100, whose divisors are these numbers:
1, 2, 4, 5, 10, 20, 25, 50, 100.
When all possible divisors up to n are tested, some divisors will be discovered twice. To observe this, consider the list of divisor pairs of 100:
1 × 100, 2 × 50, 4 × 25, 5 × 20, 10 × 10, 20 × 5, 25 × 4, 50 × 2, 100 × 1.
Notice that products past 10 × 10 are the reverse of products that appeared earlier. For example, 5 × 20 and 20 × 5 are the reverse of each other. Note further that of the two divisors, 5 ≤ √100 = 10 and 20 ≥ √100 = 10. This observation generalizes to all n: all divisor pairs of n contain a divisor less than or equal to √n, so the algorithm need only search for divisors less than or equal to √n to guarantee detection of all divisor pairs.
Also notice that 2 is a prime dividing 100, which immediately proves that 100 is not prime. Every positive integer except 1 is divisible by at least one prime number by the fundamental theorem of arithmetic. Therefore the algorithm need only search for prime divisors less than or equal to √n.
For another example, consider how this algorithm determines the primality of 17. One has ⌊√17⌋ = 4, and the only primes ≤ 4 are 2 and 3. Neither divides 17, proving that 17 is prime. For a last example, consider 221. One has ⌊√221⌋ = 14, and the primes ≤ 14 are 2, 3, 5, 7, 11, and 13. Upon checking each, one discovers that 221 = 13 × 17, proving that 221 is not prime.
In cases where it is not feasible to compute the list of primes ≤ √n, it is also possible to simply (and slowly) check all numbers between 2 and √n for divisors. A rather simple optimization is to test divisibility by 2 and by just the odd numbers between 3 and √n, since divisibility by an even number implies divisibility by 2.
This method can be improved further. Observe that all primes greater than 3 are of the form 6k + i for a nonnegative integer k and i ∈ {1, 5}. Indeed, every integer is of the form 6k + i for a positive integer k and i ∈ {−1, 0, 1, 2, 3, 4}. Since 2 divides 6k, 6k + 2, and 6k + 4, and 3 divides 6k and 6k + 3, the only possible remainders mod 6 for a prime greater than 3 are 1 and 5. So, |
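The trial-division procedure with the 6k ± 1 optimization can be sketched as follows (an illustrative Python version, not from the article):

```python
def is_prime(n: int) -> bool:
    """Trial division with the 6k +/- 1 optimization: after handling 2 and 3,
    only candidate divisors of the form 6k - 1 and 6k + 1 are tried."""
    if n <= 3:
        return n >= 2
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5                    # 5, 7, 11, 13, ... are the 6k +/- 1 candidates
    while i * i <= n:        # only divisors up to sqrt(n) are needed
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

# The worked examples from the text:
print(is_prime(17), is_prime(221))  # prints True False
```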
https://en.wikipedia.org/wiki/JBuilder | JBuilder is a discontinued integrated development environment (IDE) for the programming language Java from Embarcadero Technologies. Originally developed by Borland, JBuilder was spun off with CodeGear which was eventually purchased by Embarcadero Technologies in 2008.
Oracle had based the first versions of JDeveloper on code from JBuilder licensed from Borland, but it has since been rewritten from scratch.
Versions
JBuilder 1 through 3 are based on the Delphi IDE. JBuilder 3.5 through 2006 are based on PrimeTime, an all-Java IDE framework. JBuilder 2007 "Peloton" is the first JBuilder release based on the Eclipse IDE framework.
See also
Comparison of integrated development environments
References
External links
History of some JBuilder versions
CodeGear software
Java development tools
Integrated development environments
Cross-platform software |
https://en.wikipedia.org/wiki/Bias%20%28statistics%29 | Statistical bias, in the mathematical field of statistics, is a systematic tendency in which the methods used to gather data and generate statistics present an inaccurate, skewed or biased depiction of reality. Statistical bias exists in numerous stages of the data collection and analysis process, including: the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data. Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias has been argued to be closely linked to issues of statistical validity.
Statistical bias can have significant real world implications as data is used to inform decision making across a wide variety of processes in society. Data is used to inform lawmaking, industry regulation, corporate marketing and distribution tactics, and institutional policies in organizations and workplaces. Therefore, there can be significant implications if statistical bias is not accounted for and controlled. For example, if a pharmaceutical company wishes to explore the effect of a medication on the common cold but the data sample only includes men, any conclusions made from that data will be biased towards how the medication affects men rather than people in general. That means the information would be incomplete and not useful for deciding if the medication is ready for release in the general public. In this scenario, the bias can be addressed by broadening the sample. This sampling error is only one of the ways in which data can be biased.
Bias can be differentiated from other statistical mistakes such as errors of accuracy (instrument failure/inadequacy), lack of data, or mistakes in transcription (typos). Bias implies that the data selection may have been skewed by the collection criteria. Other forms of human-based bias emerge in data collection as well, such as response bias, in which participants give inaccurate responses to a question. Bias does not preclude the existence of any other mistakes. One may have a poorly designed sample, an inaccurate measurement device, and typos in recording data simultaneously. Ideally, all factors are controlled and accounted for.
It is also useful to recognize that the term "error" refers specifically to the outcome rather than the process (errors of rejection or acceptance of the hypothesis being tested), and is distinct from the phenomenon of random errors. The terms flaw or mistake are recommended to differentiate procedural errors from these specifically defined outcome-based terms.
Bias of an estimator
Statistical bias is a feature of a statistical technique or of its results whereby the expected value of the results differs from the true underlying quantitative parameter being estimated. The bias of an estimator of a parameter should not be confus |
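As an illustration of estimator bias (a simulation sketch under assumed Gaussian data; all names are ours): the maximum-likelihood variance estimator, which divides by n, systematically underestimates the true variance, while Bessel's correction (dividing by n − 1) removes the bias.

```python
import random

def var_biased(xs):
    """Maximum-likelihood estimator: divides by n; expected value (n-1)/n * sigma^2."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_unbiased(xs):
    """Bessel-corrected estimator: divides by n - 1; expected value sigma^2."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(0)               # fixed seed so the run is reproducible
n, trials = 5, 20000         # many small samples from N(0, 1), true variance 1
b = u = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    b += var_biased(xs)
    u += var_unbiased(xs)
b /= trials
u /= trials
# b averages near (n-1)/n = 0.8, u averages near the true value 1.0
```

Averaging over many samples makes the systematic (not random) nature of the discrepancy visible: the biased estimator's shortfall does not shrink as the number of trials grows.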
https://en.wikipedia.org/wiki/David%20H.%20Bailey%20%28mathematician%29 | David Harold Bailey (born 14 August 1948) is a mathematician and computer scientist. He received his B.S. in mathematics from Brigham Young University in 1972 and his Ph.D. in mathematics from Stanford University in 1976. He worked for 14 years as a computer scientist at NASA Ames Research Center, and then from 1998 to 2013 as a Senior Scientist at the Lawrence Berkeley National Laboratory. He is now retired from the Berkeley Lab.
Bailey is perhaps best known as a co-author (with Peter Borwein and Simon Plouffe) of a 1997 paper that presented a new formula for π (pi), which had been discovered by Plouffe in 1995. This Bailey–Borwein–Plouffe formula permits one to calculate binary or hexadecimal digits of pi beginning at an arbitrary position, by means of a simple algorithm. Subsequently, Bailey and Richard Crandall showed that the existence of this and similar formulas has implications for the long-standing question of "normality"—whether and why the digits of certain mathematical constants (including pi) appear "random" in a particular sense.
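The BBP series itself is straightforward to evaluate; the sketch below simply sums it with exact rationals (arbitrary-position digit extraction additionally relies on modular exponentiation, which this sketch omits):

```python
from fractions import Fraction
import math

def bbp_pi(terms: int) -> Fraction:
    """Partial sum of the Bailey-Borwein-Plouffe series for pi:
    pi = sum over k of 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))."""
    s = Fraction(0)
    for k in range(terms):
        s += Fraction(1, 16 ** k) * (
            Fraction(4, 8 * k + 1) - Fraction(2, 8 * k + 4)
            - Fraction(1, 8 * k + 5) - Fraction(1, 8 * k + 6)
        )
    return s

# Ten terms already agree with math.pi to roughly 12 decimal places.
print(float(bbp_pi(10)))
```

The base-16 structure of each term is what makes hexadecimal digits of pi at an arbitrary position computable without computing the preceding ones.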
Bailey is a long-time collaborator with the late Jonathan Borwein (Peter's brother). They co-authored five books and over 80 technical papers on experimental mathematics.
Bailey also does research in numerical analysis and parallel computing. He has published studies on the fast Fourier transform (FFT), high-precision arithmetic, and the PSLQ algorithm (used for integer relation detection). He is a co-author of the NAS Benchmarks, which are used to assess and analyze the performance of parallel scientific computers. A "4-step" method of calculating the FFT is widely known as Bailey's FFT algorithm (Bailey himself credits it to W. M. Gentleman and G. Sande).
He has also published articles in the area of mathematical finance, including a 2014 paper "Pseudo-mathematics and financial charlatanism," which emphasizes the dangers of statistical overfitting and other abuses of mathematics in the financial field.
In 1993, Bailey received the Sidney Fernbach award from the IEEE Computer Society, as well as the Chauvenet Prize and the Hasse Prize from the Mathematical Association of America. In 2008 he was a co-recipient of the Gordon Bell Prize from the Association for Computing Machinery. In 2017 he was a co-recipient of the Levi L. Conant Prize from the American Mathematical Society.
Bailey is a member of the Church of Jesus Christ of Latter-day Saints. He has positioned himself as an advocate of the teaching of science, arguing that accepting the conclusions of modern science is not incompatible with a religious view.
Selected works
with Peter B. Borwein and Simon Plouffe:
with Michał Misiurewicz:
with Jonathan Borwein, Marcos Lopez de Prado and Qiji Jim Zhu:
with Jonathan Borwein: Mathematics by experiment: Plausible reasoning in the 21st century, A. K. Peters 2004, 2008 (with accompanying CD Experiments in Mathematics, 2006)
with Jonathan Borwein, Neil Calkin, Roland Girgensohn, D. Russell Luke, V |
https://en.wikipedia.org/wiki/SBus | SBus is a computer bus system that was used in most SPARC-based computers (including all SPARCstations) from Sun Microsystems and others during the 1990s. It was introduced by Sun in 1989 to be a high-speed bus counterpart to their high-speed SPARC processors, replacing the earlier (and by this time, outdated) VMEbus used in their Motorola 68020- and 68030-based systems and early SPARC boxes. When Sun moved to open the SPARC definition in the early 1990s, SBus was likewise standardized and became IEEE-1496. In 1997 Sun started to migrate away from SBus to the Peripheral Component Interconnect (PCI) bus, and today SBus is no longer used.
The industry's first third-party SBus cards were announced in 1989 by Antares Microsystems; these were a 10BASE2 Ethernet controller, a SCSI-SNS host adapter, a parallel port, and an 8-channel serial controller.
The specification was published by Edward H. Frank and James D. Lyle.
A technical guide to the bus was published in 1992 in book form by Lyle, who founded Troubador Technologies. Sun also published a set of books as a "developer's kit" to encourage third-party products.
At the peak of the market over 250 manufacturers were listed in the SBus Product Directory, which was renamed to the SPARC Product Directory in 1996.
SBus is in many ways a "clean" design. It was targeted only to be used with SPARC processors, so most cross-platform issues were not a consideration. SBus is based on a big-endian 32-bit address and data bus, can run at speeds ranging from 16.67 MHz to 25 MHz, and is capable of transferring up to 100 MB/s. Devices are each mapped onto a 28-bit address space (256 MB). Only eight masters are supported, although there can be an unlimited number of slaves.
When the 64-bit UltraSPARC was introduced, SBus was modified to support extended transfers of a 64-bit doubleword per cycle to produce a 200 MB/s 64-bit bus. This variant of the SBus architecture used the same form factor and was backward-compatible with existing devices, as extended transfers are an optional feature.
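The quoted peak rates follow directly from clock speed times bus width, assuming one word transferred per cycle. A small sketch of that arithmetic (the function is illustrative, not from the source):

```python
def peak_mb_per_s(clock_mhz: float, width_bits: int) -> float:
    """Peak transfer rate assuming one word moved per clock cycle, in MB/s."""
    return clock_mhz * width_bits / 8  # MHz * bytes-per-word = MB/s

print(peak_mb_per_s(25, 32))  # 100.0 -> the basic 32-bit SBus figure
print(peak_mb_per_s(25, 64))  # 200.0 -> with extended 64-bit transfers
```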
SBus cards had a very compact form factor for the time. A single-width card was designed to be mounted parallel to the motherboard, which allowed for three expansion slots in the slim "pizza box" enclosure of the SPARCstation 1. The design also allowed for double- or triple-width cards that take up two or three slots, as well as double-height cards (two 3x5 inch boards mounted in a "sandwich" configuration).
SBus was originally announced as both a system bus and a peripheral interconnect that allowed input and output devices relatively low latency access to memory. However, memory and central processing unit (CPU) speeds soon outpaced I/O performance. Within a year some Sun systems used MBus, another interconnection standard, as a CPU-to-memory bus. The SBus served as an input/output bus for the rest of its lifetime.
See also
List of device bandwidths
References
External links
SBus Specification at Bi |
https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick%20algorithm | In computer science, the Aho–Corasick algorithm is a string-searching algorithm invented by Alfred V. Aho and Margaret J. Corasick in 1975. It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the "dictionary") within an input text. It matches all strings simultaneously. The complexity of the algorithm is linear in the length of the strings plus the length of the searched text plus the number of output matches. Note that because all matches are found, there can be a quadratic number of matches if every substring matches (e.g. dictionary = a, aa, aaa, aaaa and input string is aaaa).
Informally, the algorithm constructs a finite-state machine that resembles a trie with additional links between the various internal nodes. These extra internal links allow fast transitions between failed string matches (e.g. a search for in a trie that does not contain , but contains , and thus would fail at the node prefixed by ), to other branches of the trie that share a common prefix (e.g., in the previous case, a branch for might be the best lateral transition). This allows the automaton to transition between string matches without the need for backtracking.
When the string dictionary is known in advance (e.g. a computer virus database), the construction of the automaton can be performed once off-line and the compiled automaton stored for later use. In this case, its run time is linear in the length of the input plus the number of matched entries.
The Aho–Corasick string-matching algorithm formed the basis of the original Unix command fgrep.
Example
In this example, we will consider a dictionary consisting of the following words: {a, ab, bab, bc, bca, c, caa}.
The graph below is the Aho–Corasick data structure constructed from the specified dictionary, with each row in the table representing a node in the trie, with the column path indicating the (unique) sequence of characters from the root to the node.
The data structure has one node for every prefix of every string in the dictionary. So if (bca) is in the dictionary, then there will be nodes for (bca), (bc), (b), and (). If a node is in the dictionary then it is a blue node. Otherwise it is a grey node.
There is a black directed "child" arc from each node to a node whose name is found by appending one character. So there is a black arc from (bc) to (bca).
There is a blue directed "suffix" arc from each node to the node that is the longest possible strict suffix of it in the graph. For example, for node (caa), its strict suffixes are (aa) and (a) and (). The longest of these that exists in the graph is (a). So there is a blue arc from (caa) to (a). The blue arcs can be computed in linear time by performing a breadth-first search [potential suffix node will always be at lower level] starting from the root. The target for the blue arc of a visited node can be found by following its parent's blue arc to its longest suffix node and searching for a child of the suffi |
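The construction described above — a trie of dictionary prefixes plus suffix ("fail") links computed by breadth-first search — can be sketched compactly. This is a minimal illustration using the article's own dictionary, not a production implementation:

```python
from collections import deque

def build_automaton(words):
    """Trie plus BFS-computed suffix links: a minimal Aho-Corasick sketch."""
    goto, fail, out = [{}], [0], [[]]          # node 0 is the root
    for w in words:                            # 1. build the trie
        node = 0
        for ch in w:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append([])
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].append(w)                    # a dictionary word ends here
    queue = deque(goto[0].values())            # 2. BFS for suffix links
    while queue:
        node = queue.popleft()
        for ch, child in goto[node].items():
            queue.append(child)
            f = fail[node]                     # follow parent's suffix chain
            while f and ch not in goto[f]:
                f = fail[f]
            fail[child] = goto[f].get(ch, 0)   # longest strict suffix in trie
            out[child] += out[fail[child]]     # inherit matches via suffixes
    return goto, fail, out

def search(text, automaton):
    goto, fail, out = automaton
    node, matches = 0, []
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:   # fall back on failure links
            node = fail[node]
        node = goto[node].get(ch, 0)
        matches.extend((i - len(w) + 1, w) for w in out[node])
    return matches

aut = build_automaton(["a", "ab", "bab", "bc", "bca", "c", "caa"])
print(search("abccab", aut))  # every (start index, word) match, in scan order
```

Note that the scan never backtracks in the input: each character either advances along a trie edge or follows failure links, which is where the linear-time bound comes from.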
https://en.wikipedia.org/wiki/William%20Norris%20%28CEO%29 | William Charles Norris (July 14, 1911, near Red Cloud, Nebraska – August 21, 2006) was an American business executive. He was the CEO of Control Data Corporation, at one time one of the most powerful and respected computer companies in the world. He is famous for taking on IBM in a head-on fight and winning, as well as being a social activist who used Control Data's expansion in the late 1960s to bring jobs and training to inner cities and disadvantaged communities.
Early life
Norris was born and raised on a cattle farm in Nebraska, attending a tiny school in Inavale, Nebraska, and operating a ham radio. He attained a degree in electrical engineering from the University of Nebraska in 1932. He spent two years on his family's farm after graduation, helping weather the Great Depression and a significant drought in the Midwest by risking the use of Russian thistle as cattle feed.
Military career
Norris served in the United States Navy as a codebreaker, attaining the rank of lieutenant commander while working at the Navy's Nebraska Avenue Complex in Washington, D.C. His technical accomplishments included advancing methods for identifying U-boats.
Professional career
Before entering military service, Norris sold X-Ray equipment for the Westinghouse Corporation in Chicago, then worked for the Bureau of Ordnance as a civil servant engineer until signing with the Naval Reserve.
Norris entered the computer business just after World War II, when along with Howard Engstrom and other US Navy cryptographers he formed Engineering Research Associates (ERA) in January 1946 to build scientific computers. He hired forty of the members of his codebreaking team and set up shop in a glider factory with Northwestern Aeronautical, a major government contractor. ERA was fairly successful, but in the early 1950s a lengthy series of government probes into "Navy funding" drained the company and it was sold to Remington Rand. ERA operated within Remington Rand as a separate division for a time, but during the later merger with Sperry Corporation that formed Sperry Rand, the division was merged with UNIVAC. This resulted in most of ERA's work being dropped. As a result, several employees left and set up Control Data, unanimously selecting Norris as president.
Control Data started by selling magnetic drum memory systems to other computer manufacturers, but introduced their own mainframe, the CDC 1604, in 1958. Designed primarily by Seymour Cray, the company soon followed the 1604 with a series of increasingly powerful machines. In 1965 they introduced the CDC 6600, the first supercomputer, and CDC was suddenly in the leadership position with a machine ten times faster than anything on the market.
This was a significant threat to IBM's business, and they quickly started a project of their own to take back the performance crown from CDC. In the meantime they announced an advanced version of the IBM System/360, called the ACS-1, that was to be faster than the 6600. |
https://en.wikipedia.org/wiki/SSDP | SSDP may refer to:
Simple Service Discovery Protocol, a networking protocol
Students for Sensible Drug Policy, an international non-profit advocacy and education organization based in Washington D.C. |
https://en.wikipedia.org/wiki/Simple%20Service%20Discovery%20Protocol | The Simple Service Discovery Protocol (SSDP) is a network protocol based on the Internet protocol suite for advertisement and discovery of network services and presence information. It accomplishes this without assistance of server-based configuration mechanisms, such as Dynamic Host Configuration Protocol (DHCP) or Domain Name System (DNS), and without special static configuration of a network host. SSDP is the basis of the discovery protocol of Universal Plug and Play (UPnP) and is intended for use in residential or small office environments. It was formally described in an IETF Internet Draft by Microsoft and Hewlett-Packard in 1999. Although the IETF proposal has since expired (April, 2000), SSDP was incorporated into the UPnP protocol stack, and a description of the final implementation is included in UPnP standards documents.
Protocol transport and addressing
SSDP is a text-based protocol based on HTTPU, which uses UDP as the underlying transport protocol. Services are announced by the hosting system with multicast addressing to a specifically designated IP multicast address at UDP port number 1900. In IPv4, the multicast address is 239.255.255.250, and SSDP over IPv6 uses the address set ff0X::c for all scope ranges indicated by X.
This results in the following well-known practical multicast addresses for SSDP:
239.255.255.250:1900 (IPv4 site-local address)
[FF02::C]:1900 (IPv6 link-local)
[FF05::C]:1900 (IPv6 site-local)
Additionally, applications may use the source-specific multicast addresses derived from the local IPv6 routing prefix, with group ID C (decimal 12).
SSDP uses the HTTP method NOTIFY to announce the establishment or withdrawal of services (presence) information to the multicast group. A client that wishes to discover available services on a network uses the method M-SEARCH. Responses to such search requests are sent via unicast addressing to the originating address and port number of the multicast request.
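An M-SEARCH request is plain HTTP-over-UDP text. The sketch below builds such a discovery datagram (the function name and defaults are illustrative, not from the source); actually sending it and reading unicast replies is left as the commented-out final step.

```python
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900  # IPv4 SSDP multicast group

def build_msearch(st: str = "ssdp:all", mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH discovery datagram (HTTPU syntax)."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',   # required header, value must be quoted
        f"MX: {mx}",              # max seconds a device may delay its reply
        f"ST: {st}",              # search target: ssdp:all = every service
        "", "",                   # blank line terminates the header block
    ]
    return "\r\n".join(lines).encode("ascii")

# To actually discover devices, send the datagram and read unicast replies:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
```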
Microsoft's IPv6 SSDP implementations in Windows Media Player and Server use the link-local scope address. Microsoft uses port number 2869 for event notification and event subscriptions. However, early implementations of SSDP also used port 5000 for this service.
DDoS attack
In 2014 it was discovered that SSDP was being used in DDoS attacks known as an SSDP reflection attack with amplification. Many devices, including some residential routers, have a vulnerability in the UPnP software that allows an attacker to get replies from port number 1900 to a destination address of their choice. With a botnet of thousands of devices, the attackers can generate sufficient packet rates and occupy bandwidth to saturate links, causing the denial of services. The network company Cloudflare has described this attack as the "Stupidly Simple DDoS Protocol".
Firefox vulnerability
Firefox for Android prior to version 79 did not properly validate the schema of the URL received in SSDP and were vulnerable to remote code execution. An attacker on the same network could create a malicious server pretendi |
https://en.wikipedia.org/wiki/IDEN | Integrated Digital Enhanced Network (iDEN) is a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. It was called the first mobile social network by many technology industry analysts. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time-division multiple access (TDMA).
History
The iDEN project originally began as MIRS (Motorola Integrated Radio System) in early 1991. The project was a software lab experiment focused on the utilization of discontiguous spectrum for GSM wireless. GSM systems typically require 24 contiguous voice channels, but the original MIRS software platform dynamically selected fragmented channels in the radio frequency (RF) spectrum in such a way that a GSM telecom switch could commence a phone call the same as it would in the contiguous channel scenario.
Operating frequencies
iDEN is designed and licensed to operate on individual frequencies that may not be contiguous. iDEN operates on 25 kHz channels, but only occupies 20 kHz in order to provide interference protection via guard bands. By comparison, TDMA Cellular (Digital AMPS) is licensed in blocks of 30 kHz channels, but each emission occupies 40 kHz, and is capable of serving the same number of subscribers per channel as iDEN. iDEN uses frequency-division duplexing to transmit and receive signals separately, with transmit and receive bands separated by 39 MHz, 45 MHz, or 48 MHz depending on the frequency band being used.
iDEN supports either three or six interconnect users (phone users) per channel, and six dispatch users (push-to-talk users) per channel, using time-division multiple access. The transmit and receive time slots assigned to each user are deliberately offset in time so that a single user never needs to transmit and receive at the same time. This eliminates the need for a duplexer at the mobile end, since time-division duplexing of RF section usage can be performed.
Hardware
The first commercial iDEN handset was Motorola's L3000, which was released in 1994. Lingo, which stands for Link People on the Go, was used as a logo for its earlier handsets. Most modern iDEN handsets use SIM cards, similar to, but incompatible with GSM handsets' SIM cards. Early iDEN models such as the i1000plus stored all subscriber information inside the handset itself, requiring the data to be downloaded and transferred should the subscriber want to switch handsets. Newer handsets using SIM technology make upgrading or changing handsets as easy as swapping the SIM card. Four different sized SIM cards exist, "Endeavor" SIMs are used only with the i2000 without data, "Condor" SIMs are used with the two-digit models (i95cl, for example) using a SIM with less memory than the three-digit models (i730, i860), "Falcon" SIMs are used in the three-digit phones, (i530, i710) and will read the smaller SIM |
https://en.wikipedia.org/wiki/MSN | MSN (meaning Microsoft Network) is a web portal and related collection of Internet services and apps for Windows and mobile devices, provided by Microsoft and launched on August 24, 1995, alongside the release of Windows 95.
The Microsoft Network was initially a subscription-based dial-up online service that later became an Internet service provider named MSN Dial-up. At the same time, the company launched a new web portal named Microsoft Internet Start and set it as the first default home page of Internet Explorer, its web browser. In 1998, Microsoft renamed and moved this web portal to the domain name www.msn.com, where it has remained.
In addition to its original MSN Dial-up service, Microsoft has used the 'MSN' brand name for a wide variety of products and services over the years, notably Hotmail (later Outlook.com), Messenger (which was once synonymous with 'MSN' in Internet slang and has now been replaced by Skype), and its web search engine, which is now Bing, and several other rebranded and discontinued services.
The current website and suite of apps offered by MSN was first introduced by Microsoft in 2014 as part of a complete redesign and relaunch. MSN is based in the United States and offers international versions of its portal for dozens of countries around the world.
History
Microsoft Internet Start
From 1995 to 1998, the MSN.com domain was used by Microsoft primarily to promote MSN as an online service and Internet service provider. At the time, MSN.com also offered a custom start page and an Internet tutorial, but Microsoft's major web portal was known as "Microsoft Internet Start", and was located at home.microsoft.com.
Internet Start served as the default home page for Internet Explorer and offered basic information such as news, weather, sports, stocks, entertainment reports, links to other websites on the Internet, articles by Microsoft staff members, and software updates for Windows. Microsoft's original news website, https://msnbc.com (now NBCNews.com), which launched in 1996, was also tied closely to the Internet Start portal.
MSN.com
In 1998, the largely underutilized 'MSN.com' domain name was combined with Microsoft Internet Start and reinvented as both a web portal and as the brand for a family of sites produced inside Microsoft's Interactive Media Group. The new website put MSN in direct competition with sites such as Yahoo!, Excite, and Go Network. Because the new format opened up MSN's content to the world for free, the Internet service provider and subscription service were renamed to MSN Internet Access at that time. (That service eventually became known as MSN Dial-up.)
The relaunched MSN.com contained a whole family of sites, including original content, channels that were carried over from 'web shows' that were part of Microsoft's MSN 2.0 experiment with its Internet service provider in 1996–97, and new features that were rapidly added. MSN.com became the successor to the default Internet Explorer start |
https://en.wikipedia.org/wiki/World%20Series%20Cricket | World Series Cricket (WSC) was a commercial professional cricket competition staged between 1977 and 1979 which was organised by Kerry Packer and his Australian television network, Nine Network. WSC ran in commercial competition to established international cricket. World Series Cricket drastically changed the nature of cricket, and its influence continues to be felt today.
Three main factors caused the formation of WSC: a widespread view that players were not paid enough to make a living from cricket or to reflect their market value; a sense that, following the development of colour television and increased viewer audiences for sports events, the established cricket boards were not realising the commercial potential of cricket; and Packer's wish to secure the exclusive broadcasting rights to Australian cricket, then held by the non-commercial, government-owned Australian Broadcasting Commission (ABC), in order to realise and capitalise on that potential.
After the Australian Cricket Board (ACB) refused to accept Channel Nine's bid to gain exclusive television rights to Australia's Test matches in 1976, Packer set up his own series by secretly signing agreements with leading Australian, English, Pakistani, South African and West Indian players, most notably England captain Tony Greig, West Indies captain Clive Lloyd, Australian captain Greg Chappell, future Pakistani captain Imran Khan and former Australian captain Ian Chappell. Packer was aided by businessmen John Cornell and Austin Robertson, both of whom were involved with the initial setup and administration of the series.
Australian captain Ian Chappell summed up the quality of World Series Cricket by saying it was the toughest cricket he ever played, as all the best players in the world were involved.
Kerry Packer and the Australian television industry
In the mid-1970s, the Australian television industry was at a crossroads. Since its inception in 1956, commercial television in Australia had developed a reliance on imported programmes, particularly from the United States, as buying them was cheaper than commissioning Australian productions. Agitation for more Australian-made programming gained impetus from the "TV: Make it Australian" campaign in 1970. This led to a government-imposed quota system in 1973. The advent of colour transmissions in 1975 markedly improved sport as a television spectacle and, importantly, Australian sport counted as local content. However, sports administrators perceived live telecasts to have an adverse effect on attendance. The correlation between sports, corporate sponsorship, and television exposure was not evident to Australian sports administrators at the time.
After the death of his father Sir Frank in 1974, Kerry Packer had assumed control of Channel Nine, one of the many media interests owned by the family's company Consolidated Press Holdings (CPH). With Nine's ratings languishing, Packer sought to turn the network around |
https://en.wikipedia.org/wiki/Network%20model | In computing, the network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, is not restricted to being a hierarchy or lattice.
The network model was adopted by the CODASYL Data Base Task Group in 1969 and underwent a major update in 1971. It is sometimes known as the CODASYL model for this reason. A number of network database systems became popular on mainframe and minicomputers through the 1970s before being widely replaced by relational databases in the 1980s.
Overview
While the hierarchical database model structures data as a tree of records, with each record having one parent record and many children, the network model allows each record to have multiple parent and child records, forming a generalized graph structure. This property applies at two levels: the schema is a generalized graph of record types connected by relationship types (called "set types" in CODASYL), and the database itself is a generalized graph of record occurrences connected by relationships (CODASYL "sets"). Cycles are permitted at both levels.
The chief argument in favour of the network model, in comparison to the hierarchical model, was that it allowed a more natural modeling of relationships between entities. Although the model was widely implemented and used, it failed to become dominant for two main reasons. Firstly, IBM chose to stick to the hierarchical model with semi-network extensions in their established products such as IMS and DL/I. Secondly, it was eventually displaced by the relational model, which offered a higher-level, more declarative interface. Until the early 1980s the performance benefits of the low-level navigational interfaces offered by hierarchical and network databases were persuasive for many large-scale applications, but as hardware became faster, the extra productivity and flexibility of the relational model led to the gradual obsolescence of the network model in corporate enterprise usage.
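The defining structural difference — a record having multiple parents rather than one — can be sketched in a few lines. This is a toy illustration of the idea only (the record names and `connect` helper are hypothetical, not part of any CODASYL implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    name: str
    # A record may belong to many sets, i.e. have multiple "parents" --
    # the generalization over the hierarchical model's single-parent tree.
    owners: list = field(default_factory=list)

def connect(owner: Record, member: Record) -> None:
    """Link a member record into a set owned by `owner` (a CODASYL 'set')."""
    member.owners.append(owner)

# Hypothetical schema: an order record is owned by both a customer set
# and a product set, which a strict hierarchy could not express directly.
customer = Record("Customer: ACME")
product = Record("Product: Widget")
order = Record("Order #1")
connect(customer, order)
connect(product, order)
print([o.name for o in order.owners])  # two parents -> a graph, not a tree
```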
History
The network model's original inventor was Charles Bachman, and it was developed into a standard specification published in 1969 by the Conference on Data Systems Languages (CODASYL) Consortium. This was followed by a second publication in 1971, which became the basis for most implementations. Subsequent work continued into the early 1980s, culminating in an ISO specification, but this had little influence on products.
Bachman's influence is recognized in the term Bachman diagram, a diagrammatic notation that represents a database schema expressed using the network model. In a Bachman diagram, named rectangles represent record types, and arrows represent one-to-many relationship types between records (CODASYL set types).
Database systems
Some well-known database systems that use the network model include:
IMAGE for HP 3000
Integrated Data Store (IDS)
IDMS (Integ |
https://en.wikipedia.org/wiki/Standard%20data%20model | A standard data model or industry standard data model (ISDM) is a data model that is widely applied in some industry, and shared amongst competitors to some degree. They are often defined by standards bodies, database vendors or operating system vendors.
When in use, they enable easier and faster information sharing because heterogeneous organizations have a standard vocabulary and pre-negotiated semantics, format, and quality standards for exchanged data. The standardization affects software architecture as solutions that vary from the standard may cause data sharing issues and problems if data is out of compliance with the standard.
The more effective standard models have developed in the banking, insurance, pharmaceutical and automotive industries, to reflect the stringent standards applied to customer information gathering, customer privacy, consumer safety, or just in time manufacturing.
Typically these use the popular relational model of database management, but some use the hierarchical model, especially those used in manufacturing or mandated by governments, e.g., the DIN codes specified by Germany. While the format of the standard may have implementation trade-offs, the underlying goal of these standards is to make sharing of data easier.
The most complex data models known are in military use, and consortia such as NATO tend to require strict standards of their members' equipment and supply databases. However, they typically do not share these with non-NATO competitors, and so calling these 'standard' in the same sense as commercial software is probably not very appropriate.
Example Standard Data Models
ISO 10303 CAE Data Exchange Standard - includes its own data modelling language, EXPRESS
ISO 15926 Process Plants including Oil and Gas facilities Life-Cycle data
IDEAS Group Foundation Ontology agreed by defence departments of Australia, Canada, France, Sweden, UK and USA
EN12896 CEN Reference Data Model for Public Transport, covering public transport scheduling, fare management, operations and passenger information
Common Education Data Standards (CEDS) is a data dictionary standard model sponsored by the U.S. government that is used widely in the United States education system
SIF is an interoperability specification used as a standard data model in Australia, the UK, and the US.
References
External links
Professional Petroleum Data Management Association Lite Data Model
Energistics Energy Standards Resource Center
Pipeline Open Data Standard
Ed-Fi Data Standard
Data modeling |