https://en.wikipedia.org/wiki/Altix
Altix is a line of server computers and supercomputers produced by Silicon Graphics (and successor company Silicon Graphics International), based on Intel processors. It succeeded the MIPS/IRIX-based Origin 3000 servers. History The line was first announced on January 7, 2003, with the Altix 3000 series, based on Intel Itanium 2 processors and SGI's NUMAlink processor interconnect. At product introduction, the system supported up to 64 processors running Linux as a single system image and shipped with a Linux distribution called SGI Advanced Linux Environment, which was compatible with Red Hat Advanced Server. By August 2003, many SGI Altix customers were running Linux on 128- and 256-processor SGI Altix systems. SGI officially announced 256-processor support within a single system image of Linux on March 10, 2004, using a 2.4-based Linux kernel. The SGI Advanced Linux Environment was eventually dropped after support using a standard, unmodified SUSE Linux Enterprise Server (SLES) distribution for SGI Altix was provided with SLES 8 and SLES 9. Later, SGI Altix 512-processor systems were officially supported using an unmodified, standard Linux distribution with the launch of SLES 9 SP1. Besides full support of SGI Altix on SUSE Linux Enterprise Server, a standard and unmodified Red Hat Enterprise Linux was also fully supported starting with SGI Altix 3700 Bx2 with RHEL 4 and RHEL 5 with system processor limits defined by Red Hat for those releases. On November 14, 2005, SGI introduced the Altix 4000 series based on the Itanium 2. The Altix 3000 and 4000 are distributed shared memory multiprocessors. SGI later officially supported 1024-processor systems on an unmodified, standard Linux distribution with the launch of SLES 10 in July 2006. SGI Altix 4700 was also officially supported by Red Hat with RHEL 4 and RHEL 5 — maximum processor limits were as defined by Red Hat for its RHEL releases. 
The Altix brand was used for systems based on multi-core Intel Xeon processors. These include the Altix XE rackmount servers, Altix ICE blade servers and Altix UV supercomputers. NASA's Columbia supercomputer, installed in 2004 and decommissioned in 2013, was a 10,240-microprocessor cluster of twenty Altix 3000 systems, each with 512 microprocessors, interconnected with InfiniBand. Altix 3000 The Altix 3000 is the first generation of Altix systems. It was succeeded by the Altix 4000 in 2005, and the last model was discontinued on December 31, 2006. The Altix 330 is an entry-level server. Unlike the high-end models, the Altix 330 is not "brick" based, but is instead based on 1U-high compute modules mounted in a rack and connected with NUMAlink. A single system may contain 1 to 16 Itanium 2 processors and 2 to 128 GB of memory. The Altix 1330 is a cluster of Altix 330 systems. The systems are networked with Gigabit Ethernet or 4X InfiniBand. The Altix 350 is a mid-range model that supports up to 32 Itanium 2 processors. Introduced in 2005, it runs Linu
https://en.wikipedia.org/wiki/Gremlin%20Interactive
Gremlin Graphics Software Limited, later Gremlin Interactive Limited and ultimately Infogrames Studios Limited, was a British software house based in Sheffield, working mostly in the home computer market. Like many software houses established in the 1980s, their primary market was the 8-bit range of computers such as the ZX Spectrum, Amstrad CPC, MSX, Commodore 16 and Commodore 64. The company was acquired by French video game publisher Infogrames in 1999 and was renamed Infogrames Studios in 2000. Infogrames Studios closed down in 2003. History The company, originally a computer store called Just Micro, was established as a software house in 1984 under the name Gremlin Graphics Software Ltd by Ian Stewart and Kevin Norburn, with US Gold's Geoff Brown owning 75% of the company until mid-1989. Gremlin's early success was based on games such as Wanted: Monty Mole for the ZX Spectrum and Thing on a Spring for the Commodore 64. In 1994, it was renamed Gremlin Interactive, now concentrating on the 16-bit, PC and console market. Gremlin enjoyed major success with the Zool and Premier Manager series in the early 1990s, and then with Actua Soccer, the first football game in full 3D; other successful games included the Lotus racing series; a futuristic racing game, Motorhead; a stunt car racing game, Fatal Racing (1995); and the 1998 flight simulator Hardwar. Following EA's success with the EA Sports brand, Gremlin also released their own sports video game series, adding Golf, Tennis and Ice Hockey to their Actua Sports series. During this time, they used a motif from the Siegfried Funeral March from Götterdämmerung as introductory music. The company was floated on the stock market to raise funds. In 1997, Gremlin acquired Imagitec Design and DMA Design (creators of Grand Theft Auto and Lemmings). In 1999, they themselves were bought by Infogrames for around £24 million and renamed "Infogrames Sheffield House". Infogrames closed the studio in 2003. 
The building they latterly occupied near Devonshire Green has since been demolished. Infogrames Sheffield House had been due to be renamed "Atari Sheffield House" before the closure. In October 2003, Zoo Digital, the successor company to Gremlin, purchased the company's assets from Infogrames, by then renamed Atari. Following the administration of Zoo Digital (later renamed Zushi Games), Gremlin Interactive's catalogue and name were bought by Ian Stewart's new company Urbanscan. The Gremlin trademarks (including the g Gremlin logo) are now owned by Warner Bros Entertainment. Key staff Gremlin staff have included: Kevin Bulmer – Designer/graphics artist Jon Harrison – Designer/graphics artist Gary Priest – Programmer Bill Allen – Programmer Richard Stevenson – Programmer David Martin – Marketing Director Ben Daglish – Outsourced Musician Ade Carless – Designer/graphics artist Shaun McClure – Graphics artist / Art Resource Manager Antony Crowther ('Ratt') – Designer, programmer Asad Habib – Lead Tester Paul Whitehea
https://en.wikipedia.org/wiki/Default%20gateway
A default gateway is the node in a computer network using the Internet protocol suite that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Role A gateway is a network node that serves as an access point to another network, often involving not only a change of addressing, but also a different networking technology. More narrowly defined, a router merely forwards packets between networks with different network prefixes. The networking software stack of each computer contains a routing table that specifies which interface is used for transmission and which router on the network is responsible for forwarding to a specific set of addresses. If none of these forwarding rules is appropriate for a given destination address, the default gateway is chosen as the router of last resort. The default gateway can be specified by the route command to configure the node's routing table and default route. In a home or small office environment, the default gateway is a device, such as a DSL router or cable router, that connects the local network to the Internet. It serves as the default gateway for all network devices. Enterprise network systems may require many internal network segments. A device wishing to communicate with a host on the public Internet, for example, forwards the packet to the default gateway for its network segment. This router also has a default route configured to a device on an adjacent network, one hop closer to the public network. Examples Single router The following example shows IP addresses that might be used with an office network that consists of six hosts plus a router. The six hosts' addresses are 192.168.4.3, 192.168.4.4, 192.168.4.5, 192.168.4.6, 192.168.4.7 and 192.168.4.8. The router's inside address is 192.168.4.1. The network has a subnet mask of 255.255.255.0 (/24 in CIDR notation). The address range assignable to hosts is from 192.168.4.1 to 192.168.4.254. 
TCP/IP reserves the addresses 192.168.4.0 (the network ID) and 192.168.4.255 (the broadcast address). The office's hosts send packets to addresses within this range directly, by resolving the destination IP address into a MAC address with the Address Resolution Protocol (ARP) and then encapsulating the IP packet into a MAC frame addressed to the destination host. A packet addressed outside of this range, in this example to 192.168.12.3, cannot travel directly to the destination. Instead it must be sent to the default gateway for further routing to its ultimate destination. In this example, the default gateway uses the IP address 192.168.4.1, which is resolved into a MAC address with ARP in the usual way. The destination IP address remains 192.168.12.3, but the next-hop MAC address is that of the gateway, rather than of the ultimate destination. Multi-router In another example, a network with three routers and three hosts is connected to the Internet thro
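The forwarding decision in the single-router example can be sketched in a few lines of Python using the standard ipaddress module. This is an illustrative sketch, not the actual logic of any operating system's network stack; the function name next_hop and the constants are assumptions taken from the example addresses above.

```python
import ipaddress

# Local subnet and default gateway from the single-router example above.
LOCAL_NET = ipaddress.ip_network("192.168.4.0/24")
DEFAULT_GATEWAY = ipaddress.ip_address("192.168.4.1")

def next_hop(destination: str) -> str:
    """Return the IP address whose MAC address the outgoing frame targets."""
    dst = ipaddress.ip_address(destination)
    if dst in LOCAL_NET:
        return str(dst)              # on-link: ARP for the destination itself
    return str(DEFAULT_GATEWAY)      # off-link: ARP for the default gateway

print(next_hop("192.168.4.7"))   # a local host: frame goes to 192.168.4.7
print(next_hop("192.168.12.3"))  # outside the subnet: frame goes to 192.168.4.1
```

Note that in the off-link case the IP destination in the packet is unchanged; only the layer-2 (MAC) destination is the gateway's, exactly as described above.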
https://en.wikipedia.org/wiki/Vulnerability%20%28computing%29
Vulnerabilities are flaws in a computer system that weaken the overall security of the device/system. Vulnerabilities can be weaknesses in either the hardware itself, or the software that runs on the hardware. Vulnerabilities can be exploited by a threat actor, such as an attacker, to cross privilege boundaries (i.e. perform unauthorized actions) within a computer system. To exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerabilities are also known as the attack surface. Vulnerability management is a cyclical practice that varies in theory but contains common processes, which include: discover all assets, prioritize assets, assess or perform a complete vulnerability scan, report on results, remediate vulnerabilities, verify remediation, and repeat. This practice generally refers to software vulnerabilities in computing systems. Agile vulnerability management refers to preventing attacks by identifying all vulnerabilities as quickly as possible. A security risk is often incorrectly classified as a vulnerability. The use of vulnerability with the same meaning as risk can lead to confusion. The risk is the potential of a significant impact resulting from the exploit of a vulnerability. There are also vulnerabilities without risk: for example, when the affected asset has no value. A vulnerability with one or more known instances of working and fully implemented attacks is classified as an exploitable vulnerability—a vulnerability for which an exploit exists. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software, to when access was removed, a security fix was available/deployed, or the attacker was disabled—see zero-day attack. A security bug (security defect) is a narrower concept. 
There are vulnerabilities that are not related to software: hardware, site, and personnel vulnerabilities are examples of vulnerabilities that are not software security bugs. Constructs in programming languages that are difficult to use properly can manifest large numbers of vulnerabilities. Definitions ISO 27005 defines vulnerability as: A weakness of an asset or group of assets that can be exploited by one or more threats, where an asset is anything that has value to the organization, its business operations, and their continuity, including information resources that support the organization's mission IETF RFC 4949 defines vulnerability as: A flaw or weakness in a system's design, implementation, or operation and management that could be exploited to violate the system's security policy The Committee on National Security Systems of the United States of America defined vulnerability in CNSS Instruction No. 4009 dated 26 April 2010 National Information Assurance Glossary: Vulnerability—Weakness in an information system, system security procedures, internal controls, or implementation that could be exploited by a threat sour
https://en.wikipedia.org/wiki/Terminal%20node%20controller
A terminal node controller (TNC) is a device used by amateur radio operators to participate in AX.25 packet radio networks. It is similar in function to the Packet Assembler/Disassemblers used on X.25 networks, with the addition of a modem to convert baseband digital signals to audio tones. The first TNC, the VADCG board, was originally developed by Doug Lockhart, VE7APU, of Vancouver, British Columbia. Amateur Radio TNCs were first developed in 1978 in Canada by the Montreal Amateur Radio Club and the Vancouver Area Digital Communications group. These never gained much popularity because only a bare printed circuit board was made available and builders had to gather up a large number of components. In 1983, the Tucson Amateur Packet Radio (TAPR) association produced complete kits for their TNC-1 design. This was later available as the Heathkit HD-4040. A few years later, the improved TNC-2 became available, and it was licensed to commercial manufacturers such as MFJ. In 1986, the improved "TNC+" was designed to run programs and protocols developed for the original TNC board. TNC+ also included an assembler and a version of Forth (STOIC), which runs on the TNC+ itself, to support developing new programs and protocols. Description A typical model consists of a microprocessor, a modem, and software (in EPROM) that implements the AX.25 protocol and provides a command line interface to the user. (Commonly, this software provides other functionality as well, such as a basic bulletin board system to receive messages while the operator is away.) Because the TNC contains all the intelligence needed to communicate over an AX.25 network, no external computer is required. All of the network's resources can be accessed using a dumb terminal. The TNC connects to the terminal and a radio transceiver. Data from the terminal is formatted into AX.25 packets and modulated into audio signals (in traditional applications) for transmission by the radio. 
Received signals are demodulated, the data unformatted, and the output sent to the terminal for display. In addition to these functions, the TNC manages the radio channel according to guidelines in the AX.25 specification. Early usage was mostly one-to-one communication, either between two people or between a person and an automated bulletin board or e-mail system. Current status Since the late 1990s, most AX.25 usage has shifted to a different one-to-many communication paradigm with the Automatic Packet Reporting System (APRS). The TNCs of the 1980s and 1990s were complete solutions that only needed a radio and an optional dumb terminal. As home computers made their way into ham "shacks," there was a movement toward simpler, cheaper "KISS" (Keep It Simple, Stupid) devices. These have a modem and minimal processing of the AX.25 protocol. Most of the processing is moved to the personal computer. The next logical step in the evolution is to eliminate the specialized hardware and move all of the processing
https://en.wikipedia.org/wiki/RTL%20%28Croatian%20TV%20channel%29
RTL (previously known as RTL Televizija) is a Croatian free-to-air television network founded on 30 April 2004. It was owned by the RTL Group from 2004 to 2022; since 1 June 2022, it has been owned by the CME Group. It is the second commercial television network in Croatia to hold a national concession, after Nova TV. On 15 May 2014, RTL Group announced that Henning Tewes, Managing Director of news provider Enex, would be appointed Chief Executive Officer (CEO) of RTL Hrvatska as of 1 July 2014. Tewes succeeded Johannes Züll, who left the RTL Group to become president and CEO of Studio Hamburg. As a result of a proposal from Henning Tewes, Ivan Lovreček, former editor-in-chief and member of the Executive Board of RTL Hrvatska, was promoted to Deputy CEO of the company on 1 July 2014. The channel's idents received a three-dimensional design on 16 September 2015, with "Mein RTL" used in certain idents. On 12 November 2019, as part of a nationwide transition to the DVB-T2 broadcast standard among all Croatian broadcasters, the channel, along with RTL 2, RTL Kockica, RTL Passion, RTL Living, and RTL Crime, launched its HD feed. A month later, on 20 December 2019, RTL debuted a new flat logo and graphics package that replaced the Phoenica font (used by RTL's parent channel in its on-air appearance) with the well-known FF DIN font. Its current slogan is "Više od Televizije" (More than Television). On 15 September 2021, the German version of RTL was relaunched with a new multi-colored logo, and it was confirmed that the new logo would debut in Croatia in the near future. On 14 February 2022, it was announced that RTL Group had reached an agreement with Central European Media Enterprises for the sale of RTL Hrvatska. On 1 June 2022, the sale of RTL Hrvatska to CME was completed. Programming In-house production RTL Televizija broadcasts its own news programming each day. 
From Monday to Thursday, RTL broadcasts RTL Vijesti at 16:30 and RTL Danas at 18:30 CET, followed by RTL Direkt at 22:15 CET with late news and current affairs. During the weekend, two news programs are transmitted, RTL Vijesti at 16:30 and RTL Danas at 18:30. Throughout its existence, the station has created its own programming, including the daily talk show Sanja, a dating game show called Srcolovka, a quiz show called Veto, an entertainment show called Salto, a talk show called Retromanija, a daily soap opera called Zabranjena ljubav (Forbidden Love), and the magazines Exploziv and Exkluziv, the latter of which later became Exkluziv Tabloid. The Croatian version of Big Brother was also produced by RTL Televizija, at the time the most popular television program in Croatia. They also started producing a sitcom called Bibin svijet (Biba's World) in 2006. Among its other shows are the talk show Studio 45, the dramatic series Ne daj se, Nina!, Krv nije voda, and K.T.2, Policijska patrola, with Život nogometaša and Moja 3 zi
https://en.wikipedia.org/wiki/Density%20%28computer%20storage%29
Density is a measure of the quantity of information bits that can be stored on a given length (linear density) of track, area of the surface (areal density), or in a given volume (volumetric density) of a computer storage medium. Generally, higher density is more desirable, for it allows more data to be stored in the same physical space. Density therefore has a direct relationship to the storage capacity of a given medium. Density also generally affects the performance within a particular medium, as well as price. Storage device classes Solid state media Solid state drives use flash memory to store data in non-volatile form. They are the latest form of mass-produced storage and rival magnetic disk media. Solid state media data is saved to a pool of NAND flash. NAND itself is made up of what are called floating-gate transistors. Unlike the transistor designs used in DRAM, which must be refreshed multiple times per second, NAND flash is designed to retain its charge state even when not powered up. The highest-capacity drives commercially available are the Nimbus Data ExaDrive DC series, which come in capacities ranging from 16 TB to 100 TB. Nimbus states that for its size, the 100 TB SSD has a 6:1 space-saving ratio over a nearline HDD. Magnetic disk media Hard disk drives store data in the magnetic polarization of small patches of the surface coating on a disk. The maximum areal density is defined by the size of the magnetic particles in the surface, as well as the size of the "head" used to read and write the data. In 1956 the first hard drive, the IBM 350, had an areal density of 2,000 bit/in2. Since then, the increase in density has matched Moore's law, reaching 1 Tbit/in2 in 2014. In 2015, Seagate introduced a hard drive with a density of 1.34 Tbit/in2, more than 600 million times that of the IBM 350. It is expected that current recording technology can "feasibly" scale to at least 5 Tbit/in2 in the near future. 
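The growth figure quoted above follows from simple arithmetic on the two densities; a quick sketch (decimal prefixes assumed for "Tbit"):

```python
# Rough check of the quoted areal-density growth (decimal prefixes assumed).
ibm_350_density = 2_000          # bit/in2, IBM 350 (1956)
seagate_2015_density = 1.34e12   # bit/in2, 1.34 Tbit/in2 Seagate drive (2015)

ratio = seagate_2015_density / ibm_350_density
print(f"{ratio:.3g}")  # about 6.7e+08, i.e. "more than 600 million times"
```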
New technologies like heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR) are under development and are expected to allow increases in magnetic areal density to continue. Optical disc media Optical discs store data in small pits in a plastic surface that is then covered with a thin layer of reflective metal. Compact discs (CDs) offer a density of about 0.90 Gbit/in2, using pits which are 0.83 micrometers long and 0.5 micrometers wide, arranged in tracks spaced 1.6 micrometers apart. DVDs are essentially higher-density CDs, using more of the disc surface, smaller pits (0.64 micrometers), and tighter tracks (0.74 micrometers), offering a density of about 2.2 Gbit/in2. Single-layer HD DVD and Blu-ray discs offer densities around 7.5 Gbit/in2 and 12.5 Gbit/in2, respectively. When introduced in 1982, CDs had considerably higher densities than hard disk drives, but hard disk drives have since advanced much more quickly and eclipsed optical media in both areal density and capacity per device. Magn
https://en.wikipedia.org/wiki/Quiescence
Quiescence (/kwiˈɛsəns/) is a state of quietness or inactivity. It may refer to: Quiescence search, in game tree searching (adversarial search) in artificial intelligence, a quiescent state is one in which a game is considered stable and unlikely to change drastically the next few plays Seed dormancy, a form of delayed seed germination Quiescence, a type of dormancy in trees Quiescent phase, the first part of the first stage of childbirth The G0 phase of a cell in the cell cycle; quiescence is the state of a cell when it is not dividing Quiescent current (biasing) in an electronic circuit Quiescent consistency is one of the safety properties for concurrent data structures See also Rest (disambiguation)
https://en.wikipedia.org/wiki/List%20of%20Java%20keywords
In the Java programming language, a keyword is any one of 68 reserved words that have a predefined meaning in the language. Because of this, programmers cannot use keywords in some contexts, such as names for variables, methods, classes, or as any other identifier. Of these 68 keywords, 17 of them are only contextually reserved, and can sometimes be used as an identifier, unlike standard reserved words. Due to their special functions in the language, most integrated development environments for Java use syntax highlighting to display keywords in a different colour for easy identification. List of Java keywords _ Added in Java 9, the underscore has become a keyword and cannot be used as a variable name anymore. abstract A method with no definition must be declared as abstract and the class containing it must be declared as abstract. Abstract classes cannot be instantiated. Abstract methods must be implemented in the sub classes. The abstract keyword cannot be used with variables or constructors. Note that an abstract class isn't required to have an abstract method at all. assert (added in J2SE 1.4) Assert describes a predicate (a true–false statement) placed in a Java program to indicate that the developer thinks that the predicate is always true at that place. If an assertion evaluates to false at run-time, an assertion failure results, which typically causes execution to abort. Assertions are disabled at runtime by default, but can be enabled through a command-line option or programmatically through a method on the class loader. boolean Defines a boolean variable for the values "true" or "false" only. By default, the value of boolean primitive type is false. This keyword is also used to declare that a method returns a value of the primitive type boolean. break Used to end the execution in the current loop body. Used to break out of a switch block. byte The byte keyword is used to declare a field that can hold an 8-bit signed two's complement integer. 
This keyword is also used to declare that a method returns a value of the primitive type byte. case A statement in the switch block can be labeled with one or more case or default labels. The switch statement evaluates its expression, then executes all statements that follow the matching case label; see switch. catch Used in conjunction with a try block and an optional finally block. The statements in the catch block specify what to do if a specific type of exception is thrown by the try block. char Defines a character variable capable of holding any character of the Java source file's character set. class A type that defines the implementation of a particular kind of object. A class definition defines instance and class fields, methods, and inner classes as well as specifying the interfaces the class implements and the immediate superclass of the class. If the superclass is not explicitly specified, the superclass is implicitly Object. The class keyword can also be used in the form Class.clas
https://en.wikipedia.org/wiki/Bencode
Bencode (pronounced like Bee-encode) is the encoding used by the peer-to-peer file sharing system BitTorrent for storing and transmitting loosely structured data. It supports four different types of values: byte strings, integers, lists, and dictionaries (associative arrays). Bencoding is most commonly used in torrent files, and as such is part of the BitTorrent specification. These metadata files are simply bencoded dictionaries. Bencoding is simple and (because numbers are encoded as text in decimal notation) is unaffected by endianness, which is important for a cross-platform application like BitTorrent. It is also fairly flexible, as long as applications ignore unexpected dictionary keys, so that new ones can be added without creating incompatibilities. Encoding algorithm Bencode uses ASCII characters as delimiters and digits. An integer is encoded as i<integer encoded in base ten ASCII>e. Leading zeros are not allowed (although the number zero is still represented as "0"). Negative values are encoded by prefixing the number with a hyphen-minus. The number 42 would thus be encoded as i42e, 0 as i0e, and -42 as i-42e. Negative zero is not permitted. A byte string (a sequence of bytes, not necessarily characters) is encoded as <length>:<contents>. The length is encoded in base 10, like integers, but must be non-negative (zero is allowed); the contents are just the bytes that make up the string. The string "spam" would be encoded as 4:spam. The specification does not deal with encoding of characters outside the ASCII set; to mitigate this, some BitTorrent applications explicitly communicate the encoding (most commonly UTF-8) in various non-standard ways. This is identical to how netstrings work, except that netstrings additionally append a comma suffix after the byte sequence. A list of values is encoded as l<contents>e. The contents consist of the bencoded elements of the list, in order, concatenated. A list consisting of the string "spam" and the number 42 would be encoded as l4:spami42ee. 
Note the absence of separators between elements, and that the first character is the letter 'l', not the digit '1'. A dictionary is encoded as d<contents>e. The elements of the dictionary are encoded with each key immediately followed by its value. All keys must be byte strings and must appear in lexicographical order. A dictionary that associates the values 42 and "spam" with the keys "foo" and "bar", respectively (in other words, {"foo": 42, "bar": "spam"}), would be encoded as follows: d3:bar4:spam3:fooi42ee. There are no restrictions on what kind of values may be stored in lists and dictionaries; they may (and usually do) contain other lists and dictionaries. This allows for arbitrarily complex data structures to be encoded. Features & drawbacks Bencode is a very specialized kind of binary coding with some unique properties: For each possible (complex) value, there is only a single valid bencoding; i.e. there is a bijection between values and their encodings. This has the advantage that applications may compare bencode
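The encoding rules above are compact enough to sketch as a minimal Python encoder. This is an illustrative sketch, not code from any BitTorrent client; the function name bencode and the choice of UTF-8 for text strings are assumptions.

```python
def bencode(value) -> bytes:
    """Minimal bencode encoder following the rules described above (sketch)."""
    if isinstance(value, bool):
        raise TypeError("bencode has no boolean type")
    if isinstance(value, int):
        return b"i%de" % value                       # i<base-ten ASCII>e
    if isinstance(value, str):
        value = value.encode("utf-8")                # assumption: text is UTF-8
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)        # <length>:<contents>
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # Keys must be byte strings and must appear in lexicographical order.
        items = sorted((k.encode("utf-8") if isinstance(k, str) else k, v)
                       for k, v in value.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(value).__name__}")

print(bencode(42))                          # b'i42e'
print(bencode("spam"))                      # b'4:spam'
print(bencode(["spam", 42]))                # b'l4:spami42ee'
print(bencode({"foo": 42, "bar": "spam"}))  # b'd3:bar4:spam3:fooi42ee'
```

Sorting the dictionary items before concatenation is what gives bencode its one-value-one-encoding property discussed below.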
https://en.wikipedia.org/wiki/Tuxedo%20%28software%29
Tuxedo (Transactions for Unix, Extended for Distributed Operations) is a middleware platform used to manage distributed transaction processing in distributed computing environments. Tuxedo is a transaction processing system or transaction-oriented middleware, or enterprise application server for a variety of systems and programming languages. Developed by AT&T in the 1980s, it became a software product of Oracle Corporation in 2008 when they acquired BEA Systems. Tuxedo is now part of the Oracle Fusion Middleware. History From the beginning in 1983, AT&T designed Tuxedo for high availability and to provide extremely scalable applications to support applications requiring thousands of transactions per second on commonly available distributed systems. The original development targeted the creation and administration of operations support systems for the US telephone company that required online transaction processing (OLTP) capabilities. The Tuxedo concepts derived from the Loop Maintenance Operations System (LMOS). Tuxedo supported moving the LMOS application off mainframe systems that used Information Management System (IMS) from IBM on to much cheaper distributed systems running (AT&T's own) Unix. The original Tuxedo team comprised members of the LMOS team, including Juan M. Andrade, Mark T. Carges, Terrence Dwyer, and Stephen Felts. In 1993 Novell acquired the Unix System Laboratories (USL) division of AT&T which was responsible for the development of Tuxedo at the time. In September 1993 it was called the "best known" distributed transaction processing monitor, running on 25 different platforms. In February 1996, BEA Systems made an exclusive agreement with Novell to develop and distribute Tuxedo on non-NetWare platforms, with most Novell employees working with Tuxedo joining BEA. In 2008, Oracle Corporation acquired BEA Systems, and TUXEDO was marketed as part of the Oracle Fusion Middleware product line. 
Tuxedo has been used as transactional middleware by a number of multi-tier application development tools. The Open Group used some of the Tuxedo interfaces as the basis of their standards such as X/Open XA and XATMI. The Tuxedo developers published papers about it in the early 1990s. Later it became the basis of some research projects. Features
Standards-based APIs - SCA, The Open Group XATMI, Object Management Group CORBA
Communication types - synchronous, asynchronous, conversational, unsolicited notifications, publish/subscribe
Typed buffers:
FML/FML32 - self-describing fielded buffers similar to Abstract Syntax Notation One or Fast Infoset
XML
STRING and multibyte strings (MBSTRING)
CARRAY - binary blobs
VIEW/VIEW32 - externally described records
RECORD - representing COBOL record structures
Transaction management - global transactions, two-phase commit protocol, X/Open XA
/D - Clustering - Domains
/WS - Remote clients
WTC - WebLogic Tuxedo Connector
Java clients - Jolt
Java EE (J2EE) integration - Tuxedo JCA Adapter
Bidire
https://en.wikipedia.org/wiki/Set%20partitioning%20in%20hierarchical%20trees
Set partitioning in hierarchical trees (SPIHT) is an image compression algorithm that exploits the inherent similarities across the subbands in a wavelet decomposition of an image. The algorithm was developed by Brazilian engineer Amir Said with William A. Pearlman in 1996. General description The algorithm codes the most important wavelet transform coefficients first, and transmits the bits so that an increasingly refined copy of the original image can be obtained progressively. See also Embedded Zerotrees of Wavelet transforms (EZW) Wavelet
https://en.wikipedia.org/wiki/Start%20Network
Start Network AS is a private company that owns and runs the internet service provider Start.no in Norway. In addition to the premium portal site at start.no, the company offers several levels of service, including free dial-up Internet access, free internet- and POP3-based email, and free web hosting for homepages. During 2005, the company started selling high-bit-rate home Internet access via asymmetric digital subscriber line (ADSL) to Norwegian customers. Member-based websites share a common authentication system, Start Pass, a Norwegian equivalent to Microsoft Passport. By February 2006, it had 1.7 million registered users; Norway has 4.6 million citizens. The company derives its revenues primarily from the internet access business, from the sale of advertising and from various types of electronic commerce. The two major shareholders are DB Medialab and PowerTech Information Systems.
https://en.wikipedia.org/wiki/Multivac
Multivac is the name of a fictional supercomputer appearing in over a dozen science fiction stories by American writer Isaac Asimov. Asimov's depiction of Multivac, a mainframe computer accessible by terminal, originally by specialists using machine code and later by any user, and used for directing the global economy and humanity's development, has been seen as the defining conceptualization of the genre of computers for the period (1950s–1960s). Multivac has been described as the direct ancestor of HAL 9000.

Description
Like most of the technologies Asimov describes in his fiction, Multivac's exact specifications vary among appearances. In all cases, it is a government-run computer that answers questions posed using natural language, and it is usually buried deep underground for security purposes. According to his autobiography In Memory Yet Green, Asimov coined the name in imitation of UNIVAC, an early mainframe computer. Asimov had assumed the name "Univac" denoted a computer with a single vacuum tube (it is actually an acronym for "Universal Automatic Computer"), and reasoned that a computer with many such tubes would be more powerful, so he called his fictional computer "Multivac". His later short story "The Last Question", however, expands the AC suffix to "analog computer". Asimov never settles on a particular size for the computer (beyond mentioning that it is very large) or for the supporting facilities around it. In the short story "Franchise" it is described as half a mile long (~800 meters) and three stories high, at least as far as the general public knows, while "All the Troubles of the World" states it fills all of Washington, D.C. There are frequent mentions of corridors and people inside Multivac. Unlike the artificial intelligences portrayed in his Robot series, Multivac's early interface is mechanized and impersonal, consisting of complex command consoles few humans can operate.
In "The Last Question", Multivac is shown as having a lifetime of many thousands of years, growing ever more enormous with each section of the story, which can explain its different reported sizes as occurring at different points in the internal timeline of the overarching story.

Storylines
Multivac appeared in over a dozen science fiction stories by Asimov, some of which have entered the popular imagination. In the early Multivac story "Franchise", Multivac chooses a single "most representative" person from the population of the United States, whom the computer then interrogates to determine the country's overall orientation. All elected offices are then filled by the candidates the computer calculates as acceptable to the populace. Asimov wrote this story as the logical culmination, or possibly the reductio ad absurdum, of UNIVAC's ability to forecast election results from small samples. In possibly the most famous Multivac story, "The Last Question", two slightly drunken technicians ask Multivac if humanity can revers
https://en.wikipedia.org/wiki/A%20Muppet%20Family%20Christmas
A Muppet Family Christmas is a Christmas musical television special starring Jim Henson's Muppets. It first aired on December 16, 1987, on the ABC television network in the United States. Shot in Toronto, Ontario, Canada, its teleplay was written by longtime Muppet writer Jerry Juhl, and it was directed by Peter Harris and Eric Till (the latter uncredited). The special features various Muppets from The Muppet Show, Sesame Street, Fraggle Rock, and Muppet Babies. It also stars Gerard Parkes as Doc from the North American wraparound segments of Fraggle Rock, and Henson as himself in a cameo appearance at the end. In the plot, the Muppets surprise Fozzie Bear's mother with a Christmas visit to her farmhouse, unaware of her planned getaway to Malibu. Due to licensing issues with songs featured in A Muppet Family Christmas, some scenes have been cut from subsequent home media releases.

Plot
Fozzie Bear is driving many of the Muppets to his mother Emily's farm for Christmas while they all sing "We Need a Little Christmas". Unbeknownst to Fozzie, Emily Bear is preparing to go to Malibu for the holiday and rent her farmhouse to Doc and Sprocket, who want to spend a nice quiet Christmas in the country. Doc and Sprocket have just arrived when Fozzie and the other Muppets enter, disrupting Emily's and Doc's plans for the holidays. Just then, Miss Piggy calls to tell Kermit the Frog that she is at a photo session and will be late, making Kermit very worried. Rowlf the Dog and the Swedish Chef arrive, and they begin to prepare for Christmas. Meanwhile, Fozzie builds a snowman outside; the snowman comes to life, singing along with Fozzie and putting on a comedy act with him. After the performance, Fozzie goes into the house, where he tells Kermit about his new act. This is interrupted by Miss Piggy calling again to tell Kermit that she is doing a little Christmas shopping before she goes to the farmhouse.
Sometime later, the gang watches a home movie of themselves as babies during their first Christmas together. A group of carolers then arrive consisting of Big Bird and the rest of the Sesame Street Muppets. All the Muppets continue to prepare for Christmas as the news comes on TV. The Muppet Newsman reports that the worst blizzard in 50 years is approaching the area. Kermit realizes that Miss Piggy is out in the storm and gets more worried about her. Fozzie and Emily go over where everyone is going to sleep. Big Bird and Cookie Monster will sleep in the attic, Herry Monster will sleep in the bathtub, and Ernie and Bert will bunk with Kermit. The Sesame Street Muppets perform a pageant of 'Twas the Night Before Christmas where the Two-Headed Monster portrays Santa Claus. Kermit then gets a third call from Piggy stating that her limo got stuck in the snow and that she is calling for a taxi. Fozzie approaches Kermit stating that now is a good time to show him his new comedy act with the Snowman, but their act is cut short no thanks to Statler and
https://en.wikipedia.org/wiki/Animal%20Miracles
Animal Miracles, also broadcast as Miracle Pets, is a one-hour live-action program that aired on the Pax TV network from 2001 to 2003, offering a perspective on human and animal interaction. Hosted by Alan Thicke, the series features animals protecting humans or other pets, one such being a llama guarding a herd of alpacas. It is also shown on Animal Planet. Each episode contains three or four segments, some extending beyond a commercial break.

Season 1 Episodes
Although the show released more than forty episodes from 2001 to 2003, only thirteen episodes from the first season are available on Amazon Video. The available episodes are listed here:

1. Caesar's Sacrifice
A Canadian police dog faces a drunken gunman on a crowded schoolyard; a young girl affected with spina bifida forms a strong bond with a 32-year-old horse; a manatee is rescued by SeaWorld; an aging English Mastiff saves his diabetic owner from going into insulin shock.

2. Stormy's Shark Attack
A bottlenose dolphin survives a shark attack and is nursed back to health; a yellow Labrador retriever saves his elderly owner when he collapses during a daily walk; a German Shepherd cross pulls her master from a frozen river; a black Labrador retriever saves his owner after the duo end up lost in a forest; a farmer is rescued by one of his llamas after a fence panel collapses on his right leg.

3. Cat on the Night Highway
A stray terrier warns his elderly owners of a threatening house fire; a horse alerts his owners to his injured horse friend nearby; a cat survives a house fire after warning her owner just in time; two dogs end up lost in a forest; a cat rouses her dozing owner before she falls asleep at the wheel.

4. Kiwi Pulls Through
A miniature pony visits sick, injured, and disabled children at a hospital; a German Shepherd alerts her owner to a kitchen fire; a Burmese cat rouses his owner when her heated blanket goes up in smoke; a trained service dog saves her owner from a chemical spill.
5. Keno, Avalanche Dog
A search and rescue Labrador retriever rescues an avalanche victim buried under several feet of snow; a woman befriends a gifted horse named Shagra; a rescued bald eagle finds a new home at a rehabilitation center; a cocker spaniel saves her owner from a house fire.

6. Poudre's Catch of the Day
A Golden Retriever pulls her owner out of a raging river after a fly-fishing accident; a Rottweiler helps his owner breathe when a power outage shuts off her breathing tank; a woman builds a sanctuary for alligators, iguanas, and other reptiles; a woman who had been in prison relives her childhood passion by taking care of dogs; a housecat alerts his owners to a living room fire.

7. Hand Me the Bat
A Rottweiler protects her truck-driving master from a trio of ruthless thugs; an overweight dachshund saves his owners from multiple fires in the house; a ten-year-old blind girl rides a racing horse; a Royal Canadian Mounted Police officer and his German shepherd t
https://en.wikipedia.org/wiki/Encore%20%28disambiguation%29
An encore is a performance added to the end of a concert. Encore(s) may also refer to:

Businesses and products
Computing
Encore (software), a music notation software
Encore Computer, an early maker of parallel computers and real-time software
Encore, Inc., a software publishing and distribution company
EnCore Processor, a configurable and extendable microprocessor
Adobe Encore, a DVD authoring software tool
Z8 Encore!, a microcontroller by ZiLOG
Transportation
Encore, a train on Amtrak's Hiawatha Service
Buick Encore, a car produced by GM from 2012 onwards
Norwegian Encore, a Norwegian Cruise Line passenger ship
WestJet Encore, a Canadian airline
Other
Encore Books, a defunct American bookstore chain
Encore Capital Group, an American financial-services company
Encore Data Products, an American manufacturer of audio and video equipment
Encore Enterprises, an American real-estate company
Encore Las Vegas, a casino resort in Las Vegas, Nevada, US
Encore Studio, a music platform launched by Kid Cudi

Film and television
Film
Encore (1951 film), an American anthology film
Encore (1988 film) or Once More, a French film directed by Paul Vecchiali
Encore (1996 film), a French film directed by Pascal Bonitzer
Encore, Once More Encore!, a 1992 Russian film
Television channels and series
Encore+, a YouTube channel sponsored by the Canadian Media Fund
Encore (TV series), a 1960 Canadian drama anthology series
Encore! (TV series), a 2019–2020 American reality series
Encore! Encore!, a 1998–1999 American sitcom
Starz Encore, formerly Encore, an American premium television channel
Television episodes
"Encore" (Brimstone)
"Encore" (Law & Order)
"Encore" (Mission: Impossible)
"Encore" (So Weird)

Music
Encore! (musician) (born 1974), German singer
Encore HSC, an annual performance at the Sydney Opera House
Encore School for Strings, an American summer music institute
Encore Series, a series of concert recordings by The Who
Encores!, a program presented by New York City Center since 1994
Albums
Encore (Anderson East album), 2018
Encore (Bobby Vinton album), 1980
Encore (Clark Sisters album), 2008
Encore (David Garrett album), 2008
Encore (DJ Snake album), 2016
Encore (Eberhard Weber album), 2015
Encore (Elaine Paige album), 1995
Encore (Eddie Bert album), 1955
Encore (Eminem album) or the title song (see below), 2004
Encore (George Jones album), 1981
Encore! (Jeanne Pruett album), 1979
Encore (Johnny Cash album), 1981
Encore (Klaus Nomi album), 1983
Encore (Lionel Richie album), 2002
Encore (The Louvin Brothers album), 1961
Encore (Lynn Anderson album), 1981
Encore (Marina Prior album), 2013
Encore (Marti Webb album), 1985
Encore (Russell Watson album), 2002
Encore (S.H.E album), 2004
Encore (Sam Cooke album), 1958
Encore (Sarah Brightman album), 2002
Encore (The Specials album), 2019
Encore (Tangerine Dream album), 1977
Encore (1988 Wanda Jackson album), 1988
Encore (2021 Wanda Jackson album),
https://en.wikipedia.org/wiki/ARIA%20Music%20Awards%20of%202003
The 17th Annual Australian Recording Industry Association Music Awards (generally known as the ARIA Music Awards) were held on 21 October 2003 at the Sydney Superdome. The ceremony aired on Network Ten.

Awards
The winner is listed first in each category, with the remaining nominees below.

ARIA Awards

Album of the Year
Powderfinger – Vulture Street
Delta Goodrem – Innocent Eyes
The Sleepy Jackson – Lovers
Something for Kate – The Official Fiction
The Waifs – Up All Night

Single of the Year
Delta Goodrem – "Born to Try"
Amiel – "Lovesong"
Powderfinger – "(Baby I've Got You) On My Mind"
Silverchair – "Luv Your Life"
The Waifs – "Lighthouse"

Best Male Artist
Alex Lloyd – "Coming Home"
Ben Lee – Hey You. Yes You.
John Butler – Living
Nick Cave – Nocturama
Tex Perkins – Sweet Nothing

Best Female Artist
Delta Goodrem – Innocent Eyes
Amiel – Audio Out
Kylie Minogue – "Come into My World"
Renée Geyer – Tenderland
Sarah Blasko – Prelusive EP ("Your Way")

Best Group
Powderfinger – Vulture Street
Grinspoon – "No Reason"
Silverchair – "Across the Night"
Something for Kate – The Official Fiction
The Waifs – Up All Night

Highest Selling Single
Delta Goodrem – "Born to Try"
Amiel – "Lovesong"
The Androids – "Do It with Madonna"
Delta Goodrem – "Innocent Eyes"
Delta Goodrem – "Lost Without You"

Highest Selling Album
Delta Goodrem – Innocent Eyes
John Farnham – The Last Time
Kasey Chambers – Barricades & Brickwalls
Powderfinger – Vulture Street
Silverchair – Diorama

Breakthrough Artist – Single
Delta Goodrem – "Born to Try"
Candice Alley – "Falling"
The Casanovas – "Shake It"
Rogue Traders – "One of My Kind"
The Sleepy Jackson – "Vampire Racecourse"

Breakthrough Artist – Album
Delta Goodrem – Innocent Eyes
Amiel – Audio Out
Pete Murray – Feeler
The Sleepy Jackson – Lovers
The Waifs – Up All Night

Best Independent Release
The Waifs – Up All Night
1200 Techniques – "Eye of the Storm"
Diesel – Hear
John Butler Trio – Living
The Mess Hall – Feeling Sideways

Best Adult Contemporary Album
John Farnham – The Last Time
Blackeyed Susans – Shangri-La
David Bridie – Hotel Radio
The Go-Betweens – Bright Yellow Bright Orange
Renée Geyer – Tenderland

Best Rock Album
Powderfinger – Vulture Street
Magic Dirt – Tough Love
Nick Cave and the Bad Seeds – Nocturama
The Sleepy Jackson – Lovers
Something for Kate – The Official Fiction

Best Country Album
Keith Urban – Golden Road
Adam Harvey – Cowboy Dreams
Beccy Cole – Little Victories
Bill Chambers – Sleeping with the Blues
Sara Storer – Beautiful Circle

Best Blues & Roots Album
The Waifs – Up All Night
John Butler Trio – Living
Mia Dyson – Cold Water
Pete Murray – Feeler
The Revelators – The Revelators

Best Pop Release
Delta Goodrem – Innocent Eyes
Amiel – Audio Out
The Androids – "Do It with Madonna"
Dannii Minogue – Neon Nights
Kylie Minogue – "Come into My World"

Best Dance Release
Rogue Traders – "One of My Kind"
1200 Techniques – "Eye of the Storm"
Disco Montego – Disco Montego
Gerling – "Who's Ya Daddy?"
Wicked Beat Sound System – New Soul Bre
https://en.wikipedia.org/wiki/Danny%20Kopec
Daniel Kopec (February 28, 1954 – June 12, 2016) was an American chess International Master, author, and computer science professor at Brooklyn College.

Education
He graduated from Dartmouth College in the class of 1975. Kopec later received a PhD in Machine Intelligence from the University of Edinburgh in 1982, studying under Donald Michie.

Chess
Kopec was Greater NY High School Champion at 14 and reached master strength at 17. He won the Scottish Chess Championship in 1980 while pursuing his doctorate in Edinburgh. He lived in Canada for two years during the 1980s and competed there with success, including an equal-second finish in the 1984 Canadian Chess Championship. Kopec achieved the FIDE International Master title in 1985 and had several top-three finishes (including second-place ties) in the US Open. He wrote numerous books on chess, produced eight chess instructional DVDs, and ran chess camps starting in 1994. Kopec also worked to promote his chess opening, the Kopec System (1.e4 c5 2.Nf3 d6 3.Bd3!?). With Ivan Bratko, he created the Bratko–Kopec Test, one of the de facto standard testing systems for chess-playing computer programs in the 1980s.

Computer science
Kopec published notable academic work in the areas of artificial intelligence, machine error reduction, intelligent tutoring systems, and computer science education.

Death
Kopec died on June 12, 2016, from pancreatic cancer.
Partial chess bibliography
(1980) Best Games of the Young Grandmasters, with Craig Pritchett, publisher Bell and Howell, London
(1985) Master Chess: A Course in 21 Lessons
(1997) Practical Middlegame Techniques, with Rudy Blumenfeld
(1998) Test, Evaluate, and Improve Your Chess, co-author Hal Terrie
(2002) Chess World Title Contenders and Their Styles, co-author Craig Pritchett
(2003) Mastering the Sicilian
(2004) Winning the Won Game, co-author Ľubomír Ftáčnik

References

External links
Academic Website at Brooklyn College
Kopec Chess Services

1954 births 2016 deaths American chess players Canadian chess players Chess International Masters Canadian chess writers American chess writers American male non-fiction writers Dartmouth College alumni Alumni of the University of Edinburgh Brooklyn College faculty
https://en.wikipedia.org/wiki/XDI
XDI (eXtensible Data Interchange) is a semantic data interchange format and protocol under development by the OASIS XDI Technical Committee. The name comes from the addressable graph model XDI uses: every node in the XDI graph is its own RDF graph that is uniquely addressable.

Background
The main features of XDI are: the ability to link and nest RDF graphs to provide context; full addressability of all nodes in the graph at any level of context; representation of XDI operations as graph statements, so that authorization can be built into the graph; a standard JSON serialization format; and a simple ontology language for defining shared semantics using XDI dictionary services.

The XDI protocol is based on an exchange of XDI messages, which are themselves XDI graphs. Since the semantics of each message are fully contained within the XDI graph of that message, the XDI protocol can be bound to multiple transport protocols. The XDI TC is defining bindings to HTTP and HTTPS; however, it is also exploring bindings to XMPP and potentially directly to TCP/IP.

XDI also provides a standardized portable authorization format called XDI link contracts. Link contracts are XDI subgraphs that express the permissions one XDI actor (a person, organization, or thing) grants to another for access to and usage of an XDI data graph. XDI link contracts enable these permissions to be expressed in a standard machine-readable format understood by any XDI endpoint. This approach to globally distributed data sharing models the real-world mechanisms of the social and legal contracts that bind people and organizations today. Thus XDI can be a key enabler of a distributed Social Web. It has also been cited as a mechanism to support a new legal concept, Virtual Rights, which is based on a new legal entity, the "virtual identity", and a new fundamental right: "to have or not to have virtual identities".
Public services based on the OASIS XDI specification are under development by an international non-profit organization, XDI.org.

See also
Link contract
i-name
i-number

External links
OASIS XDI Technical Committee
XDI.org
OASIS XDI TC wiki page with links to documents explaining the XDI graph model
Implementations: XDI2, an open-source XDI reference implementation in Java; the site has live utilities for experimenting directly with XDI

XML-based standards
https://en.wikipedia.org/wiki/Extensible%20Name%20Service
Extensible Name Service (XNS) is an open protocol for universal addressing and automated data exchange. It is an XML-based digital identity architecture.

History
The development of XML in 1998 led to the XNS project and the establishment of an international non-profit governance organization, the XNS Public Trust Organization (XNSORG), in early 2000. In 2002, the XNS specifications were contributed by XNSORG to OASIS, where they became part of the XRI (Extensible Resource Identifier) and XDI (XRI Data Interchange) Technical Committees. Together, these two standards, XRI and XDI, form the basis of the Dataweb. XNSORG has since evolved into XDI.ORG, and now offers community-based XRI/XDI infrastructure.

See also
OpenID

External links
XNSORG
XDI.ORG

Internet protocols
https://en.wikipedia.org/wiki/SCU
SCU may refer to:

Computing
SAS control unit, a hardware component that controls Serial Attached SCSI devices
Single compilation unit, a C/C++-specific compilation technique
System Control Unit, part of the Sega Saturn chip set

Sport
SoCal Uncensored, a professional wrestling stable often referred to as SCU

Unions
Scottish Cyclist's Union, the sports governing body for cycling in Scotland, now known as Scottish Cycling
Service Credit Union, New Hampshire
Sikorsky Credit Union, Connecticut
Steinbach Credit Union, Canada

Units
Serious Crash Unit, a New Zealand television series
Scoville unit, a measure on the Scoville scale of the hotness or piquancy of sauces
Special Commando Unit, employed by the MACV-SOG during the Vietnam War
Street Crimes Unit, part of the New York Police Department
Santa Clara Unit, an operational unit of the California Department of Forestry and Fire Protection responsible for the East and South Bay regions of the Bay Area

Universities
Santa Clara University, California, United States
Sapporo City University, Japan
Scott Christian University, Machakos, Kenya
Shih Chien University, Taipei, Taiwan
Sichuan University, Chengdu, China
Seoul Cyber University, Seoul, Korea
Soochow University (disambiguation), multiple schools
Southern California University of Health Sciences, Whittier, California
Southern Cross University, Lismore, NSW, Australia
Southwestern Christian University, Bethany, Oklahoma
Suez Canal University, Ismailia, Egypt
Shahid Chamran University, Ahvaz, Iran
Soegijapranata Catholic University, Semarang, Indonesia

Other uses
Antonio Maceo Airport in Santiago de Cuba, Cuba (IATA airport code: SCU)
Sacra Corona Unita, a Mafia-like criminal organization from Apulia, southern Italy
SCU Lightning Complex fires, a group of wildfires in California
Sculptor Capital Management, an American alternative asset management firm (stock symbol SCU)
Senatus consultum ultimum, something tantamount to martial law in the times of the Roman Republic
Service Class User, a term used in the DICOM standard

See also
https://en.wikipedia.org/wiki/I-name
I-names are one form of XRI, an OASIS open standard for digital identifiers designed for sharing resources and data across domains and applications. I-names are human-readable XRIs intended to be as easy as possible for people to remember and use. For example, a personal i-name could be =Mary or =Mary.Jones. An organizational i-name could be @Acme or @Acme.Corporation.

Persistence
One problem XRIs are designed to solve is persistent addressing: how to maintain an address that does not need to change no matter how often the contact details of a person or organization change. XRIs accomplish this by adding a new layer of abstraction over the existing IP numbering and DNS naming layers used on the Internet today (as well as over other types of addresses, such as phone numbers or instant messaging addresses). Such an abstraction layer is not new; URNs (Uniform Resource Names) and other persistent identifier architectures have the same effect. What is different about the XRI layer is that it offers a single uniform syntax and resolution protocol for two different types of identifiers:

I-names
I-names are identifiers resembling domain names, designed for simplicity and ease of use. Though typically long-lived, i-names may, like domain names, be transferred or reassigned to another resource by their owners. For example, a company that changes its corporate name could sell its old i-name to another company, while both companies could retain their original i-numbers. What most differentiates i-names from domain names is that in practice each will have a synonymous (equivalent) persistent i-number (below).

I-numbers
I-numbers are machine-readable identifiers (similar to IP addresses) that are assigned to a resource (for instance, a person, organization, application, or file) and never reassigned. This means an i-number can always be used to address a network representation of the resource as long as it remains available anywhere on the network.
I-numbers, like IP addresses, are designed to be efficient for network routers to process and resolve. XRI syntax also allows i-names and i-numbers to be combined within the same XRI. So effectively the XRI layer supports both i-name and i-number synonyms for resources — one that reflects real-world semantics and can change over time, and one that reflects the persistent identity of a resource no matter how often its attributes (including its i-names) may change. And the same HTTP-based XRI resolution protocol can be used to resolve either an i-name or an i-number to an XRDS document describing the target resource. XRIs are backward-compatible with the DNS and IP addressing systems, so it is possible for domain names and IP addresses to be used as i-names (or, in rare cases, as i-numbers). Like DNS names, XRIs can also be "delegated", i.e., nested multiple levels deep, just like the directory names on a local computer file system. For example, a company can register a top-level (global) i-name for itself and
https://en.wikipedia.org/wiki/Trust%20federation
A trust federation is part of the evolving Identity Metasystem intended to bring a new layer of persistent identity and trusted data sharing to the Internet. Although the concept of trust federations is technology-neutral, several protocols, such as SAML, OpenID, Information Card, and XDI, can handle the challenges of technical interoperability. The challenge of business and social interoperability requires a new type of cooperative association, similar to a credit card association. Instead of banks, however, a trust federation is an alliance of i-brokers and their customers who agree to abide by a common set of agreements in the care and handling of customer data. A model for trust federations is offered by the Open Identity Exchange and the Kantara Initiative, and is applied in the U.S. Government ICAM Trust Framework.

Some operational trust federations are:
InCommon (academic, USA)
REFEDs (Research and Education Federations, Europe)
IGTF (Interoperable Global Trust Federation)
Portalverbund (government portal federation, Austria)

Trust federations are not limited to the social web use case; they apply to all federations where trust in identity, and compliance with other objectives of information security such as confidentiality, integrity, and privacy, is brokered.

See also
I-name
I-number
XRI
XDI

External links
The Social Web: Building an Open Social Network with XDI
OASIS XDI Technical Committee
OASIS XRI Technical Committee
XDI.ORG
Identity Commons

References

Social media
https://en.wikipedia.org/wiki/George%20W.%20Snedecor
George Waddel Snedecor (October 20, 1881 – February 15, 1974) was an American mathematician and statistician. He contributed to the foundations of analysis of variance, data analysis, experimental design, and statistical methodology. Snedecor's F-distribution and the George W. Snedecor Award of the American Statistical Association are named after him.

Early life
Born in Memphis, Tennessee, into a socially prominent and politically powerful southern Democratic Presbyterian family, Snedecor grew up in Florida and Alabama, where his lawyer father moved his wife and children in order to fulfill a personal and radical religious calling to minister to, evangelize, and educate the poor. George was the grandson of Memphis lawyer Bedford Mitchell Estes, the son of Emily Alston Estes and James G. Snedecor, and a nephew of Ione Estes Dodd and William J. Dodd, the Midwest architect.

Education and career
Snedecor studied mathematics and physics at Auburn University and the University of Alabama, where he graduated with a BS in 1905. After taking up teaching jobs at Selma Military Academy and Austin College in Sherman, Texas, he continued his study of physics at the University of Michigan, where he received an MSc in 1913. Snedecor moved to Iowa State University in 1913, where he became a professor of mathematics. He founded the first academic department of statistics in the United States, at Iowa State University in 1947. He also created the first statistics laboratory in the U.S. at Iowa State, and was a pioneer of modern applied statistics in the US. His 1938 textbook Statistical Methods became an essential resource: "In the 1970s, a review of citations in published scientific articles from all areas of science showed that Snedecor's Statistical Methods was the most frequently cited book." Snedecor worked for the statistics department of Foster's Group from 1957 to 1963, where he was involved in the elaboration of all production data.
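The analysis-of-variance machinery Snedecor systematized boils down to comparing variation between groups with variation within them. A minimal sketch (illustrative data, no significance lookup against the F-distribution tables Snedecor published):

```python
# One-way ANOVA F statistic: between-group mean square over
# within-group mean square.
def f_statistic(groups):
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Example: three small samples with separated means.
F = f_statistic([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # → 3.0
```

In practice the resulting F would be compared against the F-distribution with (k − 1, n − k) degrees of freedom to judge significance.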
The "F" of Snedecor's F-distribution is named in honor of Sir Ronald Fisher. Snedecor was awarded honorary doctorates in science by North Carolina State University in 1956 and by Iowa State University in 1958. Snedecor Hall, constructed in 1939 at Iowa State University, is the home of the Statistics Department. At Iowa State, he was an early user of John Vincent Atanasoff's Atanasoff–Berry computer, and possibly the first person to use an electronic digital computer to solve real-world production mathematics problems.

Selected publications

References

Further reading

External links
George W. Snedecor biography

American statisticians 20th-century American mathematicians 1881 births 1974 deaths Iowa State University faculty Iowa State University alumni University of Michigan alumni People from Memphis, Tennessee Presidents of the American Statistical Association Fellows of the American Statistical Association University of Alabama alumni Mathematical statisticians
https://en.wikipedia.org/wiki/Blockhead%20%28thought%20experiment%29
Blockhead is the name of a theoretical computer system invented as part of a thought experiment by philosopher Ned Block, which appeared in a paper titled "Psychologism and Behaviorism". Block did not name the computer in the paper.

Overview
In "Psychologism and Behaviorism", Block argues that the internal mechanism of a system is important in determining whether that system is intelligent, and claims to show that a non-intelligent system could pass the Turing test. Block asks us to imagine a conversation lasting any given amount of time. Given the nature of language, there are a finite number of syntactically and grammatically correct sentences that can be used to start a conversation. Consequently, there is a limit to how many "sensible" responses can be made to the first sentence, then to the second sentence, and so on until the conversation ends. Block then asks the reader to imagine a computer which had been programmed with all of these sentences (in theory, if not in practice). Block argues that such a machine could continue a conversation with a person on any topic, because the computer would be programmed with every sentence that it was possible to use; so the computer would be able to pass the Turing test despite the fact that, according to Block, it was not intelligent. Block says that this does not show that there is only one correct internal structure for generating intelligence, but simply that some internal structures do not generate intelligence. The argument is related to John Searle's Chinese room. One objection to the Blockhead argument comes from Hanoch Ben-Yami (2005), who agrees that Block's machine lacks intelligence but compares its answers to those of a suitor reciting romantic poetry whispered to him line by line: the machine answers only what its programmers have told it to answer in advance.

Sources
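Block's machine can be caricatured in a few lines: replies come from a finite lookup table keyed on the entire conversation so far. The table entries below are invented for illustration; Block's point is that no step of this process involves anything one would call intelligence.

```python
# Toy "Blockhead": every reply is a pure table lookup on the conversation
# history. The entries are made up for illustration.
TABLE = {
    (): "Hello.",
    ("Hello.",): "How are you today?",
    ("Hello.", "Fine, thanks. Do you like chess?"): "Yes, I enjoy chess.",
}

def blockhead_reply(history):
    """The reply is a function of the conversation history and nothing else."""
    return TABLE.get(tuple(history), "Let's talk about something else.")
```

A table adequate for an hour-long conversation would be finite but astronomically large, which is why Blockhead is physically implausible yet logically possible — exactly the gap the thought experiment exploits.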
See also
Dissociated press
Philosophical zombie
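Block's hypothetical machine is, in effect, a giant lookup table keyed by the entire conversation so far. A toy sketch (with an invented, microscopic table of canned responses) makes the structure of the thought experiment concrete; Block's point is that a finite but astronomically larger table of this kind could pass the Turing test without any intelligence:

```python
# A "Blockhead" in miniature: every reply is retrieved from a table keyed
# by the exact conversation prefix, so nothing resembling understanding
# ever takes place. The table contents here are invented for illustration.
RESPONSES = {
    ("Hello",): "Hi there!",
    ("Hello", "How are you?"): "Fine, thanks. And you?",
    ("Hello", "How are you?", "Good."): "Glad to hear it.",
}

def blockhead_reply(history):
    """Return the pre-programmed reply for this exact conversation prefix."""
    return RESPONSES.get(tuple(history), "I have nothing programmed for that.")

history = ["Hello"]
print(blockhead_reply(history))   # → Hi there!
history.append("How are you?")
print(blockhead_reply(history))   # → Fine, thanks. And you?
```

A real Blockhead would need one entry for every possible sensible conversation prefix, which is finite but combinatorially enormous; the data structure, not the size, is what carries the philosophical point.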
https://en.wikipedia.org/wiki/Arnoldi%20iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods which must complete to give any useful results (see for example, Householder transformation). The partial result in this case is the first few vectors of the basis the algorithm is building. When applied to Hermitian matrices it reduces to the Lanczos algorithm. The Arnoldi iteration was invented by W. E. Arnoldi in 1951. Krylov subspaces and the power iteration An intuitive method for finding the largest (in absolute value) eigenvalue of a given m × m matrix A is the power iteration: starting with an arbitrary initial vector b, calculate Ab, A²b, A³b, …, normalizing the result after every application of the matrix A. This sequence converges to the eigenvector corresponding to the eigenvalue with the largest absolute value, λ₁. However, much potentially useful computation is wasted by using only the final result, Aⁿ⁻¹b. This suggests that instead, we form the so-called Krylov matrix:

Kₙ = [b, Ab, A²b, …, Aⁿ⁻¹b].

The columns of this matrix are not in general orthogonal, but we can extract an orthogonal basis, via a method such as Gram–Schmidt orthogonalization. The resulting set of vectors is thus an orthogonal basis of the Krylov subspace 𝒦ₙ. We may expect the vectors of this basis to span good approximations of the eigenvectors corresponding to the n largest eigenvalues, for the same reason that Aⁿ⁻¹b approximates the dominant eigenvector.
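The power iteration described above can be written in a few lines of NumPy (the test matrix and iteration count are illustrative):

```python
import numpy as np

def power_iteration(A, num_iters=100):
    """Estimate the dominant eigenvalue and eigenvector of A by repeatedly
    applying A to a vector and normalizing the result after each step."""
    rng = np.random.default_rng(0)
    b = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        b = A @ b
        b = b / np.linalg.norm(b)   # normalize after every application of A
    eigenvalue = b @ A @ b          # Rayleigh quotient of the converged vector
    return eigenvalue, b

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # dominant eigenvalue is 2
lam, v = power_iteration(A)
print(round(lam, 6))                # → 2.0
```

Note that only the final vector is returned; the intermediate iterates Ab, A²b, … are discarded, which is exactly the waste the Krylov-subspace view (and the Arnoldi iteration below it on the page) is designed to avoid.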
The Arnoldi iteration The Arnoldi iteration uses the modified Gram–Schmidt process to produce a sequence of orthonormal vectors, q1, q2, q3, ..., called the Arnoldi vectors, such that for every n, the vectors q1, ..., qn span the Krylov subspace 𝒦ₙ. Explicitly, the algorithm is as follows:

Start with an arbitrary vector q1 with norm 1.
Repeat for k = 2, 3, ...
  qk := A qk−1
  for j from 1 to k − 1
    hj,k−1 := qj* qk
    qk := qk − hj,k−1 qj
  hk,k−1 := ‖qk‖
  qk := qk / hk,k−1

The j-loop projects out the component of qk in the directions of q1, ..., qk−1. This ensures the orthogonality of all the generated vectors. The algorithm breaks down when qk is the zero vector. This happens when the minimal polynomial of A is of degree k. In most applications of the Arnoldi iteration, including the eigenvalue algorithm below and GMRES, the algorithm has converged at this point. Every step of the k-loop takes one matrix-vector product and approximately 4mk floating point operations. In the programming language Python with support of the NumPy library:

import numpy as np

def arnoldi_iteration(A, b, n: int):
    """Compute a basis of the (n + 1)-Krylov subspace of A: the space
    spanned by {b, Ab, ..., A^n b}.

    Returns Q (m x (n + 1), orthonormal columns) and h ((n + 1) x n,
    upper Hessenberg), satisfying A Q[:, :n] = Q h.
    """
    eps = 1e-12
    h = np.zeros((n + 1, n))
    Q = np.zeros((A.shape[0], n + 1))
    Q[:, 0] = b / np.linalg.norm(b)      # normalize b; first Krylov vector
    for k in range(1, n + 1):
        v = A @ Q[:, k - 1]              # generate a new candidate vector
        for j in range(k):               # subtract projections on previous vectors
            h[j, k - 1] = Q[:, j].conj() @ v
            v = v - h[j, k - 1] * Q[:, j]
        h[k, k - 1] = np.linalg.norm(v)
        if h[k, k - 1] <= eps:           # stop if the zero vector is produced
            return Q, h
        Q[:, k] = v / h[k, k - 1]
    return Q, h
https://en.wikipedia.org/wiki/Block%20code
In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way. Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes, expander codes, Golay codes, and Reed–Muller codes. These examples also belong to the class of linear codes, and hence they are called linear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Boolean polynomials. Algebraic block codes are typically hard-decoded using algebraic decoders. The term block code may also refer to any error-correcting code that acts on a block of k bits of input data to produce n bits of output data (n, k). Consequently, the block coder is a memoryless device. Under this definition codes such as turbo codes, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which has memory and is instead classified as a tree code. This article deals with "algebraic block codes". The block code and its parameters Error-correcting codes are used to reliably transmit digital data over unreliable communication channels subject to channel noise. When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size.
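A toy example of this blockwise encode/decode pipeline is the binary repetition code (not one of the codes named above, but the simplest possible block code): each message bit becomes a block of n copies, and the decoder takes a majority vote within each block.

```python
def encode(message_bits, n=3):
    """Repetition code: map each message bit to a block of n copies."""
    return [bit for bit in message_bits for _ in range(n)]

def decode(received, n=3):
    """Majority vote within each block corrects up to (n - 1) // 2 bit errors."""
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return [int(sum(block) > n // 2) for block in blocks]

codeword = encode([1, 0, 1])   # → [1, 1, 1, 0, 0, 0, 1, 1, 1]
codeword[1] ^= 1               # a channel error flips one bit in the first block
print(decode(codeword))        # → [1, 0, 1]
```

Here the message length is k = 1, the block length is n = 3, and the rate is 1/3; the codes named above achieve far better trade-offs between rate and error-correcting capability.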
Each such piece is called a message and the procedure given by the block code encodes each message individually into a codeword, also called a block in the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly corrupted received blocks. The performance and success of the overall transmission depends on the parameters of the channel and the block code. Formally, a block code is an injective mapping C : Σᵏ → Σⁿ. Here, Σ is a finite and nonempty set and k and n are integers. The meaning and significance of these three parameters and other parameters related to the code are described below. The alphabet Σ The data stream to be encoded is modeled as a string over some alphabet Σ. The size |Σ| of the alphabet is often written as q. If q = 2, then the block code is called a binary block code. In many applications it is useful to consider q to be a prime power, and to identify Σ with the finite field 𝔽q. The message length k Messages ar
https://en.wikipedia.org/wiki/S3%20Graphics
S3 Graphics, Ltd (commonly referred to as S3) was an American computer graphics company. The company sold the Trio, ViRGE, Savage, and Chrome series of graphics processors. Struggling against competition from 3dfx Interactive, ATI and Nvidia, it merged with hardware manufacturer Diamond Multimedia in 1999. The resulting company renamed itself to SONICblue Incorporated, and, two years later, the graphics portion was spun off into a new joint effort with VIA Technologies. The new company focused on the mobile graphics market. VIA Technologies' stake in S3 Graphics was purchased by HTC in 2011. History S3 was founded and incorporated in January 1989 by Dado Banatao and Ronald Yara. It was named S3 as it was Banatao's third startup company. The company's first products were among the earliest graphical user interface (GUI) accelerators. These chips were popular with video card manufacturers, and their follow-up designs, including the Trio64, made strong inroads with OEMs. S3 took over the high-end 2D market just prior to the popularity of 3D accelerators. S3's first 3D accelerator chips, the ViRGE series, controlled half of the market early on but could not compete against the high-end 3D accelerators from ATI, Nvidia, and 3dfx. In some cases, the chips performed worse than software-based solutions without an accelerator. As S3 lost market share, their offerings competed in the mid-range market. Their next design, the Savage 3D, was released early and suffered from driver issues, but it introduced S3TC, which became an industry standard. S3 bought Number Nine's assets in 1999, then merged with Diamond Multimedia. The resulting company renamed itself SONICblue, refocused on consumer electronics, and sold its graphics business to VIA Technologies. Savage-derived chips were integrated into numerous VIA motherboard chipsets. Subsequent discrete derivations carried the brand names DeltaChrome and GammaChrome.
In July 2011, HTC Corporation announced they were buying VIA Technologies' stake in S3 Graphics, thus becoming the majority owner of S3 Graphics. In November, the United States International Trade Commission ruled against S3 in a patent dispute with Apple.

Graphics controllers
S3 911, 911A (June 10, 1991) - S3's first Windows accelerators (16/256-color, high-color acceleration)
S3 924 - 24-bit true-color acceleration
S3 801, 805, 805i - mainstream DRAM VLB Windows accelerators (16/256-color, high-color acceleration)
S3 928 - 24/32-bit true-color acceleration, DRAM or VRAM
S3 805p, 928p - S3's first PCI support
S3 Vision864, Vision964 (1994) - 2nd generation Windows accelerators (64-bit wide framebuffer)
S3 Vision868, Vision968 - S3's first motion video accelerator (zoom and YUV→RGB conversion)
S3 Trio 32, 64, 64V+, 64V2 (1995) - S3's first integrated (RAMDAC+VGA) accelerator. The 64-bit versions were S3's most successful product range.
ViRGE (no suffix), VX, DX, GX, GX2, Trio3D, Trio3D/2X - S3's first Windows 3D-accelerators.
https://en.wikipedia.org/wiki/Synaptics
Synaptics is a publicly owned San Jose, California-based developer of human interface (HMI) hardware and software, including touchpads for computer laptops; touch, display driver, and fingerprint biometrics technology for smartphones; and touch, video and far-field voice technology for smart home devices and automobiles. Synaptics sells its products to original equipment manufacturers (OEMs) and display manufacturers. Synaptics invented the computer touchpad, the click wheel on the classic iPod, Android phones' touch sensors, touch and display driver integrated chips (TDDI), and fingerprint sensors. History 1986–1998 Federico Faggin and Carver Mead founded Synaptics in 1986. They used their research on neural networks and transistors on chips to build pattern recognition products. In 1991, Synaptics patented a refined "winner take all" circuit for teaching neural networks how to recognize patterns and images. The circuit uses basic physics principles in order to select the strongest signal from the different processors. In 1992, the company used the pattern recognition techniques it developed to build the world's first touchpad for laptop computers that allowed users to control the cursor and click with no additional mechanical buttons. The pad was a replacement for trackballs and mice used at the time. By 1994, Twinhead and Epson America had adopted Synaptics' touchpad for their computers (Epson with the ActionNote), followed by Apple in 1995 and later by other computer manufacturers, including Compaq and Dell. 1999–2010 In 1999, Francis Lee took over as CEO. The company had an initial public offering in 2002. As adoption of the touchpad grew, Synaptics sought to integrate the technology with other products. In 2004, Apple debuted the iPod Mini and fourth-generation iPod, both featuring a scrolling click wheel that used Synaptics' capacitive touch technology. Synaptics also provided a similar but vertical click wheel for the Creative Zen Touch portable media player.
In 2005, Synaptics sensors were featured in the Samsung B310, the first mobile phone to use capacitive-touch technology. In October 2006, Synaptics provided a live demonstration of the Onyx, a concept smartphone with a color touchscreen enabled by its ClearPad touch controller technology. The Onyx's touch sensor could tell the difference between a finger and a cheek, preventing accidental inputs during calls. The company's touch technology was used in LG's Prada phone in 2007, which was the world's first mobile phone with a capacitive touchscreen. In 2009, Synaptics announced the development of the Fuse concept smartphone. It had touch sensitivity on the back of the phone, the ability to interact with the phone by squeezing, animated icons, a user interface sensitive to the phone's orientation and tilt, and haptic gestures. 2011–present In 2011, the company appointed Rick Bergman to succeed Francis Lee as CEO. In 2012, Synaptics introduced the first pressure recognizing touc
https://en.wikipedia.org/wiki/Skithouse
Skithouse (styled skitHOUSE) was an Australian sketch comedy television series that ran on Network Ten from 9 February 2003 to 28 July 2004. The series was produced by Roving Enterprises. It featured many well-known Australian comedians, including the comedy band Tripod. Reruns can now be seen on The Comedy Channel on Foxtel. In the UK, it is shown on the channels Paramount Comedy 2 and Trouble. The title itself is a pun on the colloquialism "shithouse". The series only ran for two seasons, before being cancelled due to a combination of dwindling ratings and the withdrawal of the cable network Foxtel as co-financier of the program's production. Cast Skithouse was produced by Roving Enterprises, a production company formed by Rove McManus. Two key performers were Rove's Rove Live co-hosts Peter Helliar and Corinne Grant. The show also featured Cal Wilson, Scott Brennan, Fiona Harris, Damian Callinan, Roz Hammond, Michael Chamberlin, Ingrid Bloom, Tom Gleeson, Jason Geary and Ben Anderson. Members of the comedic band Tripod also featured, not just as the band but in the actual skits as well. Tripod are Scod (Scott Edgar), Yon (Simon Hall) and Gatesy (Steven Gates). The director was Full Frontal alumna Daina Reid. Since the cancellation of the series, a number of the stars have moved on to other areas of the comedy industry. Scott Brennan and Fiona Harris starred in Comedy Inc. (before that show's end), Damian Callinan and Cal Wilson stayed on Network Ten in The Wedge, and Roz Hammond and Ben Anderson joined the ensemble cast of Thank God You're Here. Tom Gleeson has gone on to host the popular ABC TV quiz show Hard Quiz since October 2016. The show The show consisted of numerous comedic skits. The half-hour shows themselves often seemed to have themes (or at least they repeated the use of sets, costumes, characters and props).
Its comedic styling was reminiscent of many classic Australian sketch comedies, like Full Frontal and Fast Forward, sharing common elements such as self-deprecating humour and low-cost props and effects. Notable characters and sketches Many characters recurred throughout the series, often appearing several times in a single episode, creating a semi-coherent storyline. Some of the more notable recurring characters and scenarios are listed below. The Australian Fast Bowler (Gleeson) A cricket fast bowler, loosely resembling Dennis Lillee, who uses his bowling skills to help people or defend against evil, superhero style. Indeed, he has his own sidekick and nemesis (the latter being Callinan portraying The English Batsman). For instance, a choking man would be helped with a ball bowled at his back. There was also a variation of The Australian Fast Bowler, a 12-year-old boy called the Schoolyard Fast Bowler; one episode also featured the Australian Lawn Bowler, seemingly the Australian Fast Bowler many years later (a reference to the common perception of lawn bowling as an "old peop
https://en.wikipedia.org/wiki/Data%20migration
Data migration is the process of selecting, preparing, extracting, and transforming data and permanently transferring it from one computer storage system to another. Additionally, the validation of migrated data for completeness and the decommissioning of legacy data storage are considered part of the entire data migration process. Data migration is a key consideration for any system implementation, upgrade, or consolidation, and it is typically performed in such a way as to be as automated as possible, freeing up human resources from tedious tasks. Data migration occurs for a variety of reasons, including server or storage equipment replacements, maintenance or upgrades, application migration, website consolidation, disaster recovery, and data center relocation. The standard phases According to one industry estimate, "nearly 40 percent of data migration projects were over time, over budget, or failed entirely." Thus, proper planning is critical for an effective data migration. While the specifics of a data migration plan may vary—sometimes significantly—from project to project, IBM suggests there are three main phases to most any data migration project: planning, migration, and post-migration. Each of those phases has its own steps. During planning, dependencies and requirements are analyzed, migration scenarios get developed and tested, and a project plan that incorporates the prior information is created. During the migration phase, the plan is enacted, and during post-migration, the completeness and thoroughness of the migration is validated, documented, and closed out, including any necessary decommissioning of legacy systems. For applications of moderate to high complexity, these data migration phases may be repeated several times before the new system is considered to be fully validated and deployed. Planning: The data and applications to be migrated are selected based on business, project, and technical requirements and dependencies. Hardware and bandwidth requirements are analyzed.
Feasible migration and back-out scenarios are developed, as well as the associated tests, automation scripts, mappings, and procedures. Data cleansing and transformation requirements are also gauged for data formats to improve data quality and to eliminate redundant or obsolete information. Migration architecture is decided on and developed, any necessary software licenses are obtained, and change management processes are started. Migration: Hardware and software requirements are validated, and migration procedures are customized as needed. Some sort of pre-validation testing may also occur to ensure requirements and customized settings function as expected. If all is deemed well, migration begins, including the primary acts of data extraction, where data is read from the old system, and data loading, where data is written to the new system. Additional verification steps ensure the developed migration plan was enacted in full. Post-migration: After data migration, results are subjected to
https://en.wikipedia.org/wiki/Secure%20Electronic%20Transaction
Secure Electronic Transaction (SET) is a communications protocol standard for securing credit card transactions over networks, specifically, the Internet. SET was not itself a payment system, but rather a set of security protocols and formats that enabled users to employ the existing credit card payment infrastructure on an open network in a secure fashion. However, it failed to gain traction in the market. Visa now promotes the 3-D Secure scheme. Secure Electronic Transaction (SET) is a system for ensuring the security of financial transactions on the Internet. It was supported initially by Mastercard, Visa, Microsoft, Netscape, and others. With SET, a user is given an electronic wallet (digital certificate) and a transaction is conducted and verified using a combination of digital certificates and digital signatures among the purchaser, a merchant, and the purchaser's bank in a way that ensures privacy and confidentiality. History and development SET was developed by the SET Consortium, established in 1996 by Visa and Mastercard in cooperation with GTE, IBM, Microsoft, Netscape, SAIC, Terisa Systems, RSA, and VeriSign. The consortium's goal was to combine the card associations' similar but incompatible protocols (STT from Visa/Microsoft and SEPP from Mastercard/IBM) into a single standard. SET allowed parties to identify themselves to each other and exchange information securely. Binding of identities was based on X.509 certificates with several extensions. SET used a cryptographic blinding algorithm that, in effect, would have let merchants substitute a certificate for a user's credit card number. If SET were used, the merchant itself would never have had to know the credit-card numbers being sent from the buyer, which would have provided verified good payment but protected customers and credit companies from fraud. SET was intended to become the de facto standard payment method on the Internet between the merchants, the buyers, and the credit-card companies.
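The property that the merchant verifies payment without seeing the card details rests on SET's dual signature construction: the cardholder signs a hash of the concatenated hashes of the order information (OI) and payment information (PI). A minimal sketch of the idea follows; the message contents are invented for illustration, and real SET used RSA signatures over this digest rather than the bare hash shown here.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Illustrative order and payment information (OI goes to the merchant,
# PI goes to the bank via the payment gateway).
order_info = b"3 widgets, ship to 1 Main St"
payment_info = b"card account token, amount 30.00"

# The cardholder signs H(H(PI) || H(OI)); dual_digest stands in for the
# value that would be signed with the cardholder's private key.
pi_digest = sha256(payment_info)
oi_digest = sha256(order_info)
dual_digest = sha256(pi_digest + oi_digest)

# The merchant receives OI and pi_digest only, yet can recompute the
# signed digest and verify the signature without ever seeing PI.
merchant_view = sha256(pi_digest + sha256(order_info))
print(merchant_view == dual_digest)  # → True
```

Symmetrically, the bank receives PI and oi_digest and can verify the same signature without seeing the order details, which is how SET kept each party's data private from the other.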
The implementation by each of the primary stakeholders was either expensive or cumbersome. There were also some external factors that may have complicated how the consumer element would be integrated into the browser. A rumor circa 1994–1995 suggested that Microsoft sought an income stream of 0.25% from every transaction secured by the SET-compliant components it would integrate into its Internet browser.

Key features
To meet the business requirements, SET incorporates the following features:
Confidentiality of information
Integrity of data
Cardholder account authentication
Merchant authentication

Participants
A SET system includes the following participants:
Cardholder
Merchant
Issuer
Acquirer
Payment gateway
Certification authority

How it works
Both cardholders and merchants must register with the CA (certificate authority) first, before they can buy or sell on the Internet. Once registration is done, cardholder
https://en.wikipedia.org/wiki/Memory%20scrubbing
Memory scrubbing consists of reading from each computer memory location, correcting bit errors (if any) with an error-correcting code (ECC), and writing the corrected data back to the same location. Due to the high integration density of modern computer memory chips, the individual memory cell structures became small enough to be vulnerable to cosmic rays and/or alpha particle emission. The errors caused by these phenomena are called soft errors. Over 8% of DIMM modules experience at least one correctable error per year. This can be a problem for DRAM and SRAM based memories. The probability of a soft error at any individual memory bit is very small. However, together with the large amount of memory modern computers, especially servers, are equipped with, and together with extended periods of uptime, the probability of soft errors in the total memory installed is significant. The information in an ECC memory is stored redundantly enough to correct a single-bit error per memory word. Hence, an ECC memory can support the scrubbing of the memory content. Namely, if the memory controller scans systematically through the memory, the single-bit errors can be detected, the erroneous bit can be determined using the ECC checksum, and the corrected data can be written back to the memory. Overview It is important to check each memory location periodically and frequently enough that multiple-bit errors within the same word are unlikely to accumulate, because single-bit errors can be corrected but multiple-bit errors within the same word are not correctable, in the case of usual (as of 2008) ECC memory modules. In order not to disturb regular memory requests from the CPU and thus prevent decreasing performance, scrubbing is usually only done during idle periods. As the scrubbing consists of normal read and write operations, it may increase power consumption for the memory compared to non-scrubbing operation. Therefore, scrubbing is not performed continuously but periodically.
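The scrub cycle (read a word, compute the ECC syndrome, correct any single-bit error, write the corrected word back) can be sketched with a Hamming(7,4) code, which, like the per-word ECC described above, corrects any single-bit error. The memory contents here are invented, and real ECC DIMMs use wider SECDED codes, but the read-check-correct-writeback loop is the same idea:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def scrub_word(cw):
    """Read a codeword, correct a single-bit error in place; return the
    1-based error position indicated by the syndrome (0 = word was clean)."""
    s1 = cw[0] ^ cw[2] ^ cw[4] ^ cw[6]
    s2 = cw[1] ^ cw[2] ^ cw[5] ^ cw[6]
    s3 = cw[3] ^ cw[4] ^ cw[5] ^ cw[6]
    pos = s1 + 2 * s2 + 4 * s3          # syndrome = position of the flipped bit
    if pos:
        cw[pos - 1] ^= 1                # correct the bit and "write back"
    return pos

memory = [hamming74_encode([1, 0, 1, 1]), hamming74_encode([0, 1, 1, 0])]
memory[0][4] ^= 1                       # a soft error flips one bit of one word
for word in memory:                     # periodic scrub pass over all of memory
    scrub_word(word)
print(memory[0] == hamming74_encode([1, 0, 1, 1]))  # → True
```

If two bits of the same word had flipped between scrub passes, the syndrome would point at the wrong position, which is exactly why the scrub period must be short relative to the expected soft-error rate.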
For many servers, the scrub period can be configured in the BIOS setup program. The normal memory reads issued by the CPU or DMA devices are checked for ECC errors, but, for data-locality reasons, they can be confined to a small range of addresses, leaving other memory locations untouched for a very long time. These locations can become vulnerable to more than one soft error, while scrubbing ensures the checking of the whole memory within a guaranteed time. On some systems, not only the main memory (DRAM-based) is capable of scrubbing but also the CPU caches (SRAM-based). On most systems the scrubbing rates for both can be set independently. Because cache is much smaller than the main memory, the scrubbing for caches does not need to happen as frequently. Memory scrubbing increases reliability, so it can be classified as a RAS feature. Variants There are usually two variants, known as patrol scrubbing and demand scrubbing. While they both essentially perform memory scrubbing and associ
https://en.wikipedia.org/wiki/WATFIV
WATFIV, or WATerloo FORTRAN IV, developed at the University of Waterloo, Canada, is an implementation of the Fortran computer programming language. It is the successor of WATFOR. WATFIV was used from the late 1960s into the mid-1980s. WATFIV was in turn succeeded by later versions of WATFOR. Because it could complete the three usual steps ("compile-link-go") in just one pass, the system became popular for teaching students computer programming. History In the early 1960s, newly formed computer science departments started university programs to teach computer programming languages. The Fortran language had been developed at IBM, but suffered from a slow and error-prone three-stage batch-processing workflow. In the first stage, the compiler started with source code and produced object code. In the second stage, a linker constructed a complete program using growing libraries of common functions. Finally, the program was repeatedly executed with data for the typical scientific and business problems of customers. Each step often included a new set of punched cards or tape. Students, on the other hand, had very different requirements. Their programs were generally short, but usually contained logic and syntax errors, resulting in time-consuming repetition of the steps and confusing "core dumps" (it often took a full day to submit a job and receive the successful or failed output from the computer operator). Once their programs worked correctly, they were turned in and not run again. In 1961, the University of Wisconsin developed a technology called FORGO for the IBM 1620 which combined some of the steps. Similar experiments were carried out at Purdue University on the IBM 7090 in a system called PUFFT. WATFOR 7040 In summer 1965, four undergraduate students of the University of Waterloo, Gus German, James G. Mitchell, Richard Shirley and Robert Zarnke, led by Peter Shantz, developed a Fortran compiler for the IBM 7040 computer called WATFOR.
Its objectives were fast compilation speed and effective error diagnostics at both compile and execution time. It eliminates the need for a separate linking step and, as a result, FORTRAN programs which contain no syntax errors are placed into immediate execution. Professor J. Wesley Graham provided leadership throughout the project. This simple, one-step process allowed inexperienced programmers to learn programming at a lower cost in time and computing resources. To aid in debugging, the compiler uses an innovative approach to checking for undefined variables (an extremely common mistake by novice programmers). It uses a diagnostic feature of the 7040 that can deliberately set areas of memory to bad parity. When a program tries to reference a variable that hasn't been set, the machine takes an interrupt (handled by the WATFOR runtime routines) and the error is reported to the user as an undefined variable. This has the pleasant side effect of checking for undefined variables with essentially no CPU overhead.
https://en.wikipedia.org/wiki/John%20Chambers
John Chambers may refer to:

Academics
John Chambers (scientist), one of two scientists who formulated the Planet V theory in 2002
John Chambers (statistician), creator of the S programming language and core member of the R programming language project
John Chambers (topographer) (1780–1839), English antiquarian

Artists
John Chambers (artist) (1852–1928), British landscape, seascape and portrait painter
John Chambers (make-up artist) (1922–2001), American make-up artist, won a special Oscar for his work on Planet of the Apes

Businessmen
John Chambers (Australian pastoralist) (c. 1815–1889), Australian pioneer, brother of James Chambers (pastoralist)
John Chambers (pastoralist) (1819–1893), New Zealand pastoralist, community leader and businessman
John Chambers (businessman) (c. 1839–1903), New Zealand businessman
John T. Chambers (born 1949), American businessman and former CEO of Cisco Systems

Clergy
John Chambre (1470–1549), also Chambers, English churchman, academic and physician
John Chambers (bishop) (died 1556), last abbot of Peterborough abbey and, after the dissolution, the first bishop of Peterborough
John David Chambers (1805–1893), English legal and liturgical writer

Politicians
John Chambers (politician) (1780–1852), American politician, governor of Iowa Territory in 1841–1845
John G. Chambers (Delaware politician), member of the 67th Delaware General Assembly
John Green Chambers (1798–1884), American politician from Texas
John M. Chambers (politician) (1845–1916), Irish-American businessman and politician from New York
John Chambers Hughes (1891–1971), United States diplomat
John Thomas Chambers Jr. (1928–2011), American politician from Maryland

Sportsmen
John Chambers (Australian cricketer) (1930–2017), Australian cricketer
John Chambers (English cricketer) (born 1971), English cricketer
John Chambers (footballer) (born 1949), English footballer
Johnnie Chambers (1911–1977), American baseball player
John Graham Chambers (1843–1883), codified the "Marquess of Queensberry rules", upon which modern day boxing is based

Others
John B. Chambers (born 1956), evaluator of sovereign debt for Standard & Poor's
John Chambers (writer), American television soap opera writer

Fictional characters
Johnny Quick, a DC Comics character whose real name was Johnny Chambers
Joanna "Johnny" Chambers, one of the two main characters in the 2020 novel Beneath the Rising

See also
Jack Chambers (disambiguation)
John Chamber (disambiguation)
https://en.wikipedia.org/wiki/IStumbler
iStumbler is a utility for finding wireless networks and devices with AirPort- or Bluetooth-enabled Macintosh computers. iStumbler was originally based on MacStumbler source code. Its early development focused on detection of open wireless (802.11) networks, but more recent versions support the detection of Bluetooth wireless devices and Bonjour network services. Up to release 99, iStumbler was open-source under a BSD license.

See also
KisMAC – a wireless network discovery tool for Mac OS X
WiFi Explorer – a wireless network scanner for Mac OS X
Netspot – a Mac OS X tool for wireless network assessment, scanning and surveys
https://en.wikipedia.org/wiki/Link%20contract
A link contract is an approach to data control in a distributed data sharing network. Link contracts are a key feature of the XDI specifications under development at OASIS. In XDI, a link contract is a machine-readable XDI document that governs the sharing of other XDI data. Unlike a conventional Web link, which is essentially a one-dimensional "string" that "pulls" a linked document into a browser, a link contract is a graph of metadata (typically in JSON) that can actively control the flow of data from a publisher to a subscriber by either "push" or "pull". The flow is controlled by the terms of the contract, which can be as flexible and extensible as real-world contracts, i.e., link contracts can govern:

Identification: Who are the parties to the contract?
Authority: Who controls the data being shared via the contract?
Authentication: How will each party prove its identity to the other?
Authorization: Who has what access rights and privileges to the data?
Scope: What data does it cover?
Permission and Privacy: What uses can be made of the data and by whom?
Synchronization: How and when will the subscriber receive updates to the data?
Termination: What happens when the data sharing relationship is ended?
Recourse: How will any disputes over the contract be resolved?

Like real-world contracts, link contracts can also refer to other link contracts. Using this design, the vast majority of link contracts can be very simple, referring to a very small number of more complex link contracts that have been carefully designed to reflect the requirements of common data exchange scenarios (e.g., business cards, mailing lists, e-commerce transactions, website registrations, etc.). Link contracts have been proposed as a key element of digital trust frameworks such as those published by the non-profit Open Identity Exchange.

See also
XDI
Social Web
Creative Commons

External links
OASIS XDI Technical Committee
XDI.org
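The contract terms listed above can be pictured as a small JSON graph. The sketch below is illustrative only: the field names and party identifiers are invented, not the actual XDI link-contract vocabulary, which is defined by the OASIS XDI specifications.

```python
# A hypothetical link contract as plain JSON-style data, with a helper that
# evaluates the Scope and Authorization terms. All names are invented.
contract = {
    "parties": {"publisher": "=alice", "subscriber": "=bob"},
    "scope": ["+email", "+phone"],                  # what data it covers
    "permissions": {"=bob": ["read"]},              # who may do what
    "synchronization": {"mode": "push", "on": "change"},
    "termination": {"revocable_by": "=alice"},
}

def permits(contract, party, field, action):
    """Check whether a party may perform an action on a field under the contract."""
    return field in contract["scope"] and action in contract["permissions"].get(party, [])

print(permits(contract, "=bob", "+email", "read"))   # → True
print(permits(contract, "=bob", "+email", "write"))  # → False
```

The point of the design is that such a machine-readable structure, unlike a plain hyperlink, can be evaluated automatically before any data flows from publisher to subscriber.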
https://en.wikipedia.org/wiki/Newbridge%20Networks
Newbridge Networks was an Ottawa, Ontario, Canada company founded by Welsh-Canadian entrepreneur Sir Terry Matthews. It was founded in 1986 to create data and voice networking products after Matthews was forced out of his original company Mitel. According to Matthews, he saw that data networking would grow far faster than voice networking, and he had wanted to take Mitel in much the same direction, but the 'risk-averse' British Telecom-dominated Mitel board refused and effectively ousted him. The name Newbridge Networks comes from Sir Terry Matthews' home town of Newbridge in south Wales. Newbridge quickly became a major market player in this area using the voice switching and software engineering expertise that was prevalent in the Ottawa area. The company initially had innovative channelbank products, which allowed telcos with existing wiring to offer a wide variety of new services. Newbridge also offered (for the time) the industry's most innovative network management (46020 NMS) and ISDN TAP 3500, RS-232C 3600 Mainstreet PC 4601A and ACC "river" routers (Congo/Amazon/Tigris etc.), including distributed star-topology routing through both proprietary software over the unused facilities' data links in optical hardware and telco switches. Starting in 1992 the company became increasingly focused upon and well known for its family of ATM products such as the MainStreet 36150 and MainStreet Xpress 36170 (later renamed Alcatel 7470). Newbridge later absorbed some routing technology that had been abandoned by Tandem Computers, with the purchase of Ungermann-Bass and entered the pure data networking market with a traditional routing and switching product. This was in addition to its internally developed ViViD product line, which was a network-wide distributed routing product (ridge or routing bridge—bridges with a centralized routing server). 
Newbridge had 30% ownership of affiliate West End Systems Corp., which was headquartered in Kanata, Ontario, with R&D facilities in Arnprior, Ontario. Newbridge was purchased and absorbed by Alcatel in late February 2000 for over $7 billion in stock. With this transaction, Matthews became the single largest shareholder in Alcatel. References Canadian companies established in 1986 Defunct networking companies Telecommunications companies of Canada Telecommunications equipment vendors Companies based in Ottawa Wesley Clover Canadian companies disestablished in 2000 2000 disestablishments in Ontario Defunct computer companies of Canada
https://en.wikipedia.org/wiki/Gandalf%20Technologies
Gandalf Technologies, Inc., or simply Gandalf, was a Canadian data communications company based in Ottawa. It was best known for modems and terminal adapters that allowed computer terminals to connect to host computers through a single interface. Gandalf also pioneered a radio-based mobile data terminal that was popular for many years in taxi dispatch systems. The rapid rise of TCP/IP relegated many of Gandalf's products to niche status, and the company went bankrupt in 1997; its assets were acquired by Mitel. History Gandalf was founded by Desmond Cunningham and Colin Patterson in 1971, and started business from the lobby of the Skyline Hotel, which is now the Crowne Plaza Hotel, on Albert Street in Ottawa. The company's first products were industrial-looking half-bridges for remote terminals which were supported by large terminal multiplexers on the "computer end". Gandalf referred to these systems as a "PACX", in analogy to the telephony PABX which provided similar services in the voice field. These systems allowed the user to "dial up" the Gandalf box and then instruct it what computer they wanted to connect to. In this fashion, large computer networks could be built in a single location using shared resources, as opposed to having to dedicate terminals to different machines. These systems were particularly popular in large companies and universities. Gandalf supplanted these systems with "true" modems, both for host-to-host use and for remote workers. Unlike most modems, Gandalf's devices were custom systems intended to connect only to another Gandalf modem, and were designed to extract the maximum performance possible. Gandalf sold a number of different designs intended to be used with different line lengths and qualities, from 4-wire modems running at 9600 bit/s over "short" distances (bumped to 19,200 bit/s in later models), to 2400 bit/s models for 2-wire runs over longer distances. 
On the host end, modem blocks could be attached to the same PACX multiplexers, making local and remote access largely identical. With the introduction of low-cost high-speed modems in the early 1990s, Gandalf increasingly became irrelevant. Even its highest-speed solutions were soon being outperformed by standardized systems like V.32bis. Low-cost terminal adapters based on RADIUS (and similar) technologies connecting to Ethernet further eroded its core businesses, offering features similar to PACX switches. Gandalf's solutions were decidedly "low tech"; users chose which computer they wanted to connect to by selecting a two-digit number on the front of the modem, often requiring a "phone book" if more than one host computer was being used. In comparison, the more modern terminal adapters generally included a command line interface that allowed the user to select a host from a directory that appeared on their terminal. Introductions of Ethernet concentrators and ISDN-based versions of earlier host adapters did little to fix the problem, never becoming ver
https://en.wikipedia.org/wiki/PACX
PACX (Private Automatic Computer eXchange) was a name given by Gandalf Technologies to their family of data switching products. Architecture The PACX was a centralized switch that allowed serial connections from end users to be connected to any one of a number of computers, typically mainframes. Users were equipped with small boxes with two thumbwheels on them, and by rolling the wheels to a given two-digit number they could select among the machines connected to the PACX. In typical setups, the PACX would be connected to computer terminals or modems. The switch box would be queried on connection for the user's selection, and then a direct connection would be made between the user and that machine. The system was not unlike the telephone network, and the name PACX was deliberately chosen to suggest a computer-side analog of the PABX market. Background The PACX, with its mainframe utility, was part of an era in which enterprise data services were seen as the province of an Office Controller. The Office Controller was envisaged as a central switch that would interconnect all applications and make them available to users. PABX manufacturers of that era (the 1980s) created suites of data applications for the connection of users to mainframe applications. There were even magazine articles touting the victory of the PABX as an Office Controller over its LAN rivals. The PACX was a data PABX without the voice capability. With the development of LANs and cheap PCs with their attendant client/server applications, the Office Controller vision faded away. However, the idea is not without merit: it is being re-established in new guises with the development of SIP with Session Border Controllers and Service Oriented Architectures. External links Columbia University computer history page on PACX Networking hardware
https://en.wikipedia.org/wiki/Data%20diddling
Data diddling is a type of cybercrime in which data is altered as it is entered into a computer system, most often by a data entry clerk or a computer virus. Computerized processing of the altered data results in a fraudulent benefit. In some cases, the altered data is changed back after processing to conceal the activity. The consequences can range from small ones, such as financial figures adjusted marginally up or down, to severe ones, such as an entire system being rendered unusable. References Cybercrime
https://en.wikipedia.org/wiki/Mark%20of%20the%20Unicorn
Mark of the Unicorn (MOTU) is a music-related computer software and hardware supplier. It is based in Cambridge, Massachusetts and has created music software since 1984. In the mid-1980s, Mark of the Unicorn sold productivity software and several games for the Macintosh, Atari ST, and Amiga. Products Current: Digital Performer; AudioDesk. Past: MINCE and SCRIBBLE, an Emacs-like editor and Scribe-like text formatter for CP/M machines (MINCE was also available for the Atari ST); FinalWord, a word processor (later sold, becoming Sprint); Professional Composer, one of the first graphical music-notation editors; Mouse Stampede, arguably the first arcade-style game available for the Apple Macintosh (1984); Hex, a game for the Atari ST and Amiga computers (released in 1985); the first FireWire audio interface for Mac and Windows; PC/Intercomm, a VT100 emulator for the Atari ST. References External links Computer companies of the United States Companies based in Cambridge, Massachusetts
https://en.wikipedia.org/wiki/Chunking%20%28computing%29
In computer programming, chunking has multiple meanings. In memory management Typical modern software systems allocate memory dynamically from structures known as heaps. Calls are made to heap-management routines to allocate and free memory. Heap management involves some computation time and can be a performance issue. Chunking refers to strategies for improving performance by using special knowledge of a situation to aggregate related memory-allocation requests. For example, if it is known that a certain kind of object will typically be required in groups of eight, then instead of allocating and freeing each object individually (sixteen calls to the heap manager), one could allocate and free an array of eight objects, reducing the number of calls to two. In HTTP message transmission Chunking is a specific feature of the HTTP 1.1 protocol. Here, the meaning is the opposite of that used in memory management: it refers to a facility that allows inconveniently large messages to be broken into conveniently sized smaller "chunks". In data deduplication, data synchronization and remote data compression In data deduplication, data synchronization and remote data compression, chunking is the process of splitting a file into smaller pieces, called chunks, with a chunking algorithm. It can help to eliminate duplicate copies of repeating data in storage, and it reduces the amount of data sent over a network, since only changed chunks need to be transferred. Content-defined chunking (CDC) algorithms, such as those based on a rolling hash, have been the most popular data-deduplication algorithms for the last 15 years. See also Chunk (information) References Memory management
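The content-defined chunking idea can be sketched in a few lines: a rolling hash is computed over a sliding window of bytes, and a chunk boundary is declared wherever the hash matches a mask. The window size, mask, size limits, and hash parameters below are illustrative choices, not values from any particular deduplication system.

```python
def cdc_chunks(data: bytes, window: int = 16, mask: int = 0x1FF,
               min_size: int = 64, max_size: int = 4096):
    """Split `data` into chunks whose boundaries depend on content,
    not on fixed offsets (a toy content-defined chunking sketch)."""
    BASE, MOD = 257, (1 << 31) - 1
    pow_w = pow(BASE, window - 1, MOD)  # coefficient of the byte leaving the window
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        if i >= window:
            # Slide the window: remove the oldest byte, then shift in the new one.
            h = (h - data[i - window] * pow_w) % MOD
        h = (h * BASE + byte) % MOD
        size = i - start + 1
        # Boundary where the hash matches the mask (subject to a minimum
        # chunk size), or unconditionally at the maximum chunk size.
        if ((h & mask) == 0 and size >= min_size) or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Because boundaries depend on the bytes themselves rather than on fixed offsets, inserting data near the start of a file only changes the chunks around the insertion point; the unmodified chunks downstream keep the same content and can still be deduplicated.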
https://en.wikipedia.org/wiki/Software%20patent%20debate
The software patent debate is the argument about the extent to which, as a matter of public policy, it should be possible to patent software and computer-implemented inventions. Policy debate on software patents has been active for years. Opponents of software patents have gained more visibility with fewer resources through the years than their pro-patent counterparts. Arguments and critiques have focused mostly on the economic consequences of software patents. One aspect of the debate has focused on the proposed European Union directive on the patentability of computer-implemented inventions, also known as the "CII Directive" or the "Software Patent Directive," which was ultimately rejected by the EU Parliament in July 2005. Arguments for patentability There are several arguments commonly given in defense of software patents or in defense of the patentability of computer-implemented inventions. Public disclosure Through public disclosure, patents encourage the open sharing of information and additional transparency about legal exposure. Through public disclosure, patents encourage the transfer of mechanical technology, which may apply more broadly. Economic benefit Software patents resulting from the production of patentable ideas can increase the valuation of small companies. Software patents increase the return on investments made, including government-funded research. Encouragement of innovation The ability to patent new software developed as a result of research encourages investment in software-related research by increasing the potential return on investment of said research. Copyright limitations Patents protect functionality. Copyright, on the other hand, only protects expression. Substantial modification to an original work, even if it performs the same function, would not be prevented by copyright. Proving copyright infringement also requires the additional hurdle of proving copying, which is not necessary for patent infringement.
Copyright law protects unique expressions, while patent law protects inventions, which in the case of software, are algorithms; copyright cannot protect a novel means of accomplishing a function, merely the syntax of one such means. This means that patents incentivize projects that are unique and innovative in functionality rather than simply form. Copyrights, in turn, only incentivize uniqueness in form. Protection for small companies Software patents can afford smaller companies market protection by preventing larger companies from stealing work done by a smaller organization, leveraging their greater resources to go to market before the smaller company can. Hardware patents analogy Hardware and software are sometimes interchangeable. If people can patent hardware, then ideas describing software implemented by that hardware should also be patentable. Arguments against patentability Opponents of software patents argue that: Software is math A program is the transcription of an algor
https://en.wikipedia.org/wiki/Software%20patents%20under%20United%20States%20patent%20law
Neither software nor computer programs are explicitly mentioned in statutory United States patent law. Patent law has changed to address new technologies, and decisions of the United States Supreme Court and United States Court of Appeals for the Federal Circuit (CAFC) beginning in the latter part of the 20th century have sought to clarify the boundary between patent-eligible and patent-ineligible subject matter for a number of new technologies including computers and software. The first computer software case in the Supreme Court was Gottschalk v. Benson in 1972. Since then, the Supreme Court has decided about a half dozen cases touching on the patent eligibility of software-related inventions. The eligibility of software, as such, for patent protection has been only scantily addressed in the courts or in legislation. In fact, in the recent Supreme Court decision in Alice v. CLS Bank, the Court painstakingly avoided the issue, and one Justice in the oral argument repeatedly insisted that it was unnecessary to reach the issue. The expression "software patent" itself has not been clearly defined. The United States Patent and Trademark Office (USPTO) has permitted patents to be issued on nothing more than a series of software computer instructions, but the latest Federal Circuit decision on the subject invalidated such a patent. The court held that software instructions as such were too intangible to fit within any of the statutory categories such as machines or articles of manufacture. On June 19, 2014 the United States Supreme Court ruled in Alice Corp. v. CLS Bank International that "merely requiring generic computer implementation fails to transform [an] abstract idea into a patent-eligible invention." The ruling continued: [...] the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention. 
Stating an abstract idea "while adding the words 'apply it'" is not enough for patent eligibility. Nor is limiting the use of an abstract idea "'to a particular technological environment.'" Stating an abstract idea while adding the words "apply it with a computer" simply combines those two steps, with the same deficient result. Thus, if a patent's recitation of a computer amounts to a mere instruction to "implemen[t]" an abstract idea "on . . . a computer," that addition cannot impart patent eligibility. Law Constitution Article 1, section 8 of the United States Constitution establishes that the purpose of intellectual property is to serve a broader societal good, the promotion of "the Progress of Science and the useful Arts": Article 1, section 8 United States Constitution: Congress shall have Power [. . .] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries; . . . Statute Section 101 of title 35, United States Code, provides: Whoever invents or discove
https://en.wikipedia.org/wiki/Disk%20cache
Disk cache may refer to: Disk buffer, the small amount of RAM embedded on a hard disk drive, used to store the data going to and coming from the disk platters Page cache, the cache of data residing on a storage device, kept by the operating system and stored in unused main memory General application-level caching of the data residing on a storage device
https://en.wikipedia.org/wiki/Computer%20case
A computer case, also known as a computer chassis, is the enclosure that contains most of the hardware of a personal computer. The components housed inside the case (such as the CPU, motherboard, memory, mass storage devices, power supply unit and various expansion cards) are referred to as the internal hardware, while hardware outside the case (typically cable-linked or plug-and-play devices such as the display, speakers, keyboard, mouse and USB flash drives) are known as peripherals. Conventional computer cases are fully enclosed, with small holes (mostly in the back panel) that allow ventilation and cutout openings that provide access to plugs/sockets (back) and removable media drive bays (front). The structural frame (chassis) of a case is usually constructed from rigid metals such as steel (often SECC — steel, electrogalvanized, cold-rolled, coil) and aluminium alloy, with hardpoints and through holes for mounting internal hardware, case fans/coolers and for organizing cable management. The external case panels, at least one of which is removable, cover the chassis from the front, sides and top to shield the internal components from physical intrusion and dust collection, and are typically made from painted metallic and/or plastic material, while other materials such as mesh, tempered glass, acrylic, wood and even Lego bricks have appeared in many modern commercial or home-built cases. In recent years, open frame or open air cases that are only partly enclosed (with freer ventilation and thus theoretically better cooling) have become available in the premium gaming PC market. Sizes and terminology Cases can come in many different sizes and shapes, which are usually determined by the form factor of the motherboard since it is physically the largest hardware component in most computers. Consequently, personal computer form factors typically specify only the internal dimensions and layout of the case.
Form factors for rack-mounted and blade servers may include precise external dimensions as well since these cases must themselves fit in specific enclosures. For example, a case designed for an ATX motherboard and power supply unit (PSU) may take on several external forms such as a vertical tower (designed to sit on the floor, height > width), a flat desktop (height < width) or pizza box (height ≤ ) designed to sit on the desk under the computer's monitor. Full-size tower cases are typically larger in volume than desktop cases, with more room for drive bays, expansion slots, and custom or all-in-one (AIO) water cooling solutions. Desktop cases—and mini-tower cases under about high—are popular in business environments where space is at a premium. Currently, the most popular form factor for desktop computers is ATX, although microATX and small form factors have also become very popular for a variety of uses. In the high-end segment, the unofficial and loosely defined XL-ATX specification appeared around 2009. It extends the length of the ma
https://en.wikipedia.org/wiki/Stuart%20Feldman
Stuart Feldman is an American computer scientist. He is best known as the creator of the computer software program make. He was also an author of the first Fortran 77 compiler, was part of the original group at Bell Labs that created the Unix operating system, and participated in development of the ALTRAN and EFL programming languages. Feldman is the chief scientist at Schmidt Futures. He was previously a member of the dean's External Advisory Board at the University of Michigan School of Information. He was previously Vice President, Engineering, East Coast, at Google, and before that Vice President of Computer Science at IBM Research. Feldman has served on the board of the Computing Research Association (CRA) and of the Association to Advance Collegiate Schools of Business (AACSB International). He was chair of ACM SIGPLAN and founding chair of ACM SIGecom. He was elected the President of the ACM in 2006. Feldman is also a member of the Editorial Advisory Board of ACM Queue, a magazine he helped found with Steve Bourne. He has also served on the editorial boards of IEEE Internet Computing and IEEE Transactions on Software Engineering. He received an A.B. in astrophysical sciences from Princeton University and a Ph.D. in applied mathematics from the Massachusetts Institute of Technology. In 2010 the University of Waterloo awarded him an Honorary Doctorate of Mathematics. Feldman became a Fellow of the IEEE in 1991, Fellow of the ACM in 1995, and Fellow of the AAAS in 2007. In 2003, he was awarded ACM's Software System Award for his creation of make. 
References External links Photo of Feldman in "External Advisory Board - University of Michigan School of Information" Year of birth missing (living people) Living people American computer scientists Computer systems researchers Fellow Members of the IEEE Fellows of the Association for Computing Machinery Presidents of the Association for Computing Machinery Google employees Unix people Scientists at Bell Labs Multics people
https://en.wikipedia.org/wiki/Internet%20Draft
An Internet Draft (I-D) is a document published by the Internet Engineering Task Force (IETF) containing preliminary technical specifications, results of networking-related research, or other technical information. Often, Internet Drafts are intended to be work-in-progress documents for work that is eventually to be published as a Request for Comments (RFC) and potentially leading to an Internet Standard. It is considered inappropriate to rely on Internet Drafts for reference purposes. I-D citations should indicate that the I-D is a work in progress. An Internet Draft is expected to adhere to the basic requirements imposed on any RFC. An Internet Draft is only valid for six months unless it is replaced by an updated version. An otherwise expired draft remains valid while it is under official review by the Internet Engineering Steering Group (IESG) when a request to publish it as an RFC has been submitted. Expired drafts are replaced with a "tombstone" version and remain available for reference. Naming conventions Internet Drafts produced by IETF working groups follow the naming convention draft-ietf-<wg>-<name>-<version number>.txt. Internet Drafts produced by IRTF research groups follow the naming convention draft-irtf-<rg>-<name>-<version number>.txt. Drafts produced by individuals follow the naming convention draft-<individual>-<name>-<version number>.txt. The IAB, RFC Editor, and other organizations associated with the IETF may also produce Internet Drafts; these follow the naming convention draft-<org>-<name>-<version number>.txt. The initial version number is represented as 00. The second version, i.e. the first revision, is represented as 01, and the number is incremented for each subsequent revision. References External links Internet-Drafts Status of IETF Internet Drafts (IANA) Internet Draft search An archive of expired IDs Internet Standards
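The filename conventions above are regular enough to check mechanically. The pattern below is a sketch that encodes just the forms listed in this section; the draft names in the test are illustrative strings, not references to particular documents.

```python
import re

# Sketch: split an Internet Draft filename into its source (working
# group, research group, individual, or organization), document name,
# and two-digit version, per the conventions described above.
DRAFT_RE = re.compile(
    r"^draft-(?P<source>ietf-[a-z0-9]+|irtf-[a-z0-9]+|[a-z0-9]+)"
    r"-(?P<name>[a-z0-9-]+)-(?P<version>\d{2})(?:\.txt)?$"
)

def parse_draft(filename: str):
    """Return (source, name, version) for a conforming filename, else None."""
    m = DRAFT_RE.match(filename)
    if not m:
        return None
    return m.group("source"), m.group("name"), int(m.group("version"))
```

Note that the conventions are ambiguous on their face (an individual named "ietf" would collide with the working-group prefix); the alternation order here simply prefers the IETF/IRTF prefixes, which matches how the section presents them.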
https://en.wikipedia.org/wiki/Laguerre%27s%20method
In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation p(x) = 0 for a given polynomial p(x). One of the most useful properties of this method is that it is, from extensive empirical study, very close to being a "sure-fire" method, meaning that it is almost guaranteed to always converge to some root of the polynomial, no matter what initial guess is chosen. However, for computer computation, more efficient methods are known, with which it is guaranteed to find all roots, or all real roots (see Real-root isolation). This method is named in honour of Edmond Laguerre, a French mathematician. Definition The algorithm of the Laguerre method to find one root of a polynomial p(x) of degree n is: Choose an initial guess x0. For k = 0, 1, 2, ...: If |p(xk)| is very small, exit the loop. Calculate G = p′(xk)/p(xk). Calculate H = G² − p″(xk)/p(xk). Calculate a = n / (G ± √((n − 1)(nH − G²))), where the sign is chosen to give the denominator with the larger absolute value, to avoid catastrophic cancellation. Set xk+1 = xk − a. Repeat until a is small enough or the maximum number of iterations has been reached. If a root has been found, the corresponding linear factor can be removed from p. This deflation step reduces the degree of the polynomial by one, so that eventually, approximations for all roots of p can be found. Note however that deflation can lead to approximate factors that differ significantly from the corresponding exact factors. This error is least if the roots are found in the order of increasing magnitude. Derivation The fundamental theorem of algebra states that every nth degree polynomial p can be written in the form p(x) = C(x − x1)(x − x2)⋯(x − xn), where x1, x2, ..., xn are the roots of the polynomial.
If we take the natural logarithm of both sides, we find that ln |p(x)| = ln |C| + ln |x − x1| + ln |x − x2| + ⋯ + ln |x − xn|. Denote the derivative by G = p′(x)/p(x) = 1/(x − x1) + 1/(x − x2) + ⋯ + 1/(x − xn), and the negated second derivative by H = (p′(x)/p(x))² − p″(x)/p(x) = 1/(x − x1)² + 1/(x − x2)² + ⋯ + 1/(x − xn)². We then make what Acton calls a 'drastic set of assumptions': that the root we are looking for, say x1, is a certain distance a away from our guess x, and all the other roots are clustered together some distance b away. If we denote these distances by a and b, then our equation for G may be written as G = 1/a + (n − 1)/b, and the expression for H becomes H = 1/a² + (n − 1)/b². Solving these equations for a, we find that a = n / (G ± √((n − 1)(nH − G²))), where the sign in front of the square root is chosen to give the denominator the larger absolute value; equivalently, taking the square root with non-negative real part, the plus sign is used exactly when Re(G̅ · √((n − 1)(nH − G²))) ≥ 0, where Re denotes the real part of a complex number and G̅ is the complex conjugate of G. For small values of a, this formula differs from the offset of the third-order Halley's method by an error term of higher order, so convergence close to a root will be cubic as well. Note that, even if the 'drastic set of assumptions' does not work for some particular polynomial p(x), p can be transformed into a related polynomial for which the assumptions are correct, e.g. by shifting the origin towards a suitable complex number w, giving q(x) = p(x + w), chosen to give distinct roots distinct magnitudes if necessary (whic
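The iteration described above is compact enough to sketch directly. The version below evaluates p, p′ and p″ with a fused Horner scheme and picks the sign that maximizes the denominator; the tolerance, iteration cap, and coefficient convention (highest degree first) are arbitrary choices for illustration, not part of the method itself.

```python
import cmath

def laguerre_root(coeffs, x0=0j, tol=1e-12, max_iter=100):
    """One root of the polynomial with `coeffs` (highest degree first),
    via the Laguerre iteration x_{k+1} = x_k - n/(G +/- sqrt((n-1)(nH - G^2)))."""
    n = len(coeffs) - 1  # degree of the polynomial
    x = complex(x0)
    for _ in range(max_iter):
        # Fused Horner scheme: p, p', p'' evaluated in one pass.
        p, dp, d2p = complex(coeffs[0]), 0j, 0j
        for c in coeffs[1:]:
            d2p = d2p * x + 2 * dp   # uses the previous dp
            dp = dp * x + p          # uses the previous p
            p = p * x + c
        if abs(p) < tol:             # |p(x_k)| very small: accept the root
            return x
        G = dp / p
        H = G * G - d2p / p
        root = cmath.sqrt((n - 1) * (n * H - G * G))
        # Choose the sign giving the larger |denominator| to avoid cancellation.
        denom = G + root if abs(G + root) >= abs(G - root) else G - root
        x -= n / denom
    return x
```

Combined with the deflation step described above, repeated calls (dividing out each linear factor as it is found) can approximate all roots, subject to the accumulating deflation error the article notes.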
https://en.wikipedia.org/wiki/Abstract%20semantic%20graph
In computer science, an abstract semantic graph (ASG) or term graph is a form of abstract syntax in which an expression of a formal or programming language is represented by a graph whose vertices are the expression's subterms. An ASG is at a higher level of abstraction than an abstract syntax tree (or AST), which is used to express the syntactic structure of an expression or program. ASGs are more complex and concise than ASTs because they may contain shared subterms (also known as "common subexpressions"). Abstract semantic graphs are often used as an intermediate representation by compilers to store the results of performing common subexpression elimination upon abstract syntax trees. ASTs are trees and are thus incapable of representing shared terms. ASGs are usually directed acyclic graphs (DAG), although in some applications graphs containing cycles may be permitted. For example, a graph containing a cycle might be used to represent the recursive expressions that are commonly used in functional programming languages as non-looping iteration constructs. The mutability of these types of graphs is studied in the field of graph rewriting. The nomenclature term graph is associated with the field of term graph rewriting, which involves the transformation and processing of expressions by the specification of rewriting rules, whereas abstract semantic graph is used when discussing linguistics, programming languages, type systems and compilation. Abstract syntax trees are not capable of sharing subexpression nodes because it is not possible for a node in a proper tree to have more than one parent. Although this conceptual simplicity is appealing, it may come at the cost of redundant representation and, in turn, possibly inefficiently duplicating the computation of identical terms. For this reason ASGs are often used as an intermediate language at a compilation stage subsequent to abstract syntax tree construction via parsing.
An abstract semantic graph is typically constructed from an abstract syntax tree by a process of enrichment and abstraction. The enrichment can for example be the addition of back-pointers, edges from an identifier node (where a variable is being used) to a node representing the declaration of that variable. The abstraction can entail the removal of details which are relevant only in parsing, not for semantics. Example: Code Refactoring For example, consider the case of code refactoring. To represent the implementation of a function that takes an input argument, the received parameter is conventionally given an arbitrary, distinct name in the source code so that it can be referenced. The abstract representation of this conceptual entity, a "function argument" instance, will likely be mentioned in the function signature, and also one or more times within the implementation code body. Since the function as a whole is the parent of both its header or "signature" information as well as its implementation body, an AST would n
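The sharing that distinguishes an ASG from an AST can be sketched with a small "hash-consing" constructor: every structurally identical subterm is interned to a single node, so building (a + b) * (a + b) yields one shared '+' node rather than two copies. The class and method names here are illustrative, not taken from any particular compiler.

```python
class Node:
    """A vertex in the term graph; children point at shared subterms."""
    def __init__(self, op, *children):
        self.op = op
        self.children = children

class GraphBuilder:
    """Interns structurally identical subterms, so the result is a DAG."""
    def __init__(self):
        self._table = {}  # structural key -> the one shared Node

    def make(self, op, *children):
        # Children are themselves interned, so object identity (id)
        # doubles as structural equality when forming the lookup key.
        key = (op,) + tuple(id(c) for c in children)
        if key not in self._table:
            self._table[key] = Node(op, *children)
        return self._table[key]

g = GraphBuilder()
a = g.make("a")
b = g.make("b")
s1 = g.make("+", a, b)
s2 = g.make("+", a, b)    # interned: returns the SAME node as s1
prod = g.make("*", s1, s2)
```

In effect this performs common subexpression elimination at construction time: both children of the '*' node are the same object, which a proper tree cannot express.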
https://en.wikipedia.org/wiki/Summer%20Desire
Summer Desire is the name of the first and only night-time special aired under the Another World soap opera banner. Touted as special event programming, the hour-long episode aired just before the Daytime Emmy Awards on June 23, 1992. Unlike other soaps, which also aired one-off specials at night, the Another World special followed existing storylines, in the hopes that viewers tuning in early for the Daytime Emmys would be intrigued by what they saw and, by extension, would watch the show in the afternoon. The universal theme was love, and the stories followed four existing (and popular) couples in the series. To appeal to many demographics, thirtyish couple Cass (Stephen Schnetzer) and Frankie (Alice Barrett) were featured, as well as twentysomethings Ryan and Vicky (Paul Michael Valley and Jensen Buchanan) and Jake and Paulina (Tom Eplin and Judi Evans Luciano). Teen supercouple Dean and Jenna (Ricky Paull Goldin and Alla Korot) received most of the story in the special episode, as a party was being thrown in honor of his new album. On the show, Dean was a budding singer, and an album was produced. One of his songs, Ladykiller, became wildly popular and was played on the Times Square Jumbotron in the special episode. Other stories revolved around Jake and Paulina seeking out a justice of the peace to get married, only to end up calling it off, while Ryan and Vicky fought their attraction to one another, only to give in to their desires at the end of the episode. The special brought in low ratings, ranking 78th out of 96 programs that week with a 6.1/11 rating/share, placing a distant third in its timeslot, behind Full House (#11, 12.7/24) and Home Improvement (#3, 14.8/27) on ABC, and Rescue 911 (#14, 12.3/23) on CBS. No further AW nighttime specials were attempted, although its sister show, Days of Our Lives, produced more one-off episodes.
References American television soap operas 1990s American television specials Another World (TV series) 1992 television specials 1992 in American television
https://en.wikipedia.org/wiki/%2440%20a%20Day
$40 a Day was a Food Network show hosted by Rachael Ray. In each episode, Ray takes a one-day trip to an American, Canadian, or European city with only $40 US to spend on food. While touring the city, she finds restaurants to go to (often based on local recommendations), and usually manages to fit three meals and some sort of snack or after-dinner drink into her small budget. The show premiered on April 1, 2002, five months after the debut of 30 Minute Meals, making it her second show on the Food Network. Some clips were later reused in Ray's subsequent series, Rachael Ray's Tasty Travels. Another Food Network series, Giada's Weekend Getaways starring Giada De Laurentiis, is similar in format. In 2010, The Travel Channel began airing reruns of the show. As of 2013, the show is no longer in reruns on the Travel Channel. Details According to Ray, visiting a fast food restaurant, particularly one belonging to a national chain, is considered cheating (she says so explicitly in the Orlando episode). On occasion, smaller sit-down restaurant chains (such as Bahama Breeze in the Las Vegas episode, or Bongos in the South Beach episode) are visited. Generally, non-food items and non-food-related activities are not included in her budget. Ray always offers tips on what to see in the various cities, as well as hints on how to save money and find bargains while traveling. She also emphasizes researching whatever city she plans to visit through the Internet and asking the local citizens for their recommendations. Initially, Ray only counted item prices against her $40 limit; she started including applicable taxes and tips during the first season. On occasion, she does go over budget; however, during her trips to Philadelphia and Arizona, she did so on purpose. Her cheapest day was in Vancouver, British Columbia, in 2003, when she spent just under $25 USD including taxes and tips (at the time, less than $40 Canadian, although she budgeted for $40 USD).
On occasion, she has had to get creative to stay on budget; for example, she accidentally blew half her budget on her second meal in her first Miami episode. The pilot, shot in Los Angeles, had a 12-hour limit, but subsequent episodes raised it to 24 hours. Episodes usually begin in the morning with breakfast, occasionally brunch. Episodes almost always feature four paid meals, but on at least one occasion, in the Research Triangle (Raleigh-Durham-Chapel Hill, NC), she did five meals. On rare occasions, only three meals are paid for and a fourth ends up being free. On only one occasion, in Antigua, did she partake of a hotel's free Continental breakfast, but she still did four paid meals in that episode. On her first visit to Las Vegas during the first season, Ray began with dinner and stayed overnight, ending with breakfast. She did several episodes in Europe when the euro was still valued less than the U.S. dollar. She has not visited Europe since the U.S. dollar fell below the euro in value. She has also visited s
https://en.wikipedia.org/wiki/Uninitialized%20variable
In computing, an uninitialized variable is a variable that is declared but is not set to a definite known value before it is used. It will have some value, but not a predictable one. As such, it is a programming error and a common source of bugs in software. Example of the C language A common assumption made by novice programmers is that all variables are set to a known value, such as zero, when they are declared. While this is true for many languages, it is not true for all of them, and so the potential for error is there. Languages such as C use stack space for variables, and the collection of variables allocated for a subroutine is known as a stack frame. While the computer will set aside the appropriate amount of space for the stack frame, it usually does so simply by adjusting the value of the stack pointer, and does not set the memory itself to any new state (typically out of efficiency concerns). Therefore, whatever the memory happens to contain at the time will appear as the initial values of the variables which occupy those addresses. Here's a simple example in C:

void count(void)
{
    int k, i;

    for (i = 0; i < 10; i++) {
        k = k + 1;
    }

    printf("%d", k);
}

The final value of k is undefined. The answer that it must be 10 assumes that it started at zero, which may or may not be true. Note that in the example, the variable i is initialized to zero by the first clause of the for statement. Another example can arise when dealing with structs. In the code snippet below, the struct student contains some variables describing the information about a student. The function register_student leaks memory contents because it fails to fully initialize the members of struct student new_student. On closer inspection, age, semester and student_number are initialized at the beginning, but the initialization of the first_name and last_name members is incomplete.
This is because if the lengths of the first_name and last_name character arrays are less than 16 bytes, the strcpy fails to fully initialize the entire 16 bytes of memory reserved for each of these members. Hence, after memcpy()'ing the resulting struct to output, some stack memory is leaked to the caller.

struct student {
    unsigned int age;
    unsigned int semester;
    char first_name[16];
    char last_name[16];
    unsigned int student_number;
};

int register_student(struct student *output, int age,
                     char *first_name, char *last_name)
{
    // If any of these pointers are NULL, we fail.
    if (!output || !first_name || !last_name) {
        printf("Error!\n");
        return -1;
    }

    // We make sure the length of each string is less than 16 bytes
    // (including the null byte) in order to avoid overflows.
    if (strlen(first_name) > 15 || strlen(last_name) > 15) {
        printf("first_name and last_name cannot be longer than 15 characters!\n");
        return -1;
    }

    // Initializing the members
    struct student new_student;
https://en.wikipedia.org/wiki/World%20Ocean%20Database%20Project
The World Ocean Database Project, or WOD, is a project established by the Intergovernmental Oceanographic Commission (IOC). The project leader is Sydney Levitus, who is director of the International Council for Science (ICSU) World Data Center (WDC) for Oceanography, Silver Spring. In recognition of the success of the IOC Global Oceanographic Data Archaeology and Rescue Project (GODAR project), a proposal was presented at the 16th Session of the Committee on International Oceanographic Data and Information Exchange (IODE), which was held in Lisbon, Portugal, in October–November 2000, to establish the World Ocean Database Project. This project is intended to stimulate international exchange of modern oceanographic data and encourage the development of regional oceanographic databases as well as the implementation of regional quality control procedures. The new project was endorsed by the IODE at the conclusion of the Portugal meeting, and the IOC subsequently approved it in June 2001. The World Ocean Database represents the world's largest collection of ocean profile and plankton data available internationally without restriction. Data come from: (a) sixty-five National Oceanographic Data Centers and nine Designated National Agencies (DNAs) (in Croatia, Finland, Georgia, Malaysia, Romania, Senegal, Sweden, Tanzania, and Ukraine); (b) international ocean observing projects such as the completed World Ocean Circulation Experiment (WOCE) and Joint Global Ocean Flux Study (JGOFS), as well as currently active programs such as CLIVAR and Argo; (c) international ocean data management projects such as the IOC/IODE Global Oceanographic Data Archaeology and Rescue Project (GODAR); and (d) real-time ocean observing systems such as the IOC/IODE Global Temperature-Salinity Profile Project (GTSPP). All ocean data acquired by WDC Silver Spring – USA are considered part of the WDC archive and are freely available as public domain data.
Comparison of World Ocean Databases The World Ocean Database was first released in 1994, and updates have been released approximately every four years: 1998, 2001, and 2005. The most recent World Ocean Database series, WOD09, was released in September 2009. WOD09 contains more than 9 million temperature profiles and 3.6 million salinity profiles. The table shows a comparison of the number of stations by instrument type in WOD09 with previous NODC/WDC global ocean databases. Instrument Types Ocean profile data, plankton data, and metadata are available in the World Ocean Database for 29 depth-dependent variables (physical and biochemical) and 11 instrument types, including: Ocean Station Data (OSD), Mechanical Bathythermograph (MBT), Expendable Bathythermograph (XBT), Conductivity-Temperature-Depth (CTD), Undulating Oceanographic Recorder (UOR), Profiling Float (PFL), Moored Buoy (MRB), Drifting Buoy (DRB), Gliders (GLD), and Autonomous Pinniped Bathythermograph (APB). World Ocean Database Products The data in the World Ocean Dat
https://en.wikipedia.org/wiki/Superkey
In the relational data model a superkey is a set of attributes that uniquely identifies each tuple of a relation. Because superkey values are unique, tuples with the same superkey value must also have the same non-key attribute values. That is, non-key attributes are functionally dependent on the superkey. The set of all attributes is always a superkey (the trivial superkey). Tuples in a relation are by definition unique, with duplicates removed after each operation, so the set of all attributes is always uniquely valued for every tuple. A candidate key (or minimal superkey) is a superkey that can't be reduced to a simpler superkey by removing an attribute. For example, in an employee schema with attributes employeeID, name, job, and departmentID, if employeeID values are unique then employeeID combined with any or all of the other attributes can uniquely identify tuples in the table. Each combination, {employeeID}, {employeeID, name}, {employeeID, name, job}, and so on is a superkey. {employeeID} is a candidate key, since no proper subset of its attributes is itself a superkey. {employeeID, name, job, departmentID} is the trivial superkey. If attribute set K is a superkey of relation R, then at all times it is the case that the projection of R over K has the same cardinality as R itself. Example First, list out all the sets of attributes:

• {}
• {Monarch Name}
• {Monarch Number}
• {Royal House}
• {Monarch Name, Monarch Number}
• {Monarch Name, Royal House}
• {Monarch Number, Royal House}
• {Monarch Name, Monarch Number, Royal House}

Second, eliminate all the sets which do not meet the superkey requirement.
For example, {Monarch Name, Royal House} cannot be a superkey because for the same attribute values (Edward, Plantagenet), there are two distinct tuples: (Edward, II, Plantagenet) (Edward, III, Plantagenet) Finally, after elimination, the remaining sets of attributes are the only possible superkeys in this example: {Monarch Name, Monarch Number} — this is also the candidate key {Monarch Name, Monarch Number, Royal House} In reality, superkeys cannot be determined simply by examining one set of tuples in a relation. A superkey defines a functional dependency constraint of a relation schema which must hold for all possible instance relations of that relation schema. See also Alternate key Candidate key Compound key Foreign key Primary key References Further reading External links Relation Database terms of reference, Keys: An overview of the different types of keys in an RDBMS Data modeling
https://en.wikipedia.org/wiki/Single-frequency%20network
A single-frequency network or SFN is a broadcast network where several transmitters simultaneously send the same signal over the same frequency channel. Analog AM and FM radio broadcast networks as well as digital broadcast networks can operate in this manner. SFNs are not generally compatible with analog television transmission, since the SFN results in ghosting due to echoes of the same signal. A simplified form of SFN can be achieved by a low power co-channel repeater, booster or broadcast translator, which is utilized as a gap filler transmitter. The aim of SFNs is efficient utilization of the radio spectrum, allowing a higher number of radio and TV programs in comparison to traditional multi-frequency network (MFN) transmission. An SFN may also increase the coverage area and decrease the outage probability in comparison to an MFN, since the total received signal strength may increase at positions midway between the transmitters. SFN schemes are somewhat analogous to what in non-broadcast wireless communication, for example cellular networks and wireless computer networks, is called transmitter macrodiversity, CDMA soft handoff and Dynamic Single Frequency Networks (DSFN). SFN transmission can be considered as creating a severe form of multipath propagation. The radio receiver receives several echoes of the same signal, and the constructive or destructive interference among these echoes (also known as self-interference) may result in fading. This is problematic especially in wideband communication and high-data rate digital communications, since the fading in that case is frequency-selective (as opposed to flat fading), and since the time spreading of the echoes may result in intersymbol interference (ISI). Fading and ISI can be avoided by means of diversity schemes and equalization filters.
Transmitters that are part of an SFN should not be used for navigation via direction finding, as the direction of signal minima or maxima can differ from the direction to the transmitter. OFDM and COFDM In wideband digital broadcasting, self-interference cancellation is facilitated by the OFDM or COFDM modulation method. OFDM uses a large number of slow low-bandwidth modulators instead of one fast wide-band modulator. Each modulator has its own frequency sub-channel and sub-carrier frequency. Since each modulator is very slow, a guard interval can be inserted between the symbols, eliminating the ISI. Although the fading is frequency-selective over the whole frequency channel, it can be considered flat within each narrowband sub-channel. Thus, advanced equalization filters can be avoided. A forward error correction code (FEC) can counteract some of the sub-carriers being exposed to too much fading to be correctly demodulated. OFDM is utilized in the terrestrial digital TV broadcasting systems DVB-T (used in Europe and other regions), ISDB-T (used in Japan and Brazil) and ATSC 3.0. OFDM is also widely used in digital rad
https://en.wikipedia.org/wiki/Source-code%20editor
A source-code editor is a text editor program designed specifically for editing source code of computer programs. It may be a standalone application or it may be built into an integrated development environment (IDE). Characteristics Source-code editors have characteristics specifically designed to simplify and speed up typing of source code, such as syntax highlighting, indentation, autocomplete and brace matching functionality. These editors also provide a convenient way to run a compiler, interpreter, debugger, or other program relevant for the software-development process. So, while many text editors like Notepad can be used to edit source code, if they don't enhance, automate or ease the editing of code, they are not source-code editors. Structure editors are a different form of source-code editor, where instead of editing raw text, one manipulates the code's structure, generally the abstract syntax tree. In this case features such as syntax highlighting, validation, and code formatting are easily and efficiently implemented from the concrete syntax tree or abstract syntax tree, but editing is often more rigid than free-form text. Structure editors also require extensive support for each language, and thus are harder to extend to new languages than text editors, where basic support only requires supporting syntax highlighting or indentation. For this reason, strict structure editors are not popular for source code editing, though some IDEs provide similar functionality. A source-code editor can check syntax while code is being entered and immediately warn of syntax problems. A few source-code editors compress source code, typically converting common keywords into single-byte tokens, removing unnecessary whitespace, and converting numbers to a binary form. Such tokenizing editors later uncompress the source code when viewing it, possibly prettyprinting it with consistent capitalization and spacing. A few source-code editors do both. 
The Language Server Protocol (LSP), first used in Microsoft's Visual Studio Code, allows a source-code editor to implement an LSP client that can read syntax information about any language from an LSP server. This allows source-code editors to support more languages easily, with syntax highlighting, refactoring, and reference finding. Many source-code editors such as Neovim and Brackets have added a built-in LSP client, while other editors such as Emacs, Vim, and Sublime Text support an LSP client via a separate plug-in. History In 1985, Mike Cowlishaw of IBM created LEXX while seconded to the Oxford University Press. LEXX used live parsing and used color and fonts for syntax highlighting. IBM's LPEX (Live Parsing Extensible Editor) was based on LEXX and ran on VM/CMS, OS/2, OS/400, Windows, and Java. Although the initial public release of Vim was in 1991, the syntax highlighting feature was not introduced until version 5.0 in 1998. In 2003, Notepad++, a source code editor for Windows, was released
https://en.wikipedia.org/wiki/WCET
WCET may refer to: Worst-case execution time, a computer science term WCET (TV), a PBS station serving the Cincinnati area Wireless Communication Engineering Technologies Certification, an IEEE certification regarding wireless technologies Western Cooperative for Educational Telecommunications
https://en.wikipedia.org/wiki/Global%20Certification%20Forum
The Global Certification Forum, known as GCF, is a London-based partnership between mobile network operators, mobile device manufacturers and the test industry. GCF was founded in 1999, and its membership has been responsible for creating an independent certification programme to help ensure global interoperability between mobile devices and networks. The GCF certification process is based on technical requirements as specified within dedicated test specifications provided by the 3GPP, 3GPP2, OMA, IMTC, the GSM Association and others. The current GCF membership includes mobile network operators, more than 40 leading terminal manufacturers and over 65 test equipment manufacturers, test laboratories and other organizations, mainly from the testing industry. Recognised Test Organisations Recognised Test Organisations (RTOs) are GCF members that have demonstrated they possess the experience, qualifications and systems to assess mobile phones and wireless devices against GCF's Certification Criteria. Conformance RTOs verify that devices comply with technical standards (e.g. 3GPP), while Field Trial RTOs verify compliance with industry guidelines for live networks (e.g. GSMA TS.11). A GCF-certified device is more likely to perform well, since it will have been configured to match the characteristics of the networks it connects to and to have had any issues resolved before product launch. GCF certification can therefore enhance the commercial prospects of a model, and of its brand owner, in the fiercely competitive mobile market. From 1 January 2013, it became a requirement that all device testing associated with GCF Certification must be undertaken by an RTO. The scheme recognises Test Organisations in three distinct disciplines: Conformance Testing, Field Trials and Interoperability Testing.
To become an RTO, an organisation must submit a declaration confirming that it understands GCF procedures and has the ability to conduct testing in accordance with GCF rules and the relevant RTO requirements. From 1 January 2015, GCF took over all certification activities previously handled by the CDMA Certification Forum (CCF). The current CEO is Lars Nielsen. See also Federal Communications Commission PTCRB, which provides the framework for GSM Mobile Equipment (ME) Type Certification in North America. References External links Official website Mobile telecommunications standards Mobile phone standards 1999 establishments Non-profit organisations based in London
https://en.wikipedia.org/wiki/Ping%27an%20Avenue
Ping'an Avenue (; English translation: "Peaceful Avenue") refers to a section of the road network in Beijing, China. Ping'an Avenue is not the name of any particular road; it refers to the stretch of roads from Guanyuan Bridge on the western 2nd Ring Road through to Dongsishitiao Bridge on the eastern 2nd Ring Road. Streets in Beijing
https://en.wikipedia.org/wiki/The%20Jackie%20Gleason%20Show
The Jackie Gleason Show is the name of a series of American network television shows that starred Jackie Gleason, which ran from 1952 to 1970, in various forms. Cavalcade of Stars Gleason's first variety series, which aired on the DuMont Television Network under the title Cavalcade of Stars, first aired June 4, 1949. The show's first host was comedian Jack Carter, who was followed by Jerry Lester. Lester jumped to NBC in June 1950 to host the late-night show Broadway Open House, and Gleason—who had made his mark filling in for William Bendix as the title character on the first television incarnation of The Life of Riley sitcom—stepped into Cavalcade on July 15, 1950 and became an immediate sensation. The show was broadcast live in front of a theater audience, and offered the same kind of vaudevillian entertainment common to early television revues. Gleason's guests included New York-based performers of stage and screen, including Bert Wheeler, Smith and Dale, Patricia Morison, and Vivian Blaine. Production values were modest, owing to DuMont's humble facilities and a thrifty sponsor (Quality Drugs, representing most of the nation's drug stores). In 1952, CBS president William S. Paley offered Gleason a considerably higher salary to move to that network. The series was retitled The Jackie Gleason Show and premiered on CBS Television on September 20, 1952. In 1953, CBS' own orchestral accordionist John Serry Sr. made a cameo appearance. While much of DuMont's programming archive was destroyed after they ceased broadcasting, a surprising number of Cavalcade of Stars episodes survive, including several episodes at the UCLA Film and Television Archive. Additionally, at least 14 Gleason episodes survive at the Paley Center for Media. In his book The Forgotten Network, author David Weinstein mentions an unusual aspect of the DuMont network. 
He notes that while Drug Store Productions was technically the sponsor, they in turn sold the commercial air time to various companies and products. Weinstein notes this as an early example of U.S. network television moving away from the single-sponsor system typical of the early 1950s. He quotes former DuMont executive Ted Bergmann describing the DuMont version as featuring six commercial breaks during the hour, with each break comprising a single one-minute commercial. Format The show typically opened with a monologue from Gleason, followed by sketch comedy involving Gleason and a number of regular performers (including Art Carney) and a musical interlude featuring the June Taylor Dancers. (Taylor later became Gleason's sister-in-law; he married her sister Marilyn in 1975.) Gleason portrayed a number of recurring characters, including: supercilious, mustachioed playboy millionaire Reginald Van Gleason III (Gleason's personal favorite) friendly Joe the Bartender loudmouthed braggart Charlie Bratton Rum Dum, a hapless dipsomaniac with a walrus mustache mild-mannered Fenwick Babbitt The Bachelor who was foreve
https://en.wikipedia.org/wiki/Social%20balance%20theory
Social balance theory is a class of theories about the balance or imbalance of sentiment relations in dyadic or triadic configurations within social network theory. Sentiment relations can lead to the emergence of two subgroups, with liking among agents within each subgroup and disliking between the two subgroups. Development of the theory This theory evolved over time to produce models more closely resembling real-world social networks. It uses a balance index to measure the effect of local balance at the global level as well as at a more intimate level, as in interpersonal relationships. Dorwin Cartwright and Frank Harary introduced clustering to account for multiple social cliques. Davis introduced hierarchical clustering to account for asymmetric relations. Recent research indicates that hubness in the positive and negative subnetworks increases the balance of the signed network. See also Balance theory Sociograms References Social networks Sociological theories Further reading
https://en.wikipedia.org/wiki/Computable%20function
Computable functions are the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithms, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e. given an input of the function domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation, but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions. Before the precise definition of computable functions, mathematicians often used the informal term effectively calculable. This term has since come to be identified with the computable functions. The effective computability of these functions does not imply that they can be efficiently computed (i.e. computed within a reasonable amount of time). In fact, for some effectively calculable functions it can be shown that any algorithm that computes them will be very inefficient in the sense that the running time of the algorithm increases exponentially (or even superexponentially) with the length of the input. The fields of feasible computability and computational complexity study functions that can be computed efficiently. According to the Church–Turing thesis, computable functions are exactly the functions that can be calculated using a mechanical calculation device given unlimited amounts of time and storage space. Equivalently, this thesis states that a function is computable if and only if it has an algorithm. An algorithm in this sense is understood to be a sequence of steps a person with unlimited time and an unlimited supply of pen and paper could follow.
The Blum axioms can be used to define an abstract computational complexity theory on the set of computable functions. In computational complexity theory, the problem of determining the complexity of a computable function is known as a function problem. Definition Computability of a function is an informal notion. One way to describe it is to say that a function is computable if its value can be obtained by an effective procedure. With more rigor, a function f is computable if and only if there is an effective procedure that, given any k-tuple x of natural numbers, will produce the value f(x). In agreement with this definition, the remainder of this article presumes that computable functions take finitely many natural numbers as arguments and produce a value which is a single natural number. As counterparts to this informal description, there exist multiple formal, mathematical definitions. The class of computable functions can be defined in many equivalent models of computation, including Tu
https://en.wikipedia.org/wiki/Friends%20Reunited
Friends Reunited was a portfolio of social networking websites based upon the theme of reunion, extended to genealogy research, dating and job-hunting. The first and eponymous website was created by a husband-and-wife team as a classic back-bedroom Internet start-up; it was the first online social network to achieve prominence in Britain, and it weathered the dotcom bust. Each site worked on the principle of user-generated content, through which registered users were able to post information about themselves which could be searched by other users. A double-blind email system allowed contact between users. Formerly, the site cost £7.50 per year to use, but it was later free of charge. The main Friends Reunited site aimed to reunite people who had in common a school, university, address, workplace, sports club or armed service; the sister site Genes Reunited enabled members to pool their family trees and identify common ancestors; the Dating and Jobs sister sites linked members with similar attributes, interests and/or locations. Friends Reunited branding was attached to CD collections of nostalgic popular music, and television programmes broadcast on the ITV network, which owned the site until August 2009. A book of members' stories was published in 2003 by Virgin Books, and a song about (and named after) the site was released by The Hussys in 2006. Following ITV's sale of the site to DC Thomson's Brightsolid subsidiary in 2009, the company relaunched Friends Reunited in March 2012 with a new emphasis on nostalgia and memories. On 26 February 2016 the website closed down, after 16 years of operation. History Establishment The website was conceived by Julie and Steve Pankhurst of Barnet, Hertfordshire and their friend Jason Porter in 1999. Julie Pankhurst's curiosity about the current status of old school friends inspired her to develop the website, exploiting a gap in the UK market following the success of US website Classmates.com.
Friends Reunited was officially launched in June 2000. By the end of the year, it had 3,000 members, and a year later this had increased to 2.5 million. ITV ownership By December 2005, Friends Reunited had over 15 million members and was bought by British TV company ITV plc for £120 million ($208 million), plus further payments of up to £55 million based on its performance up to 2009. Friends Reunited had become popular enough that its uses went beyond the intentions of its founders. According to The Register, potential employers used entries to screen job applicants. Friends Reunited was used by bitter ex-partners to exact revenge on those who had abandoned them, and users have been sued for comments made on Friends Reunited about other people. Friends Reunited features prominently in Ben Elton's detective novel Past Mortem (2004). The website launched a series of television advertisements for the first time in early 2007. In 2007, ITV Chairman Michael Grade described the site as "the sweet spot" of the internet and state
https://en.wikipedia.org/wiki/The%20Incredible%20Hulk%20%281978%20TV%20series%29
The Incredible Hulk is an American television series based on the Marvel Comics character the Hulk. The series aired on the CBS television network and starred Bill Bixby as Dr. David Banner, Lou Ferrigno as the Hulk, and Jack Colvin as Jack McGee. In the TV series, Dr. David Banner, a widowed physician and scientist who is presumed dead, travels across America under assumed names and finds himself in positions where he helps others in need despite his terrible secret: Following an accident that altered his cells, in times of extreme anger or stress, he transforms into a huge, savage, incredibly strong green-skinned humanoid, who has been named "the Hulk". In his travels, Banner earns money by working temporary jobs while searching for a way to either control or cure his condition. All the while, he is obsessively pursued by a tabloid newspaper reporter, Jack McGee, who is convinced that the Hulk is a deadly menace whose exposure would enhance his career. The series' two-hour television pilot movie, which established the Hulk's origins, aired on November 4, 1977. The series' 80 episodes were originally broadcast by CBS over five seasons from 1978 to 1982. It was developed and produced by Kenneth Johnson, who also wrote or directed some episodes. The series ends with David Banner continuing to search for a cure. In 1988, the filming rights were purchased from MCA/Universal by New World Television for a series of TV movies to conclude the series' story line. The broadcast rights were, in turn, transferred to rival NBC. New World (which at one point owned Marvel) produced three television films: The Incredible Hulk Returns (directed by Nicholas J. Corea), The Trial of the Incredible Hulk, and The Death of the Incredible Hulk (both directed by Bill Bixby). Since its debut, The Incredible Hulk has garnered a worldwide fan base. 
Premise David Banner, M.D., Ph.D., is a physician and scientist employed at California's Culver Institute, who is traumatized by the car accident that killed his beloved wife, Laura. Haunted by his inability to save her, Banner and his research partner, Dr. Elaina Marks, study people who were able to summon superhuman strength during moments of extreme stress. Obsessed with discovering why he was unable to exhibit such super-strength under similar conditions, Banner hypothesizes that high levels of gamma radiation from sunspots contributed to the subjects' increase in strength. Impatient to test his theory, Banner conducts an unsupervised experiment in the laboratory, bombarding himself with gamma radiation. However, the radiology equipment has recently been recalibrated, and Banner unknowingly receives a massive overdose. He initially thinks that the experiment has failed, but, when he injures himself while changing a flat tire, Banner's anger triggers his transformation into a green-skinned, superhumanly strong creature who is driven by rage and has only a primitive, sub-human intelligence. The creature reverts to Ban
https://en.wikipedia.org/wiki/Hack%20Canada
Hack Canada is a Canadian organization run by hackers and phreakers that provides information mainly about telephones, computer technology, and legal issues related to technology. Founded in 1998 by CYBØRG/ASM, Hack Canada has appeared in media publications many times, including Wired News and the Edmonton Sun newspaper (as well as other regional newspapers), for developments such as a Palm Pilot red boxing program. Hack Canada has also been featured in books such as Hacking for Dummies and Steal This Computer Book, and was featured often on the Hacker News Network. On November 29, 2017, almost twenty years after its registration, the HackCanada.com domain went offline and now displays a "This Domain Name Has Expired" message. References External links Hacker groups Scientific organizations based in Canada 1998 establishments in Canada Organizations established in 1998
https://en.wikipedia.org/wiki/Sleepycat%20License
The Sleepycat License (sometimes referred to as Berkeley Database License or the Sleepycat Public License) is a copyleft free software license used by Oracle Corporation for the open-source editions of Berkeley DB, Berkeley DB Java Edition and Berkeley DB XML embedded database products older than version 6.0.20. (Starting with version 6.0.20, the open-source editions are instead licensed under the GNU AGPL v3.) The license is named after its original publisher Sleepycat Software, Inc., a company that was merged into Oracle in 2006. See also Comparison of free and open-source software licenses References External links License Berkeley DB Paper Discussion on the potential of the license at libreplanet-discuss mailing list Free and open-source software licenses Copyleft software licenses
https://en.wikipedia.org/wiki/Consistency%20%28database%20systems%29
In database systems, consistency (or correctness) refers to the requirement that any given database transaction must change affected data only in allowed ways. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors cannot result in the violation of any defined database constraints. Consistency can also be understood as the guarantee that, after a successful write, update, or delete of a record, any subsequent read request immediately receives the latest value of that record. As an ACID guarantee Consistency is one of the four guarantees that define ACID transactions; however, significant ambiguity exists about the nature of this guarantee. It is defined variously as: The guarantee that database constraints are not violated, particularly once a transaction commits. The guarantee that any transactions started in the future necessarily see the effects of other transactions committed in the past. As these various definitions are not mutually exclusive, it is possible to design a system that guarantees "consistency" in every sense of the word, as most relational database management systems in common use today arguably do. As a CAP trade-off The CAP theorem is based on three trade-offs, one of which is "atomic consistency" (shortened to "consistency" for the acronym), about which the authors note, "Discussing atomic consistency is somewhat different than talking about an ACID database, as database consistency refers to transactions, while atomic consistency refers only to a property of a single request/response operation sequence. And it has a different meaning than the Atomic in ACID, as it subsumes the database notions of both Atomic and Consistent."
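The constraint-based sense of consistency described above can be sketched with Python's built-in sqlite3 module (the accounts table and its balance rule are invented for this illustration): a transaction that would violate a declared constraint is rejected and rolled back, so the defined rules are never broken.

```python
import sqlite3

# A CHECK constraint encodes a rule the database itself enforces:
# no committed state may contain a negative balance.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
    " balance INTEGER CHECK (balance >= 0))"
)
con.execute("INSERT INTO accounts VALUES (1, 100)")
con.commit()

try:
    with con:  # transaction scope: commit on success, roll back on error
        con.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
except sqlite3.IntegrityError:
    pass  # the write is rejected; the constraint holds

# The balance is unchanged: the defined rule was never violated.
print(con.execute("SELECT balance FROM accounts").fetchone()[0])  # prints 100
```

Note that the constraint only guards the rules declared to the database; application-level invariants not expressed as constraints remain the programmer's responsibility, as the text above points out.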
The CAP theorem states that a distributed system can provide at most two of three properties: consistency, availability, and partition tolerance. Consistency may therefore have to be traded off in some database systems. See also Consistency model CAP theorem Referential integrity Eventual consistency References Data management Transaction processing
https://en.wikipedia.org/wiki/Member
Member may refer to: Military jury, referred to as "Members" in military jargon Element (mathematics), an object that belongs to a mathematical set In object-oriented programming, a member of a class Field (computer science), entries in a database Member variable, a variable that is associated with a specific object Limb (anatomy), an appendage of the human or animal body Euphemism for penis Structural component of a truss, connected by nodes User (computing), a person making use of a computing service, especially on the Internet Member (geology), a component of a geological formation Member of parliament The Members, a British punk rock band Meronymy, a semantic relationship in linguistics Church membership, belonging to a local Christian congregation, a Christian denomination and the universal Church Member, a participant in a club or learned society See also
https://en.wikipedia.org/wiki/Electronic%20voting%20in%20Canada
Federal elections use hand-counted paper ballots. Provincial elections use paper ballots; some provinces have introduced computer ballot counting (vote tabulators), and the Northwest Territories has experimented with Internet voting for absentee voting. Paper ballots with computer vote tabulators have been used since at least the 1990s at the municipal level. A federal committee has recommended against national Internet voting. Committee reports and analysis from Nova Scotia, New Brunswick, Quebec, Ontario, and British Columbia have all recommended against provincial Internet voting. Elections Quebec has studied Internet voting and wants to continue to do so. Some municipalities in Ontario and Nova Scotia provide Internet voting. There are no Canadian electronic voting standards. Federal There is no electronic or online voting in Canadian federal elections. Paper ballots are hand-counted. For national elections, there is a uniform set of standards for voting. This governing law is the Canada Elections Act. The Act is c. 9, assented to (made law) 31 May 2000. It has been amended several times since 2000. In 2014, it was amended (2014, c. 12, s. 8.) to require the prior approval of a majority in both the Senate and House of Commons for electronic voting, rather than just Senate and House committees. The relevant provision applying to electronic voting is: PART 2 CHIEF ELECTORAL OFFICER AND STAFF Alternative voting process 18.1 The Chief Electoral Officer may carry out studies on voting, including studies respecting alternative voting processes, and may devise and test an alternative voting process for future use in a general election or a by-election. Such a process may not be used for an official vote without the prior approval of the committees of the Senate and of the House of Commons that normally consider electoral matters or, in the case of an alternative electronic voting process, without the prior approval of the Senate and the House of Commons.
Federal Initiative to Increase Voter Turnout It was reported that 'Elections Canada hoped to test web voting by 2013, beginning with a byelection. "The general philosophy is to take the ballot box to the voter," says Mayrand, Canada's chief electoral officer.' Elections Canada released a report requesting approval to conduct an "electronic voting test-run in a byelection by 2013". The tests of online voting never took place. 2010 Federal Dialogue on Internet Voting On 26 January 2010, Elections Canada, in conjunction with partners organised The Canada-Europe Transatlantic Dialogue (Strategic Knowledge Cluster) - Internet Voting: What Can Canada Learn? Examples of Internet voting from Europe and from Canadian municipalities were presented. 2016 Federal Consultation on Electoral Reform, Including Online Voting On 7 June 2016, the House of Commons created a Special Committee on Electoral Reform. The committee was charged "to identify and conduct a study of viable alternate
https://en.wikipedia.org/wiki/Henry%20Tuke
Henry Tuke (24 March 1755 – 11 August 1814) co-founded with his father, William Tuke, the Retreat asylum in York, England, a humane alternative, based on Quaker principles, to the nineteenth-century network of asylums. He was the author of several moral and theological treatises which have been translated into German and French. He was a subscriber to the African Institution, the body which set out to create a viable, civilized refuge for freed slaves in Sierra Leone, Africa. Historic ship The 1824 ship Henry Tuke, 365 tons, was built by Thatcher Magoun in Medford, MA, and owned by Daniel Pinckney Parker and John Chandler, Jr. It was a whaler in Warren, RI in 1846. See also Tuke family References English Quakers English humanitarians Penal system in England English non-fiction writers 1755 births 1814 deaths Henry Whaling ships Ships built in Medford, Massachusetts English male non-fiction writers
https://en.wikipedia.org/wiki/PAR2
PAR2 may refer to: Parchive, an error correction system for computer files. The second version is known as PAR2. Protease activated receptor 2, a G-protein coupled receptor protein PAR2, one of the pseudoautosomal regions of the X and Y chromosomes
https://en.wikipedia.org/wiki/Santa%20and%20the%20Three%20Bears
Santa and the Three Bears is a 1970 animated feature film, which aired in syndication on television regularly during the holiday season. Background The film was originally pitched to TV networks, which rejected it as it lacked a villain, but was then shown in theaters instead. This special has been rerun on TBN, USA Network, FOX Family (now Freeform), and on KTLA channel 5 in Los Angeles. It also received a "blue ribbon" award for Best Family Film at the San Francisco International Film Festival. The live-action sequences, directed by Barry Mahon, at the beginning and end of the film are often edited out in television reruns. The edited version was later released on VHS in 1992 by Kids Klassics, and distributed by GoodTimes Home Video. Plot summary Two young bears, Nikomi and Chinook, know nothing of Christmas until the local park ranger tells them about the legend, and they become curious to meet Santa Claus. Their mother, Nana, is preparing for winter hibernation and cynically tells her children there is no Santa, but they are determined to believe. Nana finds it impossible to begin her sleep, since the young cubs wish to stay awake until Santa arrives. Voice cast Hal Smith as Grandfather, Santa and Mr. Ranger Jean Vander Pyl (credited as Jean van der Pyl) as Nana Christina Ferra-Gilmore (credited as Annette Ferra) as Nikomi Bobby Riha as Chinook Joyce Taylor Ken Engels Beth Goldfarb as Beth Brian Hobbs as Brian Lenard Keith Kathy Lemmon Roxanne Poole Michael Rodriguez Live action segments The theatrical release of the film contains live-action sequences directed by Barry Mahon, running for around four minutes in total. These sequences feature actor Hal Smith and two young children (Brian Hobbs and Beth Goldfarb) sitting in a cabin and conversing by the fireplace and Christmas tree, and a short montage of mechanical toys, Christmas decorations, and a pet kitten, during the opening and closing credits.
The film has also been released by Modern Sound Pictures Inc. with the live-action sequences cut. Bill Hutten and Tony Love, the film's animators, created another Christmas television special in 1983 named The Christmas Tree Train, also starring a bear cub alongside a fox cub and a park ranger, which led to a line of specials called Buttons & Rusty. The film is currently owned by Multicom Entertainment Group. See also List of American films of 1970 List of Christmas films References External links 1970 films 1970 animated films 1970 television films 1970s American animated films 1970s children's animated films 1970 television specials 1970s animated television specials 1970s Christmas films American films with live action and animation American Christmas films Animated Christmas films Christmas television specials American animated featurettes Animated films about bears 1970s English-language films Santa Claus in film
https://en.wikipedia.org/wiki/Kido
Kido or KIDO may refer to: Kido (surname) KIDO, an American radio station Kidō, a form of magic used by characters in the manga and anime Bleach Conficker or Kido, computer worm Gao Hanyu or Kido, Chinese actor and singer
https://en.wikipedia.org/wiki/System%27s%20Twilight
System's Twilight: An Abstract Fairy Tale is a graphical interactive fiction computer game created by Andrew Plotkin and released in 1994. Summary The game is a combination of puzzle and story, combining several different kinds of logic puzzles and word puzzles. The puzzles include variations of Set, Black Box, and Sokoban, as well as many others. The overarching story is an allegory in which the player and other characters are programs in a broken, dysfunctional computer environment. Originally, Plotkin released System's Twilight as shareware. Since 2000, it has been re-released as binary-only freeware. It runs only on the Mac OS Classic environment, but can be run in emulation on other platforms. Reception MacAddict commented that System's Twilight felt similar to 3 in Three, with hard puzzles, quality sound and graphics, and a witty storyline. Inside Mac Games rated the game four out of five, and also noted the game's similarity to 3 in Three, saying it took "the genre of Cliff Johnson game [...] to new heights." Inside Mac Games called the game "very well crafted in all aspects": the story is involved and complex and the puzzles are clever and original. The review also praised the interface, graphics, and sound. Adventure Gamers felt that System's Twilight improved on "Cliff Johnson’s metapuzzle adventures" by adding "some much-needed nonlinearity as well as a stronger narrative." The game "synthesizes abstract puzzle-solving into an adventure game to great effect", with an ending that is rewarding "in terms of both adventuring and puzzle-solving." AllGame rated System's Twilight three and a half out of five stars, calling it "a very unusual game" with an ending that was "something of a letdown", and suggesting that it would appeal to people who like hard puzzles. For the AllGame reviewer, the graphics were average while the weird and amusing sounds complemented the gameplay. 
References External links System's Twilight System's Twilight Hint Guide by Wei-Hwa Huang 1994 video games 1990s interactive fiction Classic Mac OS-only games Video games developed in the United States Classic Mac OS games Single-player video games
https://en.wikipedia.org/wiki/Free%20Haven%20Project
The Free Haven Project was formed in 1999 by a group of Massachusetts Institute of Technology students with the aim to develop a secure, decentralized system of data storage. The group's work led to a collaboration with the United States Naval Research Laboratory to develop Tor, funded by DARPA. Distributed anonymous storage system The Project's early work focused on an anonymous storage system, Free Haven, which was designed to ensure the privacy and security of both readers and publishers. It contrasts Free Haven to anonymous publishing services to emphasize persistence rather than accessibility. Free Haven is a distributed peer-to-peer system designed to create a "servnet" consisting "servnet nodes" which each hold fragments ("shares") of documents, divided using Rabin's Information dispersal algorithm such that the publisher or file contents cannot be determined by any one piece. The shares are stored on the servnet along with a unique public key. To recover and recreate the file, a client broadcasts the public key to find fragments, which are sent to the client along anonymous routes. For greater security, Free Haven periodically moves the location of shares between nodes. Its function is similar to Freenet but with greater focus on persistence to ensure unpopular files do not disappear. The mechanisms that enable this persistence, however, are also the cause of some problems with inefficiency. A referral- or recommendation-based "metatrust" reputation system built into the servnet attempts to ensure reciprocity and information value by holding node operators accountable. Although nodes remain pseudonymous, communication is facilitated between operators through anonymous email. Work with Tor Tor was developed to by the US Naval Research Laboratory and the Free Haven Project to secure government communications, with initial funding from the US Office of Naval Research and DARPA. 
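The share-splitting idea in the storage design above can be sketched with a deliberately simpler scheme (an n-of-n XOR split, used here as a stand-in for Rabin's information dispersal algorithm, which additionally allows reconstruction from only a subset of shares): no single share reveals anything about the document, but combining all of them recovers it.

```python
import secrets

def split(document: bytes, n: int) -> list:
    """Split a document into n shares; any n-1 of them look like random noise."""
    shares = [secrets.token_bytes(len(document)) for _ in range(n - 1)]
    final = document
    for s in shares:
        final = bytes(a ^ b for a, b in zip(final, s))
    shares.append(final)  # document XORed with every random share
    return shares

def recombine(shares: list) -> bytes:
    """XOR all shares together to recover the original document."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

doc = b"an unpopular but persistent document"
shares = split(doc, 5)
assert recombine(shares) == doc
```

Unlike this sketch, an information dispersal scheme tolerates lost shares, which is what lets Free Haven keep documents available as shares migrate between nodes.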
Tor was deployed in 2003 as the project's third generation of onion routing designs. In 2005, the Electronic Frontier Foundation provided additional funding to the Free Haven Project. In 2006, the Tor Project was incorporated as a non-profit organization. References External links The Free Haven Project website Anonymity networks Cloud storage Computer security organizations Internet privacy organizations Tor (anonymity network) Tor onion services Science and technology in Massachusetts
https://en.wikipedia.org/wiki/Rael%20Dornfest
Rael Dornfest is an American computer programmer and author. He was a technical fellow and CTO of Charity: Water, and was previously an engineer at Twitter. He was founder and chief executive officer of Values of N, creator of "I Want Sandy" and "Stikkit: Little Yellow Notes that Think." Previously, he was chief technology officer at O'Reilly Media. He began working for Twitter after they bought the assets of his company Values of N. He led the RSS-DEV Working Group, which authored RSS 1.0, and is the author of Blosxom, a lightweight Perl-based publishing system. He was series editor of O'Reilly's Hacks series, and has coauthored a number of books including Google Hacks, Mac OS X Panther Hacks, and Google: The Missing Manual. References External links @rael - Twitter page Rael Dornfest - Author bio at O'Reilly Hack Google - Article by Dornfest for TechTV Rules for Remixing (IT Conversations), a recording of a presentation at the 2005 Year of birth missing (living people) Living people American male bloggers American bloggers Computer programmers American technology writers O'Reilly writers American chief technology officers University of California, Davis alumni American technology chief executives 21st-century American non-fiction writers
https://en.wikipedia.org/wiki/Iconectiv
iconectiv is a supplier of network planning and network management services to telecommunications providers. Known as Bellcore after its establishment in the United States in 1983 as part of the break-up of the Bell System, the company's name changed to Telcordia Technologies after a change of ownership in 1996. The business was acquired by Ericsson in 2012, then restructured and rebranded as iconectiv in 2013. A major architect of the United States telecommunications system, the company pioneered many services, including caller ID, call waiting, mobile number portability, and toll-free telephone (800) service. It also pioneered the prepaid charging system and the Intelligent Network. Headquartered in Bridgewater, New Jersey (U.S.), iconectiv provides network and operations management, numbering, registry and fraud prevention services for the global telecommunications industry. It provides numbering services in more than a dozen countries, including serving as the Local Number Portability Administrator (LNPA) for the United States. In that capacity, iconectiv manages the Number Portability Administration Center (NPAC), the system that supports the implementation of local number portability. Founding iconectiv was established on October 20, 1983, as Central Services Organization as part of the 1982 Modification of Final Judgment that broke up the Bell System. It later received the name Bell Communications Research. Nicknamed Bellcore, it was a consortium established by the Regional Holding Companies upon their separation from AT&T. Since AT&T retained Bell Laboratories, the operating companies desired a separate research and development facility. 
Bellcore, the tenth company to register an Internet domain name in the .com top-level domain, provided joint research and development, and was involved in standards setting, training, and centralized government point-of-contact functions for its co-owners, the seven Regional Holding Companies that were themselves divested from AT&T as holding companies for the 22 local Bell Operating Companies. Bellcore's initial staff and corporate culture were drawn from the nearby Bell Laboratories locations in northern New Jersey, plus additional staff from AT&T and regional operating companies. The company originally had its headquarters in Livingston, dedicated by New Jersey Governor Thomas Kean in 1985, but moved its headquarters to Morristown a decade later. Bellcore also operated the former Bell System Center for Technical Education in Lisle, Illinois. Separation from the Baby Bells In 1996, the company was provisionally acquired by Science Applications International Corporation (SAIC). The sale was closed one year later, following a regulatory approval process that covered every U.S. state individually. Since the divested company no longer had any ownership connection with the Regional Bell Operating Companies (Baby Bells), the name was changed to Telcordia Technologies in 1999. The headquarters was moved to Piscataway, New
https://en.wikipedia.org/wiki/Predictor%40home
Predictor@home was a volunteer computing project that used BOINC software to predict protein structure from protein sequence in the context of the 6th biennial CASP, or Critical Assessment of Techniques for Protein Structure Prediction. A major goal of the project was the testing and evaluation of new algorithms to predict both known and unknown protein structures. Predictor@home was complementary to Folding@home: whereas the latter aims to study the dynamics of protein folding, Predictor@home aimed to determine what the final tertiary structure will be. The two projects also differ significantly in the infrastructure they use: Predictor@home used BOINC software, whereas Folding@home maintains its own software completely outside of BOINC. For a time, Predictor@home competed with other BOINC protein structure prediction projects, such as Rosetta@home; each uses different methods of rapidly and reliably predicting the final tertiary structure. Predictor@home is currently inactive. History Predictor@home holds the distinction of being the first independent BOINC project to be launched. The project was set up and run by Michela Taufer at The Scripps Research Institute. On September 6, 2006, Predictor@home was temporarily taken offline, with no new work units being sent out. In May 2008, the project reverted to alpha status while experimenting with new methods. Over the summer of 2008, the project servers were moved to the University of Michigan, and as of December 2008, the project had not sent out any work for some months. BOINC stats sites were unable to obtain updated XML data, as this had been suspended by the project team. On June 10, 2009, the Predictor@home web site and forums were shut down.
See also List of volunteer computing projects Rosetta@home SIMAP Grid computing Protein structure prediction References External links Berkeley Open Infrastructure for Network Computing (BOINC) Science in society Free science software Volunteer computing projects Scripps Research University of Michigan
https://en.wikipedia.org/wiki/Climateprediction.net
climateprediction.net (CPDN) is a volunteer computing project to investigate and reduce uncertainties in climate modelling. It aims to do this by running hundreds of thousands of different models (a large climate ensemble) using the donated idle time of ordinary personal computers, thereby leading to a better understanding of how models are affected by small changes in the many parameters known to influence the global climate. The project relies on the BOINC framework, in which voluntary participants agree to run some processes of the project on their personal computers after receiving tasks from the server for processing. CPDN, which is run primarily by Oxford University in England, has harnessed more computing power and generated more data than any other climate modelling project. It has produced over 100 million model years of data so far. There are more than 12,000 active participants from 223 countries with a total BOINC credit of more than 27 billion, reporting about 55 teraflops (55 trillion operations per second) of processing power. Aims The aim of the climateprediction.net project is to investigate the uncertainties in various parameterizations that have to be made in state-of-the-art climate models. The model is run thousands of times with slight perturbations to various physics parameters (a 'large ensemble') and the project examines how the model output changes. These parameters are not known exactly, and the variations are within what is subjectively considered to be a plausible range. This will allow the project to improve understanding of how sensitive the models are to small changes, and also to changes in forcings such as carbon dioxide and the sulphur cycle. In the past, estimates of climate change have had to be made using one or, at best, a very small ensemble (tens rather than thousands) of model runs.
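A perturbed-parameter ensemble of the kind described above can be illustrated with a toy zero-dimensional energy-balance calculation (the model, forcing value, and feedback range here are illustrative assumptions, not CPDN's actual model or parameters): the same "model" is run many times with a parameter drawn from a plausible range, and the spread of the outputs shows the sensitivity.

```python
import random

def equilibrium_warming(forcing, feedback):
    # At equilibrium the radiative forcing F is balanced by the feedback
    # response lambda * dT, so the temperature change is dT = F / lambda.
    return forcing / feedback

rng = random.Random(0)
forcing = 3.7  # W/m^2, a commonly quoted forcing for doubled CO2

# Run the "model" many times, perturbing the feedback parameter within a
# subjectively plausible range (W/m^2 per kelvin), and examine the spread.
ensemble = [equilibrium_warming(forcing, rng.uniform(0.8, 2.0))
            for _ in range(10000)]

print(f"warming spread: {min(ensemble):.2f} K to {max(ensemble):.2f} K")
```

Even in this toy, a modest uncertainty in one parameter produces more than a factor-of-two spread in the predicted warming, which is why very large ensembles are needed to characterize the distribution rather than a single run.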
By using participants' computers, the project will be able to improve understanding of, and confidence in, climate change predictions more than would ever be possible using the supercomputers currently available to scientists. The climateprediction.net experiment is intended to help "improve methods to quantify uncertainties of climate projections and scenarios, including long-term ensemble simulations using complex models", identified by the Intergovernmental Panel on Climate Change (IPCC) in 2001 as a high priority. The experiment is also intended to give decision makers a better scientific basis for addressing one of the biggest potential global problems of the 21st century. The various model versions produce a fairly wide distribution of results over time, each ending in a final temperature range for that model version; the further into the future the model is extended, the wider the variances between them. Roughly half of the variation depends on the future climate forcing scenario rather than uncertainties in the model. Any reduction
https://en.wikipedia.org/wiki/The%20Age%20of%20Intelligent%20Machines
The Age of Intelligent Machines is a non-fiction book about artificial intelligence by inventor and futurist Ray Kurzweil. This was his first book and the Association of American Publishers named it the Most Outstanding Computer Science Book of 1990. It was reviewed in The New York Times and The Christian Science Monitor. The format is a combination of monograph and anthology with contributed essays by artificial intelligence experts such as Daniel Dennett, Douglas Hofstadter, and Marvin Minsky. Kurzweil surveys the philosophical, mathematical and technological roots of artificial intelligence, starting with the assumption that a sufficiently advanced computer program could exhibit human-level intelligence. Kurzweil argues the creation of humans through evolution suggests that humans should be able to build something more intelligent than themselves. He believes pattern recognition, as demonstrated by vision, and knowledge representation, as seen in language, are two key components of intelligence. Kurzweil details how quickly computers are advancing in each domain. Driven by the exponential improvements in computer power, Kurzweil believes artificial intelligence will be possible and then commonplace. He explains how it will impact all areas of people's lives, including work, education, medicine, and warfare. As computers acquire human level faculties Kurzweil says people will be challenged to figure out what it really means to be human. Background Ray Kurzweil is an inventor and serial entrepreneur. In 1990 when this book was published he had already started three companies: Kurzweil Computer Products, Kurzweil Music Systems, and Kurzweil Applied Intelligence. The companies developed and sold reading machines for the blind, music synthesizers, and speech recognition software respectively. Optical character recognition, which he used in the reading machine, and speech recognition are both featured centrally in the book as examples of pattern recognition problems. 
After the publication of The Age of Intelligent Machines he expanded on its ideas with two follow-on books: The Age of Spiritual Machines and the best-selling The Singularity Is Near. Content Definition and history Kurzweil starts by trying to define artificial intelligence. He leans towards Marvin Minsky's "moving frontier" formulation: "the study of computer problems which have not yet been solved". Then he struggles with defining intelligence itself and concludes "there appears to be no simple definition of intelligence that is satisfactory to most observers". That leads to a discussion about whether evolution, the process, could be considered intelligent. Kurzweil concludes that evolution is intelligent, but with an IQ only "infinitesimally greater than zero". He penalizes evolution for the extremely long time it takes to create its designs. The human brain operates much more quickly, evidenced by the rate of progress in the last few thousand years, so the brain is more inte
https://en.wikipedia.org/wiki/FROG
In cryptography, FROG is a block cipher authored by Georgoudis, Leroux and Chaves. The algorithm can work with any block size between 8 and 128 bytes, and supports key sizes between 5 and 125 bytes. The algorithm consists of 8 rounds and has a very complicated key schedule. It was submitted in 1998 by TecApro, a Costa Rican software company, to the AES competition as a candidate to become the Advanced Encryption Standard. Wagner et al. (1999) found a number of weak key classes for FROG. Other problems included very slow key setup and relatively slow encryption. FROG was not selected as a finalist. Design philosophy Normally, a block cipher applies a fixed sequence of primitive mathematical or logical operations (such as additions, XORs, etc.) to the plaintext and secret key in order to produce the ciphertext. An attacker uses this knowledge to search for weaknesses in the cipher which may allow the recovery of the plaintext. FROG's design philosophy is to hide the exact sequence of primitive operations even though the cipher itself is known. While other ciphers use the secret key only as data (which is combined with the plaintext to produce the ciphertext), FROG uses the key both as data and as instructions on how to combine those data. In effect, an expanded version of the key is used by FROG as a program. FROG itself operates as an interpreter that applies this key-dependent program to the plaintext to produce the ciphertext. Decryption works by applying the same program in reverse to the ciphertext. Description The FROG key schedule (or internal key) is 2304 bytes long. It is produced recursively by iteratively applying FROG to an empty plaintext. The resulting block is processed to produce a well-formatted internal key with 8 records. FROG has 8 rounds, the operations of each round codified by one record in the internal key. All operations are byte-wide and consist of XORs and substitutions.
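The "key as program" idea can be sketched with a toy cipher (an illustration of the concept only, not the actual FROG algorithm or its key schedule): each round record in a hypothetical expanded key supplies an XOR mask and an invertible byte substitution, and encryption simply "executes" those records in order, while decryption runs them in reverse.

```python
import random

BLOCK = 16   # toy block size in bytes
ROUNDS = 8

def expand_key(seed):
    """Hypothetical key expansion: derive 8 round records pseudo-randomly.
    (Real FROG derives its 2304-byte internal key by applying the cipher
    itself recursively; a seeded PRNG stands in for that here.)"""
    rng = random.Random(seed)
    records = []
    for _ in range(ROUNDS):
        mask = bytes(rng.randrange(256) for _ in range(BLOCK))
        sbox = list(range(256))
        rng.shuffle(sbox)  # an invertible byte substitution
        records.append((mask, sbox))
    return records

def encrypt(block, records):
    b = list(block)
    for mask, sbox in records:            # execute the key "program"
        b = [sbox[x ^ m] for x, m in zip(b, mask)]
    return bytes(b)

def decrypt(block, records):
    b = list(block)
    for mask, sbox in reversed(records):  # run the program in reverse
        inv = [0] * 256
        for i, v in enumerate(sbox):
            inv[v] = i                    # invert the substitution
        b = [inv[x] ^ m for x, m in zip(b, mask)]
    return bytes(b)

records = expand_key(b"secret key")
pt = b"sixteen byte msg"
assert decrypt(encrypt(pt, records), records) == pt
```

The sketch also hints at the weakness discussed below: because the key determines the operations themselves, a poorly chosen key could yield a degenerate "program" (for example, a near-identity substitution), which is exactly the kind of weak-key class found in FROG.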
FROG is very easy to implement (the reference C version has only about 150 lines of code). Much of the code needed to implement FROG is used to generate the secret internal key; the internal cipher itself is a very short piece of code. It is possible to write an assembly routine of just 22 machine instructions that does full FROG encryption and decryption. The implementation runs well on 8-bit processors because it uses only byte-level instructions; no bit-specific operations are used. Once the internal key has been computed, the algorithm is fairly fast: a version implemented using 8086 assembler achieves processing speeds of over 2.2 megabytes per second when run on a 200 MHz Pentium PC. Security FROG's design philosophy is meant to defend against unforeseen or unknown types of attack. Nevertheless, the very fact that the key is used as the encryption program means that some keys may correspond to weak encryption programs. David Wagner et al. found that a fraction 2^−33 of the keys are weak and that in these cases the key can be broken with 2^58 cho
https://en.wikipedia.org/wiki/The%2010%25%20Solution%20for%20a%20Healthy%20Life
The 10% Solution for a Healthy Life (paperback, 1993) is a health book written by computer scientist Ray Kurzweil and published in 1993. In the book, he explains to readers "How to Reduce Fat in Your Diet and Eliminate Virtually All Risk of Heart Disease and Cancer". Some of his recommendations have been updated and revised in subsequent years, particularly in his newer books: Fantastic Voyage: Live Long Enough to Live Forever and Transcend: Nine Steps to Living Well Forever. Summary Atherosclerosis is a disease characterized by a progressive buildup of rigid material inside artery walls and channels. Eventually, the arteries become so clogged that blood flow is stopped and the victim suffers a heart attack. This disease is caused by excess cholesterol in the bloodstream and afflicts approximately ninety percent of Americans, though it is a gradual process and may not even be detectable until later in life. Kurzweil cites various studies showing that increased levels of atherosclerosis in the U.S. and other western countries are linked to high levels of caloric fat intake. In much of Asia, fat intake is around ten percent of total food energy consumed, and heart disease there is almost nonexistent. Kurzweil goes on to show that in America, closer to forty percent of caloric intake is from fat. Numerous agencies such as the American Dietetic Association, American Heart Association and U.S. Surgeon General advocate thirty percent of caloric intake from fat. However, Kurzweil says this causes a comparatively slight reduction in atherosclerosis levels. He says that he thinks these agencies use an artificially high figure because they assume that nobody would even attempt to attain a lower level if it were recommended. Kurzweil advocates, based on his findings, that only ten percent of caloric intake be from fat. Hence, The 10% Solution. He says that these levels not only prevent atherosclerosis but cause its reversal in existing cases.
This also apparently lowers the chance of other diseases including cancer, strokes, hypertension and type 2 diabetes. He believes that eating a diet that is very low in fat reduces the risk of most major cancers by 90 percent or more. Kurzweil also claims it increases energy and leads to a generally happier life. Further, he gives advice on exercise, suggesting walking because it is low-impact and easy for anyone to do. Foods to avoid High-sugar jams; high-fat cheese; sweets (high in fat); butter and margarine; eggs (high in cholesterol); oil-based dressings (better: balsamic vinegar); cakey muffins and croissant-type pastries (full of butter); too much meat (particularly red meat) and organ meats (like liver and brain); vitamin and mineral supplements that include iron, and so-called "fortified" foods which have added iron; hamburgers (a typical fast-food hamburger is around 50 percent fat by calories).
https://en.wikipedia.org/wiki/Serial%20Attached%20SCSI
In computing, Serial Attached SCSI (SAS) is a point-to-point serial protocol that moves data to and from computer-storage devices such as hard disk drives and tape drives. SAS replaces the older Parallel SCSI (Parallel Small Computer System Interface, usually pronounced "scuzzy" or "sexy") bus technology that first appeared in the mid-1980s. SAS, like its predecessor, uses the standard SCSI command set. SAS offers optional compatibility with Serial ATA (SATA), versions 2 and later. This allows the connection of SATA drives to most SAS backplanes or controllers. The reverse, connecting SAS drives to SATA backplanes, is not possible. The T10 technical committee of the International Committee for Information Technology Standards (INCITS) develops and maintains the SAS protocol; the SCSI Trade Association (SCSITA) promotes the technology. Introduction A typical Serial Attached SCSI system consists of the following basic components: An initiator: a device that originates device-service and task-management requests for processing by a target device and receives responses for the same requests from other target devices. Initiators may be provided as an on-board component on the motherboard (as is the case with many server-oriented motherboards) or as an add-on host bus adapter. A target: a device containing logical units and target ports that receives device service and task management requests for processing and sends responses for the same requests to initiator devices. A target device could be a hard disk drive or a disk array system. A service delivery subsystem: the part of an I/O system that transmits information between an initiator and a target. Typically cables connecting an initiator and target with or without expanders and backplanes constitute a service delivery subsystem. Expanders: devices that form part of a service delivery subsystem and facilitate communication between SAS devices. 
Expanders facilitate the connection of multiple SAS End devices to a single initiator port. History SAS-1: 3.0 Gbit/s, introduced in 2004 SAS-2: 6.0 Gbit/s, available since February 2009 SAS-3: 12.0 Gbit/s, available since March 2013 SAS-4: 22.5 Gbit/s called "24G", standard completed in 2017 SAS-5: 45 Gbit/s is being developed Identification and addressing A SAS Domain is the SAS version of a SCSI domain—it consists of a set of SAS devices that communicate with one another by means of a service delivery subsystem. Each SAS port in a SAS domain has a SCSI port identifier that identifies the port uniquely within the SAS domain, the World Wide Name. It is assigned by the device manufacturer, like an Ethernet device's MAC address, and is typically worldwide unique as well. SAS devices use these port identifiers to address communications to each other. In addition, every SAS device has a SCSI device name, which identifies the SAS device uniquely in the world. One doesn't often see these device names because the port identifiers tend to identify t
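As a rough illustration of how such port identifiers are structured, the sketch below unpacks a 64-bit World Wide Name assuming the common NAA-5 "IEEE Registered" layout (a 4-bit NAA field, a 24-bit IEEE OUI identifying the manufacturer, and a 36-bit vendor-specific part). The field layout is a general convention and the example value is made up; neither is taken from the SAS standard text above.

```python
# Hedged sketch: decoding a 64-bit SAS WWN in the NAA-5 layout.
# Field widths assume the IEEE Registered format; the example WWN is invented.

def decode_wwn(wwn: int):
    naa    = (wwn >> 60) & 0xF         # 4-bit format field (5 = IEEE Registered)
    oui    = (wwn >> 36) & 0xFFFFFF    # 24-bit IEEE OUI (manufacturer)
    vendor = wwn & 0xFFFFFFFFF         # 36-bit vendor-specific serial
    return naa, oui, vendor

wwn = 0x5000C50012345678               # illustrative value only
naa, oui, vendor = decode_wwn(wwn)
print(f"NAA={naa} OUI={oui:06x} vendor={vendor:09x}")
# e.g. NAA=5 OUI=000c50 vendor=012345678
```

This is why two ports from the same manufacturer share the same middle bits: the OUI is fixed per vendor, and only the low 36 bits vary per device, which is what makes the identifiers worldwide unique in practice.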
https://en.wikipedia.org/wiki/Southern%20Cross%20Cable
The Southern Cross Cable is a trans-Pacific network of telecommunications cables commissioned in 2000. The network is operated by the Bermuda-registered company Southern Cross Cables Limited. The network comprises submarine and terrestrial fiber-optic cables, all of which operate in a triple-ring configuration. Initially, each cable had a bandwidth capacity of 120 gigabit/s. Southern Cross offers capacity services from 100M/STM-1 to 100Gbit/s OTU-4, including 1G, 10G and 40G Ethernet Private Line services. History In April 2008 this capacity was doubled, and it was upgraded again to 860 gigabit/s at the end of 2008. Southern Cross upgraded the existing system to 1.2 Tbit/s in May 2010. After successful trials of 40G technology, the first 400G of a planned 800G upgrade was completed in February 2012, and the remaining 400G was completed in December 2012. An additional 400G was deployed using 100G coherent wavelength technology in July 2013, taking total system capacity to 2.6 Tbit/s, with an additional 500 Gbit/s to be deployed per segment by Q2 2014, increasing total system capacity to 3.6 Tbit/s. About every two or three years, the Southern Cross Company makes an effort to upgrade the cables in some way. In June 2014 a further 900 Gbit/s was added. The system currently runs at circa 10 Tbit/s, employing a mix of 100 Gbit/s, 200 Gbit/s and 250 Gbit/s wavelengths. Landing points Alexandria, Sydney, NSW, Australia Brookvale, Sydney, NSW, Australia Suva, Fiji North West Point, Kiribati Whenuapai, New Zealand Takapuna, New Zealand Kahe Point, Oahu, Hawaii, United States Samuel M.
Spencer Beach, Hawaiʻi island, Hawaii, United States Nedonna Beach, Oregon, United States Morro Bay, California, United States Access points Equinix, Sydney, New South Wales, Australia (terrestrial connection only) Westin Building, Seattle, Washington, United States (terrestrial connection only) CoreSite, San Jose, California, United States (terrestrial connection only) Network segments The network comprises 12 segments: Submarine A. Alexandria-Whenuapai C. Takapuna-Spencer Beach D. Spencer Beach-Morro Bay F. Kahe Point-Hillsboro, Oregon G1. Suva-Kahe Point G2. Brookvale-Suva I. Spencer Beach-Kahe Point Terrestrial B. Whenuapai-Takapuna E. Hillsboro, Oregon-Morro Bay E1. Morro Bay-San Jose E2. San Jose-Hillsboro, Oregon H. Alexandria-Brookvale Topology The network topology is configured to have redundant paths and be self-healing in case of physical damage. In cross section, the cable comprises insulating high-density polyethylene, copper tubing, steel wires, and optical fibers in water-resistant jelly. Spying and interception In 2013 the New Zealand Herald reported that the owners of the Southern Cross cable had asked the United States National Security Agency to pay them for mass surveillance of New Zealand internet activity through the cable. In May 2014, John Minto, vice-president of the New
https://en.wikipedia.org/wiki/Graftgold
Graftgold was an independent computer game developer that came to prominence in the 1980s, producing numerous computer games on a variety of 8-bit, 16-bit and 32-bit platforms. History The Hewson era Graftgold was formed in 1983 when Steve Turner quit his day job as a commercial programmer to concentrate on producing computer games. When the work became too much for him to do alone, he hired a close friend, Andrew Braybrook, to work for him. After a short period developing games for the Dragon home computer, Graftgold soon turned their attention to the more lucrative Commodore 64 and ZX Spectrum markets. Much of Graftgold's early success came about through their association with Hewson Consultants. Formed by Andrew Hewson in the early 1980s, Hewson Consultants became one of the UK's most successful computer game publishers. Whereas many publishers at the time relied on larger parent companies to handle the manufacturing of their products, Andrew Hewson owned his own cassette duplication plant, affording the company much greater control over its ability to respond to market trends. Many of Graftgold's most memorable titles were published by Hewson, including Paradroid, Uridium, Quazatron, and Ranarama. The Telecomsoft era Towards the end of the 1980s, it became apparent that Hewson Consultants was suffering financial difficulties. Steve Turner decided it would be in Graftgold's best interest to seek another publisher. They left their partnership with Hewson and signed a publishing deal with Telecomsoft, the software division of British Telecom. Two of Hewson's in-house programmers, Dominic Robinson and John Cumming, left the company to join Graftgold. Hewson was not happy to see their most successful development partner leave, particularly because Graftgold was due to deliver two keenly anticipated titles: Magnetron (by Steve Turner for the ZX Spectrum) and Morpheus (by Andrew Braybrook for the C64).
Graftgold argued that since they were not contracted to Hewson, they were within their rights to seek an alternative publisher. Unable to sustain a legal battle, Hewson eventually settled with Telecomsoft out of court and parted company with Graftgold. Graftgold produced several titles for Telecomsoft from 1987 until 1989, including their first arcade conversion, Flying Shark. However, the sale of Telecomsoft to MicroProse in 1989 resulted in their critically acclaimed conversion of Rainbow Islands being eventually released by Ocean Software. The MicroProse/Activision era The dawn of the 1990s saw a shift in the way computer games were developed. Whereas the games of the 8-bit era were typically developed by a single individual within a matter of weeks to months, the more demanding 16-bit titles required larger teams, longer development times and considerably larger budgets. Royalties from their impressive catalogue of titles allowed Graftgold to make this transition with ease, hiring 30 additional people to work on a large number of products. Gr
https://en.wikipedia.org/wiki/Adult%20FriendFinder
Adult FriendFinder (AFF) is an internet-based, adult-oriented social networking service, online dating service and swinger personals community website, founded by Andrew Conru in 1996. In 2007 AFF was one of the 100 most popular sites in the United States; its competitors include sites such as Match.com. History In 1993, Andrew Conru created the first online dating site, WebPersonals. After selling that site in 1995, he launched FriendFinder.com, an early social networking site, in 1996. Days after the site went live, Conru found that people were posting naked pictures of themselves and seeking partners for adult-oriented activities. As a result, Conru started Adult FriendFinder, which he described as "a release valve". FriendFinder has since established other niche dating sites, including Senior FriendFinder, Amigos.com, BigChurch.com, and Alt.com. The parent company (Various, Inc.) had difficulty finding venture capital due to the adult nature of its business. In December 2007, the company was sold to the Penthouse Media Group for $500 million. Penthouse later changed its name to FriendFinder Networks. In October 2009, as part of an arrangement with The Kluger Agency, musician Flo Rida released a music video for his song "Touch Me" via Adult FriendFinder. A representative of the agency stated that it was "always great to combine a very sexy high octane record with a very sexy brand". On September 17, 2013, parent company FriendFinder Networks filed for Chapter 11 bankruptcy protection. In December 2013, FriendFinder Networks emerged from bankruptcy protection with reorganization in effect. Founder Andrew Conru gained control of the company and serves as CEO. Overview Accessing certain features, such as e-mail, private chat rooms, webcams, blogging, and a webzine, requires paid membership. Adult FriendFinder has an affiliate program, whereby webmasters are compensated for referring users to the site.
Criticism Adult FriendFinder has been accused of committing systematic billing fraud. According to the complaints filed, the company has a practice of continuing to bill customers even after they have cancelled their service. Former employees of the company have claimed that this is their standard policy and not the result of errors. These employees have stated that the majority of customers do not notice the charges for many months. As of October 2014, hundreds of civil cases have been filed against the company and a criminal indictment was made by the Federal Trade Commission against the company. In 2007, Adult FriendFinder settled with the Federal Trade Commission over allegations that the company had used malware to generate explicit pop-up ads for the service on computers without user consent. Adult FriendFinder's acquisition by Penthouse was the subject of a 2007 lawsuit by Broadstream Capital Partners, a merchant bank that assists with mergers, alleging Penthouse breached a 2006 contract by purchasing the company without obtaining
https://en.wikipedia.org/wiki/CDBA
CDBA can mean: Clearance Diver's Breathing Apparatus, types of naval diver's rebreather: Siebe Gorman CDBA; Carleton CDBA. Common data bus architecture, in computers. Current differencing buffered amplifier, in electronics. Centro de Big Data e Analytics, an analytics team. Dragon boat associations: California Dragon Boat Association, the governing body for dragon boat racing in California; Chinese Dragon Boat Association, the national governing body for dragon boat racing in China; Canberra Dragon Boat Association, the dragon boat body for the Australian Capital Territory.
https://en.wikipedia.org/wiki/Nag
Nag or NAG may refer to: Computers Nag, a multi user tasklist manager included in Horde (software) Numerical Algorithms Group, a software company NAG Numerical Library, numerical analysis software Numeric Annotation Glyphs, in computerized chess Music "Nag", a song on Joan Jett's album I Love Rock 'n' Roll Stage name of Jan-Erik Romøren of Norwegian band Tsjuder Organizations Neighbourhood action group, community volunteer groups in the United Kingdom Neue Automobil Gesellschaft, a defunct German automobile manufacturer Nordic Aviation Group, an Estonian airline company People Martin Nag, Norwegian writer Places Nag, Iran, a village in Kerman Province Nag Hammadi, in Upper Egypt Nag River, in India Nag Tibba, a mountain in Uttarakhand, India Religion Nag Dhunga, a sacred stone worshiped by the people of Nepal Nag Hammadi library, a collection of Gnostic texts discovered in Egypt in 1945 Nag Hammadi Codex II, a collection of early Christian Gnostic texts Nag Hammadi Codex XIII, a collection of early Christian Gnostic texts Nag Panchami, Hindu snake worship Nag Shankar, a temple in the Sonitpur district, India Other Nag, a cobra in Rudyard Kipling's Rikki-Tikki-Tavi Nāg, refers to the Indian cobra Nag (missile), a third generation "fire and forget" anti-tank missile Nag Champa, an Indian fragrance Nag Hammadi massacre, a massacre of Coptic Christians in Egypt in 2010 Nag Nag Nag, a former nightclub in London Nag Nathaiya (festival), in Varanasi, India Nag Vidarbha Andolan Samiti, a separatist political organization in Maharashtra, India Dr. Babasaheb Ambedkar International Airport (IATA code), Nagpur, India N-Acetylglucosamine, a biological molecule See also Naga (disambiguation) Nāga, a deity in the form of a serpent in Hinduism and Buddhism Ichchadhari Naags, shape-shifting Nāgas in Indian folklore Nago, a city at the Okinawa Island in Japan Nagu, a former municipality in Finland Nag's Head (disambiguation) Nagging
https://en.wikipedia.org/wiki/Nazeh%20Darwazi
Nazeh Darwazi (also romanized Darwazeh) (ca. 1958–1961 – 19 April 2003) was a Palestinian freelance cameraman for the US news agency Associated Press Television Network (APTN) and Palestinian state television. He was killed in Nablus in the West Bank while reporting, according to eyewitnesses, by a bullet to the head fired by an Israeli soldier from a distance of about 20 yards (6.9 m) after the soldier had pointed his weapon at a group of journalists. Darwazi was one of five journalists to die while reporting on the Second Intifada between 2002 and 2003. Early life Nazeh Darwazi was born and raised in Nablus. Various accounts of his age were reported. He lived in Nablus with his wife Raeda and their five children. His funeral was reported to have attracted over a thousand people. Career Nazeh Darwazi worked for The Associated Press for two years. He was also a journalist for Palestinian television. Death Eighteen Palestinians were injured by rubber bullets and live fire during clashes between Israeli troops and Palestinians throwing stones. Nazeh Darwazi was filming these clashes in the central Casbah district (Old City) when he was shot through the back of his head by an Israeli soldier positioned behind a tank 6.9 m away. Tony Loughran of ZeroRisk International, the independent investigator for APTN, corrected earlier accounts stating that Nazeh was shot through his right eye from the front, and validated this during the course of his investigation in April 2003. He was wearing a yellow jacket marked "press" and was with a group of around six journalists covering fights between the group of Palestinians and Israeli soldiers. The journalists said their group shouted in English and Hebrew, making clear that they were with the media. Three people filmed the event, including a Reuters cameraman.
At first the army claimed they were under attack by armed Palestinians who had been throwing explosives, but witnesses claimed the soldier shot the journalist in cold blood without any exchange of fire. Investigation ZeroRisk International (ZRI), commissioned by APTN, conducted a thorough investigation into Nazeh's death and produced a report in June 2003. This report not only made clear how Nazeh was targeted and shot, but also provided a number of security and safety recommendations to be implemented on journalist hostile-environment training courses. Reporters Without Borders also investigated the event and found that the army had not interviewed eyewitnesses. The soldiers had been questioned, but nobody was punished. The Palestinian Authority stated that Israel had "committed a war crime" by "opening fire on journalists and other civilians". ZeroRisk International and The Associated Press said two Palestinian cameramen, Hassan Titi of Reuters and Sami al-Assi of a Palestinian station, confirmed that they saw the soldier take aim and fire at the journalists. Video footage taken by Reuters confirms a soldier kn
https://en.wikipedia.org/wiki/Treehouse%20of%20Horror%20XV
"Treehouse of Horror XV" is the first episode of the sixteenth season of the American animated television series The Simpsons. It originally aired on the Fox network in the United States on November 7, 2004. In the fifteenth annual Treehouse of Horror, Ned Flanders' head injury gives him the power to predict others' deaths, Bart and Lisa play detective when a string of Victorian-era prostitutes are murdered by Jack the Ripper, and the Simpsons go on a fantastic voyage inside Mr. Burns' body to save Maggie. It was written by Bill Odenkirk and directed by David Silverman. Around 11.29 million Americans tuned in to watch the episode during its original broadcast. Airing on November 7, it holds the latest air date of any Treehouse of Horror episode (tied with Treehouse of Horror XXI), as it had to be held back a week due to Fox's contractual obligation to air the World Series. Plot Opening sequence Kang and Kodos star in a fictional sitcom entitled Keepin' it Kodos. In it, Kodos is preparing for their boss's visit by cooking dinner: Homer on a baking tray (continually eating himself), Bart on a skillet, Marge and Maggie in pies and Lisa in a soup (with Bart seemingly being the only family member to be in pain). The boss gives the meal a delicious rating, but his stomach bursts, liberating Bart. Kang and Kodos are given a hyper-galactic promotion, much to the aliens' delight. Bart is sad about the loss of his parents and sisters, but Kang and Kodos decide to adopt him, which comforts Bart. The theme song from Perfect Strangers plays as the Treehouse of Horror logo appears on the screen; an alien tentacle stamps the "XV" underneath, which makes it say the title of the episode in the fashion of the Mark VII Limited company logo. The Ned Zone In a parody of The Dead Zone, Homer tries to get his frisbee from the roof by throwing a bowling ball after it. The ball strikes a passing Ned Flanders on the head. When Ned recovers in Dr.
Hibbert's hospital, he has a vision of Hibbert falling out of a window to his death. Homer then asks Hibbert to retrieve his frisbee from a ledge on the hospital. As Hibbert reaches for the ledge, he slips out of the window, causing Ned's vision to come true. Ned realizes that he can see the deaths of people whom he touches. After he gets out of the hospital, he attempts to save Hans Moleman from falling down but has a vision of him being eaten by alligators. In shock, he drops Moleman into an open manhole with dozens of alligators swimming in it. He also predicts the closing of the Rosie O'Donnell musical, which he already suspected. A later vision depicts him shooting Homer, which horrifies Ned and he tries to conceal this from Homer. When Homer finds out, he taunts Ned and even gives him Chief Wiggum's gun to shoot him with, and says he could not even shoot him by accident. Ned refrains from shooting Homer, seemingly changing the future, but then has another vision of Homer blowing up Springfield by pressing the "Core Destruct" butt
https://en.wikipedia.org/wiki/Delgo
Delgo is a 2008 American computer-animated fantasy adventure film directed by Marc F. Adler and Jason Maurer, written by Scott Biear, Patrick J. Cowan, Carl Dream, and Jennifer A. Jones. It stars Freddie Prinze Jr., Jennifer Love Hewitt, Anne Bancroft, Chris Kattan, Louis Gossett Jr., Burt Reynolds, Eric Idle, Michael Clarke Duncan, Kelly Ripa, Val Kilmer, and Malcolm McDowell, with narration by Sally Kellerman. It was distributed by Freestyle Releasing, with music by Geoff Zanelli, and produced by Electric Eye Entertainment Corporation and Fathom Studios, a division of Macquarium Intelligent Communications, which began development of the project in 1999. Despite winning the Best Feature award at Anima Mundi, the film was widely panned by critics and audiences, and its box office was one of the lowest-grossing wide releases in recent history. Delgo grossed under $1 million in theaters against an estimated budget of $40 million. The film was released independently with a large screen count (over 2,000 screens) and a small marketing budget. As a result, it became a massive box office bomb, losing an estimated $46 million. 20th Century Fox later acquired the film rights for international and domestic home media distribution. Delgo was the final film for actors Anne Bancroft and John Vernon, both of whom died three years before its release. The film is dedicated to Bancroft. Plot After having left their own world due to a loss of natural resources, the winged humanoid Nohrin settle on Jhamora with the permission of the ground-dwelling Lokni. Would-be conqueror Sedessa leads those Nohrin who believe in their own racial superiority and try to take land away from the Lokni. The parents of Delgo, a Lokni, are killed in the resulting conflict. Nohrin King Zahn is horrified by the war and admonishes Sedessa, who then poisons the Queen and almost kills Zahn (who catches her) as well. She is subsequently banished, and her wings are clipped.
Delgo, meanwhile, is raised by Elder Marley, who tries to teach him how to use the power of magical stones. Delgo grows up, and he gives in to his desire for revenge against all Nohrin. He meets Nohrin Princess Kyla and develops a tentative friendship with her. When she is kidnapped by Nohrin General Raius, who is actually working for Sedessa, Delgo and his friend Filo are blamed and arrested. In the Nohrin prison, Delgo meets Nohrin General Bogardus, who was forced to illegally gamble with his weapons by Raius, because Bogardus opposed an all out war with the Lokni. Delgo, Filo, and Bogardus escape into some caverns and eventually reach Sedessa's stronghold and rescue Kyla. They return too late to avert a war taking place. Bogardus fights and defeats Raius, but he is mortally injured. Just as Bogardus dies from heavy wounds, Delgo realizes that he was the Nohrin soldier who spared his life many years ago during the first war between the Nohrin and the Lokni. Meanwhile, Sedessa's army of monsters joins in the battle.
https://en.wikipedia.org/wiki/Robert%20Woodhead
Robert J. Woodhead is an entrepreneur, software engineer and former game programmer. He claims that a common thread in his career is "doing weird things with computers". Career In 1979 he co-founded Sirotech (later known as Sir-Tech) with Norman Sirotek and Robert Sirotek. Along with Andrew C. Greenberg, he created the Apple II game Wizardry: Proving Grounds of the Mad Overlord, one of the first role-playing video games written for a personal computer, as well as several of its sequels. Woodhead designed the 1982 Apple II arcade game Star Maze, which was programmed by Gordon Eastman and sold through Sir-Tech. He told TODAY magazine in 1983, "I have loads of arcade game ideas, but lack the patience to do the actual coding. I'm sort of a big project person; I like the challenge of a program like Wizardry." Later, he authored Interferon and Virex, two of the earliest anti-virus applications for the Macintosh, and co-founded AnimEigo, one of the first US anime releasing companies. As a result of this venture, while living in Japan, he married his translator and interpreter, Natsumi Ueki, together with whom he has two children. He also ran a search engine promotion website called SelfPromotion.com. As a hobby, he builds combat robots, and his children, James Ueki and Alex Ueki, are the 2004 and 2005 Robot Fighting League National Champions in the 30 lb Featherweight class. Woodhead made a cameo appearance in the 1982 video game Ultima II as an NPC; when the player talked to him he would scream "Copy Protect!", a sarcastic reference to the extensive copy protection methods used in video games of the time. He also has a screen credit in the film Real Genius as their "Hacking Consultant". Woodhead has created two successful Kickstarter projects, "Bubblegum Crisis Ultimate Edition Blu-Ray Set" ($153,964 pledged on a $75,000 goal), and "BackerSupport" ($326 pledged on a $100 goal). 
Woodhead has also served on the Eve Online Council of Stellar Management with an in-game avatar name of Trebor Daehdoow. He was re-elected for 4 terms, serving in his last term as Chairman. References External links Animeigo homepage Family website Campaign page for CSM 7 election Robert Woodhead at Twitter Robert Woodhead at MobyGames Candidacy post for CSM 7 elections Eve Online Profile page for Trebor Daehdoow Eve Online Blog Audio interview for the CSM 8 election campaign
https://en.wikipedia.org/wiki/International%20Suppliers%20Network
The International Suppliers Network is a system which logs and tracks vendors. Major companies such as General Motors often use the ISN to establish the "trustworthy" status of a new vendor. The ISN also allows companies to import a validated version of a vendor's details directly into their own procurement system. Companies which have an ISN profile are automatically issued an ISN Rating, a rating of the company's stability and ability to manage its business; this makes the ISN profile a useful international identifier. General ratings range from -10 to 10, with a default value of 1, and are based on a number of key criteria, such as financial stability and trading-history performance. The ISN is regulated by the International Charter organization.
https://en.wikipedia.org/wiki/Jane%20%28Ender%27s%20Game%29
Jane is a fictional character in Orson Scott Card's Ender series. She is an energy based non-artificial sentient creature called an Aiúa that was placed within the ansible network by which spaceships and planets communicate instantly across galactic distances. She has appeared in the novels Speaker for the Dead, Xenocide, and Children of the Mind, and in a short story "Investment Counselor". Her 'face', a computer-generated hologram that she uses to talk to Ender, is described as plain and young, and it is illustrated in First Meetings as having a bun. This article is arranged to reflect the Ender timeline. However, the Ender Quartet: Ender's Game (1985), Speaker for the Dead (1986), Xenocide (1990), and Children of the Mind (1994) was written first; then Ender's Shadow (1999), First Meetings (2004), and Shadow of the Giant (2005). Origin Ender's Game The Fantasy Game is the faculty's primary method of obtaining information about their students in Ender's Game. It is designed to secretly map out the psyche of the players, providing valuable data on each student's thoughts and decision-making processes. Colonel Graff refers to the game sarcastically as the Mind Game. In the course of its service at the Battle School, the Game successfully analyzes every student but one: Bean, who realizes the game's true purpose and refuses to play. The student chooses a character and plays through a number of scenarios. One scenario, called the Giant's Drink, offers the player a choice between two beverages, with the promise of admission to "Fairyland" with the correct choice. However, the scenario is actually a no-win situation which invariably results in the player's death. By tracking how frequently a player attempts it, the Giant's Drink detects and warns teachers of suicidal tendencies among the students. To their consternation, Ender confronts the Giant obsessively, failing dozens of times. 
Finally, he refuses to choose a drink and instead attacks the Giant, killing it and becoming the first student to enter Fairyland; the Game generates this new scenario on the spot, orienting it specifically to Ender himself. This creates a deep connection between him and the Game, with significant consequences later on. Ender's Shadow The Fantasy Game is discussed in greater depth in Ender's Shadow. It is described by the teachers themselves as an extremely complex program that generates content procedurally. The Mind Game is never meant to be conclusive; it only makes connections and discovers patterns that are too subtle for the human eye. Shadow of the Giant In Shadow of the Giant, when Bean suspects Peter Wiggin of embezzling Ender's trust fund for his Hegemony's uses, he recalls the nature of the Fantasy Game and requests that Graff place it in charge of Ender's trust fund. The Game, whose original purpose was to seek out patterns across wide fields of data, is modified to predict markets and invest Ender's trust fund appropriately. Alarmingly effective in thi
https://en.wikipedia.org/wiki/Unode
Unode is a short form of "underground node": a script or program that combines other programs to create a decentralized, anonymous, encrypted communication network. The other programs include Entropy, Mixmaster, GPG, and NEWSPOST, with plugins for more. Unode is a project to create a set of bash scripts to help activists communicate without revealing their IP addresses or other personal data. Some of these scripts are used to forward email lists to the newsgroup alt.activism.underground. By doing this, activists can read important action alerts and other information without being subscribed to the email lists on the servers that run them. Email headers are removed from these posts, and email addresses are altered to help avoid spam. Unix bash scripts are used because it is easy for others to alter or correct errors in the scripts, and because they are an easy way to try out new ideas. The scripts serve as a rough draft before an eventual move to stable binary programs. References External links http://newspost.unixcab.org/ http://www.freelists.org/list/unode http://unode.8m.com
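A minimal sketch of the kind of header scrubbing and address munging the forwarding scripts perform before a post reaches the newsgroup. This is an illustrative reimplementation, not the project's actual code: the real scripts were written in bash, and the function name and the list of headers kept here are assumptions.

```python
import re

def scrub_message(raw):
    """Strip identifying headers and munge addresses before forwarding
    a list message to the newsgroup (hypothetical reimplementation)."""
    header, _, body = raw.partition("\n\n")
    # Keep only non-identifying headers; drop From:, Received:, etc.
    kept = [line for line in header.splitlines()
            if line.startswith(("Subject:", "Date:"))]
    # Alter addresses in the body so they cannot be harvested for spam
    body = re.sub(r"([\w.+-]+)@([\w.-]+)", r"\1 AT \2", body)
    return "\n".join(kept) + "\n\n" + body
```

The same transformation is straightforward with `sed` in a bash pipeline, which is presumably why the project favored shell scripts as an easily edited rough draft.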
https://en.wikipedia.org/wiki/On%20the%20Cover
On the Cover may refer to: On the Cover, an extended play by the American punk band MxPx On the Cover, a 2004 game show broadcast on the former PAX network; see List of programs broadcast by Ion Television
https://en.wikipedia.org/wiki/Online%20codes
In computer science, online codes are an example of rateless erasure codes. These codes can encode a message into a number of symbols such that knowledge of a sufficiently large fraction of them allows one to recover the original message (with high probability). Rateless codes produce an arbitrarily large number of symbols which can be broadcast until the receivers have enough symbols. The online encoding algorithm consists of several phases. First the message is split into n fixed-size message blocks. Then the outer encoding is an erasure code which produces auxiliary blocks that are appended to the message blocks to form a composite message. From this the inner encoding generates check blocks. Upon receiving a certain number of check blocks, some fraction of the composite message can be recovered. Once enough has been recovered, the outer decoding can be used to recover the original message. Detailed discussion Online codes are parameterised by the block size and two scalars, q and ε. The authors suggest q=3 and ε=0.01. These parameters set the balance between the complexity and performance of the encoding. A message of n blocks can be recovered, with high probability, from (1+3ε)n check blocks. The probability of failure is (ε/2)^(q+1). Outer encoding Any erasure code may be used as the outer encoding, but the authors of online codes suggest the following. For each message block, pseudo-randomly choose q auxiliary blocks (from a total of 0.55qεn auxiliary blocks) to attach it to. Each auxiliary block is then the XOR of all the message blocks which have been attached to it. Inner encoding The inner encoding takes the composite message and generates a stream of check blocks. A check block is the XOR of all the blocks from the composite message that it is attached to. The degree of a check block is the number of blocks that it is attached to.
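The outer and inner encoding steps above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the helper names are invented, the pseudo-random choices use a seeded generator for reproducibility, and for brevity the check block's degree is passed in directly rather than sampled from the distribution p discussed below.

```python
import random

def xor_blocks(a, b):
    # XOR two equal-length byte blocks
    return bytes(x ^ y for x, y in zip(a, b))

def outer_encode(message_blocks, q=3, eps=0.01, seed=0):
    """Outer encoding: attach each message block to q pseudo-randomly
    chosen auxiliary blocks (0.55*q*eps*n in total); each auxiliary
    block is the XOR of the message blocks attached to it."""
    n = len(message_blocks)
    num_aux = max(1, int(0.55 * q * eps * n))
    rng = random.Random(seed)
    aux = [bytes(len(message_blocks[0]))] * num_aux
    for blk in message_blocks:
        for i in rng.sample(range(num_aux), min(q, num_aux)):
            aux[i] = xor_blocks(aux[i], blk)
    return message_blocks + aux  # the composite message

def make_check_block(composite, degree, seed):
    """Inner encoding: a check block is the XOR of `degree` blocks
    chosen uniformly from the composite message. The chosen indices
    are returned alongside so a decoder knows the attachments."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(composite)), degree)
    blk = composite[idx[0]]
    for i in idx[1:]:
        blk = xor_blocks(blk, composite[i])
    return idx, blk
```

Because check blocks are plain XORs, a receiver that knows all but one of a check block's attached composite blocks can recover the missing one by XOR-ing the rest back in, which is exactly the inner-decoding step described in the Decoding section.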
The degree is determined by sampling a random distribution, p, which is defined as:

ρ_1 = 1 − (1 + 1/F) / (1 + ε)

ρ_i = (1 − ρ_1) F / ((F − 1) i (i − 1)) for 2 ≤ i ≤ F

where F = ⌈ln(ε²/4) / ln(1 − ε/2)⌉.

Once the degree of the check block is known, the blocks from the composite message which it is attached to are chosen uniformly. Decoding The decoder of the inner stage must hold check blocks which it cannot currently decode: a check block can only be decoded when all but one of the blocks which it is attached to are known. The graph to the left shows the progress of an inner decoder. The x-axis plots the number of check blocks received, and the dashed line shows the number of check blocks which cannot currently be used. This climbs almost linearly at first, as many check blocks with degree > 1 are received but unusable. At a certain point, some of the check blocks suddenly become usable, resolving more blocks, which in turn causes more check blocks to become usable. Very quickly the whole file can be decoded. As the graph also shows, the inner decoder falls just shy of decoding everything for a little while after having received n check blocks. The outer encoding ensures that a few elusive blocks from the inner decoder are not an i