https://en.wikipedia.org/wiki/GWN7
GWN7 was an Australian television network serving all of Western Australia outside metropolitan Perth. It launched on 10 March 1967 as BTW-3 in Bunbury. It was an affiliate of the Seven Network and served one of the largest geographic television markets in the world, covering almost one-third of the continent. The network's name, GWN, is an acronym of Golden West Network, the name it carried from 1979 until the GWN7 name was adopted in 2011. In 2021, GWN7's parent company Prime Media Group merged with Seven and the brand was retired in 2022.

History

Origins

GWN began life as a group of smaller, independent stations:
10 March 1967: BTW–3 Bunbury
28 August 1968: GSW–9 Mount Barker – relay
29 August 1974: GSW–10 Albany – relay
18 June 1971: VEW–8 Kalgoorlie
29 November 1971: VEW–3 Kambalda – relay
21 January 1977: GTW–11 Geraldton

Prior to these stations signing on, remote Western Australia had been one of the few areas of Australia without local television; the only television outlets in the area were relays of ABC Television out of Perth. Jack Bendat purchased South West Telecasters (owner of BTW/GSW) in 1979, and changed the company's name to Golden West Network. GWN applied to broadcast an additional service on 31 October 1984, when the Australian Broadcasting Tribunal called for applications to broadcast to Christmas Island and the Cocos (Keeling) Islands via satellite under the Remote Commercial Television Service (RCTS) scheme. GWN was granted the RCTS licence in June 1985 and the service went to air on 18 October 1986 using the call-sign WAW. Not long after, GWN continued to expand within Western Australia, acquiring Mid-Western Television (owner of VEW-8 Kalgoorlie) in December 1985 for $7 million, and Geraldton Telecasters (owner of GTW-11) in March 1987 for an undisclosed amount. The takeovers gave the network a monopoly over all commercial television services in regional Western Australia. In 1987, Bendat and Kerry Stokes merged their media interests into the joint company BDC Investments. Later that year, Northern Star Holdings purchased BDC for $206 million. Northern Star were forced to sell GWN to satisfy existing media regulations. GWN was sold back to Stokes in December 1988 for $54 million; he upgraded equipment throughout the network. In April 1990, the callsigns BTW and GSW were merged to become SSW. During the late 1980s, GWN was promoted as GWN Satellite Television and aired programs mostly from the Nine Network plus a few from Seven and Ten, with STW's Channel Nine (later National Nine) News (from Perth) providing the national news link.

1990s to the 2000s

Kerry Stokes gained control of the Seven Network in 1995, and attempted to sell GWN to Seven in return for more shares. Seven Network shareholders agreed to the trade in April 1996 – a deal which would have seen Seven acquire GWN for $72.8 million, thus becoming the regional network affiliate for Western Australia. The arrangement was called
https://en.wikipedia.org/wiki/Channel%209
Channel 9 or TV 9 may refer to:

Television networks, channels and stations

Asia and Pacific
Channel 39 (New Zealand TV channel), formerly Channel 9, a regional television station in Dunedin, New Zealand
Channel 9 (Bangladeshi TV channel), a satellite TV channel from Bangladesh
DZKB-TV, commonly known as Channel 9, the flagship television station of Radio Philippines Network in Manila, Philippines
HTV9, a channel of Ho Chi Minh City Television, Vietnam
Modernine TV, formerly known as Thai TV Channel 4 and Channel 9 M.C.O.T.
Nine Network, an Australian commercial television network commonly known as Channel 9
TV9 Bangla, a Bengali-language news channel in India
TV9 Bharatvarsh, a Hindi-language news channel in India
TV9 Gujarati, a Gujarati-language 24-hour news channel in Gujarat, India
TV9 Kannada, a 24-hour Kannada-language news channel in India
TV9 (Malaysian TV network), a free-to-air private television network in Malaysia
TV9 Mongolia, a private television station in Mongolia
TV9 Telugu, a Telugu-language news channel in India

Europe and Middle East
9Live, a German TV channel
C9TV, a local television station based in Derry, Northern Ireland, 1999–2012
Canal 9 (Danish TV channel), a Danish television channel owned by Discovery Networks
Canal 9 (Norwegian TV channel), a Norwegian television channel owned by TV4 Group and C More Entertainment
Canal Nou, in the Valencian Community, Spain
SBS 9, a commercial TV channel in the Netherlands
Channel 9 (Greece), a Greek television channel in the Attica region
Channel 9 (Israeli TV channel), formerly Israel Plus, a television station in Israel
Kanal 9, a commercial television channel in Sweden
Kanal 9 (Serbian TV channel), one of three regional television stations in the Šumadija and Pomoravlje Region

South and Central America
ATV (Peruvian TV channel), formerly Canal 9, in Peru
Canal 9 (Costa Rican TV channel), a television station in Costa Rica, 1994–2000
Canal 9 (Nicaraguan TV channel), a television channel in Nicaragua
Channel 9 (La Rioja, Argentina), a government television channel in the provinces of La Rioja and Catamarca, Argentina
El Nueve or Azul Televisión, a general entertainment television network based in Buenos Aires, Argentina, formerly Canal 9
Sistema Nacional de Televisión (Paraguayan TV channel), formerly Canal 9, in Paraguay
Telefe Bahía Blanca, a private television channel broadcasting on channel 9 in Bahía Blanca, Buenos Aires, Argentina

Other uses
Channel 9 (Microsoft), part of MSDN that publishes videos and podcasts on software development
Chanel 9, a fictional television channel on the BBC television series The Fast Show

See also
Citizens band radio channel 9 (27.065 MHz), reserved for emergency and distress calls
Channel 9 branded TV stations in the United States
Channel 9 virtual TV stations in Canada
Channel 9 virtual TV stations in Mexico
Channel 9 virtual TV stations in the United States
For VHF frequencies covering 186-192 MHz: Channel 9 TV stations in Canada C
https://en.wikipedia.org/wiki/Cyber%20%28Marvel%20Comics%29
Cyber is a supervillain appearing in American comic books published by Marvel Comics. The character is usually depicted as an enemy of Wolverine of the X-Men. Created by writer Peter David and artist Sam Kieth, he first appeared in Marvel Comics Presents #85 (Sept. 1991), though his physical appearance was obscured by a trench coat and hat. He was first fully seen and named in Marvel Comics Presents #86 (Sept. 1991).

Fictional character biography

Origin

Silas Burr is believed to have been born in Canada. He was an agent for the Pinkerton National Detective Agency, and in the spring of 1912 he was put on trial in Sioux City, Iowa. He was found guilty on 22 counts of murder and sentenced to death by hanging. Escaping from the courthouse, Cyber arrived at a Western Canadian military base. In the Canadian Army, a new employer named Frederick Hudson took a special interest in Burr's unique ability to push the men under his command beyond their moral and emotional limits. Cyber's earliest known confrontation with Logan seemingly occurred around World War I, when he served as Logan's brutal drill instructor during Logan's early days in the military. Cyber is given instructions to focus his attention on Logan in particular, and eventually receives orders to murder a woman at the base known only as Janet, in whom Logan was interested romantically, to further dehumanize Logan as part of his conditioning. After witnessing her death at Cyber's hands, Logan attacks and is severely beaten as Cyber effortlessly gouges out Logan's left eye. This is Logan's most severe defeat up to this point in his life, and the resulting psychological effects leave him with a deep-seated fear of Cyber. Without any memory of Burr's abuse, Logan again finds himself under the command of Burr while enlisted with the Devil's Brigade during World War II. He introduces Logan to U.S. Army soldier Nick Fury ahead of the clandestine mission to rescue Captain America from German-occupied North Africa. After returning from Indochina, Burr spends nine months in 1959 training his finest student, Daken (Logan's son), before the boy is secretly ordered to destroy the training camp and everyone associated with it, including its commander. Eviscerated and shot by Daken, Burr is spared from death, having been chosen by Romulus to be the prototype for the Adamantium bonding process, and has the metal permanently bonded to his skin.

Modern era

In the modern era, Cyber resurfaces in Madripoor as an enforcer for an unnamed drug cartel, where he intervenes in the rivalry between the crime cartels of Wolverine's ally Tyger Tiger and General Coy. With the exception of his Adamantium enhancements, Cyber's appearance remains unchanged, indicating that he ages much more slowly than an ordinary human. Wolverine, after running from the fight and barely escaping their latest encounter with his life, eventually manages to overcome his fear of Cyber to save Tyger's life, as he bites out the villain's left eye before he falls into a
https://en.wikipedia.org/wiki/10%20%28Southern%20Cross%20Austereo%29
10 is an Australian television network distributed by Southern Cross Austereo (SCA) in regional Queensland, southern New South Wales, the Australian Capital Territory, regional Victoria, the Spencer Gulf, and Broken Hill. SCA's network is the primary affiliate of Network 10 in most regional areas.

History

Origins

Southern Cross began as a small network of three stations in regional Victoria. The Southern Cross TV8 network comprised GLV-10 Gippsland, BCV-8 Bendigo, and STV-8 Mildura. GLV was the first regional television station in the country, launched on 9 December 1961. BCV-8 launched in the same year, on 23 December, while STV followed four years later, on 27 November 1965. GLV-10 became GLV-8 in 1980, when Melbourne commercial station ATV-0 moved frequencies to become ATV-10. The network began life in 1982 as Southern Cross TV8, but changed its name in 1989 to the Southern Cross Network. Soon after this, STV-8 left the network after it was bought by businessman Alan Bond, and was eventually sold on to ENT Limited (owners of Vic TV and Tas TV).

1992–2016: 10 affiliation

Regional Victoria was aggregated in 1992. VIC Television, based in Shepparton and Ballarat, affiliated with the Nine Network, while Prime Television, based in Albury-Wodonga, became an affiliate of the Seven Network. Southern Cross therefore took on an affiliation with Network Ten. Soon after, it changed its name and logo to SCN, directly emulating the look of its metropolitan counterpart. Local news was axed six months later, while the name and logo changed once again to Ten Victoria, alongside the new names Ten Capital, Ten Northern NSW and Ten Queensland, as the stations incorporated the Network Ten logo into their branding. Canberra-based station Capital Television was purchased by Southern Cross' owner, Southern Cross Broadcasting, in 1994. It was soon integrated into the network, taking on the name Ten Capital soon after. Southern Cross Broadcasting acquired Telecasters Australia in 2001. As a result, Ten Queensland and Ten Northern NSW became a part of the Southern Cross Ten network, while Telecasters' other assets – Seven Darwin and Seven Central – were later integrated into the Southern Cross network. Local news bulletins in Canberra and parts of Queensland were axed on 22 November 2001 – one of a number of moves taken by Southern Cross and competitor Prime Television that resulted in an investigation by the Australian Broadcasting Authority into the adequacy of regional news. The network expanded into the Spencer Gulf and Broken Hill areas on 31 December 2003 under a supplementary licence granted to Southern Cross GTS/BKN by the ABA. Southern Cross Ten moved away from generic Network Ten branding – in use since the early 1990s for most areas – with a new logo, similar to that of parent company Southern Cross Broadcasting, in 2005. Three-minute local news updates were introduced in 2004, following recommendations put into place after the ABA's report. The bri
https://en.wikipedia.org/wiki/Mahindra%20Satyam
Mahindra Satyam (formerly Satyam Computer Services Limited) was an Indian information technology (IT) services company based in Hyderabad, India, offering software development, system maintenance, packaged software integration and engineering design services. Satyam Computer Services was listed on the Pink Sheets, the National Stock Exchange and the Bombay Stock Exchange, and provided services to a wide range of customers including 185 Fortune 500 companies. In January 2009, the company's founder and chairman Ramalinga Raju admitted to inflating the company's assets by $1 billion, leading to criminal charges and a collapse of the company's stock price. This became known as the Satyam scandal. Mahindra Group's IT arm, Tech Mahindra, purchased a major stake in the company, and in June 2009 the company renamed itself Mahindra Satyam. Mahindra Satyam merged with Tech Mahindra on 24 June 2013.

History

Satyam Computer Services was founded in 1987 and by 2008 earned revenues of over $2 billion, employing 52,000 IT professionals across the world. It was one of India's top five IT companies, and focused on the enterprise segment. It had an extensive client list including 185 Fortune 500 companies. The company was the subject of what was called India's biggest corporate scandal in January 2009, when then-chairman Byrraju Ramalinga Raju admitted in a letter to the Securities and Exchange Board of India that the corporate accounts had been falsified, adding approximately $1 billion to the company's cash and cash-related assets. The government appointed a board to oversee the sale of the company. Tech Mahindra offered to purchase a majority stake in April 2009, and the company was rebranded as Mahindra Satyam in June 2009. Tech Mahindra announced its plan to merge with Mahindra Satyam on 21 March 2012, after the boards of both companies gave their approval. The Bombay Stock Exchange, the National Stock Exchange, and the Competition Commission of India (CCI) also approved. Shareholders unanimously approved the move in January 2013. The merger ran into delays due to ambiguity over jurisdiction between investigating agencies and the government, and two tax cases totaling over ₹27 billion. On 11 June 2013 the Andhra Pradesh High Court approved the merger, after the Bombay High Court had given its approval. A new management structure was announced for the new entity, led by Anand Mahindra as Chairman, Vineet Nayyar as Vice Chairman and C. P. Gurnani as the CEO and Managing Director. The merger was announced to be complete on 25 June 2013, creating India's fifth-largest software services company with a turnover of US$2.7 billion. Mahindra Satyam shareholders received two shares of Tech Mahindra stock for every 17 shares of Mahindra Satyam stock they owned. Shares in the new entity began trading on 12 July 2013. On 24 July 2013, a division bench of the Andhra Pradesh High Court admitted a petition filed by Ekadanta Greenfields and Saptaswara Agro Farms challenging the Mahindra Satyam-Te
https://en.wikipedia.org/wiki/IEEE%20802.9
The 802.9 Working Group of the IEEE 802 networking committee developed standards for integrated voice and data access over existing Category 3 twisted-pair network cable installations. Its major standard was usually known as isoEthernet. IsoEthernet combines 10 Mbit/s Ethernet with ninety-six 64 kbit/s ISDN B channels. It was originally developed to provide data and voice/video over the same wire without degradation, by fixing the amount of bandwidth assigned to the Ethernet and B-channel sides. There was some vendor support for isoEthernet, but it lost in the marketplace due to the rapid adoption of Fast Ethernet, and the working group was disbanded.

References

IEEE Std 802-1990: IEEE Standards for Local and Metropolitan Networks: Overview and Architecture. New York: 1990.

IEEE 802 Telecommunications standards
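A quick arithmetic check on those figures (back-of-the-envelope, not taken from the standard's text): 96 × 64 kbit/s = 6.144 Mbit/s of isochronous capacity, and 10 Mbit/s + 6.144 Mbit/s = 16.144 Mbit/s, so an isoEthernet link carried roughly 16 Mbit/s of combined payload over the same Category 3 wiring.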
https://en.wikipedia.org/wiki/ARN%20Regional
ARN Regional is an Australian regional radio network founded after the purchase of a group of radio stations from Grant Broadcasters by ARN parent Here, There & Everywhere. It includes a small number of metropolitan radio stations. History Grant Broadcasters was founded by Walter Grant in 1942 when he bought 2DU in Dubbo. In 1972 a shareholding in 2ST in Nowra was purchased followed in 1979 by 2PK in Parkes and in 1982 2MG in Mudgee. In 1986 2DU, 2PK and 2MG were sold with full ownership taken of 2ST. Over the next three decades, the company expanded through acquisition, purchasing radio stations in all states and territories of Australia, owning 53 stations by November 2021. In January 2022, Here, There & Everywhere purchased 46 stations from Grant Broadcasters and integrated these into its ARN business. The deal was finalised on 4 January 2022. Radio stations Contemporary Hits Radio (CHR) Hot 100 100.1 FM and DAB+ - Darwin, translators at Adelaide River 102.1 FM, and Katherine 765 AM Hot Tomato, Gold Coast Chilli FM 90.1 Chilli FM 90.1 FM - Launceston, translator at Launceston City 101.1 FM 99.7 Chilli FM 99.7 FM - Scottsdale, translators at Weldborough 94.5 FM, St Marys 103.5 FM and St Helens 90.5 FM Magic FM Magic FM 89.9 FM - Port Lincoln, translator at Cleve on 97.1 FM Magic FM 93.1 FM - Berri, translator at Waikerie on 97.1 FM Magic FM 105.9 FM - Spencer Gulf, translator at Roxby Downs on 100.3 FM and Quorn on 100.7 FM Power FM Network Power FM 102.5 FM - Bega, translator at Batemans Bay 104.3 FM Power FM 98.1 FM - Muswellbrook, translator at Merriwa 102.7 FM Power FM 94.9 FM - Nowra Power FM 103.1 FM - Ballarat Power FM 98.7 FM - Murray Bridge, South Australia, translators at Mount Barker 100.3 FM and Victor Harbor 99.7 FM Sea FM Sea FM 107.7 FM - Devonport Sea FM 101.7 FM - Burnie Adult Contemporary 7HO FM 101.7 FM - Hobart Mix 104.9 104.9 FM and DAB+ - Darwin, translators at Adelaide River 98.1 FM, Bathurst Island 89.7 FM, and Katherine 106.9 FM River 94.9 94.9 FM - Ipswich Hitz FM 93.9 FM - Bundaberg Hot 91 91.1 FM - Sunshine Coast 96.5 Wave FM 96.5 FM - Wollongong 89.3 LAFM 89.3 FM - Launceston, translator at Launceston City 100.3 FM. 4CC, 927 AM Gladstone, 1584 AM Rockhampton, 666 AM Biloela, 98.3 FM Agnes Water, Queensland. Rock Zinc/Star Network Star 102.7 Cairns Star 101.9 Mackay Star 106.3 - Townsville Zinc 96.1 Sunshine Coast Power 100.7 Townsville Classic/Adult Hits 2EC 765 AM - Bega, translators at Batemans Bay 105.9 FM, Narooma 1584 AM and Eden 105.5 FM. Batemans Bay has some local programming. 2NM 981 AM - Muswellbrook 2ST 999 AM - Nowra, (Format: Music and Talk), translators at Ulladulla 106.7 FM, Shoalhaven 91.7 FM and Bowral 102.9 FM. Bowral has some local programming. 3BA 102.3 FM - Ballarat Gold Ten-71 1071 AM - Maryborough (Victoria), translator at Bendigo 98.3 FM River 1467AM 1467 AM - Mildura 4BU 1332 AM - Bundaberg 4CA 846 AM - Cairns 4MK 1026 AM - Mackay 4RO 990 AM - Rockhampton 5AU 97.9 FM - Port
https://en.wikipedia.org/wiki/ATOS
ATOS (Autonomous decentralized Transport Operation control System) is a computerized control system used by the East Japan Railway Company to regulate train traffic on railway lines in metropolitan Tokyo, Japan. It was designed by Hitachi. The first deployment was on the Chūō Main Line in 1997. It is now used on the lines listed below. On ATOS-enabled lines, each train station has electronic displays, which show scheduled arrival times and train destinations in Japanese and English, warn passengers when trains are arriving or passing through, send updates on system delays and accidents, and display messages to advertise JR products or warn passengers not to smoke. Pre-recorded voice announcements in train stations are also automated by ATOS. ATOS also directs train drivers through 16-by-16 lamp matrices, which flash messages telling the train driver to speed up, slow down, or adjust their scheduled departure time in order to keep the entire network running on schedule. Several JR lines in the Kantō region use CTC or PRC systems in lieu of ATOS.

ATOS-enabled lines

Chūō Main Line in metro Tokyo (March 1997)
Yamanote Line (August 1998)
Keihin-Tohoku Line (August 1998)
Sobu Line (Rapid and Local) west of Chiba (June 1999)
Yokosuka Line north of Ofuna (July 2000)
Tōkaidō Main Line (passenger and freight) (September 2001)
Joban Line (Rapid and Local) (January 2004)
Tohoku Main Line (December 2004)
Takasaki Line (December 2004)
Saikyo Line (July 2005)
Kawagoe Line (July 2005)
Yamanote Freight Line (July 2005)
Nambu Line (March 2006)
Tohoku Freight Line (December 2004)
Yokosuka Line (Ofuna - Kurihama) (November 2009)
Musashino Line (January 2012)
Yokohama Line (July 2015)
Keiyō Line (September 2016)

Future ATOS-enabled lines

Ōme Line ( - )
Itsukaichi Line

Further reading

JR East Technical Review has several articles about digital train control and ATOS.

See also

Autonomous decentralized system
Rail transport in Tokyo
Rail transport operations
https://en.wikipedia.org/wiki/Parlay%20X
Parlay X was a set of standard Web service APIs for the telephone network (fixed and mobile). It is defunct, having been replaced by OneAPI, the GSM Association's current standard for third-party telecom APIs. It enabled software developers to use the capabilities of an underlying network. The APIs are deliberately high-level abstractions and designed to be simple to use. An application developer can, for example, invoke a single Web service request to get the location of a mobile device or initiate a telephone call. The Parlay X Web services were defined jointly by ETSI, the Parlay Group, and the Third Generation Partnership Project (3GPP). The OMA has maintained the specifications for 3GPP Release 8. The APIs are defined using Web service technology: interfaces are defined using WSDL 1.1 and conform to the Web Services Interoperability (WS-I) Basic Profile. The APIs are published as a set of specifications. In general, Parlay X provides an abstraction of functionality exposed by the more complex, but functionally richer, Parlay APIs. ETSI provides a set of (informative, not normative) Parlay X to Parlay mapping documents. Parlay X services have been rolled out by a number of telecom operators, including BT, Korea Telecom, T-Com, Mobilekom and Sprint.

External links

The last version of the original Parlay X website, as of 2013 on archive.org
OneAPI, the GSMA's current API, which carries on the aims of Parlay X
Parlay X Version 3.0 Specifications
Parlay X Version 2.1 Specifications
Parlay X Version 4.0 Specifications
Parlay X, Ericsson Developer Program
Java SE Components for Telecom Web Services (Parlay X made easy through JavaBeans)
Telecom Web Services Network Emulator (Parlay X emulator)
Getting started with ParlayX

Telecommunications standards Application programming interfaces Web services
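To make the "single Web service request" idea concrete, here is a minimal C sketch that POSTs a SOAP request with libcurl. It is illustrative only: the endpoint URL, XML namespace, operation and parameter names are placeholders invented for the example, not taken from the actual Parlay X WSDLs, and a real deployment would also add authentication and parse the SOAP response.

#include <stdio.h>
#include <curl/curl.h>

/* Placeholder SOAP body for a hypothetical terminal-location request. */
static const char *soap_body =
    "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
    "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\""
    " xmlns:loc=\"http://example.com/parlayx/terminal_location\">"
    "<soapenv:Body><loc:getLocation>"
    "<loc:address>tel:+15551234567</loc:address>"
    "</loc:getLocation></soapenv:Body></soapenv:Envelope>";

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    struct curl_slist *hdrs =
        curl_slist_append(NULL, "Content-Type: text/xml; charset=utf-8");
    /* Hypothetical operator endpoint. */
    curl_easy_setopt(h, CURLOPT_URL,
                     "https://operator.example.com/TerminalLocationService");
    curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(h, CURLOPT_POSTFIELDS, soap_body);
    CURLcode rc = curl_easy_perform(h);   /* response body goes to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(h);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}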
https://en.wikipedia.org/wiki/EO%20Personal%20Communicator
The EO is an early commercial tablet computer that was created by Eo, Inc. (later acquired by AT&T Corporation) and released in April 1993. Eo (Latin for "I go") was the hardware spin-out of GO Corporation. Officially named the AT&T EO Personal Communicator, it is similar to a large personal digital assistant with wireless communications, and competed against the Apple Newton. The unit was produced in conjunction with David Kelley Design, frog design, and the Matsushita, Olivetti and Marubeni corporations. Among the EO customers AT&T claimed were the New York Stock Exchange, Andersen Consulting, Lawrence Livermore Laboratories, FD Titus & Sons and Woolworths. Eo, Inc., 52 percent owned by AT&T, shut down operations on July 29, 1994, after failing to meet its revenue targets and to secure the funding to continue. It was reported that 10,000 of the computers had been sold. In 2012, PC Magazine called the AT&T EO 440 "the first true phablet".

Product specifics

Two models, the Communicator 440 and 880, were produced; each is about the size of a small clipboard. Both are powered by the AT&T Hobbit chip, created by AT&T specifically for running code written in the C programming language. They feature I/O ports such as modem, parallel, serial, VGA out and SCSI. The devices come with a wireless cellular network modem, a built-in microphone with speaker, and a free subscription to AT&T EasyLink Mail for both fax and e-mail messages. The operating system, PenPoint OS, was created by GO Corporation. Widely praised for its simplicity and ease of use, the OS did not gain widespread use. The applications suite, Perspective, was licensed to EO by Pensoft.

See also

Pen computing
History of tablet computers
Celeste Baranski

Notes

External links

The EO 440 And EO 880 (subscription required)
EO 440 receives one of 1993 Byte Awards
Personal retrospective about working for EO

Computer-related introductions in 1993 AT&T computers Personal digital assistants Tablet computers
https://en.wikipedia.org/wiki/Thundering%20herd%20problem
In computer science, the thundering herd problem occurs when a large number of processes or threads waiting for an event are all awoken when that event occurs, but only one of them is able to handle it. All of them compete for the resources needed to handle the event, possibly overwhelming the system, until the herd is calmed down again.

Mitigation

The Linux kernel serializes responses for requests to a single file descriptor, so only one thread or process is woken up. For epoll(), the EPOLLEXCLUSIVE flag was added in version 4.5 of the Linux kernel. Thus several epoll sets (different threads or different processes) may wait on the same resource and only one set will be woken up. For certain workloads this flag can significantly reduce processing time. Similarly, in Microsoft Windows, I/O completion ports can mitigate the thundering herd problem, as they can be configured such that only one of the threads waiting on the completion port is woken up when an event occurs. In systems that rely on a backoff mechanism (e.g. exponential backoff), the clients retry failed calls by waiting a specific amount of time between consecutive retries. To avoid the thundering herd problem, jitter can be purposefully introduced to break the synchronization across the clients, thereby avoiding collisions: randomness is added to the wait intervals between retries, so that clients are no longer synchronized.

See also

Process management (computing)
Lock convoy
Sleeping barber problem
TCP global synchronization
Cache stampede

References

External links

A discussion of this observation on Linux
Better Retries with Exponential Backoff and Jitter

Concurrency control
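A minimal sketch of the EPOLLEXCLUSIVE pattern described above, assuming Linux 4.5 or later; error handling is trimmed, and a production server would use a non-blocking listening socket. Each worker process registers the same listening socket in its own epoll set with the exclusive flag, so the kernel no longer wakes every worker for every incoming connection.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Each worker gets its own epoll instance but watches the same socket. */
static void worker(int listen_fd) {
    int epfd = epoll_create1(0);
    struct epoll_event ev = {0}, ready;
    ev.events = EPOLLIN | EPOLLEXCLUSIVE;   /* exclusive wakeup, Linux >= 4.5 */
    ev.data.fd = listen_fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
        perror("epoll_ctl");
        exit(1);
    }
    for (;;) {
        if (epoll_wait(epfd, &ready, 1, -1) == 1) {
            int conn = accept(listen_fd, NULL, NULL);
            if (conn >= 0) {
                printf("pid %d handled a connection\n", (int)getpid());
                close(conn);
            }
        }
    }
}

int main(void) {
    struct sockaddr_in addr = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(9090);             /* arbitrary demo port */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 64);
    for (int i = 0; i < 3; i++)              /* three competing workers */
        if (fork() == 0)
            worker(fd);                      /* never returns */
    worker(fd);
}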
https://en.wikipedia.org/wiki/Darknet
A dark net or darknet is an overlay network within the Internet that can only be accessed with specific software, configurations, or authorization, and often uses a unique customized communication protocol. Two typical darknet types are social networks (usually used for file hosting with a peer-to-peer connection), and anonymity proxy networks such as Tor via an anonymized series of connections. The term "darknet" was popularized by major news outlets as a term for Tor onion services after the infamous drug bazaar Silk Road used one, despite the terminology being unofficial. Technologies such as Tor, I2P, and Freenet are intended to defend digital rights by providing security, anonymity, or censorship resistance, and are used for both illegal and legitimate purposes. Anonymous communication between whistle-blowers, activists, journalists and news organisations is also facilitated by darknets through the use of applications such as SecureDrop.

Terminology

The term originally described computers on ARPANET that were hidden, programmed to receive messages but not respond to or acknowledge anything, thus remaining invisible, in the dark. Since ARPANET, the usage of "dark net" has expanded to include friend-to-friend networks (usually used for file sharing with a peer-to-peer connection) and privacy networks such as Tor. The reciprocal term for a darknet is a clearnet or the surface web when referring to content indexable by search engines. The term "darknet" is often used interchangeably with "dark web" because of the quantity of hidden services on Tor's darknet. Additionally, the term is often inaccurately used interchangeably with the deep web because of Tor's history as a platform that could not be search-indexed. Mixing uses of both these terms has been described as inaccurate, with some commentators recommending the terms be used in distinct fashions.

Origins

"Darknet" was coined in the 1970s to designate networks isolated from ARPANET (the government-founded military/academic network which evolved into the Internet) for security purposes. Darknet addresses could receive data from ARPANET but did not appear in the network lists and would not answer pings or other inquiries. The term gained public acceptance following publication of "The Darknet and the Future of Content Distribution", a 2002 paper by Peter Biddle, Paul England, Marcus Peinado, and Bryan Willman, four employees of Microsoft who argued that the presence of the darknet was the primary hindrance to the development of workable digital rights management (DRM) technologies and made copyright infringement inevitable. The paper described "darknet" more generally as any type of parallel network that is encrypted or requires a specific protocol to allow a user to connect to it.

Sub-cultures

Journalist J. D. Lasica, in his 2005 book Darknet: Hollywood's War Against the Digital Generation, described the darknet's reach as encompassing file-sharing networks. Subsequently, in 2014, journalist Ja
https://en.wikipedia.org/wiki/Andrew%20Stern
Andrew Stern may refer to:
Andy Stern, president of the Service Employees International Union
Andrew Stern (video game designer), co-designer of the artificial intelligence experiment Façade (interactive story)
Andrew Stern (tennis), American tennis player who competed in the 1952, 1953 and 1954 U.S. National Championships – Men's Singles
https://en.wikipedia.org/wiki/Ed2k
ed2k may refer to:
eDonkey network—file sharing network
eDonkey2000—file sharing program
ed2k URI scheme—links used by eDonkey2000
https://en.wikipedia.org/wiki/Key%20generator
A key generator is a protocol or algorithm that is used in many cryptographic protocols to generate a sequence with many pseudo-random characteristics. This sequence is used as an encryption key at one end of communication, and as a decryption key at the other. A key generator can also be implemented in a system that aims to generate, distribute, and authenticate keys in such a way that, without the private key, one cannot access the information on the public end. Examples of key generators include linear-feedback shift registers (LFSR) and the Solitaire (or Pontifex) cipher.

References

Key management
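As a concrete toy instance of the LFSR example mentioned above, here is the well-known 16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11 (polynomial x^16 + x^14 + x^13 + x^11 + 1). It is for illustration only; a bare LFSR is predictable and not cryptographically secure by itself, which is why real stream ciphers combine or filter several of them.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t lfsr = 0xACE1u;              /* any nonzero seed works */
    for (int i = 0; i < 16; i++) {
        /* XOR of tap bits 16, 14, 13, 11 (bits 0, 2, 3, 5 of the word) */
        uint16_t bit = ((lfsr >> 0) ^ (lfsr >> 2) ^
                        (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
        lfsr = (uint16_t)((lfsr >> 1) | (bit << 15));
        printf("keystream bit %2d: %u\n", i, bit);
    }
    return 0;
}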
https://en.wikipedia.org/wiki/Physical%20schema
A physical data model (or database design) is a representation of a data design as implemented, or intended to be implemented, in a database management system. In the lifecycle of a project it typically derives from a logical data model, though it may be reverse-engineered from a given database implementation. A complete physical data model will include all the database artifacts required to create relationships between tables or to achieve performance goals, such as indexes, constraint definitions, linking tables, partitioned tables or clusters. Analysts can usually use a physical data model to calculate storage estimates; it may include specific storage allocation details for a given database system. Seven main databases dominate the commercial marketplace: Informix, Oracle, Postgres, SQL Server, Sybase, IBM Db2 and MySQL. Other RDBMS systems tend either to be legacy databases or to be used within academia, such as universities or further-education colleges. Physical data models for each implementation would differ significantly, not least due to the underlying operating-system requirements that may sit underneath them. For example: SQL Server historically ran only on Microsoft Windows operating systems (starting with SQL Server 2017, SQL Server also runs on Linux, with the same database engine and many of the same features and services regardless of operating system), while Oracle and MySQL can run on Solaris, Linux and other UNIX-based operating systems as well as on Windows. This means that the disk requirements, security requirements and many other aspects of a physical data model will be influenced by the RDBMS that a database administrator (or an organization) chooses to use.

Physical schema

Physical schema is a term used in data management to describe how data is to be represented and stored (files, indices, et al.) in secondary storage using a particular database management system (DBMS) (e.g., Oracle RDBMS, Sybase SQL Server, etc.). In the ANSI/SPARC Architecture three-schema approach, the internal schema is the view of data that involves data management technology. This is as opposed to an external schema, which reflects an individual's view of the data, or the conceptual schema, which is the integration of a set of external schemas. Subsequently, the internal schema was recognized to have two parts: the logical schema was the way data were represented to conform to the constraints of a particular approach to database management. At that time the choices were hierarchical and network. Describing the logical schema, however, still did not describe how physically data would be stored on disk drives. That is the domain of the physical schema. Now logical schemas describe data in terms of relational tables and columns, object-oriented classes, and XML tags. A single set of tables, for example, can be implemented in numerous ways, up to and including an architecture where table rows are maintained on computers in different countries. See
https://en.wikipedia.org/wiki/Role-oriented%20programming
Role-oriented programming as a form of computer programming aims at expressing things in terms that are analogous to human conceptual understanding of the world. This should make programs easier to understand and maintain. The main idea of role-oriented programming is that humans think in terms of roles. This claim is often backed up by examples of social relations. For example, a student attending a class and the same student at a party are the same person, yet that person plays two different roles. In particular, the interactions of this person with the outside world depend on his current role. The roles typically share features, e.g., the intrinsic properties of being a person. This sharing of properties is often handled by the delegation mechanism (a minimal sketch of this idea follows at the end of this entry). In the older literature and in the field of databases, it seems that there has been little consideration for the context in which roles interplay with each other. Such a context is being established in newer role- and aspect-oriented programming languages such as Object Teams. Compare the use of "role" as "a set of software programs (services) that enable a server to perform specific functions for users or computers on the network" in Windows Server jargon. Many researchers have argued the advantages of roles in modeling and implementation. Roles allow objects to evolve over time, they enable independent and concurrently existing views (interfaces) of the object, explicating the different contexts of the object, and separating concerns. Generally, roles are a natural element of human daily concept-forming. Roles in programming languages enable objects to have changing interfaces, as we see in real life: things change over time, are used differently in different contexts, etc.

Authors of role literature

Barbara Pernici
Bent Bruun Kristensen
Bruce Wallace
Charles Bachman
Friedrich Steimann
Georg Gottlob
Kasper B. Graversen
Kasper Østerbye
Stephan Herrmann
Trygve Reenskaug
Thomas Kühn

Programming languages with explicit support for roles

Cameleon
EpsilonJ
JavaScript Delegation - Functions as Roles (Traits and Mixins)
Object Teams
Perl (Moose)
Raku
powerJava
SCala ROLes Language

See also

Aspect-oriented programming
Data, context and interaction
Object Oriented Role Analysis Method
Object-role modeling
Subject (programming)
Subject-oriented programming
Traits (computer science)

References

External links

Adaptive Plug-and-Play Components for Evolutionary Software Development, by Mira Mezini and Karl Lieberherr
Context Aspect Sensitive Services
Overview and taxonomy of Role languages
ROPE: Role Oriented Programming Environment for Multiagent Systems

Object-based programming languages Programming paradigms
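The delegation mechanism mentioned above is easy to sketch even without native role support; in this hypothetical C fragment (not drawn from any of the role languages listed), two role objects share one underlying person and delegate to it for the intrinsic state:

#include <stdio.h>

typedef struct { const char *name; } Person;   /* intrinsic properties */

/* A role wraps the same underlying person and adds context-specific
   behaviour; shared state is reached by delegating to the Person core. */
typedef struct {
    Person *core;                  /* delegation target */
    void (*interact)(Person *);    /* role-specific behaviour */
} Role;

static void attend_class(Person *p) { printf("%s takes notes\n", p->name); }
static void attend_party(Person *p) { printf("%s dances\n", p->name); }

int main(void) {
    Person alice = { "Alice" };
    Role student = { &alice, attend_class };
    Role guest   = { &alice, attend_party };
    student.interact(student.core);   /* same person, student role */
    guest.interact(guest.core);       /* same person, party-guest role */
    return 0;
}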
https://en.wikipedia.org/wiki/Pharmacy%20College%20Admission%20Test
The Pharmacy College Admission Test (PCAT) is a computer-based standardized test administered to prospective pharmacy school students by Pearson Education, Inc. as a service for the American Association of Colleges of Pharmacy (AACP); it is offered in January, July, and September. The test is divided into five sections, taken in approximately three and a half hours: Writing, Biology, Chemistry, Critical Reading, and Quantitative Reasoning. The composite score is based on the multiple-choice sections and can range from 200 to 600. There is no passing score; pharmacy schools set their own standards for acceptable scores. Calculators are not allowed during the testing period, and no penalty is given for incorrect answers.

See also

List of admissions tests
American Association of Colleges of Pharmacy

References

External links

Official PCAT website
About the PCAT

Pharmacy education Professional examinations in healthcare Standardized tests in the United States English language tests
https://en.wikipedia.org/wiki/Arthur%20Rock
Arthur Rock (born August 19, 1926) is an American businessman and investor. Based in Silicon Valley, California, he was an early investor in major firms including Intel, Apple, Scientific Data Systems and Teledyne.

Early life

Rock was born and raised in Rochester, New York, in a Jewish family. He was an only child, and his father owned a small candy store where Rock worked in his youth. He joined the U.S. Army during World War II, but the war ended before he was deployed. He then went to college on the G.I. Bill. He graduated with a bachelor's degree in business administration from Syracuse University in 1948 and earned an MBA from Harvard Business School in 1951.

Career

Rock started his career in 1951 as a securities analyst in New York City, and then joined the corporate finance department of Hayden, Stone & Company in New York, where he focused on raising money for small high-technology companies. In 1957, when the "traitorous eight" left Shockley Semiconductor Laboratory, Rock was the one who helped them find a place to go: he convinced Sherman Fairchild to start Fairchild Semiconductor. In 1961, he moved to California. Along with Thomas J. Davis Jr., he formed the San Francisco venture capital firm Davis & Rock. In 1968, Robert Noyce, Gordon Moore, and another Fairchild employee named Andy Grove were ready to start a new company, Intel. Noyce contacted his good friend Rock, with whom he used to hike and camp. Rock described how Intel started: "Bob (Noyce) just called me on the phone. We'd been friends for a long time.… Documents? There was practically nothing. Noyce's reputation was good enough. We put out a page-and-a-half little circular, but I'd raised the money even before people saw it." Intel was incorporated in Mountain View, California, on July 18, 1968, by Gordon E. Moore (known for "Moore's law"), a chemist, and Robert Noyce, a physicist and co-inventor of the integrated circuit. There were originally 500,000 shares outstanding, of which Dr. Noyce bought 245,000 shares, Dr. Moore 245,000 shares, and Mr. Rock 10,000 shares, all at $1 per share. Rock placed $2.5 million of convertible debentures with a limited group of private investors in one day. Rock became Intel's first chairman. In 1978, Mike Markkula of Apple Computer introduced Steve Jobs and Steve Wozniak to Rock. Rock bought 640,000 shares of Apple Computer and became a long-time director of the company. Rock's investments and personal guidance helped launch and govern a distinguished roster of corporate firms including Intel, Apple, Scientific Data Systems, Teledyne, Xerox, Argonaut Insurance, AirTouch, the Nasdaq Stock Market, and Echelon Corporation.

Venture capital

During the 1950s, putting a venture capital deal together could require the help of two or three other organizations to complete the transaction. It was a business that was growing very rapidly, and as the business grew, the transactions grew exponentially. Arthur Rock, one of the pioneers of Silicon Valle
https://en.wikipedia.org/wiki/Linuxconf
Linuxconf is a system configuration tool for the Linux operating system. It features different user interfaces: a text interface or a graphical user interface in the form of a Web page or native application. Most Linux distributions consider it deprecated compared to other tools such as Webmin, the system-config-* tools on Red Hat Enterprise Linux/Fedora, drakconf on Mandriva, YaST on openSUSE and so on. Linuxconf was deprecated from Red Hat Linux in version 7.1 in April 2001. It was created by Jacques Gélinas of Solucorp, a company based in Québec. References External links Linuxconf home page Free system software Linux configuration utilities Unix configuration utilities Software that uses GTK
https://en.wikipedia.org/wiki/Bushido%20Blade%20%28video%20game%29
is a 3D fighting video game developed by Lightweight and published by Square and Sony Computer Entertainment for the PlayStation. The game features one-on-one armed combat. Its name refers to the Japanese warrior code of honor bushidō. Upon its release, the realistic fighting engine in Bushido Blade was seen as innovative, particularly the game's unique Body Damage System. A direct sequel, Bushido Blade 2, was released on the PlayStation a year later. Another game with a related title and gameplay, Kengo: Master of Bushido, was also developed by Lightweight for the PlayStation 2. Gameplay The bulk of the gameplay in Bushido Blade revolves around one-on-one third-person battles between two opponents. Unlike most fighting games, however, no time limit or health gauge is present during combat. Most hits will cause instant death, while traditional fighting games require many hits to deplete an opponent's health gauge. It is possible to wound an opponent without killing them. With the game's "Body Damage System," opponents are able to physically disable each other in increments with hits from an equipped weapon, slowing their attacking and running speed, or crippling their legs, forcing them to crawl. The game features eight weapons to choose from in many of its modes: katana, nodachi, long sword, saber, broadsword, naginata, rapier, and sledgehammer. Except the European weapons, which are noticeably shorter than historical counterparts, each weapon has a realistic weight and length, giving each one fixed power, speed, and an ability to block. A variety of attack combinations can be executed by the player using button sequences with the game's "Motion Shift System," where one swing of a weapon is followed through with another. Many of these attacks are only available in one of three stances, switched using the shoulder buttons or axis controls depending on controller layout: high, neutral, and low. The player also has a choice of six playable characters. Similar to the weapons, each one has a different level of strength and speed, and a number of unique special attacks. Some characters have a subweapon that can be thrown as well. All the characters have differing levels of proficiency with the selectable weapons and have a single preferred weapon. Characters in Bushido Blade also have the ability to run, jump, and climb within the 3D environments. Because battles are not limited to small arenas, the player is encouraged to freely explore during battle. The castle compound which most of the game takes place in acts as a large hub area of interconnected smaller areas including a cherry blossom grove, a moat, and a bridge labyrinth. Some areas, such as the bamboo thicket, allow some interaction. The story mode of Bushido Blade adds another gameplay mechanic: the Bushido code. Certain moves and tactics are considered dishonorable, such as striking a foe in the back, throwing dust in their eyes, or attacking while they bow at the start of fights. Acti
https://en.wikipedia.org/wiki/FBI%20Critical%20Incident%20Response%20Group
The Critical Incident Response Group (CIRG) is a division of the Criminal, Cyber, Response, and Services Branch of the United States Federal Bureau of Investigation. CIRG enables the FBI to rapidly respond to, and effectively manage, special crisis incidents in the United States. History In response to public outcry over the standoffs at Ruby Ridge, Idaho, and of the Branch Davidians in the Waco Siege, the FBI formed the CIRG in 1994 to deal with crisis situations more efficiently. The CIRG is designated to formulate strategies, manage hostage or siege situations, and if possible resolve them "without loss of life", as pledged in a 1995 Senate hearing by FBI Director Louis Freeh, who assumed the post four-and-a-half months after the Waco Siege. CIRG was intended to integrate tactical and investigative resources and expertise for critical incidents which necessitate an immediate response from law enforcement authorities. CIRG will deploy investigative specialists to respond to terrorist activities, hostage takings, child abductions and other high-risk repetitive violent crimes. Other major incidents include prison riots, bombings, air and train crashes, and natural disasters. Organization Each of the major areas of CIRG furnishes distinctive operational assistance and training to FBI field offices as well as state, local and international law enforcement agencies. Surveillance and Aviation Section - Provides aviation and surveillance support for all facets of FBI investigative activities with a priority on protecting the United States from terrorist attack and against foreign intelligence operations and espionage. Aviation Surveillance Branch Aviation Support Unit Special Flight Operations Unit Field Flight Operations Unit Mobile Surveillance Branch Mobile Surveillance Teams-Armed (MST-A) Mobile Surveillance Teams (MST) Tactical Section - Provides the FBI with a nationwide, three-tiered tactical resolution capability that upon proper authorization can be activated within four hours of notification to address a full spectrum of terrorist or criminal matters. Operations and Training Unit Hostage Rescue Team SWAT Operations Unit Special Weapons and Tactics Teams Crisis Negotiation Unit Tactical Helicopter Unit Operational Support Unit Investigative and Operations Support Section - Prepares for and responds to critical incidents, major investigations, and special events by providing expertise in behavioral and crime analysis, crisis management, and rapid deployment logistics. National Center for the Analysis of Violent Crime Behavioral Analysis Unit Violent Criminal Apprehension Program Crisis Management Unit Rapid Deployment and Logistics Strategic Information and Operation Center - Serves as the FBI's 24-hour clearinghouse for strategic information, and as the center for crisis management and special event monitoring. Counter-IED Section - Provides training, equipment, and advanced technical support to prevent and effectively respon
https://en.wikipedia.org/wiki/AFN%20Iraq
AFN Iraq was the American Forces Network of radio stations within Iraq. The network, nicknamed Freedom Radio, broadcast news, information, and entertainment programs, including adult contemporary music. Its mission was to "sustain and improve the morale and readiness" of U.S. forces in Iraq. The first song played live on AFN was "Freedom" by Paul McCartney.

FM Transmitters

The network included the following transmitter sites and their power, if known (status as of October 2008):

93.3 MHz
Baghdad (FOB Union III) — Transmitter Under Construction
Fallujah (Camp Baharia)
Al Taqaddum Airbase (TQ)

101.1 MHz
Tikrit (COB Speicher)

104.5 MHz
Baquba (FOB Warhorse) — Transmitter Under Construction

105.1 MHz
Mosul (Camp Diamondback/FOB Marez) — 1 kW

107.3 MHz
Al Asad Airbase
Balad (LSA Anaconda) — 250 W
Nasiriyah (Tallil Air Base) — 200 W
Qayyarah West Airfield (Q-WEST) — 250 W
Ramadi (FOB Blue Diamond)
Samarra (FOB Brassfield-Mora)
Camp Taji
Tall Afar (FOB Sykes)
Umm Qasr (Camp Bucca)

107.7 MHz
Baghdad (Camp Slayer) — 1 kW

History

The 222nd Broadcast Operations Detachment, a United States Army Reserve unit from Los Angeles, California, took control of AFN Radio & Television in August 2003. The 222nd Broadcast Operations Detachment was relieved by the 209th Broadcast Operations Detachment (USAR) from Rome, Georgia, in August 2004. In August 2005 the 206th Broadcast Operations Detachment (USAR), from Dallas, Texas, relieved the 209th Broadcast Operations Detachment. In August 2006 the 356th Operations Detachment (USAR), from Ft. Meade, Maryland, relieved the 206th Broadcast Operations Detachment. In August 2007, the US Air Force assumed control of AFN Iraq until February 2009. In March 2009, the 222nd Broadcast Operations Detachment (USAR), from Los Angeles, Calif., assumed control of the station. In March 2010, the 209th Broadcast Operations Detachment (USAR) from Rome, Georgia again assumed command of AFN-I. In 2004, T-shirts sold through area AAFES outlets depicted CH-46 helicopters flying over a herd of camels on a dark blue heavyweight 100% cotton shirt, with the wording "More 'Raq, less Taq"; they were a very popular item at the time.

Television

Freedom Journal Iraq was produced by the electronic news gathering (ENG) team of the American Forces Network-Iraq five times a week until 2009. The 10-minute newscast was produced Monday through Friday and uploaded via satellite to the Pentagon Channel in Washington, D.C. Originally, the program was produced weekly. The newscast was awarded 1st and 2nd place Keith L. Ware Broadcast Journalism Awards in 2005 for best Army Television Newscast while being produced by the 206th Broadcast Operations Detachment. Broadcast journalists have traveled the entire theater of operations in support of Freedom Journal Iraq, from Kuwait to Turkey. Although not designated combat camera correspondents, a few broadcast journalists have been awarded the Combat Action Badge for their duty serving at American Forces Network-Iraq. Freestyl
https://en.wikipedia.org/wiki/Computer%20experiment
A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines.

Background

Computer simulations are constructed to emulate a physical system. Because these are meant to replicate some aspect of a system in detail, they often do not yield an analytic solution. Therefore, methods such as discrete event simulation or finite element solvers are used. A computer model is used to make inferences about the system it replicates. For example, climate models are often used because experimentation on an Earth-sized object is impossible.

Objectives

Computer experiments have been employed with many purposes in mind. Some of those include:
Uncertainty quantification: Characterize the uncertainty present in a computer simulation arising from unknowns during the computer simulation's construction.
Inverse problems: Discover the underlying properties of the system from the physical data.
Bias correction: Use physical data to correct for bias in the simulation.
Data assimilation: Combine multiple simulations and physical data sources into a complete predictive model.
Systems design: Find inputs that result in optimal system performance measures.

Computer simulation modeling

Modeling of computer experiments typically uses a Bayesian framework. Bayesian statistics is an interpretation of the field of statistics where all evidence about the true state of the world is explicitly expressed in the form of probabilities. In the realm of computer experiments, the Bayesian interpretation would imply we must form a prior distribution that represents our prior belief on the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989). While the Bayesian approach is widely used, frequentist approaches have been recently discussed. The basic idea of this framework is to model the computer simulation as an unknown function of a set of inputs. The computer simulation is implemented as a piece of computer code that can be evaluated to produce a collection of outputs. Examples of inputs to these simulations are coefficients in the underlying model, initial conditions and forcing functions. It is natural to see the simulation as a deterministic function that maps these inputs into a collection of outputs. On the basis of seeing our simulator this way, it is common to refer to the collection of inputs as $x$, the computer simulation itself as $f$, and the resulting output as $f(x)$. Both $x$ and $f(x)$ are vector quantities, and they can be very large collections of values, often indexed by space, or by time, or by both space and time. Although $f$ is known in principle, in practice this is not the case. Many simulators comprise tens of thousands of lines of high-level computer code, wh
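One common way to make this concrete (the kriging formulation popularized by Sacks et al. 1989; the particular mean and covariance functions below are one conventional choice, not something prescribed by the text above) is to place a Gaussian-process prior on the unknown function:

$$ f(\cdot) \sim \mathcal{GP}\big(m(\cdot),\, k(\cdot,\cdot)\big), \qquad k(x, x') = \sigma^2 \exp\Big(-\sum_j \theta_j\,(x_j - x'_j)^2\Big) $$

Conditioning on runs of the code at design inputs $x^{(1)}, \dots, x^{(n)}$ then gives a posterior for $f$ that interpolates the observed outputs exactly (the code is deterministic) and quantifies uncertainty at untried inputs.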
https://en.wikipedia.org/wiki/Applix%201616
The Applix 1616 was a kit computer with a Motorola 68000 CPU, produced by a small company called Applix in Sydney, Australia, from 1986 to the early 1990s. It ran a custom multitasking multiuser operating system that was resident in ROM. A version of Minix was also ported to the 1616, as was the MGR Window System. Andrew Morton, designer of the 1616 and one of the founders of Applix, later became the maintainer of the 2.6 version of the Linux kernel. History Paul Berger and Andrew Morton formed the Australian company Applix Pty. Ltd. in approximately 1984 to sell a Z80 card they had developed for the Apple IIc that allowed it to run CP/M. This product was not a commercial success, but Paul later proposed they develop a Motorola 68000-based personal computer for sale in kit form. The project was presented to Jon Fairall, then editor of the Australia and New Zealand electronics magazine Electronics Today International, and in December 1986, the first of four construction articles was published as "Project 1616", with the series concluding in June 1987. In October and November 1987, a disk controller card was also published as "Project 1617". Over the next decade, about 400 1616s were sold. Applix Pty. Ltd., was in no way related to the North American company of the same name that produced Applixware. Hardware Main board The main board contains: a Motorola 68000 running at 7.5 MHz, or a 68010 running at 15 MHz. 512 kibibytes of Dynamic RAM between 64 kibibytes and 256 kibibytes of ROM on board bit mapped colour graphics (no "text" mode), with timing provided by a Motorola 6845 CRT controller. The video could produce 320x200 in 16 colours, or 640x200 in a palette of 4 colours out of 16, with a later modification providing a 960x512 monochrome mode. The frame buffer resided in system memory and video refresh provided DRAM refresh cycles. The video output was able to drive CGA, EGA, MGA and multisync monitors. dual RS-232 serial ports using a Zilog Z8530. a parallel port for Centronics-type printers or general purpose I/O. This was provided by a Rockwell 6522 Versatile Interface Adaptor, which was also the source of timer interrupts. 4 channel analog/audio output via an 8 bit DAC and multiplexor. software audio/analogue input via the DAC and a comparator. a PC/XT keyboard interface. The main board also had four 80-pin expansion slots. The 1616 shared this backplane with a platform developed by Andrew Morton for Keno Computer Systems, allowing the 1616 to use expansion boards developed for the Keno Computer Systems platform (primarily the 34010 graphics coprocessor), although the form-factor was different, which left the KCS cards sticking out of the top of the 1616 case! Disk controller card The disk controller card contains: A Zilog Z80 processor running at 8 MHz 32 kibibytes of ROM 64 kibibytes of Static RAM a WD1772 floppy disk controller dual RS-232 serial ports using a Zilog Z8530 An NCR5380 SCSI controller The coproces
https://en.wikipedia.org/wiki/Unexpand
unexpand is a command in Unix and Unix-like operating systems. It is used to convert groups of space characters into tab characters. For example:

$ echo "                asdf  sdf" | unexpand | od -c
0000000  \t  \t   a   s   d   f           s   d   f  \n
0000014
$ echo "                asdf  sdf" | od -c
0000000
0000020   a   s   d   f           s   d   f  \n
0000032

Here the echo command prints a string of text that includes multiple consecutive spaces, then the output is directed into the unexpand command. The resulting output is then displayed by the octal dump command od. At the second prompt, the same echo output is sent directly through the od command. As can be seen by comparing the two, the unexpand program converts sequences of eight spaces into single tabs (printed as '\t'). See also List of Unix commands Expand (Unix) References External links The program's manpage Unix SUS2008 utilities
https://en.wikipedia.org/wiki/Inline%20assembler
In computer programming, an inline assembler is a feature of some compilers that allows low-level code written in assembly language to be embedded within a program, among code that otherwise has been compiled from a higher-level language such as C or Ada. Motivation and alternatives The embedding of assembly language code is usually done for one of these reasons: Optimization: Programmers can use assembly language code to implement the most performance-sensitive parts of their program's algorithms, code that is apt to be more efficient than what might otherwise be generated by the compiler. Access to processor-specific instructions: Most processors offer special instructions, such as Compare and Swap and Test and Set instructions which may be used to construct semaphores or other synchronization and locking primitives. Nearly every modern processor has these or similar instructions, as they are necessary to implement multitasking. Examples of specialized instructions are found in the SPARC VIS, Intel MMX and SSE, and Motorola Altivec instruction sets. Access to special calling conventions not yet supported by the compiler. System calls and interrupts: High-level languages rarely have a direct facility to make arbitrary system calls, so assembly code is used. Direct interrupts are even more rarely supplied. To emit special directives for the linker or assembler, for example to change sectioning, macros, or to make symbol aliases. On the other hand, inline assembler poses a direct problem for the compiler itself as it complicates the analysis of what is done to each variable, a key part of register allocation. This means the performance might actually decrease. Inline assembler also complicates future porting and maintenance of a program. Alternative facilities are often provided as a way to simplify the work for both the compiler and the programmer. Intrinsic functions for special instructions are provided by most compilers and C-function wrappers for arbitrary system calls are available on every Unix platform. Syntax In language standards The ISO C++ standard and ISO C standards (annex J) specify a conditionally supported syntax for inline assembler: An asm declaration has the form asm-declaration: asm ( string-literal ) ; The asm declaration is conditionally-supported; its meaning is implementation-defined. This definition, however, is rarely used in actual C, as it is simultaneously too liberal (in the interpretation) and too restricted (in the use of one string literal only). In actual compilers In practical use, inline assembly operating on values is rarely standalone as free-floating code. Since the programmer cannot predict what register a variable is assigned to, compilers typically provide a way to substitute them in as an extension. There are, in general, two types of inline assembly supported by C/C++ compilers: asm (or __asm__) in GCC. GCC uses a direct extension of the ISO rules: the assembly code template is written in strings
https://en.wikipedia.org/wiki/Picture%20Transfer%20Protocol
Picture Transfer Protocol (PTP) is a protocol developed by the International Imaging Industry Association to allow the transfer of images from digital cameras to computers and other peripheral devices without the need for additional device drivers. The protocol has been standardized as ISO 15740. It is further standardized for USB by the USB Implementers Forum as the still image capture device class. USB is the default network transport medium for PTP devices. USB PTP is a common alternative to the USB mass-storage device class (USB MSC), as a digital camera connection protocol. Some cameras support both modes. Description PTP specifies a way of creating, transferring and manipulating objects which are typically photographic images such as a JPEG file. While it is common to think of the objects that PTP handles as files, they are abstract entities identified solely by a 32-bit object ID. These objects can however have parents and siblings so that a file-system–like view of device contents can be created. History Until the standardization of PTP, digital camera vendors used different proprietary protocols for controlling digital cameras and transferring images to computers and other host devices. The term "Picture Transfer Protocol" and the acronym "PTP" were both coined by Steve Mann, summarizing work on the creation of a Linux-friendly way of transferring pictures to and from home-made wearable computers, at a time when most cameras required the use of Microsoft Windows or Mac OS device drivers to transfer their pictures to a computer. PTP was originally standardized as PIMA 15740 in 2000; it was developed by the IT10 committee. Key contributors to the standard included Tim Looney and Tim Whitcher (Eastman Kodak Company) and Eran Steinberg (Fotonation). Storage PTP does not specify a way for objects to be stored – it is a communication protocol. Nor does it specify a transport layer. However, it is designed to support existing standards, such as Exif, TIFF/EP, DCF, and DPOF, and is commonly implemented over the USB and FireWire transport layers. Images on digital cameras are generally stored as files on a mass storage device, such as a memory card, which is formatted with a file system, most commonly FAT12, FAT16 or FAT32, which may be laid out as per the Design rule for Camera File system (DCF) specification. But none of these are required, as PTP abstracts from the underlying representation. By contrast, if a camera is mounted via USB MSC, the physical file system and layout are exposed to the user. Device control Many modern digital cameras from Canon and Nikon can be controlled via PTP from a USB host enabled computing device (smartphone, PC or Arduino, for example). As is the norm for PTP, the communication takes place over a USB connection. When interacting with the camera in this manner, it is expected that the USB endpoints are in (synchronous) Bulk Transfer Mode, for getting/setting virtually all the camera's features/prope
https://en.wikipedia.org/wiki/Tom%20Miller%20%28computer%20programmer%29
Tom Miller (born 1950) is a software developer who was employed by Microsoft. Miller was a member of the original team of developers who followed Dave Cutler from DEC to Microsoft, where he initially worked in the networking group. After less than two years, Miller moved to the Windows NT team, where he worked with John Nelson on file systems and wrote the original 50-page specification document for the NT File System. References Computer programmers Microsoft employees Microsoft Windows people Living people 1950 births
https://en.wikipedia.org/wiki/Breeding%20bird%20survey
A breeding bird survey monitors the status and trends of bird populations. Data from the survey are an important source for the range maps found in field guides. The North American Breeding Bird Survey is a joint project of the United States Geological Survey (USGS) and the Canadian Wildlife Service. The UK Breeding Bird Survey is administered by the British Trust for Ornithology, the Joint Nature Conservation Committee, and the Royal Society for the Protection of Birds. The results of the BBS are valuable for evaluating expansions and contractions in bird ranges, which can be key information for bird conservation. The BBS was designed to provide a continent-wide perspective of population change. History The North American Breeding Bird Survey was launched in 1966 after the concept of a continental monitoring program for all breeding birds had been developed by Chandler Robbins and his associates from the Migratory Bird Population Station. The program was developed in Laurel, Maryland. In the first year of its existence there were nearly 600 surveys conducted east of the Mississippi River. One year later, in 1967, the survey spread to the Great Plains states, and by 1968 almost 2000 routes had been established across southern Canada and 48 American states. As more birders were introduced to this program, the number of active BBS routes continued to increase. In the 1980s, the Breeding Bird Survey was extended to include the Yukon and Northwest Territories of Canada, and Alaska. Additionally, the number of routes in established states has increased. Currently, there are approximately 3700 active BBS routes in the United States and Canada, of which approximately 2900 are surveyed on an annual basis. The density of the routes varies greatly across the continent, and the largest number of routes can be found in the New England and Mid-Atlantic states. Many bird watchers participate in these surveys as they find the experience rewarding. Future plans for the BBS include expanding coverage in central and western North America, and adding routes in northern Mexico. The surveys conducted by the BBS take place during the peak of the nesting season; usually June, but also May in regions with warmer temperatures. A typical BBS route is 24.5 miles long with a stop every 0.5 miles, adding up to 50 stops per route. Routes are randomly located in order to sample habitats that are representative of the entire region. BBS data are difficult to analyze, given that the survey does not produce a complete count of breeding bird populations but rather a relative abundance index. Even so, these surveys have proved to be of great value in studying bird population trends. BBS data can also be used to produce continental-scale relative abundance maps. When analyzed at larger scales, the relative abundance maps can offer a clear indication of the status and distribution of bird species that are observed by the BBS. However, the most effective use of these surveys is the oppor
https://en.wikipedia.org/wiki/IBM%20PL/S
PL/S, short for Programming Language/Systems, is a "machine-oriented" programming language based on PL/I. It was developed by IBM in the late 1960s, under the name Basic Systems Language (BSL), as a replacement for assembly language on internal software projects; it included support for inline assembly and explicit control over register usage. Early projects using PL/S were the batch utility IEHMOVE and the Time Sharing Option (TSO) of MVT. By the 1970s, IBM was rewriting its flagship operating system in PL/S. Although users frequently asked IBM to release PL/S for their use, IBM refused, saying that the product was proprietary. Their concern was that open PL/S would give competitors, such as Amdahl, Itel (National Advanced Systems), Storage Technology Corporation, Trilogy Systems, Magnuson Computer Systems, Fujitsu, Hitachi, and other PCM vendors, a competitive advantage. However, even though they refused to make available a compiler, they shipped the PL/S source code of large parts of the OS to customers, many of whom thus became familiar with reading it. Closed PL/S meant that only IBM could easily modify and enhance the operating system. PL/S was succeeded by PL/S II, PL/S III and PL/AS (Programming Language/Advanced Systems), and then PL/X (Programming Language/Cross Systems). PL/DS (Programming Language/Distributed Systems) was a closely related language used to develop the DPPX operating system, and PL/DS II was a port of the language to the S/370 architecture for the DPPX/370 port. As the market for computers and software shifted away from IBM mainframes and MVS, IBM recanted and has offered the current versions of PL/S to select customers (ISVs through the Developer Partner program). Rand Compiler for PL/S The Rand RL/S compiler for IBM's PL/S language was developed in the early 1970s by the Computation Center of the Rand Corporation in Santa Monica, CA. It was implemented using the XPL compiler generator system by a team of three Rand programmers (R. Lawrence Clark, James S. Reiley, and the team leader David J. Smith). The Rand RL/S compiler was developed independently of and without any assistance from IBM. Only publicly available, non-copyrighted PL/S documentation and PL/S source and generated assembler code examples from IBM distributed source files for the MVS operating system were used. RL/S is fully compatible with IBM's PL/S II language. This was assured by parsing many thousands of lines of IBM-written PL/S code taken from the MVS distribution files. The assembler language code produced by the Rand RL/S compiler is not identical to the code produced by IBM's PL/S compiler, but it is functionally equivalent. Rand has been a long-time contributor to computer research and development (e.g., JOSS, the Rand tablet, WYLBUR) and was an early pioneer in the definition and development of "packet switching" network technology (Baran). Rand was also one of the early nodes on the Arpanet, the Defense Department's precursor to the Internet.
https://en.wikipedia.org/wiki/Dixon%27s%20factorization%20method
In number theory, Dixon's factorization method (also Dixon's random squares method or Dixon's algorithm) is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike for other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial. The algorithm was designed by John D. Dixon, a mathematician at Carleton University, and was published in 1981. Basic idea Dixon's method is based on finding a congruence of squares modulo the integer N which is intended to factor. Fermat's factorization method finds such a congruence by selecting random or pseudo-random x values and hoping that the integer x² mod N is a perfect square (in the integers): For example, if N = 84923, then (by starting at 292, the first number greater than √N, and counting up) 505² mod 84923 is 256, the square of 16. So 505² ≡ 16² (mod 84923). Computing the greatest common divisor of 505 − 16 and N using Euclid's algorithm gives 163, which is a factor of N. In practice, selecting random x values will take an impractically long time to find a congruence of squares, since there are only about √N squares less than N. Dixon's method replaces the condition "is the square of an integer" with the much weaker one "has only small prime factors"; for example, there are 292 squares smaller than 84923; 662 numbers smaller than 84923 whose prime factors are only 2, 3, 5 or 7; and 4767 whose prime factors are all less than 30. (Such numbers are called B-smooth with respect to some bound B.) If there are many numbers x₁, …, x_k whose squares x_i² mod N can each be factorized over a fixed set of small primes, linear algebra modulo 2 on the matrix of exponent vectors will give a subset of the x_i whose squares combine to a product of small primes to an even power; that is, a subset of the x_i whose squares multiply to the square of a (hopefully different) number mod N. Method Suppose the composite number N is being factored. Bound B is chosen, and the factor base is identified (which is called P), the set of all primes less than or equal to B. Next, positive integers z are sought such that z² mod N is B-smooth. Therefore we can write, for suitable exponents a_i, z² mod N = p₁^a₁ · p₂^a₂ ⋯ p_k^a_k with p_i ∈ P. When enough of these relations have been generated (it is generally sufficient that the number of relations be a few more than the size of P), the methods of linear algebra, such as Gaussian elimination, can be used to multiply together these various relations in such a way that the exponents of the primes on the right-hand side are all even: z₁² z₂² ⋯ z_m² ≡ p₁^(2b₁) · p₂^(2b₂) ⋯ p_k^(2b_k) (mod N). This yields a congruence of squares of the form a² ≡ b² (mod N), which can be turned into a factorization of N: N = gcd(a + b, N) × (N / gcd(a + b, N)). This factorization might turn out to be trivial (i.e. N = N × 1), which can only happen if a ≡ ±b (mod N), in which case another try must be made with a different combination of relations; but if a nontrivial pair of factors of N is reached, the algorithm terminates. Pseudocode input: positive integer N output: non-trivial factor of N Choose bound B Let P be all primes ≤ B
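The method can be demonstrated end to end on the article's example N = 84923. The following Python sketch collects B-smooth relations and then brute-forces the search for an even-exponent combination over small subsets instead of using Gaussian elimination, which keeps it short but limits it to toy inputs; the factor base and relation count are illustrative choices.

from itertools import combinations
from math import gcd, isqrt

def exponents_over_base(n, base):
    # Return the exponent vector of n over the factor base, or None if n
    # is not B-smooth (i.e. has a prime factor outside the base).
    if n == 0:
        return None
    exps = []
    for p in base:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return exps if n == 1 else None

def dixon(N, base=(2, 3, 5, 7)):
    # Collect a few more smooth relations z^2 mod N than there are primes.
    relations = []
    z = isqrt(N) + 1
    while len(relations) < len(base) + 3:
        exps = exponents_over_base(z * z % N, base)
        if exps is not None:
            relations.append((z, exps))
        z += 1
    # Find a subset whose exponent vectors sum to all-even entries.
    for r in range(1, len(relations) + 1):
        for subset in combinations(relations, r):
            col_sums = [sum(col) for col in zip(*(e for _, e in subset))]
            if any(c % 2 for c in col_sums):
                continue
            a = 1
            for zi, _ in subset:
                a = a * zi % N
            b = 1
            for p, c in zip(base, col_sums):
                b = b * pow(p, c // 2, N) % N
            for candidate in (gcd(a - b, N), gcd(a + b, N)):
                if 1 < candidate < N:
                    return candidate
    return None  # retry with a larger base or more relations

print(dixon(84923))  # prints 163 or 521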
https://en.wikipedia.org/wiki/Profit%20Impact%20of%20Market%20Strategy
The Profit Impact of Market Strategy (PIMS) program is a project that uses empirical data to try to determine which business strategies make the difference between success and failure. It is used to develop strategies for resource allocation and marketing. Some of the most important strategic metrics are market share, product quality, investment intensity and service quality (all measured by PIMS and strongly correlated with financial performance). One of the emphasized principles is that the same factors work identically across different industries. The business management authors Tom Peters and Nancy Austin wrote that PIMS "yields solid evidence in support of both common sense and counter-intuitive principles for gaining and sustaining competitive advantage". History The PIMS project was originally initiated by senior managers of General Electric who wanted to know why some of their business units were more profitable than others. Under the direction of Sidney Schoeffler, an Economics Professor hired by GE for the purpose, the PIMS project was launched in the 1960s as an internal empirical study. The aim was to make GE's different strategic business units (SBUs) comparable. Since GE was highly diversified at the time, key factors were sought that would have an impact on economic success regardless of the product. In particular, the return on investment (ROI), i.e. the profit per unit of tied capital, was used as the measure of success. In 1972, the project was transferred to the Marketing Science Institute (then under the wing of Harvard Business School), which extended it to other companies. In 1976, the American Strategic Planning Institute in Cambridge, Massachusetts, took charge of the project. Between 1970 and 1983, roughly 2600 strategic business units (SBUs) from around 200 companies took part in the surveys and provided key figures for the project. Today there are around 12,570 observations for 4200 SBUs. PIMS Associates in London has been the worldwide competence and design center for PIMS since the 1990s and has been part of Malik Management (Fredmund Malik) in St. Gallen (Switzerland) since 2005. The PIMS project analysed the data it gathered to identify the options, problems, resources and opportunities faced by each SBU. Based on the spread of each business across different industries, it was hoped that the data could be drawn upon to provide other businesses, in the same industry, with empirical evidence of which strategies lead to increased profitability. The database continues to be updated and drawn upon by academics and companies today. The PIMS databases currently comprise over 25,000 years of business experience at the SBU level (i.e. where the customer interface takes place and where marketing and investment decisions are made). Each SBU is characterized by hundreds of factors over a period of 3+ years, including its own market share and that of its competitors, customer preference, relative prices, service qua
https://en.wikipedia.org/wiki/Sound%20server
A sound server is software that manages the use of and access to audio devices (usually a sound card). It commonly runs as a background process. Sound server in an operating system In a Unix-like operating system, a sound server mixes different data streams (usually raw PCM audio) and sends out a single unified audio stream to an output device. The mixing is usually done by software, or by hardware if there is a supported sound card. Layers The "sound stack" can be visualized as follows, with programs in the upper layers calling elements in the lower layers: Applications (e.g. mp3 player, web video) Sound server (e.g. aRts, ESD, JACK, PulseAudio) Sound subsystem (described as kernel modules or drivers; e.g. OSS, ALSA) Operating system kernel (e.g. Linux, Unix) Motivation Sound servers appeared in Unix-like operating systems after limitations in Open Sound System were recognized. OSS is a basic sound interface that was incapable of playing multiple streams simultaneously, dealing with multiple sound cards, or streaming sound over the network. A sound server can provide these features by running as a daemon. It receives calls and sound streams from different programs, mixes the streams, and sends raw audio out to the audio device. With a sound server, users can also configure global and per-application sound preferences. Diversification and problems There are multiple sound servers; some focus on providing very low latency, while others concentrate on features suitable for general desktop systems. While diversification allows a user to choose just the features that are important to a particular application, it also forces developers to accommodate these options by necessitating code that is compatible with the various sound servers available. Consequently, this variety has resulted in a desire for a standard API to unify efforts. List of sound servers aRts Enlightened Sound Daemon JACK Network Audio System PipeWire PulseAudio sndio - OpenBSD audio and MIDI framework Streaming Icecast SHOUTcast References External links Introduction to Linux Audio RFC: GNOME 2.0 Multimedia strategy Servers (computing)
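The core mixing step described above amounts to summing the clients' PCM buffers and clamping the result to the sample range. A minimal sketch in Python with NumPy follows; real sound servers add client management, resampling, latency control and device I/O, and the two synthetic sine-wave "clients" here are purely illustrative.

import numpy as np

def mix(streams):
    # Sum the int16 streams in a wider type, then clip back to the int16
    # range to avoid wrap-around distortion on overflow.
    acc = np.sum([s.astype(np.int32) for s in streams], axis=0)
    return np.clip(acc, -32768, 32767).astype(np.int16)

t = np.linspace(0, 1, 44100, endpoint=False)
a = (10000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)  # one client's stream
b = (10000 * np.sin(2 * np.pi * 660 * t)).astype(np.int16)  # another client's stream
out = mix([a, b])  # single unified buffer to hand to the audio device
print(out.shape, out.dtype)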
https://en.wikipedia.org/wiki/The%20Planiverse
The Planiverse is a novel by A. K. Dewdney, written in 1984. Plot In the spirit of Edwin Abbott Abbott's Flatland, Dewdney and his computer science students simulate a two-dimensional world with a complex ecosystem. To their surprise, they find their artificial 2D universe has somehow accidentally become a means of communication with an actual 2D world: Arde. They make a sort of "telepathic" contact with "YNDRD", referred to by the students as Yendred, a highly philosophical Ardean, as he begins a journey across the western half, Punizla, of the single continent Ajem Kollosh to learn more about the spiritual beliefs of the people of the East, Vanizla. Yendred mistakes Dewdney's class for "spirits" and takes great interest in communicating with them. The students and narrator communicate with Yendred by typing on the keyboard; Yendred's answers appear on the computer's printout. The name Yendred (or "Yendwed", as pronounced by one of the students, who has a speech impediment) is simply "Dewdney" reversed. Written as a travelogue, Yendred's journey through the West takes him through several cities. He visits the Punizlan Institute for Technology and Science, where Arde's technology is explored in great detail. For example, all houses are underground, so as not to be demolished by the periodic 2D rivers; nails are useless for attaching two objects, so tape and glue are used instead; most Ardean creatures cannot have deuterostomic digestive tracts since they would split into two; even games such as Go have one-dimensional Alak analogues. An appendix explains various other aspects of two-dimensional science and technology which could not fit into the main story. The underlying allegory culminates in Yendred's arrival at the watershed of the continent and the planet's only building above ground, where he at last finds Drabk, an Ardean who professes "knowledge of the Beyond", and teaches Yendred to fly. Yendred finds that keeping contact with Earth is no longer of benefit, and contact with Arde is lost. Development In 1977, Dewdney was inspired by an allegory of a two-dimensional universe, and decided to expand upon the physics and chemistry of such a universe. He published a short monograph in 1979 called Two-Dimensional Science and Technology. This was reviewed by Martin Gardner in his July 1980 "Mathematical Games" column in Scientific American, and shortly after this, all copies of the monograph were sold out. In 1981, following the success of the monograph, Dewdney published A Symposium on Two-Dimensional Science and Technology, which contained suggestions for how a two-dimensional universe would work from scientists and non-scientists on varied subjects. Dewdney wrote The Planiverse as a frame story in which to display the scientific and technical features from these previous works, as well as an allegory for his search for a reality deeper than that of scientific enquiry, and his subsequent conversion to Sufism. Reception Dave Langford revi
https://en.wikipedia.org/wiki/Netatalk
Netatalk (pronounced "ned-uh-talk") is a free, open-source implementation of the Apple Filing Protocol (AFP). It allows Unix-like operating systems to serve as file servers for Macintosh computers running macOS or Classic Mac OS. Netatalk was originally developed by the Research Systems Unix Group at the University of Michigan for BSD-derived Unix systems and released in 1990. Apple had introduced AppleTalk in 1985, soon after the release of the original Macintosh, followed by the file sharing application AppleShare (which was built on top of AFP) in 1987. This was an early example of zero-configuration networking, gaining significant adoption in educational and small to mid-size office environments in the late 1980s. Netatalk emerged as a part of the software ecosystem around AppleTalk. In 1986 Columbia University published the Columbia AppleTalk Package (CAP), which was an open source implementation of AppleTalk originally written for BSD 4.2, allowing Unix servers to be part of AppleTalk networks. CAP also had its own implementation of AFP/AppleShare, but Netatalk, appearing in 1990, claimed better performance due to software design advantages. CAP and Netatalk were also interoperable, the latter being able to be run on an AppleTalk backend provided by CAP. As part of transitioning the software into an open source community project, the codebase was moved to SourceForge for revision control in July 2000, then re-licensed under the terms of the GNU General Public License with version 1.5pre7 in August 2001. Since Classic Mac OS used a forked file system, unlike the host operating systems where Netatalk would be running, Netatalk originally implemented the AppleDouble format for storing the resource fork separately from the data fork when a Mac OS file was transferred to the Unix-like computer's file system. This was required in order not to ruin most files by discarding the resource fork when copied to the Netatalk served AppleShare volume. With the release of Netatalk 3.0, the backend was re-implemented to use the Extended Attributes format that Apple had introduced with Mac OS X for backwards compatibility with Classic Mac OS resource forks. Development History The original developer of Netatalk was Wesley Craig at the University of Michigan. In 1997 Adrian Sun created a popular fork, coding the initial implementation of the then-new AppleShare IP (AFP over TCP/IP) network layer. By the time the project started transitioning into an open source model in 2000, the "ASUN" fork had been merged back into Netatalk proper. In October 2004 Netatalk 2.0 was released, which brought major improvements, including: support for Apple Filing Protocol version 3.1 (providing long UTF-8 filenames, file sizes > 2 gigabytes, full Mac OS X compatibility), CUPS integration, Kerberos V support allowing true "single sign-on", reliable and persistent storage of file and directory IDs and countless bug fixes compared to previous versions. Since version 2.0.5, Neta
https://en.wikipedia.org/wiki/SFTP
SFTP may refer to: Computing SSH File Transfer Protocol, a network protocol used for secure file transfer over secure shell Secure file transfer program, an SSH File Transfer Protocol client from the OpenSSH project Simple File Transfer Protocol, an unsecured file transfer protocol from the early days of the Internet Screened fully shielded twisted pair, a kind of network cable Other Science for the People, a U.S. left-wing organization and magazine Six Flags Theme Parks, a chain of amusement parks and theme parks Stray from the Path, an American metalcore band Supplemental Federal Test Procedure, EPA fuel economy testing procedures which supplement the FTP-75 standard See also FTPS, or FTP over SSL, another name used to encompass a number of ways in which FTP software can perform secure file transfers
https://en.wikipedia.org/wiki/AIDS%20%28computer%20virus%29
AIDS is a DOS computer virus which overwrites COM files. Description AIDS is the first virus known to exploit the MS-DOS "corresponding file" vulnerability. In MS-DOS, if the user enters a program's name in the command interpreter, in a directory where both a COM file and an EXE file of that name exist, then the COM file will always be executed. Thus, by creating infected COM files, AIDS code will always be executed before the intended EXE file. When the AIDS virus activates, it displays the following screen: ATTENTION: I have been elected to inform you that throughout your process of collecting and executing files, you have accidentally ¶HÜ¢KΣ► [phucked in leet] yourself over: again, that's PHUCKED yourself over. No, it cannot be; YES, it CAN be, a √ìτûs [virus] has infected your system. Now what do you have to say about that? HAHAHAHAHA. Have ¶HÜÑ [phun] with this one and rememember, there is NO cure for AIDS In the message above, the word "AIDS" covers about half of the screen. The system is then halted, and must be powered down and rebooted to restart it. The AIDS virus overwrites the first 13,952 bytes of an infected COM file. Overwritten files must be deleted and replaced with clean copies in order to remove the virus. It is not possible to recover the overwritten portion of the program. AIDS II AIDS II is a companion computer virus, which infects COM files. First discovered in April 1990, it appears to be a more elegant revision of AIDS, which also employs the corresponding file technique to execute infected code. Unlike generic file infectors, AIDS II is the second known virus to use the "corresponding file technique" of infection (after the original AIDS), and the first to use this technique in a way that does not modify the original target EXE file. AIDS II works by first finding an uninfected EXE file in the working directory and then creating a companion COM file with the viral code. The COM files will always be 8,064 bytes in length, with a timestamp corresponding to the time of infection. After creating the new COM file, the virus then plays a loud note, and displays the following message: Your computer is infected with ... ❤Aids Virus II❤ - Signed WOP & PGT of DutchCrack - AIDS II then executes the EXE file the user intended to run without incident. Once that program is exited, control returns to the virus. The note is replayed, with a new message displayed: Getting used to me? Next time, use a Condom ..... Since the EXE file is unchanged, cyclic redundancy checks, such as those present in antivirus software, cannot detect this virus having infected a system. A way to remove AIDS II manually is to check for EXE files with an identically named COM file 8,064 bytes in length. Those COM files can be deleted. According to Symantec, AIDS II may play a melody and display the following string: Your computer is infected with AIDS VIRUS II References External links DOS file viruses
https://en.wikipedia.org/wiki/Earthrise%20%28disambiguation%29
Earthrise is a famous photograph taken on the 1968 Apollo 8 space mission. Earthrise may also refer to: Earthrise (1990 video game), a computer game by Interstel Earthrise (album), a 1984 album by the Tandy Morgan Band Earthrise (film), a 2018 documentary by Emmanuel Vaughan-Lee Earthrise (video game), a 2011 massively multiplayer online role-playing game by Masthead Studios Earthrise Engine Earthrise, a TV series on Al Jazeera English Earthrise, the second chapter to 2020's Transformers: War for Cybertron Trilogy cartoon Earthrise, a sub-orbital space "burial" mission by Celestis "Earthrise/Return", a song by Mannheim Steamroller from Fresh Aire V "Earthrise", a song by Camel from Mirage "Earthrise," a song by American rock band Starset on their 2021 album, Horizons Earth phase, astronomical views of Earthrise and related See also Earth Rising, an American indie band
https://en.wikipedia.org/wiki/Unrolled%20linked%20list
In computer programming, an unrolled linked list is a variation on the linked list which stores multiple elements in each node. It can dramatically increase cache performance, while decreasing the memory overhead associated with storing list metadata such as references. It is related to the B-tree. Overview A typical unrolled linked list node looks like this:

record node {
    node next       // reference to next node in list
    int numElements // number of elements in this node, up to maxElements
    array elements  // an array of numElements elements,
                    // with space allocated for maxElements elements
}

Each node holds up to a certain maximum number of elements, typically just large enough so that the node fills a single cache line or a small multiple thereof. A position in the list is indicated by both a reference to the node and a position in the elements array. It is also possible to include a previous pointer for an unrolled doubly linked list. To insert a new element, we find the node the element should be in and insert the element into the elements array, incrementing numElements. If the array is already full, we first insert a new node either preceding or following the current one and move half of the elements in the current node into it. To remove an element, we find the node it is in and delete it from the elements array, decrementing numElements. If this reduces the node to less than half-full, then we move elements from the next node to fill it back up above half. If this leaves the next node less than half full, then we move all its remaining elements into the current node, then bypass and delete it. Performance One of the primary benefits of unrolled linked lists is decreased storage requirements. All nodes (except at most one) are at least half-full. If many random inserts and deletes are done, the average node will be about three-quarters full, and if inserts and deletes are only done at the beginning and end, almost all nodes will be full. Assume that: m = maxElements, the maximum number of elements in each elements array; v = the overhead per node for references and element counts; s = the size of a single element. Then, the space used for n elements varies between (v + ms)n/m, when every node is completely full, and (v + ms)2n/m, when every node is just half-full. For comparison, ordinary linked lists require (v + s)n space, although v may be smaller, and arrays, one of the most compact data structures, require about sn space. Unrolled linked lists effectively spread the overhead v over a number of elements of the list. Thus, we see the most significant space gain when overhead is large, maxElements is large, or elements are small. If the elements are particularly small, such as bits, the overhead can be as much as 64 times larger than the data on many machines. Moreover, many popular memory allocators will keep a small amount of metadata for each node allocated, increasing the effective overhead v. Both of these make unrolled linked lists more attractive. Because unrolled linked list nodes
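A minimal Python sketch of the structure follows. It supports only append and iteration, using the node-splitting rule described above; the half-full rebalancing on deletion is omitted, and the class and parameter names are illustrative.

class Node:
    __slots__ = ("elements", "next")
    def __init__(self):
        self.elements = []  # holds at most max_elements items
        self.next = None

class UnrolledLinkedList:
    def __init__(self, max_elements=16):
        self.max_elements = max_elements
        self.head = self.tail = Node()

    def append(self, value):
        if len(self.tail.elements) == self.max_elements:
            # Split: move the back half of the full node into a new node,
            # so both nodes stay at least half full after later deletes.
            node = Node()
            half = self.max_elements // 2
            node.elements = self.tail.elements[half:]
            self.tail.elements = self.tail.elements[:half]
            self.tail.next = node
            self.tail = node
        self.tail.elements.append(value)

    def __iter__(self):
        node = self.head
        while node is not None:
            yield from node.elements
            node = node.next

ull = UnrolledLinkedList(max_elements=4)
for i in range(10):
    ull.append(i)
print(list(ull))  # [0, 1, ..., 9]

Keeping max_elements near a cache line's worth of elements is what yields the cache benefit described above, since a whole run of elements is visited per node hop.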
https://en.wikipedia.org/wiki/In%20silico
In biology and other experimental sciences, an in silico experiment is one performed on a computer or via computer simulation. The phrase is pseudo-Latin for 'in silicon' (correct Latin: in silicio), referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo, in vitro, and in situ, which are commonly used in biology (especially systems biology). The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature. History The earliest known use of the phrase was by Christopher Langton to describe artificial life, in the announcement of a workshop on that subject at the Center for Nonlinear Studies at the Los Alamos National Laboratory in 1987. The expression in silico was first used to characterize biological experiments carried out entirely in a computer in 1989, in the workshop "Cellular Automata: Theory and Applications" in Los Alamos, New Mexico, by Pedro Miramontes, a mathematician from the National Autonomous University of Mexico (UNAM), presenting the report "DNA and RNA Physicochemical Constraints, Cellular Automata and Molecular Evolution". The work was later presented by Miramontes as his dissertation. In silico has been used in white papers written to support the creation of bacterial genome programs by the Commission of the European Community. The first referenced paper where in silico appears was written by a French team in 1991. The first referenced book chapter where in silico appears was written by Hans B. Sieburg in 1990 and presented during a Summer School on Complex Systems at the Santa Fe Institute. The phrase in silico originally applied only to computer simulations that modeled natural or laboratory processes (in all the natural sciences), and did not refer to calculations done by computer generically. Drug discovery with virtual screening In silico study in medicine is thought to have the potential to speed the rate of discovery while reducing the need for expensive lab work and clinical trials. One way to achieve this is by producing and screening drug candidates more effectively. In 2010, for example, using the protein docking algorithm EADock (see Protein-ligand docking), researchers found potential inhibitors to an enzyme associated with cancer activity in silico. Fifty percent of the molecules were later shown to be active inhibitors in vitro. This approach differs from use of expensive high-throughput screening (HTS) robotic labs to physically test thousands of diverse compounds a day, often with an expected hit rate on the order of 1% or less, with still fewer expected to be real leads following further testing (see drug discovery). As an example, the technique was utilized for a drug repurposing study in order to search for potential cures for COVID-19 (SARS-CoV-2). Cell models Efforts have been made to establish computer models of cellular behavior. For example, in 2007 researchers developed an in silico model of tuberculosis
https://en.wikipedia.org/wiki/Testbed
A testbed (also spelled test bed) is a platform for conducting rigorous, transparent, and replicable testing of scientific theories, computing tools, and new technologies. The term is used across many disciplines to describe experimental research and new product development platforms and environments. They may vary from hands-on prototype development in manufacturing industries such as automobiles (known as "mules"), aircraft engines or systems, to intellectual property refinement in such fields as computer software development, shielded from the hazards of testing live. Software development In software development, testbedding is a method of testing a particular module (function, class, or library) in an isolated fashion. It may be used as a proof of concept, or to test a new module in isolation from the program or system it will later be added to. A skeleton framework is implemented around the module so that the module behaves as if already part of the larger program. A typical testbed could include software, hardware, and networking components. In software development, the specified hardware and software environment can be set up as a testbed for the application under test. In this context, a testbed is also known as the test environment, made of: Testing hardware equipment (test bench, optical table, custom testing rig, dummy equipment that simulates an actual product or its counterpart, external environment means such as showers, heaters, fans, vacuum chambers, anechoic chambers). Computing equipment (processing units, data centers, in-line FPGA, environment simulation equipment). Testing software (DAQ / oscilloscopes, visualisation and testing software, environment software to feed dummy equipment with data). Testbeds are also pages on the Internet where the public are given the opportunity to test CSS or HTML they have created and want to preview the results, for example: The Arena web browser was created by the World Wide Web Consortium (W3C) and CERN for testing HTML3, Cascading Style Sheets (CSS), Portable Network Graphics (PNG) and the libwww. The Line Mode browser got a new function to interact with the libwww library as a sample and test application. The libwww was also created to test network communication protocols which are under development or to experiment with new protocols. Aircraft development In aircraft development there are also examples of testbed use, such as when new aircraft engines are fitted to a testbed aircraft for flight testing. See also Iron bird (aviation) References External links PlanetLab Europe, the European portion of the publicly available PlanetLab testbed CMU's eRulemaking Testbed US National Science Foundation GENI - Global Environment for Network Innovations Initiative Helsinki Testbed (meteorology) Collaborative Adaptive Sensing of the Atmosphere (CASA) IP1 test bed Hardware testing Software testing
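For the software-development sense described above, a testbed can be as small as a driver that feeds a module fabricated inputs in place of the larger program. The following Python sketch is illustrative; the pricing module and its test cases are invented for the example.

def apply_discount(price, customer_tier):
    # Module under test: would normally be called by a larger checkout system.
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(price * (1 - rates[customer_tier]), 2)

def run_testbed():
    # Skeleton framework: fabricated inputs play the role of the real
    # program that will eventually call the module.
    cases = [
        ((100.0, "standard"), 100.0),
        ((100.0, "silver"), 95.0),
        ((80.0, "gold"), 72.0),
    ]
    for args, expected in cases:
        got = apply_discount(*args)
        assert got == expected, f"{args} -> {got}, expected {expected}"
    print("all testbed cases passed")

if __name__ == "__main__":
    run_testbed()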
https://en.wikipedia.org/wiki/DCDS
DCDS may refer to: Detroit Country Day School, a private school DECHEMA Chemistry Data Series, a series of books with thermophysical data published by DECHEMA Deputy Chief of the Defence Staff
https://en.wikipedia.org/wiki/Ch%C5%AB%C5%8D%E2%80%93S%C5%8Dbu%20Line
The Chūō–Sōbu Line is a railway line that runs through Tokyo and Chiba Prefecture, Japan. Part of the East Japan Railway Company (JR East) network, the line operates on separate tracks along the right-of-way of the Chūō Main Line (Chūō Line (Rapid)) and Sōbu Main Line (Sōbu Line (Rapid)), providing service between Mitaka Station in the cities of Mitaka and Musashino and Chiba Station in Chiba. The term distinguishes local trains on the Chūō-Sōbu line from rapid service trains running on the Chūō Main Line between Mitaka and Tokyo and on the Sōbu Main Line between Tokyo and Chiba. Service patterns Chūō-Sōbu Line Ordinarily, trains terminate at Chiba or Tsudanuma on the east side, and at Nakano or Mitaka on the west side. All trains stop at every station. For station information on the parallel rapid/express lines, see the Chūō Line (Rapid) and Sōbu Line (Rapid) articles. Tōzai Line through service All through service trains enter the Tōzai Line at either Nakano or Nishi-Funabashi. These trains operate within the following routes: Mitaka – Nakano – (Tōzai Line) – Nishi-Funabashi – Tsudanuma (weekday mornings/evenings only) Nakano – (Tōzai Line) – Nishi-Funabashi – Tsudanuma (weekday mornings/evenings only) Mitaka – Nakano – (Tōzai Line) – Nishi-Funabashi Mitaka – Nakano – (Tōzai Line) – Nishi-Funabashi – (Tōyō Rapid Railway Line) – Tōyō-Katsutadai Limited express Certain limited express and seasonal trains run through, or stop at stations on this line. For information on the Shinjuku Wakashio and the Shinjuku Sazanami that make stops on the Chūō-Sōbu Line at , see their respective articles. Former Early morning / Late night At around 9–10 pm, a few westbound trains headed beyond Mitaka onto the Chūō Line (Rapid), with some terminating at Musashi-Koganei, and the others at Tachikawa. The other trains during the hour operated regularly. At around 4–6 am and 11 pm–1 am, Chūō-Sōbu Line services were divided at Ochanomizu Station into two sections. At the western section (Mitaka – Ochanomizu), Chūō Line (Rapid) trains ran through the Chūō-Sōbu Line tracks between and , with services serving between and as far as , or even Ōme, stopping at all stations. At the eastern section (Ochanomizu – Chiba), local trains operated and terminated at the two ends of the section. This service pattern last operated on 13 March 2020. To prepare for the eventual installation of platform doors on Chūō-Sōbu Line platforms and the future addition of Green Cars on the Rapid line, Chūō Line Rapid service trains no longer regularly operate on the Chūō-Sōbu Line tracks. Station list Legend ●: All trains stop ■: Some trains pass ▲: All trains pass on weekends and holidays |: All trains pass Rolling stock Chūō-Sōbu Line Trains used on the line are based at Mitaka Depot. E231-0 series 10-car EMUs (yellow stripe) (since February 2000) E231-500 series 10-car EMUs (yellow stripe) (since 1 December 2014) Tōzai Line – Tōyō Rapid Line through service Train
https://en.wikipedia.org/wiki/Flash%20mob%20computing
Flash mob computing or flash mob computer is a temporary ad hoc computer cluster running specific software to coordinate the individual computers into a single supercomputer. A flash mob computer is distinct from other types of computer clusters in that it is set up and broken down on the same day or within a similarly brief period, and involves many independent owners of computers coming together at a central physical location to work on a specific problem and/or social event. Flash mob computing derives its name from the more general term flash mob, which can mean any activity involving many people co-ordinated through virtual communities coming together for brief periods of time for a specific task or event. Flash mob computing is a more specific type of flash mob for the purpose of bringing people and their computers together to work on a single task or event. History The first flash mob computer was created on April 3, 2004 at the University of San Francisco using software written at USF called FlashMob (not to be confused with the more general term flash mob). The event, called FlashMob I, was a success. There was a call for computers on the computer news website Slashdot. An article in The New York Times "Hey, Gang, Let's Make Our Own Supercomputer" brought a lot of attention to the effort. More than 700 computers were brought to the gym at the University of San Francisco, and were wired to a network donated by Foundry Networks. At FlashMob I the participants were able to run a benchmark on 256 of the computers, and achieved a peak rate of 180 Gflops (billions of floating-point operations per second), though this computation stopped three-quarters of the way through due to a node failure. The best complete run used 150 computers and resulted in 77 Gflops. FlashMob I was run off a bootable CD-ROM that ran a copy of Morphix Linux, which was only available for the x86 platform. Despite these efforts, the project was unable to achieve its original goal of running a cluster momentarily fast enough to enter the (November 2003) Top 500 list of supercomputers. The system would have had to provide at least 402.5 Gflops to match a Chinese cluster of 256 Intel Xeon nodes. For comparison, the fastest supercomputer at the time, Earth Simulator, provided 35,860 Gflops. Creators of flash mob computing Pat Miller was a research scientist at a national lab and adjunct professor at USF. His class on Do-It-Yourself Supercomputers evolved into FlashMob I from the original idea of every student bringing a commodity CPU or an Xbox to class to make an evanescent cluster at each meeting. Pat worked on all aspects of the FlashMob software. Greg Benson, USF Associate Professor of Computer Science, invented the name "flash mob computing", and proposed the first idea of wireless flash mob computers. Greg worked on the core infrastructure of the FlashMob run time environment. John Witchel (Stuyvesant High School '86) was a USF graduate student in computer science dur
https://en.wikipedia.org/wiki/Greatest%20Hits%20Radio%20Liverpool%20%26%20The%20North%20West
Greatest Hits Radio Liverpool & The North West is an Independent Local Radio station based in Liverpool, England, owned and operated by Bauer as part of the Greatest Hits Radio network. It broadcasts to Merseyside, North West England, Cheshire and parts of North Wales. The station forms part of Greatest Hits Radio North West. As of September 2023, the station has a weekly audience of 265,000 listeners according to RAJAR. Coverage Although intended mostly for Merseyside, like its sister station, it serves Flintshire, northern Denbighshire, Greater Wrexham and eastern Greater Conwy in Wales; northern Cheshire, some parts of Greater Manchester and south-western Lancashire in England via signal overspill. In the past, the station has been telephoned from the island of Anglesey. History Originally known as "1548 City Talk", this service existed between 1989 and 1991, initially broadcasting between 0700 and 1900 on weekdays. It was not a success, and a "Gold" format of music was introduced as the station initially became "Radio City Gold", and then "City Gold". Radio City was purchased by EMAP in 1998, and City Gold was rebranded Magic 1548 as part of the network of Magic stations on AM. In December 2001, EMAP decided that it was more economical for the Magic network to share off-peak programmes and, in line with the other Magic AM stations, began networking between 10am-2pm, 7pm-10am, and then 2am-6am (because of Pete Price's phone-in, which switched stations in January 2006). This resulted in Magic 1548 having more local programming on weekdays. During these hours it was simply known as Magic, although there were local commercial breaks, and local news on the hour. In January 2003, the station ceased networking with the London station, Magic 105.4, and a regional network was created with Piccadilly Magic 1152 in Manchester as the hub at the weekend and Magic 1152 in Newcastle during the week. During networked hours, local adverts are aired, as well as a local news summary on the hour. From July 2006, more networking was introduced across the Northern Magic AM network, which meant only 4 hours a day were to be presented from the local studios, between 06:00 and 10:00. In April 2012 Magic 1548, in line with the majority of other Magic North stations, dropped local weekend breakfast shows. Between March 2013 and December 2014, weekday breakfast was syndicated from Piccadilly Magic 1152 in Manchester. Local news and traffic bulletins were reintroduced in January 2015, when the station rebranded as Radio City 2, as part of the launch of the Bauer City 2 network. In July 2015, Bauer submitted a formal request to OFCOM to swap Radio City 2's format and frequencies with that of Radio City Talk. The company proposed to reintroduce local breakfast and drive time programming to the station and an enhanced local news service, alongside programming provided by the City 2 network. The request was approved three months later and the switchover took place on Monda
https://en.wikipedia.org/wiki/BITS
BITS or bits may refer to: Technology Plural of bit, the basic unit of information in computing Drill bits, cutting tools used to create cylindrical holes Background Intelligent Transfer Service, a file transfer service Built-in tests Institutions BITS Pilani (Birla Institute of Technology and Science), a technical university in Pilani, India Business and Information Technology School, a former business school in Iserlohn (Germany), now merged into University of Europe for Applied Sciences or Bucharest Yiddish Studio Theater, a theater in Bucharest Plural of Bilateral investment treaty (BITs) Art Bits (album), the fourth and final album by American indie rock band Oxford Collapse Bits, a 2012 play by the Catalan mime comedy group Tricicle Bits (TV series), a British television entertainment program, on air 1999–2001 Boyz in the Sink, a fictional boy band. See also Bit (disambiguation)
https://en.wikipedia.org/wiki/Institute%20for%20Scientific%20Information
The Institute for Scientific Information (ISI) was an academic publishing service, founded by Eugene Garfield in Philadelphia in 1956. ISI offered scientometric and bibliographic database services. Its specialty was citation indexing and analysis, a field pioneered by Garfield. Services ISI maintained citation databases covering thousands of academic journals, including a continuation of its longtime print-based indexing service the Science Citation Index (SCI), as well as the Social Sciences Citation Index (SSCI) and the Arts and Humanities Citation Index (AHCI). All of these were available via ISI's Web of Knowledge database service. This database allows a researcher to identify which articles have been cited most frequently, and who has cited them. The database provides some measure of the academic impact of the papers indexed in it, and may increase their impact by making them more visible and providing them with a quality label. Some anecdotal evidence suggests that appearing in this database can double the number of citations received by a given paper. The company's main product was Current Contents, which gathers the tables of contents for recent academic journals. The ISI also published the annual Journal Citation Reports which list an impact factor for each of the journals that it tracked. Within the scientific community, journal impact factors continue to play a large but controversial role in determining the kudos attached to a scientist's published research record. A list of over 14,000 journals was maintained by the ISI. The list included some 1,100 arts and humanities journals as well as scientific journals. Listings were based on published selection criteria and are an indicator of journal quality and impact. ISI published Science Watch, a newsletter which every two months identified one paper published in the previous two years as a "fast-breaking paper" in each of 22 broad fields of science, such as Mathematics (including Statistics), Engineering, Biology, Chemistry, and Physics. The designations were based on the number of citations and the largest increase from one bimonthly update to the next. Articles about the papers often included comments by the authors. The ISI also published a list of "ISI Highly Cited Researchers", one of the factors included in the Academic Ranking of World Universities published by Shanghai Jiao Tong University. This continues under Clarivate. History Initially, the company was named Documation. In 1992, ISI was acquired by Thomson Scientific & Healthcare, and became known as Thomson ISI. It was a part of the Intellectual Property & Science business of Thomson Reuters until 2016, when the IP & Science business was sold, becoming Clarivate Analytics. In February 2018, Clarivate announced it will re-establish ISI as part of its Scientific and Academic Research group. It exists as a group within Clarivate as of November 2018. ISI Highly Cited "ISI Highly Cited" is a database of "highly cited rese
https://en.wikipedia.org/wiki/Internet%20Broadway%20Database
The Internet Broadway Database (IBDB) is an online database of Broadway theatre productions and their personnel. It was conceived and created by Karen Hauser in 1996 and is operated by the Research Department of The Broadway League, a trade association for the North American commercial theatre community. History Karen Hauser, research director for the Broadway League, developed the Internet Broadway Database which launched in 1996 or 2001. Prior to that she served as the League's media director. She has written on the economic health of Broadway and how it contributes to New York City's economy as well as that of the cities that touring productions visit. Hauser co-produced the 2000 production of Keith Reddin's The Perpetual Patient. Overview This comprehensive history of Broadway provides records of productions from the beginnings of New York theatre in the 18th century up to today. Details include cast and creative lists for opening night and current day, song lists, awards and other interesting facts about every Broadway production. Other features of IBDB include an extensive archive of photos from past and present Broadway productions, headshots, links to cast recordings on iTunes or Amazon, gross and attendance information. Its mission was to be an interactive, user-friendly, searchable database for League members, journalists, researchers, and Broadway fans. The League recently added Broadway Touring shows to the database for ease of tracking shows that play in theatres across the country. It is managed by Michael Abourizk of the Broadway League. See also Internet Theatre Database – ITDb Internet Movie Database – IMDb Internet Book Database – IBookDb Lortel Archives – IOBDb The Broadway League References External links Broadway League website Theatre in the United States Culture of New York City Online databases Broadway theatre Internet properties established in 2000 Theatre databases
https://en.wikipedia.org/wiki/Disjoint-set%20data%20structure
In computer science, a disjoint-set data structure, also called a union–find data structure or merge–find set, is a data structure that stores a collection of disjoint (non-overlapping) sets. Equivalently, it stores a partition of a set into disjoint subsets. It provides operations for adding new sets, merging sets (replacing them by their union), and finding a representative member of a set. The last operation makes it possible to find out efficiently if any two elements are in the same or different sets. While there are several ways of implementing disjoint-set data structures, in practice they are often identified with a particular implementation called a disjoint-set forest. This is a specialized type of forest which performs unions and finds in near-constant amortized time. Performing a sequence of m addition, union, or find operations on a disjoint-set forest with n nodes requires total time O(m α(n)), where α(n) is the extremely slow-growing inverse Ackermann function. Disjoint-set forests do not guarantee this performance on a per-operation basis. Individual union and find operations can take longer than a constant times α(n) time, but each operation causes the disjoint-set forest to adjust itself so that successive operations are faster. Disjoint-set forests are both asymptotically optimal and practically efficient. Disjoint-set data structures play a key role in Kruskal's algorithm for finding the minimum spanning tree of a graph. The importance of minimum spanning trees means that disjoint-set data structures underlie a wide variety of algorithms. In addition, disjoint-set data structures also have applications to symbolic computation, as well as in compilers, especially for register allocation problems. History Disjoint-set forests were first described by Bernard A. Galler and Michael J. Fischer in 1964. In 1973, their time complexity was bounded to O(log* n), the iterated logarithm of n, by Hopcroft and Ullman. In 1975, Robert Tarjan was the first to prove the O(m α(n)) (inverse Ackermann function) upper bound on the algorithm's time complexity. He also proved it to be tight. In 1979, he showed that this was the lower bound for a certain class of algorithms, which includes the Galler–Fischer structure. In 1989, Fredman and Saks showed that Ω(α(n)) (amortized) words of O(log n) bits must be accessed by any disjoint-set data structure per operation, thereby proving the optimality of the data structure in this model. In 1991, Galil and Italiano published a survey of data structures for disjoint sets. In 1994, Richard J. Anderson and Heather Woll described a parallelized version of Union–Find that never needs to block. In 2007, Sylvain Conchon and Jean-Christophe Filliâtre developed a semi-persistent version of the disjoint-set forest data structure and formalized its correctness using the proof assistant Coq. "Semi-persistent" means that previous versions of the structure are efficiently retained, but accessing previous versions of the data structure invalidates later ones
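A minimal sketch of a disjoint-set forest, combining the two classic heuristics (path compression, here in its path-halving form, and union by rank) that together yield the near-constant amortized bounds discussed above. This is a generic textbook-style implementation, not taken from any of the cited papers:

```python
class DisjointSet:
    """Disjoint-set forest with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # upper bound on tree height

    def find(self, x):
        # Path halving: every other node on the path is pointed at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # already in the same set
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # True: 0 and 2 are now in the same set
print(ds.find(3) == ds.find(4))  # False: 3 and 4 remain separate
```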
https://en.wikipedia.org/wiki/Business%20software
Business software (or a business application) is any software or set of computer programs used by business users to perform various business functions. These business applications are used to increase productivity, measure productivity, and perform other business functions accurately. Overview Much business software is developed to meet the needs of a specific business, and therefore is not easily transferable to a different business environment, unless its nature and operation are identical. Due to the unique requirements of each business, off-the-shelf software is unlikely to completely address a company's needs. However, where an off-the-shelf solution is necessary, due to time or monetary considerations, some level of customization is likely to be required. Exceptions do exist, depending on the business in question, and thorough research is always required before committing to bespoke or off-the-shelf solutions. Some business applications are interactive, i.e., they have a graphical user interface or user interface and users can query/modify/input data and view results instantaneously. They can also run reports instantaneously. Some business applications run in batch mode: they are set up to run based on a predetermined event/time and a business user does not need to initiate or monitor them. Some business applications are built in-house and some are bought from vendors (off-the-shelf software products). These business applications are installed on either desktops or big servers. Prior to the introduction of COBOL (a portable, business-oriented programming language) in the 1960s, businesses developed their own unique machine language. RCA's language consisted of 12-position instructions. For example, to read a record into memory, the first two digits would be the instruction (action) code. The next four positions of the instruction (an 'A' address) would be the exact leftmost memory location where you want the readable character to be placed. Four positions (a 'B' address) of the instruction would note the very rightmost memory location where you want the last character of the record to be located. A two-digit 'B' address also allows the modification of any instruction. Instruction codes and memory designations excluded the use of 8's or 9's. The first RCA business application was implemented in 1962 on a 4k RCA 301. The RCA 301, mid-frame 501, and large-frame 601 began their marketing in early 1960. Many kinds of users are found within the business environment, and can be categorized by using a small, medium, and large matrix: The small business market generally consists of home accounting software, and office suites such as LibreOffice, Microsoft Office or Google Workspace (formerly G Suite and Google Apps for Work). The medium size, or small and medium-sized enterprise (SME), has a broader range of software applications, ranging from accounting, groupware, customer relationship management, human resource management systems, outsourcing relationship management, loan or
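As an illustration of the fixed-position instruction layout described above, the sketch below splits such an instruction into the described fields. The function name and the sample instruction are invented for illustration; the positions beyond the action code and the two addresses are not described in the text, so they are carried through unparsed:

```python
def decode_rca_style(instr):
    """Illustrative decode of the layout described above: a 2-digit action
    code, a 4-digit 'A' (leftmost) address, a 4-digit 'B' (rightmost)
    address, and the remaining positions of the 12-position instruction
    left unparsed.  Hypothetical, not from RCA documentation."""
    assert len(instr) == 12
    action = instr[0:2]
    a_addr = int(instr[2:6])   # leftmost memory location for the record
    b_addr = int(instr[6:10])  # rightmost memory location for the record
    rest = instr[10:]
    return action, a_addr, b_addr, rest

print(decode_rca_style("120100015000"))  # ('12', 100, 150, '00')
```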
https://en.wikipedia.org/wiki/Iterated%20logarithm
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation: log* n = 0 if n ≤ 1, and log* n = 1 + log*(log n) if n > 1. On the positive real numbers, the continuous super-logarithm (inverse tetration) is essentially equivalent: the base-b iterated logarithm equals k exactly when n lies within the interval b↑↑(k−1) < n ≤ b↑↑k, where ↑↑ denotes tetration. However, on the negative real numbers, log-star is 0, whereas the super-logarithm takes negative values there, so the two functions differ for negative arguments. The iterated logarithm accepts any positive real number and yields an integer. Graphically, it can be understood as the number of "zig-zags" needed in Figure 1 to reach the interval [0, 1] on the x-axis. In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well-defined for any base greater than e^(1/e) ≈ 1.444, not only for base 2 and base e. Analysis of algorithms The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as: Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time. Fürer's algorithm for integer multiplication: O(n log n · 2^(O(log* n))). Finding an approximate maximum (element at least as large as the median): log* n − 4 to log* n + 2 parallel operations. Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds. The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself. For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5. Higher bases give smaller iterated logarithms. Indeed, the only function commonly used in complexity theory that grows more slowly is the inverse Ackermann function α(n). Other applications The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times someone must replace the number by the sum of its digits before reaching its digital root, is O(log* n). In computational complexity theory, Santhanam shows that the computational resources DTIME — computation time for a deterministic Turing machine — and NTIME — computation time for a non-deterministic Turing machine — are distinct up to n·√(log* n). References Asymptotic analysis Logarithms
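The recurrence above transcribes directly into code; the following sketch iterates instead of recursing and matches the observation that log* stays tiny even for astronomically large inputs:

```python
import math

def log_star(n, base=2):
    """Iterated logarithm log* n: the number of times the base-`base`
    logarithm must be applied before the result is <= 1."""
    count = 0
    while n > 1:
        n = math.log(n, base)
        count += 1
    return count

print([log_star(n) for n in (2, 4, 16, 65536)])  # [1, 2, 3, 4]
print(log_star(10**100))  # 5 -- even a googol needs only five iterations
```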
https://en.wikipedia.org/wiki/Combat%20%28video%20game%29
Combat is a 1977 video game by Atari, Inc. for the Atari Video Computer System (later renamed the Atari 2600). In the game, two players controlling either a tank, a biplane, or a jet fire missiles at each other for two minutes and sixteen seconds. Points are scored by hitting the opponent, and the player with the most points when the time runs out wins. Variations on the gameplay introduce elements such as invisible vehicles, missiles that ricochet off walls, and different playing fields. The game was made while the Atari 2600 was still in development. Combat was based on Atari's popular arcade game Tank. It was initially developed by Steve Mayer and Ron Milner. Joe Decuir later tested it on the developing Atari Video Computer System hardware, and Larry Wagner completed the game while adding the two plane variations to the game. The game was released as a pack-in game with the Atari Video Computer System. By the early 1980s, reviews of Combat described its graphics and gameplay as out-of-date. A sequel titled Combat Two was planned for the Atari 2600, but never completed. A prototype of the sequel was later released in various formats. Retrospective reviews by publications such as Classic Gamer Magazine and AllGame have praised the game, while PCMag and Flux included it on their lists of the best Atari 2600 games and the best games of all time, respectively. Gameplay Combat uses two four-directional joysticks, one for each player. The game has several modes of gameplay: "Tank", "Biplane" and "Jet", with variations of each mode. The tank and jet modes are viewed from a top-down perspective while biplane games are side view. In all forms of the game, pushing the button fires a missile. Hitting the other player with a missile scores a point. The player with the most points at the end of a two-minute-and-sixteen-second round wins. Variations of the gameplay in Tank mode include adding obstacles to maneuver around, having the missiles rebound off walls (referred to as Tank-Pong) and a mode where the tanks are invisible except when they are firing a missile or are hit by one. In the other two modes, Jet and Biplane, there are no obstacles, and both aircraft continuously move forward. These levels have options for obscuring clouds that the planes can hide behind. The original Atari VCS game system has six switches in front, including left and right difficulty switches and a game select switch. The difficulty switches change the range of fire of the missiles and in the plane-based games also change the speed of flight. In his article "Combat in Context", Nick Montfort stated that it was difficult to assign a genre to Combat, as doing so would be anachronistic. AllGame would later categorize it as a shooter game. Development Combat was programmed while the Atari 2600 hardware was still under development. Steve Mayer was developing the game by late 1975. The original version was a tank game
https://en.wikipedia.org/wiki/Traffic%20grooming
Traffic grooming is the process of grouping many small telecommunications flows into larger units, which can be processed as single entities. For example, in a network using both time-division multiplexing (TDM) and wavelength-division multiplexing (WDM), two flows which are destined for a common node can be placed on the same wavelength, allowing them to be dropped by a single optical add-drop multiplexer. Often the objective of grooming is minimizing the cost of the network. The cost of line terminating equipment (LTE) (also called add/drop multiplexers or ADMs) is the most dominant component in an optical WDM network's cost. Thus grooming typically involves minimizing the usage of ADMs. This is similar to the use of virtual channels and virtual paths in ATM networks. Effective grooming requires consideration of the topology of the network and the different routes in use. This is especially useful when dealing with mesh networks. Grooming Algorithms The traffic grooming problem has proven to be computationally intractable in the general case. Hence heuristic solutions are typically used (a first-fit example is sketched below). Electrical-Layer Grooming Grooms electrical signals at a granularity of ODUk (k = 1, 2(e), 3, 4, or flex). Services are transmitted to tributary boards and groomed to line boards through the cross-connect board, encapsulated and mapped to OTU signals on the line boards, and then transmitted to the WDM side. Through electrical-layer grooming, services of different granularities are groomed and encapsulated into one wavelength and output to the WDM side. This enables multiple services to share the bandwidth, greatly improving bandwidth utilization. Optical-Layer Grooming Grooms optical signals at a granularity of wavelength (λ) by flexibly selecting transmission paths. After receiving OTU optical signals, the ROADM board creates optical cross-connection paths internally and outputs the signals to specified egresses. Each egress corresponds to a specific path. Operations personnel can remotely control the transmission paths of optical signals by creating and adjusting cross-connection paths on the NMS. Optical-layer grooming used together with ASON technology can implement automatic fault detection and line adjustment to ensure normal transmission of services. References Teletraffic Optical-Layer and Electrical-Layer Grooming
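The first-fit heuristic referenced above can be sketched in a few lines. This is an assumption-laden toy model — flows are given as (destination, rate) pairs and every wavelength has the same capacity — illustrating how flows sharing a destination get packed onto shared wavelengths so that a single ADM can drop them:

```python
def groom_first_fit(flows, wavelength_capacity):
    """Assign (destination, rate) flows to wavelengths first-fit: flows
    sharing a destination may share a wavelength, so they can all be
    dropped by one ADM at that node."""
    wavelengths = []  # each entry: {"dest": node, "load": used capacity}
    for dest, rate in flows:
        for w in wavelengths:
            if w["dest"] == dest and w["load"] + rate <= wavelength_capacity:
                w["load"] += rate
                break
        else:
            wavelengths.append({"dest": dest, "load": rate})
    return wavelengths

flows = [("B", 1), ("B", 1), ("C", 2), ("B", 2), ("C", 1)]
print(groom_first_fit(flows, wavelength_capacity=4))
# Three B-flows fit on one wavelength, both C-flows on another.
```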
https://en.wikipedia.org/wiki/New%20World%20%28Do%20As%20Infinity%20album%29
New World is the second album by Do As Infinity, released 2001. Track listing Chart positions External links New World at Avex Network New World at Oricon 2001 albums Do As Infinity albums Avex Group albums Albums produced by Seiji Kameda
https://en.wikipedia.org/wiki/Privilege%20separation
In computer programming and computer security, privilege separation is a software-based technique for implementing the principle of least privilege. With privilege separation, a program is divided into parts which are limited to the specific privileges they require in order to perform a specific task. This is used to mitigate the potential damage of a computer security vulnerability. A common method to implement privilege separation is to have a computer program fork into two processes. The main program drops privileges, and the smaller program keeps privileges in order to perform a certain task. The two halves then communicate via a socket pair. Thus, any successful attack against the larger program will gain minimal access, even though the pair of programs will be capable of performing privileged operations. Privilege separation is traditionally accomplished by distinguishing a real user ID/group ID from the effective user ID/group ID, using the setuid(2)/setgid(2) and related system calls, which were specified by POSIX. If these are set incorrectly, gaps can allow widespread network penetration. Many network service daemons have to do a specific privileged operation such as opening a raw socket or an Internet socket in the well-known ports range. Administrative utilities can require particular privileges at run-time as well. Such software tends to separate privileges by revoking them completely after the critical section is done, changing the user it runs under to some unprivileged account after doing so. This action is known as dropping root under Unix-like operating systems. The unprivileged part is usually run under the "nobody" user or an equivalent separate user account. Privilege separation can also be done by splitting functionality of a single program into multiple smaller programs, and then assigning the extended privileges to particular parts using file system permissions. That way the different programs have to communicate with each other through the operating system, so the scope of the potential vulnerabilities is limited (since a crash in the less privileged part cannot be exploited to gain privileges, merely to cause a denial-of-service attack). Separation of privileges is one of the major OpenBSD security features. The implementation of Postfix was focused on implementing comprehensive privilege separation. Another email server software designed with privilege separation and security in mind is Dovecot. Solaris implements a separate set of functions for privilege bracketing. See also Capability-based security Confused deputy problem Privilege escalation Privilege revocation (computing) Defensive programming Sandbox (computer security) External links Theo de Raadt: Exploit Mitigation Techniques in OpenBSD slides Niels Provos, Markus Friedl, Peter Honeyman: Preventing Privilege Escalation paper Niels Provos: Privilege Separated OpenSSH project Trusted Solaris Developer's Guide: Bracketing Effective Privil
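A minimal Unix-only sketch of the fork-and-socket-pair pattern described above. The numeric IDs and the request protocol are invented for illustration; a real daemon would validate requests carefully before acting on them:

```python
import os
import socket

def privsep_demo():
    """Fork into a privileged half and an unprivileged half that talk
    over a socket pair (illustrative only; Unix-like systems only)."""
    parent_sock, child_sock = socket.socketpair()
    pid = os.fork()
    if pid == 0:
        # Child: the large, unprivileged worker. Drop privileges first.
        parent_sock.close()
        if os.getuid() == 0:
            os.setgid(65534)  # hypothetical "nobody"-style ids
            os.setuid(65534)
        child_sock.sendall(b"need: open-port-80")  # ask the privileged half
        print("child got reply:", child_sock.recv(64))
        os._exit(0)
    # Parent: the small helper that keeps privileges for one task.
    child_sock.close()
    request = parent_sock.recv(64)
    # ... validate the request and perform the privileged operation here ...
    parent_sock.sendall(b"ok: " + request)
    os.waitpid(pid, 0)

if __name__ == "__main__":
    privsep_demo()
```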
https://en.wikipedia.org/wiki/MTS%20%28telecommunications%29
MTS (МТС, "Mobile TeleSystems"), headquartered in Moscow, is the largest mobile network operator in Russia, operating on the GSM, UMTS and LTE standards. Apart from its cellular network, the company also offers local telephone service, broadband, mobile television, cable television, satellite television and digital television. As of Q1 2021, the company serves over 84.9 million subscribers in Russia, Armenia and Belarus. Operations Branding In May 2006, MTS changed their logo as part of a rebranding campaign performed by their parent company, JSFC Sistema. The logo now has two red squares next to each other. The left one, common in form (but not colour) to all of JSFC Sistema's telecom subsidiaries, contains a white egg which symbolises simplicity and genius, while the right square bears the name of the company: МТС (MTS). In 2010, MTS announced the acquisition of Sistema Telecom, the owners of the MTS "egg" logo, for $380 million, thus becoming the sole owner of the logo. In 2008, the MTS brand was included in the Top 100 World's Most Powerful Brands list by the Financial Times/Millward Brown ranking, becoming the most valuable Russian brand. According to this ranking, in 2010 the MTS brand was the 72nd most valuable brand worldwide with a brand value of $9.7 billion. In 2010 MTS also became the most valuable Russian brand according to the Interbrand ranking. MTS Russia In 1994, a joint venture of Moscow City Telephone Network, T-Mobile and Siemens, which later became part of Mobile TeleSystems (MTS), offered Russia's first GSM mobile phone service to the public in Moscow. In June of the same year, VimpelCom also started its Beeline mobile phone service. Having started in the Moscow license zone in 1994, MTS received licenses for further areas in 1997 and began expansion across Russia, later entering other countries of the CIS. In 2009, MTS acquired several independent mobile retail chains, creating the MTS monobrand retail network of 3,300 stores — the second largest retail network in Russia. Also in 2009 MTS started marketing MTS-branded mobile handsets. Already in 2010 MTS became the 5th best selling handset brand in Russia, after Nokia, Samsung, LG and Sony Ericsson. In 2010, MTS announced the acquisition of 62% of the stock of Comstar, the biggest Russian fixed internet and cable TV provider, with 7.5 million households passed. Comstar products were re-branded to MTS in 2010, forming the largest Russian mobile and fixed telecommunications brand. Until this purchase, MTS was present in the fixed telephony market through its subsidiary Moscow City Telephone Network (MGTS). In November 2013, the company launched the "Home phone MTS" service in Ryazan, Oryol, Kirov, Krasnodar, Rostov-on-Don and Yekaterinburg. The subscription fee for the wired telephone is 100 rubles per month, which includes unlimited calls to numbers of local fixed-line operators. The cost of calls to mobile numbers starts from 1.1 rubles per minute depending on th
https://en.wikipedia.org/wiki/Algorithmic
Algorithmic may refer to: Algorithm, step-by-step instructions for a calculation Algorithmic art, art made by an algorithm Algorithmic composition, music made by an algorithm Algorithmic trading, trading decisions made by an algorithm Algorithmic patent, an intellectual property right in an algorithm Algorithmics, the science of algorithms Algorithmica, an academic journal for algorithm research Algorithmic efficiency, the computational resources used by an algorithm Algorithmic information theory, study of relationships between computation and information Algorithmic mechanism design, the design of economic systems from an algorithmic point of view Algorithmic number theory, algorithms for number-theoretic computation Algorithmic game theory, game-theoretic techniques for algorithm design and analysis Algorithmic cooling, a phenomenon in quantum computation Algorithmic probability, a universal choice of prior probabilities in Solomonoff's theory of inductive inference See also Algorithmic complexity (disambiguation)
https://en.wikipedia.org/wiki/Network%20analysis
Network analysis can refer to: Network theory, the analysis of relations through mathematical graphs Social network analysis, network theory applied to social relations Network analysis (electrical circuits) See also Network planning and design
https://en.wikipedia.org/wiki/Move-to-front%20transform
The move-to-front (MTF) transform is an encoding of data (typically a stream of bytes) designed to improve the performance of entropy encoding techniques of compression. When efficiently implemented, it is fast enough that its benefits usually justify including it as an extra step in a data compression algorithm. This algorithm was first published by B. Ryabko under the name of "book stack" in 1980. Subsequently, it was rediscovered by J. L. Bentley et al. in 1986, as attested in the explanatory note. The transform The main idea is that each symbol in the data is replaced by its index in the stack of "recently used symbols". For example, long sequences of identical symbols are replaced by as many zeroes, whereas when a symbol that has not been used in a long time appears, it is replaced with a large number. Thus at the end the data is transformed into a sequence of integers; if the data exhibits a lot of local correlations, then these integers tend to be small. Let us give a precise description. Assume for simplicity that the symbols in the data are bytes. Each byte value is encoded by its index in a list of bytes, which changes over the course of the algorithm. The list is initially in order by byte value (0, 1, 2, 3, ..., 255). Therefore, the first byte is always encoded by its own value. However, after encoding a byte, that value is moved to the front of the list before continuing to the next byte. An example will shed some light on how the transform works. Imagine instead of bytes, we are encoding values in a–z. We wish to transform the following sequence: bananaaa By convention, the list is initially (abcdefghijklmnopqrstuvwxyz). The first letter in the sequence is b, which appears at index 1 (the list is indexed from 0 to 25). We put a 1 to the output stream: 1 The b moves to the front of the list, producing (bacdefghijklmnopqrstuvwxyz). The next letter is a, which now appears at index 1. So we add a 1 to the output stream. We have: 1,1 and we move the letter a back to the top of the list. Continuing this way, we find that the sequence is encoded by: 1,1,13,1,1,1,0,0 It is easy to see that the transform is reversible. Simply maintain the same list and decode by replacing each index in the encoded stream with the letter at that index in the list. Note the difference between this and the encoding method: the index in the list is used directly instead of looking up each value for its index. I.e., you start again with (abcdefghijklmnopqrstuvwxyz). You take the "1" of the encoded block and look it up in the list, which results in "b". Then move the "b" to the front, which results in (bacdef...). Then take the next "1", look it up in the list, this results in "a", move the "a" to the front ... etc. Implementation Details of implementation are important for performance, particularly for decoding. For encoding, no clear advantage is gained by using a linked list, so using an array to store the list is acceptable, with worst-case perfor
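The worked example above transcribes almost directly into code; a list-based sketch (the text notes that a plain array/list is acceptable for encoding):

```python
def mtf_encode(data, alphabet):
    symbols = list(alphabet)
    out = []
    for ch in data:
        i = symbols.index(ch)              # current index is the output value
        out.append(i)
        symbols.insert(0, symbols.pop(i))  # move the symbol to the front
    return out

def mtf_decode(indices, alphabet):
    symbols = list(alphabet)
    out = []
    for i in indices:
        ch = symbols[i]                    # the index directly selects the symbol
        out.append(ch)
        symbols.insert(0, symbols.pop(i))  # same list maintenance as encoding
    return "".join(out)

alphabet = "abcdefghijklmnopqrstuvwxyz"
encoded = mtf_encode("bananaaa", alphabet)
print(encoded)                        # [1, 1, 13, 1, 1, 1, 0, 0]
print(mtf_decode(encoded, alphabet))  # "bananaaa"
```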
https://en.wikipedia.org/wiki/Evolutionary%20Technologies%20International
Evolutionary Technologies International (ETI) was a company focused on developing database tools and data warehousing. Originally a research project at the Microelectronics and Computer Technology Corporation (MCC) in Austin, Texas, ETI was spun off as a private company by co-founders Katherine Hammer, Robin Curle, Lisa Keeler, and Duane Voth in 1990. ETI's technology provided a mechanism for automatically generating ETL (Extract/Transform/Load) programs based on user-defined metadata. Their first product was called EXTRACT (later changed to ETI•EXTRACT). ETI was purchased by Versata in May 2008. ETI's technology was acquired by Ignite in June 2015. References Information technology companies of the United States Companies based in Austin, Texas
https://en.wikipedia.org/wiki/List%20of%20satellites%20in%20geosynchronous%20orbit
This is a list of satellites in geosynchronous orbit (GSO). These satellites are commonly used for communication purposes, such as radio and television networks, back-haul, and direct broadcast. Traditional global navigation systems do not use geosynchronous satellites, but some SBAS navigation satellites do. A number of weather satellites are also present in geosynchronous orbits. Not included in the list below are several more classified military geosynchronous satellites, such as PAN. A special case of geosynchronous orbit is the geostationary orbit, which is a circular geosynchronous orbit at zero inclination (that is, directly above the equator). A satellite in a geostationary orbit appears stationary, always at the same point in the sky, to ground observers. Popularly or loosely, the term "geosynchronous" may be used to mean geostationary. Specifically, geosynchronous Earth orbit (GEO) may be a synonym for geosynchronous equatorial orbit, or geostationary Earth orbit. To avoid confusion, geosynchronous satellites that are not in geostationary orbit are sometimes referred to as being in an inclined geosynchronous orbit (IGSO). Some of these satellites are separated from each other by as little as 0.1° of longitude. This corresponds to an inter-satellite spacing of approximately 73 km. The major consideration for spacing of geostationary satellites is the at-orbit beamwidth of uplink transmitters, which is primarily a factor of the size and stability of the uplink dish, as well as of what frequencies the satellite's transponders receive; satellites with discontiguous frequency allocations can be much closer together. As of July 2023, the website UCS Satellite Database lists 6,718 known satellites. This includes all orbits and everything down to the little CubeSats, not just satellites in GEO. Of these, 580 are listed in the database as being at GEO. The website provides a spreadsheet containing details of all the satellites, which can be downloaded. Listings are from west to east (decreasing longitude in the Western Hemisphere and increasing longitude in the Eastern Hemisphere) by orbital position, starting and ending with the International Date Line. Satellites in inclined geosynchronous orbit are so indicated by a note in the "remarks" columns. Western Hemisphere Eastern Hemisphere In transit Historical References External links SatcoDX, a useful 3rd party resource Lyngsat, a useful 3rd party resource Satbeams – satellite footprints TrackingSat – List of satellites in geostationary orbit Zarya.info's list of satellites in geosynchronous orbit, updated daily Geosynchronous orbit Geo
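The 73 km figure quoted above follows from one line of arc-length arithmetic, using the approximate geostationary orbital radius of 42,164 km:

```python
import math

GEO_RADIUS_KM = 42164  # approximate geostationary orbital radius

def spacing_km(delta_longitude_deg):
    """Arc length between two geostationary satellites separated by a
    given difference in longitude."""
    return GEO_RADIUS_KM * math.radians(delta_longitude_deg)

print(round(spacing_km(0.1), 1))  # ~73.6 km, matching the figure above
```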
https://en.wikipedia.org/wiki/Cray-3
The Cray-3 was a vector supercomputer, Seymour Cray's designated successor to the Cray-2. The system was one of the first major applications of gallium arsenide (GaAs) semiconductors in computing, using hundreds of custom built ICs packed into a CPU. The design goal was performance around 16 GFLOPS, about 12 times that of the Cray-2. Work started on the Cray-3 in 1988 at Cray Research's (CRI) development labs in Chippewa Falls, Wisconsin. Other teams at the lab were working on designs with similar performance. To focus the teams, the Cray-3 effort was moved to a new lab in Colorado Springs, Colorado later that year. Shortly thereafter, the corporate headquarters in Minneapolis decided to end work on the Cray-3 in favor of another design, the Cray C90. In 1989 the Cray-3 effort was spun off to a newly formed company, Cray Computer Corporation (CCC). The launch customer, Lawrence Livermore National Laboratory, cancelled their order in 1991 and a number of company executives left shortly thereafter. The first machine was finally ready in 1993, but with no launch customer, it was instead loaned as a demonstration unit to the nearby National Center for Atmospheric Research in Boulder. The company went bankrupt in May 1995, and the machine was officially decommissioned. With the delivery of the first Cray-3, Seymour Cray immediately moved on to the similar-but-improved Cray-4 design, but the company went bankrupt before it was completely tested. The Cray-3 was Cray's last completed design; with CCC's bankruptcy, he formed SRC Computers to concentrate on parallel designs, but died in a car accident in 1996 before this work was delivered. History Background Seymour Cray began the design of the Cray-3 in 1985, as soon as the Cray-2 reached production. Cray generally set himself the goal of producing new machines with ten times the performance of the previous models. Although the machines did not always meet this goal, this was a useful technique in defining the project and clarifying what sort of process improvements would be needed to meet it. For the Cray-3, he decided to set an even higher performance improvement goal, an increase of 12x over the Cray-2. Cray had always attacked the problem of increased speed with three simultaneous advances; more execution units to give the system higher parallelism, tighter packaging to decrease signal delays, and faster components to allow for a higher clock speed. Of the three, Cray was normally least aggressive on the last; his designs tended to use components that were already in widespread use, as opposed to leading-edge designs. For the Cray-2, he introduced a novel 3D-packaging system for its integrated circuits to allow higher densities, and it appeared that there was some room for improvement in this process. For the new design, he stated that all wires would be limited to a maximum length of . This would demand the processor be able to fit into a block, about that of the Cray-2 CPU. This would not
https://en.wikipedia.org/wiki/Clive%20Finkelstein
Clive Finkelstein (c. 1939 – 12 September 2021) was an Australian computer scientist, known as the "father" of the information engineering methodology. Life and work In 1961 Finkelstein received his Bachelor of Science from the University of New South Wales in Sydney. After graduation Finkelstein started working in the field of database processing for IBM in Australia and in the USA. Back in Australia, in 1976 he founded the IT consultancy firm Infocom Australia. In 1972 Finkelstein was elected a Fellow of the Australian Computer Society. Finkelstein was a distinguished member of the International Advisory Board of the Data Administration Management Association (DAMA International), with John Zachman. In 2008 he was awarded a position in the Pearcey Hall of Fame of the ACS in Australia. From 1976 to 1980 Finkelstein developed the concept of information technology engineering, based on original work carried out by him to bridge from strategic business planning to information systems. He wrote the first publication on information technology engineering: a series of six in-depth articles by the same name published by US Computerworld in May–June 1981. He also co-authored with James Martin the influential Savant Institute report titled "Information Engineering", published in November 1981. He also wrote a monthly column, "The Enterprise", for DM Review magazine. Finkelstein died from Parkinson's disease in September 2021. Selected publications Martin, James, and Clive Finkelstein. Information Engineering. Savant, Nov 1981. Finkelstein, Clive. An Introduction to Information Engineering: From Strategic Planning to Information Systems. Addison-Wesley Longman Publishing Co., Inc., 1989. Finkelstein, Clive. Information Engineering: Strategic Systems Development. Addison-Wesley Longman Publishing Co., Inc., 1992. Finkelstein, Clive, and Peter Aiken. Building Corporate Portals with XML. McGraw-Hill, Inc., 2000. Clive Finkelstein. Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies, First Edition, Artech House, 2006. Hardcover. Clive Finkelstein. Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies, Second Edition, IES, 2011. Ebook. Clive Finkelstein. Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies, Third Edition, 2015. Ebook – download in PDF from www.ies.aust.com. References External links Clive Finkelstein Home Page and latest books – Australia "Information Engineering, Portals and Data Warehouses" Interview with Clive Finkelstein (Real Video, Windows Media Video, MP3 podcast, running time 10:13) Australian computer scientists Database specialists University of New South Wales alumni 1930s births 2021 deaths
https://en.wikipedia.org/wiki/Bill%20Inmon
William H. Inmon (born 1945) is an American computer scientist, recognized by many as the father of the data warehouse. Inmon wrote the first book, held the first conference (with Arnie Barnett), wrote the first magazine column and was the first to offer classes in data warehousing. Inmon created the accepted definition of what a data warehouse is: a subject-oriented, nonvolatile, integrated, time-variant collection of data in support of management's decisions. Compared with the approach of the other pioneering architect of data warehousing, Ralph Kimball, Inmon's approach is often characterized as a top-down approach. Biography William H. Inmon was born July 20, 1945, in San Diego, California. He received his Bachelor of Science degree in mathematics from Yale University in 1967, and his Master of Science degree in computer science from New Mexico State University. He worked for American Management Systems and Coopers & Lybrand before 1991, when he founded the company Prism Solutions, which he took public. In 1995 he founded Pine Cone Systems, which was later renamed Ambeo. In 1999, he created a corporate information factory web site for his consulting business. Inmon coined terms such as the government information factory, as well as data warehousing 2.0. Inmon promotes the building, usage, and maintenance of data warehouses and related topics. His books include "Building the Data Warehouse" (1992, with later editions) and "DW 2.0: The Architecture for the Next Generation of Data Warehousing" (2008). In July 2007, Inmon was named by Computerworld as one of the ten people that most influenced the first 40 years of the computer industry. Inmon's association with data warehousing stems from the fact that he wrote the first book on data warehousing, held the first conference on data warehousing (with Arnie Barnett), wrote the first magazine column on data warehousing, has written over 1,000 articles on data warehousing in journals and newsletters, created the first fold-out wall chart for data warehousing, and conducted the first classes on data warehousing. In 2012, Inmon developed and made public a technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard database format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of TextualETL. Inmon owns and operates Forest Rim Technology, a company that applies and implements data warehousing solutions executed through textual disambiguation and TextualETL. Awards (2002) DAMA International Professional Achievement Award for "major contributions as the 'father of data warehousing' and a recognized thought leader in decision support" from DAMA International, The Global Data Management Community. (2018) Received
https://en.wikipedia.org/wiki/Database%20design
Database design is the organization of data according to a database model. The designer determines what data must be stored and how the data elements interrelate. With this information, they can begin to fit the data to the database model. A database management system manages the data accordingly. Database design involves classifying data and identifying interrelationships. This theoretical representation of the data is called an ontology. Determining data to be stored In the majority of cases, the person designing a database is a person with expertise in the area of database design, rather than expertise in the domain from which the data to be stored is drawn, e.g. financial information, biological information etc. Therefore, the data to be stored in the database must be determined in cooperation with a person who does have expertise in that domain, and who is aware of what data must be stored within the system. This process is one which is generally considered part of requirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. This is because those with the necessary domain knowledge often cannot clearly express the system requirements for the database, as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. Data to be stored can be determined by a requirements specification. Determining data relationships Once a database designer is aware of the data which is to be stored within the database, they must then determine where the dependencies lie within the data. Sometimes a change to one piece of data implies a change to other data that is not immediately visible. For example, in a list of names and addresses, assuming a situation where multiple people can have the same address, but one person cannot have more than one address, the address is dependent upon the name. When provided a name and the list, the address can be uniquely determined; however, the inverse does not hold – when given an address and the list, a name cannot be uniquely determined because multiple people can reside at an address. Because an address is determined by a name, an address is considered dependent on a name. (NOTE: A common misconception is that the relational model is so called because of the stating of relationships between data elements therein. This is not true. The relational model is so named because it is based upon the mathematical structures known as relations.) Logically structuring data Once the relationships and dependencies amongst the various pieces of information have been determined, it is possible to arrange the data into a logical structure which can then be mapped into the storage objects supported by the database management system. In the case of relational databases the storage objects are tables which store data in rows and columns. In an Object database the storage objects correspond directly to the objects used by the Object-or
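The name→address dependency described above can be tested mechanically against sample data. A small sketch, with the record format assumed for illustration:

```python
def determines(rows, lhs, rhs):
    """True if the value of column `lhs` uniquely determines the value of
    column `rhs` across all rows (a functional dependency lhs -> rhs)."""
    seen = {}
    for row in rows:
        key, val = row[lhs], row[rhs]
        # setdefault returns the previously seen value, or records this one.
        if seen.setdefault(key, val) != val:
            return False
    return True

people = [
    {"name": "Ann", "address": "1 Elm St"},
    {"name": "Bob", "address": "1 Elm St"},  # a shared address is allowed
    {"name": "Ann", "address": "1 Elm St"},
]
print(determines(people, "name", "address"))  # True: name -> address
print(determines(people, "address", "name"))  # False: many names per address
```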
https://en.wikipedia.org/wiki/Joe%20Costello
Joe or Joseph Costello may refer to: Joe Costello (politician) (born 1945), Irish Labour Party politician Joseph Costello (software executive) (born 1953), American computer software executive Joseph Arthur Costello (1915–1978), American bishop of the Catholic Church Joseph J. Costello (1892–1960), mayor of Galway Joe Costello (American football) (born 1960), American football defensive end
https://en.wikipedia.org/wiki/Data%20corruption
Data corruption refers to errors in computer data that occur during writing, reading, storage, transmission, or processing, which introduce unintended changes to the original data. Computer, transmission, and storage systems use a number of measures to provide end-to-end data integrity, or lack of errors. In general, when data corruption occurs, a file containing that data will produce unexpected results when accessed by the system or the related application. Results could range from a minor loss of data to a system crash. For example, if a document file is corrupted, when a person tries to open that file with a document editor they may get an error message, and the file might not open or might open with some of the data corrupted (or in some cases, completely corrupted, leaving the document unintelligible). The adjacent image is a corrupted image file in which most of the information has been lost. Some types of malware may intentionally corrupt files as part of their payloads, usually by overwriting them with inoperative or garbage code, while a non-malicious virus may also unintentionally corrupt files when it accesses them. If a virus or trojan with this payload method manages to alter files critical to the running of the computer's operating system software or physical hardware, the entire system may be rendered unusable. Some programs can offer to repair the file automatically (after the error), while others cannot. It depends on the level of corruption and the built-in functionality of the application to handle the error. There are various causes of corruption. Overview There are two types of data corruption associated with computer systems: undetected and detected. Undetected data corruption, also known as silent data corruption, results in the most dangerous errors, as there is no indication that the data is incorrect. Detected data corruption may be permanent with the loss of data, or may be temporary when some part of the system is able to detect and correct the error; there is no data corruption in the latter case. Data corruption can occur at any level in a system, from the host to the storage medium. Modern systems attempt to detect corruption at many layers and then recover or correct the corruption; this is almost always successful, but on the rare occasions when it is not, the corrupted information arriving in the system's memory can cause unpredictable results. Data corruption during transmission has a variety of causes. Interruption of data transmission causes information loss. Environmental conditions can interfere with data transmission, especially when dealing with wireless transmission methods. Heavy clouds can block satellite transmissions. Wireless networks are susceptible to interference from devices such as microwave ovens. Hardware and software failure are the two main causes for data loss. Background radiation, head crashes, and aging or wear of the storage device fall into the former cate
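Detected corruption, as discussed above, requires redundancy added at write time. A minimal sketch using a CRC-32 checksum (a CRC detects corruption but, unlike an error-correcting code, cannot repair it):

```python
import zlib

def store(payload):
    """Attach a CRC-32 so corruption can be detected on read."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def load(blob):
    payload, checksum = blob[:-4], blob[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != checksum:
        raise ValueError("detected data corruption")
    return payload

blob = store(b"important record")
print(load(blob))            # intact data reads back fine
corrupted = bytearray(blob)
corrupted[0] ^= 0x01         # flip one bit in storage or transit
try:
    load(bytes(corrupted))
except ValueError as e:
    print(e)                 # "detected data corruption"
```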
https://en.wikipedia.org/wiki/Nexus%3A%20The%20Jupiter%20Incident
Nexus: The Jupiter Incident is a science fiction themed real-time tactics computer game developed by the Hungary-based Mithis Entertainment and published by HD Interactive. The game focuses on tactics and ship management instead of resource collection and base construction. Gameplay and features In each of the game's missions, the player is given a small number of large space ships (always fewer than ten, and sometimes just one or two), along with accompanying fighters and bombers. The ships are large and cumbersome, and the battles between fleets protracted, giving the game a noted cinematic feel. Nexus uses the Black Sun Engine, made specifically for the game. Based on DirectX 9, it makes extensive use of vertex and pixel shaders, a parametric particle system, and other visual effects. Story The game is set in the 22nd century. The player is Marcus Cromwell, a famed spacecraft captain whose father, Richard Cromwell, the first spaceborn human, captained the colony ship Noah's Ark through a wormhole near Mars and was presumed destroyed when the wormhole collapsed. The game begins with Cromwell setting out on the heavy corvette Stiletto, manufactured by the company Spacetech, for Jupiter as standing escort for a pair of freighters. Upon arrival at a pair of Spacetech-owned space stations in orbit around Europa, Cromwell is retasked with intercepting a Kissaki Syndicate (a rival corporation to Spacetech) freighter that had been tagged for inspection, and later responding to an SOS call issued by a Kissaki-owned station, fighting other corporations' ships and the station's automated defences. Inside the station was a cruiser-sized alien vessel, nicknamed the Angelwing by the Kissaki Syndicate, as well as information regarding another Kissaki station orbiting Pluto. After a battle with a Syndicate fleet for control of the Angelwing back at the Spacetech Europa stations, Cromwell is given command of the cruiser and ordered to investigate the secret Kissaki base. At Pluto, an artificial intelligence named Angel uploads herself into the Angelwing and advises Cromwell to escape from an alien, orb-like entity - later known as a Mechanoid - through a nearby wormhole, which was the same wormhole as the one which collapsed near Mars. Cromwell exits the wormhole and finds a system (named Noah) populated by the colonists from Noah's Ark, who survived the wormhole collapse and started a colony. The Noah Colony fights as mercenaries for an advanced but pacifist alien race, called the Vardrags, whose technology was given to the colonists, against another powerful race, the bloodthirsty, reptilian Gorgs, and a local group of renegade Vardrag elites, known as the Raptors. After aiding a successful raid against a Raptor base, Cromwell is enlisted to fight against the Gorg Empire. In fights against the Gorg, the Ghosts occasionally seem to help the Angelwing. However, all the races would soon find themselves facing their greatest threat: a virulent ra
https://en.wikipedia.org/wiki/San%20Andreas%20Fault%20Observatory%20at%20Depth
The San Andreas Fault Observatory at Depth (SAFOD) is a research project that began in 2002 aimed at collecting geological data about the San Andreas Fault for the purpose of predicting and analyzing future earthquakes. The site consists of a pilot hole and a main hole. Drilling operations ceased in 2007. Located near the town of Parkfield, California, the project has installed geophone sensors and GPS clocks in a borehole that cuts directly through the fault. This data, along with samples collected during drilling, could shed new light on geochemical and mechanical properties around the fault zone. SAFOD is part of Earthscope, an Earth science program using geological and geophysical techniques to explore the structure of the North American continent and to understand the origin of earthquakes and volcanoes. Earthscope is funded by the National Science Foundation in conjunction with the U.S. Geological Survey and NASA. Data collected at SAFOD are available from The Northern California Earthquake Data Center at U.C. Berkeley and at the IRIS DMC. See also Chikyū Hakken, deep oceanic drilling program Integrated Ocean Drilling Program Kola Borehole (1970–2005, ) KTB Borehole (1987–1995, ) Mohorovičić discontinuity Project Mohole References Further reading External links Earthscope SAFOD at ICDP-Online.org Deepest boreholes Geology of California Seismological observatories, organisations and projects Structure of the Earth
https://en.wikipedia.org/wiki/Nicky%20Hager
Nicolas Alfred Hager (born 1958) is a New Zealand investigative journalist. He has produced seven books since 1996, covering topics such as intelligence networks, environmental issues and politics. He is one of two New Zealand members of the International Consortium of Investigative Journalists. Early life Hager was born in Levin to a middle-class "socially aware" family. His father was from Vienna, Austria, a clothing manufacturer who emigrated to New Zealand as a refugee from the Nazis. His mother was born in Zanzibar (part of Tanzania), where her father studied tropical medicine, and later grew up in Kenya and Uganda. His surname Hager is pronounced Har-gar, rhyming with lager. Hager studied physics at Victoria University of Wellington, where he also did an honours degree in philosophy. He stood as a Values Party candidate for Pahiatua in the 1978 general election. Early career After graduating from university, Hager worked at the ecology division of the Department of Scientific and Industrial Research (DSIR), and later worked with his brother-in-law building and renovating houses. Journalism career Hager is an investigative journalist who has written seven books. The International Consortium of Investigative Journalists (ICIJ), an international network that has 249 investigative reporters in over 90 countries, counts Hager as one of only two New Zealand members. Hager works on a number of projects at any given time, gathering information about them and looking for sources and mentors. Some projects "come together" but others can continue for a number of years. Secret Power Secret Power: New Zealand's Role in the International Spy Network, published in 1996, was Hager's first book. The book investigates various spying techniques, including signals intelligence (Sigint), a form of electronic eavesdropping between countries. The information was taken from interviews with staff in New Zealand's Sigint agency, the Government Communications Security Bureau (GCSB), who revealed the workings of the agency in minute detail: the intelligence targets, equipment, operating procedures, security systems and training, as well as the staff and layout of the intelligence agency's facilities. The book makes special mention of GCSB's facilities at Waihopai and Tangimoana. It revealed aspects of New Zealand's participation in the UKUSA Agreement facilitating intelligence gathering and sharing between the United States, United Kingdom, Canada, Australia and New Zealand. Since the Edward Snowden revelations, this partnership has become more commonly known as the Five Eyes. As a result, the insights into the inner workings of the GCSB provide information about the allied agencies as well. In particular, Hager documented the US-coordinated ECHELON system, through which the five agencies intercept and process huge volumes of international e-mail, fax and telephone communications. Hager was one of the earliest to write about the secretive ECHELON worldwide elec
https://en.wikipedia.org/wiki/Row%20%28database%29
In the context of a relational database, a row—also called a tuple—represents a single, implicitly structured data item in a table. In simple terms, a database table can be thought of as consisting of rows and columns. Each row in a table represents a set of related data, and every row in the table has the same structure. For example, in a table that represents companies, each row would represent a single company. Columns might represent things like company name, company street address, whether the company is publicly held, its VAT number, etc. In a table that represents the association of employees with departments, each row would associate one employee with one department. The implicit structure of a row, and the meaning of the data values in a row, requires that the row be understood as providing a succession of data values, one in each column of the table. The row is then interpreted as a relvar composed of a set of tuples, with each tuple consisting of the two items: the name of the relevant column and the value this row provides for that column. Each column expects a data value of a particular type. For example, one column might require a unique identifier, another might require text representing a person's name, another might require an integer representing hourly pay in dollars. See also Column (database) References Data modeling Relational model
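The reading of a row as (column name, value) pairs can be made concrete; a small sketch with an invented companies table:

```python
columns = ("id", "company_name", "publicly_held")
row = (42, "Acme Corp", True)

# The row interpreted as (column name, value) pairs, as described above:
as_pairs = dict(zip(columns, row))
print(as_pairs)  # {'id': 42, 'company_name': 'Acme Corp', 'publicly_held': True}
```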
https://en.wikipedia.org/wiki/Column%20%28database%29
In a relational database, a column is a set of data values of a particular type, one value for each row of the database. A column may contain text values, numbers, or even pointers to files in the operating system. Columns typically contain simple types, though some relational database systems allow columns to contain more complex data types, such as whole documents, images, or even video clips. A column can also be called an attribute. Each row would provide a data value for each column and would then be understood as a single structured data value. For example, a database that represents company contact information might have the following columns: ID, Company Name, Address Line 1, Address Line 2, City, and Postal Code. More formally, a row is a tuple containing a specific value for each column, for example: (1234, 'Big Company Inc.', '123 East Example Street', '456 West Example Drive', 'Big City', 98765). Field The word 'field' is normally used interchangeably with 'column'. However, database perfectionists tend to favor using 'field' to signify a specific cell of a given row, to enable accuracy in communicating with other developers. In this usage, columns (really column names) are referred to as field names (common to each row/record in the table), while a field refers to a single storage location in a specific record (like a cell) storing one value (the field value). The terms record and field come from the more practical field of database usage and traditional DBMS systems (this was linked to business-like terms used in manual databases, e.g. filing-cabinet storage with a record for each customer). The terms row and column come from the more theoretical study of relational theory. Another distinction between the terms 'column' and 'field' is that the term 'column' does not apply to certain databases, for instance key-value stores, that do not conform to the traditional relational database structure. See also Column-oriented DBMS, optimization for column-centric queries Column (data store), a similar object used in distributed data stores Row (database) SQL Query language Column groups and row groups References Data modeling
https://en.wikipedia.org/wiki/Granular%20computing
Granular computing is an emerging computing paradigm of information processing that concerns the processing of complex information entities called "information granules", which arise in the process of data abstraction and derivation of knowledge from information or data. Generally speaking, information granules are collections of entities that usually originate at the numeric level and are arranged together due to their similarity, functional or physical adjacency, indistinguishability, coherency, or the like. At present, granular computing is more a theoretical perspective than a coherent set of methods or principles. As a theoretical perspective, it encourages an approach to data that recognizes and exploits the knowledge present in data at various levels of resolution or scales. In this sense, it encompasses all methods which provide flexibility and adaptability in the resolution at which knowledge or information is extracted and represented. Types of granulation As mentioned above, granular computing is not an algorithm or process; there is no particular method that is called "granular computing". It is rather an approach to looking at data that recognizes how different and interesting regularities in the data can appear at different levels of granularity, much as different features become salient in satellite images of greater or lesser resolution. On a low-resolution satellite image, for example, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image, one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the interesting pattern that is the streets of Manhattan. The same is generally true of all data: At different resolutions or granularities, different features and relationships emerge. The aim of granular computing is to try to take advantage of this fact in designing more effective machine-learning and reasoning systems. There are several types of granularity that are often encountered in data mining and machine learning, and we review them below: Value granulation (discretization/quantization) One type of granulation is the quantization of variables. It is very common that in data mining or machine-learning applications the resolution of variables needs to be decreased in order to extract meaningful regularities. An example of this would be a variable such as "outside temperature", which in a given application might be recorded to several decimal places of precision (depending on the sensing apparatus). However, for purposes of extracting relationships between "outside temperature" and, say, "number of health-club applications", it will generally be advantageous to quantize "outside temperature" into a smaller number of intervals. Motivations There are several interrelated reasons for granulating variables in this fashion: Based on prior domain knowledge, there is no expecta
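A minimal sketch of the value granulation just described, with invented cut points for the temperature variable:

```python
def quantize(value, boundaries, labels):
    """Map a continuous reading to a coarse granule ("bin")."""
    for bound, label in zip(boundaries, labels):
        if value <= bound:
            return label
    return labels[-1]  # above the last boundary

# Hypothetical granulation of outside temperature into three intervals:
bounds = [10.0, 25.0]                 # degrees Celsius, illustrative cut points
labels = ["cold", "mild", "hot"]
readings = [3.7, 18.2, 31.9]
print([quantize(t, bounds, labels) for t in readings])  # ['cold', 'mild', 'hot']
```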
https://en.wikipedia.org/wiki/IBM%20DPPX
Distributed Processing Programming Executive is a discontinued operating system introduced by IBM, pre-installed on the IBM 8100 and later ported to the ES/9370. Brief history It was first introduced on the IBM 8100 series, which was released in 1978. 1987 saw the release of Distributed Processing Programming Executive System Product (DPPX/SP) Release 4. In 1986, IBM had decided to discontinue the IBM 8100 architecture to consolidate its hardware and software families. In 1988, they released DPPX/370, which ran on the ES/9370 processors (an S/370 model). By the end of June 1997, DPPX/370 was officially retired. Architecture DPPX was written in Programming Language for Distributed Systems (PL/DS), a PL/I-derived systems programming language, similar to the PL/S systems programming language used for MVS and VM. Part of the DPPX/370 development process was developing a PL/DS 2 language, which was based on PL/DS, but with changes necessitated by the changed instruction set. (PL/DS, like PL/S, is a high-level language which encourages significant use of inline assembly.) The user interfaces (e.g., command line) of DPPX were very clean and easy to use; the syntax of the commands and the whole concept and ideas of DPPX were straightforward and consistent (command line, online help, etc.), and each and every aspect was documented online and in a rich set of well organized printed manuals. A DPPX system could be operated truly operator-less and remotely (hence the Distributed part of the name). One benefit of this clean design was that programs could be written in modern dialects of COBOL, and dialogs could be developed interactively. DPPX had a native DBMS with a simple key-lookup architecture, and the ability to move forward through a table after starting from a specific key value by issuing a read-forward command. A limitation of the DPPX DBMS was the lack of read-previous capability, which made it difficult, for example, to code page-back functionality for a screen loaded from a DPPX DBMS table. This limitation was mitigated by an enterprising young programmer (K. Riley of Anchorage, Alaska) who suggested creating, at the application layer, alternate keys for the DPPX tables that needed read-previous functionality. The alternate keys could then be loaded with the binary 1's complement of the primary key, at which point reading forward on the alternate key was equivalent to reading previous on the primary key. Software In addition to the expected functions of an operating system, DPPX included several functions which allowed for remote administration, such as Distributed Host Command Facility (DHCF), which allowed a Host Command Facility (HCF) user on a mainframe to log on in either full-screen mode or line mode to execute commands as though logged on locally, and Distributed Systems Network (or Node) Executive (DSNX), which allowed a Distributed Systems Executive (DSX) (later NetView/DM) job to manage files. Separate additional products were also available,
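The ones'-complement trick described above can be sketched in a few lines. This is an illustrative Python toy, not DPPX code; the key width and sample records are invented for the demonstration.

```python
KEY_BITS = 16  # assumed fixed key width for this illustration

def alt_key(primary_key: int) -> int:
    """Binary ones' complement of a fixed-width key."""
    return primary_key ^ ((1 << KEY_BITS) - 1)

# Primary index: key -> record.
table = {10: "first", 20: "second", 30: "third"}

# Alternate index loaded with the ones' complement of each primary key.
alternate = {alt_key(k): k for k in table}

# "Read forward" (an ascending scan) over the alternate index visits the
# records in descending primary-key order, i.e. it emulates read-previous.
for a in sorted(alternate):
    k = alternate[a]
    print(k, table[k])  # prints 30, 20, 10
```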
https://en.wikipedia.org/wiki/ANSI%20art
ANSI art is a computer art form that was widely used at one time on bulletin board systems. It is similar to ASCII art, but constructed from a larger set of 256 letters, numbers, and symbols — all codes found in IBM code page 437, often referred to as extended ASCII and used in MS-DOS and Unix environments. ANSI art also contains special ANSI escape sequences that color text with the 16 foreground and 8 background colours offered by ANSI.SYS, an MS-DOS device driver loosely based upon the ANSI X3.64 standard for text terminals. Some ANSI artists take advantage of the cursor control sequences within ANSI X3.64 in order to create animations, commonly referred to as ANSImations. ANSI art and text files which incorporate ANSI codes carry the de facto file extension .ANS. Overview ANSI art is considerably more flexible than ASCII art, because the particular character set it uses contains symbols intended for drawing, such as a wide variety of box-drawing characters and block characters that dither the foreground and background color. It also adds accented characters and math symbols that often find creative use among ANSI artists. The popularity of ANSI art encouraged the creation of a powerful shareware package called TheDraw, coded by Ian E. Davis in 1986. Not only did it considerably simplify the process of making an ANSI art screen from scratch, but it also included a variety of "fonts", large letters constructed from box and block characters, and transition animations such as dissolve and clock. No new versions of TheDraw emerged after version 4.63 in 1993, but in later years a number of other ANSI editors appeared, some of which are still maintained today. The popular game creation system (GCS) ZZT used ANSI graphics exclusively. A later GCS based on the same concept, MegaZeux, allowed users to modify the extended ASCII character set as well. Trade Wars 2002, a multiplayer BBS game that remains popular decades after its release in 1986, used ANSI graphics to depict ships, planets, and important locations, and included cutscenes and even a cinema with ANSI animations. Many of these ANSI graphics were created by Drew Markham, who went on to form Xatrix Entertainment/Gray Matter Studios and develop Redneck Rampage and Return to Castle Wolfenstein, among other titles. The rise of the internet caused the decline of both BBSes and DOS users, which made ANSI graphics harder to create and to view due to the lack of software compatible with the new dominant operating system, Microsoft Windows. By the end of 2002, all the traditional ANSI art groups, such as ACiD, ICE, CIA, Fire, Dark and many others, were no longer making periodic releases of artworks, called "artpacks", and the community of artists had almost vanished. Since then this form of art is no longer practiced to the degree it once was, but it was kept alive by a few newly created groups like SENSE, 27inch and the later Blocktronics Textmode Art Collective, founded in 2008, and that currently releases
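A small demonstration of the escape sequences involved, as a hedged Python sketch: the SGR codes below (ESC [ ... m for colour, ESC [ 1A for cursor movement) are standard ANSI X3.64 sequences, but the block characters are given here as their Unicode equivalents, whereas original ANSI art used the raw code page 437 bytes.

```python
ESC = "\x1b"  # the escape character that introduces each ANSI sequence

# SGR ("Select Graphic Rendition"): 1 = bright, 31 = red foreground,
# 44 = blue background, 0 = reset all attributes.
print(f"{ESC}[1;31;44m ░▒▓ block-character art ▓▒░ {ESC}[0m")

# Cursor control, the basis of ANSImations: move the cursor up one line
# (CUU) and redraw the same line in another colour.
print(f"{ESC}[1A{ESC}[1;33;44m ░▒▓ block-character art ▓▒░ {ESC}[0m")
```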
https://en.wikipedia.org/wiki/Peter%20Chen
Peter Pin-Shan Chen (born 3 January 1947) is a Taiwanese American computer scientist. He is a (retired) distinguished career scientist and faculty member at Carnegie Mellon University and Distinguished Chair Professor Emeritus at LSU. He is known for the development of the entity–relationship model in 1976. Biography Born 1947 in Taichung, Taiwan, Peter Chen received a B.S. in electrical engineering in 1968 at the National Taiwan University, and a Ph.D. in computer science/applied mathematics at Harvard University in 1973. In 1970, he worked one summer at IBM. After graduating from Harvard, he spent one year at Honeywell and a summer at Digital Equipment Corporation. From 1974 to 1978 Chen was an assistant professor at the MIT Sloan School of Management. From 1978 to 1983 he was an associate professor at the University of California, Los Angeles (UCLA Management School). From 1983 to 2011 Chen held the position of M. J. Foster Distinguished Chair Professor of Computer Science at Louisiana State University and, for several years, adjunct professor in its Business School and Medical School (Shreveport). During this time period, he was a visiting professor once at Harvard in '89-'90 and three times at the Massachusetts Institute of Technology (EECS Dept. in '86-'87, Sloan School in '90-'91, and Division of Engineering Systems in '06-'07). From 2010 to 2020, Chen was a Distinguished Career Scientist and faculty member at Carnegie Mellon University, U.S.A. Besides lecturing around the world, he has also served as an (honorary) professor outside of the U.S. In 1984, under the sponsorship of the United Nations, he taught a one-month short course on databases at Huazhong University of Science and Technology in Wuhan, China, and was named an Honorary Professor there. He then went to Beijing as a member of the IEEE delegation to the First International Conference on Computers and Applications (the first major IEEE computer conference held in China). From 2008 to 2014, he was an Honorary Chair Professor in the Institute of Service Science at National Tsing Hua University, Taiwan. Since 2016, he has been an Honorary Chair Professor in the Department of Bioengineering and Bioinformatics, Asia University (Taiwan). Chen has served as an advisor for government agencies and corporations. He was a member of the advisory board of the Computer and Information Science and Engineering Directorate of the National Science Foundation (2004-2006) and of the United States Air Force Scientific Advisory Board (2005-2009). Awards and honors Chen's original paper is one of the most influential papers in the computer software field, based on a survey of more than 1,000 computer science professors documented in the book "Great Papers in Computer Science". Chen's work is also cited in the book Software Challenges, published by Time-Life Books in 1993 in the series on "Understanding Computers". Chen is recognized as one of the pioneers in a book on "Software Pioneers". He is listed in
https://en.wikipedia.org/wiki/Sergey%20Lebedev%20%28scientist%29
Sergey Alekseyevich Lebedev (2 November 1902 – 3 July 1974) was a Soviet scientist in the fields of electrical engineering and computer science, and designer of the first Soviet computers. Biography Lebedev was born in Nizhny Novgorod, Russian Empire. He graduated from the Moscow Higher Technical School in 1928. From then until 1946 he worked at the All-Union Electrotechnical Institute (formerly a division of MSTU) in Moscow and Kyiv. In 1939 he was awarded the degree of Doctor of Sciences for the development of the theory of "artificial stability" of electrical systems. During World War II, Lebedev worked in the field of control automation of complex systems. His group designed a weapon-aiming stabilization system for tanks and an automatic guidance system for airborne missiles. To perform these tasks Lebedev developed an analog computer system to solve ordinary differential equations. From 1946 to 1951 he headed the Kiev Electrotechnical Institute of the Ukrainian Academy of Sciences, working on improving the stability of electrical systems. For this work he received the Stalin (State) Prize in 1950. In 1948 Lebedev learned from foreign magazines that scientists in western countries were working on the design of electronic computers, although the details were secret. In the autumn of the same year he decided to focus the work of his laboratory on computer design. Lebedev's first computer, MESM, was fully completed by the end of 1951. In April 1953 the State commission accepted the BESM-1 as operational, but it did not go into series production because of opposition from the Ministry of Machine and Instrument Building, which had developed its own weaker and less reliable machine. Lebedev then began development of a new, more powerful computer, the M-20, the number denoting its expected processing speed of twenty thousand operations per second. In 1958 the machine was accepted as operational and put into series production. Simultaneously the BESM-2, a development of the BESM-1, went into series production. Though the BESM-2 was slower than the M-20, it was more reliable. It was used to calculate satellite orbits and the trajectory of the first rocket to reach the surface of the Moon. Lebedev and his team developed several more computers, notably the BESM-6, which was in production for 17 years. In 1952, Lebedev became a professor at the Moscow Institute of Physics and Technology. From 1953 until his death he was the director of what is now called the Institute of Precision Mechanics and Computer Engineering. Lebedev died in Moscow and is interred at Novodevichy Cemetery. In 1996 the IEEE Computer Society recognized Sergey Lebedev with a Computer Pioneer Award for his work in the field of computer design and his founding of the Soviet computer industry. See also History of computing hardware List of pioneers in computer science References 1902 births 1974 deaths People from Nizhny Novgorod Academic staff of the Moscow Institute of Physic
https://en.wikipedia.org/wiki/Lebedev%20Institute%20of%20Precision%20Mechanics%20and%20Computer%20Engineering
Lebedev Institute of Precision Mechanics and Computer Engineering (IPMCE) is a Russian research institution. In Soviet times it was an organization of the Soviet Academy of Sciences. The institute specializes in the development of: Computer systems for national security Hardware and software for digital telecommunication Multimedia systems for control and training Positioning and navigational systems In August 2009 IPMCE became a joint-stock company. Computers developed by IPMCE BESM-1 BESM-2 BESM-4 BESM-6 Elbrus-1 Elbrus-2 Elbrus-3 Software developed by IPMCE Эль-76 (El-76) External links IPMCE References Computing in the Soviet Union Institutes of the Russian Academy of Sciences Research institutes in the Soviet Union Computer science institutes Cultural heritage monuments in Moscow
https://en.wikipedia.org/wiki/Redcap%20%28TV%20series%29
Redcap is a British television series produced by ABC Weekend TV and broadcast on the ITV network. It starred John Thaw as Sergeant John Mann, a member of the Special Investigation Branch of the Royal Military Police and ran for two series and 26 episodes between 1964 and 1966. Other actors appearing in the series included Kenneth Colley, Keith Barron, Windsor Davies, David Battley, Allan Cuthbertson and Barry Letts. The series was created by Jack Bell and was written by Roger Marshall, Troy Kennedy-Martin, Jeremy Paul, Robert Holles and Richard Harris, among others. Of the run, 23 of the 26 episodes still exist in their complete form (the missing/incomplete episodes are indicated below). Episodes Season 1 Season 2 References External links 1960s British drama television series 1964 British television series debuts 1966 British television series endings Television shows produced by ABC Weekend TV ITV television dramas British military television series Adjutant General's Corps English-language television shows 1960s British crime television series
https://en.wikipedia.org/wiki/Mind%20Walker
Mind Walker is a video game written by Bill Williams and published by Commodore in 1986 as one of the first games for the new Amiga 1000 computer. The player is immersed inside a human brain and must cure a psychosis that is threatening the patient's well-being. Many aspects of the game (including enemies and power-ups) play on this psychological theme. The four player avatars, for instance, are the human bodybuilder, the water nymph, the mysterious wizard, and the alien spriggan. Gameplay The first stage of the game requires the player to build a path from a crystal to a special square located somewhere in the brain. Various types of platforms rest between the player's starting point and the destination — each type corresponds to one of the avatars. For example, water can only be connected by the nymph, towers by the wizard, and so on. The destination point is often surrounded by "tubes", which block the player's path, as a path cannot be created on a square with a tube. Floating enemies assail the player, who can destroy them with a lightning bolt. However, the player must remain stationary while attacking enemies. The density of enemies increases as the player approaches the destination. Once the destination point is reached, the surrounding tubes lower, allowing the player to venture into the next stage. Stages vary depending on which tube is chosen. The next stage takes place in a 3D first-person perspective (the player is falling down a pit). The player must maneuver so that he falls into one of the green zones and is taken into a deeper region of the brain. In the next stage, the player must guide the avatar through a maze of neurons that pulse with electricity. Touching any of these neurons while electrified instantly kills the player, and various colored enemies also float around the stage (the damage they inflict increases as the game progresses). The player must travel through the maze trying to find a pyramid object (sound can be used as a cue here: it gets faster as the player gets closer). Once the pyramid is obtained, the player must backtrack to their starting point and exit the level (the exit is surrounded with neurons; one must determine the pattern of the electrical pulses and cross while they are inactive). The last stage, a reference to psychoanalysis, asks players to put together a strange, color-cycling puzzle. At the expense of their accumulated points, players can tap Sigmund Freud's pipe and be shown where a piece must be placed, or they can place the pieces in the puzzle themselves. The puzzle consists of 42 spaces (6 x 7), and seven pieces are gained with the completion of each of the six levels. Reception Computer Gaming World described Mind Walker as "the most challenging game currently available", describing it as "a bizarre cross of Adept and Marble Madness". The review praised the game's graphics and sound ("It more fully uses the features of the Amiga than any other game"), and concluded "This game is highly r
https://en.wikipedia.org/wiki/You%20aren%27t%20gonna%20need%20it
"You aren't gonna need it" (YAGNI) is a principle which arose from extreme programming (XP) that states a programmer should not add functionality until deemed necessary. Other forms of the phrase include "You aren't going to need it" (YAGTNI) and "You ain't gonna need it". Ron Jeffries, a co-founder of XP, explained the philosophy: "Always implement things when you actually need them, never when you just foresee that you [will] need them." John Carmack wrote "It is hard for less experienced developers to appreciate how rarely architecting for future requirements / applications turns out net-positive." Context YAGNI is a principle behind the XP practice of "do the simplest thing that could possibly work" (DTSTTCPW). It is meant to be used in combination with several other practices, such as continuous refactoring, continuous automated unit testing, and continuous integration. Used without continuous refactoring, it could lead to disorganized code and massive rework, known as technical debt. YAGNI's dependency on supporting practices is part of the original definition of XP. See also Don't repeat yourself Feature creep If it ain't broke, don't fix it KISS principle Minimum viable product MoSCoW method Muntzing Overengineering Single-responsibility principle SOLID Unix philosophy Worse is better References Software development philosophies Programming principles
https://en.wikipedia.org/wiki/Jade%20Empire
Jade Empire is an action role-playing game developed by BioWare, originally published by Microsoft Game Studios in 2005 as an Xbox exclusive. It was later ported to Microsoft Windows personal computers (PC) and published by 2K in 2007. Later ports to macOS (2008) and mobile platforms (2016) were handled respectively by TransGaming and Aspyr. Set in a world inspired by Chinese mythology, players control the last surviving Spirit Monk on a quest to save their tutor Master Li and defeat the forces of corrupt emperor Sun Hai. The Spirit Monk is guided through a linear narrative, completing quests and engaging in action-based combat. With morality-based dialogue choices during conversations, the player can impact both story and gameplay progression in various ways. Development of Jade Empire began in 2001 as a dream project for company co-founders Ray Muzyka and Greg Zeschuk, who acted as the game's executive producers. Their first original role-playing intellectual property, the game reused the morality system from Star Wars: Knights of the Old Republic, but switched to a real-time combat system. The game's many elements such as its combat system, the world and script, the constructed language created for the game, and the musical score by Jack Wall drew influence from Chinese history, culture and folklore. Upon release, it received generally positive reviews but sold below expectations. It was followed by a PC version, which provided the basis for future ports and itself met with positive reviews. Gameplay Jade Empire is an action role-playing game (RPG) in which players take control of a character most frequently dubbed the "Spirit Monk"; the Spirit Monk has six available pre-set character archetypes with different statistics: these statistics are split into health, magic energy (chi) and Focus, used to slow down time during combat or use weapons. The characters are divided into three male and three female characters, with a fourth male character being available in later versions. Exploration is carried out from a third-person perspective through mainly linear or hub-based environments, where quests can be accepted from non-playable characters (NPCs). Completing quests grants rewards of experience points, in-game currency and occasionally fighting techniques. In addition to standard gameplay, players can engage in a shoot 'em up mini-game with a flying machine, earning items and additional experience. Combat takes place in real-time, with the protagonist and a chosen Follower fighting enemies either individually or in groups. Enemies range in type from normal humans to monsters and spirits. Attacks are divided into normal; heavy attacks, which take longer to execute while dealing higher damage; and area attacks, which damage multiple surrounding enemies. In addition to blocking, the protagonist and enemies can dodge attacks. The protagonist has access to different techniques, which range from purely offensive or guard-breaking techniques to hea
https://en.wikipedia.org/wiki/Jesse%20Hubbard%20and%20Angie%20Baxter
Jesse and Angela "Angie" Hubbard are fictional characters and a supercouple from the ABC and The Online Network daytime drama All My Children. Jesse is portrayed by Darnell Williams and Angie is portrayed by Debbi Morgan. Jesse first appeared in Pine Valley in 1981 as the nephew of Dr. Frank Grant, who assumed custody after the death of his sister (Jesse's mother). Angie first appeared in 1982, as the daughter of a well-to-do Pine Valley couple. Shortly after Angie's first appearance on the show, they were paired with one another. Jesse and Angie were best friends to fellow supercouple Greg Nelson and Jenny Gardner. They are daytime television's first African American supercouple, and arguably the two most popular African American characters in soap opera history. Angie also appeared on Loving and The City. Along with her son Frankie Hubbard and former heiress Skye Chandler, she is one of only three individuals who have been regular characters on three ABC soap operas. Background Casting Actress Debbi Morgan was working in Los Angeles, California, on an episode of Trapper John, M.D. when she saw that a new storyline was being introduced on All My Children, which involved actor Darnell Williams in the role of Jesse Hubbard. Morgan felt that Jesse might need a love interest, and said as much to her agent in New York; her feeling matched that of the show's producers, who were searching for a young woman to fill the role of Angie Baxter. When Morgan read for the part, she won the role that same day. Williams was a regular dancer on Soul Train in the mid-1970s. Later that decade, he was a cast member of the Broadway musical Your Arms Too Short to Box with God, before acquiring the role of Jesse. Writing All My Children creator Agnes Nixon was able to intrigue male and female audiences of all ages by focusing on young adult romances that included not only romance and sex but their issues in growing and learning as individuals. Social issues were also applied. This specific formula caused All My Children's popularity to rise in the 1980s. The Jesse and Angie pairing, as well as fellow supercouple Greg Nelson and Jenny Gardner, were one notable aspect of Nixon's writing that prompted young high school and college students to race home just to view the soap opera. When the actors playing Jenny and Jesse decided to leave, the characters were killed off rather than recast, so that no other actors could portray them. To Nixon, these actors were the characters. Morgan saw Jesse's death as bittersweet. She pointed to Williams leaving the series as one of her toughest moments as part of the cast, but also noted how it provided interesting story. "It really affected me more from a personal standpoint than from an actor's standpoint," she said. "From a personal standpoint, Darnell and I were like hooked at the neck or the back or something; we'd gotten to be such good friends. But from an actor's point of view, it
https://en.wikipedia.org/wiki/Emulation
Emulation may refer to: Emulation (computing), imitation of behavior of a computer or other electronic system with the help of another type of system Video game console emulator, software which emulates video game consoles Gaussian process emulator, a special case of the Gaussian process in statistics Surrogate model, a model which imitates or emulates a more complicated (usually in terms of computer simulation time) model. ASC Emulation, a football club in Martinique Emulation (observational learning), a theory of comparative psychology Emulation Lodge of Improvement, a masonic lodge whose aim is to preserve masonic ritual as closely as is possible to that which was formally accepted Socialist emulation, a form of competition that was practiced in the Soviet Union Whole brain emulation, aiming at mind uploading See also ST Emulous, a British tugboat Semulation, a mix of software simulation and hardware emulation of an electronic system
https://en.wikipedia.org/wiki/IBM%208100%20DPCX
DPCX (Distributed Processing Control eXecutive) was an operating system for the IBM 8100 small computer system. IBM hoped it would help their installed base of IBM 3790 customers migrate to the 8100 and the DPPX operating system. It was mainly deployed to support a word processing system, Distributed Office Support Facility (DOSF), which was derived from the earlier IBM 3730 word processing system. Like DPPX, it was written in the PL/S-like PL/DS language. The applications, including much of DOSF, however, were written in an interpreted language that was "compiled" using the System/370 assembler macro facility. The 8100/DPCX/DOSF system was the first type of distributed system to connect to the IBM Distributed Office Support System (DISOSS) running on a data host. Later versions of DISOSS relied on SNA Distribution System (SNADS) and eventually became peer-to-peer communication of documents which complied with Document Interchange Architecture (DIA) and Document Content Architecture (DCA), as other types of distributed system gained DISOSS support – Scanmaster, Displaywriter, and 5520 Office System. References 8100 DPCX
https://en.wikipedia.org/wiki/IBM%203790
The IBM 3790 Communications System was one of the first distributed computing platforms. The 3790 was developed by IBM's Data Processing Division (DPD) and announced in 1974. It preceded the IBM 8100, announced in 1979. It was designed to be installed in branch offices, stores, subsidiaries, etc., and to be connected to the central host mainframe, using IBM Systems Network Architecture (SNA). Although its successor's role in distributed data processing was said to be "a turning point in the general direction of worldwide computer development," the 3790 was described by Datamation in March 1979 as "less than successful." System description IBM described it as "a programmable, operator oriented terminal system." Components The 3790 supported up to 16 IBM 3277 display stations an integrated floppy disk unit an integrated 120 lines per minute (lpm) line printer up to three 3292 auxiliary control units up to four 3793 keyboard-printers a Synchronous Data Link Control (SDLC) communications interface a 1200 baud internal or external modem The base unit of the 3790 was the IBM 3791 programmable control unit, which was offered as a choice of: the model 1, supporting 8.3MB of disk storage the model 2, with up to 26.9MB. Attached to the 3791 were: The 3792 auxiliary control unit, which had options for attachment of up to two dial-in IBM 2741 communications terminals, up to four 3793 display stations, and a line printer. The 3793 printer-keyboard (up to four). The 3411 model 1, magnetic tape unit and controller (added in 1977) and up to three 3410 tape units attached to the 3411 unit. Host software Function Support Program. Subsystem Support Services. VTAM (with the host running DOS/VS, OS/VS1, or OS/VS2) User Application Support Program. Reception The 3790 failed to achieve the success IBM intended, due to several issues. It had a complex programming language, the 3790 Macro Assembler, and customers found it difficult to deploy applications on it. The Macro Assembler ran only on an IBM mainframe, and the compiled and linked object was then moved to the 3790 for testing. The 3790 was designed as a departmental processor, but the requirement for an IBM mainframe development environment inhibited adoption in its target market of mid-size companies. The result was lackluster interest in the product. In addition, the 3790 was priced higher than minicomputers of comparable processing power. One of the products IBM released to help developers was Program Validation Services (PVS). With PVS, one could test a program in the mainframe environment using scripts. The scripts were cumbersome to create and prone to errors. Since mainframe time was expensive and often difficult to obtain, very few programmers used PVS for anything other than initial testing. The manual for the Macro Assembler was bulky (about 4 inches thick) and difficult to use as a reference. Another programming issue was code design and size; the hardware archite
https://en.wikipedia.org/wiki/IBM%20System%209000
The System 9000 (S9000) is a family of microcomputers from IBM consisting of the System 9001, 9002, and 9003. The first member of the family, the System 9001 laboratory computer, was introduced in May 1982 as the IBM Instruments Computer System Model 9000. It was renamed to the System 9001 in 1984 when the System 9000 family name and the System 9002 multi-user general-purpose business computer were introduced. The last member of the family, the System 9003 industrial computer, was introduced in 1985. No member of the System 9000 family found much commercial success, and the entire family was discontinued on 2 December 1986. The System 9000 was based around the Motorola 68000 microprocessor and the Motorola VERSAbus system bus. All members had the IBM CSOS real-time operating system (OS) stored on read-only memory; and the System 9002 could also run the multi-user Microsoft Xenix OS, which was suitable for business use and supported up to four users. Features There were three versions of the System 9000. The 9001 was the benchtop (lab) model, the 9002 was the desktop model without laboratory-specific features, and the 9003 was a manufacturing and process control version modified to be suitable for factory environments. The System 9002 and 9003 were based on the System 9001, which was built around an 8MHz Motorola 68000 and the Motorola VERSAbus system bus (the System 9000 was one of the few computers that used the VERSAbus). Input/output ports included three RS-232C serial ports, an IEEE-488 instrument port, and a bidirectional 8-bit parallel port. For laboratory data acquisition, analog-to-digital converters that could be attached to its I/O ports were available. User input could be via a user-definable 10-key touch panel on the integrated CRT display, a 57-key user-definable keypad, or an 83-key Model F keyboard. The touch panel and keypad were designed for controlling experiments. All System 9000 members had an IBM real-time operating system called CSOS (Computer System Operating System) on 128KB of read-only memory (ROM). This was a multi-tasking operating system that could be extended by loading components from disk. IBM also offered Microsoft Xenix on the System 9002, but this required at least 640KB of main memory and a VERSAbus card containing a memory management unit. The machines shipped with 128KB of main memory as standard, and up to 5MB could be added to the system using memory boards that plugged into the VERSAbus. Each board could contain up to 1MB, installed in 256KB increments. History The System 9000 was developed by IBM Instruments, Inc., an IBM subsidiary established in 1980 that focused on selling scientific and technical instruments as well as the computer equipment designed to control, log, or process these instruments. It was originally introduced as the IBM Instruments Computer System Model 9000 in May 1982. Its long name led to it being referred to as the Computer System 9000, CS-9000, CS/9000, or CS9000.
https://en.wikipedia.org/wiki/Data%20set%20%28IBM%20mainframe%29
In the context of IBM mainframe computers in the S/360 line, a data set (IBM preferred) or dataset is a computer file having a record organization. Use of this term began with, e.g., DOS/360 and OS/360, and is still used by their successors, including the current z/OS. Documentation for these systems historically preferred this term rather than file. A data set is typically stored on a direct access storage device (DASD) or magnetic tape; however, unit record devices, such as punch card readers, card punches, line printers and page printers, can provide input/output (I/O) for a data set (file). Data sets are not unstructured streams of bytes, but rather are organized in various logical record and block structures determined by the DSORG (data set organization), RECFM (record format), and other parameters. These parameters are specified at the time of the data set allocation (creation), for example with Job Control Language DD statements. Within a running program they are stored in the Data Control Block (DCB) or Access Control Block (ACB), which are data structures used to access data sets using access methods. Records in a data set may be of fixed, variable, or “undefined” length. Data set organization For OS/360, the DCB's DSORG parameter specifies how the data set is organized. It may be CQ Queued Telecommunications Access Method (QTAM) in Message Control Program (MCP) CX Communications line group DA Basic Direct Access Method (BDAM) GS Graphics device for Graphics Access Method (GAM) IS Indexed Sequential Access Method (ISAM) MQ QTAM message queue in application PO Partitioned PS Physical Sequential among others. Data sets on tape may only be DSORG=PS. The choice of organization depends on how the data is to be accessed, and in particular, how it is to be updated. Programmers utilize various access methods (such as QSAM or VSAM) in programs for reading and writing data sets. The access method depends on the given data set organization. Record format (RECFM) Regardless of organization, the physical structure of each record is essentially the same, and is uniform throughout the data set. This is specified in the DCB RECFM parameter. RECFM=F means that the records are of fixed length, specified via the LRECL parameter. RECFM=V specifies a variable-length record. V records when stored on media are prefixed by a Record Descriptor Word (RDW) containing the integer length of the record in bytes and flag bits. With RECFM=FB and RECFM=VB, multiple logical records are grouped together into a single physical block on tape or DASD. FB and VB are fixed-blocked and variable-blocked, respectively. RECFM=U (undefined) is also variable length, but the length of the record is determined by the length of the block rather than by a control field. The BLKSIZE parameter specifies the maximum length of the block. RECFM=FBS could also be specified, meaning fixed-blocked standard, where all the blocks except the last one were required to be of full BLKSIZE length. RECF
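As a concrete illustration of the RDW layout just described, here is a hedged Python sketch that splits one variable-blocked (RECFM=VB) block into its logical records. It assumes the common convention of a 4-byte RDW whose first half is a big-endian length that includes the RDW itself; a real data set would usually also carry a Block Descriptor Word in front, which is omitted here.

```python
import struct

def split_vb_block(block: bytes):
    """Yield the logical records packed into one RECFM=VB block."""
    offset = 0
    while offset < len(block):
        # RDW: 2-byte big-endian record length (RDW included) + 2 flag bytes.
        (length,) = struct.unpack_from(">H", block, offset)
        yield block[offset + 4 : offset + length]
        offset += length

# Build a sample block holding two records of 9 and 7 data bytes
# (RDW length = data length + 4).
block = (struct.pack(">HH", 13, 0) + b"FIRST REC" +
         struct.pack(">HH", 11, 0) + b"SECOND ")
print(list(split_vb_block(block)))  # [b'FIRST REC', b'SECOND ']
```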
https://en.wikipedia.org/wiki/Voluntary%20collective%20licensing
Voluntary collective licensing is an alternative approach to solve the problem of software piracy using file sharing technologies. The idea is to make file sharing networks subscribe-only for a small fee and then distribute the collected money among the artists based on the popularity of their work. It has been endorsed since 2003 by the EFF. Arguably Spotify is an implementation of this idea, although it is not marketed as a file-sharing network and also allows ad-supported free use in addition to paid use. Supporters of this licensing project Downhill Battle Berkman Center for Internet & Society also launched Digital Media Exchange (DMX), a P2P content service, operated as a non-profit cooperative which uses the same concept. See also Threshold pledge system References External links Description and white paper at EFF Website "Building the Tools to Legalize P2P Video-Sharing" by JANKO ROETTGERS, NY Times: May 9, 2009 Copyright licenses File sharing Payment systems
https://en.wikipedia.org/wiki/General%20Computer%20Corporation
General Computer Corporation (GCC), later GCC Technologies, was an American hardware and software company formed in 1981 by Doug Macrae, John Tylko, and Kevin Curran. The company began as a video game developer and created the arcade games Ms. Pac-Man (1982) and Food Fight (1983) as well as designing the hardware for the Atari 7800 console and many of its games. In 1984 the company pivoted to developing home computer peripherals, such as the HyperDrive hard drive for the Macintosh 128K, and printers. GCC was disestablished in 2015. History Video games GCC started out making mod-kits for existing arcade games – for example Super Missile Attack, which was sold as an enhancement board to Atari's Missile Command. At first Atari sued, but ultimately dropped the suit and hired GCC to develop games for Atari (and stop making enhancement boards for Atari's games without permission). They created an enhancement kit for Pac-Man called Crazy Otto which they sold to Midway, who in turn sold it as the sequel Ms. Pac-Man; they also developed Jr. Pac-Man, that game's successor. Under Atari, Inc., GCC made the original arcade games Food Fight, Quantum, and the unreleased Nightmare; developed the Atari 2600 versions of Ms. Pac-Man and Centipede; produced over half of the Atari 5200 cartridges; and developed the chip design for the Atari 7800, plus the first round of cartridges for that system. Peripherals In 1984, the company changed direction to make peripherals for Macintosh computers: the HyperDrive (the Mac's first internal hard drive), the WideWriter 360 large format inkjet printer, and the Personal Laser Printer (the first QuickDraw laser printer). Prior to closing, the company focused exclusively on laser printers. HyperDrive was unusual because the original Macintosh did not have any internal interface for hard disks. It was attached directly to the CPU, and ran about seven times faster than Apple's "Hard Disk 20", an external hard disk that attached to the floppy disk port. The HyperDrive was considered an elite upgrade at the time, though it was hobbled by Apple's Macintosh File System, which had been designed to manage 400K floppy disks; as with other early Macintosh hard disks, the user had to segment the drive such that it appeared to be two or more partitions, called Drawers. The second issue of MacTech Magazine, in January 1985, included a letter that summed up the excitement. In 1986 the company shipped the "HyperDrive 2000", a 20MB internal hard disk that also included a Motorola 68881 floating-point unit, but the speed advantage of the HyperDrive had been negated on the new Macintosh Plus computers by Apple's inclusion of an external SCSI port. General Computer responded with the "HyperDrive FX-20" external SCSI hard disk, but drowned in a sea of competitors that offered fast, large hard disks. General Computer changed its name to GCC Technologies and relocated to Burlington, Massachusetts. They continued to sell laser printers until
https://en.wikipedia.org/wiki/MAC%20times
MAC times are pieces of file system metadata which record when certain events pertaining to a computer file occurred most recently. The events are usually described as "modification" (the data in the file was modified), "access" (some part of the file was read), and "metadata change" (the file's permissions or ownership were modified), although the acronym is derived from the "mtime", "atime", and "ctime" structures maintained by Unix file systems. Windows file systems do not update ctime when a file's metadata is changed, instead using the field to record the time when a file was first created, known as "creation time" or "birth time". Some other systems also record birth times for files, but there is no standard name for this metadata; ZFS, for example, stores birth time in a field called "crtime". MAC times are commonly used in computer forensics. The name Mactime was originally coined by Dan Farmer, who wrote a tool with the same name. Modification time (mtime) A file's modification time describes when the content of the file most recently changed. Because most file systems do not compare data written to a file with what is already there, if a program overwrites part of a file with the same data as previously existed in that location, the modification time will be updated even though the contents did not technically change. Access time (atime) A file's access time identifies when the file was most recently opened for reading. Access times are usually updated even if only a small portion of a large file is examined. A running program can maintain a file as "open" for some time, so the time at which a file was opened may differ from the time data was most recently read from the file. Because some computer configurations are much faster at reading data than at writing it, updating access times after every read operation can be very expensive. Some systems mitigate this cost by storing access times at a coarser granularity than other times; by rounding access times only to the nearest hour or day, a file which is read repeatedly in a short time frame will only need its access time updated once. In Windows, this is addressed by waiting for up to an hour to flush updated access dates to the disk. Some systems also provide options to disable access time updating altogether. In Windows, starting with Vista, file access time updating is disabled by default. Change time and creation time (ctime) Unix and Windows file systems interpret 'ctime' differently: Unix systems maintain the historical interpretation of ctime as being the time when certain file metadata, not its contents, were last changed, such as the file's permissions or owner (e.g. 'This file's metadata was changed on 05/05/02 12:15pm'). Windows systems use ctime to mean 'creation time' (also called 'birth time') (e.g. 'This file was created on 05/05/02 12:15pm'). This difference in usage can lead to incorrect presentation of time metadata when a file created on a Windows syste
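The three timestamps are directly visible from Python's standard library. A short sketch follows (it assumes a file named example.txt exists in the working directory); note that the meaning of st_ctime follows the platform convention described above: metadata-change time on Unix, creation time on Windows, and some systems additionally expose a separate st_birthtime field.

```python
import os
import time

info = os.stat("example.txt")  # assumes this file exists

for name, value in [("mtime (content modified)", info.st_mtime),
                    ("atime (last read)",        info.st_atime),
                    ("ctime (platform-defined)", info.st_ctime)]:
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(value))
    print(f"{name}: {stamp}")
```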
https://en.wikipedia.org/wiki/Freddy%20Pharkas%3A%20Frontier%20Pharmacist
Freddy Pharkas: Frontier Pharmacist is a comic Old West adventure computer game created by Al Lowe (of Leisure Suit Larry fame) and Josh Mandel (of Callahan's Crosstime Saloon fame) and published by Sierra On-Line in 1993. It was dubbed "the Blazing Saddles of computer games" by Computer Gaming World. Gameplay The game uses Sierra's SCI1.1 engine and features 256-color hand-drawn art, scaling sprites, and a point-and-click interface. Freddy Pharkas ran under both DOS and Windows 3.1. It was released in both floppy disk and CD-ROM versions, the latter having full voiceover speech for all characters. The game's manual is entitled The Modern Day Book of Health and Hygiene, a parody of 19th century medical texts. It contains information necessary for solving prescription puzzles. As a form of copy protection, the player must concoct prescriptions for Freddy's patients using recipes found in the user's manual. An incorrect prescription will result in the customer returning angrily, but does not end the game. Plot In the game, the player takes the role of Freddy Pharkas, an 1880s-era pharmacist in the town of Coarsegold, California, which was the location of Sierra's headquarters in 1993. Freddy was once a gunslinger, but sought a new career after his last gunfight, in which "Kenny the Kid" (a reference to the infamous outlaw Billy the Kid) shot off one of his ears. Throughout the town, businesses are either being bought out or their proprietors are being scared out of town. Someone is obviously trying to take over the entire area, but who? And why? The slimy sheriff, Checkum P. Shift, doesn't seem eager to help, so it's up to Freddy to find out the details. The cast includes the town's eccentric old man and story narrator Whittlin' Willy, Srini (Freddy's "Injun" sidekick – actually East Indian), Doc "Dizzy" Gillespie the drunken town doctor, the cafe owner Helen Back, otherwise known as Mom, and her stereotypical Chinese chef Hopalong Singh (a reference to Hop Sing, the cook on Bonanza), the crooked banker Phineas (P.H.) Balance, town schoolmarm (and Freddy's love interest) Penelope Primm, and Madame Ovaree, who runs the local brothel. The villain "Kenny the Kid" is a cartoonish version of Sierra's then-president Ken Williams. Madame Ovaree's name is an obvious parody of Madame Bovary and (as evidenced by her occupation) ovaries. There are also some anachronisms in the game, such as Srini mentioning that he is on Pakistani time, even though Pakistan did not exist at the time the game is set: the region was then still a part of India, and Pakistan did not become a country until 1947, 67 years after the game's setting. Freddy must take part in numerous tasks, such as mixing the right amounts of chemicals with lab equipment to create the requested prescription remedies. He also must deal with various dilemmas taking place in town, such as a "gas leak" (all the town's horses developing explosive flatulence), a snail stampede, a diarrhea epidemic
https://en.wikipedia.org/wiki/Bodenst%C3%A4ndig%202000
Bodenständig 2000 is an electronic music group from Germany, founded in 1995 by Dragan Espenschied and Bernhard Kirsch. They are the self-proclaimed pioneers of the home computer folk music movement. In 1999 they released their debut album "Maxi German Rave Blast Hits 3" on Rephlex Records, London. It contains a mixture of chiptunes, rave, eurodance, and some "serious electronica", plus German vocals, and was completely produced at home with non-professional equipment and self-made software. Until 2002 some minor releases took place, such as remixes, compilation tracks, and home computer diskettes. In June 2003 the EP "Hart rockende Wissenschaftler" was released on Feed The Machine records, Detroit, containing hardcore chiptune dance tracks on one side and folky harmony singing on the other. By invitation of the US subsidiary of the German Goethe-Institut, Bodenständig 2000 performed the first concert of the Version>3 festival in Chicago in 2003. The song 'In Rock 8-Bit' is featured in the Annoying Thing animation by TurboForce3d starring Crazy Frog. The musicians achieved worldwide coverage in July 2006 due to their opting out of the music portal iTunes, which they attributed to fundamental disagreement with the restrictive Digital Rights Management model. Gaining massive momentum in the blogosphere, the story finally made its way into the International Herald Tribune. Being accused of trying to pull off a publicity stunt, the band decided to make the tracks in question available for free download. Discography Hemzärmelig (1998) (Translation: 'Short-sleeved', colloquial) Maxi German Rave Blast Hits 3 (1999) Hart rockende Wissenschaftler (2004) (Translation: 'Hard rocking scientists') UBER ALBUM (2008) External links Bodenständig 2000 homepage YM Rockers: chiptune label releasing Atari diskettes Link to the article in The New York Times about Bodenständig's action against Apple's iTunes music store German electronic music groups Electronica musicians
https://en.wikipedia.org/wiki/Krakout
Krakout is a Breakout clone that was released for the ZX Spectrum, Amstrad CPC, BBC Micro, Commodore 64, Thomson computers and MSX platforms in 1987. One of the wave of enhanced Breakout variants to emerge in the wake of Arkanoid, its key distinctions are that gameplay is horizontal in layout, and that it allows the player to select the acceleration characteristics of the bat before playing. It was written by Andy Green and Rob Toone and published by Gremlin Graphics. The music was composed by Ben Daglish. Reception In 1990, Dragon gave the game 4 out of 5 stars, calling it "one of our favorites, this is Breakout with a different flavor". Reviews Computer Gamer (Jun, 1987) Tilt (May, 1987) Happy Computer (1987) ASM (Aktueller Software Markt) (Mar, 1987) Tilt (Jul, 1987) Computer Gamer (Apr, 1987) Commodore User (Apr, 1987) Your Sinclair (Feb, 1989) Zzap! (Apr, 1987) Crash! (Feb, 1989) Jeux & Stratégie #45 References External links Krakout at Complete BBC Games Archive 1987 video games Breakout clones BBC Micro and Acorn Electron games Amstrad CPC games Commodore 64 games MSX games ZX Spectrum games Video games scored by Ben Daglish Video games developed in the United Kingdom
https://en.wikipedia.org/wiki/Workplace%20OS
Workplace OS is IBM's ultimate operating system prototype of the 1990s. It is the product of an exploratory research program in 1991 which yielded a design called the Grand Unifying Theory of Systems (GUTS), proposing to unify the world's systems as generalized personalities cohabitating concurrently upon a universally sophisticated platform of object-oriented frameworks upon one microkernel. Developed in collaboration with Taligent and its Pink operating system imported from Apple via the AIM alliance, the ambitious Workplace OS was intended to improve software portability and maintenance costs by aggressively recruiting all operating system vendors to convert their products into Workplace OS personalities. In 1995, IBM reported that "Nearly 20 corporations, universities, and research institutes worldwide have licensed the microkernel, laying the foundation for a completely open microkernel standard." At the core of IBM's new unified strategic direction for the entire company, the project was intended also as a bellwether toward PowerPC hardware platforms, to compete with the Wintel duopoly. With protracted development spanning four years and $2 billion (or 0.6% of IBM's revenue for that period), the project suffered development hell characterized by workplace politics, feature creep, and the second-system effect. Many idealistic key assumptions made by IBM architects about software complexity and system performance were never tested until far too late in development, and found to be infeasible. In January 1996, the first and only commercial preview was billed under the OS/2 family with the name "OS/2 Warp Connect (PowerPC Edition)" for limited special order by select IBM customers, as a crippled product. The entire Workplace OS platform was discontinued in March due to very low market demand, including that for enterprise PowerPC hardware. A University of California case study described the Workplace OS project as "one of the most significant operating systems software investments of all time" and "one of the largest operating system failures in modern times". Overview Objective By 1990, IBM acknowledged the software industry to be in a state of perpetual crisis. This was due to the chaos from inordinate complexity of software engineering inherited by its legacy of procedural programming practices since the 1960s. Large software projects were too difficult, fragile, expensive, and time-consuming to create and maintain; they required too many programmers, who were too busy with fixing bugs and adding incremental features to create new applications. Different operating systems were alien to each other, with their own proprietary applications. IBM envisioned "life after maximum entropy" through "operating systems unification at last" and wanted to lay a new worldview for the future of computing. IBM sought a new world view of a unified foundation for computing, based upon the efficient reuse of common work. It wanted to break the traditional
https://en.wikipedia.org/wiki/Mediolanum%20Santonum
Mediolanum Santonum was a Roman town in Gallia Aquitania, now Saintes. It was founded in about 20 BC in connection with an expansion of the network of Roman roads serving Burdigala. The name means 'centre of the Santones', the tribe that then inhabited the area; the town became an important center in the Roman province of Gallia Aquitania. Monuments The principal extant monuments of the Roman period are: a Roman city gate now called the Arch of Germanicus a fairly large Roman lapidary collection a large amphitheatre Gallery References Bibliography 20s BC establishments in the Roman Empire Populated places established in the 1st century BC Roman towns and cities in France Santones Gallia Aquitania Saintes, Charente-Maritime
https://en.wikipedia.org/wiki/MD2%20%28hash%20function%29
The MD2 Message-Digest Algorithm is a cryptographic hash function developed by Ronald Rivest in 1989. The algorithm is optimized for 8-bit computers. MD2 is specified in IETF RFC 1319. The "MD" in MD2 stands for "Message Digest". Even though MD2 is not yet fully compromised, the IETF retired MD2 to "historic" status in 2011, citing "signs of weakness". It is deprecated in favor of SHA-256 and other strong hashing algorithms. Nevertheless, it remained in use in public key infrastructures as part of certificates generated with MD2 and RSA. Description The 128-bit hash value of any message is formed by padding it to a multiple of the block length (128 bits or 16 bytes) and adding a 16-byte checksum to it. For the actual calculation, a 48-byte auxiliary block and a 256-byte S-table are used. The constants were generated by shuffling the integers 0 through 255 using a variant of Durstenfeld's algorithm with a pseudorandom number generator based on decimal digits of π (pi) (see nothing up my sleeve number). The algorithm runs through a loop where it permutes each byte in the auxiliary block 18 times for every 16 input bytes processed. Once all of the blocks of the (lengthened) message have been processed, the first partial block of the auxiliary block becomes the hash value of the message. The S-table values in hex are: { 0x29, 0x2E, 0x43, 0xC9, 0xA2, 0xD8, 0x7C, 0x01, 0x3D, 0x36, 0x54, 0xA1, 0xEC, 0xF0, 0x06, 0x13, 0x62, 0xA7, 0x05, 0xF3, 0xC0, 0xC7, 0x73, 0x8C, 0x98, 0x93, 0x2B, 0xD9, 0xBC, 0x4C, 0x82, 0xCA, 0x1E, 0x9B, 0x57, 0x3C, 0xFD, 0xD4, 0xE0, 0x16, 0x67, 0x42, 0x6F, 0x18, 0x8A, 0x17, 0xE5, 0x12, 0xBE, 0x4E, 0xC4, 0xD6, 0xDA, 0x9E, 0xDE, 0x49, 0xA0, 0xFB, 0xF5, 0x8E, 0xBB, 0x2F, 0xEE, 0x7A, 0xA9, 0x68, 0x79, 0x91, 0x15, 0xB2, 0x07, 0x3F, 0x94, 0xC2, 0x10, 0x89, 0x0B, 0x22, 0x5F, 0x21, 0x80, 0x7F, 0x5D, 0x9A, 0x5A, 0x90, 0x32, 0x27, 0x35, 0x3E, 0xCC, 0xE7, 0xBF, 0xF7, 0x97, 0x03, 0xFF, 0x19, 0x30, 0xB3, 0x48, 0xA5, 0xB5, 0xD1, 0xD7, 0x5E, 0x92, 0x2A, 0xAC, 0x56, 0xAA, 0xC6, 0x4F, 0xB8, 0x38, 0xD2, 0x96, 0xA4, 0x7D, 0xB6, 0x76, 0xFC, 0x6B, 0xE2, 0x9C, 0x74, 0x04, 0xF1, 0x45, 0x9D, 0x70, 0x59, 0x64, 0x71, 0x87, 0x20, 0x86, 0x5B, 0xCF, 0x65, 0xE6, 0x2D, 0xA8, 0x02, 0x1B, 0x60, 0x25, 0xAD, 0xAE, 0xB0, 0xB9, 0xF6, 0x1C, 0x46, 0x61, 0x69, 0x34, 0x40, 0x7E, 0x0F, 0x55, 0x47, 0xA3, 0x23, 0xDD, 0x51, 0xAF, 0x3A, 0xC3, 0x5C, 0xF9, 0xCE, 0xBA, 0xC5, 0xEA, 0x26, 0x2C, 0x53, 0x0D, 0x6E, 0x85, 0x28, 0x84, 0x09, 0xD3, 0xDF, 0xCD, 0xF4, 0x41, 0x81, 0x4D, 0x52, 0x6A, 0xDC, 0x37, 0xC8, 0x6C, 0xC1, 0xAB, 0xFA, 0x24, 0xE1, 0x7B, 0x08, 0x0C, 0xBD, 0xB1, 0x4A, 0x78, 0x88, 0x95, 0x8B, 0xE3, 0x63, 0xE8, 0x6D, 0xE9, 0xCB, 0xD5, 0xFE, 0x3B, 0x00, 0x1D, 0x39, 0xF2, 0xEF, 0xB7, 0x0E, 0x66, 0x58, 0xD0, 0xE4, 0xA6, 0x77, 0x72, 0xF8, 0xEB, 0x75, 0x4B, 0x0A, 0x31, 0x44, 0x50, 0xB4, 0x8F, 0xED, 0x1F, 0x1A, 0xDB, 0x99, 0x8D, 0x33, 0x9F, 0x11, 0x83, 0x14 } MD2 hashes The 128-bit (16-byte) MD2 hashes (also term
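The description above is complete enough to implement. Below is a compact, unoptimized Python sketch following the steps described (and RFC 1319), with the S-table transcribed from the listing above; the final print should reproduce the well-known RFC 1319 test vector for the empty message, 8350e5a3e24c153df2275c9f80692773.

```python
# S-table from the article, packed as one hex string (16 bytes per row).
S = bytes.fromhex(
    "292E43C9A2D87C013D3654A1ECF00613" "62A705F3C0C7738C98932BD9BC4C82CA"
    "1E9B573CFDD4E01667426F188A17E512" "BE4EC4D6DA9EDE49A0FBF58EBB2FEE7A"
    "A968799115B2073F94C210890B225F21" "807F5D9A5A903227353ECCE7BFF79703"
    "FF1930B348A5B5D1D75E922AAC56AAC6" "4FB838D296A47DB676FC6BE29C7404F1"
    "459D705964718720865BCF65E62DA802" "1B6025ADAEB0B9F61C46616934407E0F"
    "5547A323DD51AF3AC35CF9CEBAC5EA26" "2C530D6E85288409D3DFCDF441814D52"
    "6ADC37C86CC1ABFA24E17B080CBDB14A" "7888958BE363E86DE9CBD5FE3B001D39"
    "F2EFB70E6658D0E4A67772F8EB754B0A" "314450B48FED1F1ADB998D339F118314")

def md2(message: bytes) -> bytes:
    # 1. Pad to a multiple of 16 bytes; each pad byte equals the pad length.
    pad = 16 - (len(message) % 16)
    m = message + bytes([pad]) * pad
    # 2. Append the 16-byte checksum.
    checksum, l = bytearray(16), 0
    for i in range(0, len(m), 16):
        for j in range(16):
            checksum[j] ^= S[m[i + j] ^ l]
            l = checksum[j]
    m += bytes(checksum)
    # 3. Compress each block into the 48-byte auxiliary block, 18 rounds each.
    x = bytearray(48)
    for i in range(0, len(m), 16):
        for j in range(16):
            x[16 + j] = m[i + j]
            x[32 + j] = x[16 + j] ^ x[j]
        t = 0
        for r in range(18):
            for k in range(48):
                x[k] ^= S[t]
                t = x[k]
            t = (t + r) % 256
    # The first 16 bytes of the auxiliary block are the digest.
    return bytes(x[:16])

print(md2(b"").hex())  # expected: 8350e5a3e24c153df2275c9f80692773
```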
https://en.wikipedia.org/wiki/Chung%20Kwei%20%28algorithm%29
Chung Kwei is a spam filtering algorithm based on the TEIRESIAS Algorithm for finding coding genes within bulk DNA. It is named after Zhong Kui, a figure in Chinese folklore. See also Spam (electronic) CAN-SPAM Act of 2003 DNSBL SpamAssassin External links Official Report TEIRESIAS: Sequence Pattern Discovery, from IBM Bioinformatics Group DNA technique protects against "evil" emails, from NewScientist.com "DNA analysis" spots e-mail spam, from BBC News Networking algorithms Anti-spam
https://en.wikipedia.org/wiki/Nonlinear%20regression
In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations. General In nonlinear regression, a statistical model of the form \( \mathbf{y} \sim f(\mathbf{x}, \boldsymbol\beta) \) relates a vector of independent variables, \( \mathbf{x} \), and its associated observed dependent variables, \( \mathbf{y} \). The function \( f \) is nonlinear in the components of the vector of parameters \( \boldsymbol\beta \), but otherwise arbitrary. For example, the Michaelis–Menten model for enzyme kinetics has two parameters and one independent variable, related by \( f(x, \boldsymbol\beta) = \frac{\beta_1 x}{\beta_2 + x} \). This function is nonlinear because it cannot be expressed as a linear combination of the two \( \beta \)s. Systematic error may be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an errors-in-variables model, also outside this scope. Other examples of nonlinear functions include exponential functions, logarithmic functions, trigonometric functions, power functions, the Gaussian function, and Lorentz distributions. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution. See Linearization § Transformation, below, for more details. In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression. Usually numerical optimization algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many local minima of the function to be optimized and even the global minimum may produce a biased estimate. In practice, estimated values of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares. For details concerning nonlinear data modeling see least squares and non-linear least squares. Regression statistics The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order Taylor series: \( f(x_i, \boldsymbol\beta) \approx f(x_i, \boldsymbol\beta^0) + \sum_j J_{ij} (\beta_j - \beta_j^0) \), where \( J_{ij} = \partial f(x_i, \boldsymbol\beta) / \partial \beta_j \) are the elements of the Jacobian matrix \( \mathbf{J} \). It follows from this that the least squares estimators are given by \( \hat{\boldsymbol\beta} \approx (\mathbf{J}^\mathsf{T}\mathbf{J})^{-1} \mathbf{J}^\mathsf{T} \mathbf{y} \); compare generalized least squares with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using J in place of X in the formulas. When the function itself is not known analytically, but needs to be linearly approximated from \( n + 1 \), or more, known values (where \( n \) is the number of estimators), the best estimator is obtained directly from the Linear Template Fit (see also linear least squares). The linear approximation introduces bias into the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.
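As a sketch of how such a fit is done in practice, here is a Python example using SciPy's curve_fit (an iterative least-squares routine) on synthetic Michaelis–Menten data; the true parameter values, noise level, and starting guesses are all arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, beta1, beta2):
    # f(x, beta) = beta1 * x / (beta2 + x), as in the example above.
    return beta1 * x / (beta2 + x)

# Synthetic data: true parameters (2.0, 0.5) plus Gaussian noise.
rng = np.random.default_rng(seed=0)
x = np.linspace(0.1, 10.0, 50)
y = michaelis_menten(x, 2.0, 0.5) + rng.normal(0.0, 0.05, x.size)

# Nonlinear fits need starting values; a poor choice can leave the
# optimizer in a local minimum, as the text cautions.
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[1.0, 1.0])
print(popt)  # close to [2.0, 0.5]
```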