| source | text |
|---|---|
https://en.wikipedia.org/wiki/Pro%20TV | PRO TV (often stylized as PRO•TV as of 2017) is a Romanian free-to-air television network, launched on 1 December 1995 as the fourth private TV channel in the country (after TV SOTI, Antena 1, and the now-defunct Tele7ABC). It is owned by CME (Central European Media Enterprises), which is in turn owned by PPF Group.
Since 3 September 1999, the company has also been broadcasting its own signal for the Republic of Moldova, under the PRO TV Chișinău brand. It broadcasts, in addition to PRO TV Bucharest programs (according to its own grid, different from the Romanian one), a series of local news and programs and its own advertising slots throughout the day.
Targeting urban adults aged 21 to 54, PRO TV uses a programming strategy of top international series and movies, as well as a wide variety of local productions including news programming, local entertainment and local fiction.
On 29 August 2014, PRO TV launched its own streaming service, called PRO TV Plus, dedicated to original series. In 2021 it was succeeded by VOYO (Romania), which carries the same series and original shows as the PRO channels, plus other exclusive and original content. PRO TV Plus still exists, although VOYO is now promoted much more heavily. Also, unlike PRO TV Plus, which is free to use, VOYO requires a monthly or annual subscription.
Since 2014, the idents for commercials and promos have become more distinct from those of other stations, focusing on the stars of the station's shows. Summer idents depict typical summer activities, while winter idents depict the winter holidays and other wintertime themes.
Programs
The station's local productions include entertainment shows, news programs and TV series.
Știrile PRO TV
Știrile PRO TV is one of the most popular news programs in Romania, with an average rating of 9.3 points and a 25.1% market share, being watched by over a million urban viewers. According to the 2022 report of the Reuters Institute for the Study of Journalism, 76% of respondents named PRO TV's news as the most trusted. According to various research studies, PRO TV currently has a weekly reach of 63%, with 51% of viewers watching its programs at least three times per week.
Știrile PRO TV won the International Emmy Award for News in September 2008.
Andreea Esca is the longest-standing newscaster in Romania. She began her career over 25 years ago, and has spent 23 years with PRO TV.
PRO TV news programs are broadcast daily, multiple times per day.
Newscasters and celebrities
Amalia Enache
Andra
Andreea Esca
Andi Moisescu
Carmen Tănase
Cătălin Măruță
Cătălin Radu Tănase
Cristian Leonte
Corina Caragea
Ramona Păun
Vadim Vîjeu
Florin Busuioc
Iulia Pârlea
Magda Pălimariu
Daniel Nițoiu
Mihai Dedu
Lavinia Petrea
Andreea Marinescu
Roxana Hulpe
Ovidiu Oanță
Smiley
Pavel Bartoș
Tud |
https://en.wikipedia.org/wiki/TVR%20%28TV%20network%29 | Televiziunea Română, more commonly referred to as TVR, is the short name for Societatea Română de Televiziune ("Romanian Television Society"; SRTV), the Romanian public television broadcaster. It operates nine channels: TVR 1, TVR 2, TVR 3, TVR Cultural, TVR Folclor, TVR Info, TVRi, TVR Moldova and TVR Sport, along with six regional studios in Bucharest, Cluj-Napoca, Iași, Timișoara, Craiova and Târgu Mureș.
TVR 1 has a total national coverage of 99.8%, virtually the entire Romanian population, and TVR 2 has 91% national coverage. All of the other channels and networks broadcast only in major population centers. Even though TVR does not have the largest audience, due to the dominance of the five private TV networks (which consistently get higher ratings in the urban market segment), it offers a wider variety of services, including webcasts and international viewing via TVRi.
As of November 2019, TVR 1 and TVR 2 broadcast in full high-definition.
History
Early years
TVR was established in 1956 in the capital city of Bucharest and made its first broadcasts on New Year's Eve, 31 December, from a small building (a disused cinema studio) at 2 Molière Street. This began a long tradition of an annual New Year special on the channel, which both honors the achievements and events of the past year and marks the anniversary of the beginning of television broadcasting in Romania.
During the Ceaușescu era
Headquarters and a second channel
TVR moved in 1969 to a new building, a purpose-built television center on Dorobanților Avenue. It was designed by the well-known architect Tiberiu Ricci and has since served as the network headquarters, where the main studios and offices are located.
A second channel, TVR 2, was created in 1968, initially known as Programul 2; the main channel accordingly became Programul 1. TVR 2 was suspended in 1985 under the "energy saving program" initiated by Nicolae Ceaușescu (1918–1989), and TVR 1 became simply TVR again, remaining the only television station in Romania until the Romanian Revolution of 1989, which coincided with the fall of communism in the remaining Eastern Bloc countries that same year.
Program policy
From 1966 to 1980, TVR had an open program policy. Many films, serials, cartoons and other programs from the West, such as shows from the United States and Western Europe, were broadcast on the two main channels.
Color broadcasts and schedule changes
In 1983, TVR became the first Romanian channel to broadcast in color. Although the rest of the Eastern Bloc countries adopted the French, Soviet-backed SECAM system, TVR chose to implement the West German PAL system. Plans to introduce color television broadcasting date as far back as 1968, when TVR began trial broadcasts in color. It was, however, deemed too costly at the time to impose color broadcasting, and plans were shelved up to 1983. Even so, by 1990, only some broadcasts were in color and very few people owned a c |
https://en.wikipedia.org/wiki/Romanian%20television | Romanian television may refer to:
Communications media in Romania
Televiziunea Română, TVR, the national television network
List of Romanian-language television channels |
https://en.wikipedia.org/wiki/Antena%201%20%28Romania%29 | Antena 1 () is a Romanian free-to-air television network owned by the Antena TV Group, part of the Intact Media Group. Its programming consists of television news programs, soap opera shows, football matches, entertainment programmes, movies and television series.
Antena 1's headquarters was seized by the Romanian state on 8 August 2014, following a judicial sentence against Dan Voiculescu, the founder of Intact Media Group. The building may be sold so that the state can recover the losses caused by the fraudulent privatization of the Institute for Alimentary Research in 2003. After company employees destroyed the interior of the building while moving out, it required refurbishment before being placed on sale. The National Agency for the Management of Seized Assets (ANABI) has listed the building for sale on its website.
Current Programs
The station's top-rated local productions include entertainment shows, news programs and TV series.
News
Observator
Observator is the channel's daily newscast and one of the most watched newscasts in Romania. It has five daily editions, starting at 6.00 AM, 12.00 PM on weekdays or 1.00 PM on weekends, 5.00 PM, 7.00 PM, and 11.00 PM/11.30 PM. Its flagship daily evening newscast is Observator 19.00, anchored by Alessandra Stoicescu on weekdays and Irina Ursu on weekends.
The daily morning edition of Observator is broadcast at 6.00 AM, hosted by Iuliana Pepene and Bogdan Alecsandru on weekdays and by Mihai Jurca and Andra Petrescu on weekends. Laura Nuredin hosts Observator 13.00. From Monday to Friday, Andreea Țopan, Olivia Păunescu and Valentin Butnaru host Observator 12.00, while Florin Căruceru and Mihaela Călin anchor the 17.00 broadcast. From Monday to Thursday, Observator also has a nightly news edition, hosted by Marius Pancu.
On November 28, 2016, when Antena 1 launched its own HD feed, Observator debuted a new set, with a new studio, logo, opening theme and graphics package. The Observator website, observator.tv, was also relaunched at the end of November 2016. On April 19, 2020, Observator introduced another new graphics package, and its website changed its name and domain to observatornews.ro, also launching a news application.
Original TV Series
Entertainment
Variety shows
Past Programs
2k1, host Mirela Boureanu-Vaida
Aici eu sunt vedeta, host Dan Bittman
Adela, was a Romanian television drama
Acces Direct, hosts Mirela Vaida and Adrian Velea - now aired on Antena Stars
Băieți de oraș, sitcom starring Mihai Bendeac and Vlad Drăgulin
Burlacul, was a Romanian reality television show and the Romanian version of The Bachelor, host Cătălin Botezatu
Burlăcița, was a Romanian reality television show and the Romanian version of The Bachelorette, host Cătălin Botezatu
Câștigi în 60 de secunde, host Dan Negru
Comanda la mine!
Danseaza Print |
https://en.wikipedia.org/wiki/Disk%20formatting | Disk formatting is the process of preparing a data storage device such as a hard disk drive, solid-state drive, floppy disk, memory card or USB flash drive for initial use. In some cases, the formatting operation may also create one or more new file systems. The first part of the formatting process that performs basic medium preparation is often referred to as "low-level formatting". Partitioning is the common term for the second part of the process, dividing the device into several sub-devices and, in some cases, writing information to the device allowing an operating system to be booted from it. The third part of the process, usually termed "high-level formatting" most often refers to the process of generating a new file system. In some operating systems all or parts of these three processes can be combined or repeated at different levels and the term "format" is understood to mean an operation in which a new disk medium is fully prepared to store files. Some formatting utilities allow distinguishing between a quick format, which does not erase all existing data and a long option that does erase all existing data.
As a general rule, formatting a disk by default leaves most if not all existing data on the disk medium; some or most of which might be recoverable with privileged or special tools. Special tools can remove user data by a single overwrite of all files and free space.
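The recovery caveat above can be made concrete. Below is a minimal, illustrative Python sketch (not a real formatting or secure-erase utility; the function name and single-pass policy are our own assumptions) of the single-overwrite idea: rewrite a file's bytes in place before unlinking it.

```python
import os

def overwrite_then_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place, then unlink it.

    Illustrative only: assumes the filesystem rewrites blocks in place.
    SSD wear-leveling, journaling, or copy-on-write storage may still
    preserve old copies of the data elsewhere on the medium.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # replace every byte with zeros
            f.flush()
            os.fsync(f.fileno())      # push the overwrite to the medium
    os.remove(path)
```

A real tool would also have to handle free space, file-system metadata, and slack space, which this sketch deliberately ignores.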
History
A block, a contiguous number of bytes, is the minimum unit of storage that is read from and written to a disk by a disk driver. The earliest disk drives had fixed block sizes (e.g. the IBM 350 disk storage unit of the late 1950s had a block size of 100 six-bit characters), but starting with the 1301, IBM marketed subsystems that featured variable block sizes: a particular track could have blocks of different sizes. The disk subsystems and other direct access storage devices on the IBM System/360 expanded this concept in the form of Count Key Data (CKD) and later Extended Count Key Data (ECKD); however, variable block sizes in HDDs fell out of use in the 1990s; one of the last HDDs to support variable block size was the IBM 3390 Model 9, announced in May 1993.
Modern hard disk drives, such as Serial attached SCSI (SAS) and Serial ATA (SATA) drives, appear at their interfaces as a contiguous set of fixed-size blocks; for many years 512 bytes long but beginning in 2009 and accelerating through 2011, all major hard disk drive manufacturers began releasing hard disk drive platforms using the Advanced Format of 4096 byte logical blocks.
Floppy disks generally only used fixed block sizes but these sizes were a function of the host's OS and its interaction with its controller so that a particular type of media (e.g., 5¼-inch DSDD) would have different block sizes depending upon the host OS and controller.
Optical discs generally only use fixed block sizes.
Disk formatting process
Formatting a disk for use by an operating system and its applications typically |
https://en.wikipedia.org/wiki/Tsuen%20Wan%20line | The Tsuen Wan line () is one of the ten lines of the metro network in Hong Kong's MTR. It is indicated in red on the MTR map.
There are 16 stations on the line. The southern terminus is Central station on Hong Kong Island and the northwestern terminus is Tsuen Wan station in the New Territories. A journey on the entire line takes 35 minutes.
As a cross-harbour route that goes through the heart of Kowloon and densely populated Sham Shui Po and Kwai Chung, the line is very heavily travelled.
History
Construction
The Tsuen Wan line was the second of the three original lines of the MTR network. The initial plan for the line was somewhat different from the current one, especially in the names and the construction characteristics of the New Territories section.
The original plan envisioned a terminus in a valley further west of the present Tsuen Wan station. That Tsuen Wan West station is different from the current Tsuen Wan West station on the Tuen Ma line, which is located under land reclaimed at a much later time. The line was supposed to run underground in Tsuen Wan rather than as currently on the ground level.
The approved route was truncated, terminating at Tsuen Wan station. The construction of the Tsuen Wan Extension project was approved in 1975 and commenced soon afterwards. Testing of the new line began on 1 March 1982.
The extension was formally opened on 10 May 1982 by Sir Philip Haddon-Cave, the acting governor and former chairman of the Mass Transit Railway Provisional Authority. The project was opened seven and a half months ahead of schedule, and cost HK$3.9 billion, under budget compared to the original estimate of HK$4.1 billion.
The new section from Tsuen Wan to Lai King and skipping all intermediate stations to Prince Edward opened on 17 May 1982 and joined the section under Nathan Road in Kowloon that had been in service since 1979 as part of the Kwun Tong line. On opening, Prince Edward was an interchange-only station with no option to enter or exit. It did not become a standard station until the remaining stations on the line in Sham Shui Po District, i.e. Sham Shui Po, Cheung Sha Wan, Lai Chi Kok and Mei Foo, opened a week later.
Several stations differ in name or location from the initial plan. During planning, Kwai Hing was named Kwai Chung, Kwai Fong was Lap Sap Wan (literally "rubbish bay", as the location was close to a now-disused landfill in Gin Drinker's Bay), Lai Wan (now Mei Foo) was Lai Chi Kok, Lai Chi Kok was Cheung Sha Wan, and Cheung Sha Wan was So Uk. These stations were all renamed in English and Chinese before service began.
Upon the opening of the Island line, Chater, Waterloo, and Argyle stations (originally named after the streets crossing or above them: Chater Road, Waterloo Road, and Argyle Street, respectively) were renamed Central, Yau Ma Tei, and Mong Kok, matching the stations' Chinese names. Lai Wan was renamed Mei Foo in both English and Chinese.
Mong Kok station was planned |
https://en.wikipedia.org/wiki/Juris%20Hartmanis | Juris Hartmanis (July 5, 1928 – July 29, 2022) was a Latvian-born American computer scientist and computational theorist who, with Richard E. Stearns, received the 1993 ACM Turing Award "in recognition of their seminal paper which established the foundations for the field of computational complexity theory".
Life and career
Hartmanis was born in Latvia on July 5, 1928. He was a son of Mārtiņš Hartmanis, a general in the Latvian Army, and Irma Marija Hartmane. He was the younger brother of the poet Astrid Ivask. After the Soviet Union occupied Latvia in 1940, Mārtiņš Hartmanis was arrested by the Soviets and died in prison. Later in World War II, his wife and children left Latvia in 1944 as refugees, fearing for their safety if the Soviet Union took over Latvia again.
They first moved to Germany, where Juris Hartmanis received the equivalent of a master's degree in physics from the University of Marburg. He then moved to the United States, where in 1951 he received a master's degree in applied mathematics at the University of Kansas City (now known as the University of Missouri–Kansas City) and in 1955 a Ph.D. in mathematics from Caltech under the supervision of Robert P. Dilworth. The University of Missouri–Kansas City honored him with an Honorary Doctor of Humane Letters in May 1999.
After teaching mathematics at Cornell University and Ohio State University, Hartmanis joined the General Electric Research Laboratory in 1958. While at General Electric, he developed many principles of computational complexity theory. In 1965, he became a professor at Cornell University. He was one of the founders and the first chair of its computer science department (which was one of the first computer science departments in the world).
Hartmanis contributed to national efforts to advance computer science and engineering (CS&E) in many ways. Most significantly, he chaired the National Research Council study that resulted in the 1992 publication Computing the Future – A Broad Agenda for Computer Science and Engineering, which made recommendations based on its priorities to sustain the core effort in CS&E, to broaden the field, and to improve undergrad education in CS&E. He was assistant director of the National Science Foundation (NSF) Directorate of Computer and Information Science and Engineering (CISE) from 1996 to 1998.
In 1989, Hartmanis was elected as a member of the National Academy of Engineering for fundamental contributions to computational complexity theory and to research and education in computing. He was a Fellow of the Association for Computing Machinery and of the American Mathematical Society, as well as a member of the National Academy of Sciences. He was also a foreign member of the Latvian Academy of Sciences, which honored him in 2001 for his contributions to computer science.
Hartmanis died on July 29, 2022.
Computational complexity: foundational contributions
In 1993, Hartmanis and R.E. Stearns received
the highest prize in |
https://en.wikipedia.org/wiki/Computational%20fluid%20dynamics | Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
CFD is applied to a wide range of research and engineering problems in many fields of study and industries, including aerodynamics and aerospace analysis, hypersonics, weather simulation, natural science and environmental engineering, industrial system design and analysis, biological engineering, fluid flows and heat transfer, engine and combustion analysis, and visual effects for film and games.
Background and history
The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define many single-phase (gas or liquid, but not both) fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic), these equations can be linearized to yield the linearized potential equations.
Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil were developed in the 1930s.
One of the earliest types of calculations resembling modern CFD were those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space into cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD and numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book.
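The cell-based finite-difference idea credited to Richardson can be illustrated in a few lines of Python. This is a generic illustration of the technique (not Richardson's actual weather scheme): discretize the 1D diffusion equation u_t = alpha * u_xx on a grid of cells and march it forward with explicit time steps.

```python
def diffuse(u, alpha, dx, dt, steps):
    """Explicit finite-difference update for the 1D diffusion equation
    u_t = alpha * u_xx: central difference in space, forward in time.
    Endpoints are held fixed (Dirichlet boundary conditions)."""
    u = list(u)
    r = alpha * dt / dx**2          # explicit scheme is stable for r <= 0.5
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = nxt
    return u

# A spike of "heat" in the middle cell spreads out and flattens over time.
u0 = [0.0] * 21
u0[10] = 1.0
u = diffuse(u0, alpha=1.0, dx=0.05, dt=0.001, steps=100)
```

The same divide-into-cells-and-step pattern, with far more elaborate equations and stencils, underlies the later three-dimensional methods described below.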
The computer power available paced development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. This group was led by Francis H. Harlow, who is widely considered one of the pionee |
https://en.wikipedia.org/wiki/Ed%2C%20Edd%20n%20Eddy | Ed, Edd n Eddy is an animated television series created by Danny Antonucci for Cartoon Network. The series revolves around three friends named Ed, Edd (called "Double D" to avoid confusion with Ed), and Eddy—collectively known as "the Eds"—who are voiced by Matt Hill, Sam Vincent, and Tony Sampson respectively. They live in a suburban cul-de-sac in the fictional town of Peach Creek along with fellow neighborhood children Kevin, Nazz, Sarah, Jimmy, Rolf, Jonny, and the Eds' female adversaries, the Kanker Sisters, Lee, Marie, and May. Under the unofficial leadership of Eddy, the trio frequently invents schemes to make money from their peers to purchase their favorite confection, jawbreakers. Their plans usually fail, leaving them in various, often humiliating and painful, predicaments.
Adult cartoonist Antonucci was dared to create a children's cartoon; while designing a commercial, he conceived Ed, Edd n Eddy, designing it to resemble classic cartoons from the 1940s–1970s. When pitching the series to Nickelodeon, the network declined to give him creative control, a deal to which Antonucci did not agree. He then pitched the series to Cartoon Network. A deal was made with the network to commission the series under his control, and it premiered on January 4, 1999. During the show's run, several specials and shorts were produced in addition to the regular television series. The series concluded with a television film, Ed, Edd n Eddy's Big Picture Show, on November 8, 2009.
Ed, Edd n Eddy received critical acclaim and became one of Cartoon Network's most successful original series. It won a Reuben Award, two Leo Awards and a SOCAN Award, and was also nominated for another four Leo Awards, an Annie Award and two Kids' Choice Awards. The show attracted an audience of 31 million households, was broadcast in 120 countries, and proved to be popular among children, teenagers, and adults. The series has also included spin-off media such as video games, DVD releases, and a series of books and comic books featuring characters from the series. With a 10-year run, Ed, Edd n Eddy is the longest running standalone Cartoon Network original series. The series also was broadcast on Teletoon in Canada.
Premise
Ed, Edd n Eddy follows the lives of "the Eds", three scheming boys who all share variations of nicknames of the name Edward, but differ greatly in their personalities: Ed (Matt Hill) is the strong and dim-witted yet kind-hearted dogsbody of the group; Edd (Samuel Vincent), called Double D, is an inventor, neat freak, and the most intelligent of the Eds; and Eddy (Tony Sampson) is a devious, quick-tempered, bitter con artist, and self-appointed leader of the Eds. The three devise plans to obtain money from the other kids in their cul-de-sac, which they want to use to buy jawbreakers. However, problems always ensue, and the Eds' schemes usually end in failure and humiliation.
The cul-de-sac kids do not include the Eds as part of their group, making the trio o |
https://en.wikipedia.org/wiki/Orange%20Romania | Orange România is a broadband Internet service provider and mobile provider in Romania. It is Romania's largest GSM network operator and is majority owned by Orange S.A., the biggest initial investor, which gradually increased its ownership; it also uses some of Telekom Romania's infrastructure.
Between 1997 and April 2002, the company was named Mobil Rom and operated under two brand names: Dialog (for monthly subscription plans; Romanian for "dialogue") and Alo (for prepaid services). In April 2002, after France Télécom gained a majority stake, it was rebranded to comply with the group's global strategy. As of December 2012, Orange Romania had 10.3 million mobile subscribers.
Orange is in head-to-head competition with Vodafone Romania for one of the most dynamic mobile telephony markets in south eastern Europe. Currently the mobile penetration is at about 115% (active users only). Orange edged ahead of Vodafone (formerly Connex) in terms of number of subscribers in September 2004. They are the main mobile telephony operators, with Orange having a market share of almost 38% of the total market (active and inactive users).
Orange România also controls 4% of the Moldovan operator Orange Moldova.
Radio frequency summary
The following is a list of known frequencies which Orange uses in Romania:
See also
List of mobile network operators
Communications media in Romania
References
External links
Orange Moldova
Companies based in Bucharest
Mobile phone companies of Romania
Romania |
https://en.wikipedia.org/wiki/NAMD | Nanoscale Molecular Dynamics (NAMD, formerly Not Another Molecular Dynamics Program) is computer software for molecular dynamics simulation, written using the Charm++ parallel programming model (not to be confused with CHARMM). It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). It has been developed by the collaboration of the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois Urbana–Champaign.
It was introduced in 1995 by Nelson et al. as a parallel molecular dynamics code enabling interactive simulation by linking to the visualization code VMD. NAMD has since matured, adding many features and scaling beyond 500,000 processor cores.
NAMD has an interface to quantum chemistry packages ORCA and MOPAC, as well as a scripted interface to many other quantum packages. Together with Visual Molecular Dynamics (VMD) and QwikMD, NAMD's interface provides access to hybrid QM/MM simulations in an integrated, comprehensive, customizable, and easy-to-use suite.
NAMD is available as freeware for non-commercial use by individuals, academic institutions, and corporations for in-house business uses.
See also
Charm++
Comparison of software for molecular mechanics modeling
References
External links
, at TCB website
NAMD page at the PPL website
NAMD on GPUs
Molecular dynamics software
Science software |
https://en.wikipedia.org/wiki/TACACS | Terminal Access Controller Access-Control System (TACACS) refers to a family of related protocols handling remote authentication and related services for network access control through a centralized server. The original TACACS protocol, which dates back to 1984, was used for communicating with an authentication server, common in older UNIX networks including but not limited to the ARPANET, MILNET and BBNNET. It spawned related protocols:
Extended TACACS (XTACACS) is a proprietary extension to TACACS introduced by Cisco Systems in 1990 without backwards compatibility to the original protocol. TACACS and XTACACS both allow a remote access server to communicate with an authentication server in order to determine if the user has access to the network.
TACACS Plus (TACACS+) is a protocol developed by Cisco and released as an open standard beginning in 1993. Although derived from TACACS, TACACS+ is a separate protocol that handles authentication, authorization, and accounting (AAA) services. TACACS+ has largely replaced its predecessors.
History
TACACS was originally developed in 1984 by BBN, later known as BBN Technologies, for administration of ARPANET and MILNET, which ran unclassified network traffic for DARPA at the time and would later evolve into the U.S. Department of Defense's NIPRNet. Originally designed as a means to automate authentication (allowing someone who was already logged into one host in the network to connect to another on the same network without needing to re-authenticate), it was first formally described in Brian Anderson's BBN Tech Memo CC-0045, TAC Access Control System Protocols, and, with a minor TELNET double-login avoidance change, in December 1984 in IETF RFC 927. Cisco Systems began supporting TACACS in its networking products in the late 1980s, eventually adding several extensions to the protocol. In 1990, Cisco's extensions on top of TACACS became a proprietary protocol called Extended TACACS (XTACACS). Although TACACS and XTACACS are not open standards, Craig Finseth of the University of Minnesota, with Cisco's assistance, published a description of the protocols in 1993 as IETF RFC 1492 for informational purposes.
Technical descriptions
TACACS
TACACS is defined in RFC 8907 (older RFC 1492), and uses port 49 (TCP or UDP) by default. TACACS allows a client to accept a username and password and send a query to a TACACS authentication server, sometimes called a TACACS daemon. The daemon determines whether to accept or deny the authentication request and sends a response back. The TIP (the routing node accepting dial-up line connections, into which the user would normally want to log in) would then allow access or not, based upon the response. In this way, the process of making the decision is "opened up" and the algorithms and data used to make the decision are under the complete control of whoever is running the TACACS daemon.
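That last point can be sketched as a toy: the accept/deny logic is purely local policy. The snippet below is an illustration only (not the actual TACACS wire format, and not any real daemon's code; the credential store and function name are hypothetical).

```python
# Hypothetical credential store; in a real deployment this could be a
# password file, a database, or any policy the daemon operator chooses.
USERS = {"alice": "wonderland"}

def tacacs_decision(username: str, password: str) -> str:
    """Return the daemon's answer to a username/password query."""
    return "ACCEPT" if USERS.get(username) == password else "DENY"

# The TIP would then allow or refuse the login based on this response.
```

The point of the protocol is that the client never sees USERS or the comparison logic; it only sees the final ACCEPT or DENY.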
XTACACS
Extended TACACS (XTACACS) extends the TACACS protocol with additional functionality. |
https://en.wikipedia.org/wiki/CDO | CDO may refer to:
Aeronautics
Pronunciation of the zero-lift drag coefficient
Chemistry
Cysteine dioxygenase, an enzyme
CDO, trade name of chlordiazepoxide
CdO, cadmium oxide
Computing
Climate Data Operators, a command line suite for manipulating and analyzing climate data
Collaboration Data Objects, a Microsoft application programming interface for data access
Connected Data Objects, a free implementation of a distributed shared model
Places
Cagayan de Oro, a city on Mindanao Island, Philippines
Canyon del Oro High School, a public school in Oro Valley, Arizona, USA
Cañada del Oro, a primary watershed channel in the valley of Tucson, Arizona, USA
People
Job titles
Chief data officer, an information systems title
Chief Dental Officer (Canada), a Canadian official who advises on oral health
Chief Dental Officer (United Kingdom), a professional advisor for dentistry in each of the 4 UK governments
Chief design officer, a corporate design position
Chief development officer (aka Chief business development officer), a business position
Chief Development Officer (India), a civil servant in the Indian states of Uttar Pradesh and Uttarakhand
Chief digital officer
Chief diversity officer
Command duty officer
Other uses
CDO Foodsphere, a Philippine meat processing company
Central dense overcast, in a tropical storm
Collateralized debt obligation, a structured finance product
Community dial office, a telephone switching system for small communities
Continuous duty overnight, a regional airline crew scheduling term
ISO 639-3 code for the Eastern Min branch of Chinese
Conservative Democratic Organisation, an organization associated with the UK Conservative Party
See also
Chief Dental Officer (disambiguation) |
https://en.wikipedia.org/wiki/Two%27s%20complement | Two's complement is the most common method of representing signed (positive, negative, and zero) integers on computers, and more generally, fixed point binary values. Two's complement uses the binary digit with the greatest place value as the sign to indicate whether the binary number is positive or negative. When the most significant bit is 1, the number is signed as negative; and when the most significant bit is 0 the number is signed as positive.
Unlike the one's complement scheme, the two's complement scheme has only one representation for zero. Furthermore, the same arithmetic implementations can be used on signed as well as unsigned integers, differing only in the integer overflow situations.
Procedure
Two's complement is achieved by:
Step 1: starting with the equivalent positive number.
Step 2: inverting (or flipping) all bits – changing every 0 to 1, and every 1 to 0;
Step 3: adding 1 to the entire inverted number, ignoring any overflow. Accounting for overflow will produce the wrong value for the result.
For example, to calculate the decimal number −6 in binary:
Step 1: +6 in decimal is 0110 in binary; the leftmost significant bit (the first 0) is the sign (just 110 in binary would be -2 in decimal).
Step 2: flip all bits in 0110, giving 1001.
Step 3: add the place value 1 to the flipped number 1001, giving 1010.
To verify that 1010 indeed has a value of −6, add the place values together, but subtract the sign value from the final calculation. Because the most significant value is the sign value, it must be subtracted to produce the correct result: 1010 = −(1×2³) + (0×2²) + (1×2¹) + (0×2⁰) = −8 + 0 + 2 + 0 = −6.
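The three steps and the place-value verification above can be expressed directly in code (a minimal Python sketch, using the 4-bit width of the example):

```python
def twos_complement(value: int, bits: int = 4) -> int:
    """Negate a positive number by the three steps described above."""
    mask = (1 << bits) - 1
    pattern = value              # step 1: start with the positive number
    pattern ^= mask              # step 2: flip every bit
    return (pattern + 1) & mask  # step 3: add 1, ignoring overflow


def place_value(pattern: int, bits: int = 4) -> int:
    """Sum the place values, subtracting the sign bit's contribution."""
    sign = pattern >> (bits - 1)
    return pattern - (sign << bits)


p = twos_complement(6)         # 0110 -> 1001 -> 1010
print(bin(p), place_value(p))  # 0b1010 -6
```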
Theory
Two's complement is an example of a radix complement.
The 'two' in the name refers to the term which, expanded fully in an N-bit system, is actually "two to the power of N": 2^N (the only case where exactly 'two' would be produced by this term is N = 1, i.e. a 1-bit system, but one bit has no capacity for both a sign and a zero), and it is only this full term in respect to which the complement is calculated. As such, the precise definition of the two's complement of an N-bit number is the complement of that number with respect to 2^N.
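The with-respect-to-2^N definition can be checked numerically; the sketch below (illustrative Python, hypothetical names) uses a 3-bit system:

```python
N = 3
modulus = 2 ** N  # the full term 2^N; here 8


def complement_wrt_2n(x: int, bits: int = N) -> int:
    """Complement of x with respect to 2**bits (modular negation)."""
    return (2 ** bits - x) % (2 ** bits)


x = 3                     # 011 in binary
c = complement_wrt_2n(x)  # 5, i.e. 101 in binary
print(x + c == modulus)   # True: the pair sums to 2**N
```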
The defining property of being a complement to a number with respect to 2^N is simply that the summation of this number with the original produces 2^N. For example, using binary with numbers up to three bits (so N = 3 and 2^N = 2^3 = 8 = 1000₂, where '₂' indicates a binary representation), a two's complement for the number 3 (011₂) is 5 (101₂), because summed to the original it gives 2^N: 011₂ + 101₂ = 1000₂. Where this correspondence is employed for representing negative numbers, it effectively means, using an analogy with decimal digits and a number-space only allowing eight non-negative numbers 0 through 7, dividing the number-space in two sets: the first four numbers 0 1 2 3 remain the same, while the remaining four encode negative numbers, maintaining their growing order, so making 4 encode -4, 5 encode -3, 6 encode -2 and 7 encode -1. |
https://en.wikipedia.org/wiki/Moorestown | Moorestown may refer to:
Technology:
Moorestown computing platform by Intel
United States geography:
Moorestown, Indiana
Moorestown, Michigan
Moorestown, New Jersey
Moorestown-Lenola, New Jersey
United States education:
Moorestown Friends School, private Quaker school located at East Main Street and Chester Avenue in Moorestown, New Jersey
Moorestown High School, four-year comprehensive public high school that serves students in ninth through twelfth grades
Moorestown Township Public Schools, comprehensive community public school district
United States court cases
Hornstine v. Moorestown, a 2003 case in U.S. Federal District Court
See also
Moorstown Castle |
https://en.wikipedia.org/wiki/RealVideo | RealVideo, also spelled Real Video, is a suite of proprietary video compression formats developed by RealNetworks; the specific format changes with the version. It was first released in 1997, and the most recent version is RealVideo 10. RealVideo is supported on many platforms, including Windows, Mac, Linux, Solaris, and several mobile phones.
RealVideo is usually paired with RealAudio and packaged in a RealMedia (.rm) container. RealMedia is suitable for use as a streaming media format, that is one which is viewed while it is being sent over the network. Streaming video can be used to watch live television, since it does not require downloading the entire video in advance.
Technology
The first version of RealVideo was announced in 1997 and was based on the H.263 format. At the time, RealNetworks issued a press release saying they had licensed Iterated Systems' ClearVideo technology and were including it as the RealVideo Fractal Codec. However, support for ClearVideo quietly disappeared in the next version of RealVideo.
RealVideo continued to use H.263 until RealVideo 8, when the company switched to a proprietary video format. RealVideo codecs are identified by four-character codes. RV10 and RV20 are the H.263-based codecs. RV30 and RV40 are RealNetworks' proprietary H.264-based codecs. These identifiers have been the source of some confusion, as people may assume that RV10 is RealVideo version 10, when it is actually the first version of RealVideo. RealVideo 10 uses RV40.
RealVideo can be played from a RealMedia file or streamed over the network using the Real Time Streaming Protocol (RTSP), a standard protocol for streaming media developed by the IETF. However, RealNetworks uses RTSP only to set up and manage the connection. The actual video data is sent with their own proprietary Real Data Transport (RDT) protocol. This tactic has drawn criticism because it made it difficult to use RealVideo with other player and server software. However, the open source MPlayer project has now developed software capable of playing the RDT streams.
To facilitate real-time streaming, RealVideo (and RealAudio) normally uses constant bit rate encoding, so that the same amount of data is sent over the network each second. More recently, RealNetworks introduced a variable bit rate form called RealMedia Variable Bitrate (RMVB). This allows for better video quality; however, this format is less suited for streaming because it is difficult to predict how much network capacity a certain video stream will need. Video with fast motion or rapidly changing scenes will require a higher bit rate. If the bit rate of a video stream increases significantly, it may exceed the speed at which data can be transmitted over the network, leading to an interruption in the video.
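The capacity problem can be illustrated with made-up per-second bit rates (hypothetical numbers, in kbit/s): a VBR stream whose average fits the link may still exceed it during a fast-motion scene, while a CBR stream never does.

```python
# Hypothetical per-second bit rates (kbit/s) for a VBR-encoded clip;
# the middle seconds represent a fast-motion scene.
vbr = [300, 320, 310, 700, 680, 330]
link_capacity = 500  # kbit/s the network can sustain

average = sum(vbr) / len(vbr)
peak = max(vbr)

print(f"average {average:.0f} kbit/s fits link: {average <= link_capacity}")
print(f"peak {peak} kbit/s fits link: {peak <= link_capacity}")
```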
RealNetworks says that the RealVideo and RealAudio codecs are not available in source code under the RPSL license. Source code is available only under RCSL license for commercial porting to non-supported processors and op |
https://en.wikipedia.org/wiki/Henri%20Gouraud%20%28computer%20scientist%29 | Henri Gouraud (born 1944) is a French computer scientist. He is the inventor of Gouraud shading used in computer graphics. He is the great-nephew of general Henri Gouraud.
During 1964–1967, he studied at École Centrale Paris. He received his Ph.D. from the University of Utah College of Engineering in 1971, working with Dave Evans and Ivan Sutherland, with his dissertation titled Computer Display of Curved Surfaces.
In 1971, Gouraud made the first computer graphics geometry capture and representation of a human face in wire-frame model, and applied his shader to produce the famous human face images showing the effect of his shading, which were done using his wife Sylvie Gouraud as the model.
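The technique shown in those images can be sketched as follows (an illustrative reconstruction in Python, not Gouraud's original code): intensity is computed once per vertex from a simple Lambertian model, then linearly interpolated across the face.

```python
def vertex_intensity(normal, light):
    """Lambertian intensity at a vertex: clamp(N·L, 0, 1); unit vectors assumed."""
    dot = sum(n * l for n, l in zip(normal, light))
    return max(0.0, min(1.0, dot))


def gouraud_shade(intensities, weights):
    """Interpolate vertex intensities with barycentric weights (summing to 1)."""
    return sum(i * w for i, w in zip(intensities, weights))


light = (0.0, 0.0, 1.0)  # light shining along +z
normals = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.0, 0.6, 0.8)]
iv = [vertex_intensity(n, light) for n in normals]  # [1.0, 0.8, 0.8]
centre = gouraud_shade(iv, (1 / 3, 1 / 3, 1 / 3))   # smooth value inside the face
print(round(centre, 3))  # 0.867
```

Because lighting is evaluated only at the vertices, the interior of each polygon is shaded by cheap linear interpolation, which is what made the method practical on 1970s hardware.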
Original publications
H. Gouraud, "Continuous shading of curved surfaces," IEEE Transactions on Computers, C-20(6):623–629, 1971.
H. Gouraud, Computer Display of Curved Surfaces, Doctoral Thesis, University of Utah, United States, 1971.
H. Gouraud, Continuous shading of curved surfaces. In Rosalee Wolfe (editor), Seminal Graphics: Pioneering efforts that shaped the field, ACM Press, 1998.
References
1944 births
École Centrale Paris alumni
Computer graphics professionals
Computer graphics researchers
French computer scientists
Living people
University of Utah alumni |
https://en.wikipedia.org/wiki/Richard%20E.%20Stearns | Richard Edwin Stearns (born July 5, 1936) is an American computer scientist who, with Juris Hartmanis, received the 1993 ACM Turing Award "in recognition of their seminal paper which established the foundations for the field of computational complexity theory". In 1994 he was inducted as a Fellow of the Association for Computing Machinery.
Stearns graduated with a B.A. in mathematics from Carleton College in 1958. He then received his Ph.D. in mathematics from Princeton University in 1961 after completing a doctoral dissertation, titled Three person cooperative games without side payments, under the supervision of Harold W. Kuhn. Stearns is now Distinguished Professor Emeritus of Computer Science at the University at Albany, which is part of the State University of New York.
Bibliography
A first systematic study of language operations that preserve regular languages.
Contains the time hierarchy theorem, one of the theorems that shaped the field of computational complexity theory.
Answers a basic question about deterministic pushdown automata: it is decidable whether a given deterministic pushdown automaton accepts a regular language.
Introduces LL parsers, which play an important role in compiler design.
References
External links
1936 births
American computer scientists
Fellows of the Association for Computing Machinery
Living people
Turing Award laureates
University at Albany, SUNY faculty
People from Caldwell, New Jersey
Princeton University alumni |
https://en.wikipedia.org/wiki/Edward%20Feigenbaum | Edward Albert Feigenbaum (born January 20, 1936) is a computer scientist working in the field of artificial intelligence, and joint winner of the 1994 ACM Turing Award. He is often called the "father of expert systems."
Education and early life
Feigenbaum was born in Weehawken, New Jersey in 1936 to a culturally Jewish family, and moved to nearby North Bergen, where he lived until the age of 16, when he left to start college. His hometown did not have a secondary school of its own, and so he chose Weehawken High School for its college preparatory program. He was inducted into his high school's hall of fame in 1996.
Feigenbaum completed his undergraduate degree (1956), and a Ph.D. (1960), at Carnegie Institute of Technology (now Carnegie Mellon University). In his PhD thesis, carried out under the supervision of Herbert A. Simon, he developed EPAM, one of the first computer models of how people learn.
Career and research
Feigenbaum completed a Fulbright Fellowship at the National Physical Laboratory (United Kingdom) and in 1960 went to the University of California, Berkeley, to teach in the School of Business Administration. He joined the Stanford University faculty in 1965 as one of the founders of its computer science department. He was the director of the Stanford Computation Center from 1965 to 1968. He established the Knowledge Systems Laboratory at Stanford University. Important projects that Feigenbaum was involved in include systems in medicine such as ACME, MYCIN, SUMEX, and Dendral. He also co-founded the companies IntelliCorp and Teknowledge.
Since 2000, Feigenbaum has been Professor Emeritus of Computer Science at Stanford University. His former doctoral students include Peter Karp and Alon Halevy.
Honors and awards
1984: Selected as one of the initial fellows of the American College of Medical Informatics (ACMI)
1986: Elected a member of the National Academy of Engineering for pioneering contributions to knowledge engineering and expert systems technology, and for leadership in education and technology of applied artificial intelligence.
1994: Turing Award jointly with Raj Reddy for "pioneering the design and construction of large scale artificial intelligence systems, demonstrating the practical importance and potential commercial impact of artificial intelligence technology".
1997: U.S. Air Force Exceptional Civilian Service Award
2007: Inducted as fellow of the Association for Computing Machinery (ACM)
2011: IEEE Intelligent Systems AI's Hall of Fame for "significant contributions to the field of AI and intelligent systems".
2012: Made fellow of the Computer History Museum "for his pioneering work in artificial intelligence and expert systems."
2013: IEEE Computer Society Computer Pioneer Award for "pioneering work in Artificial Intelligence, including development of the basic principles and methods of knowledge-based systems and their practical applications".
Works
References
1936 births
Living people
Artifici |
https://en.wikipedia.org/wiki/Descent%3A%20FreeSpace%20%E2%80%93%20The%20Great%20War | Descent: FreeSpace – The Great War, known as Conflict: FreeSpace – The Great War in Europe, is a 1998 space combat simulation IBM PC compatible computer game developed by Volition, when it was split off from Parallax Software, and published by Interplay Productions. In 2001, it was ported to the Amiga platform as FreeSpace: The Great War by Hyperion Entertainment. The game places players in the role of a human pilot, who operates in several classes of starfighter and combats against opposing forces, either human or alien, in various space-faring environments, such as in orbit above a planet or within an asteroid belt. The story of the game's single player campaign focuses on a war in the 24th century between two factions, one human and the other alien, that is interrupted in its fourteenth year by the arrival of an enigmatic and militant alien race, whose genocidal advance forces the two sides into a ceasefire in order to work together to halt the threat.
Descent: FreeSpace was well-received as a single-player space simulation that integrated all the desired features of its genre, from competent AI wingmen to the presence of large capital ships that dwarf the fighters piloted by the player and explode spectacularly when destroyed. The game's multiplayer mode was criticised, as it was plagued by lag and inaccurate tracking of statistics. An expansion for the game, which was less well-received, was also released in 1998 under the title of Silent Threat, and focuses on events after the main game's campaign, with the player working for an intelligence branch of the Terrans' armed forces that later attempts to overthrow the Terran government. A sequel to Descent: FreeSpace, entitled FreeSpace 2, was released in 1999 to critical acclaim.
Gameplay
Descent: FreeSpace features two modes of play: a single-player campaign and multiplayer matches. The game's main menu is designed around the interior of a ship's quarterdeck, with various elements (mostly doors) leading to different options, such as starting a new game, configuring the game, reviewing the craft featured in the game and various story elements, and replaying completed single-player missions. In both modes, the player controls their craft and other commands through either a joystick or a keyboard (either on its own or with a mouse), and primarily views the game's environments from the first-person perspective of a cockpit within a starfighter. While the game features additional third-person camera viewpoints, the game's interface - the head-up display (HUD) - can only be viewed from the primary viewpoint, and can be customised with different colours. Because of the flexibility in the control scheme, some have categorised the game as a flight simulator, since it has more controls and commands than a typical arcade game, yet its flight model is simple and incorporates some elements of Newtonian physics, such as precise collision physics.
When conducting a match or a single-player mission |
https://en.wikipedia.org/wiki/Datapoint%202200 | The Datapoint 2200 was a mass-produced programmable terminal usable as a computer, designed by Computer Terminal Corporation (CTC) founders Phil Ray and Gus Roche and announced by CTC in June 1970 (with units shipping in 1971). It was initially presented by CTC as a versatile and cost-efficient terminal for connecting to a wide variety of mainframes by loading various terminal emulations from tape, rather than being hardwired as most contemporary terminals were, including their earlier Datapoint 3300. However, Dave Gust, a CTC salesman, realized that the 2200 could meet Pillsbury Foods's need for a small computer in the field, after which the 2200 was marketed as a stand-alone computer. Its industrial designer John "Jack" Frassanito later claimed that Ray and Roche always intended the Datapoint 2200 to be a full-blown personal computer, but that they chose to keep quiet about this so as not to concern investors and others. Also significant is the fact that the instruction set of the terminal's multi-chip CPU became the basis of the Intel 8008 instruction set, which inspired the Intel 8080 instruction set and the x86 instruction set used in the processors for the original IBM PC and its descendants.
Technical description
The Datapoint 2200 had a built-in full-travel keyboard, a built-in 12-line, 80-column green screen monitor, and two 47 character-per-inch cassette tape drives, each with 130 KB capacity. Its size and shape—a box with protruding keyboard—approximated that of an IBM Selectric typewriter. Initially, a Diablo 2.5 MB 2315-type removable cartridge hard disk drive was available, along with modems, several types of serial interface, parallel interface, printers and a punched card reader. Later, an 8-inch floppy disk drive was also made available, along with other, larger hard disk drives. An industry-compatible 7/9-track (user selectable) magnetic tape drive was available by 1975. In late 1977, Datapoint introduced ARCNET local area networking. The original Type 1 2200 shipped with 2 KiB of serial shift register main memory, expandable to 8 KiB. The Type 2 2200 used denser 1 kbit RAM chips, giving it a default 4 KiB of memory, expandable to 16 KiB. Its starting price was around US$5,000, and a full 16 KiB Type 2 2200 had a list price of just over $14,000.
The 8-bit processor architecture that CTC designed for the Datapoint 2200 was implemented in four distinct ways, all with nearly identical instruction sets, but very different internal microarchitectures: CTC's original design that communicated data serially, CTC's parallel design, the Texas Instruments TMC 1795, and the Intel 8008.
Datapoint 2200 Version II (CTC's parallel design) was much faster than the TMC 1795, which was slightly faster than the original serial design of the Datapoint 2200, which in turn was considerably faster than the 8008.
The 2200 models were succeeded by the 5500, 1100, 6600, 3800/1800, 8800, etc.
The fact that most laptops and clo |
https://en.wikipedia.org/wiki/Raj%20Reddy | Dabbala Rajagopal "Raj" Reddy (born 13 June 1937) is an Indian-born American computer scientist and a winner of the Turing Award. He is one of the early pioneers of artificial intelligence and has served on the faculty of Stanford and Carnegie Mellon for over 50 years. He was the founding director of the Robotics Institute at Carnegie Mellon University. He was instrumental in helping to create Rajiv Gandhi University of Knowledge Technologies in India, to cater to the educational needs of low-income, gifted, rural youth. He is the chairman of International Institute of Information Technology, Hyderabad. In 1994 he became the first person of Asian origin to receive the Turing Award, often called the Nobel Prize of computer science, for his work in the field of artificial intelligence.
Early life and education
Raj Reddy was born in a Telugu family in Katur village of Chittoor district of present-day Andhra Pradesh, India. His father, Sreenivasulu Reddy, was a farmer, and his mother, Pitchamma, was a homemaker. He was the first member of his family to attend college.
He received his bachelor's degree in civil engineering from College of Engineering, Guindy, then affiliated to the University of Madras (now to Anna University, Chennai), India, in 1958, and a MEng degree from the University of New South Wales, Australia, in 1960. He received his PhD degree in Computer Science from Stanford University in 1966.
Career
Reddy is the University Professor of Computer Science and Robotics and Moza Bint Nasser Chair at the School of Computer Science at Carnegie Mellon University. From 1960, he worked for IBM in Australia. He was an Assistant Professor of Computer Science at Stanford University from 1966 to 1969. He joined the Carnegie Mellon faculty as an associate professor of Computer Science in 1969. He became a full professor in 1973 and a university professor in 1984.
He was the founding director of the Robotics Institute from 1979 to 1991 and the Dean of the School of Computer Science from 1991 to 1999. As dean of SCS, he helped create the Language Technologies Institute, Human-Computer Interaction Institute, Center for Automated Learning and Discovery (since renamed the Machine Learning Department), and the Institute for Software Research. He is the chairman of the Governing Council of IIIT Hyderabad. He was the founding Chancellor (2008-2019) of Rajiv Gandhi University of Knowledge Technologies (RGUKT).
Reddy was a co-chair of the President's Information Technology Advisory Committee (PITAC) from 1999 to 2001. He was one of the founders of the American Association for Artificial Intelligence and was its president from 1987 to 1989. He served on the International board of governors of Peres Center for Peace in Israel. He served as a member of the governing councils of EMRI and HMRI which use technology-enabled solutions to provide cost-effective health care coverage to rural population in India.
AI Research
Reddy's early research was conducted at the A |
https://en.wikipedia.org/wiki/Rochambeau | Rochambeau or Ro-Sham-Bo may refer to:
Arts and media
"Roshambo", a song by The Network
Another name for the game of rock–paper–scissors
A game similar to "sack tapping" played by characters on the animated TV show South Park
A 1992 album by the band Farside
Ro Sham Bo (album), 1994 album by The Grays
People
Jean-Baptiste Donatien de Vimeur, comte de Rochambeau (1725–1807), French nobleman and soldier who participated in the American Revolutionary War
Donatien-Marie-Joseph de Vimeur, vicomte de Rochambeau (1755–1813), French soldier, the son of Jean-Baptiste Donatien de Vimeur, comte de Rochambeau
Places
Cayenne – Rochambeau Airport in South America
Rochambeau, a building in Washington D.C. designed by Thomas Franklin Schneider
Rochambeau Middle School in Connecticut
Rochambeau Monument, a statue in Newport, Rhode Island
Rochambeau French International School, a private French international school in Maryland
Rochambeau Library-Providence Community Library, a historic public library in Providence, Rhode Island
Rochambeau Worsted Company Mill, a historic textile mill in Providence, Rhode Island
Vessels
French ironclad Rochambeau
SS Rochambeau, a French Transatlantic ocean liner
USS Rochambeau (AP-63), American ship |
https://en.wikipedia.org/wiki/PLT | PLT may stand for:
Patent Law Treaty
Plantronics, stock symbol
Power line communication or power line telecommunications
Princeton Large Torus, a nuclear fusion reactor
Programming language theory, in computer science
PLT Scheme, a programming language |
https://en.wikipedia.org/wiki/ETA%20Systems | ETA Systems was a supercomputer company spun off from Control Data Corporation (CDC) in the early 1980s in order to regain a footing in the supercomputer business. They successfully delivered the ETA-10, but lost money continually while doing so. CDC management eventually gave up and folded the company.
Historical development
Seymour Cray left CDC in the early 1970s when they refused to continue funding of his CDC 8600 project. Instead they continued with the CDC STAR-100 while Cray went off to build the Cray-1. Cray's machine was much faster than the STAR, and soon CDC found itself pushed out of the supercomputing market.
William Norris was convinced the only way to regain a foothold would be to spin off a division that would be free from management prodding. In order to regain some of the small-team flexibility that seemed essential to progress in the field, ETA was created in 1983 with the mandate to build a 10 GFLOPS machine by 1986.
In April 1989 CDC decided to shut down the ETA operation and keep a bare-bones continuation effort alive at CDC. At shutdown, 7 liquid-cooled and 27 air-cooled machines had been sold. At this point ETA had the best price/performance ratio of any supercomputer on the market, and its initial software problems appeared to be finally sorted out. Nevertheless, shortly thereafter CDC exited the supercomputer market entirely, giving away remaining ETA machines free to high schools through the SuperQuest computer science competition.
Products
ETA had only one product, the ETA-10. It was a derivative of the CDC Cyber 205 supercomputer, and deliberately kept compatibility with it. Like the Cyber 205, the ETA-10 did not use vector registers as in the Cray machines, but instead used pipelined memory operations to a high-bandwidth main memory. The basic layout was a shared-memory multiprocessor with up to 8 CPUs, each capable of 4 double-precision or 8 single-precision operations per clock cycle, and up to 18 I/O processors.
The main reason for the ETA-10's speed was the use of liquid nitrogen (LN2) cooling in some models to cool the CPUs. Even though it was based on then-current CMOS technologies, the low temperature allowed the CPUs to operate with a ~7 ns cycle time, so a fully loaded ETA-10 was capable of about 9.1 GFLOPS. The design goal had been 10 GFLOPS, so the design was technically a failure. Two LN2-cooled models were designated ETA-10E and ETA-10G. Two slower, lower-cost air-cooled versions, the ETA-10Q and ETA-10P (code named "Piper") were also marketed.
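The quoted peak is consistent with the figures above (assuming the 9.1 GFLOPS number counts single-precision operations across all eight CPUs):

```python
cycle_time = 7e-9          # ~7 ns per clock under LN2 cooling
clock_hz = 1 / cycle_time  # ~143 MHz
cpus = 8                   # fully loaded shared-memory configuration
sp_ops_per_cycle = 8       # single-precision operations per CPU per clock

peak_gflops = cpus * sp_ops_per_cycle * clock_hz / 1e9
print(f"{peak_gflops:.1f} GFLOPS")  # ~9.1, short of the 10 GFLOPS goal
```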
The planned successor to the ETA-10 was the 30 GFLOPS ETA-30.
Software
Software for the ETA-10 line was initially regarded as a disaster. When CDC and ETA first designed the ETA architecture, they made the conscious decision not to merely port the CDC VSOS operating system from the existing CDC Cyber 205. It was felt by both the vendor and the existing customer base (who wrongly believed that their vendor knew best) that a new OS needed to be written t |
https://en.wikipedia.org/wiki/ETA10 | The ETA10 is a vector supercomputer designed, manufactured, and marketed by ETA Systems, a spin-off division of Control Data Corporation (CDC). The ETA10 was an evolution of the CDC Cyber 205, which can trace its origins back to the CDC STAR-100, one of the first vector supercomputers to be developed.
CDC announced it was creating ETA Systems, and a successor to the Cyber 205, on 18 April 1983 at the Frontiers of Supercomputing conference, held at the Los Alamos National Laboratory. It was then referred to tentatively as the Cyber 2XX, and later as the GF-10, before it was named the ETA10. Prototypes were operational in mid-1986, and the first delivery was made in December 1986. The supercomputer was formally announced in April 1987 at an event held at its first customer installation, the Florida State University, Tallahassee's Scientific Computational Research Institute. On 17 April 1989, CDC abruptly closed ETA Systems due to ongoing financial losses, and discontinued production of the ETA10. Many of its users, such as Florida State University, negotiated Cray hardware in exchange.
Historical development
CDC had a strong history of creating powerful supercomputers, starting with the CDC 6600. One of the most famous computer architects to emerge from CDC was Seymour Cray. After a disagreement with CDC management regarding the development of the CDC 8600, he went on to form his own supercomputer company, Cray Research. Meanwhile, work continued at CDC in developing a high-end supercomputer, the CDC STAR-100—led by another famous architect, Neil Lincoln. Cray Research's Cray-1 vector supercomputer was successful, beating CDC's STAR-100. CDC responded with derivatives of the STAR, the Cyber 203 and 205. The Cyber 205 was moderately successful against the Cray-1's successor, the Cray X-MP. It became apparent to CDC's top management that it needed to decrease the development time for the next generation computer—thus a new approach was considered for the follow-on to the Cyber 205.
After spinning off from CDC in September 1983, ETA set a goal of producing a supercomputer with a cycle time less than 10ns. To accomplish this, several innovations were made. Among these was the use of liquid nitrogen for cooling the CMOS-based CPUs.
The ETA10 successfully met the company's initial goal of a sub-10 ns cycle time, with some models achieving about 7 ns (143 MHz) - considered rapid by mid-1980s standards. They delivered seven liquid nitrogen-cooled versions and 27 smaller, air-cooled versions. The CMOS circuits produced only a fraction of the heat of previous ICs. The planned 1987 follow-on was supposed to be designated Cyber 250 or ETA30, as in 30 GFLOPS. ETA was eventually reincorporated back into CDC, ceasing operations on April 17, 1989.
Operating systems and applications
The ETA10 series could run either ETA's EOS operating system, which was widely criticized for various problems, or a port by Lachman Associates, a software personnel firm, of UN |
https://en.wikipedia.org/wiki/Claris | Claris International Inc., formerly FileMaker Inc., is a computer software development company formed as a subsidiary company of Apple Computer (now Apple Inc.) in 1987. It was given the source code and copyrights to several programs that were owned by Apple, notably MacWrite and MacPaint, in order to separate Apple's application software activities from its hardware and operating systems activities.
In 1998, the company divested itself of all but its flagship product, and reformed as FileMaker Inc. In 2019, FileMaker Inc. announced at DevCon that it was restoring the Claris brand name. Also in 2019, Claris acquired the Italian startup Stamplay, a cloud-based integration platform which connects web services like Dropbox and Slack without writing code, and announced the offering would be renamed Claris Connect.
The company develops, supports and markets the relational database program FileMaker. The FileMaker Platform is available for the macOS, Microsoft Windows and iOS operating systems and is aimed towards business users and power users.
History
Creation
During the early days of the Macintosh computer, Apple shipped the machines with two basic programs, MacWrite and MacPaint, so that users would have a working machine "out of the box". However, this resulted in complaints from third-party developers, who felt that these programs were good enough for so many users that there was little reason to buy something better.
Apple decided to allow the programs to "wither" so that the third-party developers would have time to write suitable replacements. The developers did not seem to hold up their end of the bargain, and it was some time before truly capable replacements like WriteNow came along. In the meantime users complained about the lack of upgrades, while the third-party developers continued to complain about the possibility of upgrades.
Eventually Apple decided the only solution was to spin off the products to a third party of its own creation, forming Claris in 1987. Claris was also given the rights to several lesser-known Apple products such as MacProject, MacDraw, and the hit Apple II product AppleWorks. Claris' second corporate headquarters (nicknamed "The Wedge") was in Santa Clara, about six miles from the main Apple campus.
At first Claris provided only trivial upgrades, limited to making the products continue to run on newer versions of the Macintosh operating system. In 1988, Claris purchased FileMaker from Nashoba Systems and quickly released a rebranded version called FileMaker II, to conform to its naming scheme for other products, such as MacWrite II. The product, however, changed little from the last Nashoba version. Several minor versions followed; it was succeeded by FileMaker Pro 1.0 in 1990. In the meantime, development began on major overhauls of their entire product line, including FileMaker. Each of these would be eventually released as part of the Pro series of products.
In 1990, Apple decided that Cl |
https://en.wikipedia.org/wiki/Presentation%20program | In computing, a presentation program (also called presentation software) is a software package used to display information in the form of a slide show. It has three major functions:
an editor that allows text to be inserted and formatted
a method for inserting and manipulating graphic images and media clips
a slide-show system to display the content
Presentation software can be viewed as enabling a functionally-specific category of electronic media, with its own distinct culture and practices as compared to traditional presentation media (such as blackboards, whiteboards and flip charts).
Presentations in this mode of delivery have become pervasive in many aspects of business communication, especially in business planning, as well as in academic-conference and professional conference settings, and in the knowledge economy generally, where ideas are a primary work output. Presentations may also feature prominently in political settings, especially in workplace politics, where persuasion is a central determinant of group outcomes.
Most modern meeting-rooms and conference halls are configured to include presentation electronics, such as projectors suitable for displaying presentation slides, often driven by the presenter's own laptop, under direct control of the presentation program used to develop the presentation. Often a presenter will present a lecture using the slides as a visual aid both for the presenter (to track the lecture's coverage) and for the audience (especially when an audience member mishears or misunderstands the verbal component).
Generally in presentations, the visual material is considered supplemental to a strong aural presentation that accompanies the slide show, but in many cases, such as statistical graphics, it can be difficult to convey essential information other than by visual means; additionally, a well-designed infographic can be extremely effective in a way that words are not. Endemic over-reliance on slides with low information density and with a poor accompanying lecture has given presentation software a negative reputation as sometimes functioning as a crutch for the poorly informed or the poorly prepared.
Using Autographix and Dicomed, it became quite easy to make last-minute changes compared to traditional typesetting and pasteup. It was also a lot easier to produce a large number of slides in a small amount of time. However, these workstations also required skilled operators, and a single workstation represented an investment of $50,000 to $200,000 (in 1979 dollars).
In the mid-1980s developments in the world of computers changed the way presentations were created. Inexpensive, specialized applications now made it possible for anyone with a PC to create professional-looking presentation graphics.
Originally these programs were used to generate 35 mm slides, to be presented using a slide projector. As these programs became more common in the late 1980s several companies set up services that would accep |
https://en.wikipedia.org/wiki/UUNET | UUNET, founded in 1987, was one of the first and largest commercial Internet service providers and one of the early Tier 1 networks. It was based in Northern Virginia. Today, UUNET is an internal brand of Verizon Business (formerly MCI).
History
Background
Prior to its founding, access to Usenet and e-mail exchange from non-ARPANET sites was accomplished using a cooperative network of systems running the UUCP protocol over POTS lines. During the mid-1980s, growth of this network began to put considerable strain on the resources voluntarily provided by the larger UUCP hubs. This prompted Rick Adams, a system administrator at the Center for Seismic Studies, to explore the possibilities of providing these services commercially as a way to reduce the burden on the existing hubs.
Early existence
With funding in the form of a loan from Usenix, UUNET Communications Services began operations in 1987 as a non-profit corporation providing Usenet feeds, e-mail exchange, and access to a large repository of software source code and related information. The venture proved successful and shed its non-profit status within two years. At the same time, the company changed its name to UUNET Technologies. In 1990, UUNET launched its AlterNet service, which provided access to an IP backbone independent of the constraints of those operated by the government. That network lives on in a much larger form and serves as the core of a set of products that include access at dial-up and broadband speeds as well as web hosting. UUNET raised $6 million from Accel Partners, Menlo Ventures, and New Enterprise Associates in 1993, and $8.2 million in 1996, to expand its network and hire new executives with experience in marketing.
In the mid-1990s, UUNET was the fastest-growing ISP, outpacing MCI and Sprint. At its peak, Internet traffic was briefly doubling every few months, which translates to 10x growth each year. However, the continuing UUNET claims of such growth (long after it had fallen to lower, albeit still substantial levels) artificially fueled the expectations of the dot-com and telecom companies of the late 1990s, leading to the dot-com bubble and crash in 2000/2001.
Mergers and acquisitions
UUNET was acquired by MFS on 30 April 1996. This was an independent transaction, unrelated to the later acquisition of MFS by WorldCom. However, as MFS was a public company and the acquisition made it a Wall Street darling, it likely influenced WorldCom's decision to pursue MFS.
Later that year, on 26 August 1996, UUNET passed to WorldCom as part of WorldCom's purchase of MFS Communications Company.
In 2001, UUNET was fully integrated with WorldCom and the name was dropped from all official documents.
In 2002, the owner of UUNET at that time (WorldCom) filed for what was then the largest Chapter 11 bankruptcy protection in history.
In 2005, its Internet service and infrastructure, assigned AS701, maintained the highest outdegree of any ISP.
Verizon
In 2 |
https://en.wikipedia.org/wiki/Manuel%20Blum | Manuel Blum (born 26 April 1938) is a Venezuelan-born American computer scientist who received the Turing Award in 1995 "In recognition of his contributions to the foundations of computational complexity theory and its application to cryptography and program checking".
Education
Blum was born to a Jewish family in Venezuela. He was educated at MIT, where he received his bachelor's degree and his master's degree in electrical engineering in 1959 and 1961 respectively, and his Ph.D. in mathematics in 1964 under the supervision of Marvin Minsky.
Career
Blum worked as a professor of computer science at the University of California, Berkeley, until 2001. From 2001 to 2018, he was the Bruce Nelson Professor of Computer Science at Carnegie Mellon University, where his wife, Lenore Blum, was also a professor of computer science.
In 2002, he was elected to the United States National Academy of Sciences. In 2006, he was elected a member of the National Academy of Engineering for contributions to abstract complexity theory, inductive inference, cryptographic protocols, and the theory and applications of program checkers.
In 2018 he and his wife Lenore resigned from Carnegie Mellon University to protest against sexism after a change in management structure of Project Olympus led to sexist treatment of her as director and the exclusion of other women from project activities.
Research
In the 1960s he developed an axiomatic complexity theory which was independent of concrete machine models. The theory is based on Gödel numberings and the Blum axioms. Even though the theory is not based on any machine model, it yields concrete results like the compression theorem, the gap theorem, the honesty theorem and the Blum speedup theorem.
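The Blum axioms themselves are short; in a standard formulation (not taken from this article), given a Gödel numbering (φ_i) of the partial computable functions, a family (Φ_i) of partial computable functions is a complexity measure when:

```latex
% Blum axioms (standard formulation):
% \varphi_i = the i-th partial computable function,
% \Phi_i    = its associated complexity measure (e.g. steps used by machine i).
\begin{align*}
\text{(B1)}\quad & \operatorname{dom}(\Phi_i) = \operatorname{dom}(\varphi_i)
                   \quad \text{for every } i, \\
\text{(B2)}\quad & \text{the predicate } \Phi_i(x) = t
                   \text{ is decidable in } (i, x, t).
\end{align*}
```

Time and space used by a Turing machine both satisfy these axioms, which is why the theory's results apply to either measure.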
Some of his other work includes a protocol for flipping a coin over a telephone, median of medians (a linear time selection algorithm), the Blum Blum Shub pseudorandom number generator, the Blum–Goldwasser cryptosystem, and more recently CAPTCHAs.
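Of these, the Blum Blum Shub generator is easy to state concretely: it iterates x_{n+1} = x_n² mod M for M = p·q, with primes p ≡ q ≡ 3 (mod 4). A minimal sketch with toy parameters (the primes and seed below are illustrative and far too small for cryptographic use):

```python
# Blum Blum Shub: x_{n+1} = x_n^2 mod M, where M = p*q for primes
# p, q == 3 (mod 4). Real use needs large, secret primes; these are toys.

def bbs_bits(seed, p=11, q=23, nbits=8):
    M = p * q                  # M = 253; both 11 and 23 are 3 mod 4
    x = (seed * seed) % M      # seed should be coprime to M
    out = []
    for _ in range(nbits):
        x = (x * x) % M        # square-and-reduce step
        out.append(x & 1)      # emit the least-significant bit of the state
    return out

bits = bbs_bits(3)
```

The security argument reduces predicting the output bits to the presumed hardness of factoring M, which is why the tiny modulus above is for illustration only.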
Blum is also known as the advisor of many prominent researchers. Among his Ph.D. students are Leonard Adleman, Dana Angluin, Shafi Goldwasser, Mor Harchol-Balter, Russell Impagliazzo, Silvio Micali, Gary Miller, Moni Naor, Steven Rudich, Michael Sipser, Ronitt Rubinfeld, Umesh Vazirani, Vijay Vazirani, Luis von Ahn, and Ryan Williams.
See also
List of Venezuelans
Graph isomorphism problem
Non-interactive zero-knowledge proof
Quantum coin flipping
Pancake sorting
References
American computer scientists
Theoretical computer scientists
1938 births
Living people
Jewish American scientists
International Association for Cryptologic Research fellows
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
Turing Award laureates
Carnegie Mellon University faculty
UC Berkeley College of Engineering faculty
Venezuelan emigrants to the United States
Venezuelan Jews
Massachusetts Institute of Technology School of Science alumni
20th-century Amer |
https://en.wikipedia.org/wiki/John%20McCarthy%20%28computer%20scientist%29 | John McCarthy (September 4, 1927 – October 24, 2011) was an American computer scientist and cognitive scientist. He was one of the founders of the discipline of artificial intelligence. He co-authored the document that coined the term "artificial intelligence" (AI), developed the programming language family Lisp, significantly influenced the design of the language ALGOL, popularized time-sharing, and invented garbage collection.
McCarthy spent most of his career at Stanford University. He received many accolades and honors, such as the 1971 Turing Award for his contributions to the topic of AI, the United States National Medal of Science, and the Kyoto Prize.
Early life and education
John McCarthy was born in Boston, Massachusetts, on September 4, 1927, to an Irish immigrant father and a Lithuanian Jewish immigrant mother, John Patrick and Ida (Glatt) McCarthy. The family was obliged to relocate frequently during the Great Depression, until McCarthy's father found work as an organizer for the Amalgamated Clothing Workers in Los Angeles, California. His father came from Cromane, a small fishing village in County Kerry, Ireland. His mother died in 1957.
Both parents were active members of the Communist Party during the 1930s, and they encouraged learning and critical thinking. Before he attended high school, he became interested in science by reading a translation of a Russian popular science book for children called 100,000 Whys. McCarthy was fluent in Russian and made friends with Russian scientists during multiple trips to the Soviet Union, but he distanced himself after later visits to the Soviet Bloc, which led to him becoming a conservative Republican.
McCarthy graduated from Belmont High School two years early and was accepted into Caltech in 1944.
McCarthy showed an early aptitude for mathematics; during his teens he taught himself college mathematics by studying the textbooks used at the nearby California Institute of Technology (Caltech). As a result, he was able to skip the first two years of mathematics at Caltech.
McCarthy was suspended from Caltech for failure to attend physical education courses. He then served in the US Army and was readmitted, receiving a BS in mathematics in 1948.
It was at Caltech that he attended a lecture by John von Neumann that inspired his future endeavors.
McCarthy initially completed graduate studies at Caltech before moving to Princeton University. He received a PhD in mathematics from Princeton in 1951 after completing a doctoral dissertation, titled "Projection operators and partial differential equations", under the supervision of Donald C. Spencer.
Academic career
After short-term appointments at Princeton and Stanford University, McCarthy became an assistant professor at Dartmouth in 1955.
A year later, McCarthy moved to MIT as a research fellow in the autumn of 1956. By the end of his years at MIT he was already affectionately referred to as "Uncle John" by his students.
I |
https://en.wikipedia.org/wiki/Amir%20Pnueli | Amir Pnueli (; April 22, 1941 – November 2, 2009) was an Israeli computer scientist and the 1996 Turing Award recipient.
Biography
Pnueli was born in Nahalal, in the British Mandate of Palestine (now in Israel), and received a bachelor's degree in mathematics from the Technion in Haifa and a Ph.D. in applied mathematics from the Weizmann Institute of Science (1967). His thesis was on the topic of "Calculation of Tides in the Ocean". He switched to computer science during a stint as a post-doctoral fellow at Stanford University. His work in computer science focused on temporal logic and model checking, particularly regarding fairness properties of concurrent systems.
He returned to Israel as a researcher; he was the founder and first chair of the computer science department at Tel Aviv University. He became a professor of computer science at the Weizmann Institute in 1981. From 1999 until his death, Pnueli also held a position at the Computer Science Department of New York University. He also served as an associate professor at the University of Pennsylvania and the Joseph Fourier University.
Pnueli also founded two startup technology companies during his career. He had three children and, at his death, had four grandchildren.
Pnueli died on November 2, 2009 of a brain hemorrhage.
Awards and honours
In 1996, Pnueli received the Turing Award for seminal work introducing temporal logic into computing science and for outstanding contributions to program and systems verification.
On May 30, 1997 Pnueli received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden.
In 1999, he was inducted as a Foreign Associate of the U.S. National Academy of Engineering.
In 2000, he was awarded the Israel Prize, for computer science.
In 2007, he was inducted as a Fellow of the Association for Computing Machinery.
The Weizmann Institute of Science presents a memorial lecture series in his honour.
See also
List of Israel Prize recipients
References
External links
New York University homepage
Short biography
Weizmann Institute homepage
Profile
1941 births
2009 deaths
Fellows of the Association for Computing Machinery
Formal methods people
Israeli Jews
Israeli computer scientists
Israel Prize in computer sciences recipients
Jewish scientists
Members of the Israel Academy of Sciences and Humanities
Courant Institute of Mathematical Sciences faculty
People from Nahalal
Programming language researchers
Technion – Israel Institute of Technology alumni
Academic staff of Tel Aviv University
Theoretical computer scientists
Turing Award laureates
Academic staff of Weizmann Institute of Science
Polytechnic Institute of New York University faculty
Foreign associates of the National Academy of Engineering |
https://en.wikipedia.org/wiki/Jim%20Gray%20%28computer%20scientist%29 | James Nicholas Gray (1944 – declared dead in absentia 2012) was an American computer scientist who received the Turing Award in 1998 "for seminal contributions to database and transaction processing research and technical leadership in system implementation".
Early years and personal life
Gray was born in San Francisco, the second child of Ann Emma Sanbrailo, a teacher, and James Able Gray, who was in the U.S. Army; the family moved to Rome, Italy, where Gray spent most of the first three years of his life; he learned to speak Italian before English. The family then moved to Virginia, spending about four years there, until Gray's parents divorced, after which he returned to San Francisco with his mother. His father, an amateur inventor, patented a design for a ribbon cartridge for typewriters that earned him a substantial royalty stream.
After being turned down for the Air Force Academy he entered the University of California, Berkeley as a freshman in 1961. To help pay for college, he worked as a co-op for General Dynamics, where he learned to use a Monroe calculator. Discouraged by his chemistry grades, he left Berkeley for six months, returning after an experience in industry he later described as "dreadful". Gray earned his B.S. in engineering mathematics (Math and Statistics) in 1966.
After marrying, Gray moved with his wife Loretta to New Jersey, his wife's home state; she worked as a teacher and he worked at Bell Labs on a digital simulation that was to be part of Multics. At Bell, he worked three days a week and spent two days as a Master's student at New York University's Courant Institute. After a year they traveled for several months before settling again in Berkeley, where Gray entered graduate school with Michael A. Harrison as his advisor. In 1969 he received his Ph.D. in programming languages, then did two years of postdoctoral work for IBM.
While at Berkeley, Gray and Loretta had a daughter; they were later divorced. His second wife was Donna Carnes.
Research
Gray pursued his career primarily working as a researcher and software designer at a number of industrial companies, including IBM, Tandem Computers, and DEC. He joined Microsoft in 1995 and was a Technical Fellow for the company until he was lost at sea in 2007.
Gray contributed to several major database and transaction processing systems. IBM's System R was the precursor of the SQL relational databases that have become a standard throughout the world. For Microsoft, he worked on TerraServer-USA and Skyserver.
His best-known achievements include:
ACID, an acronym describing the requirements for reliable transaction processing and its software implementation
Granular database locking
Two-tier transaction commit semantics
The Five-minute rule for allocating storage
OLAP cube operator for data warehousing
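Of these, the five-minute rule is a simple formula: keep a page cached in RAM if it is re-referenced at least once per break-even interval, where the interval is (pages per MB of RAM ÷ accesses per second per disk) × (price per disk drive ÷ price per MB of RAM). A back-of-the-envelope sketch loosely in the spirit of the 1997 revisit of the rule (the prices and rates below are illustrative assumptions, not historical data):

```python
# Five-minute rule: a page worth caching is one re-referenced at least
# once per break-even interval. All parameter values here are assumed
# illustrative figures, not the historical numbers from the papers.

def break_even_seconds(pages_per_mb_ram, ios_per_sec_per_disk,
                       price_per_disk, price_per_mb_ram):
    # (pages/MB / IOs/s) is the per-page share of a disk's access rate;
    # multiplying by the price ratio turns it into a time interval.
    return (pages_per_mb_ram / ios_per_sec_per_disk) * \
           (price_per_disk / price_per_mb_ram)

# e.g. 8 KB pages (128 pages/MB), 64 random IOs/s per disk,
# $2000 per drive, $15 per MB of RAM -> interval in seconds
interval = break_even_seconds(128, 64, 2000, 15)
```

With these assumed numbers the break-even interval comes out near five minutes; as RAM and disk prices shift, the same formula yields different intervals, which is why the rule has been revisited over the decades.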
He assisted in developing Virtual Earth. He was also one of the co-founders of the Conference on Innovative Data Systems Research.
Disappearance
Gray, an exper |
https://en.wikipedia.org/wiki/Non-uniform%20rational%20B-spline | Non-uniform rational basis spline (NURBS) is a mathematical model using basis splines (B-splines) that is commonly used in computer graphics for representing curves and surfaces. It offers great flexibility and precision for handling both analytic (defined by common mathematical formulae) and modeled shapes. It is a type of curve modeling, as opposed to polygonal modeling or digital sculpting. NURBS curves are commonly used in computer-aided design (CAD), manufacturing (CAM), and engineering (CAE). They are part of numerous industry-wide standards, such as IGES, STEP, ACIS, and PHIGS. Tools for creating and editing NURBS surfaces are found in various 3D graphics and animation software packages.
They can be efficiently handled by computer programs yet allow for easy human interaction. NURBS surfaces are functions of two parameters mapping to a surface in three-dimensional space. The shape of the surface is determined by control points. In a compact form, NURBS surfaces can represent simple geometrical shapes. For complex organic shapes, T-splines and subdivision surfaces are more suitable because they halve the number of control points in comparison with the NURBS surfaces.
In general, editing NURBS curves and surfaces is intuitive and predictable. Control points are always either connected directly to the curve or surface, or else act as if they were connected by a rubber band. Depending on the type of user interface, the editing of NURBS curves and surfaces can be via their control points (similar to Bézier curves) or via higher level tools such as spline modeling and hierarchical editing.
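The control-point formulation above can be made concrete: a NURBS curve point is a weight-blended average of control points, C(u) = Σ N_{i,p}(u) w_i P_i / Σ N_{i,p}(u) w_i, with the basis functions N_{i,p} computed by the Cox-de Boor recursion. A minimal sketch (the quadratic example curve, knot vector, and weights are illustrative choices):

```python
# Evaluating one point on a NURBS curve via the Cox-de Boor recursion.
# This is a didactic sketch; production code uses knot-span search and
# de Boor's algorithm rather than the naive recursion below.

def basis(i, p, u, knots):
    # B-spline basis function N_{i,p}(u); zero-length knot spans
    # (repeated knots) contribute nothing to the recursion.
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (u - knots[i]) / d1 * basis(i, p - 1, u, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - u) / d2 * basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    # Rational blend: weights pull the curve toward a control point
    # without moving the point itself.
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        b = basis(i, p, u, knots) * w
        num_x += b * x
        num_y += b * y
        den += b
    return (num_x / den, num_y / den)

# Quadratic curve, 3 control points, clamped knot vector [0,0,0,1,1,1].
pt = nurbs_point(0.5, [(0, 0), (1, 2), (2, 0)],
                 [1.0, 1.0, 1.0], [0, 0, 0, 1, 1, 1], p=2)
```

With all weights equal the curve reduces to an ordinary B-spline; raising one weight above 1.0 pulls the curve toward that control point, which is the "rational" part of the acronym.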
History
Before computers, designs were drawn by hand on paper with various drafting tools. Rulers were used for straight lines, compasses for circles, and protractors for angles. But many shapes, such as the freeform curve of a ship's bow, could not be drawn with these tools. Although such curves could be drawn freehand at the drafting board, shipbuilders often needed a life-size version which could not be done by hand. Such large drawings were done with the help of flexible strips of wood, called splines. The splines were held in place at a number of predetermined points by weights called "ducks" (lead weights, about three inches long, whose "beak" pressed against the spline; the old yacht-design books assumed these methods); between the ducks, the elasticity of the spline material caused the strip to take the shape that minimized the energy of bending, thus creating the smoothest possible shape that fit the constraints. The shape could be adjusted by moving the ducks.
In 1946, mathematicians started studying the spline shape, and derived the piecewise polynomial formula known as the spline curve or spline function. I. J. Schoenberg gave the spline function its name after its resemblance to the mechanical spline used by draftsmen.
As computers were introduced into the design process, the physical properties of such splines were investigated so |
https://en.wikipedia.org/wiki/Import%20%28disambiguation%29 | Import is the act of bringing goods into a country.
Import may also refer to:
import and export of data, in computing
import tariff, a tax on imported goods
import quota, a type of trade restriction
Import substitution industrialization, an economic policy
Import scene, a subculture that centers on modifying imported brand cars
The #import directive in Objective-C
The import keyword in Java
See also
Export (disambiguation) |
https://en.wikipedia.org/wiki/UDF | UDF may refer to:
Astronomy
Ultra Deep Field, a view of the distant universe taken in 2004 by the Hubble space telescope
UDF 423, a distant spiral galaxy
UDF 2457, a red dwarf star
Computing
Universal Disk Format, an operating-system-independent file system commonly used on DVD and other digital media
Uniqueness Database File, a Windows XP Professional configuration text file
User-defined function, a function provided by the user of a program or environment
Organizations
Politics
United Democratic Forces (ОДС), an electoral alliance in Bulgaria
United Democratic Forces of Belarus, a coalition of political parties participating as the main opposition group during the 2006 presidential election
United Democratic Front (Botswana)
United Democratic Front (Kerala), India
United Democratic Front (Mizoram), India
United Democratic Front (Malawi)
United Democratic Front (Namibia)
United Democratic Front (Pakistan)
United Democratic Front (South Africa)
United Democratic Front (South Sudan)
United Democratic Forum Party, a former political party in Kenya
Union pour la Démocratie Française (Union for French Democracy), a former centrist pro-European French political party
Military
Ulster Defence Force, a paramilitary group in Northern Ireland
Union Defence Force (South Africa), the predecessor of the South African Defence Force from 1912 to 1957
Union Defence Force (UAE), the armed forces of the United Arab Emirates
Other organizations
United Dairy Farmers, an American chain of ice cream shops
Other uses
Unducted fan, another name for a propfan engine
United Defense Force, a fictional global military in the manga All You Need Is Kill and its film adaptation Edge of Tomorrow
Ural Delay Factor, a humorous term for the delay to one's departure caused when a Ural sidecar motorcycle attracts the attention of bystanders
See also
Union of Democratic Forces (disambiguation), several political parties |
https://en.wikipedia.org/wiki/Hudson%20Soft | was a Japanese video game company that released numerous games for video game consoles, home computers and mobile phones, mainly from the 1980s to the 2000s. It was headquartered in the Midtown Tower in Tokyo, with an additional office in the Hudson Building in Sapporo.
Hudson Soft was founded on May 18, 1973. Initially, it dealt with personal computer products, but later expanded to the development and publishing of video games, mobile content, video game peripherals and music recording. Primarily a video game publisher, it internally developed many of the video games it released while outsourcing others to external companies. It is known for series such as Bomberman, Adventure Island, Star Soldier, and Bonk. Hudson also developed video games released by other publishers such as the Mario Party series from Nintendo. The mascot of the company is a bee named Hachisuke.
Hudson Soft made the TurboGrafx-16 in association with NEC, to compete against Nintendo, Sega, and SNK, while continuing making games on other platforms, as a third-party developer.
Hudson Soft ceased to exist as a company on March 1, 2012, and merged with Konami Digital Entertainment, which was the surviving entity. Konami owns the assets of Hudson and has since rereleased its video game back catalogue on different occasions.
History
Hudson Soft Ltd. was founded in Toyohira-ku, Sapporo, Japan on May 18, 1973 by brothers Yuji and Hiroshi Kudo. The founders grew up admiring trains, and named the business after their favourite, the Hudson locomotives (those with the 4-6-4 wheel arrangement, especially the Japanese C62).
Hudson began as an amateur radio shop called CQ Hudson (CQハドソン), selling radio telecommunications devices and art photographs. Yuji Kudo had originally planned to start a coffee shop, but there was already one in the same building, resulting in the decision to change to a wireless radio shop at the eleventh hour. Although the Kudo brothers had university education, neither had studied in business management. That factor, combined with the difficulty to find trustworthy people to accompany the Kudos in their venture, meant that Hudson was almost always in the red each month during its era exclusively as a radio shop.
In September 1975, Hudson began selling personal computer-related products, and in March 1978 it started developing and selling video game packages. At that time, many amateur radio shops were switching to the sale of personal computers because both lines of business dealt with the same electronic equipment. CQ Hudson would continue to operate for decades in Sapporo until Hudson Soft closed the shop in May 2001.
In the late 1970s and early 1980s, Hudson Soft favoured a quantity-over-quality approach for the marketing of video games. At one point, the company released up to 30 different computer software titles per month; none of which were hugely successful. Things changed in late 1983, when Hudson started to prioritise quality-over-quantity. Hudson became Nintendo's first third-party softwa |
https://en.wikipedia.org/wiki/Athena%20%28video%20game%29 | is a 1986 platform arcade video game developed and published by SNK. Conversions were later released for the NES console and ZX Spectrum and Commodore 64 home computers.
The game's protagonist, Princess Athena, has gone on to appear in later fighting games by SNK as a secret character or assistant to her descendant Athena Asamiya, a frequent main character in these games.
Plot
Athena was the young, headstrong princess of the heavenly Kingdom of Victory. She was bored of the monotonous daily life in the palace and desired exciting adventures. One day, she opened the "Door Which Shouldn't Be Opened" in the basement of Castle Victory, said to lead to a savage and deadly place. As she dared cross the doorway, she fell from the skies into another realm called Fantasy World, which was dominated by the evil Emperor Dante. Her flowing dress was lost to the wind during her fall, and the perilous adventures of Princess Athena began as she landed in a wilderness overrun by beast-like warriors and more dangers than she could ever wish for. With no choice but to face the ruthless Dante and every obstacle in her way, she readied to fight for her life and arm herself, to free this kingdom and make it back alive to her own.
After Athena defeats Dante, it all begins anew in the sequel, Athena: Full Throttle, in which the princess, again bored, opens the "Door Which Shouldn't Be Opened B", disregarding her loyal maid Helene's advice, and they both fall to Elysium World, where they face off against other villains.
Many of the game's elements are inspired by Greek mythology or ancient Roman culture such as weapons, equipment, items and enemy designs, while Princess Athena herself is named after the Greek goddess Athena.
Gameplay
Upon landing, unarmed and nearly nude, the princess only has her kicks to fend off the approaching monsters, but she may collect the dead enemies' various weapons and also has the chance to find shields, headgear and armor to cover her body, although these will be lost after withstanding some attacks. Her journey requires leaping and climbing as well as fighting through the land's eight hazardous worlds, each leading up to an oversized enemy that must be dealt with before proceeding to the next area. The use of certain weapons such as a hammer allows Athena to break through stone blocks, sometimes revealing not only armor but magic items such as Mercury's sandals that, when worn, allow her to make great leaps.
The game features certain role-playing video game elements to complement the platform action. Princess Athena has to defeat enemies such as the final boss by using various mythological weapons, items and equipment. Without some items, she cannot make it through the adventure.
Ports
Athena was later converted for the NES by Micronics. Conversions were also done for the ZX Spectrum and Commodore 64 in 1987 by Ocean Software and released under their Imagine label.
The NES version was only release |
https://en.wikipedia.org/wiki/Case%20modding | Case modification, commonly referred to as case modding, is the modification of a computer case or a video game console chassis. Modifying a computer case in any non-standard way is considered a case mod. Modding is done, particularly by hardware enthusiasts, to show off a computer's apparent power by showing off the internal hardware, and also to make it look aesthetically pleasing to the owner.
Cases may also be modified to improve a computer's performance; this is usually associated with cooling and involves changes to components as well as the case.
History
When personal computers first became available to the public, the majority were produced in simple, beige-colored cases. This design is sometimes referred to as a beige box. Although this met the purpose of containing the components of the personal computer, many users considered their computers "tacky" or "dull", and some began modifying their existing chassis, or building their own from scratch. One of the original case mods is the "Macquarium", which consists of replacing the CRT screen in a Compact Macintosh case with a fishbowl.
A new market for third-party computer cases and accessories began to develop, and today cases are available in a wide variety of colors and styles. Today the business of "modding" computers and their cases is a hugely profitable endeavor, and modding competitions are everywhere. Since 2017, computer hardware companies have started to offer some of their products with built-in RGB LED lighting, replacing earlier single-color LED lighting; single-color LED lighting had itself begun to replace earlier CCFL-based lighting (mixed with single-color LEDs) in the late 2000s and early 2010s. RGB lighting may be integrated onto fans, liquid-cooler pumps, RAM modules, or graphics card coolers, or it may be installed in the case itself as an RGB light strip. RGB lights may be controlled by the motherboard with an onboard lighting controller, or externally with a separate controller. They may also draw power directly from the power supply. Many cases now (as of 2019) come with side windows and RGB fans.
Common modifications
Appearance
Peripheral mods
Peripherals like the keyboard, mouse, and speakers are sometimes painted or otherwise modified to match the computer. Some system builders, in an effort to make their system more portable and convenient, install speakers and small screens into the case.
Case building
Sometimes modders build entire cases from scratch. Some may attempt to treat the case as a work of art. Others make it resemble something else, like a teddy bear, wooden cabinet, a shelf mounted on a wall, or antique equipment such as a Macintosh Plus or an old Atari 2600 video game console. Relatively few case modders or builders make their computer cases from scratch; those who do sometimes put hundreds of hours into their work. The WMD case, Project Nighthawk, and Dark Blade case are a few examples of professional |
https://en.wikipedia.org/wiki/Infiniti | Infiniti (stylized as INFINITI) is the luxury vehicle division of the Japanese automaker Nissan. Infiniti officially started selling vehicles on November 8, 1989, in North America. The marketing network for Infiniti-branded vehicles included dealers in over 50 countries in the 2010s. As of 2020, there were 25 markets served by new car dealers. The main markets are North America, China, and the Middle East.
According to the company, the Infiniti badge has a double meaning, as stylized representations of both a road extending into the horizon and of Mount Fuji, reflecting its Japanese origins.
History
The beginning
The Infiniti brand was introduced in the United States in 1989 to target the premium vehicle segments in the United States that would not have otherwise fit in with Nissan's more mainstream image, and partially influenced by the Plaza Accord of 1985. The brand was created around the same time that Japanese rivals Toyota and Honda developed their Lexus and Acura premium brands, respectively. The Japanese government imposed voluntary export restraints for the U.S. market, so it was more profitable for automakers to export more expensive cars to the U.S.
The Infiniti marque was launched with two models, the Q45 and the M30, which were previously sold through Japanese Nissan Motor Store dealership networks. The Q45 was based on the all-new second-generation JDM Nissan President, on a platform with a wheelbase five millimeters shorter, at 2,875 mm (113.2 in). Starting with model year 1992, the wheelbase matched the President's at 2,880 mm (113.4 in). The Q45 included a V8 engine, four-wheel steering, and an active suspension system offered on the first-generation Q45t. The car's features would have made it competitive in the full-sized "luxury" segment against the Mercedes S-Class, BMW 7 Series, Jaguar XJ and Cadillac Fleetwood.
A second model was introduced in November 1989, the two-door M30, a badge engineered Nissan Leopard. It remained in production for three years as an alternative to the Lexus SC. The powertrain was the VG30E engine and an automatic transmission. The M30 coupe was underpowered for its stock weight of . The M30 convertible weighed even more, due to the required body and chassis reinforcements. The appearance of the M30 had almost no resemblance to the larger Q45, and the interior was almost completely different.
Infiniti did not offer a mid-luxury sedan to match the first Japanese luxury sedan introduced to North America, the Acura Legend, which was later joined by the Lexus GS. Infiniti's first offering in the entry-level luxury segment was the Infiniti J30, which had to compete with the revised 1992 Lexus ES and was unsuccessful owing to its small interior and unusual styling. It was succeeded in 1996 by the Infiniti I series (introduced in April 1995 and related to the Nissan Maxima), and in 2002 by the Infiniti G35.
1990s
In September 1990, Infiniti introduced a third model, the Infiniti G20, derived from the compac |
https://en.wikipedia.org/wiki/Agrep | agrep (approximate grep) is an open-source approximate string matching program, developed by Udi Manber and Sun Wu between 1988 and 1991, for use with the Unix operating system. It was later ported to OS/2, DOS, and Windows.
It selects the best-suited algorithm for the current query from a set of fast built-in string searching algorithms, including Manber and Wu's bitap algorithm, which is based on Levenshtein distances.
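The bit-parallel idea behind bitap can be illustrated with a short sketch. The following is not agrep's own code but a minimal exact-matching ("shift-and") form of the bitap algorithm; agrep's versions extend this per-character state update to tolerate a bounded number of Levenshtein-distance errors.

```python
# Minimal exact-match ("shift-and") bitap sketch, for illustration only.
def bitap_search(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    m = len(pattern)
    if m == 0:
        return 0
    # Per-character bitmask: bit i is set if pattern[i] == c.
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    state = 0  # bit i set: pattern[:i+1] matches a suffix of the text read so far
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & (1 << (m - 1)):
            return j - m + 1  # full pattern matched, ending at position j
    return -1

assert bitap_search("xxabc", "abc") == 2
```

Because the state fits in a single machine word for short patterns, each text character costs only a shift, an OR, and an AND, which is what makes the bitap family fast in practice.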
agrep is also the search engine in the indexer program GLIMPSE. agrep is released under the free ISC License.
Alternative implementations
A more recent agrep is the command-line tool provided with the TRE regular expression library. TRE agrep is more powerful than Wu-Manber agrep since it allows weights and total costs to be assigned separately to individual groups in the pattern. It can also handle Unicode. Unlike Wu-Manber agrep, TRE agrep is licensed under a 2-clause BSD-like license.
FREJ (Fuzzy Regular Expressions for Java) is an open-source library that provides a command-line interface which can be used in a way similar to agrep. Unlike agrep or TRE, it can be used for constructing complex substitutions for matched text. However, its syntax and matching abilities differ significantly from those of ordinary regular expressions.
See also
Bitap algorithm
TRE (computing)
References
External links
Wu-Manber agrep
AGREP home page
For Unix (To compile under OSX 10.8, add -Wno-return-type to the CFLAGs = -O line in the Makefile)
TRE regexp matching package
cgrep a defunct command line approximate string matching tool
nrgrep a command line approximate string matching tool
agrep as implemented in R
Information retrieval systems
Unix text processing utilities
Software using the ISC license |
https://en.wikipedia.org/wiki/Pulau%20Tekong | Pulau Tekong, also known colloquially as Tekong or Tekong Island, is the second-largest of Singapore's outlying islands after Jurong Island. Tekong is located off Singapore's northeastern coast, east of Pulau Ubin. Since the 1990s, the island has been used by the Singapore Armed Forces (SAF) and is generally restricted from public access. Transport to the island for permitted persons is via the SAF Changi Ferry Terminal at Changi Beach.
The original island has undergone extensive land reclamation works for military use on its southern and northwestern coasts subsuming many of its surrounding small islets, including the former Pulau Tekong Kechil (Small Tekong Island). When fully completed, the island is estimated to reach an area of about .
Etymology
Pulau Tekong appears in Franklin and Jackson's 1828 map as Po. Tukang. The early name could have arisen because the island served as a trading station for both residents of Pulau Ubin and the state of Johor. Tukang means merchants in this case.
Tekong means "an obstacle", so-called because the island blocks the mouth of the Sungai Johor. Pulo Tekong Besar came under the Changi district, and the island had a sizeable population, being the largest island off Singapore and two miles from Fairy Point. Ferries plied from the pier at that point and the island daily. After 1920, it was mostly known for its rubber plantations.
History
Civilian era
The island was once home to 5000 inhabitants, the last of whom moved out in 1987. Sixty percent of the inhabitants were Chinese (of whom 70 percent were Hakkas and 30 percent were Teochews), and 40 percent were Malays. There were a few Indians as well.
The reason Hakkas formed the majority of the Chinese population is that most of the Hokkien and Teochew businessmen already had flourishing businesses on the mainland. When the Hakkas arrived, they decided to make a living on a less inhabited island. Most were farmers, fishermen and shopkeepers selling sundry goods.
Wild pigs and deer were once plentiful on Pulau Tekong, and attracted hunters from Singapore. Pulo Tekong Besar underwent so much development after World War II, with vegetable, fruit and poultry farms, that the wildlife mostly disappeared.
Military era
On 29 May 1990, national servicemen spotted a family of three Indian elephants which had swum across the Straits of Johor. The Singapore Zoo worked with the Malaysian Wildlife Department's Elephant Capture and Translocation Unit to help in its plan to recapture the runaway elephants. On 10 June, all three elephants were captured and relocated back to the jungles of Johor.
In March 2004, Pulau Tekong was the hiding place for a group of armed robbers comprising two Indonesians and a Malaysian. The robbers had fled from Malaysia, sparking off a massive coordinated manhunt involving Air Force helicopters, commandos, ground surveil |
https://en.wikipedia.org/wiki/Nonlinear%20dimensionality%20reduction | Nonlinear dimensionality reduction, also known as manifold learning, refers to various related techniques that aim to project high-dimensional data onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa) itself. The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.
Applications of NLDR
Consider a dataset represented as a matrix (or a database table), such that each row represents a set of attributes (or features or dimensions) that describe a particular instance of something. If the number of attributes is large, then the space of unique possible rows is exponentially large. Thus, the larger the dimensionality, the more difficult it becomes to sample the space. This causes many problems. Algorithms that operate on high-dimensional data tend to have a very high time complexity. Many machine learning algorithms, for example, struggle with high-dimensional data. Reducing data into fewer dimensions often makes analysis algorithms more efficient, and can help machine learning algorithms make more accurate predictions.
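The sampling difficulty described above can be shown numerically. The following self-contained sketch uses synthetic random data (not from the article): as the dimension grows, pairwise distances between random points concentrate around their mean, one symptom of the "curse of dimensionality".

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(dim: int, n: int = 200) -> float:
    """Ratio of (max - min) pairwise distance to the mean pairwise distance."""
    pts = rng.random((n, dim))
    # All pairwise Euclidean distances via broadcasting.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]  # keep each unordered pair once
    return (d.max() - d.min()) / d.mean()

low = distance_spread(2)      # large relative spread in 2 dimensions
high = distance_spread(1000)  # distances nearly equal in 1000 dimensions
assert high < low
```

The shrinking spread means notions like "nearest neighbor" become less informative in high dimensions, which is one reason algorithms operating on such data struggle.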
Humans often have difficulty comprehending data in high dimensions. Thus, reducing data to a small number of dimensions is useful for visualization purposes.
The reduced-dimensional representations of data are often referred to as "intrinsic variables". This description implies that these are the values from which the data was produced. For example, consider a dataset that contains images of a letter 'A', which has been scaled and rotated by varying amounts. Each image has 32×32 pixels. Each image can be represented as a vector of 1024 pixel values. Each row is a sample on a two-dimensional manifold in 1024-dimensional space (a Hamming space). The intrinsic dimensionality is two, because two variables (rotation and scale) were varied in order to produce the data. Information about the shape or look of a letter 'A' is not part of the intrinsic variables because it is the same in every instance. Nonlinear dimensionality reduction will discard the correlated information (the letter 'A') and recover only the varying information (rotation and scale). The image to the right shows sample images from this dataset (to save space, not all input images are shown), and a plot of the two-dimensional points that results from using an NLDR algorithm (in this case, Manifold Sculpting was used) to reduce the data into just two dimensions.
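The linear baseline that NLDR methods generalize can be sketched directly. The following illustration runs PCA (computed with an SVD) on invented stand-in data: 1024-dimensional vectors generated from two intrinsic variables, loosely mirroring the 'A'-image setup above. Neither the actual image dataset nor Manifold Sculpting is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two intrinsic variables per sample (stand-ins for rotation and scale).
intrinsic = rng.random((300, 2))
# Embed them *linearly* into a 1024-dimensional ambient space, plus noise.
basis = rng.standard_normal((2, 1024))
X = intrinsic @ basis + 0.01 * rng.standard_normal((300, 1024))

# PCA: center the data, take the top-2 right singular vectors, project.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
embedding = Xc @ Vt[:2].T          # 300 samples reduced to 2 dimensions

# For this linearly generated data, two components capture nearly all
# the variance; data lying on a *curved* manifold would not reduce so
# cleanly, which is the gap nonlinear methods address.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
assert embedding.shape == (300, 2)
assert explained > 0.99
```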
By comparison, if principal component analysis, which is a linear dimensionality reduction algorithm, is used to reduce this same dataset into two dimensions, the resulting values are not so well organized. This demonstrates that the high-dimensional vectors (each representing a letter 'A') that s |
https://en.wikipedia.org/wiki/WAITS | WAITS was a heavily modified variant of Digital Equipment Corporation's Monitor operating system (later renamed to, and better known as, "TOPS-10") for the PDP-6 and PDP-10 mainframe computers, used at the Stanford Artificial Intelligence Laboratory (SAIL) from the mid-1960s up until 1991; the mainframe computer it ran on also went by the name of "SAIL".
Overview
There was never an "official" expansion of WAITS, but a common variant was "West-coast Alternative to ITS"; another variant was "Worst Acronym Invented for a Timesharing System". The name was endorsed by the SAIL community in a public vote choosing among alternatives. Two of the other contenders were SAINTS ("Stanford AI New Timesharing System") and SINNERS ("Stanford Incompatible Non-New Extensively Rewritten System"), proposed by the systems programmers. Though WAITS was less visible than ITS, there was frequent exchange of people and ideas between the two communities, and innovations pioneered at WAITS exerted enormous indirect influence.
WAITS alumni at Xerox PARC and elsewhere also played major roles in the developments that led to the Xerox Star, the Macintosh, and the SUN workstation (later sold by Sun Microsystems).
The early screen modes of Emacs, for example, were directly inspired by WAITS' "E" editor – one of a family of editors that were the first to do real-time editing, in which the editing commands were invisible and where one typed text at the point of insertion/overwriting. The modern style of multi-region windowing is said to have originated there.
The system also featured an unusual level of support for what is now called multimedia computing, allowing analog audio and video signals (including TV and radio) to be switched to programming terminals. This switching capability for terminal video even allowed users in separate offices to view and type on the same virtual terminal, or a single user to instantly switch among multiple full virtual terminals.
Also invented there were "bucky bits"; thus, the "Alt" key on every IBM PC is a WAITS legacy.
One WAITS feature very notable in pre-Web days was a news-wire search engine called NS (for News Service) that allowed WAITS hackers to instantly find, store and be notified about selected AP and New York Times news-wire stories by doing searches using arbitrary combinations of words. News story retrieval by such search was instantaneous because each story was automatically indexed by all its words when it came in over the wire.
References
External links
The autobiography of SAIL
FOLDOC description
SAILDART archive
SAILDART Prolegomenon, 2016 edition
Time-sharing operating systems
1967 software |
https://en.wikipedia.org/wiki/CODASYL | CODASYL, the Conference/Committee on Data Systems Languages, was a consortium formed in 1959 to guide the development of a standard programming language that could be used on many computers. This effort led to the development of the programming language COBOL, the CODASYL Data Model, and other technical standards.
CODASYL's members were individuals from industry and government involved in data processing activity. Its larger goal was to promote more effective data systems analysis, design, and implementation. The organization published specifications for various languages over the years, handing these over to official standards bodies (ISO, ANSI, or their predecessors) for formal standardization.
History
CODASYL is remembered almost entirely for two activities: its work on the development of the COBOL language and its activities in standardizing database interfaces. It also worked on a wide range of other topics, including end-user form interfaces and operating system control languages, but these projects had little lasting impact.
The remainder of this section is concerned with CODASYL's database activities.
In 1965 CODASYL formed a List Processing Task Force. This group was chartered to develop COBOL language extensions for processing collections of records; the name arose because Charles Bachman's IDS system (which was the main technical input to the project) managed relationships between records using chains of pointers. In 1967 the group renamed itself the Data Base Task Group (DBTG), and its first report in January 1968 was entitled COBOL extensions to handle data bases.
In October 1969 the DBTG published its first language specifications for the network database model which became generally known as the CODASYL Data Model. This specification in fact defined several separate languages: a data definition language (DDL) to define the schema of the database, another DDL to create one or more subschemas defining application views of the database; and a data manipulation language (DML) defining verbs for embedding in the COBOL programming language to request and update data in the database. Although the work was focused on COBOL, the idea of a host-language independent database was starting to emerge, prompted by IBM's advocacy of PL/I as a COBOL replacement.
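The navigational style these specifications defined can be sketched in modern terms. The following hypothetical Python fragment (all names invented for illustration) mimics the CODASYL network model, where records are linked into named "sets" — chains owned by one record, as in Bachman's IDS — and a program navigates with FIND/GET-style verbs rather than queries:

```python
# Hypothetical sketch of a CODASYL-style network database: an owner record
# holds chains of member records, and programs navigate the chains.
class Record:
    def __init__(self, **fields):
        self.fields = fields
        self.members = {}          # set name -> chain (list) of member records

    def connect(self, set_name, member):
        """Like CONNECT: append a member record to this owner's chain."""
        self.members.setdefault(set_name, []).append(member)

    def find_first(self, set_name):
        """Like FIND FIRST ... WITHIN set: position on the first member."""
        chain = self.members.get(set_name, [])
        return chain[0] if chain else None

    def find_next(self, set_name, current):
        """Like FIND NEXT ... WITHIN set: advance along the chain."""
        chain = self.members.get(set_name, [])
        i = chain.index(current) + 1
        return chain[i] if i < len(chain) else None

# A DEPT owner record with EMPLOYEE members in the invented set "DEPT-EMP".
dept = Record(name="ACCOUNTS")
for emp_name in ("SMITH", "JONES"):
    dept.connect("DEPT-EMP", Record(name=emp_name))

names = []
emp = dept.find_first("DEPT-EMP")   # FIND FIRST EMPLOYEE WITHIN DEPT-EMP
while emp is not None:              # GET each member, then FIND NEXT
    names.append(emp.fields["name"])
    emp = dept.find_next("DEPT-EMP", emp)
assert names == ["SMITH", "JONES"]
```

The point of the sketch is the programming model: the application walks pointer chains record by record, which is precisely the navigational style later relational systems replaced with declarative queries.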
In 1971, largely in response to the need for programming language independence, the work was reorganized: development of the Data Description Language was continued by the Data Description Language Committee, while the COBOL DML was taken over by the COBOL language committee. With hindsight, this split had unfortunate consequences. The two groups never quite managed to synchronize their specifications, leaving vendors to patch up the differences. The inevitable consequence was a lack of interoperability among implementations.
A number of vendors implemented database products conforming (roughly) to the DBTG specifications: the best-known implementations were Honeywell's— origina |
https://en.wikipedia.org/wiki/PL/C | PL/C is an instructional dialect of the programming language PL/I, developed at the Department of Computer Science of Cornell University in the early 1970s in an effort headed by Professor Richard W. Conway and graduate student Thomas R. Wilcox. PL/C was developed with the specific goal of being used for teaching programming. The PL/C compiler, which implemented almost all of the large PL/I language, had the unusual capability of never failing to compile a program, through the use of extensive automatic correction of many syntax errors and by converting any remaining syntax errors to output statements. This was important because, at the time, students submitted their programs on
IBM punch cards and might not get their output back for several hours. Over 250 other universities adopted PL/C; as one late-1970s textbook on PL/I noted, "PL/C ... the compiler for PL/I developed at Cornell University ... is widely used in teaching programming." Similarly, a mid-late-1970s survey of programming languages said that "PL/C is a widely used dialect of PL/I."
Origins and rationale
Work on this project was based on a prior Cornell compiler for the programming language CUPL, which in turn was influenced by the earlier Cornell language implementation CORC. Both of these were small, very restricted languages intended for the teaching of beginning programming. CORC had been used at Cornell from 1962 to 1966 and CUPL from 1965 to 1969. Conway's group had been involved in the development of both of those efforts, each of which attempted automatic repair of source code errors.
As the 1970s began, Cornell was attempting to find a teaching language that had general commercial acceptance but also contained modern language features. As another Cornell computer science professor, David Gries, wrote at the time, the first criterion effectively eliminated the ALGOL family of languages and the second criterion argued against FORTRAN and BASIC (with COBOL not even being considered); thus, they chose PL/I. While PL/I did have a foothold in educational use, the decision went against the grain of most universities, where one survey found that some 70 percent of American college students were being taught with FORTRAN. However, Cornell was intent on having a language useful for showing computer science principles and best engineering practices and through which methods such as structured programming and stepwise refinement could be taught, and PL/I was a more expressive vehicle for that than FORTRAN.
For educational institutions that did choose to use the language, the production IBM PL/I F compiler then available was much too slow, in both compile time and execution time, for its use to be practical for student programs. A similar situation existed for FORTRAN, where the IBM FORTRAN IV G compiler was too slow and the University of Waterloo's WATFOR implementation had become a very popular alternate solution. So there was an opening for a student compiler for PL/I; |
https://en.wikipedia.org/wiki/LCS | LCS may refer to:
Schools and organizations
Laboratory for Computer Science, research institute at the Massachusetts Institute of Technology
Lake County Schools school district of Lake County, Florida
Lakefield College School an independent school in Lakefield, Ontario, Canada
Larchmont Charter School, a public charter school in Los Angeles, California
Lebanese Community School in Lagos, Nigeria
Legal Complaints Service, a former body that formally investigated complaints about solicitors in the United Kingdom
Lincoln Christian College and Seminary
Lincoln Community School in Accra, Ghana
Littlehampton Community School, large secondary school in West Sussex, England
Littleover Community School in Derby, England
Lockerby Composite School Canadian Secondary School in Ontario
London Controlling Section, a British World War II secret organisation
London Co-operative Society, a former consumer co-operative society of the United Kingdom
London Corresponding Society, a radical British society founded in 1792
Louisville Collegiate School, a private, nonsectarian, college-preparatory K-12 school in Louisville, Kentucky
Lutheran Confessional Synod, a church body
Lynchburg City Schools
Lynden Christian Schools
Science, mathematics, and computing
Laser Camera System, a type of scanner used on the Space Shuttle
Lagrangian coherent structure, in fluid mechanics, a type of flow structure
Learning classifier system, machine learning system
Lincoln Calibration Sphere 1, first of a series of inert globes used as radar calibration satellites
Liquid cooling system
Live Communications Server
Live Communications Server 2003
Live Communications Server 2005
Locally convex space
Longest common substring problem in computer science, the longest shared sequence of consecutive characters
Longest common subsequence problem in computer science, the longest shared sequence of not necessarily consecutive characters
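The distinction the two entries above draw can be made concrete with a short sketch using the standard dynamic-programming formulations (illustrative code, not tied to any particular source): a longest common *substring* must be contiguous, while a longest common *subsequence* need not be.

```python
def longest_common_substring(a: str, b: str) -> str:
    """Longest run of consecutive characters shared by a and b."""
    best_end, best_len = 0, 0
    # dp[i][j] = length of the common suffix of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], i
    return a[best_end - best_len:best_end]

def longest_common_subsequence(a: str, b: str) -> str:
    """Longest shared sequence of not necessarily consecutive characters."""
    dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + a[i - 1]
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
    return dp[-1][-1]

assert longest_common_substring("ababc", "babca") == "babc"
assert longest_common_subsequence("ababc", "babca") == "babc"
```

For "XMJYAUZ" and "MZJAWXU" the longest common substring is a single character, while the longest common subsequence ("MJAU") has length four, showing how different the two problems are.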
Sports and entertainment
Grand Theft Auto: Liberty City Stories, a game for the PlayStation Portable and PlayStation 2
Last Comic Standing, an NBC reality program that premiered in 2003
League Championship Series, a round of playoffs in Major League Baseball
League Championship Series (formerly League of Legends Championship Series), a North American professional esports league for the video game League of Legends
Loose Cannon Studios, an American video game company
Other uses
Landing Craft Support, amphibious landing support ship, from World War II
LCS, a psychology credential for "Licensed Clinical Social Worker". See List of credentials in psychology
Littoral combat ship, a type of warship used by the United States
LCS, a family of companies involved with senior living communities
See also
LC (disambiguation) |
https://en.wikipedia.org/wiki/Barcode%20reader | A barcode reader or barcode scanner is an optical scanner that can read printed barcodes, decode the data contained in the barcode, and send the data to a computer. Like a flatbed scanner, it consists of a light source, a lens and a light sensor for translating optical impulses into electrical signals. Additionally, nearly all barcode readers contain decoder circuitry that can analyse the barcode's image data provided by the sensor and send the barcode's content to the scanner's output port.
Types of barcode scanners
Technology
Barcode readers can be differentiated by technologies as follows:
Pen-type readers
Pen-type readers consist of a light source and photodiode that are placed next to each other in the tip of a pen. To read a barcode, the person holding the pen must move the tip of it across the bars at a relatively uniform speed. The photodiode measures the intensity of the light reflected back from the light source as the tip crosses each bar and space in the printed code. The photodiode generates a waveform that is used to measure the widths of the bars and spaces in the barcode. Dark bars in the barcode absorb light and white spaces reflect light so that the voltage waveform generated by the photodiode is a representation of the bar and space pattern in the barcode. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded.
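The decoding idea described above — turning the photodiode waveform into bar and space widths — can be sketched as follows. The sample values and threshold are invented for illustration; a real reader would also normalize the widths and map them to the symbology's encoding tables.

```python
# Hedged sketch: convert sampled reflected-light levels into run lengths.
def waveform_to_widths(samples, threshold=0.5):
    """Convert reflectance samples into (is_bar, width_in_samples) runs.

    Low reflectance (below the threshold) means a dark bar; high means a
    white space, mirroring the voltage waveform the photodiode produces.
    """
    runs = []
    for level in samples:
        is_bar = level < threshold
        if runs and runs[-1][0] == is_bar:
            # Same element continues: extend the current run.
            runs[-1] = (is_bar, runs[-1][1] + 1)
        else:
            runs.append((is_bar, 1))
    return runs

# A wide bar, a narrow space, then a narrow bar:
assert waveform_to_widths([0.1, 0.1, 0.1, 0.9, 0.2]) == [
    (True, 3), (False, 1), (True, 1)]
```

The resulting width pattern is what gets matched against the barcode symbology, much as dot and dash durations are matched when decoding Morse code.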
Laser scanners
Laser scanners direct the laser beam back and forth across the barcode. As with the pen-type reader, a photo-diode is used to measure the intensity of the light reflected back from the barcode. In both pen readers and laser scanners, the light emitted by the reader is rapidly varied in brightness with a data pattern and the photo-diode receive circuitry is designed to detect only signals with the same modulated pattern.
CCD readers (also known as LED scanners)
Charge-coupled device (CCD) readers use an array of hundreds of tiny light sensors lined up in a row in the head of the reader. Each sensor measures the intensity of the light immediately in front of it. Each individual light sensor in the CCD reader is extremely small, and because there are hundreds of sensors lined up in a row, a voltage pattern identical to the pattern in a barcode is generated in the reader by sequentially measuring the voltages across each sensor in the row. The important difference between a CCD reader and a pen or laser scanner is that the CCD reader measures ambient light emitted from the barcode, whereas pen or laser scanners measure reflected light of a specific frequency originating from the scanner itself. LED scanners can also be made using CMOS sensors, and are replacing earlier laser-based readers.
Camera-based readers
Two-dimensional imaging scanners are a newer type of barcode reader. They use a camera and image processing techniques to decode the barcode.
Video camera readers use small video cameras with the same CCD technology as in a CCD barcode reader exce |
https://en.wikipedia.org/wiki/Radio%20Free%20Virgin | Radio Free Virgin was a digital radio broadcaster started in early 1999 and a member company of the Virgin Group. Its programming consisted of over 60 professionally programmed channels playing various genres of music. It quickly gained popularity, and its downloadable radio player reached 1 million downloads within a few months in early 2000. The company was a privately held corporation funded by Richard Branson and was a unit of Virgin Audio Holdings, LLC. It was headed by Zack Zalon and Brendon Cassidy, who were early pioneers in the internet music business. Dave Gordon was an early webmaster for the fledgling group.
While initially a completely free service, programming was ultimately broadcast over the Internet in a two-tier setup: a free tier that allowed access to a subset of channels and a monthly-subscription tier ("RFV Royal") for paying customers with higher-quality streaming audio and access to a greater number of channels. By March 2003, Radio Free Virgin servers accommodated 2.8 million unique listeners per month, and Virgin was offering Virgin Digital, an integrated digital download and subscription service in direct competition with iTunes, Napster and Rhapsody. Radio Free Virgin (RFV) was also available at the time via the Philips Streamium device, delivering its channels in MP3Pro.
As of February 2007, the service ceased to operate. It ended with the following cryptic message posted to its homepage:
"Letter from the road - January 3rd, 2007: Dear loyal listener... This marks the 44th blog posting from my trip. It's been particularly cold on this leg of the journey. I guess that's what you get for hitchhiking Alaska this time of year. I've been calling the office for almost two weeks straight, but no answer. I'm starting to think that they sent me up here on a ruse of some sort. Like maybe if they got me out of the office on my first vacation in 7 years they'd have a chance to actually have some fun or something. I'm pretty damn sure I heard Antoinette say something about St. John, but that could have just been the voices again. I spent about fifteen miles with a group of hippies last night. I mean real hippies, not the kind that don't shower and eat raw corn all the time. Bona-fide hippies. Got tuned-out and turned-on in '68 or something and never looked back. Main girl is called Ragina (nasty) and thinks it's 1970. Literally. Keeps bitching about Nixon and how the commies are going to save the world. Still, I may be in love with her. I have my reasons. My thoughts run to the office again - seventy-one of us, all crammed in that little space. Angeline (my driver) with that floppy mane of hair and sarcastic attitude that can only come from an upbringing in rural Wales, Sophie (chef) and her Sonoma foie-gras compote topped Roti A La Broche and constant humming out-of-tune, Vanity (really her name, types up my dictation when Delissa is at lunch) plucking her brows. I miss them all. But I know that being on the ro |
https://en.wikipedia.org/wiki/Public%20Radio%20International | Public Radio International (PRI) was an American public radio organization. Headquartered in Minneapolis, Minnesota, PRI provided programming to over 850 public radio stations in the United States.
PRI was one of the main providers of programming for public radio stations in the US, alongside National Public Radio, American Public Media and the Public Radio Exchange. PRI merged with the Public Radio Exchange in 2018.
Background
In the United States, PRI distributed well-known programming to public radio stations. Among its programs were the global news program The World, which PRI co-produced with WGBH Boston. Programs on PRI—sometimes mis-attributed to National Public Radio—were produced by a variety of organizations, including PRI in the United States and other countries. PRI, along with NPR and American Public Media, was one of the largest program producers and distributors of public radio programming in the United States. PRI offered over 280 hours of programming each week to stations and listeners. Public Radio International said its mission was to "serve audiences as a distinctive content source for information, insights and cultural experiences essential to living in our diverse, interconnected world."
Approximately 850 radio station affiliates and other audio venues broadcast, streamed and downloaded PRI programs. According to the 2017 Nielsen Audio ratings, 8.1 million people listened to PRI programming each week.
PRI's programs won awards for quality and innovation, including the DuPont-Columbia Award, Scripps Howard Award for Excellence in Electronic Media/Radio, George Foster Peabody Award, Golden Reel Award and Gabriel Award.
PRI programming received funding from station fees, corporate underwriting, and individual and corporate grants. Less than 2% of the overall operating budget came from United States government agencies.
History
PRI was founded in 1983 as American Public Radio as an alternative to NPR for public radio program distribution. Five stations established American Public Radio as a syndicate: the Minnesota Public Radio network, KQED/San Francisco, WNYC/New York City, WGUC/Cincinnati, and KUSC in Los Angeles. The corporation changed its name to Public Radio International in 1994 to reflect its growing interest and involvement in international audio publishing, as typified by its many collaborations with the BBC.
In the mid-1990s, PRI began to expand its reach by producing programming in addition to distributing programming. This evolution in the company began with PRI's The World, originally a co-production among PRI, the BBC World Service, and WGBH.
In 2004, Minnesota Public Radio left PRI and began distributing its own shows (including A Prairie Home Companion and Marketplace and excluding Classical 24) through its newly created arm, American Public Media. In 2012, PRI was acquired by the WGBH Educational Foundation.
Public Radio International and Public Radio Exchange merged in 2018. Both networks maintain |
https://en.wikipedia.org/wiki/Visual%20IRC | Visual IRC (ViRC) is an open-source Internet Relay Chat client for the Windows operating system. Unlike many other IRC clients, nearly all of the functionality in ViRC is driven by the included IRC script, with the result that the program's behavior can be extended or changed without altering the source code.
History
Visual IRC (16-bit) – Released in 1995 for Windows 3.x, written by MeGALiTH. This program had many built-in features, but it was also scriptable with VPL (ViRC Programming Language), the predecessor to ViRCScript and Versus.
Visual IRC '96 (and later Visual IRC '97, Visual IRC '98) – Released in 1996, written by MeGALiTH. This was the first 32-bit version of ViRC, written for Windows 9x/NT. Many of the features that were built into 16-bit ViRC were handled by the default script in ViRC '96. ViRC '98 contained some code contributed by Tara McGrew AKA "Mr2001", particularly enhancements to the ViRCScript engine. The scripting language was incompatible with the earlier version. In later versions, voice chat and video conferencing features were added.
Development of the second incarnation slowed and by 2000 Visual IRC appeared to be dead. The original author MeGALiTH (Adrian Cable) passed the source code to a user, Mr2001 (Tara McGrew), who had previously contributed some code, and who had secretly been developing a clone called Bisual IRC (BIRC). Rather than restarting development of the ViRC '98 code base, he merged some of ViRC '98's features into BIRC and released it as Visual IRC 2.
Visual IRC 2 – First released by Mr2001, coincidentally in 2001, this version's Versus scripting language is based on ViRCScript, but internally it has been almost totally rewritten. In fact, ViRC 2 only shares a few hundred lines of code with ViRC '98. The voice and video conferencing features were removed in this version because the libraries used to implement them were no longer supported.
Much of the source code to BIRC, ViRC 2, and the related utilities has been released under the GPL through the project's web site and SourceForge.
Versus
Versus is a scripting language originally developed for the IRC client Bisual IRC, and currently used with Visual IRC. It is similar in many ways to the scripting languages used by ircII and mIRC, as well as Tcl and C.
The name "Versus" was chosen because it could be shortened to "VS", which was a common abbreviation for ViRCScript, the language used by Visual IRC '96 through '98. Versus remained mostly backward compatible with ViRCScript, so existing documentation and commentary that mentioned "VS" remained mostly accurate when applied to Versus. The name also alluded to BIRC's origins as a replacement for ViRC.
Object Versus, or OVS, refers to the object-oriented features of Versus. Scripts can define classes and work with objects and methods instead of textual data and aliases; however, in practice, OVS is mostly used to manipulate the VCL objects that make up ViRC's interface.
Script storage
Scripts ar |
https://en.wikipedia.org/wiki/List%20of%20chordate%20orders | This article lists the classes and orders of the phylum Chordata.
Subphylum Cephalochordata
Class Leptocardii: Lancelets
Order Amphioxiformes
Family Pikaiidae †
Genus Pikaia †
Olfactores (unranked)
Subphylum Tunicata
Class Ascidiacea: Ascideans and sessile tunicates
Order Enterogona
Order Pleurogona
Order Aspiraculata
Class Thaliacea: Pelagic tunicates
Order Doliolida
Order Pyrosomida
Order Salpida: salps
Class Appendicularia: Solitary, free-swimming tunicates
Order Copelata
Subphylum Vertebrata
Infraphylum Cyclostomata, Superclass Agnatha: Paraphyletic jawless vertebrates
Class Myxini: Hagfish
Order Myxiniformes
Family Myxinidae
Class Hyperoartia: Lampreys and their extinct kin
Order Petromyzontiformes
Infraphylum Gnathostomata: Jawed vertebrates
Class Placodermi †
Order Acanthothoraci
Order Arthrodira
Order Antiarchi
Order Brindabellaspida
Order Petalichthyida
Order Phyllolepida
Order Ptyctodontida
Order Rhenanida
Order Pseudopetalichthyida (The placement of this order is debated.)
Order Stensioellida (The placement of this monotypic order is debated.)
Class Chondrichthyes: Cartilaginous fish
Subclass Elasmobranchii
Superorder Batoidea
Order Rajiformes: rays and skates
Order Rhinopristiformes: sawfishes
Order Torpediniformes: electric rays
Order Myliobatiformes: (sting)rays
Superorder Selachimorpha (sharks)
Order Heterodontiformes: bullhead sharks
Order Orectolobiformes: carpet sharks
Order Carcharhiniformes: ground sharks
Order Lamniformes: mackerel sharks
Order Hexanchiformes: frilled and cow sharks
Order Squaliformes: dogfish sharks
Order Squatiniformes: angel sharks
Order Pristiophoriformes: saw sharks
Subclass Holocephali
Order Chimaeriformes: chimaeras
Class Acanthodii †
Order Climatiiformes
Order Ischnacanthiformes
Order Acanthodiformes
Superclass Osteichthyes: Bony fish
Class Actinopterygii: Ray-finned fish
Order Asarotiformes †
Order Discordichthyiformes †
Order Paphosisciformes †
Order Scanilepiformes †
Order Cheirolepidiformes †
Order Paramblypteriformes †
Order Rhadinichthyiformes †
Order Palaeonisciformes †
Order Tarrasiiformes †
Order Pachycormiformes †
Order Ptycholepiformes †
Order Redfieldiiformes †
Order Haplolepidiformes †
Order Aeduelliformes †
Order Platysomiformes †
Order Dorypteriformes †
Order Eurynotiformes †
Subclass Cladistii
Order Polypteriformes
Subclass Chondrostei
Order Acipenseriformes: sturgeons and paddlefishes
Subclass Neopterygii
Infraclass Holostei
Order Lepisosteiformes, the gars
Order Amiiformes, the bowfins
Infraclass Teleostei
Superorder Osteoglossomorpha
Order Osteoglossiformes, the bony-tongued fishes
Order Hiodontiformes, including the mooneye and goldeye
Order Lycopteriformes
Order Ichthyodectiformes †
Superorder Elopomorpha
Order Elopiformes, including the ladyfishes and tarpon
Order Albuliformes, the bonefishes
Order Notacanthiformes, including the halosaurs and |
https://en.wikipedia.org/wiki/Citrix%20Systems | Citrix Systems, Inc. is an American multinational cloud computing and virtualization technology company that provides server, application and desktop virtualization, networking, software as a service (SaaS), and cloud computing technologies. Citrix products were claimed to be in use by over 400,000 clients worldwide, including 99% of the Fortune 100, and 98% of the Fortune 500.
The company was founded in Richardson, Texas, in 1989 by Ed Iacobucci, who served as chairman until his departure in 2000. It began by developing remote access products for Microsoft operating systems, licensing source code from Microsoft, and has been in partnership with the company throughout its history. By the 1990s, Citrix became an industry leader in thin client technology, enabling purpose-built devices to access remote servers and resources. The company launched its first initial public offering in 1995 and, with few competitors, experienced significant revenue increases between 1995 and 1999.
Citrix acquired Sequoia Software Corp. in 2001 and ExpertCity, a provider of remote desktop products, in 2003. This was followed by more than a dozen other acquisitions from 2005 to 2012, which allowed Citrix to expand into server and desktop virtualization, cloud computing, infrastructure as a service, and software as a service offerings. In 2014, Citrix acquired Framehawk and used its technology to improve the delivery of virtual desktops and applications over wireless networks. In 2016, as part of a US$1.8 billion product deal with LogMeIn, Citrix spun off the GoTo product line into a new business entity named GetGo. In 2017, Citrix completed the merger of GetGo with LogMeIn's products.
Citrix had its corporate headquarters in Fort Lauderdale, Florida, with subsidiary operations in California and Massachusetts, and additional development centers in Canada, Denmark, Germany, India, and the United Kingdom. In 2021, Citrix generated $3.2 billion in revenue and had 9,700 employees.
Following the completion of the acquisition by Vista Equity Partners and Evergreen Coast Capital Corp on September 30, 2022, Citrix merged with TIBCO Software under the newly formed Cloud Software Group. Citrix also spun off the rebranded Citrix ADC into a standalone entity, NetScaler, under the same parent.
History
Early history
Citrix was founded in Richardson, Texas, in 1989 by former IBM developer Ed Iacobucci with $3 million in funding. Following its initial setup and development, Iacobucci moved the company to his former home in Coral Springs, Florida. The company's first employees were five other engineers from IBM that Iacobucci convinced to join his team. Iacobucci served as chairman of the company, and Roger Roberts became the CEO of Citrix in 1990. Citrix was originally named Citrus but changed its name after an existing company claimed trademark rights. The Citrix name is a portmanteau of Citrus and UNIX.
The company's first product was Citrix Multiuser, an extension of OS/2 |
https://en.wikipedia.org/wiki/Network%20Computer | The Network Computer (or NC) was a diskless desktop computer device made by Oracle Corporation from about 1996 to 2000. The devices were designed and manufactured by an alliance, which included Sun Microsystems (acquired by Oracle in 2010), IBM, and others. The devices were designed with minimum specifications, based on the Network Computer Reference Profile. The brand was also employed as a marketing term to try to popularize this design of computer within enterprise and among consumers.
The NC brand was mainly intended to inspire a range of desktop computers from various suppliers that, by virtue of their diskless design and use of inexpensive components and software, were cheaper and easier to manage than standard fat client desktops. However, due to the commoditization of standard desktop components, and due to the increasing availability and popularity of various software options for using full desktops as diskless nodes, thin clients, and hybrid clients, the Network Computer brand never achieved the popularity hoped for by Oracle and was eventually mothballed.
The term "network computer" is now used for any diskless desktop computer or a thin client.
History
The failure of the NC to make an impact on the scale predicted by Larry Ellison may have been caused by a number of factors. Firstly, prices of PCs quickly fell below $1,000, making it very hard for the NC to compete on price. Secondly, the software available for NCs was neither mature nor open.
Thirdly, the idea could simply have been ahead of its time, as at the NC's launch in 1996, the typical home Internet connection was only a 28.8 kbit/s modem dialup. This was simply insufficient for the delivery of executable content. The World Wide Web itself was not considered mainstream until its breakout year, 1998. Prior to this, very few Internet service providers advertised in mainstream press (at least outside of the US), and knowledge of the Internet was limited. This could have held back uptake of what would be seen as a very niche device with no (then) obvious appeal.
NCs ended up being used as the very 'dumb terminals' they were intended to replace, as the proprietary backend infrastructure was not readily available. 1990s-era NCs are often network-booted into a minimal Unix with X, to serve as X terminals. While NC purists may consider this a suboptimal use of NC hardware, the NCs work well as terminals and are considerably cheaper than purpose-built terminal hardware.
NC standards and drafts
Reference Profile
The initial Network Computing standard, the Network Computer Reference Profile (NCRef), required that all 'NC' appliances support HTML, Java, HTTP, JPEG, and other key standards.
Other standards
Because many NCs did not use Intel CPUs or Microsoft software, Microsoft and Intel developed a competing standard called NetPC. Other alternatives to the NCRef were WeBRef (Motorola and HDS Network Systems) and Odin (National Semiconductor). The HDS @workStation was stated to ship by the end o |
https://en.wikipedia.org/wiki/Independent%20Computing%20Architecture | Independent Computing Architecture (ICA) is a proprietary protocol for an application server system, designed by Citrix Systems. The protocol lays down a specification for passing data between server and clients, but is not bound to any one platform. Citrix's ICA is an alternative to Microsoft's Remote Desktop Protocol (RDP).
Practical products conforming to ICA are Citrix's WinFrame, Citrix XenApp (formerly called MetaFrame/Presentation Server), and Citrix XenDesktop products. These permit ordinary Windows applications to be run on a suitable Windows server, and for any supported client to gain access to those applications. Besides Windows, ICA is also supported on a number of Unix server platforms and can be used to deliver access to applications running on these platforms. The client platforms need not run Windows; for example, there are clients for Mac, Unix, Linux, and various smartphones. ICA client software is also built into various thin client platforms.
ICA is broadly similar in purpose to window servers such as the X Window System. It also provides for the feedback of user input from the client to the server, and a variety of means for the server to send graphical output, as well as other media such as audio, from the running application to the client.
The key challenges in such an architecture are network latency and performance—a graphically intensive application (as most are when presented using a GUI) being served over a slow or bandwidth-restricted network connection requires considerable compression and optimization to render the application usable by the client. The client machine may be a different platform, and may not have the same GUI routines available locally—in this case the server may need to send the actual bitmap data over the connection. Depending on the client's capabilities, servers may also off-load part of the graphical processing to the client, e.g. to render multimedia content. ICA runs natively over TCP port 1494 or may be encapsulated in Common Gateway Protocol (CGP) on TCP port 2598. ICA supports the concept of channels at the session layer to encapsulate rich media redirection or USB extension within ICA.
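ICA's wire format is proprietary, but the session-layer "virtual channel" idea can be sketched generically. In the sketch below, the frame layout (a one-byte channel id plus a two-byte length prefix) is invented purely for illustration and does not reflect the real ICA protocol; only the two port numbers come from the text above.

```python
import struct

# Port numbers stated in the text; frame format below is illustrative only.
ICA_PORT = 1494   # native ICA
CGP_PORT = 2598   # ICA encapsulated in Common Gateway Protocol

def pack_frame(channel_id: int, payload: bytes) -> bytes:
    """Prefix a payload with a 1-byte channel id and a 2-byte big-endian length."""
    return struct.pack(">BH", channel_id, len(payload)) + payload

def demux(stream: bytes) -> dict[int, list[bytes]]:
    """Split a concatenated byte stream back into per-channel payload lists."""
    channels: dict[int, list[bytes]] = {}
    i = 0
    while i < len(stream):
        cid, length = struct.unpack_from(">BH", stream, i)
        i += 3
        channels.setdefault(cid, []).append(stream[i:i + length])
        i += length
    return channels

if __name__ == "__main__":
    wire = pack_frame(1, b"keystrokes") + pack_frame(4, b"audio") + pack_frame(1, b"more")
    print(demux(wire))  # {1: [b'keystrokes', b'more'], 4: [b'audio']}
```

The point of the exercise is that many logical streams (input, audio, USB redirection) can share one TCP connection, which is how ICA multiplexes rich media within a single session.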
Client software
Citrix ICA Client (DOS, OS/2)
Citrix Presentation Server Client (Mac, Java)
Citrix Receiver (Linux, Unix, Windows, Mac OS X, iOS, Android, Chrome)
Citrix XenApp/XenDesktop Plugin (Windows)
SAP
See also
Desktop virtualization
HP RGS
Remote Desktop Protocol
References
External links
A Slashdot discussion giving insights on how ICA works
A web page contains a description of the ICA file syntax.
Citrix Systems
Remote desktop
Remote desktop protocols |
https://en.wikipedia.org/wiki/UXF | In computing, UML eXchange Format (UXF) is an XML-based model interchange format for the Unified Modeling Language (UML), a standard software modeling language. UXF is a structured format described in 1998 and intended to encode, publish, access and exchange UML models.
More recent alternatives include XML Metadata Interchange and OMG's Diagram Definition standard.
Known uses
UMLet is an application that uses UXF as its native file format.
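The general shape of an XML-based UML interchange document can be illustrated with Python's standard ElementTree. The element and attribute names in the snippet below are invented for illustration and are not taken from the actual UXF schema; the point is only that a UML model serialized as XML can be consumed by any XML toolchain.

```python
import xml.etree.ElementTree as ET

# Illustrative XML fragment; tag/attribute names are invented, not real UXF.
DOC = """
<model name="Shop">
  <class name="Order">
    <attribute name="id" type="int"/>
    <operation name="total" returns="Money"/>
  </class>
  <class name="Customer">
    <attribute name="name" type="String"/>
  </class>
</model>
"""

def list_classes(xml_text: str) -> dict[str, list[str]]:
    """Map each UML class name to the names of its attributes."""
    root = ET.fromstring(xml_text)
    return {
        cls.get("name"): [a.get("name") for a in cls.findall("attribute")]
        for cls in root.findall("class")
    }

if __name__ == "__main__":
    print(list_classes(DOC))  # {'Order': ['id'], 'Customer': ['name']}
```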
References
Unified Modeling Language |
https://en.wikipedia.org/wiki/Glitch | A glitch is a short-lived fault in a system, such as a transient fault that corrects itself, making it difficult to troubleshoot. The term is particularly common in the computing and electronics industries, in circuit bending, as well as among players of video games. More generally, all types of systems including human organizations and nature experience glitches.
A glitch, which is slight and often temporary, differs from a more serious bug, which is a genuine functionality-breaking problem. Alex Pieschel, writing for Arcade Review, said: "'bug' is often cast as the weightier and more blameworthy pejorative, while 'glitch' suggests something more mysterious and unknowable inflicted by surprise inputs or stuff outside the realm of code." The word itself is sometimes humorously described as being short for "gremlins lurking in the computer hardware."
Etymology
Some reference books, including Random House's American Slang, claim that the term comes from the German glitschen ("to slip") and the Yiddish glitsh ("slippery place"). Either way, it is a relatively new term. It was first widely defined for the American people by Bennett Cerf on the June 20, 1965, episode of What's My Line as "a kink ... when anything goes wrong down there [Cape Kennedy], they say there's been a slight glitch." The astronaut John Glenn explained the term in his section of the book Into Orbit, writing that
Another term we adopted to describe some of our problems was "glitch." Literally, a glitch is a spike or change in voltage in an electrical circuit which takes place when the circuit suddenly has a new load put on it. You have probably noticed a dimming of lights in your home when you turn a switch or start the dryer or the television set. Normally, these changes in voltage are protected by fuses. A glitch, however, is such a minute change in voltage that no fuse could protect against it.
John Daly further defined the word on the July 4, 1965, episode of What's My Line, saying it was a term used by the Air Force at Cape Kennedy in the process of launching rockets: "it means something's gone wrong and you can't figure out what it is so you call it a 'glitch'." Later, on July 23, 1965, Time magazine felt it necessary to define the term in an article: "Glitches—a spaceman's word for irritating disturbances." In relation to the reference by Time, the term is believed to have entered common usage during the American Space Race of the 1950s, where it was used to describe minor faults in rocket hardware that were difficult to pinpoint.
According to a Wall Street Journal article by Ben Zimmer, Yale law librarian Fred Shapiro found the earliest known use of the word: May 19, 1940, when the novelist Katharine Brush wrote about glitch in her column "Out of My Mind" (syndicated in The Washington Post, The Boston Globe, and other papers). Brush corroborated Tony Randall's radio recollection: When the radio talkers make a little mistake in diction they call it a "fluff," and when t
https://en.wikipedia.org/wiki/Parsing | Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar. The term parsing comes from Latin pars (orationis), meaning part (of speech).
The term has slightly different meanings in different branches of linguistics and computer science. Traditional sentence parsing is often performed as a method of understanding the exact meaning of a sentence or word, sometimes with the aid of devices such as sentence diagrams. It usually emphasizes the importance of grammatical divisions such as subject and predicate.
Within computational linguistics the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a parse tree showing their syntactic relation to each other, which may also contain semantic information. Some parsing algorithms may generate a parse forest or list of parse trees for a syntactically ambiguous input.
The term is also used in psycholinguistics when describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc." This term is especially common when discussing which linguistic cues help speakers interpret garden-path sentences.
Within computer science, the term is used in the analysis of computer languages, referring to the syntactic analysis of the input code into its component parts in order to facilitate the writing of compilers and interpreters. The term may also be used to describe a split or separation.
Human languages
Traditional methods
The traditional grammatical exercise of parsing, sometimes known as clause analysis, involves breaking down a text into its component parts of speech with an explanation of the form, function, and syntactic relationship of each part. This is determined in large part from study of the language's conjugations and declensions, which can be quite intricate for heavily inflected languages. To parse a phrase such as 'man bites dog' involves noting that the singular noun 'man' is the subject of the sentence, the verb 'bites' is the third person singular of the present tense of the verb 'to bite', and the singular noun 'dog' is the object of the sentence. Techniques such as sentence diagrams are sometimes used to indicate relation between elements in the sentence.
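The 'man bites dog' analysis above can be mechanised in a few lines: a toy lexicon assigns each word a part of speech, and a single clause pattern identifies subject, verb, and object. This is purely illustrative; real parsers must handle ambiguity, inflection, and far richer grammars.

```python
# Toy clause analysis: tag words with a tiny lexicon, then match the
# simplest English clause pattern (noun verb noun -> subject/verb/object).
LEXICON = {"man": "noun", "bites": "verb", "dog": "noun"}

def parse_clause(sentence: str) -> dict[str, str]:
    words = sentence.split()
    tags = [LEXICON[w] for w in words]
    if tags == ["noun", "verb", "noun"]:
        return {"subject": words[0], "verb": words[1], "object": words[2]}
    raise ValueError("clause pattern not recognized")

if __name__ == "__main__":
    print(parse_clause("man bites dog"))
    # {'subject': 'man', 'verb': 'bites', 'object': 'dog'}
```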
Parsing was formerly central to the teaching of grammar throughout the English-speaking world, and widely regarded as basic to the use and understanding of written language. However, the general teaching of such techniques is no longer current.
Computational methods
In some machine translation and natural language processing systems, written texts in human languages are parsed by computer programs. Human sentences are not easil |
https://en.wikipedia.org/wiki/VT220 | The VT220 is a computer terminal introduced by Digital Equipment Corporation (DEC) in November 1983. The VT240 added monochrome ReGIS vector graphics support to the base model, while the VT241 did the same in color. The 200 series replaced the successful VT100 series, providing more functionality in a much smaller unit with a much smaller and lighter keyboard. Like the VT100, the VT200 series implemented a large subset of ANSI X3.64. Among its major upgrades was a number of international character sets, as well as the ability to define new character sets.
The VT200 series was extremely successful in the market. Released at $1,295, but later priced at $795, the VT220 offered features, packaging and price that no other serial terminal could compete with at the time. In 1986, DEC shipped 165,000 units, giving them a 42% market share, double that of the closest competitor, Wyse. Competitors adapted by introducing similar models at lower prices, leading DEC to do the same by releasing the less-expensive $545 VT300 series in 1987. By that time, DEC had shipped over one million VT220s.
Hardware
The VT220 improved on the earlier VT100 series of terminals with a redesigned keyboard, much smaller physical packaging, and a much faster microprocessor. The VT220 was available with CRTs that used white, green, or amber phosphors.
The VT100s, like the VT50s before them, had been packaged in relatively large cases that provided room for expansion systems. The VT200s abandoned this concept, and wrapped the much smaller 1980s-era electronics tightly around the CRT. The result was a truncated pyramidal case with the apex at the back, only slightly larger than the CRT. This made it much easier to fit the terminal on a desk. An adjustable stand allowed the angle of the CRT to be adjusted up and down. Because it was lower than head height, the result was an especially ergonomic terminal.
The LK201 keyboard supplied with the VT220 was one of the first full-length, low-profile keyboards available; it was developed at DEC's Roxbury, Massachusetts facility. It was much smaller and lighter than the VT100's version, and connected to the terminal using a lighter and more flexible coiled cable and a telephone jack connector.
The VT200s were the last DEC terminals to provide a 20 mA current loop serial interface, an older standard originally developed for the telegraph system that became popular on computers due to the early use of Teletype Model 33s as ad hoc terminals. A standard 25-pin D-connector was also provided for RS-232. Only one of the two ports could be in use at a given time. Later DEC terminals would replace both of these with DEC's proprietary Modified Modular Jack (MMJ) connectors.
Software
The VT220 was designed to be compatible with the VT100, but added features to make it more suitable for an international market. This was accomplished by including a number of different character sets that could be selected among using a series of ANSI commands.
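Character sets were selected with "Select Character Set" (SCS) escape sequences of the form ESC ( final, which designate a set into the G0 slot. The sketch below uses two widely documented finals (B for US ASCII, 0 for DEC Special Graphics); other finals exist, and the VT220 programmer reference should be consulted before relying on them.

```python
# Build VT100/VT220-style SCS escape sequences for designating G0.
ESC = "\x1b"

CHARSETS = {
    "us_ascii": "B",               # USASCII
    "dec_special_graphics": "0",   # line-drawing glyphs
}

def select_g0(name: str) -> str:
    """Return the escape sequence that designates the named set into G0."""
    return f"{ESC}({CHARSETS[name]}"

if __name__ == "__main__":
    # Switch to DEC Special Graphics, emit some line-drawing cells, switch back.
    seq = select_g0("dec_special_graphics") + "lqqk" + select_g0("us_ascii")
    print(repr(seq))
```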
Glyphs w |
https://en.wikipedia.org/wiki/NCR%20CRAM | CRAM, or Card Random-Access Memory, model 353-1, was a data storage device invented by NCR, which first appeared on their model NCR-315 mainframe computer in 1962. It was also available for NCR's third generation NCR Century series as the NCR/653-100.
A CRAM cartridge contained 256 cards, each 3 by 14 inches, with a PET film magnetic recording surface; each deck of cards could hold up to 5.5 MB of alphanumeric characters. The cards were suspended from eight d-section rods, which were selectively rotated to release a specific card, each card having a unique pattern of notches at one end. The selected card was dropped and wrapped around a rotating drum to be read or written.
Later versions of the CRAM, the 353-2 and 353-3, used decks of 512 cards, thus doubling the storage capacity of each unit.
Each card contained seven tracks of 1550 slabs (12 bits each). Normally a track was initialized with a four-slab header containing the cartridge number (two slabs), the card number, and the track number.
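Taking these figures at face value, the quoted 5.5 MB per cartridge checks out, assuming (as on the NCR 315) that a 12-bit slab holds two 6-bit alphanumeric characters and ignoring the four-slab track headers:

```python
# Sanity-check of the quoted cartridge capacity. Assumes two 6-bit
# characters per 12-bit slab and counts all slabs, including headers.
cards_per_cartridge = 256
tracks_per_card = 7
slabs_per_track = 1550
chars_per_slab = 2  # 12 bits / 6 bits per character

chars = cards_per_cartridge * tracks_per_card * slabs_per_track * chars_per_slab
print(chars)                   # 5555200
print(round(chars / 1e6, 2))   # 5.56, i.e. the "5.5 MB" quoted above
```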
Cards were dropped by changing the card rods to a binary configuration and releasing the two outside "release" rods. Air was blown over the top of the cards to keep them separated and to increase the dropping speed. Once on the rotating "drum", a series of positive and negative air pressure chambers pulled the card across a magnetic read-write head. After one or more passes over the head, where data was written to or read from the card, a release gate allowed the card to be "thrown" along a raceway over the card deck and onto a "loader" mechanism. The loader used a group of electro-magnetic solenoids to slam the card back onto the control rods. The unit was a monster, with two large electric motors that drove four large vacuum blowers. It was possible to have up to five cards in motion at any point in time: one dropping, one on the drum, two in the return transport, and one being loaded back onto the deck.
If the card didn't succeed in dropping there was a "magic wand" similar to a pencil available to solve the problem.
One feature of this device was the potential for a "double drop", where two cards would drop at once, due to a break in a notch on one card or, more commonly, a card held by one rod being dislodged by the adjacent card dropping (usually cards 000, the deck directory card, and 001), which made it necessary to recreate the directory. This would result in a high-pitched noise with which operators were very familiar, audible even outside the computer room, and in damage to the cards. These were called "screamers", while the opposite problem, cards that wouldn't drop, were called "hangers".
Another interesting feature was that, should an operator accidentally drop all the cards from a cartridge, they could be replaced without worrying about order. The order of the cards was not important because of the notch encoding system.
The first CRAM units were deployed on NCR's 315 system. A s |
https://en.wikipedia.org/wiki/New%20Zealand%20state%20highway%20network | The New Zealand state highway network is the major national highway network in New Zealand. Nearly 100 roads in the North and South Islands are state highways. All state highways are administered by the NZ Transport Agency.
The highways were originally designated using a two-tier system, national (SH 1 to 8) and provincial, with national highways having a higher standard and funding priorities. Now all are state highways, and the network consists of SH 1 running the length of both islands, SH 2 to 5 and 10 to 59 in the North Island, and SH 6 to 8 and 60 to 99 in the South Island, numbered approximately north to south. State highways are marked by red shield-shaped signs with white numbering (shields for the former provincial highways were blue). Road maps usually number state highways in this fashion.
Of the total state highway network, New Zealand currently has of motorways and expressways with grade-separated access and they carry ten percent of all New Zealand traffic. The majority of the state highway network is made up of single-carriageway roads with one lane each way and at-grade access.
History
In the early days all roads were managed by local road boards. Initially they were set up by the Provinces. For example, Auckland Province passed a Highways Act in 1862 allowing their Superintendent to define given areas of settlement as Highways Districts, each with a board of trustees elected by the landowners. Land within the boundaries of highway districts became subject to a rate of not more than 1/- an acre, or of 3d in the £ of its estimated sale value and that was to be equalled by a grant from the Province. By 1913 the government was collecting £21,000 in duty on cars, but spending £40,000 on roads.
The idea of a national network of highways did not emerge until the early twentieth century, when a series of pieces of legislation was passed to allow for the designation of main highways (starting with the Main Highways Act 1922, followed by gazetting of roads) and state highways (in 1936). This saw the National Roads Board, an arm of the Ministry of Works, responsible for the state highway network.
From 1989 to 2008, state highways were the responsibility of Transit New Zealand, a Crown entity. In 1996 the funding of the network was removed from the operational functions with the creation of Transfund New Zealand, which then merged with the Land Transport Safety Authority to create Land Transport New Zealand. That was done to ensure that funding of state highways was considered on a similar basis to funding for local roads and regional council subsidised public transport. In August 2008, Transit and Land Transport NZ merged to become the NZ Transport Agency.
Every five years the NZ Transport Agency embarks on a state highway review to consider whether the existing network should be expanded or reduced, according to traffic flows and changes in industry, tourism and development.
From 2009 many new road schemes were classed as Roads |
https://en.wikipedia.org/wiki/TOPS | Total Operations Processing System (TOPS) is a computer system for managing railway locomotives and rolling stock, known for many years of use in the United Kingdom.
TOPS was originally developed between the Southern Pacific Railroad (SP), Stanford University and IBM as a replacement for paper-based systems for managing rail logistics. A jointly-owned consultancy company, TOPS On-Line Inc., was established in 1960 with the goal of implementing TOPS, as well as selling it to third parties. Development was protracted, requiring around 660 man-years of effort to produce a releasable build. During mid-1968, the first phase of the system was introduced on the SP, and quickly proved its advantages over the traditional methods practiced prior to its availability.
In addition to SP, TOPS was widely adopted throughout North America and beyond. While it was at one point in widespread use across many United States railroads, the system has been perhaps most prominently used in the United Kingdom. During 1971, the country's nationalised rail operator, British Rail (BR), opted to procure and integrate TOPS into its operations. Acquiring an existing system rather than developing an indigenous programme was reasoned to be both cheaper and quicker to implement; it was noted, however, that TOPS was not capable of performing all desired functions. Since its implementation during the mid-1970s, both BR and its successors have continued to operate the system. SP itself went on to develop a newer system, the Terminal Information Processing System (TIPS), which replaced TOPS entirely in 1980.
Early development
During the 1950s and 1960s, it was increasingly recognised that the adoption of computer-based management systems could provide substantial benefits in many operations, particularly those involving logistics. Consequently, by the 1960s, railways in several countries, including Japan, Canada, and the United States, had begun to develop and introduce such systems. Amongst the early adopters was the Southern Pacific Railroad (SP).
During the late 1950s, SP entered into discussions with the American technology company IBM about implementing its technology for rail management purposes. IBM repurposed much of their work on the US Air Force's SAGE project, designed to direct interceptor aircraft against approaching Soviet nuclear bombers, to instead serve the needs of the Southern Pacific. The project gained the name Total Operations Processing System, or TOPS, and its development was handled by a specially established consultancy company, TOPS On-Line Inc., which was 80 percent owned by SP with the remaining stake held by IBM.
TOPS was to take all the paperwork associated with a locomotive or rolling stock - its maintenance history, its allocation to division and depot and duty, its status, its location, and much more - and keep it in computer form, constantly updated by terminals at every maintena |
https://en.wikipedia.org/wiki/BBN | BBN might refer to:
Bayesian belief network, a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph
Bible Broadcasting Network, a global Christian radio network headquartered in Charlotte, North Carolina
Big Bang nucleosynthesis
Big Blue Nation, the fan base of the University of Kentucky athletics programs
Big Brother Naija, a Nigerian reality show
Brevard Business News
Buckingham Browne & Nichols School (BB&N), a private school in Cambridge, Massachusetts
9-Borabicyclo(3.3.1)nonane (9-BBN), a reagent used in organic chemistry
The 3-letter code for Blackburn railway station in the UK
Bengbu South railway station, China Railway pinyin code BBN
Raytheon BBN Technologies, formerly Bolt, Beranek and Newman, a technology company in Cambridge, Massachusetts
BBN Music, an American music corporation
Beyond National Jurisdiction (BBN), United Nations Convention on the Law of the Sea |
https://en.wikipedia.org/wiki/William%20Grey%20Walter | William Grey Walter (February 19, 1910 – May 6, 1977) was an American-born British neurophysiologist, cybernetician and roboticist.
Early life and education
Walter was born in Kansas City, Missouri, United States, on 19 February 1910, the only child of Minerva Lucrezia (Margaret) Hardy (1879–1953), an American journalist, and Karl Wilhelm Walter (1880–1965), a British journalist who was working on the Kansas City Star at the time. His parents had met and married in Italy, and during the First World War the family moved to Britain. Walter's ancestry was German/British on his father's side, and American/British on his mother's side. He was brought to England in 1915 and educated at Westminster School, where he developed an interest in classics and science, and entered King's College, Cambridge, in 1928. He achieved a third class in part one (1930) and a first class in physiology in part two of the natural sciences tripos (1931).
He failed to obtain a research fellowship in Cambridge and so turned to basic and applied neurophysiological research in hospitals in London from 1935 to 1939, and then at the Burden Neurological Institute in Bristol from 1939 to 1970. He also carried out research work in the United States, in the Soviet Union and in various other places in Europe. He married twice, had two sons from his first marriage, and one from the second. According to his eldest son, Nicolas Walter, "he was politically on the left, a communist fellow-traveller before the Second World War and an anarchist sympathiser after it." Throughout his life he was a pioneer in the field of cybernetics. In 1970, he suffered a brain injury in a motor scooter accident. He never fully recovered and died seven years later, on May 6, 1977.
Brain waves
As a young man, Walter was greatly influenced by the work of the Russian physiologist Ivan Pavlov. He visited the lab of Hans Berger, who invented the electroencephalograph, or EEG machine, for measuring electrical activity in the brain. Walter produced his own versions of Berger's machine with improved capabilities, which allowed it to detect a variety of brain wave types, ranging from the high-speed alpha waves to the slow delta waves observed during sleep.
In the 1930s, Walter made a number of discoveries using his EEG machines at the Burden Neurological Institute in Bristol. He was the first to determine by triangulation the surface location of the strongest alpha waves within the occipital lobe (alpha waves originate from the thalamus deep within the brain). Walter demonstrated the use of delta waves to locate brain tumours or lesions responsible for epilepsy. He developed the first brain topography machine based on EEG, using an array of spiral-scan CRTs connected to high-gain amplifiers.
During the Second World War, Walter worked on scanning radar technology and guided missiles, which may have influenced his subsequent alpha wave scanning hypothesis of brain activity.
In the 1960s, Walter also went on to discover |
https://en.wikipedia.org/wiki/Church%20of%20All%20Worlds | The Church of All Worlds (CAW) is an American Neopagan religious group whose stated mission is to evolve a network of information, mythology, and experience that provides a context and stimulus for reawakening Gaia and reuniting her children through tribal community dedicated to responsible stewardship and evolving consciousness. It is based in Cotati, California.
The key founder of CAW is Oberon Zell-Ravenheart, who serves the Church as "primate"; his wife, Morning Glory Zell-Ravenheart (d. 2014), was later designated high priestess. CAW was formed in 1962, evolving from a group of friends and lovers who were in part inspired by a fictional religion of the same name in the science fiction novel Stranger in a Strange Land (1961) by Robert A. Heinlein; the church's mythology includes science fiction to this day.
CAW's members, called Waterkin, espouse Paganism, but the Church is not a belief-based religion. Members experience Divinity and honor these experiences while also respecting the views of others. They recognize "Gaea," the Earth Mother Goddess and the Father God, as well as the realm of Faeries and the deities of many other pantheons. Many of their ritual celebrations are centered on the gods and goddesses of ancient Greece.
Formation
CAW began in 1961 with a group of high school friends. One of these was Richard Lance Christie from Tulsa, Oklahoma. Christie was fascinated by the "self-actualization" concepts of Abraham Maslow, a renowned American psychologist, and after meeting then-Timothy Zell at Westminster College in Fulton, Missouri, he began experiments in extrasensory perception. It was during this time that the group read Heinlein's science fiction novel, Stranger in a Strange Land (1961), which became the inspiration for CAW.
Heinlein's book, combined with Maslow's self-actualization concepts, led to the formation of a "waterbrotherhood" that Zell and Christie called Atl, the Aztec word for "water", and also meaning "home of our ancestors". Atl became dedicated to political and social change and the group grew to about 100 members.
Zell formed CAW from Atl, and filed for incorporation as a church in 1967. It was formally chartered on March 4, 1968, making it the third Pagan church to incorporate, after The Church of Aphrodite, which incorporated in New York in 1939, and the Goddess- and wilderness-based group Feraferia, Inc., which received its incorporation on August 1, 1967.
Early organization and beliefs
CAW modeled its organization after the group in Heinlein's novel, as a series of 9 nests in circles of advancement that were each named after a planet. The basic dogma of the CAW was that there was no dogma – the basic "belief" was a stated "lack of belief". Within their religion, the only sin was hypocrisy and the only crime in the eyes of the church was interfering with another person.
Evolution
Moving toward an emphasis on nature eventually led to a breaking of the relationship |
https://en.wikipedia.org/wiki/Doc%20%28computing%29 | .doc (an abbreviation of "document") is a filename extension used for word processing documents stored in Microsoft's proprietary Microsoft Word Binary File Format. Microsoft has used the extension since 1983.
Microsoft Word Binary File Format
Binary DOC files often contain more text formatting information (as well as scripts and undo information) than some other document file formats like Rich Text Format and Hypertext Markup Language, but are usually less widely compatible.
DOC files created by different versions of Microsoft Word differ. Versions before Word 97 ("8.0") used a different format from the OLE- and CFBF-based format of Microsoft Word 97–2003.
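Because Word 97–2003 DOC files are OLE/CFBF containers, they begin with the well-known 8-byte CFBF signature D0 CF 11 E0 A1 B1 1A E1. A minimal sketch, using only the standard library, that checks for it (note this only proves the file is some CFBF container — DOC, XLS, PPT and others share it — not specifically a Word document):

```python
# Check whether a file starts with the CFBF (OLE compound file) magic
# bytes shared by Word 97-2003 DOC containers.
CFBF_MAGIC = bytes.fromhex("D0CF11E0A1B11AE1")

def looks_like_cfbf(path: str) -> bool:
    """True if the file begins with the 8-byte CFBF signature."""
    with open(path, "rb") as f:
        return f.read(8) == CFBF_MAGIC
```

Identifying the pre-97 formats requires different heuristics, since they are not CFBF containers.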
In Microsoft Word 2007 and later, the binary file format was replaced as the default format by the Office Open XML format, though Microsoft Word can still produce DOC files.
Application support
The DOC format is native to Microsoft Word. Other word processors, such as OpenOffice.org Writer, IBM Lotus Symphony, Apple Pages and AbiWord, can also create and read DOC files, although with some limitations. Command line programs for Unix-like operating systems that can convert files from the DOC format to plain text or other standard formats include the wv library, which itself is used directly by AbiWord.
Specification
Because the DOC file format was a closed specification for many years, inconsistent handling of the format persists and may cause some loss of formatting information when handling the same file with multiple word processing programs. Some specifications for Microsoft Office 97 binary file formats were published in 1997 under a restrictive license, but these specifications were removed from online download in 1999. Specifications of later versions of Microsoft Office binary file formats were not publicly available. The DOC format specification was available from Microsoft on request since 2006 under restrictive RAND-Z terms until February 2008. Sun Microsystems and OpenOffice.org reverse engineered the file format. On February 15, 2008, Microsoft released a .DOC format specification under the Microsoft Open Specification Promise. However, this specification does not describe all of the features used by DOC format and reverse engineered work remains necessary. Since 2008 the specification has been updated several times; the latest change was made in May 2022.
The formats used in earlier, pre-97 ("1.0", 1989, through "7.0", 1995) versions of Word are less well known, but both OpenOffice.org and LibreOffice contain open-source code for reading them. The format is probably related to the "Stream" format found in similar Excel versions. Word 95 also seems to have an OLE-wrapped form.
Other file formats
Some historical documentations may use the DOC filename extension for plain-text files, indicating documentation for software or hardware. The DOC filename extension was also used during the 1980s by WordPerfect for its proprietary format.
DOC is sometimes used by users of Palm OS as |
https://en.wikipedia.org/wiki/Stovepipe%20system | In engineering and computing, "stovepipe system" is a pejorative term for a system that has the potential to share data or functionality with other systems but which does not do so. The term evokes the image of stovepipes rising above buildings, each functioning individually. A simple example of a stovepipe system is one that implements its own user IDs and passwords, instead of relying on a common user ID and password shared with other systems.
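The user ID/password example above can be sketched as two systems that each keep a private credential store instead of delegating to a shared authentication service (all names here are invented for illustration):

```python
# A toy "stovepipe" system: it manages its own users rather than
# sharing an account store with its neighbours.
class StovepipeSystem:
    def __init__(self) -> None:
        self.users: dict[str, str] = {}  # credential store, duplicated per system

    def register(self, user: str, password: str) -> None:
        self.users[user] = password

    def login(self, user: str, password: str) -> bool:
        return self.users.get(user) == password

billing = StovepipeSystem()
shipping = StovepipeSystem()
billing.register("alice", "s3cret")

print(billing.login("alice", "s3cret"))   # True
print(shipping.login("alice", "s3cret"))  # False: the account does not carry over
```

A non-stovepipe design would have both systems consult one shared authentication service instead of duplicating the store.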
A stovepipe system is generally considered an example of an anti-pattern, particularly in legacy systems, due to the lack of code reuse and the resulting software brittleness: potentially general functions end up exercised only on a limited range of inputs.
However, in certain cases stovepipe systems are considered appropriate, due to benefits from vertical integration and avoiding dependency hell. For example, the Microsoft Excel team has avoided dependencies and even maintained its own C compiler, which helped it to ship on time, have high-quality code, and generate small, cross-platform code.
See also
Not invented here
Reinventing the wheel
Stovepipe (organisation)
References
Anti-patterns
Software maintenance |
https://en.wikipedia.org/wiki/Hatcher | Hatcher is a surname. Notable people with the surname include:
Allen Hatcher (born 1944), U.S. mathematician
Anna Granville Hatcher (1905–1978), U.S. linguist
Edwin Starr (born Charles Edwin Hatcher, 1942–2003), U.S. soul singer
Chris Hatcher (disambiguation), several people
Claude A. Hatcher (1876–1933), U.S. pharmacist and soft drink developer (R.C. Cola)
Derian Hatcher (born 1972), U.S. hockey player
Gene Hatcher (born 1959), U.S. boxer
Harlan Hatcher (1898–1998), American academic who served as the eighth President of the University of Michigan from 1951 to 1967
Jade Hatcher (born 1990), Australian dancer
Jason Hatcher (born 1982), U.S football player
Jeffrey Hatcher, U.S. playwright
John Bell Hatcher (1861–1904), U.S. paleontologist, discoverer of Triceratops
Julian Hatcher (1888–1963), U.S. general, firearms expert and author
Kevin Hatcher (born 1966), U.S. hockey player
Layne Hatcher (born 1999), American football player
Leigh Hatcher (born 1955), Australian journalist and news presenter
Lillian Hatcher (1915–1998), African American riveter and union organizer
Mickey Hatcher (born 1955), American baseball player
Ragen Hatcher, American politician
Richard G. Hatcher (1933–2019), American politician and lawyer
Teri Hatcher (born 1964), American actress
Wiley Ward Hatcher (1828–???), American politician in Wisconsin
William S. Hatcher (1935–2005), mathematician and philosopher
See also
Places
Hatcher, Georgia, United States
Hatcher, Kentucky, United States
Hatchers, Virginia, United States
References |
https://en.wikipedia.org/wiki/Courage%20the%20Cowardly%20Dog | Courage the Cowardly Dog is an American animated comedy horror television series created by John R. Dilworth for Cartoon Network. It was produced by Dilworth's animation studio, Stretch Films. The titular character is a dog who lives with an elderly couple in a farmhouse in the middle of Nowhere, a fictional town in Kansas. In each episode, the trio is thrown into bizarre, frequently disturbing, and often paranormal or supernatural adventures. The series is known for its dark, surreal humor and atmosphere.
Dilworth pitched the series to Hanna-Barbera's animated shorts showcase What a Cartoon! and a pilot titled "The Chicken from Outer Space" aired on Cartoon Network on February 18, 1996. The segment was nominated for an Academy Award, but lost to the Wallace and Gromit short film A Close Shave. The short was greenlit to become a series, which premiered on November 12, 1999, and ended on November 22, 2002, with 4 seasons each consisting of 13 episodes. It was nominated for three Golden Reel Awards and won one Annie Award.
Premise
Courage the Cowardly Dog follows Courage (Marty Grabstein), a kind but easily frightened dog. He was abandoned as a puppy after his parents were sent into outer space by a crazed veterinarian. Soon after, he was found in an alleyway by Muriel Bagge (Thea White), a friendly and caring Scottish woman who took Courage in as her own and was inspired by the nature of their first meeting to give him his name. In the present, he lives in an isolated farmhouse with the now-elderly Muriel and her husband Eustace Bagge (Lionel Wilson in episodes 1–33, Arthur Anderson in episodes 34–52), a cranky and greedy man who is often jealous of Courage, refers to him as "stupid dog", and frequently uses an "Ooga Booga" mask to scare him out of his wits. The nearest town to the farmhouse is called Nowhere.
Courage and his owners frequently encounter monsters, aliens, zombies, and other paranormal or supernatural creatures that are attracted to Nowhere. The plot generally uses conventions common to horror films. Although most of the creatures the three face are hostile, some only appear that way; they may simply be distressed, angry, desperate, or depressed, and can turn out to be friendly.
The task of protecting Muriel and Eustace from such dangers falls on Courage, who endeavors to thwart or reconcile with the monster of the week and remedy or repair any damages done. Although Courage is occasionally aided with that task, the full extent of his efforts is usually performed unbeknownst to Muriel and Eustace. Ironically, given his name, Courage may be considered a genuine hero who often goes to great lengths to protect his owners, and a genuine coward who still expresses much of his distress with over-the-top, piercing shrieks.
Although episodic in nature, there are a handful of recurring characters in the show's cast, includ |
https://en.wikipedia.org/wiki/Cangjie%20input%20method | The Cangjie input method (Tsang-chieh input method, sometimes called Changjie, Cang Jie, Changjei or Chongkit) is a system for entering Chinese characters into a computer using a standard computer keyboard. In filenames and elsewhere, the name Cangjie is sometimes abbreviated as cj.
The input method was invented in 1976 by Chu Bong-Foo, and named after Cangjie (Tsang-chieh), the mythological inventor of the Chinese writing system, at the suggestion of Chiang Wei-kuo, the former Defense Minister of Taiwan. Chu Bong-Foo released the patent for Cangjie in 1982, as he thought that the method should belong to Chinese cultural heritage. Therefore, Cangjie has become open-source software and is on every computer system that supports traditional Chinese characters, and it has been extended so that Cangjie is compatible with the simplified Chinese character set.
Cangjie is the first Chinese input method to use the QWERTY keyboard. Chu saw that the QWERTY keyboard had become an international standard, and therefore believed that Chinese-language input had to be based on it. Other, earlier methods use large keyboards with 40 to 2400 keys, except the Four-Corner Method, which uses only number keys.
Unlike the Pinyin input method, Cangjie is based on the graphological aspect of the characters: each graphical unit, called a "radical" (not to be confused with Kangxi radicals), is represented by a basic character component, 24 in total, each mapped to a particular letter key on a standard QWERTY keyboard. An additional "difficult character" function is mapped to the X key. Keys are categorized into four groups, to facilitate learning and memorization. Assigning codes to Chinese characters is done by separating the constituent "radicals" of the characters.
Overview
Keys and "radicals"
The basic character components in Cangjie are called "radicals" or "letters". There are 24 radicals but 26 keys; the 24 radicals (the basic shapes) are associated with roughly 76 auxiliary shapes, which in many cases are either rotated or transposed versions of components of the basic shapes. For instance, the letter A can represent either itself, a slightly wider variant, or a 90° rotation of itself. (For a more complete account of the 76-odd transpositions and rotations than the ones listed below, see the Cangjie entry in Chinese Wikibooks.)
The 24 keys are placed in four groups:
Philosophical Group — corresponds to the letters 'A' to 'G' and represents the sun, the moon, and the five elements
Strokes Group — corresponds to the letters 'H' to 'N' and represents the brief and subtle strokes
Body-Related Group — corresponds to the letters 'O' to 'R' and represents various parts of the human anatomy
Shapes Group — corresponds to the letters 'S' to 'Y' and represents complex and enclosed character forms
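The four key groups above can be sketched as a simple lookup table (group names and letter ranges are taken from the text; the treatment of 'X' as the special "difficult character" key rather than an ordinary radical explains why 25 letters yield 24 radicals):

```python
# The four Cangjie key groups, as described above.
GROUPS = {
    "Philosophical": "ABCDEFG",  # the sun, the moon, and the five elements
    "Strokes":       "HIJKLMN",  # brief and subtle strokes
    "Body-Related":  "OPQR",     # parts of the human anatomy
    "Shapes":        "STUVWXY",  # complex and enclosed forms ('X' is special)
}

def group_of(key: str) -> str:
    """Return the group a Cangjie letter key belongs to."""
    for name, letters in GROUPS.items():
        if key.upper() in letters:
            return name
    raise KeyError(f"{key!r} is not a Cangjie key")

# Excluding the special 'X' key leaves the 24 ordinary radical keys.
radical_keys = [k for letters in GROUPS.values() for k in letters if k != "X"]
print(len(radical_keys))  # 24
```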
The auxiliary shapes of each Cangjie radical have changed slightly across different versions of the Cangjie method. Thus, this is one reason that differe |
https://en.wikipedia.org/wiki/Bersirc | Bersirc is a discontinued open-source Internet Relay Chat client for the Microsoft Windows operating system. Linux and Mac OS X versions were "in development". Bersirc uses the Claro toolkit, which aims to provide an interface to native windowing systems and widgets on all operating systems. Microsoft .NET and Qt toolkit ports were also planned. The final version of Bersirc was 2.2.14.
Features
Bersirc features connections to multiple servers, a finger client, DCC File Transfers and Chat, Smart Paste, Object Pascal Scripting, Internet Time Support (Swatch Netbeats), Channel Lists, Favorite Channels list, Ident Server, AutoJoin on Invite, AutoRejoin on Kick, configurable date formats, an ICQ-like notify list, advanced filtering, a configurable user interface, and a built in IRC user guide.
License
Bersirc was licensed under the GNU Lesser General Public License, with no plans to change this. Bersirc 2.1 was to be released under the Qt Public License, but the Qt toolkit and license were abandoned.
History
Originally, bersIRC was a Tcl/Tk script unrelated to the later Bersirc. It was created by the IRC user SeLf-AdHeSiVe, was last modified in 1998, and has been freely downloadable at defiled.8m.com for years.
Bersirc was originally written in Delphi by Jamie Frater in 1999 as a Windows-only IRC client, comparable to HydraIRC and Klient. But development stagnated due to his growing responsibilities in real life.
On 10 February 2004 Nicholas Copeland bought the source code from Frater and released it as open-source. The older Delphi client, Bersirc 1.4, was supposed to be maintained under the name Bersirc 1.5. The original site was also archived by the new owner, including all the old plugins and extensions, but there has been almost no information about the future of the legacy clients since.
Developers stated that development of the 1.4 client stalled because the original source code extensively used proprietary software components. The 1.4 client relies on many parts of old versions of the Raize Components package.
The primary developer, Theo Julienne, announced plans to develop the 2.1 branch in C++ using the Qt toolkit, but with the release of the 2.2 branch this was changed to C using Claro Graphics.
Reception
In 2001, New Zealand gaming website GamePlanet recommended Bersirc for users to connect to its IRC services.
Bersirc has received positive reviews. The German website Winfuture referred to version 2.2.13 as a "great free alternative to the popular shareware IRC client mIRC. The program contains only what is necessary for chatting on IRC...". Snapfiles gave the program 3.5/5 stars, referring to it as "feature rich and nicely designed".
See also
Comparison of Internet Relay Chat clients
References
External links
Bersirc website
Bersirc 1.4 site (archive)
Bersirc official IRC channel
Bersirc mailing list
Jamie Frater's Official website
Internet Relay Chat clients
Free Internet Rela |
https://en.wikipedia.org/wiki/John%20Lewis%20%28disambiguation%29 | John Lewis was an American politician and civil rights leader from Georgia.
John Lewis may also refer to:
People
Academics
John Lewis (computer scientist) (born 1963), American computer science educator and author
John Lewis (headmaster) (born 1942), New Zealand headmaster of Eton College
John Lewis (philosopher) (1889–1976), British Unitarian minister and Marxist philosopher
John David Lewis (1955–2012), American political scientist, historian, and Objectivist scholar
John S. Lewis (born 1941), American professor of planetary science at the University of Arizona's Lunar and Planetary Laboratory
John T. Lewis (1932–2004), Welsh mathematical physicist
John Wilson Lewis (1930–2017), American political scientist
Businesspeople
John Lewis (brewer) (1713–1792), British brewer
John Lewis (department store founder) (1836–1928), British draper and founder of the John Lewis department store
John Spedan Lewis (1885–1963), British industrial democracy pioneer, founder of the John Lewis Partnership
Sir John Lewis (businessman), British businessman, solicitor and charity executive
John Allen Lewis (1819–1895), American newspaper editor
Clergy
John Lewis (antiquarian) (1675–1747), English clergyman
John Lewis (archbishop of Ontario) (1825–1901), Anglican bishop, archbishop and author in Canada
John Lewis (archdeacon of Cheltenham) (born 1934), British Anglican priest
John Lewis (archdeacon of Hereford) (1909–1984), Anglican priest
John Lewis (archdeacon of North-West Europe) (1939–1994), Archdeacon of North West Europe from 1982 to 1993
John Lewis (bishop of North Queensland) (1926–2015), Australian Anglican bishop
John Lewis (dean of Llandaff) (1947–2019), Welsh Anglican priest
John Lewis (dean of Ossory) (1717–1783), Dean of Ossory in Ireland from 1755 to 1783
Musicians
John Lewis (electronic musician) (died 1984), Canadian-British electronic music composer
John Lewis (pianist) (1920–2001), American jazz pianist and composer with the Modern Jazz Quartet
John Lewis (singer) (born 1947), British singer and multi-instrumentalist known professionally as Jona Lewie
Politicians
American politicians
John Lewis (1940–2020), member of US House of Representatives from Georgia
John Lewis (Arizona politician) (born 1957), businessman and mayor of Gilbert, Arizona
John Lewis (California politician) (born 1954), politician in the California Senate
John L. Lewis (politician) (1800–1886), mayor of New Orleans
John Wood Lewis Sr. (1801–1865), Confederate States of America Senator
John Lewis (Shawnee leader) ( – 1826), Native American leader of the Shawnee in Lewistown, Ohio
John F. Lewis (1818–1895), US Senator from Virginia
John H. Lewis (1830–1929), US Representative from Illinois
John V. Lewis (died 1913), Ohio state senator
John W. Lewis (1841–1913), US Representative from Kentucky
John W. Lewis Jr. (1906–1977), Illinois Secretary of State
John W. Lewis III (born 1949), politician in Florida
Australian politicians
John |
https://en.wikipedia.org/wiki/PPD | PPD may refer to:
Computing
Prearranged Payment and Deposit; a payment format used in US inter-bank debit and credit transactions, part of the ACH Network specifications.
Pixels per degree, a measure of the resolution of a display screen as seen from an angle
Points per day, a mechanism for measuring work done in the Folding@home distributed computing project
PostScript Printer Description, a file created by a printer vendor that describes the entire set of capabilities of a particular PostScript printer model
Portable Programmer Device, a device used to program programmable ICs
Police and security
Personal protection detail, a security detail tasked with protecting one or more persons
Philadelphia Police Department, a police agency in Pennsylvania, United States
Phoenix Police Department, a police agency in Arizona, United States
Presidential Protective Division, part of the United States Secret Service tasked with protecting the President and others
Probation and Parole Division in New Mexico, United States
Political parties
Social Democratic Party (Portugal) (Partido Social Democrata), originally named Popular Democratic Party or Democratic People's Party (Partido Popular Democrático), a political party in Portugal
Partito Popolare Democratico Svizzero, a political party in Switzerland
Party for Democracy (Chile) (Partido por la Democracia), a political party in Chile
Popular Democratic Party (Puerto Rico) (Partido Popular Democrático), a political party in Puerto Rico
Science and medicine
Paranoid personality disorder, a mental disorder characterized by paranoia and a pervasive, long-standing suspiciousness and generalized mistrust of others.
p-Phenylenediamine, an aromatic amine
Persistent Pigment Darkening, a measure of UVA protection of sunscreens
Pharmaceutical Product Development, a global contract research organization (CRO)
Pheophorbidase, an enzyme
PPD test, Purified Protein Derivative test or Mantoux test, a screening test for tuberculosis
Postpartum depression, a mental disorder affecting parents within the first year of their child's birth
Pour point depressant, a chemical added to crude oil to lower its "pour point"
ppd, Protopanaxadiol, a molecule
Psychogenic polydipsia, excessive water intake with a psychiatric or pharmaceutical cause
Postharvest physiological deterioration, natural change of crop tissues which is undesirable for human or livestock use
Other
PPD, Inc., an American contract clinical trial company.
Pani Poni Dash!, a Japanese manga series
Partners in Population and Development, an international intergovernmental organization for "southern" countries worldwide
Pengangkutan Penumpang Djakarta, a bus operator in Jakarta, Indonesia
Pontypridd railway station, Wales, its National Rail station code
ppd, an American professional video game player
PPD-40, a Russian submachine gun
Prevention Project Dunkelfeld, an effort to help self-identifying pedophiles to stay offence free
Published |
https://en.wikipedia.org/wiki/Intertwingularity | Intertwingularity is a term coined by Ted Nelson to express the complexity of interrelations in human knowledge.
Nelson wrote in Computer Lib/Dream Machines: "EVERYTHING IS DEEPLY INTERTWINGLED. In an important sense there are no "subjects" at all; there is only all knowledge, since the cross-connections among the myriad topics of this world simply cannot be divided up neatly."
He added the following comment in the revised edition: "Hierarchical and sequential structures, especially popular since Gutenberg, are usually forced and artificial. Intertwingularity is not generally acknowledged—people keep pretending they can make things hierarchical, categorizable and sequential when they can't."
Intertwingularity is related to Nelson's coined term hypertext, partially inspired by "As We May Think" (1945) by Vannevar Bush.
Influence
Peter Morville, an influential figure in information architecture, discusses intertwingularity in some of his books. In Ambient Findability: What We Find Changes Who We Become (2005), Morville uses the concept of intertwingularity to describe the experience of using hypertext on the web and starting to use computers embedded in everyday objects, known as ubiquitous computing. In 2014, he published a book called Intertwingled: Information Changes Everything about the intertwingularity of the universe, crediting Nelson with the word.
David Weinberger wrote about intertwingularity in Everything Is Miscellaneous: The Power of the New Digital Disorder in 2008, explaining that providing unique identifiers for items helps enable intertwingularity.
The concept of intertwingularity was celebrated at the "Intertwingled: The Work and Influence of Ted Nelson" conference on April 14, 2014, at Chapman University. The organizers published a book called Intertwingled: The Work and Influence of Ted Nelson in 2015, with articles about Nelson's work and legacy. One of the organizers of the conference and editors of the book, Douglas Dechow, said, "In the 1960s, he saw a world of networked, interlinked – intertwingled, if you will – documents where all of the world’s knowledge is able to interact and intermingle[...] He was the first, or among the first, people to have that idea."
See also
Connectedness
Directed graph
Multicategory
Multiclass classification
Multicriteria classification
Multi-label classification
Multigraph
Multiple inheritance
Polysemy
Rhizome (philosophy)
References
External links
blue sky: miscellaneous by Jamie Zawinski
Intertwingly - Sam Ruby's blog named for this concept
Ted Nelson
Knowledge |
https://en.wikipedia.org/wiki/Integer%20BASIC | Integer BASIC is a BASIC interpreter written by Steve Wozniak for the Apple I and Apple II computers. Originally available on cassette for the Apple I in 1976, then included in ROM on the Apple II from its release in 1977, it was the first version of BASIC used by many early home computer owners.
The only numeric data type was the integer; floating-point numbers were not supported. Using integers allowed numbers to be stored in a much more compact 16-bit format that could be more rapidly read and processed than the 32- or 40-bit floating-point formats found in most BASICs of the era. This made it so fast that Bill Gates complained when it outperformed Microsoft BASIC in benchmarks. However, this also limited its applicability as a general-purpose language.
Another difference with other BASICs of the era is that Integer BASIC treated strings as arrays of characters, similar to the system in C or Fortran 77. Substrings were accessed using array slicing rather than string functions. This style was introduced in HP Time-Shared BASIC, and could also be found in other contemporary BASICs patterned on HP, like North Star BASIC and Atari BASIC. It contrasted with the style found in BASICs derived from DEC, including Microsoft BASIC.
The language was initially developed under the name GAME BASIC and referred to simply as Apple BASIC when it was introduced on the Apple I. It became Integer BASIC when it was ported to the Apple II and shipped alongside Applesoft BASIC, a port of Microsoft BASIC which included floating-point support. Integer BASIC was phased out in favor of Applesoft BASIC starting with the Apple II Plus in 1979.
History
As a senior in high school, Steve Wozniak's electronics teacher arranged for the leading students in the class to have placements at local electronics companies. Wozniak was sent to Sylvania where he programmed in FORTRAN on an IBM 1130. That same year, General Electric placed a terminal in the high school that was connected to one of their mainframes running their time-sharing BASIC service, which they were heavily promoting at the time. After being given three days of access, the students were asked to write letters on why the school should receive a terminal permanently, but their efforts were ultimately unsuccessful.
Some years later, Wozniak was working at Hewlett-Packard (HP) running simulations of chip designs and logic layout for calculators. HP had made major inroads in the minicomputer market with their HP 2000 series machines running a custom timesharing version of BASIC. One could build up a reasonably equipped machine that could support between 16 and 32 users running BASIC programs. While expensive, it was still a fraction of the cost of the mainframe machines and, for heavy users, less than the timesharing services. HP followed this with the HP 9830, a desktop-sized machine that also ran BASIC, which Wozniak had access to.
In January 1975 the Altair 8800 was announced and sparked off |
https://en.wikipedia.org/wiki/Memory%20paging | In computer operating systems, memory paging (or swapping on some Unix-like systems) is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.
For simplicity, main memory is called "RAM" (an acronym of random-access memory) and secondary storage is called "disk" (a shorthand for hard disk drive, drum memory or solid-state drive, etc.), but as with many aspects of computing, the concepts are independent of the technology used.
Depending on the memory model, paged memory functionality is usually hardwired into a CPU/MCU by using a Memory Management Unit (MMU) or Memory Protection Unit (MPU), and separately enabled by privileged system code in the operating system's kernel. In CPUs implementing the x86 instruction set architecture (ISA), for instance, memory paging is enabled via the CR0 control register.
History
In the 1960s, swapping was an early virtual memory technique. An entire program or entire segment would be "swapped out" (or "rolled out") from RAM to disk or drum, and another one would be swapped in (or rolled in). A swapped-out program would be current but its execution would be suspended while its RAM was in use by another program; a program with a swapped-out segment could continue running until it needed that segment, at which point it would be suspended until the segment was swapped in.
A program might include multiple overlays that occupy the same memory at different times. Overlays are not a method of paging RAM to disk but merely of minimizing the program's RAM use. Subsequent architectures used memory segmentation, and individual program segments became the units exchanged between disk and RAM. A segment was the program's entire code segment or data segment, or sometimes other large data structures. These segments had to be contiguous when resident in RAM, requiring additional computation and movement to remedy fragmentation.
Ferranti's Atlas and its Atlas Supervisor, developed at the University of Manchester (1962), formed the first system to implement memory paging. Subsequent early machines, and their operating systems, supporting paging include the IBM M44/44X and its MOS operating system (1964), the SDS 940 and the Berkeley Timesharing System (1966), a modified IBM System/360 Model 40 and the CP-40 operating system (1967), the IBM System/360 Model 67 and operating systems such as TSS/360 and CP/CMS (1967), the RCA 70/46 and the Time Sharing Operating System (1967), the GE 645 and Multics (1969), and the PDP-10 with added BBN-designed paging hardware and the TENEX operating system (1969).
Those machines, and subsequent machines supporting memory paging, use either a set of |
https://en.wikipedia.org/wiki/SSE2 | SSE2 (Streaming SIMD Extensions 2) is one of the Intel SIMD (Single Instruction, Multiple Data) processor supplementary instruction sets introduced by Intel with the initial version of the Pentium 4 in 2000. It extends the earlier SSE instruction set, and is intended to fully replace MMX. Intel extended SSE2 to create SSE3 in 2004. SSE2 added 144 new instructions to SSE, which has 70 instructions. Competing chip-maker AMD added support for SSE2 with the introduction of their Opteron and Athlon 64 ranges of AMD64 64-bit CPUs in 2003.
Features
Most of the SSE2 instructions implement the integer vector operations also found in MMX. Instead of the MMX registers they use the XMM registers, which are wider and allow for significant performance improvements in specialized applications. Another advantage of replacing MMX with SSE2 is avoiding the mode-switching penalty incurred when issuing x87 instructions with MMX, which shares register space with the x87 FPU. SSE2 also complements the floating-point vector operations of the SSE instruction set by adding support for the double-precision data type.
Other SSE2 extensions include a set of cache control instructions intended primarily to minimize cache pollution when processing infinite streams of information, and a sophisticated complement of numeric format conversion instructions.
AMD's implementation of SSE2 on the AMD64 (x86-64) platform includes an additional eight registers, doubling the total number to 16 (XMM0 through XMM15). These additional registers are only visible when running in 64-bit mode. Intel adopted these additional registers as part of their support for x86-64 architecture (or in Intel's parlance, "Intel 64") in 2004.
Differences between x87 FPU and SSE2
FPU (x87) instructions provide higher precision by calculating intermediate results with 80 bits of precision, by default, to minimise roundoff error in numerically unstable algorithms (see IEEE 754 design rationale and references therein). However, the x87 FPU is a scalar unit only whereas SSE2 can process a small vector of operands in parallel.
If code designed for the x87 is ported to SSE2's lower-precision double-precision floating point, certain combinations of math operations or input datasets can result in measurable numerical deviation, which can be an issue in reproducible scientific computations, e.g. if the calculation results must be compared against results generated from a different machine architecture. A related issue is that, historically, language standards and compilers had been inconsistent in their handling of the x87 80-bit registers implementing double extended precision variables, compared with the double and single precision formats implemented in SSE2: the rounding of extended-precision intermediate values to double-precision variables was not fully defined and was dependent on implementation details such as when registers were spilled to memory.
Differences between MMX and SSE2
SSE2 extends |
https://en.wikipedia.org/wiki/Minds%2C%20Machines%20and%20G%C3%B6del | "Minds, Machines and Gödel" is J. R. Lucas's 1959 philosophical paper in which he argues that a human mathematician cannot be accurately represented by an algorithmic automaton. Appealing to Gödel's incompleteness theorem, he argues that for any such automaton, there would be some mathematical formula which it could not prove, but which the human mathematician could both see, and show, to be true.
The paper is a Gödelian argument against mechanism.
Lucas presented the paper in 1959 to the Oxford Philosophical Society. It was first printed in Philosophy, XXXVI, 1961, then reprinted in The Modeling of Mind, Kenneth M. Sayre and Frederick J. Crosson, eds., Notre Dame Press, 1963, and in Minds and Machines, ed. Alan Ross Anderson, Prentice-Hall, 1964.
See also
Artificial intelligence
Philosophy of artificial intelligence
External links
Minds, Machines and Gödel — the original paper
Philosophy essays
1959 essays
Works originally published in Philosophy (journal)
Cognitive science literature |
https://en.wikipedia.org/wiki/Legion%20%28software%29 | Legion is a computer software system variously classified as a distributed operating system, a peer-to-peer system, metacomputing software, and middleware. It is an object-based system designed to provide secure, transparent access to large numbers of machines, both for computational power and for data.
The project was funded by the National Science Foundation and other funding agencies, and was mostly developed at the University of Virginia by a group led by Andrew Grimshaw. The Legion people formed the Avaki Corporation to commercialize the project in 1999, but Avaki eventually abandoned the Legion software base, and finally went bankrupt in 2005, with its intellectual property acquired by Sybase.
Legion is the successor to Hydra, developed to run on the C.mmp hardware system developed at Carnegie Mellon University in the late 1960s.
One of the slogans of the Legion project is "mechanism, not policy!"
References
Distributed data storage
Distributed operating systems
University of Virginia
Carnegie Mellon University |
https://en.wikipedia.org/wiki/Data%20rate | Data rate and data transfer rate can refer to several related and overlapping concepts in communications networks:
Achieved rate
Bit rate, the number of bits that are conveyed or processed per unit of time
Data signaling rate or gross bit rate, a bit rate that includes protocol overhead
Symbol rate or baud rate, the number of symbol changes, waveform changes, or signaling events across the transmission medium per unit of time
Data-rate units, measures of the bit rate or baud rate of a link
Data transfer rate (disk drive), a data rate specific to disk drive operations
Throughput, the rate of successful message delivery, or level of bandwidth consumption
Capacity
Bandwidth (computing), the maximum rate of data transfer across a given path
Channel capacity, an information-theoretic upper bound on the rate at which data can be reliably transmitted, given noise on a channel
Temporal rates
Broad-concept articles |
https://en.wikipedia.org/wiki/Reverse%20domain%20hijacking | Reverse domain name hijacking (also known as reverse cybersquatting or commonly abbreviated as 'RDNH'), occurs where a rightful trademark owner attempts to secure a domain name by making cybersquatting claims against a domain name’s "cybersquatter" owner. This often intimidates domain name owners into transferring ownership of their domain names to trademark owners to avoid legal action, particularly when the domain names belong to smaller organizations or individuals. Reverse domain name hijacking is most commonly enacted by larger corporations and famous individuals, in defense of their rightful trademark or to prevent libel or slander.
Reverse domain name "hijacking" abuses legal remedies designed to counter the practice of domain squatting, wherein individuals hold many registered domain names containing famous third-party trademarks with the intent of profiting by selling the domain names back to trademark owners. Trademark owners initially responded by filing cybersquatting lawsuits against registrants to enforce their trademark rights. However, as the number of cybersquatting incidents grew, trademark owners noticed that registrants would often settle their cases rather than litigate. Cybersquatting lawsuits are a defensive strategy to combat cybersquatting; however, such lawsuits may also be used to strongarm innocent domain name registrants into giving up domain names to which the trademark owner is not, in fact, entitled.
UDRP restrictions on reverse domain name hijacking
Paragraph 15(e) of the UDRP Rules defines reverse domain name hijacking as the filing of a complaint in bad faith, resulting in the abuse of the UDRP administrative process. It becomes difficult to objectively quantify what constitutes subjective “bad faith,” resulting in panels often viewing parties’ factual discrepancies as indeterminable or immaterial at best. Therefore, despite its express recognition in the UDRP, reverse domain name hijacking findings are rare and based heavily on the factual circumstances surrounding each case.
Circumstances which have been cited by WIPO panels as justification for a finding of reverse domain name hijacking include:
When the registration of the domain predates any trademark rights of the Complainant.
When the complaint has provided no evidence of bad faith registration or use directed towards the Complainant.
Where the Complainant has used the UDRP as a Plan "B" option to attempt to secure the domain after commercial negotiations have broken off.
Where the Complainant has attempted to deceive the domain owner or makes misrepresentations or fails to disclose material information to the panel.
Examples of such findings include the following WIPO cases: Gregory Ricks vs. RVK, Inc. (formerly RVKuhns and Associates) (2015). RVK, Inc. has also recently been accused of failing to promptly report a significant breach of its network, which resulted in fraud against the Central Bank of Chile. Scott Gratsinger is in cha |
https://en.wikipedia.org/wiki/PDM | PDM may stand for:
Computing
.pdm (disambiguation), several file formats
Personal data manager, a portable hardware tool enabling secure storage of and easy access to user data
Phase dispersion minimization, a data analysis technique for finding periodic components in time series data
Physical data model, a representation of a data design as implemented, or intended to be implemented, in a database management system
Point distribution model, a deformable contour model used in computer vision
Programming Development Manager
Protocol-dependent module, decision making about routing table entries
Pulse-density modulation, a form of modulation used in analog to digital conversions
Product data management (PDM), a business function, often within product lifecycle management (PLM); related to product information management (PIM)
Politics
Democratic Party of Moldova, a political party of Moldova
Modern Democratic Party, a political party of Moldova
Mexican Democratic Party, a former political party in Mexico
Pakistan Democratic Movement, an anti-establishment coalition of political parties in Pakistan
Party-directed mediation, a mediation approach that relies heavily on pre-caucus and joint sessions
People's Democratic Movement, a political party of Papua New Guinea
People's Democratic Movement (Dominica), a political party of Dominica
People's Democratic Movement (Montserrat), a political party of Montserrat
People's Democratic Movement (Turks and Caicos Islands), a political party of the Turks and Caicos Islands
Popular Democratic Movement, Namibian political party
Southern Democratic Party, a former political party in Calabria, Italy
Others
École Polyvalente Deux-Montagnes, a high school in Deux-Montagnes, Quebec, Canada
Partial-propensity direct method, a stochastic simulation algorithm for chemical reaction networks
PDM (cycling team), the cycling team sponsored by Philips Dupont Magnetics
PDM Group of Institutions (popularly known as PDM), a group of educational institutions in India
Penny-drop moment, an abbreviation used by cryptic crossword bloggers
Philips Dupont Magnetics, a joint venture between Philips and DuPont
Ponta da Madeira, an enormous deep-water port in northern Brazil
Polarization-division multiplexing
Post-detonation material such as trinitite formed following nuclear weapon detonations
Prague Daily Monitor, a newspaper published in the Czech Republic
Precedence Diagram Method, a project scheduling technique
Predictive maintenance, a method for planning equipment maintenance based on their condition
Public Domain Mark, a way of distinguishing works that are free of known copyright
Psychodynamic Diagnostic Manual, a psychoanalytically-oriented manual for use by mental health professionals
PDM-A, modernized version of the RPO-A reactive flamethrower
M86 Pursuit Deterrent Munition, a type of anti-personnel mine produced in the US
Personal Diabetes Manager, a machine |
https://en.wikipedia.org/wiki/Video%20game%20programmer | A game programmer is a software engineer, programmer, or computer scientist who primarily develops codebases for video games or related software, such as game development tools. Game programming has many specialized disciplines, all of which fall under the umbrella term of "game programmer". A game programmer should not be confused with a game designer, who works on game design.
History
In the early days of video games (from the early 1970s to mid-1980s), a game programmer also took on the job of a designer and artist. This was generally because the abilities of early computers were so limited that having specialized personnel for each function was unnecessary. Game concepts were generally light and games were only meant to be played for a few minutes at a time, but more importantly, art content and variations in gameplay were constrained by computers' limited power.
Later, as specialized arcade hardware and home systems became more powerful, game developers could develop deeper storylines and could include such features as high-resolution and full color graphics, physics, advanced artificial intelligence and digital sound. Technology has advanced to such a great degree that contemporary games usually boast 3D graphics and full motion video using assets developed by professional graphic artists. Nowadays, the derogatory term "programmer art" has come to imply the kind of bright colors and blocky design that were typical of early video games.
The desire for adding more depth and assets to games necessitated a division of labor. Initially, art production was relegated to full-time artists. Next game programming became a separate discipline from game design. Now, only some games, such as the puzzle game Bejeweled, are simple enough to require just one full-time programmer. Despite this division, however, most game developers (artists, programmers and even producers) have some say in the final design of contemporary games.
Disciplines
A contemporary video game may include advanced physics, artificial intelligence, 3D graphics, digitised sound, an original musical score and complex strategy; it may use several input devices (such as mice, keyboards, gamepads and joysticks) and may be playable against other people via the Internet or over a LAN. Each aspect of the game can consume all of one programmer's time and, in many cases, several programmers. Some programmers may specialize in one area of game programming, but many are familiar with several aspects. The number of programmers needed for each feature depends somewhat on programmers' skills, but is mostly dictated by the type of game being developed.
Game engine programmer
Game engine programmers create the base engine of the game, including the simulated physics and graphics disciplines. Increasingly, video games use existing game engines, either commercial, open source or free. They are often customized for a particular game, and these programmers handle these modifications.
Physics |
https://en.wikipedia.org/wiki/RAR%20%28file%20format%29 | RAR is a proprietary archive file format that supports data compression, error correction and file spanning. It was developed in 1993 by Russian software engineer Eugene Roshal and the software is licensed by win.rar GmbH. The name RAR stands for Roshal Archive.
File format
The filename extensions used by RAR are .rar for the data volume set and .rev for the recovery volume set. Previous versions of RAR split large archives into several smaller files, creating a "multi-volume archive". Numbers were used in the file extensions of the smaller files to keep them in the proper sequence. The first file used the extension .rar, then .r00 for the second, and then .r01, .r02, etc.
RAR compression applications and libraries (including the GUI-based WinRAR application for Windows, the console rar utility for various OSes, and others) are proprietary software, to which Alexander L. Roshal, the elder brother of Eugene Roshal, owns the copyright. Version 3 of RAR is based on Lempel-Ziv (LZSS) and prediction by partial matching (PPM) compression, specifically the PPMd implementation of PPMII by Dmitry Shkarin.
The minimum size of a RAR file is 20 bytes. The maximum size of a RAR file is 9,223,372,036,854,775,807 (2⁶³−1) bytes, which is one byte less than 8 EiB.
Versions
The RAR file format revision history:
1.3 – the first public version, does not have the "Rar!" signature.
1.5 – changes are not known.
2.0 – released with WinRAR 2.0 and Rar for MS-DOS 2.0; features the following changes:
Multimedia compression for true color bitmap images and uncompressed audio.
Up to 1 MB compression dictionary.
Introduces the archive data recovery protection record.
2.9 – released in WinRAR version 3.00. Feature changes in this version include:
File extension naming changed from {volume name}.rar, {volume name}.r00, {volume name}.r01, etc. to {volume name}.part001.rar, {volume name}.part002.rar, etc.
Encryption of both file data and file headers.
Improves compression algorithm using 4 MB dictionary size, Dmitry Shkarin's PPMII algorithm for file data.
Optional creation of "recovery volumes" (.rev files) for error correction, which can be used to reconstruct missing files in a volume set.
Support for archive files larger than 9 GB.
Support for Unicode file names stored in UTF-16 little endian format.
5.0 – supported by WinRAR 5.0 and later. Changes in this version:
Maximum compression dictionary size increased to 1 GB (default for WinRAR 5.x is 32 MB and 4 MB for WinRAR 4.x).
Maximum path length for files in RAR and ZIP archives is increased up to 2048 characters.
Support for Unicode file names stored in UTF-8 format.
Faster compression and decompression.
Multicore decompression support.
Greatly improves recovery.
Optional AES encryption increased from 128-bit to 256-bit.
Optional 256-bit BLAKE2 file hash instead of a default 32-bit CRC32 file checksum.
Optional duplicate file detection.
Optional NTFS hard and symbolic links.
Optional Quick Open Record. Ra |
https://en.wikipedia.org/wiki/Ra%C3%BAl%20De%20Molina | Raúl "El Gordo" De Molina (born March 29, 1959, in Havana, Cuba) is a Cuban-American television presenter, best known as the co-host of the Univision Network entertainment news show El Gordo y la Flaca, for which he won multiple Emmy Awards.
Early life and education
Raúl De Molina was born in Havana, Cuba in 1959. De Molina's father was detained as a political prisoner for 24 years by the Communist Party of Cuba. De Molina's family left Havana and lived in Spain when he was 10 years old. They moved to the United States when he was 16. As a child, De Molina became interested in photography. While in high school, he took photos for the school yearbook. He later attended The Art Institute of Fort Lauderdale. He was also a graduate of Miami Photography College in North Miami.
Career
Photojournalism
After graduating from the Art Institute of Fort Lauderdale, De Molina worked as a freelance photographer during the 1980s. He first freelanced for the Associated Press, before freelancing for Time, Newsweek, U.S. News & World Report and USA Today.
He documented news and live sports events, before eventually becoming a celebrity photographer. He was known for photographing celebrities and royalty including Elizabeth II, Diana, Princess of Wales, Oprah Winfrey, Robert De Niro and Melanie Griffith. In an interview with Entertainment Weekly, De Molina commented on the lengths he went to for his photographs, including dangling outside of a helicopter to photograph the wedding of Jane Fonda and Ted Turner.
His candid photos appeared in publications such as Life, ¡Hola!, and Paris Match. During the United States invasion of Panama, De Molina was one of the first photographers present and took photos of the inside of Manuel Noriega's house. In addition, he was a special contributor for the Spanish edition of Travel + Leisure magazine and has been featured in National Geographic Traveler, and The New York Times Travel section.
In 2005, De Molina's photography was displayed in the "Pictures of a Lifetime" exhibition at the Gary Nader Gallery in Miami.
Television
De Molina began appearing on various talk shows during the 1990s, including The Joan Rivers Show, Maury Povich Show, and Geraldo. These early television appearances brought him to the attention of Spanish-language channels Telemundo and Univision. He made appearances on shows like Sábado Gigante, and in 1998 became the co-host of El Gordo y la Flaca, alongside Lili Estefan. He has continued to host the show ever since, and it draws more viewers in its time slot than ABC, CBS, NBC and FOX combined.
He has hosted and reported for programs such as Primer Impacto, Ocurrió Así, Hola América, and Club Telemundo, as well as primetime specials and his own productions.
De Molina has also covered live events such as the Latin Grammy Awards in Las Vegas, and the New Year's Eve celebration in Times Square, Manhattan. While in South Africa to cover the 2010 FIFA World Cup, De Molina was stranded during a safari whe |
https://en.wikipedia.org/wiki/VMware | VMware, Inc. is an American cloud computing and virtualization technology company with headquarters in Palo Alto, California. VMware was the first commercially successful company to virtualize the x86 architecture.
VMware's desktop software runs on Microsoft Windows, Linux, and macOS. VMware ESXi, its enterprise software hypervisor, is an operating system that runs on server hardware.
In May 2022, Broadcom Inc. announced an agreement to acquire VMware in a cash-and-stock transaction valued at $61 billion.
History
Early history
In 1998, VMware was founded by Diane Greene, Mendel Rosenblum, Scott Devine, Ellen Wang and Edouard Bugnion. Greene and Rosenblum were both graduate students at the University of California, Berkeley. Edouard Bugnion remained the chief architect and CTO of VMware until 2005, and went on to found Nuova Systems (now part of Cisco). For the first year, VMware operated in stealth mode, with roughly 20 employees by the end of 1998. The company was launched officially early in the second year, in February 1999, at the DEMO Conference organized by Chris Shipley. The first product, VMware Workstation, was delivered in May 1999, and the company entered the server market in 2001 with VMware GSX Server (hosted) and VMware ESX Server (hostless).
In 2003, VMware launched VMware Virtual Center, vMotion, and Virtual Symmetric Multi-Processing (SMP) technology. 64-bit support was introduced in 2004.
EMC acquisition
On January 9, 2004, under the terms of the definitive agreement announced on December 15, 2003, EMC (now Dell EMC) acquired the company with $625 million in cash. On August 14, 2007, EMC sold 15% of VMware to the public via an initial public offering. Shares were priced at per share and closed the day at .
On July 8, 2008, after disappointing financial performance, the board of directors fired VMware co-founder, president and CEO Diane Greene, who was replaced by Paul Maritz, a retired 14-year Microsoft veteran who was heading EMC's cloud computing business unit. Greene had been CEO since the company's founding, ten years earlier. On September 10, 2008, Mendel Rosenblum, the company's co-founder, chief scientist, and the husband of Diane Greene, resigned.
On September 16, 2008, VMware announced a collaboration with Cisco Systems. One result was the Cisco Nexus 1000V, a distributed virtual software switch, an integrated option in the VMware infrastructure.
In April 2011, EMC transferred control of the Mozy backup service to VMware.
On April 12, 2011, VMware released an open-source platform-as-a-service system called Cloud Foundry, as well as a hosted version of the service. This supported application deployment for Java, Ruby on Rails, Sinatra, Node.js, and Scala, with database support for MySQL, MongoDB, Redis, and Postgres, and messaging via RabbitMQ.
In August 2012, Pat Gelsinger was appointed as the new CEO of VMware, coming over from EMC. Paul Maritz went over to EMC as Head of Strategy before moving on to lead the Pivotal spi |
https://en.wikipedia.org/wiki/Packet%20Switch%20Stream | Packet Switch Stream (PSS) was a public data network in the United Kingdom, provided by British Telecommunications (BT). It operated from the late 1970s through to the mid 2000s.
Research, development and implementation
EPSS
Roger Scantlebury was seconded from the National Physical Laboratory to the British Post Office Telecommunications division (BPO-T) in 1969. He had worked with Donald Davies in the late 1960s pioneering the implementation of packet switching and the associated communication protocols on the local-area NPL network. By 1973, BPO-T engineers had developed a packet-switching communication protocol from basic principles for an Experimental Packet Switched Service (EPSS) based on a virtual call capability. However, the protocols were complex and limited; Donald Davies described them as "esoteric".
Ferranti supplied the hardware and software. The handling of link control messages (acknowledgements and flow control) was different from that of most other networks. The EPSS began operating in 1977, the first public data network in the UK.
IPSS
The International Packet Switch Stream (IPSS) was an international network service, based on the X.25 standard, launched by the international division of BT. This venture was driven by the high demand for affordable access to US-based database and other network services. A service was provided by IPSS to this market, which started operation in 1978. IPSS was later linked to PSS and other packet switched networks around the world using gateways based on the X.75 standard.
PSS
A period of pre-operational testing with customers, mainly UK universities and computer manufacturers, began in 1980. Packet Switch Stream launched as a commercial service on 20 August 1981 based on X.25/X.75. The experimental predecessor network (EPSS) formally closed down on 31 July 1981 after all the existing connections had been moved to PSS.
The network was initially based upon a dedicated modular packet switch using DCC's TP 4000 communication processor hardware. The operating system and the packet-switching software were developed by Telenet (later GTE Telenet). BT bought Telenet's system via Plessey Controls of Poole, Dorset, which also sold telex and traffic-light systems. PSS was launched before Telenet's own upgrade of its network; at the time, most other networks still used general-purpose mini-computers as packet switches.
For a brief time the EEC operated a packet switched network, Euronet, and a related project Diane to encourage more database and network services to develop in Europe. These connections moved over to PSS and other European networks as commercial X.25 services launched.
Later, the InterStream gateway between the Telex network and PSS was introduced, based on a low-speed PAD interface.
In addition, BT used Telematics packet switches for the Vascom network to support the Prestel service.
The network management systems were based in London and Manchester. Packet switches were ins |
https://en.wikipedia.org/wiki/IBM%20OpenDX | OpenDX stands for Open Data Explorer and is IBM's scientific data visualization software. It can handle complex domains (such as a mechanical gear or a human brain) along with measured or computed data. The data may be scalar (such as the concentration of a chemical agent in the brain), vector or tensor fields (like the displacement or strain tensor fields when the gear is in action) at different points of the object. The points at which data is measured do not have to be equally spaced, nor homogeneously spaced. The project started in 1991 as Visualization Data Explorer.
OpenDX can produce 3D images with the quantities plotted as color or gray-scale coded, or as vectors, streamlines and ribbons. It allows the object to be sliced to obtain a view of the internal structure, and then represent the data on this slice plane as a height-coded graph. It can rotate the object to provide a view of the data from any angle and allows animations of this motion to be made.
Graphical user interface
OpenDX is based on the Motif widget toolkit on top of the X Window System. Its graphical user interface offers a wide variety of interactors, both direct and indirect. Direct interactors allow the user to directly manipulate images (e.g. rotate or zoom). Indirect interactors (dials, switches, buttons, sliders) enable the user to control various aspects of a visualization. Interactors are data-driven, auto-ranging, and self-limiting: they examine the data and, depending on its type, determine the minimum and maximum of the data, or create a list for an option menu based on the data. The user can even set the label of an interactor based on some aspect of the data (e.g., metadata).
The data-driven concept is not simply for sliders, dials and option menus. It also applies to vector interactors. These reconfigure themselves based on the dimensionality of the data. They also auto-range themselves based on the maximum and minimum of each vector component.
Design
Data Explorer is a system of tools and user interfaces for visualizing data. In general terms the visualization of data can be considered a 3-stage process:
Describing and importing data
Processing the data through a visualization program
Presenting the resulting image.
The principal components of OpenDX are
Data model This is the set of definitions, rules, and conventions used to describe Data Explorer entities (including data fields, geometrical objects, and images).
Data Prompter A user interface for describing data to be imported into Data Explorer.
Data Browser A user interface for viewing a data file, determining the layout and organization of the data it contains, and transferring this information to the Data Prompter.
Scripting Language A high-level language for creating visualization programs. It can also be used directly in a command mode to perform various tasks. Visual programs—i.e., the visualization programs displayed in |
https://en.wikipedia.org/wiki/Lossless%20Transform%20Audio%20Compression | Lossless Transform Audio Compression (LTAC) is a compression algorithm developed by Tilman Liebchen, Marcus Purat and Peter Noll at Institute for Telecommunications, Technical University Berlin (TU Berlin), to compress PCM audio in a lossless manner, unlike conventional lossy audio compression algorithms (like MP3).
LTAC is no longer being developed, since it has been superseded by its successor, Lossless Predictive Audio Compression (LPAC), which is based on linear prediction. This makes LPAC much faster than LTAC and even yields better compression results. LPAC has become an official standard as MPEG-4 Audio Lossless Coding.
See also
Lossless Predictive Audio Compression (LPAC)
References
External links
Lossless Transform Coding (LTAC) of Audio Signals
Lossless audio codecs |
https://en.wikipedia.org/wiki/Spasim | Spasim is a 32-player 3D networked space flight simulation game and first-person space shooter developed by Jim Bowery for the PLATO computer network and released in March 1974. The game features four teams of eight players, each controlling a planetary system, where each player controls a spaceship in 3D space in first-person view. Two versions of the game were released: in the first, gameplay is limited to flight and space combat, and in the second systems of resource management and strategy were added as players cooperate or compete to reach a distant planet with extensive resources while managing their own systems to prevent destructive revolts. Although Maze is believed to be the earliest 3D game and first-person shooter as it had shooting and multiplayer by fall 1973, Spasim has previously been considered along with it to be one of the "joint ancestors" of the first-person shooter genre, due to earlier uncertainty over Maze's development timeline.
The game was developed in 1974 at the University of Illinois at Urbana–Champaign; Bowery was assisted in the second version by fellow student Frank Canzolino. Bowery encountered the PLATO system of thousands of graphics terminals remotely connected to a set of mainframe computers that January while assisting a computer art class. He was inspired to create the original game by the multiplayer PLATO action game Empire, and the second version by the concept of positive sum games. Spasim was one of the first 3D first-person video games; at one point, Bowery offered a reward to any person who could offer proof that Spasim was not the first. He also claims that Spasim was the direct initial inspiration for several other PLATO games, including Airace (1974) and Panther (1975).
Gameplay
Spasim is a multiplayer space flight simulation game, in which up to 32 players fly spaceships around 4 planetary systems. Players are grouped into teams of up to 8 players, with 1 team per system; players add their names to the rosters of the four teams, named Aggstroms, Diffractions, Fouriers, and Lasers, each with a different type of spaceship from Star Trek. Players control their ships in first person in a 3D environment, with other ships appearing as wireframe models. There is no hidden-line removal implemented on the models, meaning that the models appear see-through and the player can see the wireframe of the "back" of an object as well. The positions of the planets and other players relative to the player update once a second. Players can fire "phasers and torpedoes" to destroy other players' ships. Spasim was intended to include an educational component; players enter instructions to move their spaceships using polar coordinates, e.g. altitude and azimuth, along with acceleration, while their position in space is given in Cartesian coordinates. Players can switch their perspective between their ship, their starting space station, and torpedoes they have launched, in addition to changing the angle and magnificati |
https://en.wikipedia.org/wiki/Lossless%20predictive%20audio%20compression | Lossless predictive audio compression (LPAC) is an improved lossless audio compression algorithm developed by Tilman Liebchen, Marcus Purat and Peter Noll at Institute for Telecommunications, Technical University Berlin (TU Berlin), to compress PCM audio in a lossless manner, unlike conventional audio compression algorithms which are lossy.
It is no longer developed, because an advanced version of it has become an official standard under the name MPEG-4 Audio Lossless Coding.
See also
Monkey's Audio (APE)
Free Lossless Audio Codec (FLAC)
Lossless Transform Audio Compression (LTAC)
True Audio (TTA)
External links
Lossless Predictive Audio Compression (LPAC)
The basic principles of lossless audio data compression (TTA)
The Lossless Audio Blog – lossless audio news and information site
Lossless audio codecs |
https://en.wikipedia.org/wiki/Trigram | Trigrams are a special case of the n-gram, where n is 3. They are often used in natural language processing for performing statistical analysis of texts and in cryptography for control and use of ciphers and codes.
Frequency
Context is very important: different rankings and percentages are easily derived from different sample sizes, different authors, different document types (poetry, science fiction, technical documentation), and different writing levels (stories for children versus adults, military orders, recipes).
Typical cryptanalytic frequency analysis finds that the 16 most common character-level trigrams in English are:
Because encrypted messages sent by telegraph often omit punctuation and spaces, cryptographic frequency analysis of such messages includes trigrams that straddle word boundaries. This causes trigrams such as "edt" to occur frequently, even though it may never occur in any one word of those messages.
Examples
The sentence "the quick red fox jumps over the lazy brown dog" has the following word-level trigrams:
the quick red
quick red fox
red fox jumps
fox jumps over
jumps over the
over the lazy
the lazy brown
lazy brown dog
And the word-level trigram "the quick red" has the following character-level trigrams (where an underscore "_" marks a space):
the
he_
e_q
_qu
qui
uic
ick
ck_
k_r
_re
red
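The sliding-window extraction shown above can be sketched in Python; the helper name `trigrams` is illustrative, not from any particular library:

```python
def trigrams(items):
    """Return the list of overlapping trigrams in a sequence."""
    return [tuple(items[i:i + 3]) for i in range(len(items) - 2)]

sentence = "the quick red fox jumps over the lazy brown dog"

# Word-level trigrams: slide a window of three words across the sentence.
word_trigrams = trigrams(sentence.split())

# Character-level trigrams of one word-level trigram,
# with "_" standing in for spaces, as in the list above.
chars = "the quick red".replace(" ", "_")
char_trigrams = ["".join(t) for t in trigrams(chars)]
```

The same function serves both levels because it only slides a window over a sequence, whether of words or of characters.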
References
Natural language processing
Computational linguistics
Speech recognition |
https://en.wikipedia.org/wiki/Amiga%20Halfbrite%20mode | Extra Half Brite (also referred to as Extra-Half-Brite or Extra-Halfbrite), usually abbreviated as EHB, is a planar display mode of the Amiga computer. This mode uses six bitplanes (six bits/pixel). The first five bitplanes index 32 colors selected from a 12-bit color space (4096 possible colors). If the bit on the sixth bitplane is set, the display hardware halves the brightness of the corresponding color component. This way 64 simultaneous colors are possible (32 arbitrary colors plus 32 half-bright components) while only using 32 color registers. The number of color registers is a hardware limitation of pre-AGA chipsets used in Amiga computers.
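The halving behaviour can be illustrated with a short Python sketch; the function name and the representation of the 12-bit registers as 0xRGB integers are assumptions for illustration, not the hardware's actual register format:

```python
def ehb_palette(registers):
    """Expand 32 twelve-bit colour registers (0xRGB) into the 64 EHB colours.

    Colours 0-31 are the registers themselves; colours 32-63 are the same
    colours with each 4-bit component halved, mimicking what the display
    hardware does when the sixth bitplane's bit is set.
    """
    assert len(registers) == 32
    palette = list(registers)
    for rgb in registers:
        r, g, b = (rgb >> 8) & 0xF, (rgb >> 4) & 0xF, rgb & 0xF
        # Halve each component by shifting right one bit.
        palette.append(((r >> 1) << 8) | ((g >> 1) << 4) | (b >> 1))
    return palette
```

For example, a register holding white (0xFFF) yields mid-grey (0x777) as its half-bright counterpart.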
Some contemporary game titles (Fusion, Defender of the Crown, Agony, Lotus II or Unreal) and animations (HalfBrite Hill) used EHB mode as a hardware-assisted means to display shadows or silhouettes. EHB was also often used as general-purpose 64 color mode with the aforementioned restrictions.
Some early units of the first Amiga model, the Amiga 1000, sold in the United States lack the EHB video mode, which is present in all later Amiga models.
See also
Original Chip Set
Hold-And-Modify
References
External links
Animated demo using Halfbrite mode (requires Java)
Amiga Graphics Archive - Extra Half-Brite
Computer display standards
Amiga
Color depths |
https://en.wikipedia.org/wiki/Autonomous%20system%20%28Internet%29 | An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain, that presents a common and clearly defined routing policy to the Internet. Each AS is assigned an autonomous system number (ASN), for use in Border Gateway Protocol (BGP) routing. Autonomous System Numbers are assigned to Local Internet Registries (LIRs) and end user organizations by their respective Regional Internet Registries (RIRs), which in turn receive blocks of ASNs for reassignment from the Internet Assigned Numbers Authority (IANA). The IANA also maintains a registry of ASNs which are reserved for private use (and should therefore not be announced to the global Internet).
Originally, the definition required control by a single entity, typically an Internet service provider (ISP) or a very large organization with independent connections to multiple networks, that adhered to a single and clearly defined routing policy. In March 1996, the newer definition came into use because multiple organizations can run BGP using private AS numbers to an ISP that connects all those organizations to the Internet. Even though there may be multiple autonomous systems supported by the ISP, the Internet only sees the routing policy of the ISP. That ISP must have an officially registered ASN.
Until 2007, AS numbers were defined as 16-bit integers, which allowed for a maximum of 65,536 assignments. Since then, the IANA has begun to also assign 32-bit AS numbers to regional Internet registries (RIRs). These numbers are preferably written as simple integers, in a notation referred to as "asplain", ranging from 0 to 4,294,967,295 (hexadecimal 0xFFFF FFFF), or alternatively in the form called "asdot+", which looks like x.y, where x and y are 16-bit numbers. Numbers of the form 0.y are exactly the old 16-bit AS numbers. The special 16-bit ASN 23456 ("AS_TRANS") was assigned by IANA as a placeholder for 32-bit ASN values for the case when 32-bit-ASN capable routers ("new BGP speakers") send BGP messages to routers with older BGP software ("old BGP speakers") which do not understand the new 32-bit ASNs.
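The relationship between "asplain" and "asdot+" notation can be sketched in Python; the function names are illustrative:

```python
def asdot_plus(asn):
    """Render a 32-bit AS number in "asdot+" x.y notation."""
    # High 16 bits become x, low 16 bits become y.
    return f"{asn >> 16}.{asn & 0xFFFF}"

def asplain(dotted):
    """Parse "x.y" asdot+ notation back to a plain integer ASN."""
    x, y = (int(part) for part in dotted.split("."))
    return (x << 16) | y
```

For instance, `asdot_plus(65536)` gives `"1.0"`, while an old 16-bit ASN such as 23456 renders as `"0.23456"`, consistent with the rule that 0.y numbers are exactly the old 16-bit AS numbers.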
The first and last ASNs of the original 16-bit integers (0 and 65,535) and the last ASN of the 32-bit numbers (4,294,967,295) are reserved and should not be used by operators; AS0 is used by all five RIRs to invalidate unallocated space. ASNs 64,496 to 64,511 of the original 16-bit range and 65,536 to 65,551 of the 32-bit range are reserved for use in documentation. ASNs 64,512 to 65,534 of the original 16-bit AS range, and 4,200,000,000 to 4,294,967,294 of the 32-bit range are reserved for Private Use.
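As a sketch, the reserved ranges above can be expressed as a classification function; the function name and category labels are illustrative, not part of any registry API:

```python
def asn_category(asn):
    """Classify an AS number per the reserved ranges described above."""
    if asn in (0, 65535, 4294967295):
        return "reserved"          # first/last 16-bit and last 32-bit ASN
    if asn == 23456:
        return "as_trans"          # placeholder for 32-bit ASNs
    if 64496 <= asn <= 64511 or 65536 <= asn <= 65551:
        return "documentation"
    if 64512 <= asn <= 65534 or 4200000000 <= asn <= 4294967294:
        return "private"
    return "public"
```

Anything falling outside the reserved, documentation, and private ranges is treated as an ordinary publicly assignable ASN.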
The number of unique autonomous networks in the routing system of the Internet exceeded 5,000 in 1999, 30,000 in late 2008, 35,000 in mid-2010, 42,000 in late 2012, 54,000 in mid-2016 and 60,000 in early 2018.
The number of allocated ASNs exceeded 100,000 as of March |
https://en.wikipedia.org/wiki/WinNuke | In computer security, WinNuke is an example of a Nuke remote denial-of-service attack (DoS) that affected the Microsoft Windows 95, Microsoft Windows NT, Microsoft Windows 3.1x computer operating systems and Windows 7. The exploit sent a string of out-of-band data (OOB data) to the target computer on TCP port 139 (NetBIOS), causing it to lock up and display a Blue Screen of Death. This does not damage or change the data on the computer's hard disk, but any unsaved data would be lost.
Details
The so-called OOB data simply means that the malicious TCP packet contained an urgent pointer (URG). The urgent pointer is a rarely used field in the TCP header, used to indicate that some of the data in the TCP stream should be processed quickly by the recipient. Affected operating systems did not handle the urgent pointer field correctly.
A person under the screen-name "_eci" published C source code for the exploit on May 9, 1997. With the source code being widely used and distributed, Microsoft was forced to create security patches, which were released a few weeks later. For a time, numerous flavors of this exploit appeared, going by such names as fedup, gimp, killme, killwin, knewkem, liquidnuke, mnuke, netnuke, muerte, nuke, nukeattack, nuker102, pnewq, project1, simportnuke, sprite, sprite32, vconnect, vzmnuker, wingenocide, winnukeit, winnuker02, winnukev95, wnuke3269, wnuke4, and wnuke95.
A company called SemiSoft Solutions from New Zealand created a small program, called AntiNuke, that blocks WinNuke without having to install the official patch.
Years later, a second incarnation of WinNuke that uses another, similar exploit was found.
See also
Ping of death
References
Attacks against TCP
Denial-of-service attacks |
https://en.wikipedia.org/wiki/CIH | CIH or cih may refer to:
CIH (computer virus), also known as Chernobyl and Spacefiller
CIH Bank, a wholly owned subsidiary of the Moroccan Caisse de dépôt et de gestion
Capricorn Investment Holdings, a southern African umbrella for the Capricorn group of companies
Certified Industrial Hygienist, professional credential for occupational hygienists in the United States
IATA code for Changzhi Wangcun Airport
The Chartered Institute of Housing, a UK-based professional society
ISO 639-3 code for the Chinali language
Opel cam-in-head engine, a series of vehicle engines
Chromogenic immunohistochemistry |
https://en.wikipedia.org/wiki/CIH%20%28computer%20virus%29 | CIH, also known as Chernobyl or Spacefiller, is a Microsoft Windows 9x computer virus that first emerged in 1998. Its payload is highly destructive to vulnerable systems, overwriting critical information on infected system drives and, in some cases, destroying the system BIOS. The virus was created by Chen Ing-hau (陳盈豪, pinyin: Chén Yíngháo), a student at Tatung University in Taiwan. It was believed to have infected sixty million computers internationally, resulting in an estimated () in commercial damages.
Chen claimed to have written the virus as a challenge against bold claims of antiviral efficiency by antivirus software developers. Chen stated that after classmates at Tatung University spread the virus, he apologized to the school and made an antivirus program available for public download. Weng Shi-hao (翁世豪), a student at Tamkang University, co-authored the antivirus program. Prosecutors in Taiwan could not charge Chen at the time because no victims came forward with a lawsuit. Nevertheless, these events led to new computer crime legislation in Taiwan.
The name "Chernobyl virus" was coined some time after the virus was already well known as CIH, and refers to the complete coincidence of the payload trigger date in some variants of the virus (actually the virus's creation date in 1998, set to trigger exactly a year later) with the Chernobyl disaster, which happened in the Soviet Union on April 26, 1986.
The name "Spacefiller" was introduced because most viruses write their code to the end of the infected file, with infected files being detectable because their file size increases. In contrast, CIH looks for gaps in the existing program code, where it then writes its code, preventing an increase in file size; in that way, the virus avoids detection.
History
The virus first emerged in 1998. In March 1999, several thousand IBM Aptivas shipped with the CIH virus, just one month before the virus would trigger. In July 1999, copies of the remote administration tool Back Orifice 2000 given out to DEF CON 7 attendees were discovered by the organizers to have been infected with CIH. On December 31, 1999, Yamaha shipped a software update to their CD-R400 drives that was infected with the virus. In July 1998, a demo version of the first-person shooter game SiN was infected by one of its mirror sites.
CIH's dual payload was delivered for the first time on April 26, 1999, with most of the damage occurring in Asia. CIH filled the first 1024 KB of the host's boot drive with zeros and then attacked certain types of BIOS. Both of these payloads served to render the host computer inoperable, and for most ordinary users the virus essentially destroyed the PC. Technically, however, it was possible to replace the BIOS chip, and methods for recovering hard disk data emerged later.
Today, CIH is not as widespread as it once was, due to awareness of the threat and the fact it only affects older Windows 9x (95, 98, ME) operating systems.
The virus made another come |
https://en.wikipedia.org/wiki/Transport%20in%20Oradea | Transport in Oradea is provided by a network of public transport operating trams and buses, as well as roads. Tram and bus services are run by Oradea Transport Local S.A. (commonly known as OTL).
Roads
Tram
There are three tram lines in Oradea, and these run together for most of their journey. The lines are 1, 2 and 3. Lines 1 and 3 run together in a city loop, while Line 2 joins part of this loop in part of its journey. All quarters except Vie are served by trams. Trams do not actually run in the city centre, since this is a historic area with narrow streets. They do, however, run on the border of the city in a loop, and then continue through to all the residential areas and quarters.
Line 1 (1 red, 1R [Roşu], and 1 black, 1N [Negru] (completes the circuit the other way around)) runs from Sinteza Factory, which is located in the industrial west of Oradea, very close to the township of Borş and the Hungarian border, via the quarter of Rogerius, the central railway station, the city centre and then loops back to Rogerius.
Line 2 runs from Ioșia quarter via the southern city centre and the heart of the city (Unirii Square) to Cantemir quarter and then Nufărul.
Line 3 (3 red, 3R [Roşu], and 3 black, 3N [Negru]) (completes the circuit the other way around)) runs from Nufărul and then does the city loop from the Civic Centre onwards, terminating back at the Civic Centre near the main market.
Line 3 was called Line 4 before 2004, and there was no route named Line 3. However, in order to make the line order more logical, Line 4 was renamed Line 3 in 2004.
In 2008 and 2009 10 new Siemens ULF trams were introduced to the Oradea tram system. The first Siemens tram was put in service in April 2008.
In 2018, Oradea took delivery of 10 Tatra KT4D trams from the Berlin transport operator BVG.
The 10th European Tramdriver Championship was held in the city on 3 June 2023.
Tram timetable
http://www.otlra.ro/cms/upload/www.otlra.ro/pagini/ro/linii/up_16.png
Bus
OTL runs the following bus routes in Oradea:
Bus routes
References
External links
Tram and Bus Schedule
Oradea
Oradea |
https://en.wikipedia.org/wiki/400-series%20highways | The 400-series highways are a network of controlled-access highways in the Canadian province of Ontario, forming a special subset of the provincial highway system. They are analogous to the Interstate Highway System in the United States or the Autoroute system of neighbouring Quebec, and are regulated by the Ministry of Transportation of Ontario (MTO). The 400-series designations were introduced in 1952, although Ontario had been constructing divided highways for two decades prior. Initially, only Highways 400, 401 and 402 were numbered; other designations followed in the subsequent decades. The network is situated almost entirely in Southern Ontario, although Highway 400 extends into the more remote northern portion of the province.
Modern 400-series highways have high design standards, speed limits of , with a limit on select stretches, and various collision avoidance and traffic management systems. The design of 400-series highways has set the precedent for a number of innovations used throughout North America, including the parclo interchange and a modified Jersey barrier design known as the Ontario Tall Wall. As a result, they currently experience one of the lowest accident and fatality rates comparative to traffic volume in North America.
History
When the 400-series designations were first applied to Ontario freeways in 1952, several divided highways had already been opened in Southern Ontario. Originally inspired by German Autobahns, Minister of Highways Thomas McQuesten planned a network of "Dual Highways" across the southern half of the province during the 1930s. The Queen Elizabeth Way (QEW) was first, an upgrade to the partially constructed Middle Road in 1934.
McQuesten also sought out the economic opportunities that came with linking Toronto to Detroit and New York state by divided roadways with interchanges at major crossroads. Although he no longer served as Minister of Highways by the onset of World War II, his ambitious plans would come to fruition in the following decades as Highways 400, 401, 402, 403 (between Woodstock and Hamilton), and 405.
The construction boom following the war resulted in many new freeway construction projects in the province. The Toronto–Barrie Highway (Highway 400), Trans-Provincial Highway (Highway 401), a short expansion of Highway 7 approaching the Blue Water Bridge in Sarnia (Highway 402), and an expansion of Highway 27 (eventually designated as Highway 427 by the mid-1970s) into part of the Toronto Bypass were all underway or completed by the early 1950s. Seeking a way to distinguish the controlled-access freeways from the existing two-lane King's Highways, the Department of Highways created the 400-series designations in 1952. By the end of the year, Highways 400, 401, and 402 were numbered, although they were only short stubs of their current lengths. Highway 401 was assembled across the province in a patchwork fashion, becoming fully navigable between Windsor and the Quebec border on Novembe |
https://en.wikipedia.org/wiki/Hopewell%20tradition | The Hopewell tradition, also called the Hopewell culture and Hopewellian exchange, describes a network of precontact Native American cultures that flourished in settlements along rivers in the northeastern and midwestern Eastern Woodlands from 100 BCE to 500 CE, in the Middle Woodland period. The Hopewell tradition was not a single culture or society but a widely dispersed set of populations connected by a common network of trade routes.
At its greatest extent, the Hopewell exchange system ran from the northern shores of Lake Ontario south to the Crystal River Indian Mounds in modern-day Florida. Within this area, societies exchanged goods and ideas, with the highest amount of activity along waterways, which were the main transportation routes. Peoples within the Hopewell exchange system received materials from all over the territory of what now comprises the mainland United States. Most of the items traded were exotic materials; they were delivered to peoples living in the major trading and manufacturing areas. These people converted raw materials into products and exported them through local and regional exchange networks. Hopewell communities traded finished goods, such as steatite platform pipes, far and wide; they have been found among grave goods in many burials outside the Midwest.
Origins
Although the origins of the Hopewell are still under discussion, the Hopewell culture can also be considered a cultural climax.
Hopewell populations may have originated in western New York and moved south into Ohio, where they built upon the local Adena mortuary tradition; alternatively, Hopewell has been said to have originated in western Illinois and to have spread "by diffusion... to southern Ohio". Similarly, the Havana Hopewell tradition was thought to have spread up the Illinois River and into southwestern Michigan, spawning Goodall Hopewell. (Dancey 114)
American archaeologist Warren K. Moorehead popularized the term Hopewell after his 1891 and 1892 explorations of the Hopewell Mound Group in Ross County, Ohio. The mound group was named after Mordecai Hopewell, whose family then owned the property where the earthworks are sited. What any of the various peoples now classified as Hopewellian called themselves is unknown; indeed, what language families they spoke is unknown. Archaeologists applied the term "Hopewell" to a broad range of cultures. Many of the Hopewell communities were temporary settlements of one to three households near rivers. They practiced a mixture of hunting, gathering, and horticulture.
Politics and hierarchy
The Hopewell inherited from their Adena forebears an incipient social stratification. This increased social stability and reinforced sedentism, specialized use of resources, and probably population growth. Hopewell societies cremated most of their deceased and reserved burial for only the most important people. In some sites, hunters apparently were given a higher status in the community: their graves were more elaborate and contained more status goo |