https://en.wikipedia.org/wiki/Phil%20Katz
Phillip Walter Katz (November 3, 1962 – April 14, 2000) was a computer programmer best known as the co-creator of the Zip file format for data compression, and the author of PKZIP, a program for creating zip files that ran under DOS. A copyright lawsuit between System Enhancement Associates (SEA) and Katz's company, PKWARE, Inc., was widely publicized in the BBS community in the late 1980s. Phil Katz's software business was very successful, but he struggled with social isolation and chronic alcoholism in the last years of his life. Career Phil Katz was a graduate of Nicolet High School in Glendale, Wisconsin. Katz graduated from the Computer Science Engineering program at the University of Wisconsin–Milwaukee. After his graduation, he was hired by the Allen-Bradley company as a programmer. He wrote code to run programmable logic controllers, which operated manufacturing equipment on shop floors worldwide for Allen-Bradley's customers. PKARC and PKWARE Katz left Allen-Bradley in 1986 to work for Graysoft, a Milwaukee-based software company. At the time, he was working on an alternative to Thom Henderson's ARC, named PKARC. ARC was written in C, with the source code available on System Enhancement Associates' bulletin board system (BBS). PKARC, written partially in assembly language, was much faster. Katz had a special flair for optimizing code: besides writing critical code in assembly language, he would write C code to perform the same task in several different ways and then examine the compiler output to see which produced the most efficient assembly code. He first publicly released only PKXARC, an extraction program, as freeware. Its much greater speed caused it to spread very quickly throughout the BBS community. Strong positive feedback and encouragement prompted Katz to release his compression program, PKARC, and eventually to make his software shareware. Katz founded PKWARE, Inc. 
(Phil Katz Software) in 1986, with the company's operations located in his home in Glendale, Wisconsin, but he remained at Graysoft until 1987. Steve Burg, a former Graysoft programmer, joined PKWARE in 1988. PKZIP PKZIP made Katz one of the most well-known shareware authors of all time. Although PKWARE became a multimillion-dollar company, Katz was more noted for his technical expertise than his business prowess. His family assisted him in running the company, but he eventually fired them when they denied him access to the company's profits. Katz was adamantly opposed to Microsoft Windows in the early 1990s. This led to PKWARE missing out on the opportunity to be the first to bring PKZIP to Windows, with WinZip becoming the standard tool on the platform instead. Lawsuits In the late 1980s, a dispute arose between System Enhancement Associates (SEA), maker of the ARC program, and PKWARE. SEA sued Katz for trademark and copyright infringement. The most substantial evidence at trial was from an independent software expert, John Navas, who was appointed by t
https://en.wikipedia.org/wiki/HSV
HSV may refer to:
Computing
HSL and HSV, color models that describe colors by hue, saturation, and lightness (HSL) or value (HSV)
Virology
Herpes simplex virus (HSV), a virus spread by skin-to-skin contact and contact with herpes sores; it can be transmitted by kissing
Places
Huntsville, Alabama, United States
Huntsville International Airport
Sport
Hamburger SV, a German football club
HSV Handball, a German handball club in Hamburg
Hannover 96, or Hannoverscher Sportverein von 1896, a German football club
Other uses
HSV (TV station), broadcasting in Melbourne, Australia
Hennessey Special Vehicles, a recently established American automobile division of Hennessey
Holden Special Vehicles, an Australian automobile manufacturer
https://en.wikipedia.org/wiki/Economy%20of%20Belgium
The economy of Belgium is a highly developed, high-income, mixed economy. Belgium's economy has capitalised on the country's central geographic location, well-developed transport network, and diversified industrial and commercial base. Belgium was the first European country to join the Industrial Revolution in the early 19th century. It has since built an extensive transportation infrastructure made up of ports (most notably the Port of Antwerp), canals, railways, and highways, in order to integrate its industry with that of its neighbours. Belgium's industry is concentrated mainly in the populous region of Flanders in the north, around Brussels and in the two biggest Walloon cities, Liège and Charleroi, along the Sillon industriel. Belgium imports raw materials and semi-finished goods that are further processed and re-exported. Except for its coal, which is no longer economical to exploit, Belgium has few natural resources other than fertile soils. Despite the heavy industrial component, services dominate the country's economy and account for 77.2% of Belgium's gross domestic product (GDP), while agriculture accounts for 0.7%. With exports equivalent to over two-thirds of the country's gross national income (GNI), Belgium depends heavily on world trade. Belgium's trade advantages are derived from its central geographic location and a highly skilled, multilingual, and productive work force. One of the founding members of the European Community, Belgium strongly supports deepening the powers of the present-day European Union (EU) to integrate European economies further. About three-quarters of its trade is with other EU countries. Belgium began circulating the euro currency in January 2002. In 2021, Belgium's public debt was about 108% of the country's gross domestic product (GDP). 
History In the twentieth century For 50 years through World War II, French-speaking Wallonia was a technically advanced, industrial region, with its industry concentrated along the sillon industriel, while Dutch-speaking Flanders was predominantly agricultural with some industry, mainly processing agricultural products and textiles. This disparity began to fade during the interwar period. When Belgium emerged from World War II with its industrial infrastructure relatively undamaged thanks to the Galopin doctrine, the stage was set for a period of rapid development, particularly in Flanders. The postwar boom years, enhanced by the establishment of the European Union and NATO headquarters in Brussels, contributed to the rapid expansion of light industry throughout most of Flanders, particularly along a corridor stretching between Brussels and Antwerp, which is the second largest port in Europe after Rotterdam. Foreign investment contributed significantly to Belgian economic growth in the 1960s. In particular, U.S. firms played a leading role in the expansion of light industrial and petrochemical industries in the 1960s and 1970s. The olde
https://en.wikipedia.org/wiki/Telecommunications%20in%20Andorra
The telephone system in Andorra, including mobile, data and Internet, is operated exclusively by the Andorran national telecommunications company, Andorra Telecom, formerly known as Servei de Telecomunicacions d'Andorra (STA). The same company is also responsible for managing the technical infrastructure and national broadcasting networks for radio and television, both analogue and digital. At one time, Andorra shared the country code of France (+33), and also had a special routing code for calls from Spain, but now has its own country calling code, 376.
Telephone
Telephones - main lines in use: 37,200 (2007); country comparison to the world: 171
Telephones - mobile cellular: 68,500 (2007); country comparison to the world: 187
Telephone system: domestic: modern system with microwave radio relay connections between exchanges; international: landline circuits to France and Spain
Radio
Radio broadcast stations: AM 0, FM 1, shortwave 0 (easy access to radio and television broadcasts originating in France and Spain) (2007)
There are two abandoned high-power mediumwave broadcasting facilities, situated at Encamp and on Pic Blanc.
Radios: 16,000 (1997)
Television
Television broadcast stations: 1 (2007)
Televisions: 27,000 (1997)
As announced on 25 September 2007, all analogue transmissions ceased. Television services are now provided by Televisió Digital Terrestre (TDT), which, as well as broadcasting the one Andorran channel, broadcasts channels from Spain and France.
Internet
Internet access is available only through the national telephone company, Andorra Telecom (formerly STA). Access was first provided in the 1990s by dial-up, but this has since been mostly replaced throughout the country by ADSL at a fixed speed of 2 Mbit/s, and in metropolitan areas of the country by fibre to the home at a fixed speed of 100 Mbit/s. The average cost of a high-speed internet connection is €35.35.
The whole country was to have fibre to the home at a minimum speed of 100 Mbit/s by 2010, and availability was complete in June 2012, although actual bandwidth available to the end user never exceeds 10 Mbit/s.
Internet service providers (ISPs): 1
Internet hosts: 23,368 (2008); country comparison to the world: 90
Internet users: 58,900 (2007); country comparison to the world: 161
Country codes: AD (1997)
Country calling code: 376
https://en.wikipedia.org/wiki/Transport%20in%20Argentina
Transport in Argentina is mainly based on a complex network of routes, crossed by relatively inexpensive long-distance buses and by cargo trucks. The country also has a number of national and international airports. The importance of the long-distance train is minor today, though in the past it was widely used and is now regaining momentum after the re-nationalisation of the country's commuter and freight networks. Fluvial transport is mostly used for cargo. Within urban areas, the main transportation system is the bus or colectivo; bus lines transport millions of people every day in the larger cities and their metropolitan areas, as does a bus rapid transit system known as Metrobus. Buenos Aires additionally has an underground, the only one in the country, and Greater Buenos Aires is serviced by a system of suburban trains. Public transportation A majority of people use public transport rather than personal cars to move around in the cities, especially in common business hours, since parking can be both difficult and expensive. Cycling is becoming increasingly common in big cities as a result of a growing network of cycling lanes in cities like Buenos Aires and Rosario. Bus Colectivos (urban buses) cover the cities with numerous lines. Fares might be fixed for the whole city, or they might depend on the destination. Colectivos often cross municipal borders into the corresponding metropolitan areas. In some cases there are diferenciales (special services) which are faster, and notably more expensive. Bus lines in a given city might be run by different private companies and/or by the municipal state, and they might be painted in different colours for easier identification. The city of Buenos Aires has in recent years been expanding its Metrobus BRT system to complement its existing Underground network, and it is estimated that, along with other measures, it will increase the city's use of public transport by 30 percent. 
Taxi Taxis are very common and relatively accessible price-wise. They have different colours and fares in different cities, though a highly contrasted black-and-yellow design is common to the largest conurbations. Call-taxi companies (radio-taxis) are very common, while the remisse is another form of hired transport: remisses are very much like call-taxis, but do not share a common design, and trip fares are agreed beforehand instead of using the meter; there are, however, often fixed prices for common destinations. Commuter rail Suburban trains connect Buenos Aires city with the Greater Buenos Aires area (see: Buenos Aires commuter rail network). Every weekday, more than 1.4 million people commute to the Argentine capital for work and other business. These suburban trains run between 4 AM and 1 AM. The busiest lines are electric and several are diesel-powered; some of the latter are currently being electrified, and the rolling stock is being replaced across the city. Until recently, Trenes de Buenos Aires, UG
https://en.wikipedia.org/wiki/Transport%20in%20Bahrain
Transport in Bahrain encompasses road transportation by car, air transportation and shipping. It has been announced that a monorail network will be constructed. The country traditionally had one of the cheapest prices for gasoline at $0.78 per gallon ($0.21 per litre). Due to massive budgetary deficits and low oil prices, the Bahraini government increased the price of gasoline in 2016–2017 to $0.37 per litre. Road transport The widening of roads in the old districts of Manama and the development of a national network linking the capital to other settlements commenced as early as the arrival of the first car in 1914. The continuous increase in the number of cars, from 395 in 1944, to 3,379 in 1954 and to 18,372 in 1970, caused urban development to focus primarily on expanding the road network, widening carriageways and establishing more parking spaces. Many tracks previously laid in the pre-oil era (prior to the 1930s) were resurfaced and widened, turning them into 'road arteries'. Initial widening of the roads started in the Manama Souq district, widening its main roads by demolishing encroaching houses. A series of ring roads were constructed (Isa al Kabeer avenue in the 1930s, Exhibition avenue in the 1960s and Al Fateh highway in the 1980s), to push back the coastline and extend the city area in belt-like forms. To the north, the foreshore used to be around Government Avenue in the 1920s, but it shifted to a new road, King Faisal Road, in the early 1930s, which became the coastal road. To the east, a bridge had connected Manama to Muharraq since 1929; a new causeway built in 1941 replaced the old wooden bridge. Transits between the two islands peaked after the construction of the Bahrain International Airport in 1932. To the south of Manama, roads connected groves, lagoons and marshes of Hoora, Adliya, Gudaibiya and Juffair. Villages such as Mahooz, Ghuraifa and Seqaya served as the ends of these roads. 
To the west, a major highway was built that linked Manama to the isolated village port of Budaiya; this highway crossed through the 'green belt' villages of Sanabis, Jidhafs and Duraz. To the south, a road was built that connected Manama to Riffa. The discovery of oil accelerated the growth of the city's road network. The four main islands and all the towns and villages are linked by well-constructed roads. There were of roadways, of which were paved. Multiple causeways stretching over , connect Manama with Muharraq Island, and the Sitra Causeway joins Sitra to the main island. A four-lane highway atop a causeway, linking Bahrain with the Saudi Arabian mainland via the island of Umm an-Nasan, was completed in December 1986 and financed by Saudi Arabia. Private vehicles and taxis are the primary means of transportation in the city. Bahrain changed from driving on the left to driving on the right in November 1967. International highways King Fahd Causeway, measuring connects Bahrain and Saudi Arabia through a multipl
https://en.wikipedia.org/wiki/IPC
IPC may refer to:
Computing
Infrastructure protection centre, or information security operations center
Instructions per cycle, or instructions per clock, an aspect of central-processing performance
Inter-process communication, the sharing of data across multiple and commonly specialized processes
Industrial PC, an x86 PC-based computing platform for industrial applications
IP camera
Education
Polytechnical Institute of Coimbra, Portugal
International Pacific College, Palmerston North, New Zealand
International People's College, Denmark
Organizations
Idaho Power Company, a regulated electrical power utility in the United States
IPC (electronics), an international trade association for the printed-board and electronics assembly industries
IPC Healthcare, a US healthcare corporation
Imperial Privy Council, another name for the Privy Council of the United Kingdom
IPC Systems, a firm providing communication systems for financial markets
Immigration Policy Center, the research and policy arm of the American Immigration Council
Indian Pentecostal Church of God, the largest indigenous Pentecostal movement in India
Intellectual Property Committee, a coalition of US corporations with intellectual property interests
International Panorama Council, an international network of specialists in the field of panoramas
International Paralympic Committee, an international non-profit organisation of elite sports for athletes with disabilities
International Pepper Community, an intergovernmental commodity organization
International Post Corporation
International Presbyterian Church
International Publishing Company, a former name of TI Media
IPC Media, another former name of TI Media
Iraq Petroleum Company
Institut Pasteur du Cambodge, a health institute in Cambodia
Other uses
Infection prevention and control
Indian Penal Code
Índice de Precios al Consumidor, a Chilean consumer price index
Índice de Precios y Cotizaciones, an index of the Mexican Stock Exchange
Information and Privacy Commissioner of Ontario, an officer of the Legislative Assembly of Ontario in Canada
Insulation-piercing connector
International Patent Classification, a classification system
Integrated Food Security Phase Classification, a tool for improving food security analysis and decision-making
Integrated pest control, a discipline for combining biological and other pest management strategies
Mataveri International Airport's IATA code
International Plumbing Code
Investigatory Powers Commissioner
https://en.wikipedia.org/wiki/Busy%20beaver
In theoretical computer science, the busy beaver game aims at finding a terminating program of a given size that produces the most output possible. Since an endlessly looping program producing infinite output is easily conceived, such programs are excluded from the game. More precisely, the busy beaver game consists of designing a halting Turing machine with alphabet {0,1} which writes the most 1s on the tape, using only a given set of states. The rules for the 2-state game are as follows: the machine must have at most two states in addition to the halting state, and the tape initially contains 0s only. A player should conceive a transition table aiming for the longest output of 1s on the tape while making sure the machine will halt eventually. An nth busy beaver, BB-n or simply "busy beaver" is a Turing machine that wins the n-state busy beaver game. That is, it attains the largest number of 1s among all other possible n-state competing Turing machines. The BB-2 Turing machine, for instance, achieves four 1s in six steps. Determining whether an arbitrary Turing machine is a busy beaver is undecidable. This has implications in computability theory, the halting problem, and complexity theory. The concept was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions". The game The n-state busy beaver game (or BB-n game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications: The machine has n "operational" states plus a Halt state, where n is a positive integer, and one of the n states is distinguished as the starting state. (Typically, the states are labelled by 1, 2, ..., n, with state 1 as the starting state, or by A, B, C, ..., with state A as the starting state.) The machine uses a single two-way infinite (or unbounded) tape. The tape alphabet is {0, 1}, with 0 serving as the blank symbol. 
The machine's transition function takes two inputs: the current non-Halt state and the symbol in the current tape cell, and produces three outputs: a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten), a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and a state to transition into (which may be the Halt state). There are thus (4n + 4)^(2n) n-state Turing machines meeting this definition, because the general form of the formula is (symbols × directions × (states + 1))^(symbols × states). The transition function may be seen as a finite table of 5-tuples, each of the form (current state, current symbol, symbol to write, direction of shift, next state). "Running" the machine consists of starting in the starting state, with the current tape cell being any cell of a blank (all-0) tape, and then iterating the transition function until the Halt state is entered (if ever). If, and only if, the machine eventually
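The 5-tuple definition above can be made concrete with a short simulator. The sketch below (function and variable names are illustrative, not from the source) encodes the standard 2-state busy beaver table and runs it on a blank tape, reproducing the figures quoted earlier: four 1s written in six steps.

```python
# Minimal Turing machine simulator for the busy beaver game.
# Tape alphabet is {0, 1}, with 0 as the blank symbol; the tape is
# stored sparsely, so unset cells implicitly read as 0.

def run_turing_machine(table, start_state="A", halt_state="H", max_steps=10_000):
    """Run a machine from an all-0 tape until it halts.

    `table` maps (state, symbol) -> (write, move, next_state), where
    move is -1 (left) or +1 (right).  Returns (ones_on_tape, steps)
    on halting, or None if the step budget runs out.
    """
    tape = {}
    pos = 0
    state = start_state
    for step in range(1, max_steps + 1):
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == halt_state:
            return sum(tape.values()), step
    return None  # did not halt within the budget

# BB-2 transition table: (state, symbol) -> (write, move, next_state).
bb2 = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

print(run_turing_machine(bb2))  # (4, 6): four 1s in six steps
```

The sparse-dictionary tape is one simple way to model the two-way infinite tape: only cells that have been written occupy memory, and everything else reads as the blank.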
https://en.wikipedia.org/wiki/Ron%20Rivest
Ronald Linn Rivest (; born May 6, 1947) is a cryptographer and computer scientist whose work has spanned the fields of algorithms and combinatorics, cryptography, machine learning, and election integrity. He is an Institute Professor at the Massachusetts Institute of Technology (MIT), and a member of MIT's Department of Electrical Engineering and Computer Science and its Computer Science and Artificial Intelligence Laboratory. Along with Adi Shamir and Len Adleman, Rivest is one of the inventors of the RSA algorithm. He is also the inventor of the symmetric key encryption algorithms RC2, RC4, and RC5, and co-inventor of RC6. (RC stands for "Rivest Cipher".) He also devised the MD2, MD4, MD5 and MD6 cryptographic hash functions. Education Rivest earned a Bachelor's degree in Mathematics from Yale University in 1969, and a Ph.D. degree in Computer Science from Stanford University in 1974 for research supervised by Robert W. Floyd. Career At MIT, Rivest is a member of the Theory of Computation Group, and founder of MIT CSAIL's Cryptography and Information Security Group. Rivest was a founder of RSA Data Security (now merged with Security Dynamics to form RSA Security), Verisign, and of Peppercoin. His former doctoral students include Avrim Blum, Benny Chor, Sally Goldman, Burt Kaliski, Anna Lysyanskaya, Ron Pinter, Robert Schapire, Alan Sherman, and Mona Singh. Research Rivest is especially known for his research in cryptography. He has also made significant contributions to algorithm design, to the computational complexity of machine learning, and to election security. Cryptography The publication of the RSA cryptosystem by Rivest, Adi Shamir, and Leonard Adleman in 1978 revolutionized modern cryptography by providing the first usable and publicly described method for public-key cryptography. The three authors won the 2002 Turing Award, the top award in computer science, for this work. 
The award cited "their ingenious contribution to making public-key cryptography useful in practice". The same paper that introduced this cryptosystem also introduced Alice and Bob, the fictional heroes of many subsequent cryptographic protocols. In the same year, Rivest, Adleman, and Michael Dertouzos first formulated homomorphic encryption and its applications in secure cloud computing, an idea that would not come to fruition until over 40 years later when secure homomorphic encryption algorithms were finally developed. Rivest was one of the inventors of the GMR public signature scheme, published with Shafi Goldwasser and Silvio Micali in 1988, and of ring signatures, an anonymized form of group signatures invented with Shamir and Yael Tauman Kalai in 2001. He designed the MD4 and MD5 cryptographic hash functions, published in 1990 and 1992 respectively, and a sequence of symmetric key block ciphers that include RC2, RC4, RC5, and RC6. Other contributions of Rivest to cryptography include chaffing and winnowing, the interlock protocol for authenticating ano
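The textbook form of the RSA scheme described above can be sketched in a few lines of modular arithmetic. This is an illustrative toy only: the primes, exponent, and message below are tiny example values (not from the source), and real RSA uses primes hundreds of digits long plus a padding scheme.

```python
# Toy RSA: key generation, encryption, and decryption with small primes.
p, q = 61, 53                 # two secret primes (far too small for real use)
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, chosen coprime to phi
d = pow(e, -1, phi)           # private exponent, the modular inverse of e
                              # (three-argument pow with -1 needs Python 3.8+)

message = 65                  # plaintext encoded as an integer < n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message
```

Decryption works because e·d ≡ 1 (mod φ(n)), so raising to the power e and then d returns every residue to itself modulo n.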
https://en.wikipedia.org/wiki/Workstation
A workstation is a special computer designed for technical or scientific applications. Intended primarily to be used by a single user, they are commonly connected to a local area network and run multi-user operating systems. The term workstation has been used loosely to refer to everything from a mainframe computer terminal to a PC connected to a network, but the most common form refers to the class of hardware offered by several current and defunct companies such as Sun Microsystems, Silicon Graphics, Apollo Computer, DEC, HP, NeXT, and IBM which powered the 3D computer graphics revolution of the late 1990s. Workstations formerly offered higher performance than mainstream personal computers, especially in CPU, graphics, memory, and multitasking. Workstations are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulations like computational fluid dynamics, animation, medical imaging, image rendering, and mathematical plots. Typically, the form factor is that of a desktop computer, which consists of a high-resolution display, a keyboard, and a mouse at a minimum, but also offers multiple displays, graphics tablets, and 3D mice for manipulating objects and navigating scenes. Workstations were the first segment of the computer market to present advanced accessories and collaboration tools like videoconferencing. The increasing capabilities of mainstream PCs since the late 1990s have reduced the distinction between PCs and workstations. Typical 1980s workstations had expensive proprietary hardware and operating systems that categorically distinguished them from standardized PCs. In the 1990s and 2000s, IBM's RS/6000 and IntelliStation workstations had RISC-based POWER CPUs running AIX, while its IBM PC Series and Aptiva corporate and consumer PCs had Intel x86 CPUs. 
However, by the early 2000s, this difference had largely disappeared, as workstations came to use highly commoditized hardware dominated by large PC vendors, such as Dell, Hewlett-Packard, and Fujitsu, selling x86-64 systems running Windows or Linux. History Origins and development Perhaps the first computer that might qualify as a workstation was the IBM 1620, a small scientific computer designed to be used interactively by a single person sitting at the console. It was introduced in 1959. One peculiar feature of the machine was that it lacked any arithmetic circuitry. To perform addition, it required a memory-resident table of decimal addition rules. This reduced the cost of logic circuitry, enabling IBM to make it inexpensive. The machine was codenamed CADET and was initially rented for $1,000 per month. In 1965, the IBM 1130 scientific computer became the successor to the 1620. Both of these systems ran Fortran and other languages. They were built into roughly desk-sized cabinets, with console typewriters. They had optional add-on disk drives, printers, and both paper-tape and punched-card I/O. Early workstations were generally dedicated
https://en.wikipedia.org/wiki/Justin%20Frankel
Justin Frankel (born 1978) is an American computer programmer best known for his work on the Winamp media player application and for inventing the Gnutella peer-to-peer network. Frankel is also the founder of Cockos Incorporated, which creates music production and development software such as the REAPER digital audio workstation, the NINJAM collaborative music tool and the Jesusonic expandable effects processor. In 2002, he was named in the MIT Technology Review TR100 as one of the top 100 innovators in the world under the age of 35. Early life Justin Frankel was born in 1978 and grew up in Sedona, Arizona. Frankel had an aptitude for computers at an early age. His skill eventually led him to running the student computer network of Verde Valley School, which he attended, as well as writing an email application for the students. Winamp After graduating high school with a 2.9 GPA, he attended the University of Utah in 1996, where he majored in computer science, but dropped out after two quarters. A few months later, he released the first version of WinAMP under his newly formed company's name, Nullsoft. By 1998, more than fifteen million people had downloaded the program. Since many people had sent in the $10 donation suggested in return for using the program, Frankel earned tens of thousands of dollars a month. Frankel, along with Tom Pepper (who played a big part in Winamp's development and distribution), later completed SHOUTcast, which allowed ordinary users with an Internet connection to broadcast, or "stream", audio over the Internet. He also created the Advanced Visualization Studio, a plugin for Winamp which enabled users to create their own music visualizations in real-time, without any programming knowledge required. Sale of Nullsoft to AOL In June 1999 AOL simultaneously acquired Nullsoft and Spinner.com in a combined purchase worth approximately $400 million. 
In a July 21, 1999 SEC filing by AOL, the transaction was recorded as a payment of 2,863,053 shares of AOL common stock to the 54 stockholders in the two companies being acquired. On July 20, 1999, the last reported sale price for AOL common stock was $113.1875 per share. Frankel's stake of 522,661 shares in the acquisition was worth approximately $59 million. AOL On March 14, 2000, Frankel and Nullsoft colleague Tom Pepper released gnutella, a public peer-to-peer file-sharing application, using Nullsoft's corporate web servers, without AOL's knowledge. Gnutella was a new peer-to-peer file-sharing system along the lines of the original Napster, which let users share their MP3 collections with everyone who ran a Napster client. Unlike Napster, however, gnutella allowed users to share any type of file, not just MP3s. It also did not have the single point of failure that Napster had: centralized servers that indexed where all the shared content was stored. Whereas Napster could be (and was) shut off just by turning off the centralized index servers owned by Napster,
https://en.wikipedia.org/wiki/Desktop%20environment
In computing, a desktop environment (DE) is an implementation of the desktop metaphor made of a bundle of programs running on top of a computer operating system that share a common graphical user interface (GUI), sometimes described as a graphical shell. The desktop environment was seen mostly on personal computers until the rise of mobile computing. Desktop GUIs help the user to easily access and edit files, while they usually do not provide access to all of the features found in the underlying operating system. Instead, the traditional command-line interface (CLI) is still used when full control over the operating system is required. A desktop environment typically consists of icons, windows, toolbars, folders, wallpapers and desktop widgets (see Elements of graphical user interfaces and WIMP). A GUI might also provide drag and drop functionality and other features that make the desktop metaphor more complete. A desktop environment aims to be an intuitive way for the user to interact with the computer using concepts which are similar to those used when interacting with the physical world, such as buttons and windows. While the term desktop environment originally described a style of user interfaces following the desktop metaphor, it has also come to describe the programs that realize the metaphor itself. This usage has been popularized by projects such as the Common Desktop Environment, K Desktop Environment, and GNOME. Implementation On a system that offers a desktop environment, a window manager in conjunction with applications written using a widget toolkit are generally responsible for most of what the user sees. The window manager supports the user interactions with the environment, while the toolkit provides developers a software library for applications with a unified look and behavior. A windowing system of some sort generally interfaces directly with the underlying operating system and libraries. 
This provides support for graphical hardware, pointing devices, and keyboards. The window manager generally runs on top of this windowing system. While the windowing system may provide some window management functionality, this functionality is still considered to be part of the window manager, which simply happens to have been provided by the windowing system. Applications that are created with a particular window manager in mind usually make use of a windowing toolkit, generally provided with the operating system or window manager. A windowing toolkit gives applications access to widgets that allow the user to interact graphically with the application in a consistent way. History and common use The first desktop environment was created by Xerox and was sold with the Xerox Alto in the 1970s. The Alto was generally considered by Xerox to be a personal office computer; it failed in the marketplace because of poor marketing and a very high price tag. With the Lisa, Apple introduced a desktop environment on an affordable personal compute
https://en.wikipedia.org/wiki/Peter%20Shor
Peter Williston Shor (born August 14, 1959) is an American professor of applied mathematics at MIT. He is known for his work on quantum computation, in particular for devising Shor's algorithm, a quantum algorithm for factoring exponentially faster than the best currently known algorithm running on a classical computer. Early life and education Shor was born in New York City to Joan Bopp Shor and S. W. Williston Shor. He grew up in Washington, D.C. and Mill Valley, California. While attending Tamalpais High School, he placed third in the 1977 USA Mathematical Olympiad. After graduation that year, he won a silver medal at the International Math Olympiad in Yugoslavia (the U.S. team achieved the most points per country that year). He received his B.S. in Mathematics in 1981 for undergraduate work at Caltech, and was a Putnam Fellow in 1978. He earned his PhD in Applied Mathematics from MIT in 1985. His doctoral advisor was F. Thomson Leighton, and his thesis was on probabilistic analysis of bin-packing algorithms. Career After being awarded his PhD by MIT, he spent one year as a postdoctoral researcher at the University of California, Berkeley, and then accepted a position at Bell Labs in New Providence, New Jersey. It was there he developed Shor's algorithm, a development inspired by Simon's problem. He first solved the discrete log problem (which relates point-finding on a hypercube to a torus) and, as he recalled, "Later that week, I was able to solve the factoring problem as well. There's a strange relation between discrete log and factoring." Exploiting their similarity as hidden subgroup problems, Shor devised his factoring algorithm that same week, for which he was awarded the Nevanlinna Prize at the 23rd International Congress of Mathematicians in 1998 and the Gödel Prize in 1999. In 1999, he was awarded a MacArthur Fellowship. In 2017, he received the Dirac Medal of the ICTP and for 2019 the BBVA Foundation Frontiers of Knowledge Award in Basic Sciences.
Shor began his MIT position in 2003. Currently, he is the Henry Adams Morss and Henry Adams Morss, Jr. Professor of Applied Mathematics in the Department of Mathematics at MIT. He also is affiliated with CSAIL and the MIT Center for Theoretical Physics (CTP). He received a Distinguished Alumni Award from Caltech in 2007. On October 1, 2011, he was inducted into the American Academy of Arts and Sciences. He was elected as an ACM Fellow in 2019 "for contributions to quantum-computing, information theory, and randomized algorithms". He was elected as a member of the National Academy of Sciences in 2002. In 2020, he was elected a member of the National Academy of Engineering for pioneering contributions to quantum computation. In an interview published in Nature on October 30, 2020, Shor said that he considers post-quantum cryptography to be a solution to the quantum threat, although a lot of engineering effort is required to switch from vulnerable algorithms. Along with three oth
https://en.wikipedia.org/wiki/Computer%20chess
Computer chess includes both hardware (dedicated computers) and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Computer chess applications that play at the level of a chess grandmaster or higher are available on hardware from supercomputers to smartphones. Standalone chess-playing machines are also available. Stockfish, GNU Chess, Fruit, and other free open source applications are available for various platforms. Computer chess applications, whether implemented in hardware or software, utilize different strategies than humans to choose their moves: they use heuristic methods to build, search and evaluate trees representing sequences of moves from the current position and attempt to execute the best such sequence during play. Such trees are typically quite large, thousands to millions of nodes. The computational speed of modern computers, capable of processing tens of thousands to hundreds of thousands of nodes or more per second, along with extension and reduction heuristics that narrow the tree to mostly relevant nodes, makes such an approach effective. The first machines capable of playing chess or reduced chess-like games were software programs running on digital computers early in the vacuum-tube computer age (1950s). The early programs played so poorly that even a beginner could defeat them. Within 40 years, in 1997, chess engines running on supercomputers or specialized hardware were capable of defeating even the best human players. By 2006, programs running on desktop PCs had attained the same capability. In 2006, Monty Newborn, Professor of Computer Science at McGill University, declared: "the science has been done". Nevertheless, solving chess is not currently possible for modern computers due to the game's extremely large number of possible variations.
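The tree search described above is conventionally implemented as minimax with alpha-beta pruning. A minimal Python sketch over a toy game tree (nested lists with numeric leaves; not an actual chess engine) illustrates how pruning skips subtrees that cannot affect the result:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    # Evaluate a game tree given as nested lists; leaves are static scores.
    if isinstance(node, (int, float)):
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:
            break  # prune: this subtree cannot change the final choice
    return best

# Maximizing root over two min-level children; minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree, True))  # 6
```

Real engines add the extension and reduction heuristics the text mentions on top of this core loop; this sketch shows only the pruning itself.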
Computer chess was once considered the "Drosophila of AI", the edge of knowledge engineering. The field is now considered a scientifically completed paradigm, and playing chess is a mundane computing activity. Availability and playing strength Chess machines/programs are available in several different forms: stand-alone chess machines (usually a microprocessor running a software chess program, but sometimes as a specialized hardware machine), software programs running on standard PCs, web sites, and apps for mobile devices. Programs run on everything from supercomputers to smartphones. Hardware requirements for programs are minimal; the apps are no larger than a few megabytes on disk, use a few megabytes of memory (but can use much more, if it is available), and any processor of 300 MHz or faster is sufficient. Performance will vary modestly with processor speed, but sufficient memory to hold a large transposition table (up to several gigabytes or more) is more important to playing strength than processor speed. M
https://en.wikipedia.org/wiki/Peripheral%20unit
Peripheral unit may refer to: Peripheral unit (country subdivision) Peripheral, computer hardware
https://en.wikipedia.org/wiki/Singleton
Singleton may refer to: Sciences, technology Mathematics Singleton (mathematics), a set with exactly one element Singleton field, used in conformal field theory Computing Singleton pattern, a design pattern that allows only one instance of a class to exist Singleton bound, used in coding theory Singleton variable, a variable that is referenced only once Singleton, a character encoded with one unit in variable-width encoding schemes for computer character sets Singleton, an empty tag or self-closing tag in XHTML or XML coding Social science Singleton (global governance), a hypothetical world order with a single decision-making agency Singleton, a consonant that is not a geminate in linguistics Singleton, a person that is not a twin or other multiple birth People Singleton (surname), for a partial list of people with the surname "Singleton" Places United Kingdom Singleton, Lancashire, England Singleton, West Sussex, England Singleton, Kent, England Singleton Park, Swansea, Wales Singleton Abbey Singleton Hospital Australia Singleton, New South Wales Singleton Council, New South Wales Singleton, Western Australia Other uses Singleton (lifestyle), a self-description of individuals without romantic partners, particularly applied to women in their thirties introduced in the novel and film Bridget Jones's Diary The Singleton (film), a 2015 British drama film "Singleton", a short story by Greg Egan Singleton (cards), a single card in a suit The Singleton, a whisky made by Diageo Catawba (grape) or Singleton
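The singleton design pattern listed above, which allows only one instance of a class to exist, can be sketched in a few lines of Python (one common idiom among several):

```python
class Singleton:
    """Classic singleton: __new__ always returns the one shared instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            # First construction: create and cache the sole instance.
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the same object
```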
https://en.wikipedia.org/wiki/Connection%20Machine
A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of doctoral research on alternatives to the traditional von Neumann architecture of computers by Danny Hillis at Massachusetts Institute of Technology (MIT) in the early 1980s. Starting with CM-1, the machines were intended originally for applications in artificial intelligence (AI) and symbolic processing, but later versions found greater success in the field of computational science. Origin of idea Danny Hillis and Sheryl Handler founded Thinking Machines Corporation (TMC) in Waltham, Massachusetts, in 1983, moving in 1984 to Cambridge, MA. At TMC, Hillis assembled a team to develop what would become the CM-1 Connection Machine, a design for a massively parallel hypercube-based arrangement of thousands of microprocessors, springing from his PhD thesis work at MIT in Electrical Engineering and Computer Science (1985). The dissertation won the ACM Distinguished Dissertation prize in 1985, and was presented as a monograph that overviewed the philosophy, architecture, and software for the first Connection Machine, including information on its data routing between central processing unit (CPU) nodes, its memory handling, and the programming language Lisp applied in the parallel machine. Very early concepts contemplated just over a million processors, each connected in a 20-dimensional hypercube, which was later scaled down. Designs Each CM-1 microprocessor has its own 4 kilobits of random-access memory (RAM), and the hypercube-based array of them was designed to perform the same operation on multiple data points simultaneously, i.e., to execute tasks in single instruction, multiple data (SIMD) fashion. The CM-1, depending on the configuration, has as many as 65,536 individual processors, each extremely simple, processing one bit at a time. CM-1 and its successor CM-2 take the form of a cube 1.5 meters on a side, divided equally into eight smaller cubes. 
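The hypercube arrangement described above has a simple addressing property: in an n-dimensional hypercube, each of the 2^n nodes carries an n-bit address, neighbors differ in exactly one bit, and a message reaches any destination in at most n hops by flipping one differing address bit per hop. A small illustrative Python sketch (not CM-1's actual routing code):

```python
def neighbors(node, dim):
    # In a dim-dimensional hypercube, a node's neighbors are the
    # addresses differing from it in exactly one bit position.
    return [node ^ (1 << i) for i in range(dim)]

def route(src, dst, dim):
    # Dimension-order routing: correct one differing address bit per hop,
    # so any message needs at most dim hops.
    path, node = [src], src
    for i in range(dim):
        if (node ^ dst) & (1 << i):
            node ^= 1 << i
            path.append(node)
    return path

print(route(0b000, 0b101, 3))  # [0, 1, 5]: two hops for two differing bits
```

On this model, the CM-1's 12-dimensional router network connects 2^12 = 4,096 chips, so any chip can reach any other in at most 12 hops.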
Each subcube contains 16 printed circuit boards and a main processor called a sequencer. Each circuit board contains 32 chips. Each chip contains a router, 16 processors, and 16 RAMs. The CM-1 as a whole has a 12-dimensional hypercube-based routing network (connecting the 2¹² = 4,096 chips), a main RAM, and an input-output processor (a channel controller). Each router contains five buffers to store the data being transmitted when a clear channel is not available. The engineers had originally calculated that seven buffers per chip would be needed, but this made the chip slightly too large to build. Nobel Prize-winning physicist Richard Feynman had previously calculated that five buffers would be enough, using a differential equation involving the average number of 1 bits in an address. They resubmitted the design of the chip with only five buffers, and when they put the machine together, it worked fine. Each chip is connected to a switching device called a nexus. The CM-1 uses Feynman's algorithm for comp
https://en.wikipedia.org/wiki/Telecommunications%20in%20Armenia
Telecommunications in Armenia involves the availability and use of electronic devices and services, such as the telephone, television, radio or computer, for the purpose of communication. The various telecommunications systems found and used in Armenia include radio, television, fixed and mobile telephones, and the internet. Mobile As of 2017, Armenia has 3.5 million mobile subscribers in total, and a 120% penetration rate. There are three mobile phone operators currently in Armenia: Viva-MTS, Ucom and Beeline. All three offer both 2G and 3G as well as 4G services. All three networks are largely modern and reliable, with shops located in major towns and cities where one can purchase a SIM card or get assistance if needed. Most unlocked mobile phones can be used for roaming, though network charges apply. Ucom and Viva-MTS are often recommended to tourists due to the variety of tariffs available and the help available in a variety of languages. As of 2012, approximately 90% of all main lines are digitized and provide excellent quality services for the region. The remaining 10% is in the process of modernization. International system Yerevan is connected to the Trans-Asia-Europe fiber-optic cable via Georgia. Additional international service is available by microwave radio relay and landline connections to other countries of the Commonwealth of Independent States, the Moscow international switch and by satellite. The main backbones of Armenian networks are made of E3 or STM-1 lines via microwave units across the whole country with many passive retranslations. Wire telephone services Traditionally, Armenia has well-developed landline telephone services. According to official statistics of the International Telecommunication Union, as of 2017 there were 505,190 fixed telephone service subscribers in Armenia (residents and businesses), or 17.24 subscribers per 100 inhabitants.
The number of fixed-telephone subscribers has declined significantly compared with the previous 10 years, from 20.41 per 100 inhabitants in 2006. The main reason for the decline is mobile-fixed substitution. Radio As of 2008, Armenia has 9 AM stations, 17 FM stations, and one shortwave station. Additionally, there are approximately 850,000 radios in existence. The primary network provider is TRBNA. Television Armenia has 48 private television stations alongside 2 public networks, with major Russian channels widely available throughout the country. In 2008, TRBNA upgraded the main circuit to a digital distribution system based on DVB-IP and MPEG2 standards. According to the Television Association Committee of Armenia, the TV penetration rate is 80% (2011 data). Internet There are approximately 1,400,000 Internet users and approximately 65,279 Internet hosts in Armenia. The country code (top-level domain) for Armenia is .am, which has been used for AM radio stations and for domain hacks. The national communications company Armentel's only fiber optic connection to the Internet enters
https://en.wikipedia.org/wiki/Telecommunications%20in%20Australia
Telecommunications in Australia refers to communication in Australia through electronic means, using devices such as telephone, television, radio or computer, and services such as the telephony and broadband networks. Telecommunications have always been important in Australia given the 'tyranny of distance' with a dispersed population. Governments have driven telecommunication development and have a key role in its regulation. History Colonial period Prior to Federation of Australia in 1901, each of the six Australian colonies had its own telephony communications network. The Australian networks were government assets operating under colonial legislation modelled on that of Britain. The UK Telegraph Act 1868 for example empowered the Postmaster-General to 'acquire, maintain and work electric telegraphs' and foreshadowed the 1870 nationalisation of competing British telegraph companies. Australia's first telephone service (connecting the Melbourne and South Melbourne offices of Robinson Brothers, a Melbourne engineering firm) was launched in 1879. The private Melbourne Telephone Exchange Company opened Australia's first telephone exchange in August 1880. Around 7,757 calls were handled in 1884. The nature of the networks meant that regulation in Australia was undemanding: network personnel were government employees or agents, legislation was enhanced on an incremental basis and restrictions could be achieved through infrastructure. All the colonies ran their telegraph networks at a deficit through investment in infrastructure and subsidisation of regional access, generally with bipartisan support. Government-operated post office and telegraph networks – the largest parts of the bureaucracy – were combined into a single department in each colony on the model of the UK Post Office: South Australia in 1869, Victoria in 1870, Queensland in 1880 and New South Wales in 1893. 
At Federation (1901) At Federation, the colonial networks (staff, switches, wires, handsets, buildings etc.) were transferred to the new Commonwealth Postmaster-General's Department, making domestic postal, telephone and telegraph services the responsibility of the first Postmaster-General (PMG), a federal minister. With 16,000 staff (and assets of over £6 million) the PMG accounted for 80% of the new federal bureaucracy. Public phones were available in a handful of post offices. Subscriber telephones were initially restricted to major businesses, government agencies, institutions and wealthier residences. Eight million telegrams were sent that year over 43,000 miles of line. There were around 33,000 phones across Australia, with 7,502 telephone subscribers in inner Sydney and 4,800 in the Melbourne central business district. Overseas cable links to Australia remained in private hands, reflecting the realities of imperial politics, demands on the new government's resources, and perceptions of its responsibilities. After Federation A trunk line between Melbourne (headq
https://en.wikipedia.org/wiki/Telecommunications%20in%20Austria
This article concerns the systems of telecommunication in Austria. Austria has a highly developed and efficient telephone network, and has a number of radio and television broadcast stations. Infrastructure The telephone system is highly developed and efficient. Fibre-optic coverage is extensive, although it remains very expensive. A full range of telephone and Internet services are available via the network. Austria has 15 satellite earth stations, two Intelsat (one Atlantic Ocean and one Indian Ocean) and one Eutelsat. Additionally, there are around 600 very-small-aperture terminals (VSATs) (2007). Telephones International calling code: 43 Fixed line phones 3.4 million fixed line phones, 47th in the world (2011). The majority of fixed lines are analogue, with Integrated Services Digital Network (ISDN) lines for the remainder. Fixed-line subscribership has been in decline since the mid-1990s and was eclipsed by mobile-cellular in the late 1990s. Mobile phones 7.6 million mobile phone lines in use, 60th in the world (2011). The Austrian mobile phone market is highly competitive, with some of the lowest rates in Europe. Due to the geographical structure of Austria (mountains, flat lands, lakes) many providers use it as a "testing range" for new services. Mobile number portability was introduced in 2008, allowing users to retain their mobile phone numbers when switching between network operators. The original area codes allocated to each operator can no longer be used to determine the network with which a subscriber is registered. First generation networks D-Netz by Telekom Austria. This network was switched off at the end of the 1990s. Second generation networks There are three nationwide GSM networks which also support additional brands and mobile virtual network operators (MVNOs). A1: originally Mobilkom. It now runs a mixed GSM-900, GSM-1800 and UMTS network. Also provides service for the MVNOs bob, B-free (owned by A1), Red Bull Mobile and Yess!
T-Mobile: originally max mobil. It now runs a mixed GSM-900, GSM-1800 and UMTS network. Also marketed under telering as a separate brand. Orange: originally One (until September 2008). A mixed GSM-1800 and UMTS network. Since the end of 2011, owned by Hutchison Whampoa (Drei). Third generation networks Drei: owned by Hutchison Whampoa, a Hong Kong-based company, and runs its own UMTS network. Internet 37 Internet service providers (ISPs), most of them organised in the local ISP association Internet Service Providers Austria (ISPA). 6.7 million Internet users, 50th in the world; 81% of the population, 29th in the world (2012). 2,074,252 fixed broadband subscriptions, 41st in the world; 25.2% of the population, 33rd in the world (2012). 4,564,834 mobile subscriptions, 40th in the world; 55.5% of the population, 23rd in the world (2012). 3.5 million Internet hosts, 30th in the world (2012). 300,000 Asymmetric Digital Subscriber Lines (ADSL). The country code for Austria is "AT", the cou
https://en.wikipedia.org/wiki/Object%E2%80%93relational%20database
An object–relational database (ORD), or object–relational database management system (ORDBMS), is a database management system (DBMS) similar to a relational database, but with an object-oriented database model: objects, classes and inheritance are directly supported in database schemas and in the query language. In addition, just as with pure relational systems, it supports extension of the data model with custom data types and methods. An object–relational database can be said to provide a middle ground between relational databases and object-oriented databases. In object–relational databases, the approach is essentially that of relational databases: the data resides in the database and is manipulated collectively with queries in a query language. At the other extreme are OODBMSes, in which the database is essentially a persistent object store for software written in an object-oriented programming language, with a programming API for storing and retrieving objects, and little or no specific support for querying. Overview The basic need for an object–relational database arises from the fact that both relational and object databases have their individual advantages and drawbacks. The isomorphism of the relational database system with a mathematical relation allows it to exploit many useful techniques and theorems from set theory. But these types of databases are not optimal for certain kinds of applications. An object-oriented database model allows containers like sets and lists, arbitrary user-defined datatypes, as well as nested objects. This brings commonality between the application type systems and database type systems, which removes any issue of impedance mismatch. But object databases, unlike relational ones, do not provide a mathematical basis for deep analysis.
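The impedance mismatch mentioned above is the translation work needed whenever in-memory objects must be flattened into relational rows and reassembled again. A minimal, hypothetical Python sketch using the standard-library sqlite3 module shows that translation by hand; the `Point` class, `points` table, and helper names are invented for illustration and belong to no real ORM:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (x REAL, y REAL)")

def save(p: Point) -> None:
    # Object -> row: flatten the object's fields into a tuple of columns.
    conn.execute("INSERT INTO points VALUES (?, ?)", (p.x, p.y))

def load_all() -> list[Point]:
    # Row -> object: rebuild objects from the relational result set.
    return [Point(x, y) for x, y in conn.execute("SELECT x, y FROM points")]

save(Point(1.0, 2.0))
print(load_all())  # [Point(x=1.0, y=2.0)]
```

ORM libraries automate this mapping over plain relational stores, while an ORDBMS instead teaches the database itself about user-defined types.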
The basic goal for the object–relational database is to bridge the gap between relational databases and the object-oriented modeling techniques used in programming languages such as Java, C++, Visual Basic .NET or C#. However, a more popular alternative for achieving such a bridge is to use standard relational database systems with some form of object–relational mapping (ORM) software. Whereas traditional RDBMS or SQL-DBMS products focused on the efficient management of data drawn from a limited set of data types (defined by the relevant language standards), an object–relational DBMS allows software developers to integrate their own types and the methods that apply to them into the DBMS. The ORDBMS (like ODBMS or OODBMS) is integrated with an object-oriented programming language. The characteristic properties of an ORDBMS are 1) complex data, 2) type inheritance, and 3) object behavior. Complex data creation in most SQL ORDBMSs is based on preliminary schema definition via the user-defined type (UDT). Hierarchy within structured complex data offers an additional property, type inheritance. That is, a structured type can have subtypes that reuse all of its attributes and contain
https://en.wikipedia.org/wiki/Xfce
Xfce or XFCE (pronounced as four individual letters) is a free and open-source desktop environment for Linux and other Unix-like operating systems. Xfce aims to be fast and lightweight while still being visually appealing and easy to use. Xfce embodies the traditional Unix philosophy of modularity and re-usability. It consists of separately packaged parts that together provide all functions of the desktop environment, but can be selected in subsets to suit user needs and preference. Another priority of Xfce is adherence to standards, specifically those defined at freedesktop.org. Features Like GNOME, Xfce is based on the GTK toolkit, but it is not a GNOME fork. It uses the Xfwm window manager, described below. Its configuration is entirely mouse-driven, with the configuration files hidden from the casual user. Xfce does not feature any desktop animations, but Xfwm supports compositing. History Olivier Fourdan started the project in late 1996 as a Linux version of the Common Desktop Environment (CDE), a Unix desktop environment that was initially proprietary and later released as free software. The first release of Xfce was in early 1997. However, over time, Xfce diverged from CDE and now stands on its own. The Slackware Linux distribution has nicknamed Xfce the "Cholesterol Free Desktop Environment", a loose interpretation of the initialism. Mascot Per the FAQ, the logo of Xfce is "a mouse, obviously, for all kinds of reasons like world domination and monsters and such." In the SuperTuxKart game, in which various open source mascots race against each other, the mouse is said to be a female named "Xue". Early versions Xfce began as a simple project created with XForms. Olivier Fourdan released the program, which was just a simple taskbar, on SunSITE. Fourdan continued developing the project and in 1998, Xfce 2 was released with the first version of Xfce's window manager, Xfwm. 
He requested to have the project included in Red Hat Linux, but was refused due to its XForms basis. Red Hat only accepted software that was open source and released under either a GPL or BSD compatible license, whereas, at the time, XForms was closed source and free only for personal use. For the same reason, Xfce was not in Debian before version 3, and Xfce 2 was only distributed in Debian's contrib repository. In March 1999, Fourdan began a complete rewrite of the project based on GTK, a non-proprietary toolkit then rising in popularity. The result was Xfce 3.0, licensed under the GPL. Along with being based completely on free software, the project gained GTK drag-and-drop support, native language support, and improved configurability. Xfce was uploaded to SourceForge.net in February 2001, starting with version 3.8.1. Modern Xfce In version 4.0.0, released 25 September 2003, Xfce was upgraded to use the GTK 2 libraries. Changes in 4.2.0, released 16 January 2005, included a compositing manager for Xfwm which added built-in support for transparency and dro
https://en.wikipedia.org/wiki/IEEE%20802.8
The Fiber Optic Technical Advisory Group (IEEE 802.8) was chartered to create a LAN standard for fiber-optic media used in token-passing computer networks such as FDDI, as part of the IEEE 802 group of standards. The group eventually gave up and disbanded, and IEEE 802.8 is no longer part of the IEEE 802 standards.
https://en.wikipedia.org/wiki/Symbian%20Software
Symbian Ltd. was a software development and licensing consortium company, known for the Symbian operating system (OS) for smartphones and some related devices. Its headquarters were in Southwark, London, England, with other offices opened in Cambridge, Sweden, Silicon Valley, Japan, India, China, South Korea, and Australia. It was established on 24 June 1998 as a partnership between Psion, Nokia, Ericsson, Motorola, and Sony, to exploit the convergence between personal digital assistants (PDAs) and mobile phones, and as a joint effort to prevent Microsoft from extending its desktop computer monopoly into the mobile devices market. Ten years to the day after it was established, on 24 June 2008, Nokia announced that they intended to acquire the shares that they did not own already, at a cost of €264 million. On the same day the Symbian Foundation was announced, with the aim to "provide royalty-free software and accelerate innovation", and the pledged contribution of the Symbian OS and user interfaces. The acquisition of Symbian Ltd. by Nokia was completed on 2 December 2008, at which point all Symbian employees became Nokia employees. Transfer of relevant Symbian Software Ltd. leases, trademarks, and domain names from Nokia to the Symbian Foundation was completed in April 2009. On 18 July 2009, Nokia's Symbian professional services department, which was not transferred to the Symbian Foundation, was sold to the Accenture consulting company. Overview Symbian Ltd. was the brainchild of Psion's next-generation mobile operating system project following the 32-bit version of EPOC. Psion approached the other four companies and decided to work together on a full software suite including kernel, device drivers, and user interface. Much of Symbian's initial intellectual property came from the software arm of Psion. Symbian Ltd developed and licensed Symbian OS, an operating system for advanced mobile phones and personal digital assistants (PDAs).
Symbian Ltd wanted the system to have different user interface layers, unlike Microsoft's offerings. Psion originally created several interfaces or "reference designs", which would later end up as Pearl (smartphone), Quartz (Palm-like PDA), and Crystal (clamshell design PDA). One early design called Emerald also ended up in the market on the Ericsson R380. Nokia created the Series 60 (from Pearl), Series 80 and Series 90 platforms (both from Crystal), whilst UIQ Technology, which was a subsidiary of Symbian Ltd. at the time, created UIQ (from Quartz). Another interface was MOAP(S) from NTT Docomo. Despite being partners at Symbian Ltd, the different backers of each interface were effectively competing with each other's software. This became a prominent point in February 2004 when UIQ, which focused on pen devices, announced its foray into traditional keyboard devices, competing head-on with Nokia's Series 60 offering whilst Nokia was in the process of acquiring Psion's remaining stake in Symbian Ltd. to take ove
https://en.wikipedia.org/wiki/Pocket%20PC
A Pocket PC (P/PC, PPC) is a class of personal digital assistant (PDA) that runs the Windows Mobile or Windows Embedded Compact operating system and has some of the abilities of modern desktop PCs. The name was introduced by Microsoft in 2000 as a rebranding of the Palm-size PC category. Some of these devices also had integrated phone and data capabilities, which were called Pocket PC Phone Edition. Windows "Smartphone" was another Windows CE-based platform, for non-touchscreen flip phones and other simpler phones. As of 2010, thousands of applications existed for handhelds adhering to the Microsoft Pocket PC specification, many of which were freeware. Microsoft-compliant Pocket PCs can be used with many add-ons such as GPS receivers, barcode readers, RFID readers, and cameras. In 2007, with the advent of Windows Mobile 6.0, Microsoft dropped the name Pocket PC in favor of a new naming scheme: Windows Mobile Classic (formerly Pocket PC): devices without an integrated phone; Windows Mobile Professional (formerly Pocket PC Phone Edition): devices with an integrated phone and a touch screen; Windows Mobile Standard (formerly Smartphone): devices with an integrated phone but without a touch screen. Pocket PC was succeeded by Windows Phone in 2010, but even later versions based on the Windows NT kernel were ultimately unable to compete with the iPhone (introduced in 2007) and Android phones, and interest in Pocket PCs without phone functions waned. History The Pocket PC was an evolution from prior calculator-sized computers. Keystroke-programmable calculators which could do simple business and scientific applications were available by the 1970s. In 1982, Hewlett-Packard's HP-75 incorporated a one-line text display, an alphanumeric keyboard, the HP BASIC language and some basic PDA abilities. The HP 95LX, HP 100LX and HP 200LX series packed a PC-compatible MS-DOS computer with graphics display and QWERTY keyboard into a palmtop format.
The HP OmniGo 100 and 120 used a pen and graphics interface on DOS-based PC/GEOS, but were not widely sold in the United States. The HP 300LX built a palmtop computer on the Windows CE operating system. Palm-size PC (PsPC) was Microsoft's official name for Windows CE PDAs that were distinguished from Handheld PCs by their smaller size and lack of a physical keyboard. The class was announced in January 1998, originally as "Palm PC", which provoked a lawsuit by Palm Inc., and the name was changed to Palm-size PC soon afterwards, before release. These devices were similar to the Handheld PC and also ran Windows CE; however, this version was more limited and lacked Pocket Microsoft Office, Pocket Internet Explorer, ActiveX and some other tools. Its main competitors were the PalmPilot and Palm III. According to the specification, Palm-size PCs use SuperH SH3 and MIPS processors. The term "palm-sized PC" was also used as a generic term for similar devices not necessarily connected to Microsoft, such as the PalmPilot. Microsoft's Handheld PCs and Palm-size
https://en.wikipedia.org/wiki/RT-11
RT-11 (Real-time 11) is a discontinued small, low-end, single-user real-time operating system for the full line of Digital Equipment Corporation PDP-11 16-bit computers. RT-11 was first implemented in 1970. It was widely used for real-time computing systems, process control, and data acquisition across all PDP-11s. It was also used for low-cost general-use computing. Features Source code RT-11 was written in assembly language. Heavy use of the conditional assembly and macro programming features of the MACRO-11 assembler allowed a significant degree of configurability and allowed programmers to specify high-level instructions otherwise unprovided for in machine code. RT-11 distributions included the source code of the operating system and its device drivers with all the comments removed and a program named "SYSGEN" which would build the operating system and drivers according to a user-specified configuration. Developer's documentation included a kernel listing that included comments. Device drivers In RT-11, device drivers were loadable, except that prior to V4.0 the device driver for the system device (boot device) was built into the kernel at configuration time. Because RT-11 was commonly used for device control and data acquisition, it was common for developers to write or enhance device drivers. DEC encouraged such driver development by making their hardware subsystems (from bus structure to code) open, documenting the internals of the operating system, encouraging third-party hardware and software vendors, and by fostering the development of the Digital Equipment Computer Users Society. Multitasking RT-11 systems did not support preemptive multitasking, but most versions could run multiple simultaneous applications. All variants of the monitors provided a background job. The FB, XM, and ZM monitors also provided a foreground job, and six system jobs if selected via the SYSGEN system generation program. 
These tasks had fixed priorities, with the background job lowest and the foreground job highest. It was possible to switch between jobs from the system console user interface, and SYSGEN could generate a monitor that provided a single background job (the SB, XB and ZB variants). The terms foreground and background are counterintuitive; the background job was typically the user's command-line interpreter; a foreground job might be doing something like non-interactive data collection. Human interface Users generally operated RT-11 via a printing terminal or a video terminal, originally via a strap-selectable current-loop (for conventional teletypes) or via an RS-232 (later RS-422 as well) interface on one of the CPU cards; DEC also supported the VT11 and VS60 graphics display devices (vector graphics terminals with a graphic character generator for displaying text, and a light pen for graphical input). A third-party favorite was the Tektronix 4010 family. The Keyboard Monitor (KMON) interpreted commands issued by the user and would invoke
https://en.wikipedia.org/wiki/Less
Less or LESS may refer to: Computing less (Unix), a Unix utility program Less (style sheet language), a dynamic style sheet language Large-Scale Scrum (LeSS), a product development framework that extends Scrum Other uses -less, a privative suffix in English Lunar Escape Systems, a series of proposed emergency spacecraft for the Apollo Program Christian Friedrich Lessing (1809–1862), (author abbreviation Less.) for German botanist Less (novel), a 2017 novel by Andrew Sean Greer See also Fewer versus less Less is more (disambiguation)
https://en.wikipedia.org/wiki/Chaosnet
Chaosnet is a local area network technology. It was first developed by Thomas Knight and Jack Holloway at MIT's AI Lab in 1975 and thereafter. It refers to two separate, but closely related, technologies. The more widespread was a set of computer communication packet-based protocols intended to connect the then-recently developed and very popular (within MIT) Lisp machines; the second was one of the earliest local area network (LAN) hardware implementations. Origin The Chaosnet protocol originally used an implementation over CATV coaxial cable modeled on the early Xerox PARC Ethernet, the early ARPANET, and Transmission Control Protocol (TCP). It was a contention-based system intended to work over a range of distances, and included a pseudo-slotted feature intended to reduce collisions, which worked by passing a virtual token of permission from host to host; successful packet transmissions updated each host's knowledge of which host had the token at that time. Collisions caused a host to fall silent for a duration depending on the distance from the host it collided with. Collisions were never a real problem, and the pseudo-slotting fell into disuse. Chaosnet's network topology was usually a series of linear (not circular) cables, each up to a kilometer long and serving roughly 12 clients. The individual segments were interconnected by "bridges" (much in the ARPANET mold), generally older computers like PDP-11s with two network interfaces. The protocols were also later implemented as a payload that could be carried over Ethernet (usually the later variety). Chaosnet was specifically for LANs; features to support WANs were left out for the sake of simplicity. Chaosnet can be regarded as a contemporary of both the PUP protocols invented by PARC, and the Internet Protocol (IP), and was recognized as its own network class ("CH", alongside "IN" and "HS") in the Domain Name System.
BIND uses a built-in pseudo-top-level-domain in the "CHAOS class" for retrieving information about a running DNS server. Chaosnet protocol The Chaosnet protocol identifies hosts by 16-bit addresses, 8 bits of which identify the subnet, 8 bits of which identify the host within the subnet. The basic protocol was a full-duplex reliable packet transmission between two user processes. The packet contents could be treated as bytes of 8 or 16 bits, with support for other word sizes provided by higher-level protocols. The connection was identified by a combination of the 16-bit addresses of each host and a 16-bit "connection index" assigned by each host to maintain uniqueness. "Controlled" packets within a connection were identified by a 16-bit packet number, which was used to deliver controlled packets reliably and in order, with re-transmission and flow control. "Uncontrolled" packets were not retransmitted, and were used at a lower level to support the flow-control and re-transmission. Chaosnet also supported "BRD" broadcast packets to multiple subnets. Initial establishment of
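The 16-bit address layout described above (8 bits of subnet, 8 bits of host) can be illustrated with a small Python sketch. The placement of the subnet in the high-order byte, and the function names, are illustrative assumptions rather than details taken from the article.

```python
def chaos_address(subnet, host):
    # Pack an 8-bit subnet and an 8-bit host into a 16-bit Chaosnet address.
    # Assumes the subnet occupies the high-order byte (illustrative layout).
    assert 0 <= subnet <= 0xFF and 0 <= host <= 0xFF
    return (subnet << 8) | host

def split_address(addr):
    # Recover (subnet, host) from a 16-bit address.
    return (addr >> 8) & 0xFF, addr & 0xFF

addr = chaos_address(0x0B, 0x2A)   # subnet 11, host 42
print(hex(addr))                   # → 0xb2a
print(split_address(addr))         # → (11, 42)
```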
https://en.wikipedia.org/wiki/Terminate-and-stay-resident%20program
A terminate-and-stay-resident program (commonly TSR) is a computer program running under DOS that uses a system call to return control to DOS as though it has finished, but remains in computer memory so it can be reactivated later. This technique partially overcame DOS's limitation of executing only one program, or task, at a time. TSRs are used only in DOS, not in Windows. Some TSRs are utility software that a computer user might call up several times a day, while working in another program, using a hotkey. Borland Sidekick was an early and popular example of this type. Others serve as device drivers for hardware that the operating system does not directly support. Use Normally DOS can run only one program at a time. When a program finishes, it returns control to DOS using the system call INT 21h/4Ch. The memory and system resources used are then marked as unused. This makes it impossible to restart parts of the program without having to reload it all. However, if a program ends with the system call INT 27h or INT 21h/31h, the operating system does not reuse a certain specified part of its memory. The original call, INT 27h, is called "terminate but stay resident", hence the name "TSR". Using this call, a program can make up to 64 KB of its memory resident. MS-DOS version 2.0 introduced an improved call, INT 21h/31h ('Keep Process'), which removed this limitation and let the program return an exit code. Before making this call, the program can install one or several interrupt handlers pointing into itself, so that it can be called again. Installing a hardware interrupt vector allows such a program to react to hardware events. Installing a software interrupt vector allows it to be called by the currently running program. Installing a timer interrupt handler allows a TSR to run periodically (see ISA and programmable interval timer, especially the section "IBM PC compatible").
The typical method of using an interrupt vector involves reading its present value (the address), storing it within the memory space of the TSR, and replacing it with an address in its own code. The stored address is called from the TSR, in effect forming a singly linked list of interrupt handlers, also called interrupt service routines, or ISRs. This procedure of installing ISRs is called chaining or hooking an interrupt or an interrupt vector. By chaining the interrupt vectors, TSRs can take complete control of the computer. A TSR can have one of two behaviors: Take complete control of an interrupt by not calling other TSRs that had previously altered the same interrupt vector. Cascade with other TSRs by calling the old interrupt vector. This can be done before or after the TSR executes its actual code. This way TSRs can form a chain where each calls the next. The terminate-and-stay-resident method is used by most DOS viruses and other malware, which can either take control of the PC or stay in the background. This malware will react to disk I/O or execution events by infecting executable (.EXE or .COM) files w
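The chaining mechanism above can be modeled in a few lines of plain Python (this is a toy illustration, not real DOS code): the interrupt vector table becomes a dictionary of callables, and each "TSR" saves the previous handler and cascades to it, forming the singly linked list described in the text. Vector number 09h and the handler names are illustrative.

```python
# Illustrative model of interrupt chaining: vector 0x09 initially points at a
# stand-in for the BIOS keyboard handler.
interrupt_vector = {0x09: lambda event, log: log.append("BIOS keyboard handler")}

def hook(vector_num, name):
    # Read the vector's present value, store it, and replace it with our own.
    old_handler = interrupt_vector[vector_num]
    def handler(event, log):
        log.append(f"{name} sees {event}")   # the TSR's own code runs first...
        old_handler(event, log)              # ...then it cascades to the old vector
    interrupt_vector[vector_num] = handler

hook(0x09, "TSR-A")
hook(0x09, "TSR-B")   # hooked last, so it runs first when the interrupt fires

log = []
interrupt_vector[0x09]("keypress", log)
print(log)  # → ['TSR-B sees keypress', 'TSR-A sees keypress', 'BIOS keyboard handler']
```

Note how the most recently installed handler runs first, which is exactly why a TSR that refuses to cascade can seize complete control of an interrupt.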
https://en.wikipedia.org/wiki/Binary%20heap
A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort. A binary heap is defined as a binary tree with two additional constraints: Shape property: a binary heap is a complete binary tree; that is, all levels of the tree, except possibly the last one (deepest) are fully filled, and, if the last level of the tree is not complete, the nodes of that level are filled from left to right. Heap property: the key stored in each node is either greater than or equal to (≥) or less than or equal to (≤) the keys in the node's children, according to some total order. Heaps where the parent key is greater than or equal to (≥) the child keys are called max-heaps; those where it is less than or equal to (≤) are called min-heaps. Efficient (logarithmic time) algorithms are known for the two operations needed to implement a priority queue on a binary heap: inserting an element, and removing the smallest or largest element from a min-heap or max-heap, respectively. Binary heaps are also commonly employed in the heapsort sorting algorithm, which is an in-place algorithm because binary heaps can be implemented as an implicit data structure, storing keys in an array and using their relative positions within that array to represent child–parent relationships. Heap operations Both the insert and remove operations modify the heap to conform to the shape property first, by adding or removing from the end of the heap. Then the heap property is restored by traversing up or down the heap. Both operations take O(log n) time. Insert To add an element to a heap, we can perform this algorithm: Add the element to the bottom level of the heap at the leftmost open space. Compare the added element with its parent; if they are in the correct order, stop. If not, swap the element with its parent and return to the previous step.
Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with its parent, are called the up-heap operation (also known as bubble-up, percolate-up, sift-up, trickle-up, swim-up, heapify-up, or cascade-up). The number of operations required depends only on the number of levels the new element must rise to satisfy the heap property. Thus, the insertion operation has a worst-case time complexity of O(log n). For a random heap, and for repeated insertions, the insertion operation has an average-case complexity of O(1). As an example of binary heap insertion, say we have a max-heap and we want to add the number 15 to the heap. We first place the 15 in the position marked by the X. However, the heap property is violated since 15 > 8, so we need to swap the 15 and the 8. So, we have the heap looking as follows after the first swap: However, the heap property is still violated since 15 is also greater than its new parent, so we need to swap again: which is a valid max-heap. There is no need to check the left chi
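The insertion steps above translate directly into an array-based implementation using the implicit layout mentioned earlier: the children of index i live at 2*i + 1 and 2*i + 2, and its parent at (i - 1) // 2. A minimal Python sketch for a max-heap:

```python
def heap_insert(heap, value):
    # Max-heap stored implicitly in a list; parent of index i is (i - 1) // 2.
    heap.append(value)                 # step 1: add at the leftmost open space
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] >= heap[i]:    # step 2: correct order, so stop
            break
        heap[i], heap[parent] = heap[parent], heap[i]  # step 3: swap and repeat
        i = parent

heap = [11, 5, 8, 3, 4]    # the example max-heap; 15 will land as a child of 8
heap_insert(heap, 15)      # 15 bubbles up past 8, then past the root 11
print(heap)                # → [15, 5, 11, 3, 4, 8]
```

Running this on the example heap reproduces the two swaps described in the text: 15 is exchanged first with 8 and then with the root.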
https://en.wikipedia.org/wiki/Interpolation%20search
Interpolation search is an algorithm for searching for a key in an array that has been ordered by numerical values assigned to the keys (key values). It was first described by W. W. Peterson in 1957. Interpolation search resembles the method by which people search a telephone directory for a name (the key value by which the book's entries are ordered): in each step the algorithm calculates where in the remaining search space the sought item might be, based on the key values at the bounds of the search space and the value of the sought key, usually via a linear interpolation. The key value actually found at this estimated position is then compared to the key value being sought. If it is not equal, then depending on the comparison, the remaining search space is reduced to the part before or after the estimated position. This method will only work if calculations on the size of differences between key values are sensible. By comparison, binary search always chooses the middle of the remaining search space, discarding one half or the other, depending on the comparison between the key found at the estimated position and the key sought — it does not require numerical values for the keys, just a total order on them. The remaining search space is reduced to the part before or after the estimated position. The linear search uses equality only as it compares elements one-by-one from the start, ignoring any sorting. On average the interpolation search makes about log(log(n)) comparisons (if the elements are uniformly distributed), where n is the number of elements to be searched. In the worst case (for instance where the numerical values of the keys increase exponentially) it can make up to O(n) comparisons. In interpolation-sequential search, interpolation is used to find an item near the one being searched for, then linear search is used to find the exact item. 
Performance Using big-O notation, the performance of the interpolation algorithm on a data set of size n is O(n); however under the assumption of a uniform distribution of the data on the linear scale used for interpolation, the performance can be shown to be O(log log n). However, Dynamic Interpolation Search is possible in o(log log n) time using a novel data structure. Practical performance of interpolation search depends on whether the reduced number of probes is outweighed by the more complicated calculations needed for each probe. It can be useful for locating a record in a large sorted file on disk, where each probe involves a disk seek and is much slower than the interpolation arithmetic. Index structures like B-trees also reduce the number of disk accesses, and are more often used to index on-disk data in part because they can index many types of data and can be updated online. Still, interpolation search may be useful when one is forced to search certain sorted but unindexed on-disk datasets. Adaptation to different datasets When sort keys for a dataset are uniformly distribute
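The probe-position estimate described above can be sketched in a few lines of Python. The division-by-zero guard and the use of integer arithmetic are implementation choices of this sketch, not part of the original description.

```python
def interpolation_search(arr, key):
    # Return the index of key in the sorted list arr, or -1 if absent.
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[hi] == arr[lo]:                  # all remaining keys equal: avoid /0
            return lo if arr[lo] == key else -1
        # Linear interpolation: estimate where key falls between the bounds.
        mid = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[mid] == key:
            return mid
        if arr[mid] < key:
            lo = mid + 1                        # search the part after the probe
        else:
            hi = mid - 1                        # search the part before the probe
    return -1

data = list(range(0, 1000, 10))         # uniformly distributed keys
print(interpolation_search(data, 420))  # → 42
print(interpolation_search(data, 421))  # → -1
```

On uniformly spaced data like this, the very first probe lands on (or next to) the target, which is the behavior behind the O(log log n) average-case bound.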
https://en.wikipedia.org/wiki/Recursive%20descent%20parser
In computer science, a recursive descent parser is a kind of top-down parser built from a set of mutually recursive procedures (or a non-recursive equivalent) where each such procedure implements one of the nonterminals of the grammar. Thus the structure of the resulting program closely mirrors that of the grammar it recognizes. A predictive parser is a recursive descent parser that does not require backtracking. Predictive parsing is possible only for the class of LL(k) grammars, which are the context-free grammars for which there exists some positive integer k that allows a recursive descent parser to decide which production to use by examining only the next k tokens of input. The LL(k) grammars therefore exclude all ambiguous grammars, as well as all grammars that contain left recursion. Any context-free grammar can be transformed into an equivalent grammar that has no left recursion, but removal of left recursion does not always yield an LL(k) grammar. A predictive parser runs in linear time. Recursive descent with backtracking is a technique that determines which production to use by trying each production in turn. Recursive descent with backtracking is not limited to LL(k) grammars, but is not guaranteed to terminate unless the grammar is LL(k). Even when they terminate, parsers that use recursive descent with backtracking may require exponential time. Although predictive parsers are widely used, and are frequently chosen if writing a parser by hand, programmers often prefer to use a table-based parser produced by a parser generator, either for an LL(k) language or using an alternative parser, such as LALR or LR. This is particularly the case if a grammar is not in LL(k) form, as transforming the grammar to LL to make it suitable for predictive parsing is involved. Predictive parsers can also be automatically generated, using tools like ANTLR. 
Predictive parsers can be depicted using transition diagrams for each non-terminal symbol where the edges between the initial and the final states are labelled by the symbols (terminals and non-terminals) of the right side of the production rule. Example parser The following EBNF-like grammar (for Niklaus Wirth's PL/0 programming language, from Algorithms + Data Structures = Programs) is in LL(1) form:

 program = block "." .

 block =
     ["const" ident "=" number {"," ident "=" number} ";"]
     ["var" ident {"," ident} ";"]
     {"procedure" ident ";" block ";"} statement .

 statement =
     ident ":=" expression
     | "call" ident
     | "begin" statement {";" statement } "end"
     | "if" condition "then" statement
     | "while" condition "do" statement .

 condition =
     "odd" expression
     | expression ("="|"#"|"<"|"<="|">"|">=") expression .

 expression = ["+"|"-"] term {("+"|"-") term} .

 term = factor {("*"|"/") factor} .

 factor = ident | number | "(" expression ")" .

Terminals are expressed in quotes. Each nonterminal is defined by a rule in the g
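As an illustrative sketch (not Wirth's code), here is a minimal recursive descent parser in Python covering only the expression, term, and factor rules of the grammar above, restricted to numeric factors and evaluating as it parses. Each procedure implements one nonterminal, so the program's structure mirrors the grammar; one token of lookahead (peek) is enough because the fragment is LL(1).

```python
import re

def tokenize(src):
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0
    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None
    def eat(self, tok=None):
        t = self.peek()
        if t is None or (tok is not None and t != tok):
            raise SyntaxError(f"expected {tok!r}, got {t!r}")
        self.pos += 1
        return t
    def expression(self):       # expression = ["+"|"-"] term {("+"|"-") term}
        sign = -1 if self.peek() == "-" else 1
        if self.peek() in ("+", "-"):
            self.eat()
        value = sign * self.term()
        while self.peek() in ("+", "-"):
            value = value + self.term() if self.eat() == "+" else value - self.term()
        return value
    def term(self):             # term = factor {("*"|"/") factor}
        value = self.factor()
        while self.peek() in ("*", "/"):
            # integer division as a simplification of PL/0's "/"
            value = value * self.factor() if self.eat() == "*" else value // self.factor()
        return value
    def factor(self):           # factor = number | "(" expression ")"  (ident omitted)
        if self.peek() == "(":
            self.eat("(")
            value = self.expression()
            self.eat(")")
            return value
        return int(self.eat())

print(Parser(tokenize("-2 + 3 * (4 + 1)")).expression())  # → 13
```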
https://en.wikipedia.org/wiki/Spirited%20Away
is a 2001 Japanese animated fantasy film written and directed by Hayao Miyazaki, animated by Studio Ghibli for Tokuma Shoten, Nippon Television Network, Dentsu, Buena Vista Home Entertainment, Tohokushinsha Film, and Mitsubishi, and distributed by Toho. The film features the voices of Rumi Hiiragi, Miyu Irino, Mari Natsuki, Takeshi Naito, Yasuko Sawaguchi, Tsunehiko Kamijō, Takehiko Ono, and Bunta Sugawara. Spirited Away tells the story of Chihiro Ogino (Hiiragi), a ten-year-old girl who, while moving to a new neighborhood, enters the world of kami (spirits of Japanese Shinto folklore). After her parents are turned into pigs by the witch Yubaba (Natsuki), Chihiro takes a job working in Yubaba's bathhouse to find a way to free herself and her parents and return to the human world. Miyazaki wrote the screenplay after he decided the film would be based on the ten-year-old daughter of his friend Seiji Okuda, the film's associate producer, who came to visit his house each summer. At the time, Miyazaki was developing two personal projects, but they were rejected. With a budget of US$19 million, production of Spirited Away began in 2000. Pixar animator John Lasseter, a fan and friend of Miyazaki, convinced Walt Disney Pictures to buy the film's North American distribution rights, and served as executive producer of its English-dubbed version. Lasseter then hired Kirk Wise as director and Donald W. Ernst as producer, while screenwriters Cindy and Donald Hewitt wrote the English-language dialogue to match the characters' original Japanese-language lip movements. Originally released in Japan on 20 July 2001 by distributor Toho, the film received universal acclaim and was a major box-office success, becoming the highest-grossing film in Japanese history with a total of $305 million. It held the record for 19 years until it was surpassed by Demon Slayer: Kimetsu no Yaiba – The Movie: Mugen Train in 2020.
A co-recipient of the Golden Bear with Bloody Sunday at the 2002 Berlin International Film Festival and the first, and to date only, hand-drawn and non-English-language animated film to win the Academy Award for Best Animated Feature at the 75th Academy Awards, Spirited Away is regarded as one of the greatest films of all time and has been included in various "best-of" lists. Plot Ten-year-old Chihiro Ogino and her parents are traveling to their new home. Her father stops to explore an abandoned amusement park despite Chihiro's protests. They find a restaurant stocked with food, which Chihiro's parents begin to eat. While exploring alone, Chihiro finds a bathhouse and meets a boy named Haku, who warns her to leave. However, Chihiro discovers that her parents have become pigs, and the exit is blocked by an ocean of water. Haku finds Chihiro and leads her toward the bathhouse. She sees several animals and creatures visiting the bathhouse, as well as , a masked spirit. Haku instructs her to ask for a job from the bathhouse's boiler-m
https://en.wikipedia.org/wiki/Compiler-compiler
In computer science, a compiler-compiler or compiler generator is a programming tool that creates a parser, interpreter, or compiler from some form of formal description of a programming language and machine. The most common type of compiler-compiler is more precisely called a parser generator. It only handles syntactic analysis. A grammar file is supplied as the input for a parser generator. This is typically written in Backus–Naur form (BNF) or extended Backus–Naur form (EBNF) and defines the syntax of a target programming language. Source code for a parser of the programming language is returned as the parser generator's output. This source code can then be compiled into a parser, which may be either standalone or embedded. The compiled parser then accepts the source code of the target programming language as an input and performs an action or outputs an abstract syntax tree (AST). Parser generators do not handle the semantics of the AST, or the generation of machine code for the target machine. A metacompiler is a software development tool used mainly in the construction of compilers, translators, and interpreters for other programming languages. The input to a metacompiler is a computer program written in a specialized programming metalanguage designed mainly for the purpose of constructing compilers. The language of the compiler produced is called the object language. The minimal input producing a compiler is a metaprogram specifying the object language grammar and semantic transformations into an object program. Variants A typical parser generator associates executable code with each of the rules of the grammar that should be executed when these rules are applied by the parser. These pieces of code are sometimes referred to as semantic action routines since they define the semantics of the syntactic structure that is analyzed by the parser. 
Depending upon the type of parser that should be generated, these routines may construct a parse tree (or abstract syntax tree), or generate executable code directly. One of the earliest (1964) and surprisingly powerful compiler-compilers was META II, which accepted an analytical grammar with output facilities that produce stack machine code, and was able to compile its own source code and other languages. Among the earliest programs built for the original Unix versions at Bell Labs was the two-part lex and yacc system, which was normally used to output C programming language code, but had a flexible output system that could be used for everything from programming languages to text file conversion. Their modern GNU versions are flex and bison. Some experimental compiler-compilers take as input a formal description of programming language semantics, typically using denotational semantics. This approach is often called 'semantics-based compiling', and was pioneered by Peter Mosses' Semantic Implementation System (SIS) in 1978. However, both the generated compiler and the code it pro
https://en.wikipedia.org/wiki/Simple%20LR%20parser
In computer science, a Simple LR or SLR parser is a type of LR parser with small parse tables and a relatively simple parser generator algorithm. As with other types of LR(1) parser, an SLR parser is quite efficient at finding the single correct bottom-up parse in a single left-to-right scan over the input stream, without guesswork or backtracking. The parser is mechanically generated from a formal grammar for the language. SLR and the more general methods LALR parser and Canonical LR parser have identical methods and similar tables at parse time; they differ only in the mathematical grammar analysis algorithms used by the parser generator tool. SLR and LALR generators create tables of identical size and identical parser states. SLR generators accept fewer grammars than do LALR generators like yacc and Bison. Many computer languages don't readily fit the restrictions of SLR as they are. Bending a language's natural grammar into SLR form requires more compromises and grammar hackery. So LALR generators have become much more widely used than SLR generators, despite being somewhat more complicated tools. SLR methods remain a useful learning step in college classes on compiler theory. SLR and LALR were both developed by Frank DeRemer as the first practical uses of Donald Knuth's LR parser theory. The tables created for real grammars by full LR methods were impractically large, larger than most computer memories of that decade, with 100 times or more parser states than the SLR and LALR methods. Lookahead sets To understand the differences between SLR and LALR, it is important to understand their many similarities and how they both make shift-reduce decisions. (See the article on LR parsers for that background, up through the section on reductions' lookahead sets.) The one difference between SLR and LALR is how their generators calculate the lookahead sets of input symbols that should appear next, whenever some completed production rule is found and reduced.
SLR generators calculate that lookahead by an easy approximation method based directly on the grammar, ignoring the details of individual parser states and transitions. This ignores the particular context of the current parser state. If some nonterminal symbol S is used in several places in the grammar, SLR treats those places in the same single way rather than handling them individually. The SLR generator works out Follow(S), the set of all terminal symbols which can immediately follow some occurrence of S. In the parse table, each reduction to S uses Follow(S) as its LR(1) lookahead set. Such follow sets are also used by generators for LL top-down parsers. A grammar that has no shift/reduce or reduce/reduce conflicts when using follow sets is called an SLR grammar. LALR generators calculate lookahead sets by a more precise method based on exploring the graph of parser states and their transitions. This method considers the particular context of the current parser state
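The Follow-set approximation an SLR generator uses can be sketched as a fixed-point computation. The grammar below is a classic illustrative expression grammar, not one taken from the article, and the sketch assumes no ε-productions (and guards only direct left recursion in FIRST), which keeps it short.

```python
GRAMMAR = {                          # nonterminal -> list of right-hand sides
    "E": [["E", "+", "T"], ["T"]],
    "T": [["T", "*", "F"], ["F"]],
    "F": [["(", "E", ")"], ["id"]],
}
NONTERMINALS = set(GRAMMAR)

def first(symbol):
    # FIRST of a single symbol; no ε-productions, and only direct left
    # recursion is guarded against (enough for this grammar).
    if symbol not in NONTERMINALS:
        return {symbol}              # a terminal begins with itself
    return set().union(*(first(rhs[0]) for rhs in GRAMMAR[symbol] if rhs[0] != symbol))

def follow_sets(start="E"):
    # Iterate to a fixed point: Follow(S) = terminals that can appear right after S.
    follow = {nt: set() for nt in NONTERMINALS}
    follow[start].add("$")           # end-of-input can follow the start symbol
    changed = True
    while changed:
        changed = False
        for lhs, rhss in GRAMMAR.items():
            for rhs in rhss:
                for i, sym in enumerate(rhs):
                    if sym not in NONTERMINALS:
                        continue
                    # Whatever begins the rest of the rule follows sym; at the
                    # end of a rule, everything in Follow(lhs) follows sym.
                    new = first(rhs[i + 1]) if i + 1 < len(rhs) else follow[lhs]
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True
    return follow

fs = follow_sets()
print({nt: sorted(fs[nt]) for nt in sorted(fs)})
# → {'E': ['$', ')', '+'], 'F': ['$', ')', '*', '+'], 'T': ['$', ')', '*', '+']}
```

In an SLR table, every reduction to E would then use Follow(E) = { +, ), $ } as its lookahead set, regardless of which parser state the reduction occurs in; that context-blindness is exactly what LALR improves on.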
https://en.wikipedia.org/wiki/Shareaza
Shareaza is a peer-to-peer file sharing client running under Microsoft Windows which supports the gnutella, Gnutella2 (G2), eDonkey, BitTorrent, FTP, HTTP and HTTPS network protocols and handles magnet links, ed2k links, and the now deprecated gnutella and Piolet links. It is available in 30 languages. Shareaza was developed by Michael Stokes until June 1, 2004, and has since been maintained by a group of volunteers. On June 1, 2004, Shareaza 2.0 was released, along with the source code, under the GNU General Public License (GPL-2.0-or-later), making it free software. Features Multi-network Shareaza can connect to gnutella, G2, eDonkey and BitTorrent. Shareaza hashes its files for all networks, and then distributes those hash values on G2. This allows Shareaza to download one file from several networks at once. When another client connected to G2 finds such a file, it is given the hash values for all networks and can search on the other networks with their respective hash values, which increases the number of sources and the download speed of the file. Shareaza also uses its G2 network to find more sources for torrents. Security filter The Shareaza client has some basic content filters including a forced child and optional adult pornography filter, and some other optional filters such as a filter for files encumbered with Digital rights management (DRM). Shareaza's security filters can also be extended with user-defined keywords and/or IP addresses. Later versions of Shareaza allow for the use of regular expressions and filtering by hash. These filters increase the chances of getting the files the user wants and decrease the chance of getting malicious or fake files. The file format used for the filters is an extendable XML schema. The filters are editable inside Shareaza, and can be exported from the application to be shared with others. Plugins Shareaza implements a framework for additional plugins. The Shareaza installer ships several plugins. 
Most of them are used to read and strip off built-in metadata from the files being hashed and convert it to an external XML-based format, or to decode multimedia files to make a preview for other G2 clients. Others provide a media player inside Shareaza, along with enhancements to that media player. Third-party plugins can also be used; for example, Sharemonkey, which adds a link inside Shareaza, when downloading or searching for copyrighted material, to where it can be legally downloaded. Skins The client can have almost all parts of the GUI skinned. This includes bars and icons, as well as backgrounds and buttons. In that way, Shareaza's appearance can be completely changed with colors, images, new buttons, etc. A basic list of skins is contained in the Shareaza installer package. Other skins can be downloaded from the community forums or found via a search for .sks files (Shareaza skin files) on the G2 network. The skins are zip archives, renamed with the extension .sks, containing icons and images, as
https://en.wikipedia.org/wiki/Chaffing%20and%20winnowing
Chaffing and winnowing is a cryptographic technique to achieve confidentiality without using encryption when sending data over an insecure channel. The name is derived from agriculture: after grain has been harvested and threshed, it remains mixed together with inedible fibrous chaff. The chaff and grain are then separated by winnowing, and the chaff is discarded. The cryptographic technique was conceived by Ron Rivest and published in an on-line article on 18 March 1998. Although it bears similarities to both traditional encryption and steganography, it cannot be classified under either category. This technique allows the sender to deny responsibility for encrypting their message. When using chaffing and winnowing, the sender transmits the message unencrypted, in clear text. Although the sender and the receiver share a secret key, they use it only for authentication. However, a third party can make their communication confidential by simultaneously sending specially crafted messages through the same channel. How it works The sender (Alice) wants to send a message to the receiver (Bob). In the simplest setup, Alice enumerates the symbols in her message and sends out each in a separate packet. If the symbols are complex enough, such as natural language text, an attacker may be able to distinguish the real symbols from poorly-faked chaff symbols, posing a similar problem as steganography in needing to generate highly realistic fakes; to avoid this, the symbols can be reduced to just single 0/1 bits, and realistic fakes can then be simply randomly generated 50:50 and are indistinguishable from real symbols. In general the method requires each symbol to arrive in-order and to be authenticated by the receiver. When implemented over networks that may change the order of packets, the sender places the symbol's serial number in the packet, the symbol itself (both unencrypted), and a message authentication code (MAC). 
Many MACs use a secret key Alice shares with Bob, but it is sufficient that the receiver has a method to authenticate the packets. Rivest notes that an interesting property of chaffing and winnowing is that third parties (such as an ISP) can opportunistically add it to communications without needing permission or coordination with the sender or recipient. A third party (dubbed "Charles") who relays Alice's packets to Bob interleaves them with bogus packets (called "chaff") carrying corresponding serial numbers, arbitrary symbols, and a random number in place of the MAC. Charles does not need to know the key to do this: real MACs are large enough that it is extremely unlikely a valid one will be generated by chance. Bob uses the MAC to find the authentic messages and drops the "chaff" messages. This process is called "winnowing". An eavesdropper located between Alice and Charles can easily read Alice's message. But an eavesdropper between Charles and Bob would have to tell which packets are bogus and whic
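The bit-level scheme described above can be sketched in a few lines of Python (the key, the HMAC-SHA-256 construction, and the packet layout here are illustrative choices, not Rivest's exact proposal):

```python
import hmac, hashlib, os, secrets

KEY = b"shared-secret"  # key Alice shares with Bob (illustrative)

def mac(serial: int, bit: int) -> bytes:
    """Authenticate one (serial number, bit) pair."""
    return hmac.new(KEY, f"{serial}:{bit}".encode(), hashlib.sha256).digest()

def chaff(packets):
    """Charles: interleave a bogus packet (random 'MAC') for every real one."""
    out = []
    for serial, bit, tag in packets:
        out.append((serial, bit, tag))                 # wheat
        out.append((serial, 1 - bit, os.urandom(32)))  # chaff: complement bit
    secrets.SystemRandom().shuffle(out)
    return out

def winnow(packets):
    """Bob: keep only packets whose MAC verifies; reassemble in serial order."""
    good = {s: b for s, b, t in packets if hmac.compare_digest(t, mac(s, b))}
    return [good[s] for s in sorted(good)]

bits = [1, 0, 1, 1]
wheat = [(i, b, mac(i, b)) for i, b in enumerate(bits)]
assert winnow(chaff(wheat)) == bits
```

An eavesdropper who sees only the output of `chaff` observes, for every serial number, both a 0 and a 1 with equally plausible tags, and without the key cannot tell which is wheat.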
https://en.wikipedia.org/wiki/Sohonet
Sohonet is a community-of-interest network for the television, film and media production community based in the Soho area of London. Founded in 1995 by a group of post-production companies, Sohonet links many of the British film studios to London's post-production community. Sohonet also provides access to the Internet, and private wide-area links to other countries. The system links film and media companies, TV broadcasters, publishers, Internet providers, graphic designers and recording studios via seamless transatlantic fibre connections to a global media network in the United States (Los Angeles, New York City, Hawaii, Chicago, Seattle, Atlanta, etc.), Canada (Vancouver, Toronto, Montreal), New Zealand, Australia, France, the Netherlands, Germany, Spain, Singapore and Italy (Rome), with the ability to provide connectivity to locations worldwide via fibre-optic links. Leading British film studios, including Pinewood Studios, Shepperton Studios, Three Mills Studios, Twickenham Film Studios, Longcross Studios and Warner Bros. Studios, Leavesden, have direct optical fibre connectivity to the Sohonet London Fibre Ring, with campus connectivity on the sites provided via fibre and VDSL technologies. Many other studios globally are connected as well, including the Australian studios Village Roadshow Studios and Melbourne City Studios, the Los Angeles film studios Hollywood Center Studios and Red Studios Hollywood, and Warner Bros., Universal Studios and HBO, to name a few. A pioneering user of IP-over-ATM networking, Sohonet has since moved away from ATM to gigabit Ethernet, 10 Gigabit Ethernet and MPLS technologies, including the use of wavelength-division multiplexing on backbone connections. Sohonet is one of the pioneers of tapeless digital intermediate work, and one of the instigators of the Media Dispatch Protocol developed by the Pro-MPEG consortium, which later became a SMPTE standard.
All types of media file formats, from QuickTime, DV, MPEG, AES/EBU and MXF through OMFi, AAF and OpenEXR, to 4K DPX files, are supported. Sohonet has its own private optical fibre networks in several cities and provides high-speed object storage based on OpenStack Swift in a number of locations worldwide (Los Angeles, London, and Sydney). On top of these stores, Sohonet operates a fast file transfer service called FileRunner. Sohonet also provides high-speed secure connections, referred to as FastLane, into a number of public cloud providers (Google Cloud Platform, SoftLayer, and Amazon Web Services) to enable large-scale cloud computing for 3D rendering. In 2020, Sohonet won a Primetime Engineering Emmy Award for its collaborative video review and editing platform, ClearView Flex. The ClearView Flex platform was also among the winners of the Advanced Imaging Society's Entertainment Technology Lumiere Awards for 2020. In 2022, Sohonet won an additional Primetime Engineering Emmy Award for its remote collaboration tool, ClearView Pivot.
https://en.wikipedia.org/wiki/Music%20technology%20%28electronic%20and%20digital%29
Digital music technology encompasses the use of digital instruments, computers, electronic effects units, software, and digital audio equipment by a performer, composer, sound engineer, DJ, or record producer to produce, perform or record music. The term refers to electronic devices, instruments, computer hardware, and software used in the performance, playback, recording, composition, mixing, analysis, and editing of music. Education Professional training Courses in music technology are offered at many universities as part of degree programs focusing on performance, composition, and music research at the undergraduate and graduate level. The study of music technology is usually concerned with the creative use of technology for creating new sounds, performing, recording, programming sequencers or other music-related electronic devices, and manipulating, mixing and reproducing music. Music technology programs train students for careers in "...sound engineering, computer music, audio-visual production and post-production, mastering, scoring for film and multimedia, audio for games, software development, and multimedia production." Those wishing to develop new music technologies often train to become audio engineers working in R&D. Due to the increasing role of interdisciplinary work in music technology, individuals developing new music technologies may also have backgrounds or training in computer programming, computer hardware design, acoustics, record producing or other fields. Use of music technology in education Digital music technologies are widely used to assist in music education for training students in the home, elementary school, middle school, high school, college and university music programs. Electronic keyboard labs are used for cost-effective beginner group piano instruction in high schools, colleges, and universities. Courses in music notation software and basic manipulation of audio and MIDI can be part of a student's core requirements for a music degree.
Mobile and desktop applications are available to aid the study of music theory and ear training. Digital pianos, such as those offered by Roland, provide interactive lessons and games using the built-in features of the instrument to teach music fundamentals. History Early pioneers included Luigi Russolo, Halim El-Dabh, Pierre Schaeffer, Pierre Henry, Edgard Varèse, Karlheinz Stockhausen, Ikutaro Kakehashi, King Tubby, and others who manipulated sounds using tape machines—splicing tape and changing its playback speed to alter pre-recorded samples. Pierre Schaeffer was credited with inventing this method of composition, known as musique concrète, in 1948 in Paris, France. In this style of composition, existing material is manipulated to create new timbres. Musique concrète contrasts with a later style that emerged in the mid-1950s in Cologne, Germany, known as elektronische Musik. This style, invented by Karlheinz Stockhausen, involves creating new sounds without the use of pre-existing material. Unlike musique concrète, which primarily focuses on ti
https://en.wikipedia.org/wiki/Jackson%20structured%20programming
Jackson structured programming (JSP) is a method for structured programming developed by British software consultant Michael A. Jackson and described in his 1975 book Principles of Program Design. The technique of JSP is to analyze the data structures of the files that a program must read as input and produce as output, and then produce a program design based on those data structures, so that the program control structure handles those data structures in a natural and intuitive way. JSP describes structures (of both data and programs) using three basic structures – sequence, iteration, and selection (or alternatives). These structures are diagrammed as (in effect) a visual representation of a regular expression. Introduction Michael A. Jackson originally developed JSP in the 1970s. He documented the system in his 1975 book Principles of Program Design. In a 2001 conference talk, he provided a retrospective analysis of the original driving forces behind the method, and related it to subsequent software engineering developments. Jackson's aim was to make COBOL batch file processing programs easier to modify and maintain, but the method can be used to design programs for any programming language that has structured control constructs— sequence, iteration, and selection ("if/then/else"). Jackson Structured Programming was similar to Warnier/Orr structured programming although JSP considered both input and output data structures while the Warnier/Orr method focused almost exclusively on the structure of the output stream. Motivation for the method At the time that JSP was developed, most programs were batch COBOL programs that processed sequential files stored on tape. A typical program read through its input file as a sequence of records, so that all programs had the same structure— a single main loop that processed all of the records in the file, one at a time. 
Jackson asserted that this program structure was almost always wrong, and encouraged programmers to look for more complex data structures. In Chapter 3 of Principles of Program Design, Jackson presents two versions of a program, one designed using JSP, the other using the traditional single-loop structure. Here is his example, translated from COBOL into Java. The purpose of these two programs is to recognize groups of repeated records (lines) in a sorted file, and to produce an output file listing each record and the number of times that it occurs in the file. Here is the traditional, single-loop version of the program.

String line;
int count = 0;
String firstLineOfGroup = null;

// begin single main loop
while ((line = in.readLine()) != null) {
    if (firstLineOfGroup == null || !line.equals(firstLineOfGroup)) {
        if (firstLineOfGroup != null) {
            System.out.println(firstLineOfGroup + " " + count);
        }
        count = 0;
        firstLineOfGroup = line;
    }
    count++;
}
if (firstLineOfGroup != null) {
    System.out.println(firstLineOfGroup + " " + count);
}
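For contrast, a JSP-style version structures the program to mirror the data structure: a file is a sequence of groups, and a group is a run of equal records. The sketch below is in Python rather than Jackson's COBOL or its Java translation, and returns (record, count) pairs instead of printing them:

```python
def count_groups(lines):
    """JSP-style group counter: one outer iteration per group of equal
    lines, one inner iteration per record within the group. Assumes the
    input is sorted so that equal records are adjacent."""
    it = iter(lines)
    out = []
    line = next(it, None)
    while line is not None:          # component: iteration of groups
        first, count = line, 0
        while line == first:         # component: iteration of records in a group
            count += 1
            line = next(it, None)
        out.append((first, count))
    return out

assert count_groups(["a", "a", "b", "c", "c", "c"]) == [("a", 2), ("b", 1), ("c", 3)]
```

Note there is no special-case test for the first or last group: because the control structure matches the data structure, the boundary handling of the single-loop version disappears.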
https://en.wikipedia.org/wiki/Access%20Systems%20Americas
ACCESS Systems Americas, Inc. (formerly PalmSource) is a subsidiary of ACCESS which develops the Palm OS PDA operating system and its successor, the Access Linux Platform, as well as BeOS. PalmSource was spun off from Palm Computing, Inc. Palm OS runs on 38 million devices that have been sold since 1996 from hardware manufacturers including Palm, Inc., Samsung, IBM, Aceeca, AlphaSmart, Fossil, Inc., Garmin, Group Sense PDA (Xplore), Kyocera, PiTech, Sony, and Symbol. PalmSource also develops several programs for the Palm OS, and as of December 2005, PalmGear claims to offer 28,769 software titles of varying genres. Palm OS software programs can also be downloaded from CNET, PalmSource, Handango, and Tucows. PalmSource also owns BeOS, which it purchased from Be Inc. in August 2001. History In January 2002, Palm, Inc. set up a wholly owned subsidiary to develop and license Palm OS, which was named PalmSource in February. In October 2003, PalmSource was spun off from Palm as an independent company, and Palm renamed itself palmOne. palmOne and PalmSource set up a holding company that owned the Palm trademark. In January 2004, PalmSource announced the successor to classic Palm OS called Palm OS Cobalt. However, it failed to gain support from hardware licensees. That December, PalmSource acquired China MobileSoft, a software company with a mobile Linux offering. As a result, PalmSource announced that they would extend Palm OS to run on top of the Linux architecture. In May 2005, palmOne purchased PalmSource's share of the Palm trademark for US$30 million and two months later renamed itself Palm, Inc. As part of the agreement, palmOne granted certain rights to Palm trademarks to PalmSource and licensees for a four-year transition period. Later that year, ACCESS, which specializes in mobile and embedded web browser technologies, including NetFront, acquired PalmSource for US$324 million. 
In October 2006, PalmSource announced that it would rename itself to ACCESS, to match its parent company's name.
https://en.wikipedia.org/wiki/Extended%20Backus%E2%80%93Naur%20form
In computer science, extended Backus–Naur form (EBNF) is a family of metasyntax notations, any of which can be used to express a context-free grammar. EBNF is used to make a formal description of a formal language such as a computer programming language. They are extensions of the basic Backus–Naur form (BNF) metasyntax notation. The earliest EBNF was developed by Niklaus Wirth, incorporating some of the concepts (with a different syntax and notation) from Wirth syntax notation. Today, many variants of EBNF are in use. The International Organization for Standardization adopted an EBNF Standard, ISO/IEC 14977, in 1996. According to Zaytsev, however, this standard "only ended up adding yet another three dialects to the chaos" and, after noting its lack of success, also notes that the ISO EBNF is not even used in all ISO standards. Wheeler argues against using the ISO standard when using an EBNF and recommends considering alternative EBNF notations such as the one from the W3C Extensible Markup Language (XML) 1.0 (Fifth Edition). This article uses EBNF as specified by the ISO for examples applying to all EBNFs. Other EBNF variants use somewhat different syntactic conventions. Basics EBNF is a code that expresses the syntax of a formal language. An EBNF consists of terminal symbols and non-terminal production rules which are the restrictions governing how terminal symbols can be combined into a valid sequence. Examples of terminal symbols include alphanumeric characters, punctuation marks, and whitespace characters. The EBNF defines production rules where sequences of symbols are respectively assigned to a nonterminal:

digit excluding zero = "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
digit                = "0" | digit excluding zero ;

This production rule defines the nonterminal digit which is on the left side of the assignment.
The vertical bar represents an alternative and the terminal symbols are enclosed with quotation marks followed by a semicolon as terminating character. Hence a digit is a 0 or a digit excluding zero that can be 1 or 2 or 3 and so forth until 9. A production rule can also include a sequence of terminals or nonterminals, each separated by a comma:

twelve                          = "1", "2" ;
two hundred one                 = "2", "0", "1" ;
three hundred twelve            = "3", twelve ;
twelve thousand two hundred one = twelve, two hundred one ;

Expressions that may be omitted or repeated can be represented through curly braces { ... }:

natural number = digit excluding zero, { digit } ;

In this case, the strings 1, 2, ..., 10, ..., 10000, ... are correct expressions. To represent this, everything that is set within the curly braces may be repeated arbitrarily often, including not at all. An option can be represented through squared brackets [ ... ]. That is, everything that is set within the square brackets may be present just once, or not at all:

integer = "0" | [ "-" ], natural number ;

Therefor
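The repetition and option constructs above map directly onto the regular-expression operators * and ?, so the rules can be checked mechanically. A small Python sketch (function names are mine, not part of the standard):

```python
import re

def is_natural(s: str) -> bool:
    # natural number = digit excluding zero, { digit } ;
    # "digit excluding zero" -> [1-9]; "{ digit }" -> [0-9]*
    return re.fullmatch(r"[1-9][0-9]*", s) is not None

def is_integer(s: str) -> bool:
    # integer = "0" | [ "-" ], natural number ;
    # the optional "[ - ]" becomes the regex "-?"
    return s == "0" or re.fullmatch(r"-?[1-9][0-9]*", s) is not None

assert is_natural("10000") and not is_natural("0")
assert is_integer("-12") and is_integer("0") and not is_integer("007")
```

The rejection of "007" shows why the grammar distinguishes digit from digit excluding zero: a natural number may not begin with 0.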
https://en.wikipedia.org/wiki/John%20Backus
John Warner Backus (December 3, 1924 – March 17, 2007) was an American computer scientist. He directed the team that invented and implemented FORTRAN, the first widely used high-level programming language, and was the inventor of the Backus–Naur form (BNF), a widely used notation to define formal language syntax. He later did research into the function-level programming paradigm, presenting his findings in his influential 1977 Turing Award lecture "Can Programming Be Liberated from the von Neumann Style?" The IEEE awarded Backus the W. W. McDowell Award in 1967 for the development of FORTRAN. He received the National Medal of Science in 1975 and the 1977 Turing Award "for profound, influential, and lasting contributions to the design of practical high-level programming systems, notably through his work on FORTRAN, and for publication of formal procedures for the specification of programming languages". He retired in 1991 and died at his home in Ashland, Oregon on March 17, 2007. Early life Backus was born in Philadelphia and grew up in nearby Wilmington, Delaware. He studied at The Hill School in Pottstown, Pennsylvania, but he was apparently not a diligent student. He entered college at the University of Virginia to study chemistry, but struggled with his classes there, and he was expelled after less than a year for poor attendance. He was subsequently conscripted into the U.S. Army during World War II, and eventually came to hold the rank of corporal, being put in command of an anti-aircraft battery stationed at Fort Stewart, Georgia. After receiving high scores on a military aptitude test, the Army sent him to study engineering at the University of Pittsburgh. He later transferred to a pre-medical program at Haverford College. During an internship at a hospital, he was diagnosed with a cranial bone tumor, which was successfully removed, and a plate was installed in his head. 
He then moved to the Flower and Fifth Avenue Medical School for medical school, but found it uninteresting and dropped out after nine months. He soon underwent a second operation to replace the metal plate in his head with one of his own design, and received an honorable medical discharge from the U.S. Army in 1946. Fortran After moving to New York City he trained initially as a radio technician and became interested in mathematics. He graduated from Columbia University with a bachelor's degree in 1949 and a master's degree in 1950, both in mathematics, and joined IBM in 1950. During his first three years, he worked on the Selective Sequence Electronic Calculator (SSEC); his first major project was to write a program to calculate positions of the Moon. In 1953 Backus developed the language Speedcoding, the first high-level language created for an IBM computer, to aid in software development for the IBM 701 computer. Programming was very difficult at this time, and in 1954 Backus assembled a team to define and develop Fortran for the IBM 704 computer. Fortran was
https://en.wikipedia.org/wiki/Lymphatic%20system
The lymphatic system, or lymphoid system, is an organ system in vertebrates that is part of the immune system, and complementary to the circulatory system. It consists of a large network of lymphatic vessels, lymph nodes, lymphoid organs, lymphoid tissues and lymph. Lymph is a clear fluid carried by the lymphatic vessels back to the heart for re-circulation. (The Latin word for lymph, lympha, refers to the deity of fresh water, "Lympha"). Unlike the circulatory system that is a closed system, the lymphatic system is open. The human circulatory system processes an average of 20 litres of blood per day through capillary filtration, which removes plasma from the blood. Roughly 17 litres of the filtered blood is reabsorbed directly into the blood vessels, while the remaining three litres are left in the interstitial fluid. One of the main functions of the lymphatic system is to provide an accessory return route to the blood for the surplus three litres. The other main function is that of immune defense. Lymph is very similar to blood plasma, in that it contains waste products and cellular debris, together with bacteria and proteins. The cells of the lymph are mostly lymphocytes. Associated lymphoid organs are composed of lymphoid tissue, and are the sites either of lymphocyte production or of lymphocyte activation. These include the lymph nodes (where the highest lymphocyte concentration is found), the spleen, the thymus, and the tonsils. Lymphocytes are initially generated in the bone marrow. The lymphoid organs also contain other types of cells such as stromal cells for support. Lymphoid tissue is also associated with mucosas such as mucosa-associated lymphoid tissue (MALT). Fluid from circulating blood leaks into the tissues of the body by capillary action, carrying nutrients to the cells. 
The fluid bathes the tissues as interstitial fluid, collecting waste products, bacteria, and damaged cells, and then drains as lymph into the lymphatic capillaries and lymphatic vessels. These vessels carry the lymph throughout the body, passing through numerous lymph nodes which filter out unwanted materials such as bacteria and damaged cells. Lymph then passes into much larger lymph vessels known as lymph ducts. The right lymphatic duct drains the right side of the upper body, and the much larger left lymphatic duct, known as the thoracic duct, drains the left side of the body. The ducts empty into the subclavian veins to return to the blood circulation. Lymph is moved through the system by muscle contractions. In some vertebrates, a lymph heart is present that pumps the lymph to the veins. The lymphatic system was first described in the 17th century independently by Olaus Rudbeck and Thomas Bartholin. Structure The lymphatic system consists of a conducting network of lymphatic vessels, lymphoid organs, lymphoid tissues, and the circulating lymph. Primary lymphoid organs The primary (or central) lymphoid organs generate lymphocytes from immature progenitor
https://en.wikipedia.org/wiki/Universal%20Turing%20machine
In computer science, a universal Turing machine (UTM) is a Turing machine capable of computing any computable sequence, as described by Alan Turing in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Common sense might say that a universal machine is impossible, but Turing proves that it is possible. He suggested that we may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions q1, q2, ..., qI, which will be called "m-configurations". He then described the operation of such a machine, as detailed below. Alan Turing introduced the idea of such a machine in 1936–1937. This principle is considered to be the origin of the idea of a stored-program computer used by John von Neumann in 1946 for the "Electronic Computing Instrument" that now bears von Neumann's name: the von Neumann architecture. Introduction Davis makes a persuasive argument that Turing's conception of what is now known as "the stored-program computer", of placing the "action table"—the instructions for the machine—in the same "memory" as the input data, strongly influenced John von Neumann's conception of the first American discrete-symbol (as opposed to analog) computer—the EDVAC. Davis quotes Time magazine to this effect, that "everyone who taps at a keyboard ... is working on an incarnation of a Turing machine", and that "John von Neumann [built] on the work of Alan Turing". Davis makes a case that Turing's Automatic Computing Engine (ACE) computer "anticipated" the notions of microprogramming (microcode) and RISC processors. Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage"; Davis also references this work as Turing's use of a hardware "stack". As the Turing machine was encouraging the construction of computers, the UTM was encouraging the development of the fledgling computer sciences.
An early, if not the very first, assembler was proposed "by a young hot-shot programmer" for the EDVAC. Von Neumann's "first serious program ... [was] to simply sort data efficiently". Knuth observes that the subroutine return embedded in the program itself rather than in special registers is attributable to von Neumann and Goldstine. Knuth furthermore states that Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data. Mathematical theory With this encoding of action tables as strings, it becomes possible, in principle, for Turing machines to answer questions about the behaviour of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the Halting problem, was shown to be, in general, undecidable in Turing's original paper. Rice's theorem shows th
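The point that an action table is just data, so that one program can run any machine from its description, can be illustrated with a toy interpreter (a Python sketch, not Turing's original encoding; the tuple format and state names are mine):

```python
from collections import defaultdict

def run_tm(table, tape, state="q0", steps=1000):
    """Interpret a Turing machine given as data. The 'action table' is an
    ordinary dictionary mapping (state, symbol) -> (write, move, next_state),
    so this one function plays the role of a universal machine for any
    machine description passed in. '_' is the blank symbol."""
    t = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        write, move, state = table[(state, t[head])]
        t[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(t[i] for i in sorted(t)).strip("_")

# A machine, expressed purely as data, that flips every bit and halts
# at the first blank.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
assert run_tm(flip, "1011") == "0100"
```

Because `flip` is a value like any other, it could itself be written onto a tape and read by another machine, which is exactly the maneuver Turing's universal machine performs, and the source of the undecidability results discussed above.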
https://en.wikipedia.org/wiki/Free-space%20optical%20communication
Free-space optical communication (FSO) is an optical communication technology that uses light propagating in free space to wirelessly transmit data for telecommunications or computer networking. "Free space" means air, outer space, vacuum, or something similar. This contrasts with using solids such as optical fiber cable. The technology is useful where the physical connections are impractical due to high costs or other considerations. History Optical communications, in various forms, have been used for thousands of years. The ancient Greeks used a coded alphabetic system of signalling with torches developed by Cleoxenus, Democleitus and Polybius. In the modern era, semaphores and wireless solar telegraphs called heliographs were developed, using coded signals to communicate with their recipients. In 1880, Alexander Graham Bell and his assistant Charles Sumner Tainter created the photophone, at Bell's newly established Volta Laboratory in Washington, DC. Bell considered it his most important invention. The device allowed for the transmission of sound on a beam of light. On June 3, 1880, Bell conducted the world's first wireless telephone transmission between two buildings, some 213 meters (700 feet) apart. Its first practical use came in military communication systems many decades later, first for optical telegraphy. German colonial troops used heliograph telegraphy transmitters during the Herero and Namaqua genocide starting in 1904, in German South-West Africa (today's Namibia) as did British, French, US or Ottoman signals. During the trench warfare of World War I when wire communications were often cut, German signals used three types of optical Morse transmitters called , the intermediate type for distances of up to 4 km (2.5 miles) at daylight and of up to 8 km (5 miles) at night, using red filters for undetected communications. Optical telephone communications were tested at the end of the war, but not introduced at troop level. 
In addition, special Blinkgeräte (blinker devices) were used for communication with airplanes, balloons, and tanks, with varying success. A major technological step was to replace Morse code by modulating optical waves for speech transmission. Carl Zeiss Jena developed the Lichtsprechgerät 80/80 (literal translation: "optical speaking device"), which the German army used in its World War II anti-aircraft defense units and in bunkers at the Atlantic Wall. The invention of lasers in the 1960s revolutionized free-space optics. Military organizations were particularly interested and boosted their development. However, the technology lost market momentum when the installation of optical fiber networks for civilian uses was at its peak. Many simple and inexpensive consumer remote controls use low-speed communication with infrared (IR) light, known as consumer IR technology. Usage and technologies Free-space point-to-point optical links can be implemented using infrared laser light, although low-data-rate communication over short dist
https://en.wikipedia.org/wiki/Infrared%20Data%20Association
The Infrared Data Association (IrDA) is an industry-driven interest group that was founded in 1994 by around 50 companies. IrDA provides specifications for a complete set of protocols for wireless infrared communications, and the name "IrDA" also refers to that set of protocols. The main reason for using the IrDA protocols had been wireless data transfer over the "last one meter" using point-and-shoot principles. Thus, it has been implemented in portable devices such as mobile telephones, laptops, cameras, printers, and medical devices. The main characteristics of this kind of wireless optical communication are physically secure data transfer, line-of-sight (LOS) operation and a very low bit error rate (BER), which makes it very efficient. Specifications IrPHY The mandatory IrPHY (Infrared Physical Layer Specification) is the physical layer of the IrDA specifications. It comprises optical link definitions, modulation, coding, cyclic redundancy check (CRC) and the framer. Different data rates use different modulation/coding schemes:

SIR: 9.6–115.2 kbit/s, asynchronous, RZI, UART-like, 3/16 pulse. To save energy, the pulse width is often minimized to 3/16 of a 115.2 kbit/s bit period.
MIR: 0.576–1.152 Mbit/s, RZI, 1/4 pulse, HDLC bit stuffing
FIR: 4 Mbit/s, 4PPM
VFIR: 16 Mbit/s, NRZ, HHH(1,13)
UFIR: 96 Mbit/s, NRZI, 8b/10b
GigaIR: 512 Mbit/s – 1 Gbit/s, NRZI, 2-ASK, 4-ASK, 8b/10b

Further characteristics are:

Range: standard: 2 m; low-power to low-power: 0.2 m; standard to low-power: 0.3 m. GigaIR also defines new usage models that support longer link distances of up to several meters.
Angle: minimum cone ±15°
Speed: 2.4 kbit/s to 1 Gbit/s
Modulation: baseband, no carrier
Infrared window (part of the device body transparent to the infrared light beam)
Wavelength: 850–900 nm

The frame size depends mostly on the data rate and varies between 64 B and 64 kB. Additionally, bigger blocks of data can be transferred by sending multiple frames consecutively.
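The 3/16 pulse shortening used by SIR is simple arithmetic; a quick check in Python (values rounded):

```python
# At the top SIR rate, each bit lasts 1/115200 s; the infrared pulse
# is shortened to 3/16 of that bit period to save transmitter power.
bit_time_us = 1e6 / 115_200        # one bit period in microseconds (~8.68 µs)
pulse_us = bit_time_us * 3 / 16    # shortened pulse width (~1.63 µs)
print(f"pulse width: {pulse_us:.2f} µs")
```

Because slower SIR rates often keep this same fixed pulse width, the transmitter duty cycle, and hence power draw, falls even further at low speeds.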
This can be adjusted with a parameter called "window size" (1–127). Finally, data blocks of up to 8 MB can be sent at once. Combined with a generally low bit error rate, such communication can be very efficient compared to other wireless solutions. IrDA transceivers communicate with infrared pulses (samples) in a cone that extends at least 15 degrees half angle off center. The IrDA physical specifications require lower and upper limits on irradiance, such that a signal is visible up to one meter away but a receiver is not overwhelmed with brightness when a device comes close. In practice, there are some devices on the market that do not reach one meter, while other devices may reach up to several meters. There are also devices that do not tolerate extreme closeness. The typical sweet spot for IrDA communications is a short distance away from a transceiver, in the center of the cone. IrDA data communications operate in half-duplex mode because, while transmitting, a device's receiver is blinded by the
https://en.wikipedia.org/wiki/Document%20management%20system
A document management system (DMS) is usually a computerized system used to store, share, track and manage files or documents. Some systems include history tracking, where a log of the various versions created and modified by different users is recorded. The term has some overlap with the concepts of content management systems. It is often viewed as a component of enterprise content management (ECM) systems and related to digital asset management, document imaging, workflow systems and records management systems. History While many EDM systems store documents in their native file format (Microsoft Word or Excel, PDF), some web-based document management systems are beginning to store content in the form of HTML. These HTML-based document management systems can act as publishing systems or policy management systems. Content is captured either through browser-based editors or by importing and converting non-HTML content. Storing documents as HTML enables a simpler full-text workflow, as most search engines deal with HTML natively. A DMS without an HTML storage format must extract the text from the proprietary format, making the full-text search workflow slightly more complicated. Search capabilities including boolean queries, cluster analysis, and stemming have become critical components of DMS as users have grown used to internet searching and spend less time organizing their content. Components Document management systems commonly provide storage, versioning, metadata, security, as well as indexing and retrieval capabilities. Here is a description of these components: Standardization Many industry associations publish their own lists of particular document control standards that are used in their particular field. Following is a list of some of the relevant ISO documents. Divisions ICS 01.140.10 and 01.140.20. The ISO has also published a series of standards regarding technical documentation, covered by division 01.110.
ISO 2709 Information and documentation – Format for information exchange ISO 15836 Information and documentation – The Dublin Core metadata element set ISO 15489 Information and documentation – Records management ISO 21127 Information and documentation – A reference ontology for the interchange of cultural heritage information ISO 23950 Information and documentation – Information retrieval (Z39.50) – Application service definition and protocol specification ISO 10244 Document management – Business process baselining and analysis ISO 32000 Document management – Portable document format ISO/IEC 27001 Specification for an information security management system Document control Government regulations require that companies working in certain industries control their documents. A document controller is responsible for strictly controlling these documents. These industries include accounting (for example: 8th EU Directive, Sarbanes–Oxley Act), food safety (e.g., Food Safety Modernization Act in the US), IS
https://en.wikipedia.org/wiki/SIDS%20%28disambiguation%29
SIDS most often refers to sudden infant death syndrome, the sudden unexplained death of a child of less than one year of age. SIDS may also refer to: Screening information dataset, a study of the hazards associated with a particular chemical substance Small Island Developing States, a group of developing small-island countries See also SID (disambiguation)
https://en.wikipedia.org/wiki/Challenge-Handshake%20Authentication%20Protocol
In computing, the Challenge-Handshake Authentication Protocol (CHAP) is an authentication protocol originally used by the Point-to-Point Protocol (PPP) to validate users. CHAP is also carried in other authentication protocols such as RADIUS and Diameter. Almost all network operating systems support PPP with CHAP, as do most network access servers. CHAP is also used in PPPoE, for authenticating DSL users. As PPP sends data unencrypted and "in the clear", CHAP is vulnerable to any attacker who can observe the PPP session. An attacker can see the user's name, CHAP challenge, CHAP response, and any other information associated with the PPP session. The attacker can then mount an offline dictionary attack in order to obtain the original password. When used in PPP, CHAP also provides protection against replay attacks by the peer through the use of a challenge generated by the authenticator, which is typically a network access server. Where CHAP is used in other protocols, it may be sent in the clear, or it may be protected by a security layer such as Transport Layer Security (TLS). For example, when CHAP is sent over RADIUS using the User Datagram Protocol (UDP), any attacker who can see the RADIUS packets can mount an offline dictionary attack, as with PPP. CHAP requires that both the client and server know the clear-text version of the password, although the password itself is never sent over the network. Thus, when used in PPP, CHAP provides better security than the Password Authentication Protocol (PAP), which is vulnerable in both these respects. Benefits of CHAP When the peer sends a CHAP response, the authentication server obtains the "known good" password from a database and performs the CHAP calculations. If the resulting hashes match, the user is deemed to be authenticated. If the hashes do not match, the user's authentication attempt is rejected. 
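RFC 1994 defines the CHAP response as the MD5 hash of the Identifier octet, the shared secret, and the challenge, concatenated in that order. A minimal sketch of the calculation and comparison described above (the function name, secret database, and example credentials are illustrative, not part of any real implementation):

```python
import hashlib
import secrets

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(Identifier || secret || Challenge Value)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator side: both ends must know the clear-text secret
secret_db = {"alice": b"s3cret"}
challenge = secrets.token_bytes(16)   # fresh random challenge per attempt
identifier = 1

# Peer side computes the response from its copy of the secret
response = chap_response(identifier, b"s3cret", challenge)

# Authenticator recomputes from the known-good secret and compares
expected = chap_response(identifier, secret_db["alice"], challenge)
print("authenticated" if response == expected else "rejected")
```

Because the challenge is freshly generated for every attempt, a captured response cannot simply be replayed later.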
Since the authentication server has to store the password in clear-text, it is impossible to use more secure formats for the stored password. If an attacker were to steal the entire database of passwords, all of those passwords would be visible "in the clear" in the database. As a result, while CHAP can be more secure than PAP when used over a PPP link, it precludes the more secure storage "at rest" that methods such as PAP allow. Variants MS-CHAP is similar to CHAP but uses a different hash algorithm, and allows each party to authenticate the other. Working cycle CHAP is an authentication scheme originally used by Point-to-Point Protocol (PPP) servers to validate the identity of remote clients. CHAP periodically verifies the identity of the client by using a three-way handshake. This happens at the time of establishing the initial link (LCP), and may happen again at any time afterwards. The verification is based on a shared secret (such as the client's password). After the completion of the link establishment phase, the authenticator sends a "challenge"
https://en.wikipedia.org/wiki/Password%20Authentication%20Protocol
Password Authentication Protocol (PAP) is a password-based authentication protocol used by the Point-to-Point Protocol (PPP) to validate users. PAP is specified in . Almost all network operating systems support PPP with PAP, as do most network access servers. PAP is also used in PPPoE, for authenticating DSL users. As the Point-to-Point Protocol (PPP) sends data unencrypted and "in the clear", PAP is vulnerable to any attacker who can observe the PPP session. An attacker can see the user's name, password, and any other information associated with the PPP session. Some additional security can be gained on the PPP link by using CHAP or EAP. However, there are always tradeoffs when choosing an authentication method, and there is no single answer for which is more secure. When PAP is used in PPP, it is considered a weak authentication scheme. Weak schemes are simpler and have lighter computational overhead than more complex schemes such as Transport Layer Security (TLS), but they are much more vulnerable to attack. Weak schemes are used where the transport layer is expected to be physically secure, such as on a home DSL link. Where the transport layer is not physically secure, a system such as Transport Layer Security (TLS) or Internet Protocol Security (IPsec) is used instead. Other uses of PAP PAP is also used to describe password authentication in other protocols such as RADIUS and Diameter. However, those protocols provide for transport or network layer security, and therefore that usage of PAP does not have the security issues seen when PAP is used with PPP. Benefits of PAP When the client sends a clear-text password, the authentication server receives it and compares it to a "known good" password. Since the authentication server has received the password in clear-text, the format of the stored password can be chosen to be secure "at rest". 
If the stored passwords are hashed and an attacker were to steal the entire database, it is computationally infeasible to reverse the hash function and recover a plaintext password. As a result, while PAP passwords are less secure when sent over a PPP link, they allow for more secure storage "at rest" than methods such as CHAP. Working cycle PAP authentication is only done at the time of the initial link establishment, and verifies the identity of the client using a two-way handshake. The client sends its username and password, repeatedly, until a response is received from the server. The server replies with authentication-ack (if the credentials are OK) or authentication-nak (otherwise). PAP packets A PAP packet is embedded in a PPP frame. The protocol field has a value of C023 (hex). See also SAP – Service Access Point Notes References Password authentication Internet protocols Authentication protocols
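The at-rest advantage described above comes from the server's freedom to store only a salted, iterated hash of the password it received in clear text, recomputing that hash on each login attempt. A minimal sketch (PBKDF2 is chosen here purely for illustration; the function names are hypothetical):

```python
import hashlib
import hmac
import os

def store_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes, int]:
    # PAP delivers the clear-text password, so the server can hash it
    # with a random salt before writing it to the database.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    # Recompute the hash from the clear-text password sent by the client
    # and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

record = store_password("hunter2")
print(verify_password("hunter2", *record))   # True
print(verify_password("wrong", *record))     # False
```

CHAP, by contrast, must recompute a hash over the clear-text secret itself, which is why it cannot use this kind of storage.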
https://en.wikipedia.org/wiki/Personal%20Computer%20Memory%20Card%20International%20Association
The Personal Computer Memory Card International Association (PCMCIA) was a group of computer hardware manufacturers, operating under that name from 1989 to 2009. Starting with the PCMCIA card in 1990 (the name later simplified to PC Card), it created various standards for peripheral interfaces designed for laptop computers. History PCMCIA was based on the original initiative of the British mathematician and computer scientist Ian H. S. Cullimore, one of the founders of the Sunnyvale-based Poqet Computer Corporation, who was seeking to integrate some kind of memory card technology as a storage medium into their early DOS-based palmtop PCs, when traditional floppy drives and hard disks were found to be too power-hungry and large to fit into battery-powered handheld devices. When, in July 1989, Poqet contacted Fujitsu for their existing but still non-standardized SRAM memory cards, and Intel for their flash technology, the necessity and potential of establishing a worldwide memory card standard became obvious to the parties involved. This led to the foundation of the PCMCIA organization in September 1989. By early 1990, some thirty companies had already joined the initiative, including Poqet, Fujitsu, Intel, Mitsubishi, IBM, Lotus, Microsoft and SCM Microsystems (now Identiv). From 1990 onwards, the association published and maintained a sequence of standards for parallel communication peripheral interfaces in laptop computers, notably the PCMCIA card, later renamed PC Card, and succeeded by ExpressCard (2003), all of them now technologically obsolete. The PCMCIA association was dissolved in 2009 and all of its activities have since been managed by the USB Implementers Forum, according to the PCMCIA website. Name PCMCIA stands for Personal Computer Memory Card International Association, the group of companies that defined the standard. 
This acronym was difficult to say and remember, and was sometimes jokingly referred to as "People Can't Memorize Computer Industry Acronyms". To recognize increased scope beyond memory, and to aid in marketing, the association acquired the rights to the simpler term "PC Card" from IBM. This was the name of the standard from version 2 of the specification onwards. These cards were used for wireless networks, modems, and other functions in notebook PCs. Obsolescence As of 2023, PCMCIA is now little used in new hardware, with most removable devices using USB instead. The Linux kernel project is now moving toward removing obsolete PCMCIA drivers from the mainline kernel. References External links Solid-state computer storage media Motherboard PCMCIA Standards organizations in the United States Organizations established in 1989 Organizations disestablished in 2009 1989 establishments in the United States 2009 disestablishments in the United States Computer-related introductions in 1990
https://en.wikipedia.org/wiki/List%20of%20command-line%20interpreters
In computing, a command-line interpreter, or command language interpreter, is a blanket term for a certain class of programs designed to read lines of text entered by a user, thus implementing a command-line interface. Operating system shells AmigaOS Amiga CLI/Amiga Shell Unix-like systems There are many variants of Unix shell: Bourne shell sh Almquist shell (ash) Debian Almquist shell (dash) Bash (Unix shell) bash KornShell ksh Z shell zsh C shell csh TENEX C shell tcsh Ch shell ch Emacs shell eshell Friendly interactive shell fish PowerShell pwsh rc shell rc, a shell for Plan 9 from Bell Labs and Unix Stand-alone shell sash Scheme Shell scsh Microsoft Windows Native COMMAND.COM, the original Microsoft command line processor introduced on MS-DOS as well as Windows 9x, in 32-bit versions of NT-based Windows via NTVDM cmd.exe, successor of COMMAND.COM introduced on OS/2 and Windows NT systems, although COMMAND.COM is still available in virtual DOS machines on IA-32 versions of those operating systems also. 
Recovery Console Windows PowerShell, a command processor based on .NET Framework PowerShell, a command processor based on .NET Hamilton C shell, a clone of the Unix C shell by Hamilton Laboratories Take Command Console (4NT), a clone of CMD.EXE with added features by JP Software Take Command, a newer incarnation of 4NT Unix/Linux compatibility layer and POSIX subsystem Interix MKS Toolkit Microsoft POSIX subsystem Windows Services for UNIX Windows Subsystem for Linux CP/M Console Command Processor (CCP), the default command line interpreter ZCPR for the Z-System Microshell DOS COMMAND.COM, the default command-line interpreter 4DOS, a compatible, but more advanced shell by JP Software NDOS, provided with some versions of the Norton Utilities GW-BASIC OS/2 CMD.EXE, the default command-line interpreter Hamilton C shell, a clone of the Unix C shell by Hamilton Laboratories 4OS2, a clone of CMD.EXE with additional features by JP Software IBM i Control Language Qshell Apple computers Apple DOS/Apple ProDOS Macintosh Programmer's Workshop Mobile devices DROS, Java ME platform based DOS-like shell for smartphones Network routers Cisco IOS Junos Command Line Interface (Juniper Networks) Minicomputer CLIs Data General's CLI (Command Line Interpreter) on RDOS and AOS Operating Systems and their variants Digital Equipment Corporation's DIGITAL Command Language (DCL) Other BASIC-PLUS (RSTS/E) CANDE MCS – command-line shell and text editor on the MCP operating system Conversational Monitor System (VM/CMS) DOS Wedge (an extension to the Commodore 64's BASIC 2.0) DIGITAL Command Language (OpenVMS) Extensible Firmware Interface shell Microsoft BASIC (qualifies both for a programming language and OS) Singularity (operating system) SymShell, see SymbOS Time Sharing Option (MVS, z/OS) Atari TOS shell YouOS shell EFI-SHELL – an open source Extensible Firmware Interface c
https://en.wikipedia.org/wiki/List%20of%20operating%20systems
This is a list of operating systems. Computer operating systems can be categorized by technology, ownership, licensing, working state, usage, and by many other characteristics. In practice, many of these groupings may overlap. Criteria for inclusion is notability, as shown either through an existing Wikipedia article or citation to a reliable source. Proprietary Acorn Computers Arthur ARX MOS RISC iX RISC OS Amazon Fire OS Amiga Inc. AmigaOS AmigaOS 1.0-3.9 (Motorola 68000) AmigaOS 4 (PowerPC) Amiga Unix (a.k.a. Amix) Amstrad AMSDOS Contiki CP/M 2.2 CP/M Plus SymbOS Apple Inc. Apple II family Apple DOS Apple Pascal Apex (Colorado School of Mines) ProDOS GS/OS GNO/ME Contiki Apple III Apple SOS Apple Lisa Apple Macintosh Classic Mac OS A/UX (UNIX System V with BSD extensions) Copland MkLinux Pink Rhapsody macOS (formerly Mac OS X and OS X) macOS Server (formerly Mac OS X Server and OS X Server) Apple Network Server IBM AIX (Apple-customized) Apple MessagePad Newton OS iPhone and iPod Touch iOS (formerly iPhone OS) iPad iPadOS Apple Watch watchOS Apple TV tvOS Embedded operating systems bridgeOS Apple Vision Pro visionOS Embedded operating systems A/ROSE iPod software (unnamed embedded OS for iPod) Unnamed NetBSD variant for Airport Extreme and Time Capsule Apollo Computer, Hewlett-Packard Domain/OS – One of the first network-based systems. Run on Apollo/Domain hardware. Later bought by Hewlett-Packard. Atari Atari DOS (for 8-bit computers) Atari TOS Atari MultiTOS Contiki (for 8-bit, ST, Portfolio) BAE Systems XTS-400 Be Inc. 
BeOS BeIA BeOS r5.1d0 magnussoft ZETA (based on BeOS r5.1d0 source code, developed by yellowTAB) Bell Labs Unix ("Ken's new system," for its creator (Ken Thompson), officially Unics and then Unix, the prototypic operating system created in Bell Labs in 1969 that formed the basis for the Unix family of operating systems) UNIX Time-Sharing System v1 UNIX Time-Sharing System v2 UNIX Time-Sharing System v3 UNIX Time-Sharing System v4 UNIX Time-Sharing System v5 UNIX Time-Sharing System v6 MINI-UNIX PWB/UNIX USG CB Unix UNIX Time-Sharing System v7 (It is from Version 7 Unix (and, to an extent, its descendants listed below) that almost all Unix-based and Unix-like operating systems descend.) Unix System III Unix System IV Unix System V Unix System V Releases 2.0, 3.0, 3.2, 4.0, and 4.2 UNIX Time-Sharing System v8 UNIX Time-Sharing System v9 UNIX Time-Sharing System v10 Non-Unix Operating Systems: BESYS Plan 9 from Bell Labs Inferno Burroughs Corporation, Unisys Burroughs MCP Commodore International GEOS AmigaOS AROS Research Operating System Control Data Corporation Lower 3000 series SCOPE (Supervisory Control Of Program Execution) Upper 3000 series SCOPE (Supervisory Control Of Program Execution) Drum SCOPE 6x00 and related Cyber Chippewa Operating System (COS) MACE (Mansfield and Cahlander Executive) Kronos (K
https://en.wikipedia.org/wiki/Cambridge%20Z88
The Cambridge Z88 is a Zilog Z80-based portable computer released in 1987 by Cambridge Computer, the company Clive Sinclair formed for that purpose. It was approximately A4 paper sized and lightweight, running on four AA batteries for 20 hours of use. It was packaged with a built-in combined word processing/spreadsheet/database application called PipeDream (functionally equivalent to a 1987 BBC Micro ROM called Acornsoft View Professional), along with several other applications and utilities, such as a Z80 version of the BBC BASIC programming language. History The Z88 evolved from Sir Clive Sinclair's Pandora portable computer project, which had been under development at Sinclair Research during the mid-1980s. Following the sale of Sinclair Research to Amstrad, Sinclair released the Z88 through his Cambridge Computer mail-order company. The machine was launched at the Which Computer? Show on 17 February 1987. Early models were contract-manufactured by Thorn EMI, but production later switched to SCI Systems in Irvine, Scotland. Design The Z88 is a lightweight portable computer based on a low-power CMOS version of the popular Zilog Z80 microprocessor. It comes with 32 kB of internal pseudo-static RAM and 128 kB of ROM containing the operating system (called OZ). The memory can be expanded up to 3.5 MB of RAM, the contents of which are preserved across sessions. An integrated capacitor prevents the Z88 from losing its data for the limited time it takes to change the batteries. The machine uses a membrane keyboard, which is almost silent in use; an optional electronic "click" can be turned on to indicate keystrokes. The Z88 is powered by four AA batteries, giving up to 20 hours of use. It has three memory card slots, which accommodate proprietary RAM, EPROM or flash cards, the third slot being equipped with a built-in EPROM programmer. Card capacities range from 32 kB to 1 MB. 
The Z88 has a built-in eight-line, 64 × 640 pixel super-twisted nematic display, which has greater contrast than conventional twisted nematic LCDs. The 64 kB addressable by the Z80 processor are divided into four banks of 16 kB each. The maximum memory of 4 MiB for the system is likewise divided into 256 segments of 16 kB each. The hardware can map any of the 16 kB segments to any of the four banks. The first 512 kB are reserved for ROM; the next 512 kB are reserved for internal RAM. The next 3 MB are assigned, 1 MB each, to the three memory slots. Postmarket upgrades Since 1998, a 1 MiB flash memory card has been available which provides convenient non-volatile storage. Once written to the card, files are safe and not reliant on a power supply. Unlike traditional EPROM cards (erased with an external ultraviolet light), this one can be electrically erased in the computer's slot. The first generation of card only worked in slot 3, where a 12 V signal (Vpp) is available. The later generation is based on AMD chips and runs with 5 V for erasure. It is possible to read, write and era
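The banked addressing described above (four 16 kB banks selecting among 256 physical 16 kB segments) can be modelled as follows. This is a simplified illustration only; the register list and example values are hypothetical and do not reflect the actual OZ programming interface:

```python
BANK_SIZE = 16 * 1024   # each Z80-visible bank and each physical segment is 16 kB

def physical_address(segment_regs: list[int], cpu_address: int) -> int:
    """Translate a 16-bit Z80 address to a physical address, given four
    segment registers, each selecting one of 256 16 kB segments."""
    bank = cpu_address // BANK_SIZE        # which of the four banks (0-3)
    offset = cpu_address % BANK_SIZE       # offset within that bank
    return segment_regs[bank] * BANK_SIZE + offset

# Hypothetical mapping: bank 2 points at segment 0x40 (start of the
# internal-RAM half of the 4 MiB address space).
regs = [0x00, 0x01, 0x40, 0x41]
print(hex(physical_address(regs, 0x8000)))   # 0x100000
```

The same 4 MiB space thus stays reachable through a 64 kB CPU window by rewriting the segment registers.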
https://en.wikipedia.org/wiki/Commodore%20128
The Commodore 128, also known as the C128, C-128, C= 128, is the last 8-bit home computer that was commercially released by Commodore Business Machines (CBM). Introduced in January 1985 at the CES in Las Vegas, it appeared three years after its predecessor, the Commodore 64, the bestselling computer of the 1980s. The C128 is a significantly expanded successor to the C64, with nearly full compatibility. The newer machine has 128 KB of RAM in two 64 KB banks, and an 80-column color video output. It has a redesigned case and keyboard. Also included is a Zilog Z80 CPU which allows the C128 to run CP/M, as an alternative to the usual Commodore BASIC environment. The presence of the Z80 and the huge CP/M software library it brings, coupled with the C64's software library, gave the C128 one of the broadest ranges of available software among its competitors. The primary hardware designer of the C128 was Bil Herd, who had worked on the Plus/4. Other hardware engineers were Dave Haynie and Frank Palaia, while the IC design work was done by Dave DiOrio. The main Commodore system software was developed by Fred Bowen and Terry Ryan, while the CP/M subsystem was developed by Von Ertwine. Technical overview The C128's keyboard includes four cursor keys, an Alt key, Help key, Esc key, Tab key and a numeric keypad. None of these were present on the C64 which had only two cursor keys, requiring the use of the Shift key to move the cursor up or left. This alternate arrangement was retained on the 128, for use under C64 mode. The lack of a numeric keypad, Alt key, and Esc key on the C64 was an issue with some CP/M productivity software when used with the C64's Z80 cartridge. A keypad was requested by many C64 owners who spent long hours entering machine language type-in programs. Many of the added keys matched counterparts present on the IBM PC's keyboard and made the new computer more attractive to business software developers. 
While the 128's 40-column mode closely duplicates that of the C64, an extra 1 KB of color RAM is made available to the programmer, as it is multiplexed through memory address 1. The C128's power supply is improved over the C64's unreliable design, being much larger and equipped with cooling vents and a replaceable fuse. The C128 does not perform a system RAM test on power-up like previous Commodore machines. Instead of the single 6510 microprocessor of the C64, the C128 incorporates a two-CPU design. The primary CPU, the 8502, is a slightly improved version of the 6510, capable of being clocked at 2 MHz. The second CPU is a Zilog Z80, which is used to run CP/M software, as well as to initiate operating-mode selection at boot time. The two processors cannot run concurrently; thus the C128 is not a multiprocessing system. The C128's complex architecture includes four differently accessed kinds of RAM (128 KB main RAM, 16–64 KB VDC video RAM, 2 kNibbles VIC-II Color RAM, 2 KB floppy-drive RAM on C128Ds, 0, 128 or 512 KB REU RAM), two or th
https://en.wikipedia.org/wiki/C%2B%2B
C++ (, pronounced "C plus plus" and sometimes abbreviated as CPP) is a high-level, general-purpose programming language created by Danish computer scientist Bjarne Stroustrup. First released in 1985 as an extension of the C programming language, it has since expanded significantly over time; C++ has object-oriented, generic, and functional features, in addition to facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, LLVM, Microsoft, Intel, Embarcadero, Oracle, and IBM. C++ was designed with systems programming and embedded, resource-constrained software and large systems in mind, with performance, efficiency, and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications, video games, servers (e.g. e-commerce, web search, or databases), and performance-critical applications (e.g. telephone switches or space probes). C++ is standardized by the International Organization for Standardization (ISO), with the latest standard version ratified and published by ISO in December 2020 as ISO/IEC 14882:2020 (informally known as C++20). The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11, C++14, and C++17 standards. The current C++20 standard supersedes these with new features and an enlarged standard library. Before the initial standardization in 1998, C++ was developed by Stroustrup at Bell Labs since 1979 as an extension of the C language; he wanted an efficient and flexible language similar to C that also provided high-level features for program organization. Since 2012, C++ has been on a three-year release schedule with C++23 as the next planned standard. 
History In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "", the predecessor to C++. The motivation for creating a new language originated from Stroustrup's experience in programming for his PhD thesis. Stroustrup found that Simula had features that were very helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working in AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his PhD experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast, portable and widely used. As well as C and Simula's influences, other languages also influenced this new language, including ALGOL 68, Ada, CLU and ML. Initially, Stroustrup's "C with Classes" added features to the C compiler, Cpre, including classes, derived classes, strong typing, inlining and defaul
https://en.wikipedia.org/wiki/VIC-20
The VIC-20 (known as the VC-20 in Germany and the VIC-1001 in Japan) is an 8-bit home computer that was sold by Commodore Business Machines. The VIC-20 was announced in 1980, roughly three years after Commodore's first personal computer, the PET. The VIC-20 was the first computer of any description to sell one million units. It was described as "one of the first anti-spectatorial, non-esoteric computers by design...no longer relegated to hobbyist/enthusiasts or those with money, the computer Commodore developed was the computer of the future." History As the Apple II gained momentum with the advent of VisiCalc in 1979, Jack Tramiel wanted a product that would compete in the same segment, to be presented at the January 1980 CES. For this reason Chuck Peddle and Bill Seiler started to design a computer named TOI (The Other Intellect). The TOI computer failed to materialize, mostly because it required an 80-column character display which in turn required the MOS Technology 6564 chip. However, the chip could not be used in the TOI since it required very expensive static RAM to operate fast enough. As the new decade began, the price of computer hardware was dropping and Tramiel saw an emerging market for low-price computers, that could be sold at retail stores to relative novices rather than professionals or people with an electronics or programming background. Radio Shack had been achieving considerable success with the TRS-80 Model I, a relatively low-cost machine that was widely sold to novices and in 1980 released the Color Computer, which was aimed at the home and educational markets, used ROM cartridges for software, and connected to a TV set. Development In the meantime, new engineer Robert Yannes at MOS Technology (then a part of Commodore) designed a computer in his home dubbed the MicroPET and finished a prototype with help from Al Charpentier and Charles Winterble. 
With the TOI unfinished, when Jack Tramiel was shown the MicroPET prototype, he immediately said he wanted it to be finished and ordered it to be mass-produced following a limited demonstration at the CES. The prototype produced by Yannes had few of the features required for a real computer, so Robert Russell at Commodore headquarters had to coordinate and finish large parts of the design under the codename Vixen. The parts contributed by Russell included a port of the operating system (kernel and BASIC interpreter) taken from John Feagan's design for the Commodore PET, a character set with the characteristic PETSCII, an Atari CX40 joystick-compatible interface, and a ROM cartridge port. The serial IEEE-488-derivative CBM-488 interface was designed by Glen Stark. It served several purposes, including costing substantially less than the IEEE-488 interface on the PET, using smaller cables and connectors that allowed for a more compact case design, and also complying with newly imposed FCC regulations on RFI emissions by home electronics (the PET was certified as Class B off
https://en.wikipedia.org/wiki/Luser
Before the popularization of the Internet in the 1990s, Internet slang defined a luser (sometimes expanded to local user; also luzer or luzzer) as a painfully annoying, stupid, or irritating computer user. The word is a blend of "loser" and "user". Among hackers, the word luser takes on a broad meaning, referring to any normal user (in other words, not a "guru"), with the implication the person is also a loser. The term is partially interchangeable with the hacker term lamer. The term can also signify a layman with only user account privileges, as opposed to a power user or administrator, who has knowledge of, and access to, superuser accounts; for example, an end luser who cannot be trusted with a root account for system administration. It is popular with technical support staff who have to deal with lusers as part of their job, often metaphorically employing a LART (Luser Attitude Readjustment Tool, also known as a clue-by-four, cluestick, or cluebat), meaning turning off the user's access to computer resources and the like. History The Jargon File states that the word was coined around 1975 at MIT, although LUSER is visible in CTSS source code circa 1969 in subroutines involving spying on and killing users and deleting their files and directories. Under ITS, when a user first walked up to a terminal at MIT and typed control-Z to get the computer's attention, it printed out some status information, including how many people were already using the computer. A patch to the system was then written to print "14 losers" instead of "14 users", as a joke. For a while, several hackers who disagreed on the appropriateness of the change struggled covertly, each changing the message behind the backs of the others; any time a user logged into the computer it was equally probable that a user would see, say, "users" or "losers". Finally, someone tried the compromise "lusers", and it stuck. 
Later, ITS also had the command "luser", which attempted to summon assistance from a list of designated helpers. Although ITS ceased to be used in the mid-1990s, use of the term continued to spread, partly because in Unix-style computer operating systems, "user" designates all unprivileged accounts, while the superuser, or root, is the special user account used for system administration. "root" is the conventional name of the user who has all rights or permissions (to all files and programs) in all modes (single- or multi-user). The usage lives on, however, and the term "luser" is often seen in program comments and on Usenet. On IRC, /lusers (which abbreviates "list users") is a common command to get the number of users connected to a server or network. See also Any key Banhammer BOFH id10t Lamer Layer 8 Newbie PEBKAC Power user Notes and references External links Internet slang Internet Relay Chat Pejorative terms for people Internet culture Hacker culture
https://en.wikipedia.org/wiki/CMS
CMS may refer to:

Computing
Call management system
CMS-2 (programming language), used by the United States Navy
Code Morphing Software, a technology used by Transmeta
Collection management system for a museum collection
Color management system, a system for computers to control the representation of colors
Concurrent mark sweep collector, a garbage collector in the Oracle HotSpot Java virtual machine
Configuration management system
Construction and management simulation, a type of simulation video game
Contact management system, an integrated office solution to record relationships and interactions with customers and suppliers
Content management system, a system for managing content and providing it in various formats
Conversational Monitor System, previously Cambridge Monitor System, an IBM mainframe operating system, also known as VM/CMS and CP/CMS
Course management system, software that facilitates e-learning or computer learning
Credential management system, also known as smart card management system (SCMS) and card management system (CMS)
Cryptographic Message Syntax, a cryptographic standard

Medicine
Chronic mountain sickness, or Monge's disease, a disease caused by high altitude
Congenital mitral stenosis
Congenital myasthenic syndrome, an inherited neuromuscular disorder

Organizations
Education
United States
Calexico Mission School, a private school in Calexico, California, USA
Cardigan Mountain School, a junior boarding school in Canaan, New Hampshire, USA
Carpentersville Middle School, a public school in Carpentersville, Illinois, USA
Caruso Middle School, a public school in Deerfield, Illinois, USA
Cedar Middle School, a public school in Cedar City, Utah, USA
Centennial Middle School, a public school in Snohomish, Washington, USA
Charlotte-Mecklenburg Schools, local school district in North Carolina, USA
Chicago Medical School, in North Chicago, Illinois
Chickahominy Middle School, a public school in Mechanicsville, Virginia, USA
Claremont-Mudd-Scripps Stags and Athenas, the joint athletic team of Claremont McKenna College, Harvey Mudd College, and Scripps College in Claremont, California, USA
Clarksville Middle School, a public school in Clarksville, Maryland, USA
Colina Middle School, in Thousand Oaks, California, USA
Colonia Middle School, in Colonia, New Jersey, USA
Community Middle School, a public school in Plainsboro, New Jersey, USA
Creekwood Middle School, a public school in Kingwood, Houston, Texas, USA
Crestview Middle School, a middle school in Clarkson Valley, St. Louis County, Missouri, USA
Crest Memorial School (of the Wildwood Crest School District), a public K-8 school in Wildwood Crest, New Jersey, USA

India
Church Mission Society High School, CMS High School, Thrissur, Kerala, India
City Montessori School, in Lucknow, India
CMS College Kottayam, in Kottayam, Kerala, India

Other
Church Mission School, Pakistan
C.M.S. Ladies' College, Colombo, Sri Lanka

Mathematics
Calcutta
https://en.wikipedia.org/wiki/Grand%20Prix%20Legends
Grand Prix Legends is a computer racing simulator developed by Papyrus Design Group and published in 1998 by Sierra On-Line under the Sierra Sports banner. It simulates the 1967 Grand Prix season. Gameplay The game offers several modes in which the player can race alone or against AI opponents. The game also features multiplayer via LAN. Many parameters affecting the skill and aggressiveness of the AI drivers can be specified. Development The game was in development for three years with a team of 25 to 30 people. Inspired by the 1966 film Grand Prix, the developers chose to base the game on the 1967 Formula 1 Grand Prix season because during that period tracks were narrow and lined with trees, houses, and other elements that in a video game can serve as backgrounds to enhance the sensation of speed. In addition, the more primitive suspension of cars of the time meant that the car physics could be more visually dramatic. However, the amount of time that has passed since the 1967 Grand Prix season meant that some of the tracks the designers wanted to recreate no longer existed in their original form. The team visited town halls to get blueprints for defunct tracks. Licensing could also be difficult. Papyrus co-founder Dave Kaemmer commented, "It's not a pleasant thing to call someone on the phone and say that you want to license their dead son's name, but people have been very helpful." Reception Critical reception The game received "favorable" reviews according to the review aggregation website GameRankings. GameSpot said, "Grand Prix Legends will reward you with arguably the most intense racing experience ever seen on a personal computer." Next Generation said of the game in its January 1999 issue, "Overall, there aren't enough adjectives to describe how excellent this is. If you're willing to make the investment it takes to become good, you'll be rewarded with what is perhaps the most exciting and engaging racing game we've ever had the privilege to play." 
An issue later, the magazine ranked it at #47 in its list of the Fifty Best Games of All Time, saying, "Not only does it have the most realistic physics model yet in a racing game [...] a brilliant premise, and the best drive AI we've seen, but GPL enables players to do something they simply never could in the real world. Many, if not most[,] games do that, but few do it as convincingly or compellingly." Sales The game was a commercial failure; Andy Mahood of PC Gamer US described its sales as "abysmally poor". In 2003, writer Mark H. Walker reported that "the game sold only a few thousand copies" in the United States, which he attributed to the general unpopularity of Formula One racing in the country. He noted that its "steep learning curve kept many fans away" in European markets. GameSpot's Gord Goble attributed its performance to the "combination of treacherous gameplay, sometimes glacial frame rates, and esoteric subject matter". It ultimately totaled 200,000 sales by 2004. Awards
https://en.wikipedia.org/wiki/Kristen%20Nygaard
Kristen Nygaard (27 August 1926 – 10 August 2002) was a Norwegian computer scientist, programming language pioneer, and politician. Internationally, Nygaard is acknowledged as the co-inventor of object-oriented programming and the programming language Simula with Ole-Johan Dahl in the 1960s. Nygaard and Dahl received the 2001 A. M. Turing Award for their contribution to computer science. Early life and career Nygaard was born in Oslo and received his master's degree in mathematics at the University of Oslo in 1956. His thesis on abstract probability theory was entitled "Theoretical Aspects of Monte Carlo methods". Nygaard worked full-time at the Norwegian Defense Research Establishment from 1948 to 1960, in computing and programming (1948–1954) and operational research (1952–1960). From 1957 to 1960, he was head of the first operations research groups in the Norwegian defense establishment. He was cofounder and first chairman of the Norwegian Operational Research Society (1959–1964). In 1960, he was hired by the Norwegian Computing Center (NCC), responsible for building up the NCC as a research institute in the 1960s, becoming its Director of Research in 1962. Object-oriented programming With Ole-Johan Dahl, he developed the initial ideas for object-oriented programming (OOP) in the 1960s at the Norwegian Computing Center (Norsk Regnesentral (NR)) as part of the Simula I (1961–1965) and Simula 67 (1965–1968) simulation programming languages, which began as an extended variant and superset of ALGOL 60. The languages introduced the core concepts of object-oriented programming: objects, classes, inheritance, virtual quantities, and multi-threaded (quasi-parallel) program execution. In 2004, the Association Internationale pour les Technologies Objets (AITO) established an annual prize in the name of Ole-Johan Dahl and Kristen Nygaard to honor their pioneering work on object-orientation. 
This Dahl–Nygaard Prize is awarded annually to two individuals that have made significant technical contributions to the field of object-orientation. The work should be in the spirit of the pioneer conceptual and/or implementation work of Dahl and Nygaard which shaped the present view of object-oriented programming. The prize is presented each year at the ECOOP conference. The prize consists of two awards given to a senior and a junior professional. He conducted research for Norwegian trade unions on planning, control, and data processing, all evaluated in light of the objectives of organised labour (1971–1973), working together with Olav Terje Bergo. His other research and development work included the social impact of computer technology, and the general system description language DELTA (1973–1975), working with Erik Holbaek-Hanssen and Petter Haandlykken. Nygaard was a professor at Aarhus University, Denmark (1975–1976) and then became professor emeritus at the University of Oslo (part-time from 1977, full-time 1984–1996). His work in Aarhus and Oslo include
https://en.wikipedia.org/wiki/Computer%20display%20standard
Computer display standards are a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. They are associated with specific expansion cards, video connectors, and monitors. These standards encompass various aspects of the display, including resolution, refresh rate, color depth, and connectivity. History Various computer display standards or display modes have been used in the history of the personal computer. They are often a combination of aspect ratio (specified as width-to-height ratio), display resolution (specified as the width and height in pixels), color depth (measured in bits per pixel), and refresh rate (expressed in hertz). Associated with the screen resolution and refresh rate is a display adapter. Earlier display adapters were simple frame-buffers, but later display standards also specified a more extensive set of display functions and software controlled interface. Beyond display modes, the VESA industry organization has defined several standards related to power management and device identification, while ergonomics standards are set by the TCO. Standards A number of common resolutions have been used with computers descended from the original IBM PC. Some of these are now supported by other families of personal computers. These are de facto standards, usually originated by one manufacturer and reverse-engineered by others, though the VESA group has co-ordinated the efforts of several leading video display adapter manufacturers. Video standards associated with IBM-PC-descended personal computers are shown in the diagram and table below, alongside those of early Macintosh and other makes for comparison. (From the early 1990s onwards, most manufacturers moved over to PC display standards thanks to widely available and affordable hardware). 
Display resolution prefixes Although the common standard prefixes super and ultra do not indicate specific modifiers to base standard resolutions, several others do: Quarter (Q or q) A quarter of the base resolution. E.g. QVGA, a term for a 320×240 resolution, half the width and height of VGA, hence the quarter total resolution. The "Q" prefix usually indicates "Quad" (4 times as many, not 1/4 times as many) in higher resolutions, and sometimes "q" is used instead of "Q" to specify quarter (by analogy with SI prefixes m/M), but this usage is not consistent. Wide (W) The base resolution increased by increasing the width and keeping the height constant, for square or near-square pixels on a widescreen display, usually with an aspect ratio of either 16:9 (adding an extra 1/3rd width vs a standard 4:3 display) or 16:10 (adding an extra 1/5th). However, it is sometimes used to denote a resolution that would have roughly the same total pixel count as this, but in a different aspect and sharing neither the horizontal OR vertical resolution—typically for a 16:10 resolution which is narrower but taller than the 16:9 option, and therefore larger in both dimensions th
https://en.wikipedia.org/wiki/Video%20Graphics%20Array
Video Graphics Array (VGA) is a video display controller and accompanying de facto graphics standard, first introduced with the IBM PS/2 line of computers in 1987, which became ubiquitous in the IBM PC compatible industry within three years. The term can now refer to the computer display standard, the 15-pin D-subminiature VGA connector, or the 640 × 480 resolution characteristic of the VGA hardware. VGA was the last IBM graphics standard to which the majority of IBM PC compatible computer manufacturers conformed, making it the lowest common denominator that virtually all post-1990 PC graphics hardware can be expected to implement. IBM intended to supersede VGA with the Extended Graphics Array (XGA) standard, but failed. Instead, VGA was adapted into many extended forms by third parties, collectively known as Super VGA, then gave way to custom graphics processing units which, in addition to their proprietary interfaces and capabilities, continue to implement common VGA graphics modes and interfaces to the present day. The VGA analog interface standard has been extended to support resolutions of up to 2048 × 1536 and even higher in special applications. Hardware design The color palette random access memory (RAM) and its corresponding digital-to-analog converter (DAC) were integrated into one chip (the RAMDAC) and the cathode-ray tube controller (CRTC) was integrated into a main VGA chip, which eliminated several other chips in previous graphics adapters, so VGA only additionally required external video RAM and timing crystals. This small part count allowed IBM to include VGA directly on the PS/2 motherboard, in contrast to prior IBM PC models (PC, PC/XT, and PC AT), which required a separate display adapter installed in a slot in order to connect a monitor. The term "array" rather than "adapter" in the name denoted that it was not a complete independent expansion device, but a single component that could be integrated into a system.
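As a back-of-the-envelope illustration (the mode parameters and the 256 KB video RAM figure are standard VGA values, stated here as an aside rather than taken from this article), the framebuffer arithmetic shows why such a small chip count sufficed:

```python
# Rough framebuffer-size arithmetic for standard VGA modes.
# 256 KB is the video RAM that shipped with IBM's original VGA implementation.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes required to hold one full frame at the given color depth."""
    return width * height * bits_per_pixel // 8

VRAM = 256 * 1024  # bytes

modes = {
    "640x480, 16 colors (4 bpp, planar)": framebuffer_bytes(640, 480, 4),
    "640x350, 16 colors (EGA compat)": framebuffer_bytes(640, 350, 4),
    "320x200, 256 colors (Mode 13h)": framebuffer_bytes(320, 200, 8),
}

for name, size in modes.items():
    print(f"{name}: {size} bytes, fits in 256 KB: {size <= VRAM}")
```

Every standard mode fits comfortably within 256 KB, so a complete display subsystem needed only the VGA chip, the RAMDAC, video RAM, and timing crystals.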
Unlike the graphics adapters that preceded it (MDA, CGA, EGA and many third-party options) there was initially no discrete VGA card released by IBM. The first commercial implementation of VGA was a built-in component of the IBM PS/2, in which it was accompanied by 256 KB of video RAM, and a new DE-15 connector replacing the DE-9 used by previous graphics adapters. IBM later released the standalone IBM PS/2 Display Adapter, which utilized the VGA but could be added to machines that did not have it built in. Capabilities The VGA supports all graphics modes supported by the MDA, CGA and EGA cards, as well as multiple new modes. Standard graphics modes 640 × 480 in 16 colors or monochrome 640 × 350 or 640 × 200 in 16 colors or monochrome (EGA/CGA compatibility) 320 × 200 in 256 colors (Mode 13h) 320 × 200 in 4 or 16 colors (CGA compatibility) The 640 × 480 16-color and 320 × 200 256-color modes had fully redefinable palettes, with each entry selected from an 18-bit (262,144-color) gamut. The other modes defaulted t
https://en.wikipedia.org/wiki/Canonical%20LR%20parser
In computer science, a canonical LR parser or LR(1) parser is an LR(k) parser for k=1, i.e. with a single lookahead terminal. The special attribute of this parser is that any LR(k) grammar with k>1 can be transformed into an LR(1) grammar. However, back-substitutions are required to reduce k and as back-substitutions increase, the grammar can quickly become large, repetitive and hard to understand. LR(k) can handle all deterministic context-free languages. In the past this LR(k) parser has been avoided because of its huge memory requirements in favor of less powerful alternatives such as the LALR and the LL(1) parser. Recently, however, a "minimal LR(1) parser" whose space requirements are close to LALR parsers, is being offered by several parser generators. Like most parsers, the LR(1) parser is automatically generated by compiler-compilers like GNU Bison, MSTA, Menhir, HYACC, LRSTAR. History In 1965 Donald Knuth invented the LR(k) parser (Left to right, Rightmost derivation parser) a type of shift-reduce parser, as a generalization of existing precedence parsers. This parser has the potential of recognizing all deterministic context-free languages and can produce both left and right derivations of statements encountered in the input file. Knuth proved that it reaches its maximum language recognition power for k=1 and provided a method for transforming LR(k), k > 1 grammars into LR(1) grammars. Canonical LR(1) parsers have the practical disadvantage of having enormous memory requirements for their internal parser-table representation. In 1969, Frank DeRemer suggested two simplified versions of the LR parser called LALR and SLR. These parsers require much less memory than Canonical LR(1) parsers, but have slightly less language-recognition power. LALR(1) parsers have been the most common implementations of the LR Parser. 
However, a new type of LR(1) parser, which some call a "minimal LR(1) parser", was introduced in 1977 by David Pager, who showed that LR(1) parsers can be created whose memory requirements rival those of LALR(1) parsers. Recently, some parser generators offer minimal LR(1) parsers, which solve not only the memory-requirement problem but also the mysterious-conflict problem inherent in LALR(1) parser generators. In addition, minimal LR(1) parsers can use shift-reduce actions, which makes them faster than canonical LR(1) parsers. Overview The LR(1) parser is a deterministic automaton and as such its operation is based on static state transition tables. These codify the grammar of the language it recognizes and are typically called "parsing tables". The parsing tables of the LR(1) parser are parameterized with a lookahead terminal. Simple parsing tables, like those used by the LR(0) parser, represent grammar rules of the form A1 → A B which means that if we have input A followed by B then we will reduce the pair to A1 regardless of what follows. After parameterizing such a rule with a lookahead we have: A1 →
https://en.wikipedia.org/wiki/J%20%28programming%20language%29
The J programming language, developed in the early 1990s by Kenneth E. Iverson and Roger Hui, is an array programming language based primarily on APL (also by Iverson). To avoid repeating the APL special-character problem, J uses only the basic ASCII character set, resorting to the use of the dot and colon as inflections to form short words similar to digraphs. Most such primary (or primitive) J words serve as mathematical symbols, with the dot or colon extending the meaning of the basic characters available. Also, many characters which in other languages often must be paired (such as [] {} "" `` or <>) are treated by J as stand-alone words or, when inflected, as single-character roots of multi-character words. J is a very terse array programming language, and is most suited to mathematical and statistical programming, especially when performing operations on matrices. It has also been used in extreme programming and network performance analysis. Like John Backus's languages FP and FL, J supports function-level programming via its tacit programming features. Unlike most languages that support object-oriented programming, J's flexible hierarchical namespace scheme (where every name exists in a specific locale) can be effectively used as a framework for both class-based and prototype-based object-oriented programming. Since March 2011, J is free and open-source software under the GNU General Public License version 3 (GPLv3). One may also purchase source under a negotiated license. Examples J permits point-free style and function composition. Thus, its programs can be very terse and are considered difficult to read by some programmers. The "Hello, World!" program in J is: 'Hello, World!' This implementation of hello world reflects the traditional use of J – programs are entered into a J interpreter session, and the results of expressions are displayed. It's also possible to arrange for J scripts to be executed as standalone programs. 
Here's how this might look on a Unix system:

#!/bin/jc
echo 'Hello, world!'
exit ''

(Note that current J implementations install either jconsole or, because the name jconsole is already used by Java, ijconsole, and likely install it to /usr/bin or some other directory (perhaps the Applications directory on OS X). So there is a system dependency here which the user would have to resolve.) Historically, APL used / to indicate the fold, so +/1 2 3 was equivalent to 1+2+3. Meanwhile, division was represented with the mathematical division symbol (÷). Because ASCII does not include a division symbol per se, J uses % to represent division, as a visual approximation or reminder. (This illustrates something of the mnemonic character of J's tokens, and some of the quandaries imposed by the use of ASCII.) Defining a J function named avg to calculate the average of a list of numbers yields:

avg=: +/ % #

+/ sums the items of the array.
# counts the number of items in the array.
% divides the sum by the number of items.

This is a test exe
https://en.wikipedia.org/wiki/Mitch%20Kapor
Mitchell David Kapor ( ; born November 1, 1950) is an American entrepreneur best known for his work as an application developer in the early days of the personal computer software industry, later founding Lotus, where he was instrumental in developing the Lotus 1-2-3 spreadsheet. He left Lotus in 1986. In 1990 with John Perry Barlow and John Gilmore, he co-founded the Electronic Frontier Foundation, and served as its chairman until 1994. In 2003, Kapor became the founding chair of the Mozilla Foundation, creator of the open source web browser Firefox. Kapor has been an investor in the personal computing industry, and supporter of social causes via Kapor Capital and the Kapor Center. Kapor serves on the board of SMASH, a non-profit founded by his wife, Freada Kapor Klein, to help underrepresented scholars hone their STEM knowledge while building the networks and skills for careers in tech and the sciences. Early life and education Kapor was born to a Jewish family in Brooklyn, New York, and raised in Freeport, New York on Long Island, where he graduated from high school in 1967. He received a B.A. from Yale College in 1971 and studied psychology, linguistics, and computer science in an interdisciplinary major, also attending the Boston-based Beacon College, which had a satellite campus in Washington, D.C. at the time. He began but did not complete a master's degree at the MIT Sloan School of Management but later served on the faculty of the MIT Media Lab and the University of California, Berkeley School of Information. Career Lotus Kapor and his business partner Jonathan Sachs founded Lotus in 1982 with backing from Ben Rosen. Lotus' first product was presentation software for the Apple II known as Lotus Executive Briefing System. Kapor founded Lotus after leaving his post as head of development at VisiCorp, the distributors of the VisiCalc spreadsheet, and selling all his rights to VisiPlot and VisiTrend to VisiCorp. 
Shortly after Kapor left VisiCorp, he and Sachs produced an integrated spreadsheet and graphics program. Even though IBM and VisiCorp had a collaboration agreement whereby VisiCalc was being shipped simultaneously with the PC, Lotus had a clearly superior product. Lotus released Lotus 1-2-3 on January 26, 1983. The name referred to the three ways the product could be used, as a spreadsheet, graphics package, and database manager. In practice the latter two functions were less often used, but 1-2-3 was the most powerful spreadsheet program available. Lotus was almost immediately successful, becoming the world's third largest microcomputer software company in 1983 with $53 million in sales in its first year, compared to its business plan forecast of $1 million in sales. Jerome Want says: Under founder and CEO Mitch Kapor, Lotus was a company with few rules and fewer internal bureaucratic barriers.... Kapor decided that he was no longer suited to running a company, and [in 1986] he replaced himself with Jim Manzi. Digital right
https://en.wikipedia.org/wiki/Apple%20IIe
The Apple IIe (styled as Apple //e) is the third model in the Apple II series of personal computers produced by Apple Computer. The e in the name stands for enhanced, referring to the fact that several popular features were now built-in that were formerly only available as upgrades or add-ons in earlier models. Improved expandability combined with the new features made for a very attractive general-purpose machine to first-time computer shoppers. As the last surviving model of the Apple II computer line before discontinuation, and having been manufactured and sold for nearly 11 years with relatively few changes, the IIe earned the distinction of being the longest-lived computer in Apple's history. History Apple Computer planned to discontinue the Apple II series after the introduction of the Apple III in 1980; the company intended to clearly establish market segmentation by designing the Apple III to appeal to the business market, leaving the Apple II for home and education users. Management believed that "once the Apple III was out, the Apple II would stop selling in six months", cofounder Steve Wozniak later said. By the time IBM released the rival IBM PC in 1981, the Apple II's technology was already four years old. In September 1981 InfoWorld reported—below the PC's announcement—that Apple was secretly developing three new computers "to be ready for release within a year": Lisa, Macintosh, and "Diana". Describing the last as a software-compatible Apple II replacement—"A 6502 machine using custom LSI" and a simpler motherboard—it said that Diana "was ready for release months ago" but decided to improve the design to better compete with the Xerox 820. "Now it appears that when Diana is ready for release, it will offer features and a price that will make the Apple II uncompetitive", the magazine wrote. "Apple's plans to phase out the Apple II have also been delayed by complications in the design of the Apple III", the article also said. 
After the Apple III initially struggled, management decided in 1981 that continuing the Apple II line was in the company's best interest. After years with the Apple II Plus essentially at a standstill came the introduction of a new Apple II model, the Apple IIe (codenamed "Diana" and "Super II"). The Apple IIe was released in January 1983 as the successor to the Apple II Plus. It was the first Apple computer with a custom ASIC chip, which reduced much of the old discrete IC-based circuitry to a single chip and thereby reduced the cost and size of the motherboard. Some of the Apple IIe's hardware features were borrowed from the Apple III (e.g. bank-switched memory), and others from incorporating the Apple II Plus Language Card. The culmination of these changes led to increased sales and greater market share in home, education, and small-business use. New features One of the most notable improvements of the Apple IIe is the addition of a full ASCII character
https://en.wikipedia.org/wiki/EPROM
An EPROM (rarely EROM), or erasable programmable read-only memory, is a type of programmable read-only memory (PROM) chip that retains its data when its power supply is switched off. Computer memory that can retrieve stored data after a power supply has been turned off and back on is called non-volatile. It is an array of floating-gate transistors individually programmed by an electronic device that supplies higher voltages than those normally used in digital circuits. Once programmed, an EPROM can be erased by exposing it to a strong ultraviolet light source (such as from a mercury-vapor lamp). EPROMs are easily recognizable by the transparent fused quartz (or on later models resin) window on the top of the package, through which the silicon chip is visible, and which permits exposure to ultraviolet light during erasing. Operation Development of the EPROM memory cell started with investigation of faulty integrated circuits where the gate connections of transistors had broken. Stored charge on these isolated gates changes their threshold voltage. Following the invention of the MOSFET (metal–oxide–semiconductor field-effect transistor) by Mohamed Atalla and Dawon Kahng at Bell Labs, presented in 1960, Frank Wanlass studied MOSFET structures in the early 1960s. In 1963, he noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM technology. In 1967, Dawon Kahng and Simon Min Sze at Bell Labs proposed that the floating gate of a MOSFET could be used for the cell of a reprogrammable ROM (read-only memory). Building on this concept, Dov Frohman of Intel invented the EPROM in 1971, and was awarded a patent for it in 1972. Frohman designed the Intel 1702, a 2048-bit EPROM, which was announced by Intel in 1971. Each storage location of an EPROM consists of a single field-effect transistor. Each field-effect transistor consists of a channel in the semiconductor body of the device.
Source and drain contacts are made to regions at the end of the channel. An insulating layer of oxide is grown over the channel, then a conductive (silicon or aluminum) gate electrode is deposited, and a further thick layer of oxide is deposited over the gate electrode. The floating-gate electrode has no connections to other parts of the integrated circuit and is completely insulated by the surrounding layers of oxide. A control gate electrode is deposited and further oxide covers it. To retrieve data from the EPROM, the address represented by the values at the address pins of the EPROM is decoded and used to connect one word (usually an 8-bit byte) of storage to the output buffer amplifiers. Each bit of the word is a 1 or 0, depending on the storage transistor being switched on or off, conducting or non-conducting. The switching state of the field-effect transistor is controlled by the voltage on the control gate of the transistor. Presence of a voltage on this gate creates a conductive channel in the transistor
https://en.wikipedia.org/wiki/Visualization
Visualization or visualisation may refer to:

Visualization (graphics), the physical or imagined creation of images, diagrams, or animations to communicate a message
Data and information visualization, the practice of creating visual representations of complex data and information
Music visualization, animated imagery based on a piece of music
Mental image, the experience of images without the relevant external stimuli
"Visualization", a song by Blank Banshee on the 2012 album Blank Banshee 0

See also
Creative visualization (disambiguation)
Visualizer (disambiguation)
Graphics
List of graphical methods, various forms of visualization
Guided imagery, a mind-body intervention by a trained practitioner
Illustration, a decoration, interpretation or visual explanation of a text, concept or process
Image, an artifact that depicts visual perception, such as a photograph or other picture
Infographics
https://en.wikipedia.org/wiki/Sieve%20of%20Eratosthenes
In mathematics, the sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to any given limit. It does so by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the first prime number, 2. The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them that is equal to that prime. This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime. Once all the multiples of each discovered prime have been marked as composites, the remaining unmarked numbers are primes. The earliest known reference to the sieve (κόσκινον Ἐρατοσθένους, kóskinon Eratosthénous) is in Nicomachus of Gerasa's Introduction to Arithmetic, an early 2nd cent. CE book which attributes it to Eratosthenes of Cyrene, a 3rd cent. BCE Greek mathematician, though describing the sieving by odd numbers instead of by primes. One of a number of prime number sieves, it is one of the most efficient ways to find all of the smaller primes. It may be used to find primes in arithmetic progressions. Overview A prime number is a natural number that has exactly two distinct natural number divisors: the number 1 and itself. To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method:

1. Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n).
2. Initially, let p equal 2, the smallest prime number.
3. Enumerate the multiples of p by counting in increments of p from 2p to n, and mark them in the list (these will be 2p, 3p, 4p, ...; p itself should not be marked).
4. Find the smallest number in the list greater than p that is not marked. If there was no such number, stop. Otherwise, let p now equal this new number (which is the next prime), and repeat from step 3.

When the algorithm terminates, the numbers remaining unmarked in the list are all the primes below n.
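The steps above translate directly into code; a minimal Python sketch (the function and variable names are illustrative, and the refinements discussed in the text, such as starting at p², are omitted for clarity):

```python
def sieve_of_eratosthenes(n):
    """Return all primes <= n by iteratively marking multiples of each prime."""
    if n < 2:
        return []
    marked = [False] * (n + 1)   # marked[i] becomes True once i is known composite
    p = 2                        # step 2: start with the smallest prime
    while p <= n:
        if not marked[p]:
            # step 3: mark 2p, 3p, 4p, ... (p itself stays unmarked)
            for multiple in range(2 * p, n + 1, p):
                marked[multiple] = True
        p += 1                   # step 4: advance to the next unmarked number
    return [i for i in range(2, n + 1) if not marked[i]]
```

For example, sieve_of_eratosthenes(30) returns [2, 3, 5, 7, 11, 13, 17, 19, 23, 29].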
The main idea here is that every value given to p will be prime, because if it were composite it would be marked as a multiple of some other, smaller prime. Note that some of the numbers may be marked more than once (e.g., 15 will be marked both for 3 and 5). As a refinement, it is sufficient to mark the numbers in step 3 starting from p², as all the smaller multiples of p will have already been marked at that point. This means that the algorithm is allowed to terminate in step 4 when p² is greater than n. Another refinement is to initially list odd numbers only, (3, 5, ..., n), and count in increments of 2p in step 3, thus marking only odd multiples of p. This actually appears in the original algorithm. This can be generalized with wheel factorization, forming the initial list only from numbers coprime with the first few primes and not just from odds (i.e., numbers coprime with 2), and counting in the correspondingly adjusted increments so that only such multiples of p are generated that are coprime with those small primes, in the first place. Example To find al
https://en.wikipedia.org/wiki/Character%20%28computing%29
In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language. Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process the text. Examples of control characters include carriage return and tab as well as other instructions to printers or other devices that display or otherwise process text. Characters are typically combined into strings. Historically, the term character was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other options like the 6-bit character code were once popular, and the 5-bit Baudot code has been used in the past as well. The term has even been applied to 4 bits with only 16 possible values. All modern systems use a varying-size sequence of these fixed-sized pieces, for instance UTF-8 uses a varying number of 8-bit code units to define a "code point" and Unicode uses a varying number of those to define a "character". Encoding Computers and communication equipment represent characters using a character encoding that assigns each character to something that can be stored or transmitted through a network, typically an integer quantity represented by a sequence of digits. Two common encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
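The distinction between a user-perceived character, its code points, and its fixed-size code units can be illustrated in Python (the byte values follow from the UTF-8 encoding rules):

```python
# One character, one code point (U+00E9), two 8-bit UTF-8 code units.
s = "é"
assert len(s) == 1
assert s.encode("utf-8") == b"\xc3\xa9"

# The same user-perceived character written as two code points:
# 'e' followed by a combining acute accent (U+0301), three code units total.
decomposed = "e\u0301"
assert len(decomposed) == 2
assert decomposed.encode("utf-8") == b"e\xcc\x81"
```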
Terminology Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character. With the advent and widespread acceptance of Unicode and bit-agnostic coded character sets, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the s
https://en.wikipedia.org/wiki/Feature%20creep
Feature creep is the excessive ongoing expansion or addition of new features in a product, especially in computer software, video games and consumer and business electronics. These extra features go beyond the basic function of the product and can result in software bloat and over-complication, rather than simple design. The definition of what qualifies as "feature creep" varies among end users, where what is perceived as such by some users may be considered practical functionality by others. Feature creep is one of the most common sources of cost and schedule overruns. It thus endangers and can even kill products and projects. Causes Feature creep may arise from the desire to provide the consumer with a more useful or desirable product in order to increase sales or distribution. Once a product does everything that it is designed to do, the manufacturer may add functions some users might consider unneeded (sometimes at the cost of efficiency) or continue with the original version (at the cost of a perceived lack of improvement). Feature creep may also arise as a result of compromise from a committee implementing several different viewpoints or use cases in the same product, even for opportunistic reasons. As more features are added to support each approach, cross-conversion features between the multiple paradigms may further complicate the total features. Control There are several methods to control feature creep, including: strict limits for allowable features, multiple variations, and pruning excess features. Separation Later feature creep may be avoided by basing initial design on strong software fundamentals, such as logical separation of functionality and data access, e.g. using submenus that are optionally accessible by power users who desire more functionality and a higher verbosity of information. It can be actively controlled with rigorous change management and by delaying changes to later delivery phases of a project. 
Variations and options Another method of controlling feature creep is maintaining multiple variations of products, where features are limited and reduced in the more basic variations, e.g. Microsoft Windows editions. For software user interfaces, viewing modes or operation modes can be used (e.g. basic mode or expert mode), between which the users can select to match their own needs. Both in many graphical user interfaces and command line interfaces, users are able to opt in for a higher verbosity manually. In the latter case, many command-line programs accept a -v or --verbose option, which shows more detailed information that might be less relevant to minimal users but useful to power users or for debugging and troubleshooting purposes. Because the ever-growing, ever-expanding addition of new features might exceed available resources, a minimal core "basic" version of a product can be maintained separately, to ensure operation in smaller operating environments. Using the "80/20 rule", the more ba
https://en.wikipedia.org/wiki/Binary%20space%20partitioning
In computer science, binary space partitioning (BSP) is a method for space partitioning which recursively subdivides a Euclidean space into two convex sets by using hyperplanes as partitions. This process of subdividing gives rise to a representation of objects within the space in the form of a tree data structure known as a BSP tree. Binary space partitioning was developed in the context of 3D computer graphics in 1969. The structure of a BSP tree is useful in rendering because it can efficiently give spatial information about the objects in a scene, such as objects being ordered from front-to-back with respect to a viewer at a given location. Other applications of BSP include: performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D video games, ray tracing, and other applications that involve the handling of complex spatial scenes. History 1969 Schumacker et al. published a report that described how carefully positioned planes in a virtual environment could be used to accelerate polygon ordering. The technique made use of depth coherence, which states that a polygon on the far side of the plane cannot, in any way, obstruct a closer polygon. This was used in flight simulators made by GE as well as Evans and Sutherland. However, the creation of the polygonal data organization was performed manually by the scene designer. 1980 Fuchs et al. extended Schumacker's idea to the representation of 3D objects in a virtual environment by using planes that lie coincident with polygons to recursively partition the 3D space. This provided a fully automated and algorithmic generation of a hierarchical polygonal data structure known as a Binary Space Partitioning Tree (BSP Tree). The process took place as an off-line preprocessing step that was performed once per environment/object. At run-time, the view-dependent visibility ordering was generated by traversing the tree. 1981 Naylor's Ph.D. 
thesis provided a full development of both BSP trees and a graph-theoretic approach using strongly connected components for pre-computing visibility, as well as the connection between the two methods. BSP trees as a dimension-independent spatial search structure were emphasized, with applications to visible surface determination. The thesis also included the first empirical data demonstrating that the size of the tree and the number of new polygons were reasonable (using a model of the Space Shuttle). 1983 Fuchs et al. described a micro-code implementation of the BSP tree algorithm on an Ikonas frame buffer system. This was the first demonstration of real-time visible surface determination using BSP trees. 1987 Thibault and Naylor described how arbitrary polyhedra may be represented using a BSP tree as opposed to the traditional b-rep (boundary representation). This provided a solid representation vs. a surface-based representation. Set operations on polyhedra were described using a tool, enabling constru
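The front-to-back visibility ordering that a BSP tree yields can be sketched with a minimal one-dimensional example (a hypothetical illustration, not drawn from the works cited above): at each node, the subtree on the viewer's side of the partition is visited first, then the node's own objects, then the far subtree.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BSPNode:
    split: float                  # partition coordinate (a 1-D "hyperplane")
    objects: List[str]            # objects lying on the partition
    neg: Optional["BSPNode"]      # subspace with x < split
    pos: Optional["BSPNode"]      # subspace with x > split

def front_to_back(node, viewer_x, out):
    """Append objects nearest-first for a viewer at coordinate viewer_x."""
    if node is None:
        return
    near, far = (node.neg, node.pos) if viewer_x < node.split else (node.pos, node.neg)
    front_to_back(near, viewer_x, out)   # the viewer's side first
    out.extend(node.objects)
    front_to_back(far, viewer_x, out)

tree = BSPNode(0.0, ["wall0"],
               BSPNode(-5.0, ["wall-5"], None, None),
               BSPNode(5.0, ["wall5"], None, None))
order = []
front_to_back(tree, 3.0, order)
print(order)  # ['wall5', 'wall0', 'wall-5'] -- nearest to farthest from x=3
```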
https://en.wikipedia.org/wiki/William%20Shunn
William Shunn (born August 14, 1967) is an American science fiction writer and computer programmer. He was raised in a Latter-day Saint household, the oldest of eight children. In 1986, he served a mission to Canada for the Church of Jesus Christ of Latter-day Saints, but was arrested for making a false bomb threat, for the purpose of preventing his fellow missionary from returning home. Life and career Shunn received a B.S. in computer science at the University of Utah in 1991. He went to work for WordPerfect Corporation and was part of the team that developed WordPerfect 6.0 for MS-DOS. In 1995, he moved from Utah to New York City. He left the LDS Church at the same time and created one of the earliest ex-Mormon web sites. Shunn's first professional short story was published in The Magazine of Fantasy & Science Fiction in 1993. He has been nominated once for the Hugo Award and twice for the Nebula Award. Shunn is the author of a 2015 memoir, The Accidental Terrorist: Confessions of a Reluctant Missionary. In the wake of the September 11, 2001 attacks, he created what may have been the first online survivor registry. Shunn is also known for creating a web site that offers daily hints to The New York Times Spelling Bee. This tool is commonly used within the community of Spelling Bee players. Awards and nominations 2001: Nominated for Nebula Award for Best Novelette for "Dance of the Yellow-Breasted Luddites" (Vanishing Acts, ed. 
Ellen Datlow, Tor Books, New York, NY, 2000) 2006: Nominated for Hugo Award for Best Novella and Nebula Award for Best Novella for "Inclination" (Asimov's Science Fiction, April/May 2006) Bibliography Fiction Netherview Station story series: The Practical Ramifications of Interstellar Packet Loss (1998) Dance of the Yellow-Breasted Luddites (2000) Inclination (2006) Strong Medicine (2003) Love in the Age of Spyware (2003) An Alternate History of the 21st Century, chap-book (2007) Nonfiction The Accidental Terrorist: Confessions of a Reluctant Missionary (2015) In 1993 or 1994, Shunn wrote a style guide for standard manuscript format (the generally accepted method for preparing a fiction manuscript for submission to professional markets), based on advice gathered at the Clarion Workshop and elsewhere. First published to the web in 1995, this guide (and its later revisions), commonly known as "Shunn format", has since been adopted by many magazines as a requirement for submissions. References External links Official site "The Practical Ramifications of Interstellar Packet Loss" (short story) "Love in the Age of Spyware" (short story from Salon, 16 July 2003) "Strong Medicine" (short story from Salon, 10 November 2003) "The Missionary Imposition" (personal essay) Podcast of Shunn comparing the Book of Mormon to Lord of the Rings Spelling Bee Solver 1967 births American science fiction writers Former Latter Day Saints Living people American Mormon missionaries in Canada University of Utah alumni
https://en.wikipedia.org/wiki/Cursor
Cursor may refer to: Cursor (user interface), an indicator used to show the current position for user interaction on a computer monitor or other display device Cursor (databases), a control structure that enables traversal over the records in a database Cursor, a value that is the position of an object in some known data structure, a predecessor of pointers Cursor (slide rule), indicates corresponding points on scales that are not adjacent to each other Cursor Models, made for the Mercedes Benz Museum, and as promotional models Cursor (magazine), an early magazine distributed on cassette from 1978 and into the early 1980s Cursor, a holographic sidekick character from the TV series Automan See also Pointer (graphical user interfaces), commonly called a mouse cursor.
https://en.wikipedia.org/wiki/Data%20terminal%20equipment
Data terminal equipment (DTE) is an end instrument that converts user information into signals or reconverts received signals. It is also called data processing terminal equipment or tail circuit. A DTE device communicates with the data circuit-terminating equipment (DCE), such as a modem. The DTE/DCE classification was introduced by IBM. A DTE is the functional unit of a data station (station, terminal) that serves as a data source or a data sink and provides for the data communication control function to be performed in accordance with the link protocol. Usually, the DTE device is the terminal (or a computer emulating a terminal), and the DCE is a modem or another carrier-owned device. The data terminal equipment may be a single piece of equipment or an interconnected subsystem of multiple pieces of equipment that perform all the required functions necessary to permit users to communicate. A user interacts with the DTE (e.g. through a human-machine interface), or the DTE may be the user. Connections The usual case assumes devices of two different types on each end of the interconnecting cable, as when simply adding a DTE to the topology (e.g. connecting it to a hub, a DCE). The less trivial case is the interconnection of devices of the same type: DTE-DTE or DCE-DCE. Such cases need crossover cables, such as an Ethernet crossover cable or a null modem for RS-232. D-sub connectors follow another rule for pin assignment. 25-pin DTE devices transmit on pin 2 and receive on pin 3. 25-pin DCE devices transmit on pin 3 and receive on pin 2. 9-pin DTE devices transmit on pin 3 and receive on pin 2. 9-pin DCE devices transmit on pin 2 and receive on pin 3. Networking A general rule is that DCE devices provide the clock signal (internal clocking) and the DTE device synchronizes on the provided clock (external clocking).
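The 9-pin rules above explain why same-type connections need crossed wiring; a small sketch (illustrative only) derives the cable wiring from them:

```python
# DE-9 pin assignments as stated above: a DTE transmits on pin 3 and
# receives on pin 2; a DCE transmits on pin 2 and receives on pin 3.
TX = {"DTE": 3, "DCE": 2}
RX = {"DTE": 2, "DCE": 3}

def wires(side_a, side_b):
    """Each wire as (pin on side A, pin on side B):
    A's transmit to B's receive, and A's receive to B's transmit."""
    return [(TX[side_a], RX[side_b]), (RX[side_a], TX[side_b])]

print(wires("DTE", "DCE"))  # [(3, 3), (2, 2)] -- a straight-through cable works
print(wires("DTE", "DTE"))  # [(3, 2), (2, 3)] -- pins must cross: a null modem
```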
This term is also generally used in the Telco and Cisco equipment context to designate a network device, such as terminals, personal computers but also routers and bridges, that's unable or configured not to generate clock signals. Hence a direct PC to PC Ethernet connection can also be called a DTE to DTE communication. This communication is done via an Ethernet crossover cable as opposed to a PC to DCE (hub, switch, or bridge) communication which is done via an Ethernet straight cable. V.35 is a high-speed serial interface designed to support both higher data rates and connectivity between DTEs (data-terminal equipment) or DCEs (data-communication equipment) over digital lines. See also Communication endpoint Data circuit-terminating equipment End system Federal Standard 1037C, MIL-STD-188 Host (network) Node (networking) Terminal (telecommunication) Serial port, in depth description of pinouts References External links Internetworking Technology Handbook, Frame Relay, Cisco Systems Telecommunications equipment
https://en.wikipedia.org/wiki/Yarrow%20algorithm
The Yarrow algorithm is a family of cryptographic pseudorandom number generators (CPRNG) devised by John Kelsey, Bruce Schneier, and Niels Ferguson and published in 1999. The Yarrow algorithm is explicitly unpatented, royalty-free, and open source; no license is required to use it. An improved design from Ferguson and Schneier, Fortuna, is described in their book, Practical Cryptography. Yarrow was used in FreeBSD, but is now superseded by Fortuna. Yarrow was also incorporated in iOS and macOS for their /dev/random devices, but Apple has switched to Fortuna since 2020 Q1. Name The name Yarrow alludes to the use of the yarrow plant in the random generating process of I Ching divination. Since the Xia dynasty ( to ), Chinese have used yarrow stalks for divination. Fortunetellers divide a set of 50 yarrow stalks into piles and use modular arithmetic recursively to generate two bits of random information that have a non-uniform distribution. Principles Yarrow's main design principles are: resistance to attacks, easy use by programmers with no cryptography background, and reusability of existing building blocks. Formerly widely used designs, such as ANSI X9.17 and RSAREF 2.0 PRNG, have loopholes that provide attack opportunities under some circumstances. Some of them are not designed with real-world attacks in mind. Yarrow also aims to provide easy integration, to enable system designers with little knowledge of PRNG functionality. Design Components The design of Yarrow consists of four major components: an entropy accumulator, a reseed mechanism, a generation mechanism, and reseed control. Yarrow accumulates entropy into two pools: the fast pool, which provides frequent reseeds of the key to keep the duration of key compromises as short as possible; the slow pool, which provides rare but conservative reseeds of the key. This makes sure that the reseed is secured even when the entropy estimates are very optimistic.
The reseed mechanism connects the entropy accumulator to the generating mechanism. Reseeding from the fast pool uses the current key and the hash of all inputs to the fast pool since startup to generate a new key; reseeding from the slow pool behaves similarly, except it also uses the hash of all inputs to the slow pool to generate a new key. Both of the reseedings reset the entropy estimation of the fast pool to zero, but the last one also sets the estimation of the slow pool to zero. The reseeding mechanism updates the key constantly, so that even if the key or pool information is known to the attacker before the reseed, they will be unknown to the attacker after the reseed. The reseed control component balances between frequent reseeding, which is desirable but might allow iterative guessing attacks, and infrequent reseeding, which compromises more information for an attacker who has the key. Yarrow uses the fast pool to reseed whenever the source passes some threshold values, and uses the slow pool to reseed whenever at leas
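The two-pool accumulation and reseeding scheme can be sketched as follows. This is a simplified illustration of the design principle only, not the Yarrow-160 specification (which uses SHA-1, per-source entropy estimates, and a block-cipher-based generator); the thresholds, the hash choice, and the alternating sample distribution here are arbitrary assumptions:

```python
import hashlib

class TwoPoolAccumulator:
    """Simplified two-pool entropy accumulator in the style of Yarrow."""

    def __init__(self, fast_threshold=100, slow_threshold=160):
        self.fast = hashlib.sha256()      # frequent, cheap reseeds
        self.slow = hashlib.sha256()      # rare, conservative reseeds
        self.fast_bits = 0                # running entropy estimates (bits)
        self.slow_bits = 0
        self.fast_threshold = fast_threshold
        self.slow_threshold = slow_threshold
        self.key = bytes(32)
        self._to_fast = True              # alternate samples between pools

    def add_sample(self, sample: bytes, est_bits: int):
        if self._to_fast:
            self.fast.update(sample)
            self.fast_bits += est_bits
        else:
            self.slow.update(sample)
            self.slow_bits += est_bits
        self._to_fast = not self._to_fast
        if self.slow_bits >= self.slow_threshold:
            self._reseed(from_slow=True)
        elif self.fast_bits >= self.fast_threshold:
            self._reseed(from_slow=False)

    def _reseed(self, from_slow: bool):
        material = self.key + self.fast.digest()
        if from_slow:                     # a slow reseed folds in both pools
            material += self.slow.digest()
            self.slow_bits = 0            # ...and resets the slow estimate too
        self.key = hashlib.sha256(material).digest()
        self.fast_bits = 0                # the fast estimate resets either way
```

A fast reseed mixes the current key with the fast pool's hash; a slow reseed additionally mixes in the slow pool and resets both entropy estimates, mirroring the behavior described above.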
https://en.wikipedia.org/wiki/Hardware%20description%20language
In computer engineering, a hardware description language (HDL) is a specialized computer language used to describe the structure and behavior of electronic circuits, and most commonly, digital logic circuits. A hardware description language enables a precise, formal description of an electronic circuit that allows for the automated analysis and simulation of an electronic circuit. It also allows for the synthesis of an HDL description into a netlist (a specification of physical electronic components and how they are connected together), which can then be placed and routed to produce the set of masks used to create an integrated circuit. A hardware description language looks much like a programming language such as C or ALGOL; it is a textual description consisting of expressions, statements and control structures. One important difference between most programming languages and HDLs is that HDLs explicitly include the notion of time. HDLs form an integral part of electronic design automation (EDA) systems, especially for complex circuits, such as application-specific integrated circuits, microprocessors, and programmable logic devices. Motivation Due to the exploding complexity of digital electronic circuits since the 1970s (see Moore's law), circuit designers needed digital logic descriptions to be performed at a high level without being tied to a specific electronic technology, such as ECL, TTL or CMOS. HDLs were created to implement register-transfer level abstraction, a model of the data flow and timing of a circuit. There are two major hardware description languages: VHDL and Verilog. There are different types of description in them: "dataflow, behavioral and structural". 
Example of dataflow of VHDL:

LIBRARY IEEE;
USE IEEE.STD_LOGIC_1164.ALL;

ENTITY not1 IS
  PORT(
    a : IN STD_LOGIC;
    b : OUT STD_LOGIC
  );
END not1;

ARCHITECTURE behavioral OF not1 IS
BEGIN
  b <= NOT a;
END behavioral;

Structure of HDL HDLs are standard text-based expressions of the structure of electronic systems and their behaviour over time. Like concurrent programming languages, HDL syntax and semantics include explicit notations for expressing concurrency. However, in contrast to most software programming languages, HDLs also include an explicit notion of time, which is a primary attribute of hardware. Languages whose only characteristic is to express circuit connectivity between a hierarchy of blocks are properly classified as netlist languages used in electric computer-aided design. HDL can be used to express designs in structural, behavioral or register-transfer-level architectures for the same circuit functionality; in the latter two cases the synthesizer decides the architecture and logic gate layout. HDLs are used to write executable specifications for hardware. A program designed to implement the underlying semantics of the language statements and simulate the progress of time provides the hardware designer with the ability to mode
https://en.wikipedia.org/wiki/Dynamic%20random-access%20memory
Dynamic random-access memory (dynamic RAM or DRAM) is a type of random-access semiconductor memory that stores each bit of data in a memory cell, usually consisting of a tiny capacitor and a transistor, both typically based on metal–oxide–semiconductor (MOS) technology. While most DRAM memory cell designs use a capacitor and transistor, some only use two transistors. In the designs where a capacitor is used, the capacitor can either be charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. The electric charge on the capacitors gradually leaks away; without intervention the data on the capacitor would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM) which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is removed. However, DRAM does exhibit limited data remanence. DRAM typically takes the form of an integrated circuit chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in digital electronics where low-cost and high-capacity computer memory is required. One of the largest applications for DRAM is the main memory (colloquially called the "RAM") in modern computers and graphics cards (where the "main memory" is called the graphics memory). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors. The need to refresh DRAM demands more complicated circuitry and timing than SRAM. 
This is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities with a simultaneous reduction in cost per bit. Refreshing the data consumes power and a variety of techniques are used to manage the overall power consumption. DRAM had a 47% increase in the price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down. In 2018, a "key characteristic of the DRAM market is that there are currently only three major suppliers — Micron Technology, SK Hynix and Samsung Electronics" that are "keeping a pretty tight rein on their capacity.” There is also Kioxia (previously Toshiba Memory Corporation after 2017 spin-off). Other manufacturers make and sell DIMMs (but not the DRAM chips in them), such as Kingston Technology, and some manufacturers that sell stacked DRAM (used e.g. in the fastest supercomputers on the
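The leak-and-refresh behavior described above can be illustrated with a toy model (the constants are purely illustrative; real DRAM refreshes each row on the order of every 64 ms):

```python
FULL, THRESHOLD, LEAK = 1.0, 0.5, 0.1   # arbitrary illustrative constants

def tick(cells):
    """One time step: every capacitor loses a bit of charge."""
    return [max(charge - LEAK, 0.0) for charge in cells]

def refresh(cells):
    """Read each bit and rewrite it: any charge still above the sense
    threshold is restored to full charge; anything below reads as 0."""
    return [FULL if charge >= THRESHOLD else 0.0 for charge in cells]

cells = [FULL, FULL, FULL]
for _ in range(3):
    cells = tick(cells)                  # charge decays toward the threshold
cells = refresh(cells)                   # data rewritten before it is lost
```

Refreshing in time restores every cell; refreshing too late (after the charge falls below the sense threshold) loses the stored bits, which is why the external refresh circuit must run periodically.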
https://en.wikipedia.org/wiki/Transport%20in%20Israel
Transportation in Israel is based mainly on private motor vehicles and bus service and an expanding railway network. A lack of inland waterways and the small size of the country make air and water transport of only minor importance in domestic transportation, but they are vitally important for Israel's international transport links. Demands of population growth, political factors, the Israel Defense Forces, tourism and increased traffic set the pace for all sectors, being a major driver in the mobility transition towards railways and public transit while moving away from motorized road transport. All facets of transportation in Israel are under the supervision of the Ministry of Transport and Road Safety. Private transportation Roads Israel's road network spans of roads, of which are classified as freeways. The network spans the whole country. Route 6, the Trans Israel Highway, starts just east of Haifa down to the outskirts of Beer Sheva, about . Route 1 between Jerusalem and Tel Aviv and Route 2 between Tel Aviv and Haifa are well maintained highways. Cycling Tel Aviv has a growing network of bike paths, with over 360 kilometers (224 miles) existing or planned. In April 2011, Tel Aviv municipality launched Tel-O-Fun, a bicycle sharing system, in which 150 stations of bicycles for rent were installed within the city limits. Jerusalem has over 125 kilometers (78 miles) of cycleways, either existing or planned. National Bike Trail The National Bike Trail, when completed, will take riders from the southern city of Eilat to the border with Lebanon, passing through Jerusalem, Tel Aviv and several other cities. Ofnidan (Greater Tel Aviv Cycle Network) As of 2021, construction was underway on Ofnidan, a cycle network of seven inter-urban routes connecting the cities of the Tel Aviv Metropolitan Area, with some segments already open. Public transportation Bus service Buses are the country's main form of public transport.
In 2017, bus passenger trips totaled approximately 740 million. In 2009, 16 companies operated buses for public transport, totaling 5,939 buses and 8,470 drivers. Egged is Israel's largest bus company, and operates routes throughout the country. Bus routes in some areas are operated by smaller carriers, the largest being the Dan Bus Company, operating routes in Gush Dan. Kavim is the next largest. Bus stations in Israel, other than standalone bus stops, come in two types: terminals (masof, pl. mesofim) and central stations (tahana merkazit). Each terminal serves a number of routes, usually over a dozen, while a central station may serve over a hundred bus routes. The largest central bus terminal in the country is the Tel Aviv Central Bus Station, which is also the second largest bus terminal in the world. On August 5, 2010, the Ministry of Transport opened a website that contained information about public bus and train routes in the country. Previously, information was given only by the individual public transi
https://en.wikipedia.org/wiki/Freedb
Freedb was a database of compact disc track listings, where all the content was under the GNU General Public License. To look up CD information over the Internet, a client program calculated a hash function from the CD table of contents and used it as a disc ID to query the database. If the disc was in the database, the client was able to retrieve and display the artist, album title, track list and some additional information. It was originally based on the now-proprietary CDDB (Compact Disc DataBase). Because it inherited the CDDB limitations, there is no data field in the Freedb database for composer. This limits its usefulness for classical music CDs. Furthermore, CDs in a series are often introduced in the database by different people, resulting in inconsistent spelling and naming conventions across discs. the database held just under 2,000,000 CDs. As of 2007, MusicBrainz – a project with similar goals – had a Freedb gateway that allowed access to their own database. The Freedb gateway was shut down on March 18, 2019. History The original software behind CDDB was released under the GNU General Public License, and many people submitted CD information thinking the service would also remain free. The license was later changed, however, and some programmers complained that the new license included certain terms that they couldn't accept: if one wanted to access CDDB, one was not allowed to access any other CDDB-like database (such as Freedb), and any programs using a CDDB lookup had to display a CDDB logo while performing the lookup. In March 2001, CDDB, now owned by Gracenote, banned all unlicensed applications from accessing their database. New licenses for CDDB1 (the original version of CDDB) were no longer available, since Gracenote wanted to force programmers to switch to CDDB2 (a new version incompatible with CDDB1 and hence with Freedb). The license change motivated the Freedb project, which is intended to remain free. 
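The disc-ID hash mentioned above can be sketched from the widely documented CDDB1 calculation: sum the decimal digits of each track's start time in seconds, then pack that checksum with the disc's play length and track count into a 32-bit value. (Details such as deriving the second-based offsets from CD frames and the 2-second lead-in are omitted here.)

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    total = 0
    while n > 0:
        total += n % 10
        n //= 10
    return total

def disc_id(track_offsets, disc_length):
    """CDDB1-style disc ID; offsets and length are in seconds.
    Layout: checksum byte | play-time seconds (16 bits) | track count byte."""
    checksum = sum(digit_sum(off) for off in track_offsets) % 0xFF
    playtime = disc_length - track_offsets[0]
    return (checksum << 24) | (playtime << 8) | len(track_offsets)

print(hex(disc_id([2, 100], 200)))  # 0x300c602
```

Because the ID is derived only from the table of contents, two pressings with identical track layouts collide, which is why submissions also carry the full track data.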
Freedb is used primarily by media players, cataloguers, audio taggers and CD ripper software. As of version 6 of the Freedb protocol, Freedb accepts and returns UTF-8 data. Magix acquired Freedb in 2006. MusicBrainz – a project with similar goals – released a Freedb gateway in 2007, allowing users to harvest information from the MusicBrainz database rather than Freedb. This service was shuttered in 2019. Freedb.org and its services were scheduled to be shut down on March 31, 2020. As of May 28, 2020, the site was still operational. On 13 June 2020, it was observed that the URL used for lookups, Freedb.Freedb.org, no longer resolved, and as a result the service no longer appears to operate. gnudb.org has continued to provide the Freedb.org database after Freedb.org was shut down. Client software Further Freedb aware applications include: Asunder Audiograbber CDex cdrdao Exact Audio Copy foobar2000 fre:ac Grip JetAudio Mp3tag MediaMonkey puddletag Quod Libet See also List of on
https://en.wikipedia.org/wiki/Crippleware
Crippleware has been defined in realms of both computer software and hardware. In software, crippleware means that "vital features of the program such as printing or the ability to save files are disabled until the user purchases a registration key". While crippleware allows consumers to see the software before they buy, they are unable to test its complete functionality because of the disabled functions. Hardware crippleware is "a hardware device that has not been designed to its full capability". The functionality of the hardware device is limited to encourage consumers to pay for a more expensive upgraded version. Usually the hardware device considered to be crippleware can be upgraded to its full potential by way of a trivial change, such as removing a jumper wire. The manufacturer would most likely release the crippleware as a low-end or economy version of their product. Computer software Deliberately limited programs are usually freeware versions of computer programs that lack the most advanced (or even crucial) features of the original program. Limited versions are made available in order to increase the popularity of the full program (by making it more desirable) without giving it away free. Examples include a word processor that cannot save or print, and screencasting or video editing programs that apply a watermark (often a logo) onto the video screen. However, crippleware programs can also differentiate between tiers of paying software customers. The term "crippleware" is sometimes used to describe software products whose functions have been limited (or "crippled") with the sole purpose of encouraging or requiring the user to pay for those functions (either by paying a one-time fee or an ongoing subscription fee). The less derogatory term, from a shareware software producer's perspective, is feature-limited.
Feature-limited is merely one mechanism for marketing shareware as a damaged good; others are time-limited, usage-limited, capacity-limited, nagware and output-limited. From the producer's standpoint, feature-limited allows customers to try software with no commitment instead of relying on questionable or possibly staged reviews. Try-before-you-buy applications are very prevalent for mobile devices, with the additional damaged good of ad displays alongside all of the other forms of damaged-good applications. From an open-source software provider's perspective, there is the open-core model, in which a feature-limited version of the product is freely available while the full version is commercial. The feature-limited version can be used widely; this approach is used by products like MySQL and Eucalyptus. Computer hardware This product differentiation strategy has also been used in hardware products: The Intel 486SX, which was a 486DX with the FPU removed or, in early versions, present but disabled. AMD disabled defective cores on their quad-core Phenom and Phenom II X4 processor dies to make cheaper tripl
https://en.wikipedia.org/wiki/P%C5%82ock
Płock (pronounced ) is a city in central Poland, on the Vistula river, in the Masovian Voivodeship. According to the data provided by GUS on 31 December 2021, there were 116,962 inhabitants in the city. Its full ceremonial name, according to the preamble to the City Statute, is Stołeczne Książęce Miasto Płock (the Princely or Ducal Capital City of Płock). It is used in ceremonial documents as well as for preserving an old tradition. Płock is the capital of a powiat (county) in the west of the Masovian Voivodeship. From 1079 to 1138 it was the capital of Poland. The Wzgórze Tumskie ("Cathedral Hill") with the Płock Castle and the Catholic Cathedral, which contains the sarcophagi of a number of Polish monarchs, is listed as a Historic Monument of Poland. It was the main city and administrative center of Mazovia in the Middle Ages before the rise of Warsaw as a major city of Poland, and later it remained a royal city of Poland. It is the cultural, academic, scientific, administrative and transportation center of the west and north Masovian region. Płock is the seat of the Roman Catholic Diocese of Płock, one of the oldest dioceses in Poland, founded in the 11th century, and it is also the worldwide headquarters of the Mariavite Church. Płock is also home to the Marshal Stanisław Małachowski High School, the oldest school in Poland and one of the oldest in Central Europe, and the Płock refinery, the country's largest oil refinery. History Middle Ages The area was long inhabited by pagan peoples. In the 10th century, a fortified settlement was established high on the bank of the Vistula River. It stood at a junction of shipping and trade routes and was strategic for centuries; its location was a great asset. In 1009 a Benedictine monastery was established here, and it became a center of science and art for the area.
During the rule of the first monarchs of the Piast dynasty, even prior to the Baptism of Poland, Płock served as one of the monarchical seats, including that of Prince Mieszko I and King Bolesław I the Brave. The king built the original fortifications on Cathedral Hill, overlooking the Vistula River. From 1037 to 1047, Płock was capital of the independent Mazovian state of Miecław. Płock was the residence of many Mazovian princes. In 1075, a diocese seat was created here for the Roman Catholic church. From 1079 to 1138, during the reign of the Polish monarchs Władysław I Herman and Bolesław III Wrymouth, the city was the capital of Poland, then earning its title as the Ducal Capital City of Płock. As a result of the fragmentation of Poland into smaller duchies, from 1138 it was the capital of the Duchy of Masovia, and afterwards the Duchy of Płock. In 1180 the present-day Marshal Stanisław Małachowski High School (Małachowianka), the oldest still existing school in Poland and one of the oldest in Central Europe, was established. Among its notable graduates is scholar and jurist Paweł Włodkowic, a precursor of religious freedom i
https://en.wikipedia.org/wiki/J.%20Presper%20Eckert
John Adam Presper Eckert Jr. (April 9, 1919 – June 3, 1995) was an American electrical engineer and computer pioneer. With John Mauchly, he designed the first general-purpose electronic digital computer (ENIAC), presented the first course in computing topics (the Moore School Lectures), founded the Eckert–Mauchly Computer Corporation, and designed the first commercial computer in the U.S., the UNIVAC, which incorporated Eckert's invention of the mercury delay-line memory. Education Eckert was born in Philadelphia to wealthy real estate developer John Eckert, and was raised in a large house in Philadelphia's Germantown section. During elementary school, he was driven by chauffeur to William Penn Charter School, and in high school joined the Engineer's Club of Philadelphia and spent afternoons at the electronics laboratory of television inventor Philo Farnsworth in Chestnut Hill. He placed second in the country on the math portion of the College Board examination. Eckert initially enrolled in the University of Pennsylvania's Wharton School to study business at the encouragement of his parents, but in 1937 transferred to Penn's Moore School of Electrical Engineering. In 1940, at age 21, Eckert applied for his first patent, "Light Modulating Method and Apparatus". At the Moore School, Eckert participated in research on radar timing, made improvements to the speed and precision of the Moore School's differential analyzer, and in 1941 assisted in teaching a summer course in electronics under the Engineering, Science, and Management War Training (ESMWT) offered through the Moore School by the United States Department of War. Development of ENIAC John Mauchly, then chairman of the physics department of nearby Ursinus College, was a student in the summer electronics course, and the following fall secured a teaching position at the Moore School. 
Mauchly's proposal for building an electronic digital computer using vacuum tubes, many times faster and more accurate than the differential analyzer for computing ballistics tables for artillery, caught the interest of the Moore School's Army liaison, Lieutenant Herman Goldstine, and on April 9, 1943, was formally presented in a meeting at Aberdeen Proving Ground to director Colonel Leslie Simon, Oswald Veblen, and others. A contract was awarded for Moore School's construction of the proposed computing machine, which would be named ENIAC, and Eckert was made the project's chief engineer. ENIAC was completed in late 1945 and was unveiled to the public in February 1946. Entrepreneurship Both Eckert and Mauchly left the Moore School in March 1946 over a dispute involving assignment of claims on intellectual property developed at the University. In that year, the University of Pennsylvania adopted a new patent policy to protect the intellectual purity of the research it sponsored, which would have required Eckert and Mauchly to assign all their patents to the University had they stayed beyond March. Eckert and Ma
https://en.wikipedia.org/wiki/Universal%20asynchronous%20receiver-transmitter
A Universal Asynchronous Receiver-Transmitter (UART) is a computer hardware device for asynchronous serial communication in which the data format and transmission speeds are configurable. It sends data bits one by one, from the least significant to the most significant, framed by start and stop bits so that precise timing is handled by the communication channel. The electric signaling levels are handled by a driver circuit external to the UART. Common signal levels are RS-232, RS-485, and raw TTL for short debugging links. Early teletypewriters used current loops. It was one of the earliest computer communication devices, used to attach teletypewriters for an operator console. It was also an early hardware system for the Internet. A UART is usually an individual integrated circuit (IC), or part of one, used for serial communications over a computer or peripheral device serial port. One or more UART peripherals are commonly integrated in microcontroller chips. Specialised UARTs are used for automobiles, smart cards and SIMs. A related device, the universal synchronous and asynchronous receiver-transmitter (USART), also supports synchronous operation. Transmitting and receiving serial data A UART contains the following components:
- a clock generator, usually running at a multiple of the bit rate to allow sampling in the middle of a bit period
- input and output shift registers, along with transmit/receive or FIFO buffers
- transmit/receive control
- read/write control logic
The universal asynchronous receiver-transmitter (UART) takes bytes of data and transmits the individual bits in a sequential fashion. At the destination, a second UART re-assembles the bits into complete bytes. Each UART contains a shift register, which is the fundamental method of conversion between serial and parallel forms. Serial transmission of digital information (bits) through a single wire or other medium is less costly than parallel transmission through multiple wires.
The UART usually does not directly generate or receive the external signals used between different items of equipment. Separate interface devices are used to convert the logic level signals of the UART to and from the external signaling levels, which may be standardized voltage levels, current levels, or other signals. Communication may occur in three modes:
- simplex (in one direction only, with no provision for the receiving device to send information back to the transmitting device)
- full duplex (both devices send and receive at the same time)
- half duplex (devices take turns transmitting and receiving)
For a UART link to work, the following settings need to be the same on both the transmitting and receiving side:
- voltage level
- baud rate
- parity bit
- data bits size
- stop bits size
- flow control
For the voltage level, two UART modules work well when they both use the same level, e.g. 3 V–3 V between the two modules. To use two UART modules at different voltage levels, a level shifter circuit needs to be added e
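The framing rules above — a low start bit, data bits sent least significant first, an optional parity bit, then one or more high stop bits — can be sketched as a small model of the bit sequence on the wire. This is an illustration of the frame format, not a driver for any particular UART chip; the function name and its defaults are invented:

```python
def uart_frame(byte, data_bits=8, parity=None, stop_bits=1):
    """Model a UART frame as a list of line levels (1 = idle/mark, 0 = space)."""
    assert 5 <= data_bits <= 9
    bits = [0]  # start bit pulls the idle-high line low
    # data bits go out least significant bit first
    data = [(byte >> i) & 1 for i in range(data_bits)]
    bits += data
    if parity == "even":       # parity bit makes the count of 1s even
        bits.append(sum(data) % 2)
    elif parity == "odd":      # or odd
        bits.append(1 - sum(data) % 2)
    bits += [1] * stop_bits    # stop bit(s) return the line to idle
    return bits

# 8-N-1 framing of ASCII 'A' (0x41 = 0b01000001, so LSB-first data is 1,0,0,0,0,0,1,0)
print(uart_frame(0x41))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Both ends must agree on `data_bits`, `parity`, and `stop_bits` (and on the baud rate, which this timing-free model omits), which is why the settings listed above have to match on both sides of the link.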
https://en.wikipedia.org/wiki/Expansion%20card
In computing, an expansion card (also called an expansion board, adapter card, peripheral card or accessory card) is a printed circuit board that can be inserted into an electrical connector, or expansion slot (also referred to as a bus slot) on a computer's motherboard (see also backplane) to add functionality to a computer system. Sometimes the design of the computer's case and motherboard involves placing most (or all) of these slots onto a separate, removable card. Typically such cards are referred to as a riser card in part because they project upward from the board and allow expansion cards to be placed above and parallel to the motherboard. Expansion cards allow the capabilities and interfaces of a computer system to be extended or supplemented in a way appropriate to the tasks it will perform. For example, a high-speed multi-channel data acquisition system would be of no use in a personal computer used for bookkeeping, but might be a key part of a system used for industrial process control. Expansion cards can often be installed or removed in the field, allowing a degree of user customization for particular purposes. Some expansion cards take the form of "daughterboards" that plug into connectors on a supporting system board. In personal computing, notable expansion buses and expansion card standards include the S-100 bus from 1974 associated with the CP/M operating system, the 50-pin expansion slots of the original Apple II computer from 1977 (unique to Apple), IBM's Industry Standard Architecture (ISA) introduced with the IBM PC in 1981, Acorn's tube expansion bus on the BBC Micro also from 1981, IBM's patented and proprietary Micro Channel architecture (MCA) from 1987 that never won favour in the clone market, the vastly improved Peripheral Component Interconnect (PCI) that displaced ISA in 1992, and PCI Express from 2003 which abstracts the interconnect into high-speed communication "lanes" and relegates all other functions into software protocol. 
History Vacuum-tube based computers had modular construction, but individual functions for peripheral devices filled a cabinet, not just a printed circuit board. Processor, memory and I/O cards became feasible with the development of integrated circuits. Expansion cards make processor systems adaptable to the needs of the user by making it possible to connect various types of devices, including I/O, additional memory, and optional features (such as a floating point unit) to the central processor. Minicomputers, starting with the PDP-8, were made of multiple cards communicating through, and powered by, a passive backplane. The first commercial microcomputer to feature expansion slots was the Micral N, in 1973. The first company to establish a de facto standard was Altair with the Altair 8800, developed 1974–1975, which later became a multi-manufacturer standard, the S-100 bus. Many of these computers were also passive backplane designs, where all elements of the computer, (processor, me
https://en.wikipedia.org/wiki/Game%20Show%20Network
Game Show Network (GSN) is an American basic cable channel owned by Sony Pictures Television. The channel's programming is primarily dedicated to game shows, including reruns of acquired game shows, along with new, first-run original and revived game shows. The network has also previously aired reality competition series and televised poker. As of October 2019, Game Show Network claimed that it was available to "nearly 75 million" households in America, primarily through traditional cable and satellite services. The network and its original programming are also available on streaming and Internet television services, including Frndly TV, YouTube TV, Philo, fuboTV, Sling TV, and Plex. History 1994–2004: As "Game Show Network" On May 7, 1992, Sony Pictures Entertainment joined forces with the United Video Satellite Group to launch the Game Show Channel, which was set to begin in 1993. The announcement of the channel was made by SPE president Mel Harris. On December 2, 1992, Sony Pictures Entertainment made a deal to acquire the Barry & Enright game show library, and in a separate deal, struck a 10-year licensing agreement for the rights to the Mark Goodson game show library of more than 20,000 episodes including, among others, What's My Line?, Family Feud, and To Tell the Truth. Upon the deal, Sony said it would sell an equity stake in the network to Mark Goodson Productions, including the production of new original series by Jonathan Goodson Productions. Both deals were completed on December 7, 1992, eleven days before Mark Goodson's death. On June 6, 1994, Mark Goodson Productions pulled out of the venture. GSN's launch was originally intended for 10:00 p.m. ET, but was moved to 7:00 p.m. ET. Game Show Network launched at 7:00 p.m. on December 1, 1994. The first game show aired on GSN was What's My Line?.
By the launch date, the network had secured rights to over 40,000 episodes from the libraries of several game show production companies and corporate parent Sony. The initial lineup was exclusively acquired programming such as Match Game, Family Feud, The Newlywed Game, Jeopardy!, and Wheel of Fortune. Over time, Game Show Network acquired the rights to The Price is Right, The $10,000 Pyramid, Let's Make a Deal, Hollywood Squares, Who Wants to Be a Millionaire and other libraries, putting them on the schedule at various times throughout the network's history. The network eventually began producing original game shows such as Lingo, Burt Luddin's Love Buffet, Whammy!, Inquizition, and Extreme Gong. Faux Pause is an American television program that aired in 1998 on Game Show Network. Co-hosted by Mary Gallagher and Sean Donnellan, Pause consisted of jokes and skits done while watching certain episodes of game shows, in a similar fashion to Mystery Science Theater 3000. In 2001, a massive change in both leadership and programming at the network took place when Liberty Media acquired a 50% stake. Both president Michae
https://en.wikipedia.org/wiki/Local%20bus
In computer architecture, a local bus is a computer bus that connects directly, or almost directly, from the central processing unit (CPU) to one or more slots on the expansion bus. The significance of direct connection to the CPU is avoiding the bottleneck created by the expansion bus, thus providing fast throughput. There are several local buses built into various types of computers to increase the speed of data transfer (i.e. bandwidth). Local buses for expanded memory and video boards are the most common. VESA Local Bus and Processor Direct Slot were examples of a local bus design. Although VL-Bus was later succeeded by AGP, it is not correct to categorize AGP as a local bus. Whereas VL-Bus operated on the CPU's memory bus at the CPU's clock speed, an AGP peripheral runs at specified clock speeds that run independently of the CPU clock (usually using a divider of the CPU clock). See also OPTi Inc., which had its own bespoke local bus expansion slot design in the early 1990s References Computer buses
https://en.wikipedia.org/wiki/Jeremiah%20%28TV%20series%29
Jeremiah is a post-apocalyptic action drama television series starring Luke Perry and Malcolm-Jamal Warner that ran on the Showtime network from 2002 to 2004. The series takes place in a future wherein the adult population has been wiped out by a deadly virus. The series ended production in 2003, after the management of Showtime decided they were no longer interested in producing science fiction programming. Had the series continued, it would have run under a different showrunner than J. Michael Straczynski, who decided to leave following the completion of production of the second season due to creative differences between him and MGM Television. Episodes for the final half of the second season did not begin airing in the United States until September 3, 2004. Plot The year is 2021, 15 years after a plague has killed nearly everyone over the age of thirteen (both the event and the virus itself are referred to as "The Big Death" and "The Big D"). Two young men, Jeremiah and Kurdy, meet up and join forces with those inside "Thunder Mountain" and help rebuild civilization. Jeremiah is searching for the "Valhalla Sector", where his father may still be alive. The eponymous Jeremiah is a semi-loner who has spent the last 15 years travelling back and forth across the United States, eking out a living and looking for a place called "Valhalla Sector" (the remains of Raven Rock), which his father—a viral researcher—had mentioned to Jeremiah as a possible refuge shortly before disappearing into the chaos of "the Big Death." A stop in the Colorado trading town of Clarefield results in Jeremiah teaming up with another lone traveller named Kurdy, before being imprisoned by the town's warlord in a cell with a man named Simon, who wants to recruit Jeremiah for a vague and mysterious organization. With Kurdy's help, Jeremiah and Simon escape, but Simon is fatally wounded in the process.
Following the instructions given to them by the dying Simon, Jeremiah and Kurdy take Simon's truck back to "Thunder Mountain," the remains of the NORAD complex, where they discover a well-organized and -equipped group operating out of the base, led by the former child prodigy Markus Alexander. Markus chooses to employ Jeremiah and Kurdy as a recon team to replace the now dead Simon and his partner, sending the two men back outside to gather information in preparation for the time when the mountain will need to start rebuilding the world. Over the course of the first season, the group increasingly encounters threats originating from Valhalla Sector, which they discover to be a sealed and heavily armed bunker complex in Pennsylvania, used to house the remains of the US government and military leadership during the Big Death. The survivors there plan to rebuild the world in an authoritarian mold, combining their military power with attempts to control the "Big Death" virus itself in order to wipe out resistance by slaughtering non-compliant populations. The second half
https://en.wikipedia.org/wiki/LINC
The LINC (Laboratory INstrument Computer) is a 12-bit, 2048-word transistorized computer. The LINC is considered by some the first minicomputer and a forerunner to the personal computer. Originally named the "Linc", suggesting the project's origins at MIT's Lincoln Laboratory, it was renamed LINC after the project moved from the Lincoln Laboratory. The LINC was designed by Wesley A. Clark and Charles Molnar. The LINC and other "MIT Group" machines were designed at MIT and eventually built by Digital Equipment Corporation (DEC) and Spear Inc. of Waltham, Massachusetts (later a division of Becton, Dickinson and Company). The LINC sold for more than $40,000 at the time. A typical configuration included an enclosed 6' × 20" rack; four boxes holding (1) two tape drives, (2) display scope and input knobs, (3) control console and (4) data terminal interface; and a keyboard. The LINC interfaced well with laboratory experiments. Analog inputs and outputs were part of the basic design. It was designed in 1962 by Charles Molnar and Wesley Clark at Lincoln Laboratory, Massachusetts, for NIH researchers. The LINC's design was literally in the public domain, perhaps making it unique in the history of computers. A dozen LINC computers were assembled by their eventual biomedical researcher owners in a 1963 summer workshop at MIT. Digital Equipment Corporation (starting in 1964) and, later, Spear Inc. of Waltham, MA, manufactured them commercially. DEC's pioneer C. Gordon Bell states that the LINC project began in 1961, with first delivery in March 1962, and the machine was not formally withdrawn until December 1969. A total of 50 were built (all using DEC System Module Blocks and cabinets), most at Lincoln Labs, housing the desktop instruments in four wooden racks. The first LINC included two oscilloscope displays. Twenty-one were sold by DEC at $43,600, delivered in the Production Model design.
In these, the tall cabinet sitting behind a white Formica-covered table held two somewhat smaller metal boxes holding the same instrumentation, a Tektronix display oscilloscope over the "front panel" on the user's left, a bay for interfaces over two LINC-Tape drives on the user's right, and a chunky keyboard between them. The standard program development software (an assembler/editor) was designed by Mary Allen Wilkes; the last version was named LAP6 (LINC Assembly Program 6). Architecture The LINC had 2048 12-bit words of memory in two sections. Only the first 1024 words were usable for program execution. The second section of memory could only be used for data. Programs could use a 12-bit accumulator and a one-bit link register. The first sixteen locations in program memory had special functions. Location 0 supported the single level of subroutine call, automatically being updated with a return address on every jump instruction. The next fifteen locations could be used as index registers by one of the addressing modes. A programmable, six-bit relay register was i
https://en.wikipedia.org/wiki/Hilary%20Putnam
Hilary Whitehall Putnam (; July 31, 1926 – March 13, 2016) was an American philosopher, mathematician, and computer scientist, and a major figure in analytic philosophy in the second half of the 20th century. He made significant contributions to philosophy of mind, philosophy of language, philosophy of mathematics, and philosophy of science. Outside philosophy, Putnam contributed to mathematics and computer science. Together with Martin Davis he developed the Davis–Putnam algorithm for the Boolean satisfiability problem and he helped demonstrate the unsolvability of Hilbert's tenth problem. Putnam was known for his willingness to apply equal scrutiny to his own philosophical positions as to those of others, subjecting each position to rigorous analysis until he exposed its flaws. As a result, he acquired a reputation for frequently changing his positions. In philosophy of mind, Putnam is known for his argument against the type-identity of mental and physical states based on his hypothesis of the multiple realizability of the mental, and for the concept of functionalism, an influential theory regarding the mind–body problem. In philosophy of language, along with Saul Kripke and others, he developed the causal theory of reference, and formulated an original theory of meaning, introducing the notion of semantic externalism based on a thought experiment called Twin Earth. In philosophy of mathematics, Putnam and W. V. O. Quine developed the Quine–Putnam indispensability argument, an argument for the reality of mathematical entities, later espousing the view that mathematics is not purely logical, but "quasi-empirical". In epistemology, Putnam is known for his critique of the well-known "brain in a vat" thought experiment. This thought experiment appears to provide a powerful argument for epistemological skepticism, but Putnam challenges its coherence. 
In metaphysics, he originally espoused a position called metaphysical realism, but eventually became one of its most outspoken critics, first adopting a view he called "internal realism", which he later abandoned. Despite these changes of view, throughout his career Putnam remained committed to scientific realism, roughly the view that mature scientific theories are approximately true descriptions of ways things are. In his later work, Putnam became increasingly interested in American pragmatism, Jewish philosophy, and ethics, engaging with a wider array of philosophical traditions. He also displayed an interest in metaphilosophy, seeking to "renew philosophy" from what he identified as narrow and inflated concerns. He was at times a politically controversial figure, especially for his involvement with the Progressive Labor Party in the late 1960s and early 1970s. Life Hilary Whitehall Putnam was born on July 31, 1926, in Chicago, Illinois. His father, Samuel Putnam, was a scholar of Romance languages, columnist, and translator who wrote for the Daily Worker, a publication of th
https://en.wikipedia.org/wiki/Metropolitan%20area
A metropolitan area or metro is a region consisting of a densely populated urban agglomeration and its surrounding territories sharing industries, commercial areas, transport network, infrastructures, and housing. A metropolitan area usually comprises multiple principal cities, jurisdictions and municipalities: neighborhoods, townships, boroughs, cities, towns, exurbs, suburbs, counties, districts, and even states and nations in areas like the eurodistricts. As social, economic and political institutions have changed, metropolitan areas have become key economic and political regions. Metropolitan areas in the United States are delineated around the core of a core-based statistical area, which is defined as an urban area (distinct from the urban core) and consists of central and outlying counties; the terms central city and suburb are no longer used by the Census Bureau due to the suburbanization of employment. In other countries metropolitan areas are sometimes anchored by one central city, such as the Paris metropolitan area (Paris) or the Mumbai Metropolitan Region (Mumbai). In other cases, metropolitan areas contain multiple centers of equal or close to equal importance, especially in the United States; for example, the Dallas–Fort Worth metropolitan area has eight principal cities. The Islamabad–Rawalpindi metropolitan area (Islamabad and Rawalpindi), the Rhine-Ruhr in Germany and the Randstad in the Netherlands are other examples. In the United States, the concept of metropolitan statistical areas has gained prominence. The Greater Washington metropolitan area is an example of statistically grouping independent cities and county areas from various states to form a larger city because of proximity, history and recent urban convergence. Metropolitan areas may themselves be part of a greater megalopolis.
For urban centres located outside metropolitan areas that generate a similar attraction at a smaller scale for a region, the concept of a regiopolis and a respective regiopolitan area, or regio, was introduced by German professors in 2006. In the United States, the term micropolitan statistical area is used. Definition A metropolitan area combines an urban agglomeration with the contiguous built-up areas, which are not necessarily urban in character but are closely bound to the center by employment or other commerce. These outlying zones are sometimes known as a commuter belt and may extend well beyond the urban zone to other political entities. For example, East Hampton, New York, on Long Island is considered part of the New York metropolitan area. In practice, the parameters of metropolitan areas, in both official and unofficial usage, are not consistent. Sometimes they are little different from an urban area, and in other cases, they cover broad regions that have little relation to a single urban settlement; comparative statistics for metropolitan areas should take this into account. The term metropolitan can also refer to a c
https://en.wikipedia.org/wiki/AFS
AFS is an initialism that may refer to:
Computing
- Andrew File System, a distributed networked file system
- OpenAFS, an open source implementation of the Andrew File System
- Apple File Service, implementing the Apple Filing Protocol
- Apple File System, Apple's proprietary file system
- AtheOS File System, part of the Syllable operating system
Education
- AFS Intercultural Programs, formerly American Field Service
- Abington Friends School, in Jenkintown, Pennsylvania, United States
Military
- Army Fire Service, UK
- Air force station
Organizations
- Alternative for Sweden, a political party in Sweden
- American Folklore Society
- American Foundry Society
- Arbeitsstelle für Standardisierung (AfS), a workgroup of the German National Library (DNB)
- Association of Football Statisticians, UK
- Australian Flag Society
- Auxiliary Fire Service, UK and Ireland
Places
- Afs, Idlib, a Syrian village
- Ashford railway station (Surrey) (station code: AFS), Middlesex, UK
- South Africa, ITU country code
Other
- Advanced front-lighting system (AFS) or Adaptive Front-lighting System, for automotive headlamps
- Aeronautical fixed service, for air navigation
- Afghan afghani, unit of currency
- Afro-Seminole Creole language (ISO 639-3: afs)
- AFS Trinity, a US company
- Allergic fungal sinusitis
- Alternative financial service
- Atomic fluorescence spectroscopy
- Available for sale, an accounting term
- International Convention on the Control of Harmful Anti-fouling Systems on Ships, 2001
- "AFS", a song by Natanael Cano from Nata Montana, 2023
- Nikon AF-S, a type of Nikon F-mount lens
https://en.wikipedia.org/wiki/Windows%2098
Windows 98 is a consumer-oriented operating system developed by Microsoft as part of its Windows 9x family of Microsoft Windows operating systems. The second operating system in the 9x line, it is the successor to Windows 95, and was released to manufacturing on May 15, 1998, and generally to retail on June 25, 1998. Like its predecessor, it is a hybrid 16-bit and 32-bit monolithic product with the boot stage based on MS-DOS. Windows 98 is a web-integrated operating system that bears numerous similarities to its predecessor. Most of its improvements were cosmetic or designed to improve the user experience, but there were also a handful of features introduced to enhance system functionality and capabilities, including improved USB support and accessibility, as well as support for hardware advancements such as DVD players. Windows 98 was the first edition of Windows to adopt the Windows Driver Model, and introduced features that would become standard in future generations of Windows, such as Disk Cleanup, Windows Update, multi-monitor support, and Internet Connection Sharing. Microsoft had marketed Windows 98 as a "tune-up" to Windows 95, rather than an entirely improved next generation of Windows. Upon release, it received a positive reception for its web-integrated interface and ease of use, as well as its addressing of issues present in Windows 95, although some pointed out that it was not significantly more stable than its predecessor. Windows 98 sold an estimated 58 million licenses and saw one major update, known as Windows 98 Second Edition (SE), released on May 5, 1999. After the release of its successor, Windows Me in 2000, mainstream support for Windows 98 and 98 SE ended on June 30, 2002, followed by extended support on July 11, 2006. Development Following the success of Windows 95, the development of Windows 98 began, initially under the development codename "Memphis." 
The first test version, Windows Memphis Developer Release, was released in January 1997. Memphis first entered beta as Windows Memphis Beta 1, released on June 30, 1997. It was followed by Windows 98 Beta 2, which dropped the Memphis name and was released in July. Microsoft had planned a full release of Windows 98 for the first quarter of 1998, along with a Windows 98 upgrade pack for Windows 95, but it also had a similar upgrade for Windows 3.x operating systems planned for the second quarter. Stacey Breyfogle, a product manager for Microsoft, explained that the Windows 3.x upgrade was scheduled later because it faced more compatibility issues and therefore required more testing than the Windows 95 upgrade; after users raised no objections, Microsoft merged the two upgrade packs into one and moved the combined release to the second quarter. On December 15, Microsoft released Windows 98 Beta 3. It was the first build to be able to upgrade from Windows 3.1x, and introduced new startup and shutdown sounds. Near its completion, Windows 98 was released as
https://en.wikipedia.org/wiki/Windows%20Me
Windows Millennium Edition, or Windows Me (marketed with the pronunciation of the pronoun "me"), often capitalized as Windows ME, is an operating system developed by Microsoft as part of its Windows 9x family of Microsoft Windows operating systems. It was officially codenamed Millennium. It is the successor to Windows 98, and was released to manufacturing on June 19, 2000, and then to retail on September 14, 2000. It was Microsoft's main operating system for home users until the introduction of its successor Windows XP in October 2001. Windows Me was targeted specifically at home PC users, and included Internet Explorer 5.5 (later default was Internet Explorer 6), Windows Media Player 7 (later default was Windows Media Player 9 Series) and the new Windows Movie Maker software, which provided basic video editing and was designed to be easy to use for consumers. Microsoft also incorporated features first introduced in Windows 2000, which had been released as a business-oriented operating system seven months earlier, into the graphical user interface, shell and Windows Explorer. Although Windows Me was still ultimately based around MS-DOS like its predecessors, access to real-mode DOS was restricted to decrease system boot time. Windows Me initially received a generally positive reception when it was released; however, it soon garnered a very negative reception from many users due to stability problems. Windows Me became infamously known by many as one of the worst versions of Windows ever released, unfavorably compared with its predecessor, Windows 98, released two years earlier. In October 2001, Windows XP was released to the public, having already been under development at the time of Windows Me's release, and incorporated most, but not all, of the content of Windows Me, while being far more stable because it was based on the Windows NT kernel.
After the release of Windows XP in 2001, mainstream support for Windows Me ended on December 31, 2003, followed by extended support on July 11, 2006. Development history At the 1998 Windows Hardware Engineering Conference, Microsoft CEO Bill Gates stated that Windows 98 would be the last iteration of Windows to use the Windows 9x kernel, with the intention for the next consumer-focused version to be based on the Windows NT kernel, unifying the two branches of Windows. However, it soon became apparent that the development work involved was too great to meet the aim of releasing before the end of 2000, particularly given the ongoing parallel work on the eventually-canceled Neptune project. The Consumer Windows development team was therefore re-tasked with improving Windows 98 while porting some of the look-and-feel from Windows 2000. Microsoft President Steve Ballmer publicly announced these changes at the next WinHEC in 1999. On July 23, 1999, the first alpha version of Windows Me was released to testers. Known as Development Preview 1, it was very similar to Windows 98 SE, with the only
https://en.wikipedia.org/wiki/Abstract%20syntax
In computer science, the abstract syntax of data is its structure described as a data type (possibly, but not necessarily, an abstract data type), independent of any particular representation or encoding. This is particularly used in the representation of text in computer languages, which are generally stored in a tree structure as an abstract syntax tree. Abstract syntax, which only consists of the structure of data, is contrasted with concrete syntax, which also includes information about the representation. For example, concrete syntax includes features like parentheses (for grouping) or commas (for lists) which are not included in the abstract syntax, as they are implicit in the structure. Abstract syntaxes are classified as first-order abstract syntax (FOAS) if the structure is abstract but names (identifiers) are still concrete (and thus require name resolution), and higher-order abstract syntax if the names themselves are abstract. Uses To be implemented either for computation or communications, a mapping from the abstract syntax to specific machine representations and encodings must be defined; these may be called the "concrete syntax" (in language implementation) or the "transfer syntax" (in communications). A compiler's internal representation of a program will typically be specified by an abstract syntax in terms of categories such as "statement", "expression" and "identifier". This is independent of the source syntax (concrete syntax) of the language being compiled (though it will often be very similar). A parse tree is similar to an abstract syntax tree but it will typically also contain features such as parentheses which are syntactically significant but which are implicit in the structure of the abstract syntax tree. Algebraic data types are particularly well-suited to the implementation of abstract syntax.
See also Higher-order abstract syntax Abstract Syntax Notation One References Programming language design Programming language theory Compiler construction Syntax Parsing
https://en.wikipedia.org/wiki/ASN.1
Abstract Syntax Notation One (ASN.1) is a standard interface description language (IDL) for defining data structures that can be serialized and deserialized in a cross-platform way. It is broadly used in telecommunications and computer networking, and especially in cryptography. Protocol developers define data structures in ASN.1 modules, which are generally a section of a broader standards document written in the ASN.1 language. The advantage is that the ASN.1 description of the data encoding is independent of a particular computer or programming language. Because ASN.1 is both human-readable and machine-readable, an ASN.1 compiler can compile modules into libraries of code (codecs) that decode or encode the data structures. Some ASN.1 compilers can produce code to encode or decode several encodings, e.g. packed encoding rules, BER, or XML. ASN.1 is a joint standard of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) in ITU-T Study Group 17 and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC), originally defined in 1984 as part of CCITT X.409:1984. In 1988, ASN.1 moved to its own standard, X.208, due to wide applicability. The substantially revised 1995 version is covered by the X.680 series. The latest revision of the X.680 series of recommendations is the 6.0 Edition, published in 2021. Language support ASN.1 is a data type declaration notation. It does not define how to manipulate a variable of such a type. Manipulation of variables is defined in other languages such as SDL (Specification and Description Language) for executable modeling or TTCN-3 (Testing and Test Control Notation) for conformance testing. Both these languages natively support ASN.1 declarations. It is possible to import an ASN.1 module and declare a variable of any of the ASN.1 types declared in the module. Applications ASN.1 is used to define a large number of protocols.
Its most extensive uses continue to be telecommunications, cryptography, and biometrics. Encodings ASN.1 is closely associated with a set of encoding rules that specify how to represent a data structure as a series of bytes. The standard ASN.1 encoding rules include: Encoding Control Notation ASN.1 recommendations provide a number of predefined encoding rules. If none of the existing encoding rules are suitable, the Encoding Control Notation (ECN) provides a way for a user to define his or her own customized encoding rules. Relation to Privacy-Enhanced Mail (PEM) Encoding Privacy-Enhanced Mail (PEM) encoding is entirely unrelated to ASN.1 and its codecs, but encoded ASN.1 data, which is often binary, is often PEM-encoded so that it can be transmitted as textual data, e.g. over SMTP relays, or through copy/paste buffers. Example This is an example ASN.1 module defining the messages (data structures) of a fictitious Foo Protocol: FooProtocol DEFINITIONS ::= BEGIN FooQuestion ::= SEQUENCE { trackingNumber INT
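As a sketch of what encoding rules do, the following hand-rolls DER (one member of the BER family) for INTEGER and SEQUENCE values; real systems would use an ASN.1 compiler or a library such as pyasn1 rather than code like this, and only short-form lengths are handled here:

```python
def der_length(n: int) -> bytes:
    # Definite short-form length octet; enough for this sketch (n < 128).
    assert n < 128
    return bytes([n])

def der_integer(value: int) -> bytes:
    # Tag 0x02, then the minimal big-endian two's-complement content octets.
    length = 1
    while True:
        try:
            body = value.to_bytes(length, "big", signed=True)
            break
        except OverflowError:
            length += 1
    return b"\x02" + der_length(len(body)) + body

def der_sequence(*members: bytes) -> bytes:
    # Tag 0x30 (constructed SEQUENCE) wrapping already-encoded members.
    body = b"".join(members)
    return b"\x30" + der_length(len(body)) + body

# A SEQUENCE holding a single INTEGER field (as in the truncated
# FooQuestion example above) encodes as nested tag-length-value triples:
encoded = der_sequence(der_integer(5))
print(encoded.hex())  # 3003020105
```

The tag-length-value shape is the essence of BER/DER; the other encoding rules (PER, XER, etc.) trade this self-describing layout for compactness or XML readability.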
https://en.wikipedia.org/wiki/Abstract%20syntax%20tree
In computer science, an abstract syntax tree (AST), or just syntax tree, is a tree representation of the abstract syntactic structure of text (often source code) written in a formal language. Each node of the tree denotes a construct occurring in the text. The syntax is "abstract" in the sense that it does not represent every detail appearing in the real syntax, but rather just the structural or content-related details. For instance, grouping parentheses are implicit in the tree structure, so these do not have to be represented as separate nodes. Likewise, a syntactic construct like an if-condition-then statement may be denoted by means of a single node with three branches. This distinguishes abstract syntax trees from concrete syntax trees, traditionally designated parse trees. Parse trees are typically built by a parser during the source code translation and compiling process. Once built, additional information is added to the AST by means of subsequent processing, e.g., contextual analysis. Abstract syntax trees are also used in program analysis and program transformation systems. Application in compilers Abstract syntax trees are data structures widely used in compilers to represent the structure of program code. An AST is usually the result of the syntax analysis phase of a compiler. It often serves as an intermediate representation of the program through several stages that the compiler requires, and has a strong impact on the final output of the compiler. Motivation An AST has several properties that aid the further steps of the compilation process: An AST can be edited and enhanced with information such as properties and annotations for every element it contains. Such editing and annotation is impossible with the source code of a program, since it would imply changing it. Compared to the source code, an AST does not include inessential punctuation and delimiters (braces, semicolons, parentheses, etc.). 
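Python's standard `ast` module makes the parenthesis point concrete: parsing an expression leaves no node for the grouping parentheses, only the nesting of operator nodes, and unparsing reintroduces exactly the parentheses the structure implies. The expression string here is just an example:

```python
import ast

# Parse an expression; the grouping parentheses around "a + b"
# leave no node of their own, only the nesting of BinOp nodes.
tree = ast.parse("(a + b) * c", mode="eval")
expr = tree.body

print(type(expr).__name__, type(expr.op).__name__)            # BinOp Mult
print(type(expr.left).__name__, type(expr.left.op).__name__)  # BinOp Add

# Unparsing regenerates only the parentheses required by precedence:
print(ast.unparse(tree))  # (a + b) * c
```

The same tree would result from parsing "((a + b)) * c", illustrating that the AST records structure, not the exact source text.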
An AST usually contains extra information about the program, due to the consecutive stages of analysis by the compiler. For example, it may store the position of each element in the source code, allowing the compiler to print useful error messages. ASTs are needed because of the inherent nature of programming languages and their documentation. Languages are often ambiguous by nature. In order to avoid this ambiguity, programming languages are often specified as a context-free grammar (CFG). However, there are often aspects of programming languages that a CFG can't express, but are part of the language and are documented in its specification. These are details that require a context to determine their validity and behaviour. For example, if a language allows new types to be declared, a CFG cannot predict the names of such types nor the way in which they should be used. Even if a language has a predefined set of types, enforcing proper usage usually requires some context. Another example is duck typing, where the type of an element
https://en.wikipedia.org/wiki/AST
AST, Ast, or ast may refer to: Science and technology Attention schema theory, of consciousness or subjective awareness Computing Abstract syntax tree, a finite, labeled, directed tree used in computer science Anamorphic stretch transform, a physics-inspired signal transform Andrew S. Tanenbaum (born 1944), American-Dutch computer scientist, sometimes identified as "ast" Asynchronous System Trap, a mechanism used in several computer operating systems Application Security Testing, testing the operation and security of a software application Application Support Team, a team of Application Support Analysts supporting IT services within an organisation Medicine Aspartate transaminase, an enzyme associated with liver parenchymal cells Mathematics Alternative set theory Alternating series test Organizations Allied Security Trust, a patent holding company AST Research, a defunct personal computer manufacturer Association for Software Testing, US AST (publisher), a Russian book publishing company Arsenal Supporters' Trust, English football club supporters Politics and government Abwehrstelle or Ast, the local intelligence center in each military district in Nazi Germany Alaska State Troopers, state police agency in Alaska, US Chadian Social Action, a former political party in Chad Office of Commercial Space Transportation, of the US FAA Education ACT Scaling Test, for year 12 students, Australia Advanced Skills Teacher, a teaching role in England and Wales Air Service Training, a flight engineering training school American School in Taichung, an international school in Taichung, Taiwan American School of Tangier American School of Tegucigalpa American School of Tripoli Atlantic School of Theology, Halifax, Nova Scotia, Canada Avalanche Skills Training, Canada Conroe ISD Academy of Science and Technology, Texas, US Transport Aldershot GO Station (Amtrak station code AST), Ontario, Canada Astoria Regional Airport (FAA Identifier and IATA code AST), Clatsop County, Oregon, US Aston railway station (National Rail code AST), Birmingham, England Time zones Atlantic Standard Time, UTC−4 Alaska Standard Time, UTC−9 Arabia Standard Time, UTC+3 Antigua and Barbuda Time, UTC−4 Other uses Aviation Survival Technician, a US Coast Guard rating AST Research, Inc., also known as AST Computer, a personal computer manufacturer Asti, Italian province and city (traditionally Ast in the Piemontese dialect) Assured shorthold tenancy, UK Asturian language (ISO 639 alpha-3: ast), Spain Astana Pro Team (UCI code AST), a professional road bicycle racing team Pat Ast, American actress and model (1941-2001)