https://en.wikipedia.org/wiki/Transport%20in%20Portugal
Transport in Portugal is diversified. Portugal has an extensive network of roads, part of which belongs to a system of 44 motorways. Brisa is the largest highway management concessionaire. Within its 89,015 km2, Continental Portugal has four international airports, located near Lisbon, Porto, Faro and Beja. The national railway service is provided by Comboios de Portugal. The major seaports are located in Leixões, Aveiro, Figueira da Foz, Lisbon, Setúbal, Sines and Faro. Roads In 1972, Brisa was commissioned to construct a network of roadways by the end of 1981. The first priority was a highway designated as A1, a stretch reaching from the capital of Lisbon north to Porto, Portugal's second-largest city. This highway would become a crucial link to the industrial activity in the north of the country and experience the highest traffic volumes in Brisa's network. Construction also began on the A2, which was projected to reach from Lisbon to resort areas on the southern coast. Two years after the establishment of Brisa, the right-wing dictatorship was overthrown by a leftist revolution. The new regime included Brisa in a program of nationalization, first taking control of 40 percent of the company and eventually gaining a 90 percent share. Road construction continued stretch by stretch under socialist control. As the first highway sections were completed on the A1 and A2, the government concession was expanded to include adjoining stretches. In addition, concessions were granted for expansions to the network: the A3 would extend the north–south highway from Porto up to the Spanish border, the A4 would reach east from Porto to the city of Amarante, and the A5 was to reach west from Lisbon to the coast. However, during the first years of democratic government, the combined length of the network remained limited through the 1980s. Transportation was seen as a priority in the 1990s, pushed by the growing use of automobiles and mass consumption.
In 1985, a new government led by the center-right Social Democrats, headed by Prime Minister Aníbal Cavaco Silva, came to power in Portugal and began loosening the state's control over economic activity. After years of slow progress, the government began an extensive investment program to bring the transportation infrastructure up to date. While some funds were earmarked for railroad and subway companies, the largest share went to highways. Brisa received a direct capital injection of PTE 17.7 billion in 1990. The investment was urgently needed, since traffic volume in Portugal was growing at a faster rate than in any other country in the European Union. Average daily traffic volume increased at a rate about 4.5 percent higher than the gross domestic product each year between 1990 and 1996. The government kept up its intensive program of annual investments, allowing Brisa's network to grow considerably between 1990 and 1995. Railways National rail system The principal train operator in Portugal is Comboios de Portugal. Rail infrastructure is maintai
https://en.wikipedia.org/wiki/Transport%20in%20Yemen
As a direct consequence of the country's poverty, Yemen compares unfavorably with its Middle Eastern neighbors in terms of transportation infrastructure and communications network. The roads are generally poor, although several projects are planned to upgrade the system. There is no rail network, efforts to upgrade airport facilities have languished, and telephone and Internet usage and capabilities are limited. The Port of Aden has shown a promising recovery from a 2002 attack; container throughput increased significantly in 2004 and 2005. However, the expected imposition of higher insurance premiums for shippers in 2006 may result in reduced future throughput. The announcement in summer 2005 that the port's main facility, Aden Container Terminal, would for the next 30 or more years be run by Dubai Ports International brings with it the prospect of future expansion. Roads Considering Yemen's size, its road transportation system is extremely limited. Yemen has 71,300 kilometers of roads, only 6,200 kilometers of which are paved. In the north, roads connecting Sanaa, Taizz, and Al Hudaydah are in good condition, as is the intercity bus system. In the south, on the other hand, roads are in need of repair, except for the Aden–Taizz road. In November 2005, the World Bank approved a US$40 million project to upgrade 200 kilometers of intermediate rural roads and 75 kilometers of village-access roads as part of a larger effort to strengthen Yemen's rural-road planning and engineering capabilities. Plans are underway to build an estimated US$1.6 billion highway linking Aden (in the south) and Amran (in the north). The road will include more than 10 tunnels and halve the travel time between the southern coast and the northern border with Saudi Arabia. Travel by road in Yemen is often unsafe. Within cities, minivans and small buses ply somewhat regular routes, picking up and dropping off passengers with little regard for other vehicles. 
Taxis and public transportation are available but often lack safety precautions. Despite the presence of traffic lights and traffic policemen, the U.S. Embassy advises drivers to exercise extreme caution, especially at intersections. While traffic laws do exist, they are not always enforced. Drivers sometimes drive on the left side of the road, although right-hand driving is specified by Yemeni law. No laws mandate the use of seat belts or car seats for children. The maximum speed for private cars is 100 kilometers per hour (62.5 miles per hour), but speed limits are rarely enforced. Furthermore, there are many underage drivers in Yemen. Many vehicles are in poor repair and lack basic parts such as functional turn signals, headlights, and taillights. Pedestrians, especially children, and animals are a hazard in both rural and urban areas. Beyond main intercity roads, which are usually paved, the rural roads generally necessitate four-wheel-drive vehicles or vehicles with high clearance. The British government has a cl
https://en.wikipedia.org/wiki/Fitts%27s%20law
Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. The law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. It was initially developed by Paul Fitts. Fitts's law has been shown to apply under a variety of conditions, with many different limbs (hands, feet, the lower lip, head-mounted sights), manipulanda (input devices), physical environments (including underwater), and user populations (young, old, special educational needs, and drugged participants). Original model formulation The original 1954 paper by Paul Morris Fitts proposed a metric to quantify the difficulty of a target selection task. The metric was based on an information analogy, where the distance to the center of the target (D) is like a signal and the tolerance or width of the target (W) is like noise. The metric is Fitts's index of difficulty (ID, in bits): ID = log2(2D/W). Fitts also proposed an index of performance (IP, in bits per second) as a measure of human performance. The metric combines a task's index of difficulty (ID) with the movement time (MT, in seconds) in selecting the target. In Fitts's words, "The average rate of information generated by a series of movements is the average information per movement divided by the time per movement." Thus, IP = ID/MT. Today, IP is more commonly called throughput (TP). It is also common to include an adjustment for accuracy in the calculation. Researchers after Fitts began the practice of building linear regression equations and examining the correlation (r) for goodness of fit.
The equation expresses the relationship between MT and the D and W task parameters: MT = a + b·ID = a + b·log2(2D/W), where: MT is the average time to complete the movement; a and b are constants that depend on the choice of input device and are usually determined empirically by regression analysis (a defines the intercept on the y axis and is often interpreted as a delay, while b is a slope describing the time added per bit of difficulty; both parameters show the linear dependency in Fitts's law); ID is the index of difficulty; D is the distance from the starting point to the center of the target; and W is the width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within ±W/2 of the target's center. Since shorter movement times are desirable for a given task, the value of the b parameter can be used as a metric when comparing computer pointing devices against one another. The first human–computer interface application of Fitts's law was by Card, English, and Burr, who used the index of performance (IP), inter
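The model above can be sketched in a few lines of Python. The constants a and b below are illustrative placeholders, since in practice they are fit to measured data by regression:

```python
import math

def index_of_difficulty(D, W):
    """Fitts's original index of difficulty, ID = log2(2D/W), in bits."""
    return math.log2(2 * D / W)

def movement_time(D, W, a=0.1, b=0.15):
    """Predicted movement time MT = a + b * ID, in seconds.

    a and b are hypothetical device constants, normally found by
    linear regression over observed (ID, MT) pairs.
    """
    return a + b * index_of_difficulty(D, W)

def throughput(D, W, MT):
    """Index of performance (throughput), TP = ID / MT, in bits/second."""
    return index_of_difficulty(D, W) / MT

# A target 4 units away and 1 unit wide has ID = log2(8) = 3 bits.
print(index_of_difficulty(4, 1))  # 3.0
```

A device with a smaller fitted b reaches difficult targets with a smaller time penalty per bit, which is why b is the usual basis for comparing pointing devices.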
https://en.wikipedia.org/wiki/United%20States%20Census%20Bureau
The United States Census Bureau (USCB), officially the Bureau of the Census, is a principal agency of the U.S. Federal Statistical System, responsible for producing data about the American people and economy. The Census Bureau is part of the U.S. Department of Commerce, and its director is appointed by the President of the United States. The Census Bureau's primary mission is conducting the U.S. census every ten years, which allocates the seats of the U.S. House of Representatives to the states based on their population. The bureau's various censuses and surveys help allocate over $675 billion in federal funds every year, and they assist states, local communities, and businesses in making informed decisions. The information provided by the census informs decisions on where to build and maintain schools, hospitals, transportation infrastructure, and police and fire departments. In addition to the decennial census, the Census Bureau continually conducts over 130 surveys and programs a year, including the American Community Survey, the U.S. Economic Census, and the Current Population Survey. Furthermore, economic and foreign trade indicators released by the federal government typically contain data produced by the Census Bureau. Legal mandate Article One of the United States Constitution (Section II) directs that the population be enumerated at least once every ten years and that the resulting counts be used to set the number of members from each state in the House of Representatives and, by extension, in the Electoral College. The Census Bureau now conducts a full population count every ten years in years ending with a zero and uses the term "decennial" to describe the operation. Between censuses, the Census Bureau makes population estimates and projections. In addition, census data directly affect how more than $400 billion per year in federal and state funding is allocated to communities for neighborhood improvements, public health, education, transportation and more.
The Census Bureau is mandated with fulfilling these obligations: collecting statistics about the nation, its people, and its economy. The Census Bureau's legal authority is codified in Title 13 of the United States Code. The Census Bureau also conducts surveys on behalf of various federal government and local government agencies on topics such as employment, crime, health, consumer expenditures, and housing. Within the bureau, these are known as "demographic surveys" and are conducted perpetually between and during decennial (10-year) population counts. The Census Bureau also conducts economic surveys of manufacturing, retail, service, and other establishments and of domestic governments. Between 1790 and 1840, the census was taken by marshals of the judicial districts. The Census Act of 1840 established a central office which became known as the Census Office. Several acts followed that revised and authorized new censuses, typically at 10-year intervals. In 1902, the temporary Census
https://en.wikipedia.org/wiki/Middleware%20%28distributed%20applications%29
Middleware in the context of distributed applications is software that provides services beyond those provided by the operating system to enable the various components of a distributed system to communicate and manage data. Middleware supports and simplifies complex distributed applications. It includes web servers, application servers, messaging and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture. Middleware often enables interoperability between applications that run on different operating systems, by supplying services so the applications can exchange data in a standards-based way. Middleware sits "in the middle" between application software that may be working on different operating systems. It is similar to the middle layer of a three-tier single-system architecture, except that it is stretched across multiple systems or applications. Examples include EAI software, telecommunications software, transaction monitors, and messaging-and-queueing software. The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included in virtually every operating system. Definitions Middleware is defined as software that provides a link between separate software applications. It is sometimes referred to as plumbing because it connects two applications and passes data between them. Middleware allows data contained in one database to be accessed through another. This makes it particularly useful for enterprise application integration and data integration tasks.
In more abstract terms, middleware is "The software layer that lies between the operating system and applications on each side of a distributed computing system in a network." Origins Middleware is a relatively new addition to the computing landscape. It gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968 (Nick Gall, July 30, 2005, http://ironick.typepad.com/ironick/2005/07/update_on_the_o.html). It also facilitated distributed processing, the connection of multiple applications to create a larger application, usually over a network. Use Middleware services provide a more functional set of application programming interfaces that allow an application to: locate services transparently across the network, thus providing interaction with another service or application; filter data to make it usable or publishable, for example via an anonymization process for privacy protection; be i
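The "plumbing" role described above, including the anonymization-style filtering just mentioned, can be sketched as a toy in-process message bus in Python. The class and field names here are invented for illustration; real messaging middleware runs across processes and machines:

```python
import queue

def anonymize(record):
    # Hypothetical privacy filter: strip a direct identifier before delivery.
    cleaned = dict(record)
    cleaned.pop("name", None)
    return cleaned

class MessageBus:
    """Toy messaging middleware: producers publish records, consumers
    receive filtered copies, and neither side knows about the other."""

    def __init__(self, filters=()):
        self._q = queue.Queue()
        self._filters = list(filters)

    def publish(self, record):
        self._q.put(record)

    def consume(self):
        record = self._q.get()
        for f in self._filters:   # middleware applies its services in transit
            record = f(record)
        return record

bus = MessageBus(filters=[anonymize])
bus.publish({"name": "Ada", "purchase": 42})
print(bus.consume())  # {'purchase': 42}
```

The point of the sketch is the decoupling: the producer and consumer share only the bus, so a filter (or a new transport) can be swapped in without touching either application.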
https://en.wikipedia.org/wiki/TGV
The TGV (Train à Grande Vitesse, "high-speed train") is France's intercity high-speed rail service, operated by SNCF. SNCF worked on a high-speed rail network from 1966 to 1974 and presented the project to President Georges Pompidou, who approved it. Originally designed as turbotrains to be powered by gas turbines, TGV prototypes evolved into electric trains with the 1973 oil crisis. In 1976 the SNCF ordered 87 high-speed trains from Alstom. Following the inaugural service between Paris and Lyon in 1981 on the LGV Sud-Est (LGV for Ligne à Grande Vitesse; "high-speed line"), the network, centered on Paris, has expanded to connect major cities across France (including Marseille, Lille, Bordeaux, Strasbourg, Rennes and Montpellier) and in neighbouring countries on a combination of high-speed and conventional lines. The TGV network in France carries about 110 million passengers a year. The high-speed tracks, maintained by SNCF Réseau, are subject to heavy regulation. Confronted with the fact that train drivers would not be able to see trackside signals when trains reach full speed, engineers developed the TVM cab-signalling technology, which would later also see use on limited routes within Belgium, the UK, and Korea. It allows a train engaging in emergency braking to signal, within seconds, all following trains to reduce their speed; if a driver does not react, the system overrides the controls and reduces the train's speed automatically. The TVM safety mechanism enables TGVs using the same line to depart every three minutes. A specially modified TGV high-speed train known as Project V150, weighing only 265 tonnes, set the world record for the fastest wheeled train, reaching 574.8 km/h during a test run on 3 April 2007. Standard TGV trains used for commercial services have maximum operating speeds of 320 km/h on the LGV Est, LGV Rhin-Rhône and LGV Méditerranée.
In 2007, the world's fastest scheduled rail journey, measured by start-to-stop average speed, ran between the Gare de Champagne-Ardenne and the Gare de Lorraine on the LGV Est; it was not surpassed until 2013, when a faster average was reported for an express service on the Shijiazhuang to Zhengzhou segment of China's Shijiazhuang–Wuhan high-speed railway. The TGV was conceived in the same period as other technological projects sponsored by the Government of France, including the Ariane 1 rocket and the Concorde supersonic airliner; those funding programmes were known as champion national ("national champion") policies. The commercial success of the first high-speed line led to a rapid development of services to the south (LGV Rhône-Alpes, LGV Méditerranée, LGV Nîmes–Montpellier), west (LGV Atlantique, LGV Bretagne-Pays de la Loire, LGV Sud Europe Atlantique), north (LGV Nord, LGV Interconnexion Est) and east (LGV Rhin-Rhône, LGV Est). Neighbouring countries Italy, Spain and Germany developed their own high-speed rail services. The TGV system itself extends to neighbouring countries, either directly (Italy, Sp
https://en.wikipedia.org/wiki/Adi%20Shamir
Adi Shamir (born July 6, 1952) is an Israeli cryptographer and inventor. He is a co-inventor of the Rivest–Shamir–Adleman (RSA) algorithm (along with Ron Rivest and Len Adleman), a co-inventor of the Feige–Fiat–Shamir identification scheme (along with Uriel Feige and Amos Fiat), and one of the inventors of differential cryptanalysis, and he has made numerous contributions to the fields of cryptography and computer science. Biography Adi Shamir was born in Tel Aviv. He received a Bachelor of Science (BSc) degree in mathematics from Tel Aviv University in 1973 and obtained an MSc and PhD in computer science from the Weizmann Institute in 1975 and 1977 respectively. He spent a year as a postdoctoral researcher at the University of Warwick and did research at the Massachusetts Institute of Technology (MIT) from 1977 to 1980. Scientific career In 1980, he returned to Israel, joining the faculty of Mathematics and Computer Science at the Weizmann Institute. Since 2006, he has also been an invited professor at the École Normale Supérieure in Paris. In addition to RSA, Shamir's other numerous inventions and contributions to cryptography include the Shamir secret sharing scheme, the breaking of the Merkle–Hellman knapsack cryptosystem, visual cryptography, and the TWIRL and TWINKLE factoring devices. Together with Eli Biham, he discovered differential cryptanalysis in the late 1980s, a general method for attacking block ciphers. It later emerged that differential cryptanalysis was already known, and kept secret, by both IBM and the National Security Agency (NSA). Shamir has also made contributions to computer science outside of cryptography, such as finding the first linear time algorithm for 2-satisfiability and showing the equivalence of the complexity classes PSPACE and IP.
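As a brief illustration of one invention mentioned above, Shamir's secret sharing hides a secret as the constant term of a random polynomial over a finite field; any k of the n distributed shares recover it by Lagrange interpolation at x = 0. This is a minimal sketch, not a hardened implementation (the prime and parameters are chosen for illustration only):

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field must exceed the secret

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(12345, k=3, n=5)
assert recover(shares[:3]) == 12345   # any 3 of the 5 shares suffice
```

With fewer than k shares, every candidate secret remains equally consistent with the data, which is what makes the scheme information-theoretically secure.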
Awards and recognition
2002 ACM Turing Award, together with Rivest and Adleman, in recognition of his contributions to cryptography
Paris Kanellakis Theory and Practice Award
Erdős Prize of the Israel Mathematical Society, 1986
IEEE W.R.G. Baker Award
UAP Scientific Prize
Vatican's Pius XI Gold Medal, 2000
IEEE Koji Kobayashi Computers and Communications Award
Israel Prize, in 2008, for computer sciences
Honorary DMath (Doctor of Mathematics) degree from the University of Waterloo
2017 (33rd) Japan Prize in the field of Electronics, Information and Communication, for his contribution to information security through pioneering research on cryptography
Foreign Member of the Royal Society (ForMemRS) in 2018, for substantial contribution to the improvement of natural knowledge
He was elected a Member of the American Philosophical Society in 2019.
https://en.wikipedia.org/wiki/PILOT
Programmed Inquiry, Learning, or Teaching (PILOT) is a simple high-level programming language developed in the 1960s. Like its younger sibling LOGO, it was an early foray into the technology of computer-assisted instruction. PILOT is an imperative language similar to BASIC and FORTRAN in its basic layout and structure. Its keywords are single characters, such as T ("type") to print text or A ("accept") to input values from the user. History PILOT was developed by John Amsden Starkweather, a psychology professor at the University of California, San Francisco medical center. In 1962, he developed a simple language for automating learning tests called Computest. Starting in 1968, he developed a follow-on project called PILOT, for various computers of the time such as the SDS 940. Language syntax A line of PILOT code contains (from left to right) the following syntax elements: an optional label; a command letter; an optional Y (for yes) or N (for no); an optional conditional expression in parentheses; a colon (":"); and an operand, or multiple operands delimited by commas. A label can also stand alone on a line, not followed by other code. The syntax for a label is an asterisk followed by an identifier (an alphanumeric string with an alphabetic initial character). Command letters The following commands are used in "core PILOT". Lines beginning with "R:" indicate a remark (or a comment) explaining the code that follows.

A Accept input into the "accept buffer". Examples:

R:Next line of input replaces current contents of accept buffer
A:
R:Next line of input replaces accept buffer, and string variable 'FREE'
A:$FREE
R:Next 3 lines of input assigned to string variables 'X', 'Y' and 'Z'
A:$X,$Y,$Z
R:Numeric input assigned to numeric variable "Q"
A:#Q

C Compute and assign a numeric value. Most PILOT implementations have only integer arithmetic, and no arrays. Example:

R:Assign arithmetic mean of #X and #Y to #AM
C:#AM=(#X+#Y)/2

D Dimension an array, on some implementations.
E End (return from) a subroutine or, if outside of a subroutine, abort the program. Always used without an operand.

J Jump to a label. Example:

J:*RESTART

M Match the accept buffer against string variables or string literals. Example:

R:Search accept buffer for "TRUTH", the value of $MEXICO and "YOUTH", in that order
M:TRUTH,$MEXICO,YOUTH

The first match string (if any) that is a substring of the accept buffer is assigned to the special variable $MATCH. The buffer characters left of the first match are assigned to $LEFT, and the characters on the right are assigned to $RIGHT. The match flag is set to 'yes' or 'no', depending on whether a match is made. Any statement that has a Y following the command letter is processed only if the match flag is set. Statements with N are processed only if the flag is not set.

N Equivalent to TN: (type if last match unsuccessful)

R The operand of R: is a comment, and therefore has no effect.

T 'Ty
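The line grammar described under "Language syntax" can be illustrated with a small Python parser. This is a hypothetical sketch covering the common case only (it omits, for example, a label standing alone on a line):

```python
import re

# Sketch of the PILOT line grammar described above:
#   [*label] command-letter [Y|N] [(condition)] : operands
LINE = re.compile(
    r"^(?:\*(?P<label>\w+)\s+)?"   # optional label, e.g. *RESTART
    r"(?P<cmd>[A-Z])"              # single command letter, e.g. T, A, M
    r"(?P<yn>[YN])?"               # optional yes/no conditioner
    r"(?:\((?P<cond>[^)]*)\))?"    # optional conditional expression
    r":(?P<operand>.*)$"           # colon, then the operand(s)
)

# "Type 'Correct!' if the last match succeeded and #Q equals 1."
m = LINE.match("TY(#Q=1):Correct!")
print(m.group("cmd"), m.group("yn"), m.group("cond"), m.group("operand"))
# T Y #Q=1 Correct!
```

Feeding each of the example lines above (such as `A:$FREE` or `J:*RESTART`) through `LINE.match` splits them into the same five slots, which is essentially what a PILOT interpreter's front end has to do per line.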
https://en.wikipedia.org/wiki/SISAL
SISAL (Streams and Iteration in a Single Assignment Language) is a general-purpose single assignment functional programming language with strict semantics, implicit parallelism, and efficient array handling. SISAL outputs a dataflow graph in Intermediary Form 1 (IF1). It was derived from VAL (Value-oriented Algorithmic Language, designed by Jack Dennis), adding recursion and finite streams. It has a Pascal-like syntax and was designed to be a common high-level language for numerical programs on a variety of multiprocessors. History SISAL was defined in 1983 by James McGraw et al. at the University of Manchester, LLNL, Colorado State University and DEC. It was revised in 1985, and the first compiled implementation was made in 1986. According to some sources, its performance is superior to C and rivals Fortran, combined with efficient and automatic parallelization. SISAL's name came from grepping "sal" (for "Single Assignment Language") in the Unix dictionary /usr/dict/words. Versions exist for the Cray X-MP, Y-MP and 2; Sequent, Encore, Alliant, the DEC VAX-11/784, dataflow architectures, the KSR1, Transputers and systolic arrays. Architecture The requirements for a fine-grain parallelism language are better met with a dataflow language than a systems language. SISAL is more than just a dataflow and fine-grain language: it is a set of tools that convert a textual, human-readable dataflow language into a graph format (named IF1, Intermediary Form 1). Part of the SISAL project also involved converting this graph format into runnable C code. SISAL Renaissance Era In 2010 SISAL saw a brief resurgence when a group of undergraduates at Worcester Polytechnic Institute investigated implementing a fine-grain parallelism backend for the SISAL language. In 2018 SISAL was modernized with indent-based syntax, first-class functions, lambdas, closures and lazy semantics within the SISAL-IS project.
https://en.wikipedia.org/wiki/Multiplication%20algorithm
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Efficient multiplication algorithms have existed since the advent of the decimal system. Long multiplication If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication or the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits. This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed. Example This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).

      23958233
×         5830
——————————————
      00000000 ( =  23,958,233 ×     0)
     71874699  ( =  23,958,233 ×    30)
   191665864   ( =  23,958,233 ×   800)
+ 119791165    ( =  23,958,233 × 5,000)
——————————————
  139676498390 ( = 139,676,498,390)

Other notations In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:

23958233 · 5830
———————————————
119791165
 191665864
   71874699
    00000000
———————————————
139676498390

The pseudocode below describes the process of the above multiplication. It keeps only one row to maintain the running sum, which finally becomes the result. Note that the '+=' operator is used to denote sum-to-existing-value and store (akin to languages such as Java and C) for compactness.
multiply(a[1..p], b[1..q], base)          // Operands containing rightmost digits at index 1
  product = [1..p+q]                      // Allocate space for result, initialized to zero
  for b_i = 1 to q                        // for all digits in b
    carry = 0
    for a_i = 1 to p                      // for all digits in a
      product[a_i + b_i - 1] += carry + a[a_i] * b[b_i]
      carry = product[a_i + b_i - 1] / base
      product[a_i + b_i - 1] = product[a_i + b_i - 1] mod base
    product[b_i + p] = carry              // last digit of this pass comes from the final carry
  return product

Usage in computers Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2^w, where w is the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers with n d
https://en.wikipedia.org/wiki/WB%20Games%20Boston
WB Games Boston (formerly Turbine Inc., then Turbine Entertainment Software Corp., and originally CyberSpace, Inc.) is an American video game developer. The studio is best known for its massively multiplayer online role-playing games Asheron's Call, Dungeons & Dragons Online, and The Lord of the Rings Online. On April 20, 2010, the company was acquired by Warner Bros. Home Entertainment for $160 million and became a part of Warner Bros. Interactive Entertainment (now Warner Bros. Games), the video game division of Warner Bros. Entertainment. History Turbine was founded as CyberSpace, Inc. in April 1994 by Jeremy Gaffney, Jonathan Monsarrat, Kevin Langevin, and Timothy Miller, some of whom were students from the Artificial Intelligence Lab at Brown University. In 1995, the company was based in Monsarrat's mother's house with 12 staff members. They found an office in Providence, Rhode Island, but later moved to Westwood, Massachusetts to better take advantage of the software engineers coming out of Boston's colleges. As CEO, Monsarrat used free food and office pranks to keep staff motivated. In 1995, the company changed its name to Turbine Entertainment Software Corp. In 1999, the company's first game, Asheron's Call, was released. It was notable for being the third 3D MMORPG, following the launch of Meridian 59 and then EverQuest. Its most notable feature, designed by Monsarrat, was a "loyalty" system giving new and experienced players incentives to work together. The Olthoi was the first monster developed for Asheron's Call, designed by Joe Angell. After Asheron's Call, the company went on to make a sequel, Asheron's Call 2: Fallen Kings, which came out in 2002 (just after the first Asheron's Call expansion). However, after only one expansion, Asheron's Call 2: Fallen Kings shut down in 2005. In the same year, Turbine Entertainment Software Corp. changed its name to Turbine, Inc. In 2006, Turbine released Dungeons & Dragons Online: Stormreach.
Early reception was positive, but the game was criticised for poor solo play. In 2007, Turbine released The Lord of the Rings Online: Shadows of Angmar, which got positive reviews and was seen as a needed boost for the company. In 2009, Dungeons & Dragons Online was suffering from a low player base; in an attempt to save the game, Turbine replaced the traditional monthly subscription model with a free-to-play one. In 2010, Turbine also moved The Lord of the Rings Online (which was then on its second expansion) to a free-to-play model. In the same year, Turbine was purchased by Warner Bros. Home Entertainment for $160 million. In 2012, Turbine announced that it would bring back Asheron's Call 2: Fallen Kings. In 2015, it was announced that development of Infinite Crisis would end immediately and that the game would be closed on August 14. The company was hit with layoffs for three consecutive years starting from 2014. While Turbine's focus was shifted to developing free-to-play mobile games by Warner Bro
https://en.wikipedia.org/wiki/Common%20Criteria
The Common Criteria for Information Technology Security Evaluation (referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. It is currently in version 3.1 revision 5. Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements (SFRs and SARs respectively) in a Security Target (ST); these may be drawn from Protection Profiles (PPs). Vendors can then implement or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine if they actually meet the claims. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous, standard, and repeatable manner at a level that is commensurate with the target environment for use. Common Criteria maintains a list of certified products, including operating systems, access control systems, databases, and key management systems. Key concepts Common Criteria evaluations are performed on computer security products and systems. Target of Evaluation (TOE) – the product or system that is the subject of the evaluation. The evaluation serves to validate claims made about the target. To be of practical use, the evaluation must verify the target's security features. This is done through the following: Protection Profile (PP) – a document, typically created by a user or user community, which identifies security requirements for a class of security devices (for example, smart cards used to provide digital signatures, or network firewalls) relevant to that user for a particular purpose. Product vendors can choose to implement products that comply with one or more PPs, and have their products evaluated against those PPs.
In such a case, a PP may serve as a template for the product's ST (Security Target, as defined below), or the authors of the ST will at least ensure that all requirements in relevant PPs also appear in the target's ST document. Customers looking for particular types of products can focus on those certified against the PP that meets their requirements. Security Target (ST) – the document that identifies the security properties of the target of evaluation. The ST may claim conformance with one or more PPs. The TOE is evaluated against the SFRs (Security Functional Requirements; again, see below) established in its ST, no more and no less. This allows vendors to tailor the evaluation to accurately match the intended capabilities of their product. This means that a network firewall does not have to meet the same functional requirements as a database management system, and that different firewalls may in fact be evaluated against completely different lists of requirements. The ST is usually published so that potential customers may determine the specific security features that have been certified by t
https://en.wikipedia.org/wiki/Direct%20memory%20access
Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU). Without DMA, when the CPU is using programmed input/output, it is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU first initiates the transfer, then it does other operations while the transfer is in progress, and it finally receives an interrupt from the DMA controller (DMAC) when the operation is done. This feature is useful whenever the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform work while waiting for a relatively slow I/O data transfer. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in some multi-core processors. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without DMA channels. Similarly, processing circuitry inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed in parallel. DMA can also be used for "memory to memory" copying or moving of data within memory. DMA can offload expensive memory operations, such as large copies or scatter-gather operations, from the CPU to a dedicated DMA engine. An implementation example is the I/O Acceleration Technology. DMA is of interest in network-on-chip and in-memory computing architectures. Principles Third-party Standard DMA, also called third-party DMA, uses a DMA controller. A DMA controller can generate memory addresses and initiate memory read or write cycles. It contains several hardware registers that can be written and read by the CPU.
These include a memory address register, a byte count register, and one or more control registers. Depending on what features the DMA controller provides, these control registers might specify some combination of the source, the destination, the direction of the transfer (reading from the I/O device or writing to the I/O device), the size of the transfer unit, and/or the number of bytes to transfer in one burst. To carry out an input, output or memory-to-memory operation, the host processor initializes the DMA controller with a count of the number of words to transfer, and the memory address to use. The CPU then commands the peripheral device to initiate a data transfer. The DMA controller then provides addresses and read/write control lines to the system memory. Each time a byte of data is ready to be transferred between the peripheral device and memory, the DMA controller increments its internal address register until the full block of data is transferred. Bus mastering In a bus mastering system, also known as a first-party DMA system, the CPU and
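The programming sequence described above can be sketched as a toy software model. The register names and layout here are hypothetical, not those of any real controller: the CPU loads the address and count registers, the controller then supplies an address per byte and moves the block on its own, and clearing the busy flag stands in for raising the completion interrupt.

```python
# Toy model of third-party DMA (hypothetical register layout; real
# controllers differ). The CPU programs an address and a byte count,
# then the controller moves the block without CPU involvement.

class DmaController:
    def __init__(self, memory):
        self.memory = memory      # shared system memory (a bytearray)
        self.addr = 0             # memory address register
        self.count = 0            # byte count register
        self.busy = False

    def start(self, device_data):
        """Copy `count` bytes from the device into memory at `addr`."""
        self.busy = True
        for i in range(self.count):
            # the controller generates the address for each byte,
            # incrementing its internal address register as it goes
            self.memory[self.addr + i] = device_data[i]
        self.busy = False         # real hardware would raise an interrupt

memory = bytearray(64)
dma = DmaController(memory)
dma.addr, dma.count = 16, 4       # CPU initializes the registers...
dma.start(b"\xde\xad\xbe\xef")    # ...and the controller performs the copy
```

In real hardware the loop body runs concurrently with the CPU's other work; the model only captures the register handshake, not the parallelism.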
https://en.wikipedia.org/wiki/Ad%20Lib%2C%20Inc.
Ad Lib, Inc. was a Canadian manufacturer of sound cards and other computer equipment founded by Martin Prevel, a former professor of music and vice-dean of the music department at the Université Laval. The company's best known product, the AdLib Music Synthesizer Card (ALMSC), or simply the AdLib as it was called, was the first add-on sound card for IBM compatibles to achieve widespread acceptance, becoming the first de facto standard for audio reproduction. History After development work on the AdLib Music Synthesizer Card had concluded, the company struggled to engage the software development community with their new product. As a result, Ad Lib partnered with Top Star Computer Services, Inc., a New Jersey company that provided quality assurance services to game developers. Top Star's President, Rich Heimlich, was sufficiently impressed by a product demonstration in Quebec in 1987 to endorse the product to his top customers. Sierra On-Line's King's Quest IV became the first game to support AdLib. The game's subsequent success helped to launch the AdLib card into mainstream media coverage. As sales of the card rose, many developers began including support for the AdLib in their programs. The success of the AdLib Music Card soon attracted competition. Not long after its introduction, Creative Labs introduced its competing Sound Blaster card. The Sound Blaster was fully compatible with AdLib's hardware, and it also implemented two key features absent from the AdLib: a PCM audio channel and a game port. With additional features and better marketing, the Sound Blaster quickly overshadowed AdLib as the de facto standard in PC gaming audio. AdLib's slow response, the AdLib Gold, did not sell well enough to sustain the company. In 1992, Ad Lib filed for bankruptcy, while the Sound Blaster family continued to dominate the PC game industry. That same year, Binnenalster GmbH from Germany acquired the assets of the company. 
Ad Lib was renamed AdLib Multimedia and relaunched the AdLib Gold sound card and many other products. Binnenalster sold AdLib Multimedia to Softworld Taiwan in 1994. Products AdLib Music Synthesizer Card (1987) AdLib used Yamaha's YM3812 sound chip, which produces sound by FM synthesis. The AdLib card consisted of a YM3812 chip with off-the-shelf external glue logic to plug into a standard PC-compatible ISA 8-bit slot. PC software generated multitimbral music and sound effects through the AdLib card, although the acoustic quality was distinctly synthesized. Digital audio (PCM) was not supported; this would become a key missing feature when the competitor Creative Labs implemented it in their Sound Blaster cards. It was still possible, however, to output PCM sound with software by modulating the playback volume at an audio rate, as was done, for example, in the MicroProse game F-15 Strike Eagle II and the multi-channel music editor Sound Club for MS-DOS. There are two separate revisions of the original AdLib sound card. The origin
https://en.wikipedia.org/wiki/Harvard%20architecture
The Harvard architecture is a computer architecture with separate storage and signal pathways for instructions and data. It is often contrasted with the von Neumann architecture, where program instructions and data share the same memory and pathways. The term is often stated as having originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself. However, in the only peer-reviewed published paper on the topic, "The Myth of the Harvard Architecture" published in the IEEE Annals of the History of Computing, the author demonstrates that:
- 'The term “Harvard architecture” was coined decades later, in the context of microcontroller design' and only 'retrospectively applied to the Harvard machines and subsequently applied to RISC microprocessors with separated caches'
- 'The so-called “Harvard” and “von Neumann” architectures are often portrayed as a dichotomy, but the various devices labeled as the former have far more in common with the latter than they do with each other.'
- 'In short [the Harvard architecture] isn't an architecture and didn't derive from work at Harvard.'
Modern processors appear to the user to be systems with von Neumann architectures, with the program code stored in the same main memory as the data. For performance reasons, internally and largely invisible to the user, most designs have separate processor caches for the instructions and data, with separate pathways into the processor for each. This is one form of what is known as the modified Harvard architecture. Harvard architecture is historically, and traditionally, split into two address spaces, but having three, i.e.
two extra (all accessed in each cycle), is also done, though rarely. Memory details In a Harvard architecture, there is no need to make the two memories share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ. In some systems, instructions for pre-programmed tasks can be stored in read-only memory while data memory generally requires read-write memory. In some systems, there is much more instruction memory than data memory, so instruction addresses are wider than data addresses. Contrast with von Neumann architectures In a system with a pure von Neumann architecture, instructions and data are stored in the same memory, so instructions are fetched over the same data path used to fetch data. This means that a CPU cannot simultaneously read an instruction and read or write data from or to the memory. In a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time, even without a cache. A H
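The split can be made concrete with a toy fetch-execute loop (the opcodes are invented for illustration, not any real ISA): instructions come from one memory and operands from another, so the instruction fetch and the data access in each cycle use separate pathways and never contend.

```python
# Toy Harvard machine: instructions and data sit in separate memories.

instr_mem = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
data_mem = [5, 7, 0]

def run():
    acc, pc = 0, 0
    while True:
        op, arg = instr_mem[pc]   # fetch from instruction memory
        pc += 1
        if op == "LOAD":
            acc = data_mem[arg]   # access to the separate data memory
        elif op == "ADD":
            acc += data_mem[arg]
        elif op == "STORE":
            data_mem[arg] = acc
        elif op == "HALT":
            return acc

run()
```

Note that nothing in this machine lets a program read instr_mem as data, which mirrors the article's point that the early Harvard machines provided no access to the instruction storage as data.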
https://en.wikipedia.org/wiki/Control%20Data%20Corporation
Control Data Corporation (CDC) was a mainframe and supercomputer firm. CDC was one of the nine major United States computer companies through most of the 1960s; the others were IBM, Burroughs Corporation, DEC, NCR, General Electric, Honeywell, RCA, and UNIVAC. CDC was well-known and highly regarded throughout the industry at the time. For most of the 1960s, Seymour Cray worked at CDC and developed a series of machines that were the fastest computers in the world by far, until Cray left the company to found Cray Research (CRI) in the 1970s. After several years of losses in the early 1980s, in 1988 CDC started to leave the computer manufacturing business and sell the related parts of the company, a process that was completed in 1992 with the creation of Control Data Systems, Inc. The remaining businesses of CDC currently operate as Ceridian. Background and origins: World War II–1957 During World War II the U.S. Navy had built up a classified team of engineers to build codebreaking machinery for both Japanese and German electro-mechanical ciphers. A number of these were produced by a team dedicated to the task working in the Washington, D.C., area. With the post-war wind-down of military spending, the Navy grew increasingly worried that this team would break up and scatter into various companies, and it started looking for ways to keep the code-breaking team together. Eventually they found their solution: John Parker, the owner of a Chase Aircraft affiliate named Northwestern Aeronautical Corporation located in St. Paul, Minnesota, was about to lose all his contracts due to the ending of the war. The Navy never told Parker exactly what the team did, since it would have taken too long to get top secret clearance. Instead they simply said the team was important, and they would be very happy if he hired them all. 
Parker was obviously wary, but after several meetings with increasingly high-ranking Naval officers it became apparent that whatever it was, they were serious, and he eventually agreed to give this team a home in his military glider factory. The result was Engineering Research Associates (ERA). Formed in 1946, this contract engineering company worked on a number of seemingly unrelated projects in the early 1950s. Among these was the ERA Atlas, an early military stored program computer, the basis of the Univac 1101, which was followed by the 1102, and then the 36-bit ERA 1103 (UNIVAC 1103). The Atlas was built for the Navy, which intended to use it in their non-secret code-breaking centers. In the early 1950s a minor political debate broke out in Congress about the Navy essentially "owning" ERA, and the ensuing debates and legal wrangling left the company drained of both capital and spirit. In 1952, Parker sold ERA to Remington Rand. Although Rand kept the ERA team together and developing new products, it was most interested in ERA's magnetic drum memory systems. Rand soon merged with Sperry Corporation to become Sperry Rand. In the proce
https://en.wikipedia.org/wiki/LL%20parser
In computer science, an LL parser (Left-to-right, leftmost derivation) is a top-down parser for a restricted context-free language. It parses the input from Left to right, performing Leftmost derivation of the sentence. An LL parser is called an LL(k) parser if it uses k tokens of lookahead when parsing a sentence. A grammar is called an LL(k) grammar if an LL(k) parser can be constructed from it. A formal language is called an LL(k) language if it has an LL(k) grammar. The set of LL(k) languages is properly contained in that of LL(k+1) languages, for each k ≥ 0. A corollary of this is that not all context-free languages can be recognized by an LL(k) parser. An LL parser is called LL-regular (LLR) if it parses an LL-regular language. The class of LLR grammars contains every LL(k) grammar for every k. For every LLR grammar there exists an LLR parser that parses the grammar in linear time. Two parser types that are outliers in this nomenclature are LL(*) and LL(finite). A parser is called LL(*)/LL(finite) if it uses the LL(*)/LL(finite) parsing strategy. LL(*) and LL(finite) parsers are functionally closer to PEG parsers. An LL(finite) parser can parse an arbitrary LL(k) grammar optimally in the amount of lookahead and lookahead comparisons. The class of grammars parsable by the LL(*) strategy encompasses some context-sensitive languages due to the use of syntactic and semantic predicates and has not been identified. It has been suggested that LL(*) parsers are better thought of as TDPL parsers. Contrary to a popular misconception, LL(*) parsers are not LLR in general, and are guaranteed by construction to perform worse on average (super-linear against linear time) and far worse in the worst case (exponential against linear time). LL grammars, particularly LL(1) grammars, are of great practical interest, as parsers for these grammars are easy to construct, and many computer languages are designed to be LL(1) for this reason. LL parsers may be table-based, i.e.
similar to LR parsers, but LL grammars can also be parsed by recursive descent parsers. According to Waite and Goos (1984), LL(k) grammars were introduced by Stearns and Lewis (1969). Overview For a given context-free grammar, the parser attempts to find the leftmost derivation. Given an example grammar : the leftmost derivation for is: Generally, there are multiple possibilities when selecting a rule to expand the leftmost non-terminal. In step 2 of the previous example, the parser must choose whether to apply rule 2 or rule 3: To be efficient, the parser must be able to make this choice deterministically when possible, without backtracking. For some grammars, it can do this by peeking on the unread input (without reading). In our example, if the parser knows that the next unread symbol is , the only correct rule that can be used is 2. Generally, an parser can look ahead at symbols. However, given a grammar, the problem of determining if there exists a parser for some that recognizes i
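The deterministic, one-token-lookahead choice described above can be sketched as a recursive-descent LL(1) parser for a small hypothetical grammar (S → '(' S ')' S | ε, chosen for illustration; it is not the article's example grammar). Peeking at the next unread token is enough to select the production, with no backtracking.

```python
# Minimal LL(1) recursive-descent parser for the hypothetical grammar
#   S -> '(' S ')' S | ε
# One token of lookahead decides which production to apply.

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def S():
        nonlocal pos
        if peek() == '(':             # lookahead selects S -> ( S ) S
            pos += 1
            S()
            if peek() != ')':
                raise SyntaxError(f"expected ')' at position {pos}")
            pos += 1
            S()
        # otherwise apply S -> ε (consume nothing)

    S()
    if pos != len(tokens):
        raise SyntaxError(f"unexpected token at position {pos}")
    return True

parse(list("(()())"))   # well-formed nesting: accepted
```

Because the grammar is LL(1), peek() alone resolves every choice; a grammar needing more lookahead would require an LL(k) table or an LL(*)/LL(finite) strategy.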
https://en.wikipedia.org/wiki/John%20Mathieson%20%28computer%20scientist%29
John Mathieson is a British computer chip designer who initially worked for Sinclair Research on the cancelled Loki computer project before co-founding Flare with ex-Sinclair colleagues Martin Brennan and Ben Cheese. After working at Flare on the Flare 1 and its development into the Konix Multisystem, he worked for Atari Corporation developing the Atari Panther video game console. It was abandoned in favor of its successor, the Atari Jaguar. The Jaguar was commercially released in the United States on November 23, 1993. Mathieson has been called "the father of the Jaguar." After leaving Atari, Mathieson worked on the development of the ill-fated NUON media processor at VM Labs. He moved to work for Nvidia at the end of 2001. As Director of Mobile Systems Architecture at Nvidia Corp. he led the system architecture team for three generations of the Tegra applications processor. References External links http://www.vmlabs.de/team.htm - List of VM Labs team with picture of John https://www.linkedin.com/in/johnmathieson/ - Linked-In profile
https://en.wikipedia.org/wiki/ITV%20%28TV%20network%29
ITV is a British free-to-air public broadcast television network that is branded as ITV1 in most of the UK except for central and northern Scotland, where it is branded as STV. It was launched in 1955 as Independent Television to provide competition to the then-monopoly BBC Television (established in 1936). ITV is the oldest commercial network in the UK. Since the passing of the Broadcasting Act 1990, it has been legally known as Channel 3 to distinguish it from the other analogue channels at the time: BBC1, BBC2 and Channel 4. ITV was – for four decades – a network of separate companies which provided regional television services and also shared programmes between each other to be shown on the entire network. Each franchise was originally owned by a different company. After several mergers, the fifteen regional franchises are now held by two companies: ITV plc, which runs the ITV1 channel and the UTV channel – now both branded as ITV1 – and STV Group, which runs the STV channel. The ITV network is a separate entity from ITV plc, the company that resulted from the merger of Granada plc and Carlton Communications in 2004. ITV plc holds the Channel 3 broadcasting licences for every region except for central and northern Scotland, which are held by STV Group. Today, ITV plc simply commissions the network schedule centrally – programmes are made by its own subsidiary ITV Studios and by independent production companies. Regional programming remains only in news and some current affairs series. In Northern Ireland, ITV plc used the brand name UTV as the name of the channel until the ITV channel was rebranded as ITV1; the UTV name is still used for local programming shown there. This was the name used by former owner UTV Media (now known as Wireless Group). ITV plc bought UTV in 2016. Although the ITV network's history goes back to 1955, many regional franchisees have changed over the years.
Some of the most important names in the network's past – notably Thames, ABC and ATV – have no connection with the modern network. History The origins of ITV lie in the passing of the Television Act 1954, designed to break the monopoly on television held by the BBC Television Service. The act created the Independent Television Authority (ITA, then IBA after the Sound Broadcasting Act) to heavily regulate the industry and to award franchises. The first six franchises were awarded in 1954 for London, the Midlands and the North of England, with separate franchises for weekdays and weekends. The first ITV service to launch was London's Associated-Rediffusion on 22 September 1955, with the Midlands and North services launching in February 1956 and May 1956 respectively. Following these launches, the ITA awarded more franchises until the whole country was covered by fourteen regional stations, all launched by 1962. The network has been modified several times through franchise reviews that have taken place in 1963, 1967, 1974, 1980 and 1991, during which b
https://en.wikipedia.org/wiki/AirPort
AirPort is a discontinued line of wireless routers and network cards developed by Apple Inc. using Wi-Fi protocols. In Japan, the line of products was marketed under the brand AirMac due to previous registration by I-O Data. Apple introduced the AirPort line in 1999. Wireless cards were discontinued in 2009 following the Mac transition to Intel processors, after all of Apple's Mac products had adopted built-in Wi-Fi. Apple's line of wireless routers consisted of the AirPort Base Station (later AirPort Extreme); the AirPort Time Capsule, a variant with a built-in hard disk for automated backups; and the AirPort Express, a compact router. In 2018, Apple discontinued the AirPort line. The remaining inventory was sold off, and Apple later retailed routers from Linksys, Netgear, and Eero in Apple retail stores. Overview AirPort debuted in 1999, as "one more thing" at Macworld New York, with Steve Jobs surfing the web on an iBook using wireless internet technology for the very first time in a public demo of an Apple laptop. The initial offering consisted of an optional expansion card for Apple's new line of iBook notebooks and an AirPort Base Station. The AirPort card (a repackaged Lucent ORiNOCO Gold Card PC Card adapter) was later added as an option for almost all of Apple's product line, including PowerBooks, eMacs, iMacs, and Power Macs. Only Xserves did not have it as a standard or optional feature. The original AirPort system allowed transfer rates up to 11 Mbit/s and was commonly used to share Internet access and files between multiple computers. In 2003, Apple introduced AirPort Extreme, based on the 802.11g specification, using Broadcom's BCM4306/BCM2050 two-chip solution. AirPort Extreme allows theoretical peak data transfer rates of up to 54 Mbit/s, and is fully backward-compatible with existing 802.11b wireless network cards and base stations. 
Several of Apple's desktop computers and portable computers, including the MacBook Pro, MacBook, Mac Mini, and iMac shipped with an AirPort Extreme (802.11g) card as standard. All other Macs of the time had an expansion slot for the card. AirPort and AirPort Extreme cards are not physically compatible: AirPort Extreme cards cannot be installed in older Macs, and AirPort cards cannot be installed in newer Macs. The original AirPort card was discontinued in June 2004. In 2004, Apple released the AirPort Express base station as a "Swiss Army knife" multifunction product. It can be used as a portable travel router, using the same AC connectors as on Apple's AC adapters; as an audio streaming device, with both line-level and optical audio outputs; and as a USB printer sharing device, through its USB host port. In 2007, Apple unveiled a new AirPort Extreme (802.11 Draft-N) Base Station, which introduced 802.11 Draft-N to the Apple AirPort product line. This implementation of 802.11 Draft-N can operate in both the 2.4 GHz and 5 GHz ISM bands, and has modes that make it compatible with 802.11b/g and 802
https://en.wikipedia.org/wiki/Vector%20processor
In computing, a vector processor or array processor is a central processing unit (CPU) that implements an instruction set where its instructions are designed to operate efficiently and effectively on large one-dimensional arrays of data called vectors. This is in contrast to scalar processors, whose instructions operate on single data items only, and in contrast to some of those same scalar processors having additional single instruction, multiple data (SIMD) or SWAR Arithmetic Units. Vector processors can greatly improve performance on certain workloads, notably numerical simulation and similar tasks. Vector processing techniques also operate in video-game console hardware and in graphics accelerators. Vector machines appeared in the early 1970s and dominated supercomputer design through the 1970s into the 1990s, notably the various Cray platforms. The rapid fall in the price-to-performance ratio of conventional microprocessor designs led to a decline in vector supercomputers during the 1990s. History Early research and development Vector processing development began in the early 1960s at the Westinghouse Electric Corporation in their Solomon project. Solomon's goal was to dramatically increase math performance by using a large number of simple coprocessors under the control of a single master Central processing unit (CPU). The CPU fed a single common instruction to all of the arithmetic logic units (ALUs), one per cycle, but with a different data point for each one to work on. This allowed the Solomon machine to apply a single algorithm to a large data set, fed in the form of an array. In 1962, Westinghouse cancelled the project, but the effort was restarted by the University of Illinois at Urbana–Champaign as the ILLIAC IV. Their version of the design originally called for a 1 GFLOPS machine with 256 ALUs, but, when it was finally delivered in 1972, it had only 64 ALUs and could reach only 100 to 150 MFLOPS. 
Nevertheless, it showed that the basic concept was sound, and, when used on data-intensive applications, such as computational fluid dynamics, the ILLIAC was the fastest machine in the world. The ILLIAC approach of using separate ALUs for each data element is not common to later designs, and is often referred to under a separate category, massively parallel computing. Around this time Flynn categorized this type of processing as an early form of single instruction, multiple threads (SIMT). Computer for operations with functions A computer for operations with functions was presented and developed by Kartsev in 1967. Supercomputers The first vector supercomputers are the Control Data Corporation STAR-100 and Texas Instruments Advanced Scientific Computer (ASC), which were introduced in 1974 and 1972, respectively. The basic ASC (i.e., "one pipe") ALU used a pipeline architecture that supported both scalar and vector computations, with peak performance reaching approximately 20 MFLOPS, readily achieved when processing long vectors.
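The core idea, one instruction driving an operation across a whole array of data, survives in today's array libraries. A minimal sketch (using NumPy as a stand-in for vector hardware, so this is an analogy rather than a literal hardware model) contrasts a scalar loop, one element per instruction, with a single whole-array operation:

```python
# One operation applied elementwise across whole arrays, versus a scalar
# loop touching one element at a time.
import numpy as np

a = np.arange(8, dtype=np.float64)
b = np.arange(8, dtype=np.float64)

# Scalar style: one add per loop iteration.
scalar_sum = np.empty_like(a)
for i in range(len(a)):
    scalar_sum[i] = a[i] + b[i]

# Vector style: a single elementwise operation over all elements, the
# analogue of one vector instruction operating on vector registers.
vector_sum = a + b
```

The two results are identical; the difference is that the vector form expresses the whole-array operation as one step, which is exactly what vector (and SIMD) instruction sets exploit.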
https://en.wikipedia.org/wiki/Conceptual%20schema
A conceptual schema or conceptual data model is a high-level description of informational needs underlying the design of a database. It typically includes only the main concepts and the main relationships among them. Typically this is a first-cut model, with insufficient detail to build an actual database. This level describes the structure of the whole database for a group of users. The conceptual model is also known as the data model that can be used to describe the conceptual schema when a database system is implemented. It hides the internal details of physical storage and targets the description of entities, datatypes, relationships and constraints. Overview A conceptual schema is a map of concepts and their relationships used for databases. This describes the semantics of an organization and represents a series of assertions about its nature. Specifically, it describes the things of significance to an organization (entity classes), about which it is inclined to collect information, and their characteristics (attributes) and the associations between pairs of those things of significance (relationships). Because a conceptual schema represents the semantics of an organization, and not a database design, it may exist on various levels of abstraction. The original ANSI four-schema architecture began with the set of external schemata that each represents one person's view of the world around him or her. These are consolidated into a single conceptual schema that is the superset of all of those external views. A data model can be as concrete as each person's perspective, but this tends to make it inflexible. If that person's world changes, the model must change. Conceptual data models take a more abstract perspective, identifying the fundamental things, of which the things an individual deals with are just examples. The model does allow for what is called inheritance in object oriented terms. 
The set of instances of an entity class may be subdivided into entity classes in their own right. Thus, each instance of a sub-type entity class is also an instance of the entity class's super-type. Each instance of the super-type entity class, then, is also an instance of one of the sub-type entity classes. Super-type/sub-type relationships may be exclusive or not. A methodology may require that each instance of a super-type may only be an instance of one sub-type. Similarly, a super-type/sub-type relationship may be exhaustive or not. It is exhaustive if the methodology requires that each instance of a super-type must be an instance of a sub-type. A sub-type named "Other" is often necessary. Example relationships Each PERSON may be the vendor in one or more ORDERS. Each ORDER must be from one and only one PERSON. PERSON is a sub-type of PARTY. (Meaning that every instance of PERSON is also an instance of PARTY.) Each EMPLOYEE may have a supervisor who is also an EMPLOYEE. Data structure diagram A data structure diagram (DSD) is a dat
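The example relationships can be sketched in object-oriented terms (an illustrative mapping with hypothetical class names, not a database design): PERSON as a sub-type of PARTY, and each ORDER tied to exactly one PERSON.

```python
# Illustrative mapping of the conceptual-schema example to classes.

class Party:                          # super-type entity class
    def __init__(self, name):
        self.name = name

class Person(Party):                  # sub-type: every PERSON is a PARTY
    def __init__(self, name):
        super().__init__(name)
        self.orders = []              # vendor in zero or more ORDERS

class Order:
    def __init__(self, vendor):
        if not isinstance(vendor, Person):
            raise TypeError("each ORDER must be from one and only one PERSON")
        self.vendor = vendor          # mandatory, single-valued relationship
        vendor.orders.append(self)

alice = Person("Alice")
order = Order(alice)
```

Here inheritance plays the role of the super-type/sub-type relationship: every Person instance is also a Party instance, just as the text describes.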
https://en.wikipedia.org/wiki/YaST
YaST (Yet another Setup Tool) is a Linux operating system setup and configuration tool. YaST is featured in the openSUSE Linux distribution, as well as in SUSE's derived commercial distributions. It is also part of the defunct United Linux. YaST features tools that can configure many aspects of the system. YaST was first released in April 1995. The first SuSE distribution that included YaST was released in May 1996. YaST was re-written in 1999 and first included in SuSE Linux 6.3, as an installer only. YaST2 was added to the desktop in SuSE Linux 6.4 and co-existed with YaST1 until YaST1's removal in SuSE Linux 8.0. Details YaST is free software that SUSE made available under the GPL in 2004. It is a tool for administering and maintaining a SUSE Linux installation. It allows administrators to install software, configure hardware, set up networks and servers, and more. A feature of YaST is that it contains both Graphical user interface (GUI) and Text-based user interface (TUI) (with ncurses) front ends. This is especially useful for non-GUI installations such as servers, for system administration over slow Internet connections, and for when one is unable to boot into a graphical X server but still requires an advanced user interface to the package manager (for example, a novice user trying to downgrade an Xorg package to fix a graphical installation). YaST offers package management functionality through the ZYpp project. The first ZYpp-enabled package-management YaST applications had performance problems and long start-up times, but these were improved in the 10.2 and 10.3 releases. Starting with openSUSE 11.0 alpha 3, ZYpp was integrated with the SAT solver project, making YaST and Zypper faster than other rpm based package managers. YaST used to include SaX and SaX2, the SuSE Advanced X configuration. SaX was re-written as SaX2 in SuSE Linux 6.4. SaX1 was removed in SuSE Linux 8.1 and SaX2 was removed from the YaST Control Center in openSUSE 11.2.
SaX2 was removed completely in openSUSE 11.3. The GTK interface was removed in openSUSE Leap 42.1. YaST often receives updates and improvements in Tumbleweed and between versions of Leap. openSUSE Leap 15.1, for example, saw improvements to the YaST interface for managing firewalls including the addition of an interface in the command line version of YaST. In this same release of openSUSE Leap, YaST now has an updated logo and improved partition management module. YaST is implemented in the Ruby programming language. AutoYaST AutoYaST is a system for installing one or more openSUSE systems automatically without user intervention. AutoYaST installations are performed using a control file with installation and configuration data. The profile of each current system is stored in /root/autoyast.xml. WebYaST WebYaST is a web interface for YaST that can be used to check the status of the current machine. It can check on the installation of packages, shutdown or reboot the system, change some system
https://en.wikipedia.org/wiki/Grover%27s%20algorithm
In quantum computing, Grover's algorithm, also known as the quantum search algorithm, is a quantum algorithm for unstructured search that finds with high probability the unique input to a black box function that produces a particular output value, using just O(√N) evaluations of the function, where N is the size of the function's domain. It was devised by Lov Grover in 1996. The analogous problem in classical computation cannot be solved in fewer than O(N) evaluations (because, on average, one has to check half of the domain to get a 50% chance of finding the right input). Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani proved that any quantum solution to the problem needs to evaluate the function Ω(√N) times, so Grover's algorithm is asymptotically optimal. Since classical algorithms for NP-complete problems require exponentially many steps, and Grover's algorithm provides at most a quadratic speedup over the classical solution for unstructured search, this suggests that Grover's algorithm by itself will not provide polynomial-time solutions for NP-complete problems (as the square root of an exponential function is still an exponential, not a polynomial, function). Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable when N is large, and Grover's algorithm can be applied to speed up broad classes of algorithms. Grover's algorithm could brute-force a 128-bit symmetric cryptographic key in roughly 2⁶⁴ iterations, or a 256-bit key in roughly 2¹²⁸ iterations. It may not be the case that Grover's algorithm poses a significantly increased risk to encryption over existing classical algorithms, however. Applications and limitations Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms. 
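The quadratic speedup can be illustrated with a small classical simulation of Grover's amplitude dynamics (a sketch, not a quantum circuit; the item count and marked index below are arbitrary): after about (π/4)·√N rounds of the oracle phase-flip followed by inversion about the mean, nearly all measurement probability concentrates on the marked item.

```python
import math

def grover_probabilities(n_items, marked, iterations=None):
    """Simulate Grover's iteration on n_items amplitudes; return probabilities."""
    if iterations is None:
        # Near-optimal iteration count: about (pi/4) * sqrt(N).
        iterations = round(math.pi / 4 * math.sqrt(n_items))
    # Start in the uniform superposition over all items.
    amps = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amps[marked] = -amps[marked]        # oracle: phase-flip the marked item
        mean = sum(amps) / n_items          # diffusion: inversion about the mean
        amps = [2 * mean - a for a in amps]
    return [a * a for a in amps]            # measurement probabilities

probs = grover_probabilities(64, marked=5)
```

For N = 64 this uses only 6 iterations, versus an average of 32 classical checks, and leaves the marked item with success probability above 99%.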
In particular, algorithms for NP-complete problems generally contain exhaustive search as a subroutine, which can be sped up by Grover's algorithm. The current best algorithm for 3SAT is one such example. Generic constraint satisfaction problems also see quadratic speedups with Grover. These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is being applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance. Grover's algorithm can also give provable speedups for black-box problems in quantum query complexity, including element distinctness and the collision problem (solved with the Brassard–Høyer–Tapp algorithm). In these types of problems, one treats the oracle function f as a database, and the goal is to query this function as few times as possible. Cryptography Grover's algorithm essentially solves the task of function inversion. Roughly speaking, if we have a function that can be evaluated on a quantum computer, Grover's alg
https://en.wikipedia.org/wiki/Trusted%20Computing
Trusted Computing (TC) is a technology developed and promoted by the Trusted Computing Group. The term is taken from the field of trusted systems and has a specialized meaning that is distinct from the field of confidential computing. With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software. Enforcing this behavior is achieved by loading the hardware with a unique encryption key that is inaccessible to the rest of the system and the owner. TC is controversial as the hardware is not only secured for its owner, but also secured against its owner. Such controversy has led opponents of trusted computing, such as free software activist Richard Stallman, to refer to it instead as treacherous computing, even to the point where some scholarly articles have begun to place scare quotes around "trusted computing". Trusted Computing proponents such as International Data Corporation, the Enterprise Strategy Group and Endpoint Technologies Associates state that the technology will make computers safer, less prone to viruses and malware, and thus more reliable from an end-user perspective. They also state that Trusted Computing will allow computers and servers to offer improved computer security over that which is currently available. Opponents often state that this technology will be used primarily to enforce digital rights management policies (imposed restrictions to the owner) and not to increase computer security. Chip manufacturers Intel and AMD, hardware manufacturers such as HP and Dell, and operating system providers such as Microsoft include Trusted Computing in their products if enabled. The U.S. Army requires that every new PC it purchases comes with a Trusted Platform Module (TPM). As of July 3, 2007, so does virtually the entire United States Department of Defense. 
Key concepts Trusted Computing encompasses six key technology concepts, all of which are required for a fully Trusted system, that is, a system compliant with the TCG specifications: Endorsement key Secure input and output Memory curtaining / protected execution Sealed storage Remote attestation Trusted Third Party (TTP) Endorsement key The endorsement key is a 2048-bit RSA public and private key pair that is created randomly on the chip at manufacture time and cannot be changed. The private key never leaves the chip, while the public key is used for attestation and for encryption of sensitive data sent to the chip, as occurs during the TPM_TakeOwnership command. This key is used to allow the execution of secure transactions: every Trusted Platform Module (TPM) is required to be able to sign a random number (in order to allow the owner to show that he has a genuine trusted computer), using a particular protocol created by the Trusted Computing Group (the direct anonymous attestation protocol) in order to ensure its compliance with the TCG standard and to prove its identity; this makes it
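The sign-a-random-number flow can be sketched with textbook RSA. This is purely illustrative: the tiny primes, the nonce value, and the plain modular-exponentiation signature stand in for the 2048-bit endorsement key and the direct anonymous attestation protocol of a real TPM.

```python
# Toy RSA sketch of an endorsement-key signing flow (illustrative only).
p, q = 61, 53                 # a real TPM key uses 2048-bit moduli
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent: never leaves the chip

nonce = 42                    # challenger's random number
signature = pow(nonce, d, n)  # the "TPM" signs with the private key
valid = pow(signature, e, n) == nonce  # anyone can verify with the public key
```

The point of the scheme is the asymmetry: verification needs only the public half, while a valid signature proves possession of the private key locked inside the chip.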
https://en.wikipedia.org/wiki/GE%20645
The GE 645 mainframe computer was a development of the GE 635 for use in the Multics project. It was the first computer to implement a configurable hardware-protected memory system. It was designed to satisfy the requirements of Project MAC, which sought a platform to host its proposed next-generation time-sharing operating system (Multics), and to meet the requirements of a theorized computer utility. The system was the first truly symmetric multiprocessing machine to use virtual memory; it was also among the first machines to implement what is now known as a translation lookaside buffer, the foundational patent for which was granted to John Couleur and Edward Glaser. General Electric first publicly announced the GE 645 at the Fall Joint Computer Conference in November 1965. At a subsequent press conference in December of that year, GE announced that it would work towards "broad commercial availability" of the system. However, GE subsequently withdrew it from active marketing at the end of 1966. In total, at least six sites ran GE 645 systems in the period from 1967 to 1975. System configuration The basic system configuration consisted of a combination of four basic modules: Processor System Controller Generalized I/O Controller (GIOC) Extended Memory Unit (EMU) Compared to the rest of the 600 series, the 645 did not use the standard IOCs (input/output controllers) for I/O, nor did it use the DATANET-30 front-end processor for communications. Instead, both sets of functionality were combined into one unit, the GIOC (Generalized I/O Controller), which provided dedicated channels for both peripheral (disc/tape) and terminal I/O. The GIOC acted as an active device and was directly connected to memory via dedicated links to each System Controller present in a specific configuration. 
The Extended Memory Unit, though termed a drum, was in reality a large fixed-head hard disk with one head per track, supplied as an OEM product by Librascope. The EMU consisted of 4,096 tracks providing 4 MW (megawords) of storage (equivalent to 16 MB). Each track had a dedicated read/write head; these were organised into groups of 16 ("track sets") used to read or write a sector. A sector, the default unit of data allocation in the EMU, was made up of 80 words, of which 64 were data and the remaining 16 served as a guard band. The average transfer rate between the EMU and memory was 470,000 words per second; all transfers were 72 bits (two words) wide, with 6.7 μs needed to transfer 4 words. The unit had a rotational speed of 1,725 rpm, which gave an average latency of 17.4 milliseconds. Architecture Processor Modes The GE-645 has two modes of instruction execution (Master and Slave) inherited from the GE-635; it also adds another dimension by having two modes of memory addressing (Absolute and Appending). When the process is executing in Absolute Mode, addressing is limited to 2¹⁸ word
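The quoted average latency follows directly from the rotational speed: on average a request waits half a revolution for the right sector to pass under the fixed heads.

```python
# Worked check of the EMU's quoted average rotational latency.
rpm = 1725
revolution_s = 60 / rpm                   # one full rotation in seconds (~34.8 ms)
avg_latency_ms = revolution_s / 2 * 1000  # average wait = half a rotation
# avg_latency_ms comes out to ~17.39 ms, matching the quoted 17.4 ms
```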
https://en.wikipedia.org/wiki/Honeywell%206000%20series
The Honeywell 6000 series computers were rebadged versions of General Electric's 600-series mainframes manufactured by Honeywell International, Inc. from 1970 to 1989. Honeywell acquired the line when it purchased GE's computer division in 1970 and continued to develop it under a variety of names for many years. In 1989, Honeywell sold its computer division to the French company Groupe Bull, which continued to market compatible machines. Models The high-end model was the 6080, with performance of approximately 1 MIPS. Smaller models were the 6070, 6060, 6050, 6040, and 6030. In 1973, a low-end 6025 was introduced. The models with an even number as the next-to-last digit of the model number included an Enhanced Instruction Set feature (EIS), which added decimal arithmetic and storage-to-storage operations to the original word-oriented architecture. In 1973, Honeywell introduced the 6180, a 6000-series machine with addressing modifications to support the Multics operating system. In 1974, Honeywell released the 68/80, which added cache memory in each processor and support for a large (2–8 million word) directly addressable memory. In 1975, the 6000-series systems were renamed Level 66; these were slightly faster (up to 1.2 MIPS) and offered larger memories. In 1977, the line was again renamed 66/DPS, and in 1979 DPS-8, again with a small performance improvement, to 1.7 MIPS. The Multics model was the DPS-8/M. Hardware 6000-series systems were said to be "memory oriented": a system controller in each memory module arbitrated requests from other system components (processors, etc.). Memory modules contained 128 K 36-bit words with a 1.2 μs cycle time; a system could support one or two memory modules for a maximum of 256 K words (1 MB of 9-bit bytes). Each module provided two-way interleaved memory. Devices called Input/Output Multiplexers (IOMs) served as intelligent I/O controllers for communication with most peripherals. 
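The quoted maximum memory can be checked by arithmetic: two 128 K-word modules of 36-bit words come to exactly 1 MB when counted in the series' 9-bit bytes.

```python
# Worked check: maximum memory of 256 K 36-bit words as 9-bit bytes.
words = 2 * 128 * 1024          # two 128 K-word modules
bits = words * 36               # 36-bit words
nine_bit_bytes = bits // 9      # four 9-bit bytes per word
# nine_bit_bytes == 1_048_576, i.e. exactly 1 MB of 9-bit bytes
```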
The IOM supported two different types of peripheral channels: Common Peripheral Channels could handle data transfer rates up to 650,000 characters per second; Peripheral Subsystem Interface Channels allowed transfers up to 1.3 million characters per second. The 6000 supported multiple processors and IOMs. Each processor and IOM had four ports for connection to memory; each memory module had eight ports for communication with other system components, with an interrupt cell for each port. Memory protection and relocation were accomplished using a base and bounds register in the processor, the Base Address Register (BAR). The IOM was passed the contents of the BAR for each I/O request, allowing it to use virtual rather than physical addresses. A variety of communications controllers could also be used with the system. The older DATANET-30 and the DATANET 305 were intended for smaller systems, with up to twelve terminals attached to an IOM. The DATANET 355 processor attached directly to the system controller in a memory module and was capable of supporting up to 200 termi
https://en.wikipedia.org/wiki/GE-600%20series
The GE-600 series was a family of 36-bit mainframe computers originating in the 1960s, built by General Electric (GE). When GE left the mainframe business the line was sold to Honeywell, which built similar systems into the 1990s as the division moved to Groupe Bull and then NEC. The system is perhaps best known as the hardware used by the Dartmouth Time Sharing System (DTSS) and the Multics operating system. Multics was supported by virtual memory additions made to later versions of the series. Architecture The 600 series used 36-bit words and 18-bit addresses. The machines had two 36-bit accumulators, eight 18-bit index registers, and one 8-bit exponent register. They supported floating point in both 36-bit single precision and 2 × 36-bit double precision, the exponent being stored separately, allowing up to 71 bits of precision, with one bit used for the sign. They had an elaborate set of addressing modes, many of which used indirect words, some of which were auto-incrementing or auto-decrementing. The series supported 6-bit and 9-bit bytes through addressing modes; these supported extracting specific bytes and incrementing the byte pointer, but not random access to bytes. It also included a number of channel controllers for handling I/O. The CPU could hand off short programs written in the channel controllers' own machine language; these would then process the data, move it to or from memory, and raise an interrupt when they completed. This allowed the main CPU to move on to other tasks while waiting for the slow I/O to complete, a primary feature of time-sharing systems. Operating systems Originally the operating system for the 600-series computers was GECOS, developed by GE beginning in 1962. GECOS was initially a batch processing system, but later added many features seen on more modern systems, including multitasking and multi-user support. 
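The 6-bit byte layout can be illustrated by packing six byte values into a single 36-bit word; the byte values and helper names here are made up for illustration and are not actual GECOS character codes.

```python
# Pack six 6-bit bytes into one 36-bit word, leftmost byte in the
# high-order bits (illustrative of the layout, not of GECOS encoding).
def pack_6bit(chars):
    word = 0
    for c in chars:
        word = (word << 6) | (c & 0o77)   # each byte occupies 6 bits
    return word

def unpack_6bit(word):
    # Extract the six bytes back out, high-order byte first.
    return [(word >> shift) & 0o77 for shift in range(30, -1, -6)]

w = pack_6bit([1, 2, 3, 4, 5, 6])
```

Extracting a specific byte, as the addressing modes did, is just a shift and mask; random byte access across memory was not supported in hardware.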
Between 1963 and 1964, GE worked with Dartmouth College on their Dartmouth BASIC project, which also led to the development of a new timesharing system to support it on the GE-235. This was a great success and led to a late 1967 proposal for an improved version of the system running on the 635. The first version, known at Dartmouth as "Phase I" and at GE as "Mark II" (the original GE-235 system retroactively becoming "Mark I"), was a similar success. "Phase II" at Dartmouth was released as the Dartmouth Time Sharing System (DTSS), while GE further developed Mark II into the improved Mark III. The Computer History Museum's Corporate Histories Collection describes GE's Mark I history this way: The precursor of General Electric Information Services began as a business unit within General Electric formed to sell excess computer time on the computers used to give customer demos. In 1965, Warner Sinback recommended that they begin to sell time-sharing services using the time-sharing system (Mark 1) developed at Dartmouth on a General Electric 265 computer. The service was an instant success and by 1968, GEIS had 40% of the $ 70
https://en.wikipedia.org/wiki/CDC%206600
The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation. Generally considered to be the first successful supercomputer, it outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. The first CDC 6600s were delivered in 1965 to Livermore and Los Alamos. They quickly became a must-have system in high-end scientific and mathematical computing, with systems being delivered to Courant Institute of Mathematical Sciences, CERN, the Lawrence Radiation Laboratory, and many others. At least 100 were delivered in total. A CDC 6600 is on display at the Computer History Museum in Mountain View, California. The only running CDC 6000 series machine has been restored by Living Computers: Museum + Labs. History and impact CDC's first products were based on the machines designed at Engineering Research Associates (ERA), which Seymour Cray had been asked to update after moving to CDC. After an experimental machine known as the Little Character, in 1960 they delivered the CDC 1604, one of the first commercial transistor-based computers, and one of the fastest machines on the market. Management was delighted, and made plans for a new series of machines that were more tailored to business use; they would include instructions for character handling and record keeping for instance. Cray was not interested in such a project, and set himself the goal of producing a new machine that would be 50 times faster than the 1604. When asked to complete a detailed report on plans at one and five years into the future, he wrote back that his five-year goal was "to produce the largest computer in the world", "largest" at that time being synonymous with "fastest", and that his one-year plan was "to be one-fifth of the way". 
Taking his core team to new offices near the original CDC headquarters, they started to experiment with higher-quality versions of the "cheap" transistors Cray had used in the 1604. After much experimentation, they found that there was simply no way the germanium-based transistors could be run much faster than those used in the 1604. The "business machine" that management had originally wanted, now forming as the CDC 3000 series, pushed them about as far as they could go. Cray then decided the solution was to work with the then-new silicon-based transistors from Fairchild Semiconductor, which were just coming onto the market and offered dramatically improved switching performance. During this period, CDC grew from a startup into a large company, and Cray became increasingly frustrated with what he saw as ridiculous management requirements. Things became considerably more tense in 1962 when the new CDC 3600 started to near production quality, and appeared to be exactly what management wanted, wh
https://en.wikipedia.org/wiki/IDB
IDB can mean: Discount Bank, one of Israel's leading banks, sometimes referred to as Israel Discount Bank European Injury Data Base, a database maintained by the European Union that contains standardized cross-national information on the external causes of injuries treated in emergency departments in the EU IDB Bank, a New York-based private and commercial bank with locations in the United States, Latin America and Israel, and a wholly owned subsidiary of Tel Aviv-based Discount Bank IDB Capital, New York-based broker dealer and wholly owned subsidiary of IDB Bank IDB Communications Group, Inc., a constituent of MCI Inc. IDB Development, an investment group that, together with Discount Investment Corporation, manages a portfolio of investments in a range of companies. IDB-IIC Federal Credit Union, a not-for-profit, financial service cooperative owned by over 11,000 members and sponsored by the Inter-American Development Bank Illegal Diamond Buying, the term used at the turn of the 19th-20th century for diamond trading outside the De Beers cartel IndexedDB or Indexed Database API, a low-level API for client-side storage of significant amounts of structured data, including files/blobs Industrial Development Bank, now the Business Development Bank of Canada Industrial Development Board, boards with powers to raise levies from specific industrial sectors in the United Kingdom for coordinated action Industrial Development Board of the City of New Orleans, a public corporation and instrument of the New Orleans City Council to drive economic growth. 
Industrial Development Bureau, an agency of the Ministry of Economic Affairs of the Republic of China Industrial revenue bonds (formerly called Industrial Development Revenue Bonds) are bonds issued to construct facilities or purchase equipment which is then leased to a corporation Infectious Diseases Branch, a division of the California Department of Public Health that conducts surveillance, investigation, control and prevention of many important infectious diseases Involuntary denied boarding, a passenger being prevented from boarding an overbooked airline flight Institute for Defense and Business, a non-profit education and research institute. Integrated Database, a database maintained by the Federal Judicial Center that provides information on civil case and criminal defendant filings and terminations in the district courts, along with bankruptcy court and appellate court case information. Intel debugger, a proprietary debugger Inter-American Development Bank, the largest source of development financing for Latin America and the Caribbean. Intelligent drum and bass, a sub-genre of drum and bass music Interface Descriptor Block, a Cisco IOS internal data structure that contains information on network data Intermediate Debug File, a Visual Studio file type Internal drainage board, a type of English and Welsh water level management authority Irish Dairy Board, the former na
https://en.wikipedia.org/wiki/Peter%20Naur
Peter Naur (25 October 1928 – 3 January 2016) was a Danish computer science pioneer and Turing award winner. He is best remembered as a contributor, with John Backus, to the Backus–Naur form (BNF) notation used in describing the syntax for most programming languages. He also contributed to creating the language ALGOL 60. Biography Naur began his career as an astronomer, for which he received his Doctor of Philosophy (Ph.D.) degree in 1957, but his encounter with computers led to a change of profession. From 1959 to 1969, he was employed at Regnecentralen, the Danish computing company, while at the same time giving lectures at the Niels Bohr Institute and the Technical University of Denmark. From 1969 to 1998, Naur was a professor of computer science at the University of Copenhagen. He was a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, supports, and maintains the languages ALGOL 60 and ALGOL 68. Between the years 1960 and 1993 he was a member of the editorial board for BIT Numerical Mathematics, a journal focused on numerical analysis. Naur's main areas of inquiry were the design, structure, and performance of computer programs and algorithms. He was also a pioneer in software engineering and software architecture. In his book Computing: A Human Activity (1992), which is a collection of his contributions to computer science, he rejected the formalist school of programming that views programming as a branch of mathematics. He did not like being associated with the Backus–Naur form (attributed to him by Donald Knuth) and said that he would prefer it to be called the Backus normal form. Naur was married to computer scientist Christiane Floyd. Naur disliked the term computer science and suggested it be called datalogy or data science. 
The former term has been adopted in Denmark and Sweden as datalogi, while the latter term is now used for data analysis, including statistics and databases. Since the mid-1960s, computer science has been practiced in Denmark under Peter Naur's term datalogy, the science of data processes. Starting at Regnecentralen and the University of Copenhagen, the Copenhagen Tradition of Computer Science has developed its own special characteristics by means of a close connection with applications and other fields of knowledge. The tradition is not least visible in the area of education. Comprehensive project activity is an integral part of the curriculum, thus presenting theory as an aspect of realistic solutions known primarily through actual experience. Peter Naur recognized early the particular educational challenges presented by computer science. His innovations have shown their quality and vitality at other universities as well. There is a close connection between computer science training as it has been formed at Copenhagen University, and the view of computer science which characterized Peter Naur's research. In later years, h
https://en.wikipedia.org/wiki/Linear-feedback%20shift%20register
In computing, a linear-feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state. The most commonly used linear function of single bits is exclusive-or (XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value. The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle. Applications of LFSRs include generating pseudo-random numbers, pseudo-noise sequences, fast digital counters, and whitening sequences. Both hardware and software implementations of LFSRs are common. The mathematics of a cyclic redundancy check, used to provide a quick check against transmission errors, are closely related to those of an LFSR. In general, the arithmetic behind LFSRs makes them very elegant objects to study and implement. One can produce relatively complex logic with simple building blocks. However, other methods that are less elegant but perform better should be considered as well. Fibonacci LFSRs The bit positions that affect the next state are called the taps. In the diagram the taps are [16,14,13,11]. The rightmost bit of the LFSR is called the output bit, which is always also a tap. The taps are XOR'd sequentially and then fed back into the leftmost bit. The sequence of bits in the rightmost position is called the output stream. 
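The tap arrangement [16, 14, 13, 11] can be written out as a short simulation; the shift amounts follow the common convention of counting taps from 1 at the feedback end, with bit 16 as the low-order output bit (the seed value below is arbitrary). This tap set corresponds to the feedback polynomial x¹⁶ + x¹⁴ + x¹³ + x¹¹ + 1, which is maximal-length: the register visits every nonzero 16-bit state before repeating.

```python
def lfsr_step(state):
    """One step of the 16-bit Fibonacci LFSR with taps [16, 14, 13, 11]."""
    # XOR the tap bits, then feed the result back into the leftmost bit.
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

# Count how many steps it takes to return to the starting state.
start = state = 0xACE1   # any nonzero seed works
period = 0
while True:
    state = lfsr_step(state)
    period += 1
    if state == start:
        break
# period == 2**16 - 1 == 65535: every nonzero state is visited once
```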
A maximum-length LFSR produces an m-sequence (i.e., it cycles through all possible 2ᵐ − 1 states within the shift register except the state where all bits are zero), unless it contains all zeros, in which case it will never change. As an alternative to the XOR-based feedback in an LFSR, one can also use XNOR. This function is an affine map, not strictly a linear map, but it results in an equivalent polynomial counter whose state is the complement of the state of an LFSR. A state with all ones is illegal when using an XNOR feedback, in the same way as a state with all zeroes is illegal when using XOR. This state is considered illegal because the counter would remain "locked-up" in this state. This method can be advantageous in hardware LFSRs using flip-flops that start in a zero state, as it does not start in a lockup state, meaning that the register does not need to be seeded in order to begin operation. The sequence of numbers generated by an LFSR or its XNOR counterpart can be considered a binary numeral system just as valid as Gray code or the natural binary code. The arrangement of taps for feedback in an LFSR can be
https://en.wikipedia.org/wiki/Packet%20analyzer
A packet analyzer, also known as a packet sniffer, protocol analyzer, or network analyzer, is a computer program or computer hardware such as a packet capture appliance that can analyze and log traffic that passes over a computer network or part of a network. Packet capture is the process of intercepting and logging traffic. As data streams flow across the network, the analyzer captures each packet and, if needed, decodes the packet's raw data, showing the values of various fields in the packet, and analyzes its content according to the appropriate RFC or other specifications. A packet analyzer used for intercepting traffic on wireless networks is known as a wireless analyzer or WiFi analyzer. While a packet analyzer can also be referred to as a network analyzer or protocol analyzer, these terms can also have other meanings. Protocol analyzer can technically be a broader, more general class that includes packet analyzers/sniffers; however, the terms are frequently used interchangeably. Capabilities On wired shared-medium networks, such as Ethernet, Token Ring, and FDDI, depending on the network structure (hub or switch), it may be possible to capture all traffic on the network from a single machine. On modern networks, traffic can be captured using a network switch with port mirroring, which mirrors all packets that pass through designated ports of the switch to another port, if the switch supports port mirroring. A network tap is an even more reliable solution than using a monitoring port, since taps are less likely to drop packets during high traffic load. On wireless LANs, traffic can be captured on one channel at a time, or by using multiple adapters, on several channels simultaneously. On wired broadcast and wireless LANs, to capture unicast traffic between other machines, the network adapter capturing the traffic must be in promiscuous mode. 
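The decoding step described above can be sketched for the simplest case: pulling the fields out of a captured frame's 14-byte Ethernet II header. The frame bytes here are fabricated for illustration; a real analyzer would obtain them from a capture interface or a pcap file.

```python
import struct

def decode_ethernet(frame):
    """Decode the 14-byte Ethernet II header of a captured frame."""
    # Destination MAC (6 bytes), source MAC (6 bytes), EtherType (2 bytes).
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return mac(dst), mac(src), ethertype

# A made-up broadcast frame carrying an IPv4 payload (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
fields = decode_ethernet(frame)
```

Deeper layers (IP, TCP, application protocols) are decoded the same way, each according to its own specification, which is what the "appropriate RFC" reference above means in practice.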
On wireless LANs, even if the adapter is in promiscuous mode, packets not for the service set the adapter is configured for are usually ignored. To see those packets, the adapter must be in monitor mode. No special provisions are required to capture multicast traffic to a multicast group the packet analyzer is already monitoring, or broadcast traffic. When traffic is captured, either the entire contents of packets or just the headers are recorded. Recording just headers reduces storage requirements, and avoids some privacy legal issues, yet often provides sufficient information to diagnose problems. Captured information is decoded from raw digital form into a human-readable format that lets engineers review exchanged information. Protocol analyzers vary in their abilities to display and analyze data. Some protocol analyzers can also generate traffic. These can act as protocol testers. Such testers generate protocol-correct traffic for functional testing, and may also have the ability to deliberately introduce errors to test the device under test's ability to handle errors. Protocol analyzers can a
https://en.wikipedia.org/wiki/Asterisk
The asterisk (*), from Late Latin asteriscus, from Ancient Greek ἀστερίσκος (asteriskos), "little star", is a typographical symbol. It is so called because it resembles a conventional image of a heraldic star. Computer scientists and mathematicians often vocalize it as star (as, for example, in the A* search algorithm or C*-algebra). An asterisk is usually five- or six-pointed in print and six- or eight-pointed when handwritten, though more complex forms exist. Its most common use is to call out a footnote. It is also often used to censor offensive words. In computer science, the asterisk is commonly used as a wildcard character, or to denote pointers, repetition, or multiplication. History The asterisk was already in use as a symbol in ice age cave paintings. There is also a two-thousand-year-old character used by Aristarchus of Samothrace called the asteriskos, ※, which he used when proofreading Homeric poetry to mark lines that were duplicated. Origen is known to have also used the asteriskos to mark missing Hebrew lines from his Hexapla. The asterisk evolved in shape over time, but its meaning as a symbol used to correct defects remained. In the Middle Ages, the asterisk was used to emphasize a particular part of text, often linking those parts of the text to a marginal comment. However, an asterisk was not always used. One hypothesis for the origin of the asterisk is that it stems from the 5000-year-old Sumerian character dingir, 𒀭, though this hypothesis seems to be based only on visual appearance. Usage Censorship When toning down expletives, asterisks are often used to replace letters. For example, the word "badword" might become "ba***rd", "b*****d", "b******" or even "*******". Vowels tend to be censored with an asterisk more than consonants, but the intelligibility of censored profanities with multiple syllables such as "b*dw*rd" and "b*****d" or "ba****d", or uncommon ones, is higher if put in context with surrounding text. 
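The wildcard use mentioned above is easy to demonstrate with Python's shell-style pattern matching, where `*` matches any run of characters (the filenames here are invented):

```python
from fnmatch import fnmatch

# "*" in the pattern matches any sequence of characters.
names = ["report_2023.txt", "report_2024.txt", "notes.md"]
matches = [name for name in names if fnmatch(name, "report_*.txt")]
```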
When a document containing classified information is published, the document may be "sanitized" (redacted) by replacing the classified information with asterisks. For example, the Intelligence and Security Committee Russia report. Competitive sports and games In colloquial usage, an asterisk attached to a sporting record indicates that it is somehow tainted. This is because results that have been considered dubious or set aside are recorded in the record books with an asterisk referring to a footnote explaining the reason or reasons for concern. Baseball The usage of the term in sports arose during the 1961 baseball season, in which Roger Maris of the New York Yankees was threatening to break Babe Ruth's 34-year-old single-season home run record. Ruth had amassed 60 home runs in a season of only 154 games, but Maris was playing in the first year of the American League's newly expanded 162-game season. Baseball Commissioner Ford Frick, a friend of Ruth's during the legendary slugger's lifetime, held a press conference to announce his "
https://en.wikipedia.org/wiki/Object%E2%80%93relational%20mapping
Object–relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between a relational database and the heap of an object-oriented programming language. This creates, in effect, a virtual object database that can be used from within the programming language. In object-oriented programming, data-management tasks act on objects that combine scalar values into objects. For example, consider an address book entry that represents a single person along with zero or more phone numbers and zero or more addresses. This could be modeled in an object-oriented implementation by a "Person object" with an attribute/field to hold each data item that the entry comprises: the person's name, a list of phone numbers, and a list of addresses. The list of phone numbers would itself contain "PhoneNumber objects" and so on. Each such address-book entry is treated as a single object by the programming language (it can be referenced by a single variable containing a pointer to the object, for instance). Various methods can be associated with the object, such as methods to return the preferred phone number, the home address, and so on. By contrast, relational databases, queried with languages such as SQL, group scalars into tuples, which are then enumerated in tables. Tuples and objects have some general similarity, in that they are both ways to collect values into named fields such that the whole collection can be manipulated as a single compound entity. They have many differences, though, in particular: lifecycle management (row insertion and deletion, versus garbage collection or reference counting), references to other entities (object references, versus foreign key references), and inheritance (non-existent in relational databases). In addition, objects are managed on-heap and are under the full control of a single process, while database tuples are shared and must incorporate locking, merging, and retry. 
Object–relational mapping provides automated support for mapping tuples to objects and back, while accounting for all of these differences. The heart of the problem involves translating the logical representation of the objects into an atomized form that is capable of being stored in the database while preserving the properties of the objects and their relationships so that they can be reloaded as objects when needed. If this storage and retrieval functionality is implemented, the objects are said to be persistent. Overview Implementation-specific details of storage drivers are generally wrapped in an API in the programming language in use, exposing methods to interact with the storage medium in a way which is simpler and more in line with the paradigms of surrounding code. The following is a simple example, written in C# code, to execute a query written in SQL using a database engine. var sql = "SELECT id, first_name, last_name, phone, birth_date, sex, age FROM persons WHERE id = 10"; var result = context.Persons.FromSqlRaw(sql).
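For comparison with the raw-SQL C# snippet, the tuple-to-object mapping an ORM automates can be sketched in Python using the standard library's sqlite3 module. The Person model, table, and column names here are illustrative assumptions, not a real ORM:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:          # illustrative object-side model
    id: int
    first_name: str
    last_name: str

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE persons (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)")
db.execute("INSERT INTO persons VALUES (10, 'Ada', 'Lovelace')")

# The mapping step an ORM automates: tuple -> object ...
row = db.execute("SELECT id, first_name, last_name FROM persons WHERE id = ?", (10,)).fetchone()
person = Person(*row)

# ... and object -> tuple, for persisting changes back.
person.last_name = "Byron"
db.execute("UPDATE persons SET last_name = ? WHERE id = ?", (person.last_name, person.id))
print(person)
```

A full ORM generates both directions of this mapping from the model declaration, tracks which objects have changed, and decides when to issue the corresponding SQL — the hand-written SELECT and UPDATE above are exactly the boilerplate it removes.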
https://en.wikipedia.org/wiki/NSG
NSG may stand for: National Security Guard, a federal counterterrorism force in India National Street Gazetteer, a database of all streets in England and Wales Naturschutzgebiet, a nature protection category in Germany Nebraska State Guard, active during World War II and the Vietnam War New Southgate railway station, London, National Rail station code Nippon Sheet Glass, a Japanese glass manufacturer NSG mouse, an immunodeficient laboratory mouse strain Nordic Support Group, a multinational peacekeeping force North Sea Gas, natural gas from the North Sea oil field Northampton School for Girls, a girls secondary school in Northampton, UK NetWare Systems Group, a former division of Novell Nuclear Suppliers Group, an international body regulating export of technology related to nuclear weapons NSG (group), a British musical collective
https://en.wikipedia.org/wiki/RealAudio
RealAudio, also spelled Real Audio, is a proprietary audio format developed by RealNetworks and first released in April 1995. It uses a variety of audio codecs, ranging from low-bitrate formats that can be used over dialup modems, to high-fidelity formats for music. It can also be used as a streaming audio format, played at the same time as it is downloaded. In the past, many internet radio stations used RealAudio to stream their programming over the internet in real time. In recent years, however, the format has become less common and has given way to more popular audio formats. RealAudio was used heavily by the BBC websites until 2009, when it was discontinued due to its declining use. BBC World Service, the last of the BBC websites to use RealAudio, discontinued its use in March 2011. File extensions RealAudio files were originally identified by a filename extension of .ra (for Real Audio). In 1997, RealNetworks also began offering a video format called RealVideo. The combination of the audio and video formats was called RealMedia and used the file extension .rm. However, the latest version of RealProducer, Real's flagship encoder, reverted to using .ra for audio-only files, and began using .rv for video files (with or without audio), and .rmvb for VBR video files. The .ram (Real Audio Metadata) and .smil (Synchronized Multimedia Integration Language) file formats are sometimes encountered as links from web pages (see the Streaming Audio section below). Players The official player for RealMedia content is RealNetworks' RealPlayer SP, currently at version 16, available for various platforms in binary form. Several features of this program have proven controversial (most recently, RP11's ability to record unprotected streaming media from web sites), and many alternative players have been developed. RealNetworks initially tried to discourage development of alternative players by keeping their audio format secret. 
However, in recent years, RealNetworks has made efforts to be somewhat more open, and has founded the Helix Community, a collaborative open source project, to extend their media framework. When RealAudio was introduced, RealNetworks disclosed no technical details about the audio format or how it was encoded, but it was soon noticed that some of the audio codecs used in RealAudio were identical to those used in cellular telephones and digital television. As these formats had been described in detail in various technical papers and standards documents, it was possible to write software capable of playing RealAudio based on this information. A variety of unofficial players now exist, including MPlayer, and Real Alternative. However, Real Alternative does not decode the audio data by itself, but relies on the dynamically linked libraries (DLLs) from the official RealPlayer. Thus Real Alternative requires RealPlayer to be installed (or at least its DLLs) in order to function. Most other players are based on FFmpeg, whi
https://en.wikipedia.org/wiki/Exception%20handling
In computing and computer programming, exception handling is the process of responding to the occurrence of exceptions – anomalous or exceptional conditions requiring special processing – during the execution of a program. In general, an exception breaks the normal flow of execution and executes a pre-registered exception handler; the details of how this is done depend on whether it is a hardware or software exception and how the software exception is implemented. Exception handling, if provided, is facilitated by specialized programming language constructs, hardware mechanisms like interrupts, or operating system (OS) inter-process communication (IPC) facilities like signals. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted. Definition The definition of an exception is based on the observation that each procedure has a precondition, a set of circumstances for which it will terminate "normally". An exception handling mechanism allows the procedure to raise an exception if this precondition is violated, for example if the procedure has been called on an abnormal set of arguments. The exception handling mechanism then handles the exception. The precondition, and the definition of exception, is subjective. The set of "normal" circumstances is defined entirely by the programmer, e.g. the programmer may deem division by zero to be undefined, hence an exception, or devise some behavior such as returning zero or a special "ZERO DIVIDE" value (circumventing the need for exceptions). Common exceptions include an invalid argument (e.g. value is outside of the domain of a function), an unavailable resource (like a missing file, a hard disk error, or out-of-memory errors), or that the routine has detected a normal condition that requires special handling, e.g., attention, end of file. 
Exception handling solves the semipredicate problem, in that the mechanism distinguishes normal return values from erroneous ones. In languages without built-in exception handling such as C, routines would need to signal the error in some other way, such as the common return code and errno pattern. Taking a broad view, errors can be considered to be a proper subset of exceptions, and explicit error mechanisms such as errno can be considered (verbose) forms of exception handling. The term "exception" is preferred to "error" because it does not imply that anything is wrong - a condition viewed as an error by one procedure or programmer may not be viewed that way by another. Even the term "exception" may be misleading because its typical connotation of "outlier" indicates that something infrequent or unusual has occurred, when in fact raising the exception may be a normal and usual situation in the program. For example, suppose a lookup function for an associative array throws an exception if the key has no value associated. Depending on context, this "key absent" exception may occur much more of
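The contrast with the return-code pattern can be sketched for the associative-array lookup just mentioned. This is a minimal Python illustration; the function names and the -1 sentinel are assumptions made for the example:

```python
def lookup_errno(table: dict, key):
    """C-style: a sentinel return value, indistinguishable from real data."""
    return table.get(key, -1)   # ambiguous whenever -1 is itself a legal value

def lookup_throwing(table: dict, key):
    """Exception style: errors travel on a separate channel from values."""
    try:
        return table[key]
    except KeyError:
        raise KeyError(f"key absent: {key!r}") from None

table = {"temperature": -1}
print(lookup_errno(table, "temperature"))     # -1: found, or missing? Can't tell.
print(lookup_throwing(table, "temperature"))  # -1, unambiguously a stored value
```

This is the semipredicate problem in miniature: the exception-raising version never confuses a stored -1 with "key absent", because the error path does not share the return channel.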
https://en.wikipedia.org/wiki/Bracket
A bracket, as used in British English, is either of two tall fore- or back-facing punctuation marks commonly used to isolate a segment of text or data from its surroundings. Typically deployed in symmetric pairs, an individual bracket may be identified as a 'left' or 'right' bracket or, alternatively, an "opening bracket" or "closing bracket", respectively, depending on the directionality of the context. There are four primary types of brackets. In British usage they are known as round brackets (or simply brackets), square brackets, curly brackets, and angle brackets; in American usage they are respectively known as parentheses, brackets, braces, and chevrons. There are also various less common symbols considered brackets. Various forms of brackets are used in mathematics, with specific mathematical meanings, often for denoting specific mathematical functions and subformulas. History Angle brackets or chevrons ⟨ ⟩ were the earliest type of bracket to appear in written English. Erasmus coined the term lunula to refer to the round brackets or parentheses ( ), recalling the shape of the crescent moon (Latin: luna). Most typewriters only had the left and right parentheses. Square brackets appeared with some teleprinters. Braces (curly brackets) first became part of a character set with the 8-bit code of the IBM 7030 Stretch. In 1961, ASCII contained parentheses and square and curly brackets, and also less-than and greater-than signs that could be used as angle brackets. Typography In English, typographers mostly prefer not to set brackets in italics, even when the enclosed text is italic. However, in other languages like German, if brackets enclose text in italics, they are usually also set in italics. 
Parentheses or (round) brackets ( and ) are called parentheses (singular parenthesis) in American English, and "brackets" informally in the UK, India, Ireland, Canada, the West Indies, New Zealand, South Africa, and Australia; they are also known as "round brackets", "parens", "circle brackets", or "smooth brackets". In careful or formal writing, "parentheses" is also used in British English. Uses of ( ) Parentheses contain adjunctive material that serves to clarify (in the manner of a gloss) or is aside from the main point. A comma before or after the material can also be used, though if the sentence contains commas for other purposes, visual confusion may result. A dash before and after the material is also sometimes used. Parentheses may be used in formal writing to add supplementary information, such as "Senator John McCain (R - Arizona) spoke at length". They can also indicate shorthand for "either singular or plural" for nouns, e.g. "the claim(s)". They can also be used for gender-neutral language, especially in languages with grammatical gender, e.g. "(s)he agreed with his/her physician" (the slash in the second instance standing for "or"). Parenthetical phrases have been used extensively in informal writing and stream o
https://en.wikipedia.org/wiki/DLL%20Hell
In computing, DLL Hell is a term for the complications that arise when one works with dynamic-link libraries (DLLs) used with Microsoft Windows operating systems, particularly legacy 16-bit editions, which all run in a single memory space. DLL Hell can manifest itself in many different ways wherein applications neither launch nor work correctly. DLL Hell is the Windows ecosystem-specific form of the general concept dependency hell. Problems DLLs are Microsoft's implementation of shared libraries. Shared libraries allow common code to be bundled into a wrapper, the DLL, which is used by any application software on the system without loading multiple copies into memory. A simple example might be a GUI text-editing control, which is widely used by many programs. By placing this code in a DLL, all the applications on the system can use it without using more memory. This contrasts with static libraries, which are functionally similar but copy the code directly into the application. In this case, every application grows by the size of all the libraries it uses, and this can be quite large for modern programs. The problem arises when the version of the DLL on the computer is different from the version that was used when the program was being created. DLLs have no built-in mechanism for backward compatibility, and even minor changes to the DLL can render its internal structure so different from previous versions that attempting to use them will generally cause the application to crash. Static libraries avoid this problem because the version that was used to build the application is included inside it, so even if a newer version exists elsewhere on the system, this does not affect the application. A key reason for the version incompatibility is the structure of the DLL file. The file contains a directory of the individual methods (procedures, routines, etc.) contained within the DLL and the types of data they take and return. 
Even minor changes to the DLL code can cause this directory to be re-arranged, in which case an application that calls a particular method believing it to be the 4th item in the directory might end up calling an entirely different and incompatible routine, which would normally cause the application to crash. There are several problems commonly encountered with DLLs, especially after numerous applications have been installed and uninstalled on a system. The difficulties include conflicts between DLL versions, difficulty in obtaining required DLLs, and having many unnecessary DLL copies. Solutions to these problems were known even while Microsoft was writing the DLL system. These have been incorporated into the .NET replacement, "Assemblies". Incompatible versions A particular version of a library can be compatible with some programs that use it and incompatible with others. Windows has been particularly vulnerable to this because of its emphasis on dynamic linking of C++ libraries and Object Linking and Embedding (OLE) objects. C++
https://en.wikipedia.org/wiki/ElGamal%20encryption
In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. The Digital Signature Algorithm (DSA) is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption. ElGamal encryption can be defined over any cyclic group G, like the multiplicative group of integers modulo n. Its security depends upon the difficulty of a certain problem in G related to computing discrete logarithms. The algorithm The algorithm can be described as first performing a Diffie–Hellman key exchange to establish a shared secret s, then using this as a one-time pad for encrypting the message. ElGamal encryption is performed in three phases: the key generation, the encryption, and the decryption. The first is purely key exchange, whereas the latter two mix key exchange computations with message computations. Key generation The first party, Alice, generates a key pair as follows: Generate an efficient description of a cyclic group G of order q with generator g. Let e represent the identity element of G. It is not necessary to come up with a group and generator anew for each new key. Indeed, one may expect a specific implementation of ElGamal to be hardcoded to use a specific group, or a group from a specific suite. The choice of group is mostly a question of how large the keys should be. Choose an integer x randomly from {1, ..., q-1}. Compute h := g^x. The public key consists of the values (G, q, g, h). Alice publishes this public key and retains x as her private key, which must be kept secret. Encryption A second party, Bob, encrypts a message M to Alice under her public key (G, q, g, h) as follows: Map the message M to an element m of G using a reversible mapping function. Choose an integer y randomly from {1, ..., q-1}. Compute s := h^y. This is called the shared secret. Compute c1 := g^y. Compute c2 := m*s. 
Bob sends the ciphertext (c1, c2) to Alice. Note that if one knows both the ciphertext (c1, c2) and the plaintext m, one can easily find the shared secret s, since s = c2*m^(-1). Therefore, a new y and hence a new s is generated for every message to improve security. For this reason, y is also called an ephemeral key. Decryption Alice decrypts a ciphertext (c1, c2) with her private key x as follows: Compute s := c1^x. Since c1 = g^y, c1^x = g^(xy) = h^y, and thus it is the same shared secret that was used by Bob in encryption. Compute s^(-1), the inverse of s in the group G. This can be computed in one of several ways. If G is a subgroup of a multiplicative group of integers modulo n, where n is prime, the modular multiplicative inverse can be computed using the extended Euclidean algorithm. An alternative is to compute s^(-1) as c1^(q-x). This is the inverse of s because of Lagrange's theorem, since s*c1^(q-x) = g^(xy)*g^((q-x)y) = (g^q)^y = e^y = e. Compute m := c2*s^(-1). This calculation produces the original message m, because c2 = m*s; hence c2*s^(-1) = (m*s)*s^(-1) = m. Map m back to the plaintext message M. Practical use Like most public key
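The three phases can be sketched in Python with toy parameters. The small prime p = 467 and base g = 2 are assumptions made purely for illustration; real deployments use large, carefully chosen groups:

```python
import random

# Toy ElGamal over the multiplicative group mod a small prime (illustrative only).
p = 467                      # small prime; the group is Z_p^*
q, g = p - 1, 2              # group order and an illustrative base

def keygen():
    x = random.randrange(1, q)          # private key x
    return x, pow(g, x, p)              # (x, h = g^x mod p)

def encrypt(h, m):
    y = random.randrange(1, q)          # ephemeral key y
    s = pow(h, y, p)                    # shared secret s = h^y
    return pow(g, y, p), (m * s) % p    # ciphertext (c1, c2)

def decrypt(x, c1, c2):
    s = pow(c1, x, p)                   # same secret: c1^x = g^(xy)
    return (c2 * pow(s, -1, p)) % p     # m = c2 * s^(-1)

x, h = keygen()
c1, c2 = encrypt(h, 42)
print(decrypt(x, c1, c2))  # -> 42
```

Note that decryption recovers the message for any fresh ephemeral y, mirroring the cancellation c2 * s^(-1) = m * g^(xy) * g^(-xy) = m shown above (this sketch requires Python 3.8+ for the modular inverse via pow(s, -1, p)).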
https://en.wikipedia.org/wiki/Digital%20Signature%20Algorithm
The Digital Signature Algorithm (DSA) is a public-key cryptosystem and Federal Information Processing Standard for digital signatures, based on the mathematical concept of modular exponentiation and the discrete logarithm problem. DSA is a variant of the Schnorr and ElGamal signature schemes. The National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS) in 1991, and adopted it as FIPS 186 in 1994. Five revisions to the initial specification have been released. The newest specification is FIPS 186-5, from February 2023. DSA is patented but NIST has made this patent available worldwide royalty-free. Specification FIPS 186-5 indicates DSA will no longer be approved for digital signature generation, but may be used to verify signatures generated prior to the implementation date of that standard. Overview The DSA works in the framework of public-key cryptosystems and is based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem, which is considered to be computationally intractable. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer's corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message). History In 1982, the U.S. government solicited proposals for a public key signature standard. In August 1991 the National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS). 
Initially there was significant criticism, especially from software companies that had already invested effort in developing digital signature software based on the RSA cryptosystem. Nevertheless, NIST adopted DSA as a Federal standard (FIPS 186) in 1994. Five revisions to the initial specification have been released: FIPS 186-1 in 1998, FIPS 186-2 in 2000, FIPS 186-3 in 2009, FIPS 186-4 in 2013, and FIPS 186-5 in 2023. Standard FIPS 186-5 forbids signing with DSA, while allowing verification of signatures generated prior to the implementation date of the standard. It is to be replaced by newer signature schemes such as EdDSA. DSA is covered by a patent filed July 26, 1991, now expired, and attributed to David W. Kravitz, a former NSA employee. This patent was given to "The United States of America as represented by the Secretary of Commerce, Washington, D.C.", and NIST has made this patent available worldwide royalty-free. Claus P. Schnorr claims that his patent (also now expired) covered DSA; this claim is disputed. In 1993, Dave Banisar managed to get confirmation, via a FOIA request, that the DSA algorithm hasn
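The sign-with-private-key, verify-with-public-key flow described in the overview can be illustrated with a toy implementation. The tiny primes and the stand-in "hash" below are assumptions made for the sketch; real DSA uses large primes and a cryptographic hash such as SHA-256:

```python
import random

# Toy DSA parameters (illustrative only).
q = 101                                  # small prime
p = 607                                  # prime with q | p - 1  (607 - 1 = 6 * 101)
g = next(a for a in range(2, p) if pow(a, (p - 1) // q, p) != 1)
g = pow(g, (p - 1) // q, p)              # generator of the order-q subgroup

def H(message: str) -> int:              # stand-in "hash" for the sketch
    return sum(message.encode()) % q

def keygen():
    x = random.randrange(1, q)           # private key
    return x, pow(g, x, p)               # (x, y = g^x mod p)

def sign(x, message):
    while True:
        k = random.randrange(1, q)       # per-message secret
        r = pow(g, k, p) % q
        s = pow(k, -1, q) * (H(message) + x * r) % q
        if r and s:
            return r, s

def verify(y, message, r, s):
    w = pow(s, -1, q)
    u1, u2 = H(message) * w % q, r * w % q
    return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r

x, y = keygen()
r, s = sign(x, "hello")
print(verify(y, "hello", r, s))   # -> True
```

A tampered message fails verification with overwhelming probability (with parameters this tiny there is a small false-accept chance, which large real-world parameters make negligible).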
https://en.wikipedia.org/wiki/Next-Generation%20Secure%20Computing%20Base
The Next-Generation Secure Computing Base (NGSCB; codenamed Palladium and also known as Trusted Windows) is a software architecture designed by Microsoft which claimed to provide users of the Windows operating system with better privacy, security, and system integrity. NGSCB was the result of years of research and development within Microsoft to create a secure computing solution that equaled the security of closed platforms such as set-top boxes while simultaneously preserving the backward compatibility, flexibility, and openness of the Windows operating system. Microsoft's primary stated objective with NGSCB was to "protect software from software." Part of the Trustworthy Computing initiative when unveiled in 2002, NGSCB was to be integrated with Windows Vista, then known as "Longhorn." NGSCB relied on hardware designed by the Trusted Computing Group to produce a parallel operation environment hosted by a new hypervisor (referred to as a sort of kernel in documentation) called the "Nexus" that existed alongside Windows and provided new applications with features such as hardware-based process isolation, data encryption based on integrity measurements, authentication of a local or remote machine or software configuration, and encrypted paths for user authentication and graphics output. NGSCB would facilitate the creation and distribution of digital rights management (DRM) policies pertaining to the use of information. NGSCB was subject to much controversy during its development, with critics contending that it would impose restrictions on users, enforce vendor lock-in, and undermine fair use rights and open-source software. It was first demonstrated by Microsoft at WinHEC 2003 before undergoing a revision in 2004 that would enable earlier applications to benefit from its functionality. 
Reports indicated in 2005 that Microsoft would change its plans with NGSCB so that it could ship Windows Vista by its self-imposed deadline year, 2006; instead, Microsoft would ship only part of the architecture, BitLocker, which can optionally use the Trusted Platform Module to validate the integrity of boot and system files prior to operating system startup. Development of NGSCB spanned approximately a decade before its cancellation, the lengthiest development period of a major feature intended for Windows Vista. NGSCB differed from technologies Microsoft billed as "pillars of Windows Vista"—Windows Presentation Foundation, Windows Communication Foundation, and WinFS—during its development in that it was not built with the .NET Framework and did not focus on managed code software development. NGSCB has yet to fully materialize; however, aspects of it are available in features such as BitLocker of Windows Vista, Measured Boot of Windows 8, Certificate Attestation of Windows 8.1, and Device Guard of Windows 10. History Early development Development of NGSCB began in 1997 after Peter Biddle conceived of new ways to protect content on personal computers. Biddle
https://en.wikipedia.org/wiki/Dial-up%20Internet%20access
Dial-up Internet access is a form of Internet access that uses the facilities of the public switched telephone network (PSTN) to establish a connection to an Internet service provider (ISP) by dialing a telephone number on a conventional telephone line. Dial-up connections use modems to decode audio signals into data to send to a router or computer, and to encode signals from the latter two devices to send to another modem at the ISP. History In 1979, Tom Truscott and Jim Ellis, graduates of Duke University, created an early predecessor to dial-up Internet access called the Usenet. The Usenet was a UNIX-based system that used a dial-up connection to transfer data through telephone modems. Dial-up Internet has been around since the 1980s via public providers such as NSFNET-linked universities. The BBC established Internet access via Brunel University in the United Kingdom in 1989. Dial-up was first offered commercially in 1992 by Pipex in the United Kingdom and Sprint in the United States. After the introduction of commercial broadband in the late 1990s, dial-up Internet access became less popular by the mid-2000s. It is still used where other forms are not available or where the cost is too high, as in some rural or remote areas. For example, the U.S. states of Maine and Georgia have a much higher percentage of people using dial-up compared with the rest of the United States. Modems Because there was no technology at the time to allow different carrier signals on a telephone line, dial-up internet access relied on audio communication. A modem would take the digital data from a computer, modulate it into an audio signal and send it to a receiving modem. This receiving modem would demodulate the analogue signal back into digital data for the computer to process. The simplicity of this arrangement meant that people were unable to use their phone line for voice calls until the internet call was finished. 
The Internet speed using this technology can drop to 21.6 kbit/s or less. Poor condition of the telephone line, high noise level and other factors all affect dial-up speed. For this reason, it is popularly called the 21600 Syndrome. Availability Dial-up connections to the Internet require no additional infrastructure other than the telephone network and the modems and servers needed to make and answer the calls. Because telephone access is widely available, dial-up is often the only choice available for rural or remote areas, where broadband installations are not prevalent due to low population density and high infrastructure cost. A 2008 Pew Research Center study stated that only 10% of US adults still used dial-up Internet access. The study found that the most common reason for retaining dial-up access was high broadband prices. Users cited lack of infrastructure as a reason less often than stating that they would never upgrade to broadband. That number had fallen to 6% by 2010, and to 3% by 2013. A survey
https://en.wikipedia.org/wiki/IBM%20System/370
The IBM System/370 (S/370) is a model range of IBM mainframe computers announced on June 30, 1970, as the successors to the System/360 family. The series mostly maintains backward compatibility with the S/360, allowing an easy migration path for customers; this, plus improved performance, were the dominant themes of the product announcement. In September 1990, the System/370 line was replaced with the System/390. Evolution The original System/370 line was announced on June 30, 1970, with first customer shipment of the Models 155 and 165 planned for February 1971 and April 1971 respectively. The 155 first shipped in January 1971. System/370 underwent several architectural improvements during its roughly 20-year lifetime. The following features mentioned in Principles of Operation are either optional on S/360 but standard on S/370, introduced with S/370 or added to S/370 after announcement: Branch and Save; Channel Indirect Data Addressing; Channel-Set Switching; Clear I/O; Command Retry; Commercial Instruction Set; Conditional Swapping; CPU Timer and Clock Comparator; Dual-Address Space (DAS); Extended-Precision Floating Point; Extended Real Addressing; External Signals; Fast Release; Floating Point; Halt Device; I/O Extended Logout; Limited Channel Logout; Move Inverse; Multiprocessing; PSW-Key Handling; Recovery Extensions; Segment Protection; Service Signal; Start-I/O-Fast Queuing (SIOF); Storage-Key-Instruction Extensions; Storage-Key 4K-Byte Block; Suspend and Resume; Test Block; Translation; Vector; 31-Bit IDAWs. Initial models The first System/370 machines, the Model 155 and the Model 165, incorporated only a small number of changes to the System/360 architecture. These changes included: 13 new instructions, among which were MOVE LONG (MVCL) and COMPARE LOGICAL LONG (CLCL), thereby permitting operations on up to 2^24-1 bytes (16 MB), vs. 
the 256-byte limits on the 360's MVC and CLC; SHIFT AND ROUND DECIMAL (SRP), which multiplied or divided a packed decimal value by a power of 10, rounding the result when dividing; optional 128-bit (hexadecimal) floating-point arithmetic, introduced in the System/360 Model 85; a new higher-resolution time-of-day clock; and support for the block multiplexer channel introduced in the System/360 Model 85. All of the emulator features were designed to run under the control of the standard operating systems. IBM documented the S/370 emulator programs as integrated emulators. These models had core memory and did not include support for virtual storage. Logic technology All models of the System/370 used IBM's form of monolithic integrated circuits called MST (Monolithic System Technology), making them third generation computers. MST provided System/370 with four to eight times the circuit density and over ten times the reliability when compared to the previous second generation SLT technology of the System/360. Monolithic memory On September 23, 1970, IBM announced the Model 145, a third model of the System/370, which was the first mode
https://en.wikipedia.org/wiki/Secure%20cryptoprocessor
A secure cryptoprocessor is a dedicated computer-on-a-chip or microprocessor for carrying out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance. Unlike cryptographic processors that output decrypted data onto a bus in a secure environment, a secure cryptoprocessor does not output decrypted data or decrypted program instructions in an environment where security cannot always be maintained. The purpose of a secure cryptoprocessor is to act as the keystone of a security subsystem, eliminating the need to protect the rest of the subsystem with physical security measures.

Examples

A hardware security module (HSM) contains one or more secure cryptoprocessor chips. These devices are high-grade secure cryptoprocessors used with enterprise servers. A hardware security module can have multiple levels of physical security, with a single-chip cryptoprocessor as its most secure component. The cryptoprocessor does not reveal keys or executable instructions on a bus, except in encrypted form, and zeroizes keys upon attempts at probing or scanning. The crypto chip(s) may also be potted in the hardware security module with other processors and memory chips that store and process encrypted data. Any attempt to remove the potting will cause the keys in the crypto chip to be zeroed. A hardware security module may also be part of a computer (for example an ATM) that operates inside a locked safe to deter theft, substitution, and tampering. Modern smartcards are probably the most widely deployed form of secure cryptoprocessor, although more complex and versatile secure cryptoprocessors are widely deployed in systems such as automated teller machines, TV set-top boxes, military applications, and high-security portable communication equipment. Some secure cryptoprocessors can even run general-purpose operating systems such as Linux inside their security boundary.
Cryptoprocessors input program instructions in encrypted form, decrypt the instructions to plain instructions which are then executed within the same cryptoprocessor chip where the decrypted instructions are inaccessibly stored. By never revealing the decrypted program instructions, the cryptoprocessor prevents tampering of programs by technicians who may have legitimate access to the sub-system data bus. This is known as bus encryption. Data processed by a cryptoprocessor is also frequently encrypted. The Trusted Platform Module (TPM) is an implementation of a secure cryptoprocessor that brings the notion of trusted computing to ordinary PCs by enabling a secure environment. Present TPM implementations focus on providing a tamper-proof boot environment, and persistent and volatile storage encryption. Security chips for embedded systems are also available that provide the same level of physical protection for keys and other secret material as a smartcard processor or TPM but in a smaller, less complex and less expensive pack
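The bus-encryption idea in the preceding paragraph can be illustrated with a toy model: program text crosses the bus only in encrypted form and is decrypted inside the processor boundary. The XOR keystream stands in for a real cipher, and all names here are invented; this is a conceptual sketch, not a hardware design.

```python
import itertools

KEY = b"example-key"  # illustrative key; a real device keeps this inside the chip

def xor_crypt(data: bytes) -> bytes:
    """Toy stream 'cipher' (XOR with a repeating key); stands in for real crypto."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(KEY)))

class ToyCryptoprocessor:
    """Decrypts instructions only inside its own boundary, never on the bus."""
    def execute(self, encrypted_program: bytes) -> str:
        plaintext = xor_crypt(encrypted_program)  # visible only inside the chip
        return plaintext.decode()                 # "execution" is simulated

program = b"ADD R1,R2"
on_the_bus = xor_crypt(program)       # what probing the bus would reveal
assert on_the_bus != program          # ciphertext, not the instruction
print(ToyCryptoprocessor().execute(on_the_bus))  # ADD R1,R2
```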
https://en.wikipedia.org/wiki/Interpreter%20%28computing%29
In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:

- Parse the source code and perform its behavior directly;
- Translate source code into some efficient intermediate representation or object code and immediately execute that;
- Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter's virtual machine.

Early versions of the Lisp programming language and minicomputer and microcomputer BASIC dialects would be examples of the first type. Perl, Raku, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type. Source programs are compiled ahead of time and stored as machine-independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk and contemporary versions of BASIC and Java, may also combine strategies two and three. Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran, Cobol, C and C++. While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" or "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high-level language is ideally an abstraction independent of particular implementations.

History

Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers).
Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed. The first interpreted high-level language was Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code. The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".

General operation

An interpreter usually consists of a set of known commands it can execute, and a list of these commands in the order a programmer wishes to execute them. Each command (also known as an instruction) contains the data the programmer wants to mutate, and information on how to mutate the data. For exam
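The general operation just described, a table of known commands plus an ordered list of instructions, each carrying the data to act on, can be sketched as a minimal interpreter in Python. The opcode names and program format are invented for the example; they do not correspond to any real instruction set.

```python
def run(program):
    """Interpret a list of (opcode, operand) pairs against a single accumulator."""
    commands = {                      # the interpreter's set of known commands
        "LOAD": lambda acc, arg: arg,       # replace the accumulator with a literal
        "ADD":  lambda acc, arg: acc + arg,
        "MUL":  lambda acc, arg: acc * arg,
    }
    acc = 0
    for opcode, operand in program:         # fetch each instruction in order...
        acc = commands[opcode](acc, operand)  # ...then decode and execute it
    return acc

print(run([("LOAD", 2), ("ADD", 3), ("MUL", 4)]))  # (2 + 3) * 4 = 20
```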
https://en.wikipedia.org/wiki/Ernst%20%26%20Young
Ernst & Young Global Limited, trade name EY, is a British multinational professional services partnership headquartered in London, England. EY is one of the largest professional services networks in the world. Along with Deloitte, KPMG and PwC, it is considered one of the Big Four accounting firms. It primarily provides assurance (which includes financial audit), tax, consulting and advisory services to its clients. EY operates as a network of member firms which are structured as separate legal entities in a partnership, which has 312,250 employees in over 700 offices in more than 150 countries around the world. The firm's current partnership was formed in 1989 by a merger of two accounting firms: Ernst & Whinney and Arthur Young & Co. It was named Ernst & Young until a rebranding campaign officially changed its name to EY in 2013, although this initialism was already used informally prior to its official adoption. In 2019, EY was the seventh-largest privately owned organization in the United States. EY has been ranked on Fortune magazine's list of the 100 Best Companies to Work For for 25 consecutive years, longer than any other accounting firm.

History

Early history and mergers

EY resulted from several mergers of ancestor firms over the last century and a half, the oldest of which was founded in 1849, in England, as Harding & Pullein. That same year, this firm was joined by an accountant named Frederick Whinney, who, a decade later, became a partner. After his son joined the firm, it was later renamed Whinney, Smith & Whinney, in 1894. In 1903, the firm Ernst & Ernst was founded in Cleveland, Ohio, by Alwin C. Ernst and his brother, Theodore Ernst. In 1906, Arthur Young & Co. was set up by a Scottish accountant, Arthur Young, in Chicago. Starting in 1924, these two American firms became allied with prominent British firms: Young with Broads Paterson & Co., and Ernst with the aforementioned Whinney, Smith & Whinney.
The latter of these two mergers spawned Anglo-American partnership Ernst & Whinney in 1979, then the fourth largest accountancy firm in the world. A decade later, in 1989, Ernst & Whinney merged with the fifth largest firm globally at the time, Arthur Young & Co., to create Ernst & Young. Later developments In October 1997, Ernst & Young announced plans to merge its global practices with professional services network KPMG, to create the largest professional services organization in the world. The announcement came on the heels of an announced merger between Price Waterhouse and Coopers & Lybrand only a month earlier. These plans were soon abandoned in February 1998, due to several factors ranging from client opposition, antitrust issues, cost problems, and the anticipated difficulty of merging the two diverse firms and cultures. The merger between Price Waterhouse and Coopers & Lybrand, however, went ahead as planned, creating PwC. Ernst & Young expanded its consulting practice heavily during the 1980s and
https://en.wikipedia.org/wiki/Discrete%20cosine%20transform
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations. A DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common variant of discrete cosine transform is the type-II DCT, which is often called simply the DCT. This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT. Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. 
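The relationship between the type-II DCT and its type-III inverse described above can be made concrete with a direct O(N^2) evaluation. This sketch uses one common unnormalized convention for the DCT-II, with the matching scale factor folded into the inverse.

```python
import math

def dct_ii(x):
    """Type-II DCT: X_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def dct_iii(X):
    """Type-III DCT, scaled so that it inverts dct_ii (i.e. the 'IDCT')."""
    N = len(X)
    return [(X[0] + 2.0 * sum(X[k] * math.cos(math.pi / N * k * (n + 0.5))
                              for k in range(1, N))) / N
            for n in range(N)]

signal = [8.0, 16.0, 24.0, 32.0]
coeffs = dct_ii(signal)            # energy compacts into the low-frequency terms
restored = dct_iii(coeffs)
print([round(v, 6) for v in restored])  # [8.0, 16.0, 24.0, 32.0]
```

Production codecs use fast O(N log N) factorizations rather than this direct evaluation, but the transform pair is the same.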
Multidimensional DCTs (MD DCTs) are developed to extend the concept of DCT to multidimensional signals. A variety of fast algorithms have been developed to reduce the computational complexity of implementing DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT, used in several ISO/IEC and ITU-T international standards. DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks. DCT block sizes include 8×8 pixels for the standard DCT, with integer DCT sizes varying between 4×4 and 32×32 pixels. The DCT has a strong energy compaction property, capable of achieving high quality at high data compression ratios. However, blocky compression artifacts can appear when heavy DCT compression is applied.

History

The DCT was first conceived by Nasir Ahmed, T. Natarajan and K. R. Rao while working at Kansas State University. The concept was proposed to the National Science Foundation in 1972. The DCT was originally intended f
https://en.wikipedia.org/wiki/Vaporware
In the computer industry, vaporware (or vapourware) is a product, typically computer hardware or software, that is announced to the general public but is late, never actually manufactured, or officially cancelled. Use of the word has broadened to include products such as automobiles. Vaporware is often announced months or years before its purported release, with few details about its development being released. Developers have been accused of intentionally promoting vaporware to keep customers from switching to competing products that offer more features. Network World magazine called vaporware an "epidemic" in 1989 and blamed the press for not investigating whether developers' claims were true. Seven major companies issued a report in 1990 saying that they felt vaporware had hurt the industry's credibility. The United States government accused several companies of announcing vaporware early enough to violate antitrust laws, but few have been found guilty. "Vaporware" was coined by a Microsoft engineer in 1982 to describe the company's Xenix operating system and appeared in print at least as early as the May 1983 issue of Sinclair User magazine (spelled as 'Vapourware' in UK English). It became popular among writers in the industry as a way to describe products they felt took too long to be released. InfoWorld magazine editor Stewart Alsop helped popularize it by lampooning Bill Gates with a Golden Vaporware award for the late release of his company's first version of Windows in 1985.

Etymology

"Vaporware", sometimes synonymous with "vaportalk" in the 1980s, has no single definition. It is generally used to describe a hardware or software product that has been announced, but that the developer is unlikely to release any time soon, if ever. The first reported use of the word was in 1982 by an engineer at the computer software company Microsoft.
Ann Winblad, president of Open Systems Accounting Software, wanted to know if Microsoft planned to stop developing its Xenix operating system as some of Open System's products depended on it. She asked two Microsoft software engineers, John Ulett and Mark Ursino, who confirmed that development of Xenix had stopped. "One of them told me, 'Basically, it's vaporware'," she later said. Winblad compared the word to the idea of "selling smoke", implying Microsoft was selling a product it would soon not support. Winblad described the word to influential computer expert Esther Dyson, who published it for the first time in her monthly newsletter RELease 1.0. In an article titled "Vaporware" in the November 1983 issue of RELease 1.0, Dyson defined the word as "good ideas incompletely implemented". She described three software products shown at COMDEX in Las Vegas that year with bombastic advertisements. She stated that demonstrations of the "purported revolutions, breakthroughs and new generations" at the exhibition did not meet those claims. The practice existed before Winblad's account. In a January 1982 review of the new IB
https://en.wikipedia.org/wiki/Rasterisation
In computer graphics, rasterisation (British English) or rasterization (American English) is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (a series of pixels, dots or lines, which, when displayed together, create the image which was represented via shapes). The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives (such as polygons or line segments) into a rasterized format.

Etymology

The term "rasterisation" comes from raster, the grid of pixels onto which the image is rendered.

2D images

Line primitives

Bresenham's line algorithm is an example of an algorithm used to rasterize lines.

Circle primitives

Algorithms such as the midpoint circle algorithm are used to render circles onto a pixelated canvas.

3D images

Rasterization is one of the typical techniques of rendering 3D models. Compared with other rendering techniques such as ray tracing, rasterization is extremely fast and therefore used in most realtime 3D engines. However, rasterization is simply the process of computing the mapping from scene geometry to pixels and does not prescribe a particular way to compute the color of those pixels. The specific color of each pixel is assigned by a pixel shader (which in modern GPUs is completely programmable). Shading may take into account physical effects such as light position, their approximations, or purely artistic intent. The process of rasterizing 3D models onto a 2D plane for display on a computer screen ("screen space") is often carried out by fixed-function (non-programmable) hardware within the graphics pipeline. This is because there is no motivation for modifying the techniques for rasterization used at render time and a special-purpose system allows for high efficiency.

Triangle rasterization

Polygons are a common representation of digital 3D models.
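Bresenham's line algorithm, mentioned above, rasterizes a line using only integer arithmetic by accumulating an error term that decides when to step in each axis. A compact all-octant version can be sketched as:

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only line rasterization (Bresenham's algorithm, all octants).
    Returns the list of pixel coordinates covering the line segment."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1         # step direction in x
    sy = 1 if y0 < y1 else -1         # step direction in y
    err = dx + dy                     # accumulated error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                  # error says: step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:                  # error says: step vertically
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 4, 2))  # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```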
Before rasterization, individual polygons are typically broken down into triangles; therefore, a typical problem to solve in 3D rasterization is rasterization of a triangle. Properties that are usually required from triangle rasterization algorithms are that rasterizing two adjacent triangles (i.e. those that share an edge) leaves no holes (non-rasterized pixels) between the triangles, so that the rasterized area is completely filled (just as the surface of adjacent triangles), and that no pixel is rasterized more than once, i.e. the rasterized triangles don't overlap. This is to guarantee that the result doesn't depend on the order in which the triangles are rasterized. Overdrawing pixels can also mean wasting computing power on pixels that would be overwritten. This leads to establishing rasterization rules to guarantee the above conditions. One set of such rules is called a top-left rule, which states that a pixel is rasterized if and only if its center lies completely inside the triangle. Or its center
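A common way to implement the coverage test described above is with edge functions: a pixel center is inside the triangle when it lies on the interior side of all three edges. The sketch below shows only the basic inside test for a counter-clockwise triangle; the top-left tie-breaking rule needed to handle shared edges without gaps or double-rasterization is deliberately omitted for brevity.

```python
def raster_triangle(v0, v1, v2):
    """Return the pixels whose centers lie inside a counter-clockwise
    triangle, using three signed-area edge functions as the coverage test."""
    def edge(a, b, p):
        # Signed area of (a, b, p); >= 0 means p is on/left of the edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    xs = [v[0] for v in (v0, v1, v2)]
    ys = [v[1] for v in (v0, v1, v2)]
    covered = []
    for y in range(min(ys), max(ys) + 1):        # bounding-box scan
        for x in range(min(xs), max(xs) + 1):
            p = (x + 0.5, y + 0.5)               # sample at the pixel center
            if (edge(v0, v1, p) >= 0 and
                    edge(v1, v2, p) >= 0 and
                    edge(v2, v0, p) >= 0):
                covered.append((x, y))
    return covered

print(len(raster_triangle((0, 0), (8, 0), (0, 8))))  # 36 pixel centers covered
```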
https://en.wikipedia.org/wiki/Escape%20sequence
In computer science, an escape sequence is a combination of characters that has a meaning other than the literal characters contained therein; it is marked by one or more preceding (and possibly terminating) characters.

Examples

In C and many derivative programming languages, a string escape sequence is a series of two or more characters, starting with a backslash \. Note that in C a backslash immediately followed by a newline does not constitute an escape sequence, but splices physical source lines into logical ones in the second translation phase, whereas string escape sequences are converted in the fifth translation phase. To represent the backslash character itself, \\ can be used, whereby the first backslash indicates an escape and the second specifies that a backslash is being escaped. A character may be escaped in multiple different ways. Assuming ASCII encoding, the escape sequences \x5c (hexadecimal), \\, \134 (octal) and \x5C all encode the same character: the backslash \. For devices that respond to ANSI escape sequences, the combination of three or more characters beginning with the ASCII "escape" character (decimal character code 27) followed by the left-bracket character [ (decimal character code 91) defines an escape sequence.

Control sequences

When such a series of characters is used to change the state of computers and their attached peripheral devices, rather than to be displayed or printed as regular data bytes would be, it is also known as a control sequence, reflecting its use in device control. Control sequences begin with the Control Sequence Introducer, originally the "escape" character, ASCII code 27 (decimal), often written "Esc" on keycaps. With the introduction of ANSI terminals, most escape sequences began with the two characters "ESC" then "[", or a specially-allocated CSI character with the code 155 (decimal).
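The equivalences above can be checked directly in a language with C-style string escapes; Python is used here, and the color codes in the ANSI example are illustrative.

```python
# Several escape sequences denoting the same character, the backslash
# (assuming an ASCII-compatible encoding):
assert "\x5c" == "\\" == "\134" == "\x5C"

# An ANSI control sequence: ESC (code 27) followed by "[" (code 91).
CSI = "\x1b["                       # Control Sequence Introducer
message = CSI + "31m" + "warning" + CSI + "0m"  # red text, then attribute reset
print(message.encode())             # the raw bytes a terminal would receive
```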
Not all control sequences used an escape character; for example, the modem control sequences used by AT/Hayes-compatible modems and the Data General terminal control sequences did not, but they were often still called escape sequences. The very common practice of "escaping" special characters in programming languages and command-line parameters today often uses the backslash character to begin the sequence. Escape sequences in communications are commonly used when a computer and a peripheral have only a single channel through which to send information back and forth (so escape sequences are an example of in-band signaling). They were common when most dumb terminals used ASCII with 7 data bits for communication, and sometimes would be used to switch to a different character set for "foreign" or graphics characters that would otherwise not fit within the 128 codes available in 7 data bits. Even relatively "dumb" terminals responded to some escape sequences; the original mechanical Teletype printers (on which "glass Teletypes" or VDUs were based) responded to characters 27 and 31 to alternate between
https://en.wikipedia.org/wiki/Wireless%20community%20network
Wireless community networks, wireless community projects, or simply community networks, are non-centralized, self-managed and collaborative networks organized in a grassroots fashion by communities, non-governmental organizations and cooperatives in order to provide a viable alternative to municipal wireless networks for consumers. Many of these organizations set up wireless mesh networks which rely primarily on sharing of unmetered residential and business DSL and cable Internet. This sort of usage might be non-compliant with the terms of service of local internet service providers (ISPs) that deliver their service via the consumer phone and cable duopoly. Wireless community networks sometimes advocate complete freedom from censorship, and this position may be at odds with the acceptable use policies of some commercial services used. Some ISPs do allow sharing or reselling of bandwidth. The First Latin American Summit of Community Networks, held in Argentina in 2018, presented the following definition for the term "community network": "Community networks are networks collectively owned and managed by the community for non-profit and community purposes. They are constituted by collectives, indigenous communities or non-profit civil society organizations that exercise their right to communicate, under the principles of democratic participation of their members, fairness, gender equality, diversity and plurality". According to the Declaration on Community Connectivity, elaborated through a multistakeholder process organized by the Internet Governance Forum's Dynamic Coalition on Community Connectivity, community networks are recognised by a list of characteristics: Collective ownership; Social management; Open design; Open participation; Promotion of peering and transit; Promotion of the consideration of security and privacy concerns while designing and operating the network; and promotion of the development and circulation of local content in local languages.
History Wireless community networks started as projects that evolved from amateur radio using packet radio, and from the free software community which substantially overlapped with the amateur radio community. Wireless neighborhood networks were established by technology enthusiasts in the early 2000s. The Redbricks Intranet Collective (RIC) started 1999 in Manchester, UK, to allow about 30 flats in the Bentley House Estate to share the subscription cost of one leased line from British Telecom (BT). Wi-Fi was quickly adopted by technology enthusiasts and hobbyists, because it was an open standard and consumer Wi-Fi hardware was comparatively cheap. Wireless community networks started out by turning wireless access points designed for short-range use in homes into multi-kilometre long-range Wi-Fi by building high-gain directional antennas. Rather than buying commercially available units, some of the early groups advocated home-built antennas. Examples include the cantenna and RONJA, an o
https://en.wikipedia.org/wiki/Average
In ordinary language, an average is a single number or value that best represents a set of data. The type of average taken as most typically representative of a list of numbers is the arithmetic mean: the sum of the numbers divided by how many numbers are in the list. For example, the mean average of the numbers 2, 3, 4, 7, and 9 (summing to 25) is 5. Depending on the context, the most representative statistic to be taken as the average might be another measure of central tendency, such as the mid-range, median, mode or geometric mean. For example, the average personal income is often given as the median—the number below which are 50% of personal incomes and above which are 50% of personal incomes—because the mean would be higher by including personal incomes from a few billionaires. For this reason, it is recommended to avoid the unqualified word "average" when discussing measures of central tendency and to specify which measure is being used.

General properties

If all numbers in a list are the same number, then their average is also equal to this number. This property is shared by each of the many types of average. Another universal property is monotonicity: if two lists of numbers A and B have the same length, and each entry of list A is at least as large as the corresponding entry of list B, then the average of list A is at least that of list B. Also, all averages satisfy linear homogeneity: if all numbers of a list are multiplied by the same positive number, then the average changes by the same factor. In some types of average, the items in the list are assigned different weights before the average is determined. These include the weighted arithmetic mean, the weighted geometric mean and the weighted median. Also, for some types of moving average, the weight of an item depends on its position in the list.
Most types of average, however, satisfy permutation-insensitivity: all items count equally in determining their average value and their positions in the list are irrelevant; the average of (1, 2, 3, 4, 6) is the same as that of (3, 2, 6, 4, 1). Pythagorean means The arithmetic mean, the geometric mean and the harmonic mean are known collectively as the Pythagorean means. Statistical location The mode, the median, and the mid-range are often used in addition to the mean as estimates of central tendency in descriptive statistics. These can all be seen as minimizing variation by some measure; see . Mode The most frequently occurring number in a list is called the mode. For example, the mode of the list (1, 2, 2, 3, 3, 3, 4) is 3. It may happen that there are two or more numbers which occur equally often and more often than any other number. In this case there is no agreed definition of mode. Some authors say they are all modes and some say there is no mode. Median The median is the middle number of the group when they are ranked in order. (If there are an even number of numbers, the mean of the middle two is
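The measures discussed above are all available in Python's standard statistics module, which can serve as a quick check of the definitions (the example list is the one used for the mode above):

```python
import statistics

data = [1, 2, 2, 3, 3, 3, 4]

print(statistics.mean(data))         # arithmetic mean: 18 / 7
print(statistics.median(data))       # middle value of the sorted list: 3
print(statistics.mode(data))         # most frequently occurring value: 3
print((min(data) + max(data)) / 2)   # mid-range: (1 + 4) / 2 = 2.5
```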
https://en.wikipedia.org/wiki/IBM%208514
IBM 8514 is a graphics card manufactured by IBM and introduced with the IBM PS/2 line of personal computers in 1987. It supports a display resolution of pixels with 256 colors at 43.5 Hz (interlaced), or at 60 Hz (non-interlaced). 8514 usually refers to the display controller hardware (such as the 8514/A display adapter). However, IBM sold the companion CRT monitor (for use with the 8514/A) which carries the same designation, 8514. The 8514 uses a standardised programming interface called the "Adapter Interface" or AI. This interface is also used by XGA, IBM Image Adapter/A, and clones of the 8514/A and XGA such as the ATI Technologies Mach 32 and IIT AGX. The interface allows computer software to offload common 2D-drawing operations (line-draw, color-fill, and block copies via a blitter) onto the 8514 hardware. This frees the host CPU for other tasks, and greatly improves the speed of redrawing a graphics visual (such as a pie-chart or CAD-illustration). The 8514 initially sold for for the adapter and for the 512KB memory expansion. The 8514/A required a Micro Channel architecture bus at a time when ISA systems were standard. History The 8514 was introduced with the IBM PS/2 computers in April 1987. It was an optional upgrade to the Micro Channel architecture based PS/2's Video Graphics Array (VGA), and was delivered within three months of PS/2's introduction. Although not the first PC video card to support hardware acceleration, IBM's 8514 is often credited as the first PC mass-market fixed-function accelerator. Up until the 8514's introduction, PC graphics acceleration was relegated to expensive workstation-class, graphics coprocessor boards. Coprocessor boards (such as the TARGA Truevision series) were designed around special CPU or digital signal processor chips which were programmable. Fixed-function accelerators, such as the 8514, sacrificed programmability for better cost/performance ratio. 
Later compatible 8514 boards were based on the Texas Instruments TMS34010 chip. Even though the 8514 was not a best-seller, it created a market for fixed-function PC graphics accelerators which grew exponentially in the early 1990s. The ATI Mach 8 and Mach 32 chips were popular clones, and several companies (notably S3) designed graphics accelerator chips which were not register compatible but were conceptually very similar to the 8514/A. The 8514 was superseded by IBM XGA. The VESA Group introduced a common standardized way to access features like hardware cursors, Bit Block transfers (Bit Blt), off screen sprites, hardware panning, drawing and other functions with VBE/accelerator functions (VBE/AF) in August 1996.

Software support

Software that supported this graphics standard:

- OS/2
- Windows 2.1
- Windows 3.x
- Windows 95
- XFree86 2.1.1
- AutoCAD 10
- QuikMenu
- Any BGI software using IBM8514.BGI

Output capabilities

The 8514 offered: graphics with 256 colors out of 262,144 (18 bit RGB); text mode with 80×34 characters; graphics with 256 colo
https://en.wikipedia.org/wiki/Self%20%28programming%20language%29
Self is an object-oriented programming language based on the concept of prototypes. Self began as a dialect of Smalltalk, being dynamically typed and using just-in-time compilation (JIT) as well as the prototype-based approach to objects: it was first used as an experimental test system for language design in the 1980s and 1990s. In 2006, Self was still being developed as part of the Klein project, which was a Self virtual machine written fully in Self. The latest version is 2017.1 released in May 2017. Several just-in-time compilation techniques were pioneered and improved in Self research as they were required to allow a very high level object oriented language to perform at up to half the speed of optimized C. Much of the development of Self took place at Sun Microsystems, and the techniques they developed were later deployed for Java's HotSpot virtual machine. At one point a version of Smalltalk was implemented in Self. Because it was able to use the JIT, this also gave extremely good performance. History Self was designed mostly by David Ungar and Randall Smith in 1986 while working at Xerox PARC. Their objective was to push forward the state of the art in object-oriented programming language research, once Smalltalk-80 was released by the labs and began to be taken seriously by the industry. They moved to Stanford University and continued work on the language, building the first working Self compiler in 1987. At that point, focus changed to attempting to bring up an entire system for Self, as opposed to just the language. The first public release was in 1990, and the next year the team moved to Sun Microsystems where they continued work on the language. Several new releases followed until falling largely dormant in 1995 with the 4.0 version. The 4.3 version was released in 2006 and ran on Mac OS X and Solaris. 
A new release in 2010, version 4.4, has been developed by a group comprising some of the original team and independent programmers and is available for Mac OS X and Linux, as are all following versions. The follow-up 4.5 was released in January 2014, and three years later, version 2017.1 was released in May 2017. The Morphic user interface construction environment was originally developed by Randy Smith and John Maloney for the Self programming language. Morphic has been ported to other notable programming languages including Squeak, JavaScript, Python, and Objective-C. Self also inspired a number of languages based on its concepts. Most notable, perhaps, were NewtonScript for the Apple Newton and JavaScript used in all modern browsers. Other examples include Io, Lisaac and Agora. The IBM Tivoli Framework's distributed object system, developed in 1990, was, at the lowest level, a prototype based object system inspired by Self. Prototype-based programming languages Traditional class-based OO languages are based on a deep-rooted duality: Classes define the basic qualities and behaviours of objects. Object instances are particu
https://en.wikipedia.org/wiki/8-bit%20clean
8-bit clean is an attribute of computer systems, communication channels, and other devices and software, that process 8-bit character encodings without treating any byte as an in-band control code. History Until the early 1990s, many programs and data transmission channels were character-oriented and treated some characters, e.g., ETX, as control characters. Others assumed a stream of seven-bit characters, with values between 0 and 127; for example, the ASCII standard used only seven bits per character, avoiding an 8-bit representation in order to save on data transmission costs. On computers and data links using 8-bit bytes, this left the top bit of each byte free for use as a parity bit, flag bit, or metadata control bit. 7-bit systems and data links cannot directly handle the more complex character codes that are commonplace in non-English-speaking countries with larger alphabets. Binary files of octets cannot be transmitted through 7-bit data channels directly. To work around this, binary-to-text encodings have been devised which use only 7-bit ASCII characters. Some of these encodings are uuencoding, Ascii85, SREC, BinHex, Kermit, and MIME's Base64. EBCDIC-based systems cannot handle all characters used in UUencoded data; the Base64 encoding does not have this problem. SMTP and NNTP 8-bit cleanness Historically, various media were used to transfer messages, some of them supporting only 7-bit data, so an 8-bit message had a high chance of being garbled during transmission in the 20th century. But some implementations ignored the formal prohibition of 8-bit data and allowed bytes with the high bit set to pass through. Such implementations are said to be 8-bit clean. In general, a communications protocol is said to be 8-bit clean if it correctly passes through the high bit of each byte in the communication process.
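The need for 7-bit-safe encodings described above is easy to demonstrate with Python's standard `base64` module: arbitrary octets, including values with the high bit set, are mapped to printable 7-bit ASCII that a channel which is not 8-bit clean can still carry.

```python
import base64

# Arbitrary binary data, including bytes with the high bit set, which a
# channel that is not 8-bit clean might strip, reinterpret, or reject.
payload = bytes(range(256))

encoded = base64.b64encode(payload)

# Every byte of the encoded form is printable 7-bit ASCII, so it survives
# 7-bit transports such as the early mail links described above.
assert all(b < 128 for b in encoded)

# Decoding recovers the original octets exactly: the round trip is lossless.
assert base64.b64decode(encoded) == payload
```

The cost of this safety is size: Base64 expands the data by roughly a third, since it packs 3 octets into 4 ASCII characters.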
Many early communications protocol standards, such as (for SMTP), (for NNTP) and , were designed to work over such "7-bit" communication links. They specifically require the use of ASCII character set "transmitted as an 8-bit byte with the high-order bit cleared to zero" and some of these explicitly restrict all data to 7-bit characters. For the first few decades of email networks (1971 to the early 1990s), most email messages were plain text in the 7-bit US-ASCII character set. The definition of SMTP, like its predecessor , limits Internet Mail to lines (1000 characters or less) of 7-bit US-ASCII characters. Later the format of email messages was re-defined in order to support messages that are not entirely US-ASCII text (text messages in character sets other than US-ASCII, and non-text messages, such as audio and images). The header field Content-Transfer-Encoding=binary requires an 8-bit clean transport. specifies "NNTP operates over any reliable bi-directional 8-bit-wide data stream channel." and changes the character set for commands to UTF-8. However, still limits the character set to ASCII, including an
https://en.wikipedia.org/wiki/A-0%20System
The A-0 system (Arithmetic Language version 0), written by Grace Murray Hopper in 1951 and 1952 for the UNIVAC I, was an early compiler-related tool developed for electronic computers. The A-0 functioned more as a loader or linker than the modern notion of a compiler. A program was specified as a sequence of subroutines and their arguments. The subroutines were identified by a numeric code, and the arguments to the subroutines were written directly after each subroutine code. The A-0 system converted the specification into machine code that could be fed into the computer a second time to execute the program. The A-0 system was followed by the A-1, A-2, A-3 (released as ARITH-MATIC), AT-3 (released as MATH-MATIC) and B-0 (released as FLOW-MATIC). The A-2 system was developed at the UNIVAC division of Remington Rand in 1953 and released to customers by the end of that year. Customers were provided the source code for A-2 and invited to send their improvements back to UNIVAC. Thus, A-2 could be considered an example of the result of an early philosophy similar to free and open-source software. See also History of compiler construction Notes External links Proceedings of the 1954 MIT Summer Session on "Digital Computers - Advanced Coding Techniques", section 7 - "A2 Compiler and Associated Routines for use with Univac" References Procedural programming languages Programming languages created in 1951
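The way an A-0 program was specified, as a flat sequence of numeric subroutine codes each followed by its arguments, can be mimicked with a toy dispatcher in Python. The codes, the subroutines, and the in-memory dispatch here are all invented for illustration; the real A-0 linked library subroutines into UNIVAC I machine code for a second run rather than interpreting the sequence directly.

```python
# Toy illustration (not actual A-0 codes): a "program" is a flat sequence
# of numeric subroutine codes, each immediately followed by its arguments,
# and a dispatcher resolves each code to a subroutine much as A-0 linked
# library routines into the final program.
SUBROUTINES = {
    1: ("add", 2, lambda a, b: a + b),   # (name, arity, operation)
    2: ("mul", 2, lambda a, b: a * b),
    3: ("neg", 1, lambda a: -a),
}

def run(program):
    """Execute a code/argument sequence, returning each subroutine's result."""
    results, i = [], 0
    while i < len(program):
        name, arity, fn = SUBROUTINES[program[i]]
        args = program[i + 1 : i + 1 + arity]
        results.append(fn(*args))
        i += 1 + arity          # skip past the code and its arguments
    return results

# "add 2 3, mul 4 5, neg 6" encoded as codes and arguments:
print(run([1, 2, 3, 2, 4, 5, 3, 6]))  # [5, 20, -6]
```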
https://en.wikipedia.org/wiki/Orange%20Book
Orange Book may refer to: Trusted Computer System Evaluation Criteria, a computer security standard We Can Conquer Unemployment, 1929 manifesto by David Lloyd George and the Liberal Party The Orange Book: Reclaiming Liberalism, by members of the British Liberal Democrat party Approved Drug Products with Therapeutic Equivalence Evaluations, published by the FDA's Center for Drug Evaluation and Research The IUPAC Compendium of Analytical Nomenclature informally known as the Orange Book One of the compact disc standards collections in the Rainbow Books series Orange-Book-Standard, issued in 2009 by the German Federal Court of Justice on the interaction between patent law and standards Orange Book, a local area networking protocol based on the Cambridge Ring and one of the UK Coloured Book protocols Handbook of Directives and Permitted Conventions for the English Bridge Union A book about OpenGL Shading Language See also The Orange Box Black Book (disambiguation) Blue book (disambiguation) Green Book (disambiguation) Pink Book (disambiguation) Plum Book White book (disambiguation) Yellow Book (disambiguation)
https://en.wikipedia.org/wiki/ARITH-MATIC
ARITH-MATIC is an extension of Grace Hopper's A-2 programming language, developed around 1955. ARITH-MATIC was originally known as A-3, but was renamed by the marketing department of Remington Rand UNIVAC. Some ARITH-MATIC subroutines See also A-0 System References External links Website at Boise via Internet Archive Numerical programming languages
https://en.wikipedia.org/wiki/AAP%20DTD
In computing, AAP DTD (variously known as AAP Electronic Manuscript Standard, AAP standard, AAP/EPSIG standard, and ANSI/NISO Z39.59) is a set of three SGML Document Type Definitions (book, journal, and article) for scientific documents, defined by the Association of American Publishers. It was ratified as a U.S. standard under the name ANSI/NISO Z39.59 in 1988, and evolved into the international ISO 12083 standard in 1993. It was supplanted as a U.S. standard by ANSI/ISO 12083 in 1995. Development and standard ratifications From 1983 to 1987, the Association of American Publishers (AAP), a coalition of book and journal publishers in North America, sponsored the Electronic Manuscript Project, the earliest effort to develop a commercial SGML application. The project sought to create an SGML standard for book, journal, and article creation. With the technical work led by Aspen Systems, over thirty information-processing organizations contributed to the project, including the US Library of Congress, the American Society of Indexers, the IEEE, the American Chemical Society, the American Institute of Physics, and the American Mathematical Society. Two preliminary works with restricted distribution were produced in 1985, the draft AAP DTD and author guidelines. The Electronic Publishing Special Interest Group (EPSIG) was founded to take over responsibility for the work from AAP. The consortium, sponsored by the Online Computer Library Center, recommended that the DTDs developed by the Electronic Manuscript Project should become an American standard. With the support of the AAP and the Graphic Communications Association, the AAP DTDs were ratified in 1988 as the American National Standards Institute's Electronic Manuscript Preparation and Markup (ANSI/NISO Z39.59) standard. While ANSI/NISO Z39.59 specifies DTDs for books, serials, and articles, the markup it recommends for mathematics and tables is not part of the standard itself.
As the standard is based on ASCII character encoding, it includes a large set of entity definitions for special characters. The AAP and EPSIG continued their collaboration and published a revised version of the specification in 1989. The AAP and the European Physical Society further collaborated on a standard method for marking up mathematical notation and tables in scientific documents. Building on this work, Eric van Herwijnen, then head of the text processing section at CERN, edited the specification for adoption by the International Organization for Standardization as ISO 12083, which was first published in 1993, revised in 1994 and last reconfirmed in 2016. ISO 12083 specifies four DTDs: Article, Book, Serial, and Math. In 1995 ANSI/NISO Z39.59:1988 was superseded by ISO 12083, which was adopted as U.S. standard ANSI/NISO/ISO 12083-1995 (R2009) Electronic Manuscript Preparation and Markup. This U.S. standard was withdrawn in 2016. Usage The AAP DTDs counted the academic publishing house Elsevier among their earlie
https://en.wikipedia.org/wiki/Apple%20Attachment%20Unit%20Interface
Apple Attachment Unit Interface (AAUI) is a mechanical re-design by Apple of the standard Attachment Unit Interface (AUI) used to connect Ethernet transceivers to computer equipment. AUI was popular in the era before the dominance of 10BASE-T networking that started in the early 1990s; AAUI was an attempt to make the connector much smaller and more user friendly, though the proprietary nature of the interface was also criticized. FriendlyNet AAUI is part of a system of Ethernet peripherals intended to make connecting over Ethernet much easier. At the time of the introduction of AAUI, Ethernet systems usually were 10BASE2, also known as thinnet. Apple's system is called FriendlyNet. A FriendlyNet 10BASE2 system does not use BNC T-connectors or separate 50 Ω terminators. Instead of a single BNC connector that is inserted into a T-connector placed inline, the FriendlyNet transceiver has two BNC connectors, one on each side, to which the cables are attached. The transceiver automatically terminates the network if a cable is missing from either side. Additionally, Apple 10BASE2 cables terminate the network when no device is attached to them. Thus the number of mistakes that could be made hooking up a thinnet network is reduced considerably. Since any of these mistakes can disable the network segment, this presents a significant improvement. FriendlyNet equipment was quite expensive and even third-party AAUI transceivers were rather expensive. Because of this, Apple's computers, billed as having built-in Ethernet, were expensive to connect to Ethernet, perhaps adding as much as a tenth to the total price of the computer system. Additionally, AAUI held no advantage for any system other than 10BASE2 and thus as 10BASE-T became ubiquitous it became impossible to justify the cost of an external transceiver at all. Apple eventually abandoned the system and sold off the name. 
Macintosh Quadra, Centris, PowerBook 500, Duo Dock II (for PowerBook Duo) and early Power Macintosh models have AAUI ports, which require external transceivers. By the time AAUI was nearing the end of its life, an AAUI transceiver could cost even more than an inexpensive Ethernet card on a PC—a disproportionate amount—as network cards for PCs did not become commodity items until the spread of high-speed access to the Internet in the early 21st century. Later models include both AAUI and modular connector ports for directly connecting 10BASE-T; either can be used, but not both at the same time. AAUI connectors are also present on some Processor Direct Slot Ethernet adapter cards used in Macintosh LC and Performa machines. AAUI had disappeared by the late 1990s, when new Apple machines, starting with the beige Power Macintosh G3 series, included only RJ-45 jacks rather than both connectors. Third-party vendors Many third parties also created AAUI transceivers. Most made simplifications to the connectors and cables, presumably to reduce costs. Most third parties, as well as any non-Apple equipment would use
https://en.wikipedia.org/wiki/Abbreviated%20Test%20Language%20for%20All%20Systems
Abbreviated Test Language for All Systems (ATLAS) is a specialized programming language for use with automatic test equipment (ATE). It is a compiled high-level computer language and can be used on any computer whose supporting software can translate it into the appropriate low-level instructions. History ATLAS Test Language The original language was developed by Aeronautical Radio, Incorporated (ARINC) and standardized under ANSI/IEEE-Std-416, released on December 22, 1983. Its purpose was to serve as a standard programming language for testing and maintenance of electronic systems for military and commercial aerospace applications. The language was designed to be platform-independent. The ATLAS language is oriented toward the Unit Under Test (UUT) and is independent of the test equipment used. This allows interchangeability of test procedures developed by different organizations, and thus reduces costly duplication of test programming effort. The first ATLAS specification developed by the international committee was published in 1968. The basic document has been revised several times. An ATLAS implementation typically consists of an online compiler (OLC), test executive (TEX or Test Exec), and file manager and media exchange (FMX) packages. ATLAS is run in TEX mode on test stations while testing electronic equipment. Syntax and Structure The structure of an ATLAS program is very similar to FORTRAN. A standard ATLAS program consists of two elements: a preamble structure and a procedural structure. The language makes extensive use of variables and a fixed statement syntax. An ATLAS statement consists of these fields: a flag (a single character), followed by a space separator; the statement number, followed by a space separator; the verb, followed by a comma separator; one or more fields whose format depends on the statement; and the statement terminator ($). Sample ATLAS Statements: 000250 DECLARE,DECIMAL,'A1'(4)$ 000300 FILL, 'A1', 'NUM', (1) 1, 5, (2) 20, 87, (3) 15, 12, (4) 30, 18 $ Comments may be included with a 'C' in the flag field.
These ATLAS statements apply a voltage to a pin (stimulus) and verify the presence and characteristics of a voltage at a pin: ... 010200 APPLY, AC SIGNAL, VOLTAGE-PP 7.5V, FREQ 3 kHz, CNX HI=P1-$1 ... 010300 VERIFY, (VOLTAGE-AV INTO 'VAVG'), AC SIGNAL, VOLTAGE-PP RANGE 64V TO 1V, SAMPLE-WIDTH 10MSEC, SYNC-VOLTAGE 2 MAX 5, SYNC-NEG-SLOPE, MAX-TIME 0.5, GO-TO-STEP 400 IF GO, LL 0.5 UL 50, CNX HI=P2-4 LO=P2-5, SYNC HI=P2-8 LO=P2-$5 ... Applications ATLAS has been used in the U.S. Air Force primarily on test stations for testing the avionic components of the F-15 Eagle, F-16 Fighting Falcon, C-5 Galaxy, C-17 Globemaster III, and B-1 Lancer. The U.S. Navy uses ATLAS-based programs for testing avionics systems of the P-3C Orion, UH-1Y Venom, AH-1Z Viper, SH-60 Seahawk, E-2C Hawkeye, F-14 Tomcat, F/A-18 Hornet, S-3 Viking, A-6 Intruder, EA-6B Prowler, AV8B Harrier, and V-22 Osprey. The
https://en.wikipedia.org/wiki/ABC%20ALGOL
ABC ALGOL is an extension of the programming language ALGOL 60 with arbitrary data structures and user-defined operators, intended for computer algebra (symbolic mathematics). Despite its advances, it was never used as widely as Algol proper. References External links ALGOL 60 dialect
https://en.wikipedia.org/wiki/Application%20binary%20interface
In computer software, an application binary interface (ABI) is an interface between two binary program modules. Often, one of these modules is a library or operating system facility, and the other is a program that is being run by a user. An ABI defines how data structures or computational routines are accessed in machine code, which is a low-level, hardware-dependent format. In contrast, an application programming interface (API) defines this access in source code, which is a relatively high-level, hardware-independent, often human-readable format. A common aspect of an ABI is the calling convention, which determines how data is provided as input to, or read as output from, computational routines. Examples of this are the x86 calling conventions. Adhering to an ABI (which may or may not be officially standardized) is usually the job of a compiler, operating system, or library author. However, an application programmer may have to deal with an ABI directly when writing a program in a mix of programming languages, or even compiling a program written in the same language with different compilers. An ABI is as important as the underlying hardware architecture: a program that violates the constraints of either will fail to run correctly. Description Details covered by an ABI include the following: Processor instruction set, with details like register file structure, stack organization, memory access types, etc.
Sizes, layouts, and alignments of basic data types that the processor can directly access Calling convention, which controls how the arguments of functions are passed, and return values retrieved; for example, it controls the following: Whether all parameters are passed on the stack, or some are passed in registers Which registers are used for which function parameters Whether the first function parameter passed on the stack is pushed first or last Whether the caller or callee is responsible for cleaning up the stack after the function call How an application should make system calls to the operating system, and if the ABI specifies direct system calls rather than procedure calls to system call stubs, the system call numbers In the case of a complete operating system ABI, the binary format of object files, program libraries, etc. Complete ABIs A complete ABI, such as the Intel Binary Compatibility Standard (iBCS), allows a program from one operating system supporting that ABI to run without modifications on any other such system, provided that necessary shared libraries are present, and similar prerequisites are fulfilled. ABIs can also standardize details such as the C++ name mangling, exception propagation, and calling convention between compilers on the same platform, but do not require cross-platform compatibility. Embedded ABIs An embedded-application binary interface (EABI) specifies standard conventions for file formats, data types, register usage, stack frame organization, and function parameter passing of an embedded s
https://en.wikipedia.org/wiki/Augmented%20Backus%E2%80%93Naur%20form
In computer science, augmented Backus–Naur form (ABNF) is a metalanguage based on Backus–Naur form (BNF), but consisting of its own syntax and derivation rules. The motive principle for ABNF is to describe a formal system of a language to be used as a bidirectional communications protocol. It is defined by Internet Standard 68 ("STD 68", type case sic), which is , and it often serves as the definition language for IETF communication protocols. supersedes . updates it, adding a syntax for specifying case-sensitive string literals. Overview An ABNF specification is a set of derivation rules, written as rule = definition ; comment CR LF where rule is a case-insensitive nonterminal, the definition consists of sequences of symbols that define the rule, a comment for documentation, and ending with a carriage return and line feed. Rule names are case-insensitive: <rulename>, <Rulename>, <RULENAME>, and <rUlENamE> all refer to the same rule. Rule names consist of a letter followed by letters, numbers, and hyphens. Angle brackets (<, >) are not required around rule names (as they are in BNF). However, they may be used to delimit a rule name when used in prose to discern a rule name. Terminal values Terminals are specified by one or more numeric characters. Numeric characters may be specified as the percent sign %, followed by the base (b = binary, d = decimal, and x = hexadecimal), followed by the value, or concatenation of values (indicated by .). For example, a carriage return is specified by %d13 in decimal or %x0D in hexadecimal. A carriage return followed by a line feed may be specified with concatenation as %d13.10. Literal text is specified through the use of a string enclosed in quotation marks ("). These strings are case-insensitive, and the character set used is (US-)ASCII. Therefore, the string "abc" will match “abc”, “Abc”, “aBc”, “abC”, “ABc”, “AbC”, “aBC”, and “ABC”. RFC 7405 added a syntax for case-sensitive strings: %s"aBc" will only match "aBc". 
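The matching behaviour of ABNF string literals can be mirrored with Python regular expressions: plain quoted strings are case-insensitive, while RFC 7405's %s prefix demands an exact-case match. The helper below is illustrative only, not an ABNF parser; its name and shape are invented for the example.

```python
import re

def abnf_literal(text, case_sensitive=False):
    """Compile an ABNF string literal into a regex: quoted strings are
    case-insensitive; RFC 7405's %s"..." form is case-sensitive."""
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.compile(re.escape(text), flags)

plain = abnf_literal("abc")                        # "abc" in ABNF
strict = abnf_literal("aBc", case_sensitive=True)  # %s"aBc" in ABNF

assert plain.fullmatch("ABC")        # any casing matches
assert plain.fullmatch("aBc")
assert strict.fullmatch("aBc")       # only the exact casing matches
assert strict.fullmatch("abc") is None

# Numeric terminals: %d13.10 denotes CR followed by LF.
assert chr(13) + chr(10) == "\r\n"
```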
Prior to that, a case-sensitive string could only be specified by listing the individual characters: to match “aBc”, the definition would be %d97.66.99. A string can also be explicitly specified as case-insensitive with a %i prefix. Operators White space White space is used to separate elements of a definition; for space to be recognized as a delimiter, it must be explicitly included. The explicit reference for a single whitespace character is WSP, while LWSP (linear white space) denotes zero or more whitespace characters, with newlines permitted. The LWSP definition in RFC 5234 is controversial because at least one whitespace character is needed to form a delimiter between two fields. Definitions are left-aligned. When multiple lines are required (for readability), continuation lines are indented by whitespace. Comment ; comment A semicolon (;) starts a comment that continues to the end of the line. Concatenation Rule1 Rule2 A rule may be defined by listing a sequence of rule nam
https://en.wikipedia.org/wiki/Abort%20%28computing%29
In a computer or data transmission system, to abort means to terminate, usually in a controlled manner, a processing activity because it is impossible or undesirable for the activity to proceed or in conjunction with an error. Such an action may be accompanied by diagnostic information on the aborted process. In addition to being a verb, abort also has two noun senses. In the most general case, the event of aborting can be referred to as an abort. Sometimes the event of aborting can be given a special name, as in the case of an abort involving a Unix kernel where it is known as a kernel panic. Specifically in the context of data transmission, an abort is a function invoked by a sending station to cause the recipient to discard or ignore all bit sequences transmitted by the sender since the preceding flag sequence. In the C programming language, abort() is a standard library function that terminates the current application and returns an error code to the host environment. See also Abort, Retry, Fail? Abnormal end Crash Hang Reset Reboot References Computing terminology
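The behaviour of abort() can be observed from Python, whose os.abort() raises SIGABRT just as the C function does. The sketch below is POSIX-specific (it relies on the negative-returncode convention for signal deaths) and runs the aborting code in a child process so the parent survives to inspect the termination status.

```python
import signal
import subprocess
import sys

# Run os.abort() in a child process: like C's abort(), it raises SIGABRT
# and terminates the process abnormally, while the parent survives to
# inspect the resulting status.
child = subprocess.run([sys.executable, "-c", "import os; os.abort()"])

# On POSIX, a returncode of -N means the child was killed by signal N.
aborted_by_sigabrt = child.returncode == -signal.SIGABRT
print(aborted_by_sigabrt)
```

A normal exit(1), by contrast, would give a positive returncode of 1; the negative value is what distinguishes the abnormal, signal-driven termination.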
https://en.wikipedia.org/wiki/Alternating%20bit%20protocol
Alternating bit protocol (ABP) is a simple network protocol operating at the data link layer (OSI layer 2) that retransmits lost or corrupted messages using FIFO semantics. It can be seen as a special case of a sliding window protocol with a window size of 1, where a simple timer restricts the order of messages to ensure that sender and receiver proceed in turn. Design Messages are sent from transmitter A to receiver B. Assume that the channel from A to B is initialized and that there are no messages in transit. Each message from A to B contains a data part and a one-bit sequence number, i.e., a value that is 0 or 1. B has two acknowledge codes that it can send to A: ACK0 and ACK1. When A sends a message, it resends it continuously, with the same sequence number, until it receives an acknowledgment from B that contains the same sequence number. When that happens, A complements (flips) the sequence number and starts transmitting the next message. When B receives a message that is not corrupted and has sequence number 0, it starts sending ACK0 and keeps doing so until it receives a valid message with number 1. Then it starts sending ACK1, etc. This means that A may still receive ACK0 when it is already transmitting messages with sequence number one. (And vice versa.) It treats such messages as negative-acknowledge codes (NAKs). The simplest behaviour is to ignore them all and continue transmitting. The protocol may be initialized by sending bogus messages and acks with sequence number 1. The first message with sequence number 0 is a real message. Bounded Retransmission Protocol Bounded Retransmission Protocol (BRP) is a variant of the alternating bit protocol introduced by Philips. The service it delivers is to transfer large files (sequences of data of arbitrary length) from a sender to a receiver in a reliable manner, if possible. Unlike ABP, BRP attaches sequence numbers to the data items in the file and interrupts the transfer after a fixed number of retransmissions of a datum.
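The send/acknowledge cycle described above can be simulated in Python. The sketch below models a lossy channel by randomly dropping messages and ACKs; the function name, drop model, and parameters are invented for this illustration. Despite arbitrary losses, every message is delivered exactly once and in order, which is the property the alternating bit guarantees.

```python
import random

def simulate_abp(data, drop_prob=0.3, seed=7):
    """Simulate the alternating bit protocol over a lossy channel.

    The sender retransmits (message, sequence bit) until it sees an ACK
    carrying that bit; the receiver delivers a message only when the bit
    matches the one it expects, then flips its expectation.
    """
    rng = random.Random(seed)
    delivered, seq, expected = [], 0, 0
    for msg in data:
        while True:
            if rng.random() < drop_prob:   # message lost: sender retransmits
                continue
            if seq == expected:            # new message: deliver exactly once
                delivered.append(msg)
                expected ^= 1
            ack = expected ^ 1             # ACK carries the last accepted bit
            if rng.random() < drop_prob:   # ACK lost: sender retransmits
                continue
            if ack == seq:                 # matching ACK: move to next message
                seq ^= 1
                break
    return delivered

print(simulate_abp(list("hello"), drop_prob=0.5))  # ['h', 'e', 'l', 'l', 'o']
```

Note how a duplicate arrival (seq != expected) is not re-delivered but still re-acknowledged, which is exactly how B's repeated ACK0/ACK1 behaviour is described above.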
History Donald Davies' team at the National Physical Laboratory introduced the concept of an alternating bit protocol in 1968 for the NPL network. An ABP was used by the ARPANET and by the European Informatics Network. See also Acknowledge character Information theory Negative-acknowledge character Stop-and-wait ARQ References Network protocols
https://en.wikipedia.org/wiki/ABR
Abr or ABR may refer to: Technology Available Bit Rate, a service used in ATM networks Average bitrate, the average amount of data transferred per second Area border router, in the Open Shortest Path First protocol Adaptive bit rate, a method of video transmission through the Internet Anaerobic baffled reactor, a type of decentralized wastewater system Transport Abercynon North railway station, a closed railway station formerly serving the village of Abercynon in the Cynon Valley, South Wales ABR, IATA code for Aberdeen Regional Airport, an airport east of Aberdeen, South Dakota, United States ABR, ICAO designation for ASL Airlines Ireland, a freight airline Athens Line, a railroad operating in the U.S. state of Georgia Other uses Abr, a village in Iran Accredited Buyer Representative, a designation of the National Association of Realtors Abron dialect, a major dialect of the Akan language of Central Ghana Addison Brown (1830–1913), American jurist and botanist (standard author abbreviation: A. Br.) American Board of Radiology Auditory brainstem response, an electrical signal evoked from the brainstem of a human or other mammal August Burns Red, a metalcore band from Lancaster, Pennsylvania Australian Biblical Review, academic journal Australian Book Review, a literary magazine Australian Business Register, government body tasked with issuing and maintaining Australian Business Numbers (ABNs)
https://en.wikipedia.org/wiki/Automatic%20baud%20rate%20detection
Automatic baud rate detection (ABR, autobaud) refers to the process by which a receiving device (such as a modem) determines the speed, code level, start bit, and stop bits of incoming data by examining the first character, usually a preselected sign-on character (syncword) on a UART connection. ABR allows the receiving device to accept data from a variety of transmitting devices operating at different speeds without needing to establish data rates in advance. Process During the autobaud process, the baud rate of a received character stream is determined by examining the received pattern and its timing, and the length of a start bit. This type of baud rate detection mechanism is supported by many hardware chips, including processors such as the STM32, MPC8280, and MPC8360. When the start bit length is used to determine the baud rate, the character value must be odd, because the UART sends the least significant bit first (a bit ordering referred to as little-endian): an odd value places a 1 bit immediately after the 0 start bit, which bounds the start bit's length. Often the symbols 'a' or 'A' (0x61 or 0x41) are used. For example, the MPC8270 SCC tries to detect the length of the UART start bit for autobaud. Many protocols begin each frame with a preamble of alternating 1 and 0 bits that can be used for automatic baud rate detection. For example, the TI PGA460 uses a 'U' (0x55) sync byte for automatic baud rate detection as well as frame synchronization, and so does the header of the Local Interconnect Network (LIN). Likewise, the UART-based FlexWire protocol begins each frame with a 'U' (0x55) sync byte. FlexWire receivers use the sync byte to precisely set their UART bit-clock frequency without a high-precision oscillator. Similarly, the Ethernet preamble contains 56 bits of alternating 1 and 0 bits for synchronizing bit clocks. Support Most modems currently on the market support autobaud. Before receiving any input data, most modems use a default baud rate of 9600 for output.
For example, the following modems have been verified for autobaud and default output baud rate 9600: USRobotics USR5686G 56K Serial Controller Fax modem Hayes V92 External modem Microcom DeskPorte 28.8P The baud rate of modems are adjusted automatically after receiving input data by the autobaud process. See also Autonegotiation Telecommunications References "17.2 Autobaud Operation on a UART in MPC8280 PowerQUICC™ II Family Reference Manual" http://www.nxp.com/files/netcomm/doc/ref_manual/MPC8280RM.pdf "Automatic Baud Rate Detection on the MSP430" https://web.archive.org/web/20161026080239/http://www.ti.com/lit/an/slaa215/slaa215.pdf "How to implement “auto baud rate detection” feature on Cortex-M3" https://stackoverflow.com/q/38979647 "mpc8270 SCC2 UART issue" https://community.nxp.com/message/906833 Data transmission Units of measurement Telecommunications techniques
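Preamble-based detection, as with the 'U' (0x55) sync byte mentioned above, can be sketched in Python: oversample an idle-high UART frame, measure the shortest run of identical samples between transitions, and divide that into the sample rate. The helper names and the 16x oversampling figure are illustrative assumptions, not any chip's actual algorithm.

```python
def uart_frame(byte, samples_per_bit):
    """An idle-high UART frame: start bit (0), 8 data bits LSB-first,
    stop bit (1), oversampled at samples_per_bit samples per bit."""
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    return [level for bit in bits for level in [bit] * samples_per_bit]

def detect_bit_time(samples):
    """Estimate the bit time as the shortest run between transitions; a
    'U' (0x55) frame toggles every bit, so every run is one bit long."""
    runs, length = [], 1
    for prev, cur in zip(samples, samples[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return min(runs)

SAMPLE_RATE = 16 * 9600                      # 16x oversampling of a 9600 baud line
wave = uart_frame(0x55, samples_per_bit=16)
print(SAMPLE_RATE // detect_bit_time(wave))  # 9600
```

The same shortest-run logic covers the start-bit method too: with an odd character such as 'U' or 'a', the start bit is followed by a 1 bit, so its duration is exactly one bit time.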
https://en.wikipedia.org/wiki/Abstract%20interpretation
In computer science, abstract interpretation is a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices. It can be viewed as a partial execution of a computer program which gains information about its semantics (e.g., control-flow, data-flow) without performing all the calculations. Its main concrete application is formal static analysis, the automatic extraction of information about the possible executions of computer programs; such analyses have two main usages: inside compilers, to analyse programs to decide whether certain optimizations or transformations are applicable; for debugging or even the certification of programs against classes of bugs. Abstract interpretation was formalized by the French computer scientist working couple Patrick Cousot and Radhia Cousot in the late 1970s. Intuition This section illustrates abstract interpretation by means of real-world, non-computing examples. Consider the people in a conference room. Assume a unique identifier for each person in the room, like a social security number in the United States. To prove that someone is not present, all one needs to do is see if their social security number is not on the list. Since two different people cannot have the same number, it is possible to prove or disprove the presence of a participant simply by looking up his or her number. However it is possible that only the names of attendees were registered. If the name of a person is not found in the list, we may safely conclude that that person was not present; but if it is, we cannot conclude definitely without further inquiries, due to the possibility of homonyms (for example, two people named John Smith). Note that this imprecise information will still be adequate for most purposes, because homonyms are rare in practice. 
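The sound over-approximation just described can be sketched with a tiny interval domain in Python: a set of values (say, the attendees' ages) is abstracted to its minimum and maximum alone, so "definitely absent" answers remain exact while "present" weakens to "possibly present". The class below is an illustrative toy, not a full abstract domain with join and widening operators.

```python
class Interval:
    """Abstract a finite set of numbers by its minimum and maximum alone."""
    def __init__(self, values):
        self.lo, self.hi = min(values), max(values)

    def may_contain(self, n):
        # False is exact ("definitely absent"); True only means
        # "possibly present": the hallmark of a sound over-approximation.
        return self.lo <= n <= self.hi

ages = Interval([23, 31, 31, 47])
print(ages.may_contain(60))  # False: safely ruled out
print(ages.may_contain(30))  # True: possibly present (here, actually absent)
```

The answer for 30 is imprecise but safe, just like matching attendees by name rather than by a unique identifier: the analysis may raise a false alarm, but it never misses a real occurrence.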
However, in all rigor, we cannot say for sure that somebody was present in the room; all we can say is that he or she was possibly here. If the person we are looking up is a criminal, we will issue an alarm; but there is of course the possibility of issuing a false alarm. Similar phenomena will occur in the analysis of programs. If we are only interested in some specific information, say, "was there a person of age n in the room?", keeping a list of all names and dates of birth is unnecessary. We may safely and without loss of precision restrict ourselves to keeping a list of the participants' ages. If this is already too much to handle, we might keep only the age of the youngest person, m, and of the oldest, M. If the question is about an age strictly lower than m or strictly higher than M, then we may safely respond that no such participant was present. Otherwise, we may only be able to say that we do not know. In the case of computing, concrete, precise information is in general not computable within finite time and memory (see Rice's theorem and the halting problem). Abstraction is used to allow for generaliz
https://en.wikipedia.org/wiki/Abstraction%20%28computer%20science%29
In software engineering and computer science, abstraction is the process of generalizing concrete details, such as attributes, away from the study of objects and systems to focus attention on details of greater importance. Abstraction is a fundamental concept in computer science and software engineering, especially within the object-oriented programming paradigm. Examples of this include: the usage of abstract data types to separate usage from working representations of data within programs; the concept of functions or subroutines which represent a specific way of implementing control flow; the process of reorganizing common behavior from groups of non-abstract classes into abstract classes using inheritance and sub-classes, as seen in object-oriented programming languages. Rationale Computing mostly operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures to enable humans to create enormous systems by concentrating on a few issues at a time. These architectures are made of specific choices of abstractions. Greenspun's Tenth Rule is an aphorism on how such an architecture is both inevitable and complex. A central form of abstraction in computing is language abstraction: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning. Computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming languages from the machine language to the assembly language and the high-level language. Each stage can be used as a stepping stone for the next stage. The language abstraction continues, for example, in scripting languages and domain-specific programming languages. Within a programming language, some features let the programmer create new abstractions. These include subroutines, modules, polymorphism, and software components.
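The first example above, an abstract data type separating usage from representation, can be sketched as follows. This is a minimal illustration, not tied to any particular library:

```python
# A minimal abstract data type: clients interact only with push/pop,
# while the backing representation (a Python list here) stays hidden
# and could be swapped without changing client code.

class Stack:
    def __init__(self):
        self._items = []  # hidden representation

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items
```

Client code depends only on push, pop, and is_empty; the backing list could be replaced by, say, a linked structure without any client changing.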
Some other abstractions such as software design patterns and architectural styles remain invisible to a translator and operate only in the design of a system. Some abstractions try to limit the range of concepts a programmer needs to be aware of, by completely hiding the abstractions that they in turn are built on. The software engineer and writer Joel Spolsky has criticised these efforts by claiming that all abstractions are leaky – that they can never completely hide the details below; however, this does not negate the usefulness of abstraction. Some abstractions are designed to inter-operate with other abstractions – for example, a programming language may contain a foreign function interface for making calls to the lower-level language. Abstraction features Programming languages Different programming languages provide different types of abstraction, depending on the intended applications for the language. For example: In object-oriented programming languages such as C++,
https://en.wikipedia.org/wiki/Abstract%20machine
In computer science, an abstract machine is a theoretical model that allows for a detailed and precise analysis of how a computer system functions. It is similar to a mathematical function in that it receives inputs and produces outputs based on predefined rules. Abstract machines differ from literal machines in that they are expected to perform correctly and independently of hardware. Abstract machines are "machines" because they allow step-by-step execution of programmes; they are "abstract" because they ignore many aspects of actual (hardware) machines. A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter. They can be used for purely theoretical reasons as well as models for real-world computer systems. In the theory of computation, abstract machines are often used in thought experiments regarding computability or to analyse the complexity of algorithms. This use of abstract machines, which include finite state machines, Mealy machines, push-down automata, and Turing machines, is fundamental to the field of computational complexity theory. Classification Abstract machines are generally classified into two types, depending on the number of operations they are allowed to undertake at any one time: deterministic abstract machines and non-deterministic abstract machines. A deterministic abstract machine is a system in which a particular beginning state or condition always yields the same outputs. There is no randomness or variation in how inputs are transformed into outputs. In contrast, a non-deterministic abstract machine can provide various outputs for the same input on different executions. Unlike a deterministic algorithm, which gives the same result for the same input regardless of the number of iterations, a non-deterministic algorithm takes various paths to arrive at different outputs.
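A deterministic abstract machine can be sketched as a finite-state machine: because the transition table maps each (state, symbol) pair to exactly one next state, a given input always yields the same result. The machine and names below are illustrative.

```python
# Sketch of a deterministic abstract machine: a finite-state machine
# whose transition table maps (state, symbol) to exactly one successor,
# so the same input always produces the same result.

def run_dfa(transitions, start, accepting, inputs):
    """Run the machine over the input symbols; return acceptance."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]  # exactly one successor
    return state in accepting

# Example machine: accepts binary strings with an even number of 1s.
EVEN_ONES = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd", ("odd", "1"): "even",
}
```

A non-deterministic variant would map (state, symbol) to a set of successor states, and acceptance would ask whether any path through those choices ends in an accepting state.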
Non-deterministic algorithms are helpful for obtaining approximate answers when deriving a precise solution using a deterministic approach is difficult or costly. Turing machines, for example, are some of the most fundamental abstract machines in computer science. These machines conduct operations on a tape (a string of symbols) of any length. Their instructions provide both for modifying the symbol that the machine's pointer is currently at and for moving the pointer along the tape. For example, a rudimentary Turing machine could have a single command, "convert symbol to 1 then move right", and this machine would only produce a string of 1s. This basic Turing machine is deterministic; however, nondeterministic Turing machines that can execute several actions given the same input may also be built. Implementation When implemented physically (in hardware), an abstract machine uses some kind of physical device (mechanical or electronic) to execute the instructions of a programming language. An abstract machine, however, can also be implemented in so
https://en.wikipedia.org/wiki/List%20of%20Intel%20processors
This generational list of Intel processors attempts to present all of Intel's processors from the pioneering 4-bit 4004 (1971) to the present high-end offerings. Concise technical data is given for each product. Latest 13th generation Core Desktop (codenamed "Raptor Lake") 12th generation Core Desktop (codenamed "Alder Lake") Mobile (codenamed "Alder Lake") 11th generation Core Desktop (codenamed "Rocket Lake") Mobile (codenamed "Tiger Lake") 10th generation Core Desktop (codenamed "Comet Lake") Mobile (codenamed "Comet Lake", "Ice Lake", and "Amber Lake") 9th generation Core Desktop (codenamed "Coffee Lake Refresh") 8th generation Core Desktop (codenamed "Coffee Lake") Mobile (codenamed "Coffee Lake", "Amber Lake" and "Whiskey Lake") 7th generation Core Desktop (codenamed "Kaby Lake" and "Skylake-X") Mobile (codenamed "Kaby Lake" and "Apollo Lake") All processors All processors are listed in chronological order. The 4-bit processors Intel 4004 First microprocessor (single-chip IC processor) Introduced November 15, 1971 Clock rate 740 kHz 0.07 MIPS Bus width: 4 bits (multiplexed address/data due to limited pins) PMOS 2,300 transistors at 10 μm Addressable memory 640 bytes Program memory 4 KB (4096 B) Originally designed to be used in Busicom calculator MCS-4 family: 4004 – CPU 4001 – ROM & 4-bit Port 4002 – RAM & 4-bit Port 4003 – 10-bit Shift Register 4008 – Memory+I/O Interface 4009 – Memory+I/O Interface 4211 – General Purpose Byte I/O Port 4265 – Programmable General Purpose I/O Device 4269 – Programmable Keyboard Display Device 4289 – Standard Memory Interface for MCS-4/40 4308 – 8192-bit (1024 × 8) ROM w/ 4-bit I/O Ports 4316 – 16384-bit (2048 × 8) Static ROM 4702 – 2048-bit (256 × 8) EPROM 4801 – 5.185 MHz Clock Generator Crystal for 4004/4201A or 4040/4201A Intel 4040 Introduced in 1974 by Intel Clock speed was 740 kHz (same as the 4004 microprocessor) 3,000 transistors Interrupt features were available Programmable memory size: 8 KB (8192 B) 640 bytes of 
data memory 24-pin DIP The 8-bit processors 8008 Introduced April 1, 1972 Clock rate 500 kHz (8008-1: 800 kHz) 0.05 MIPS Bus width: 8 bits (multiplexed address/data due to limited pins) Enhancement load PMOS logic 3,500 transistors at 10 μm Addressable memory 16 KB Typical in early 8-bit microcomputers, dumb terminals, general calculators, bottling machines Developed in tandem with 4004 Originally intended for use in the Datapoint 2200 microcomputer Key volume deployment in Texas Instruments 742 microcomputer in >3,000 Ford dealerships 8080 Introduced April 1, 1974 Clock rate 2 MHz (very rare 8080B: 3 MHz) 0.29 MIPS Data bus width: 8 bits, address bus: 16 bits Enhancement load NMOS logic 4,500 transistors at 6 μm Assembly language downward compatible with 8008 Addressable memory 64 KB (64 × 1024 B) Up to 10× the performance of the 8008 Used in e.g. the Altair 8800, traffic light controller, cruise missile
https://en.wikipedia.org/wiki/NewtonScript
NewtonScript is a prototype-based programming language created to write programs for the Newton platform. It is heavily influenced by the Self programming language, but modified to be more suited to the needs of mobile and embedded devices. History On August 3, 1993, Apple unveiled the Apple Newton MessagePad. The device had 640 KB RAM, 4 MB ROM, and a 20 MHz ARM 610 microprocessor. The main intention behind the Newton project was to develop a device capable of replacing a computer while being portable. With limited battery and memory, the developers were looking for a programming language capable of meeting these challenges. The developers looked at the C++ programming language but realized that it lacked flexibility. They started focusing on prototype-based languages and were impressed with Smalltalk and Self. Concurrently, Apple was developing another dynamic programming language called Dylan, which was a strong candidate for the Newton platform. However, both Self and Dylan were dropped from consideration, as both were at too early a stage for proper integration. Instead, a team headed by Walter R. Smith developed a new language called NewtonScript. It was influenced by dynamic languages like Smalltalk and by the prototype-based model of Self. Features Although NewtonScript was heavily influenced by Self, there were some differences between the two languages. Differences arose due to three perceived problems with Self. One is that the typical Self snapshot requires 32 MB of RAM to run in, whereas the Newton platform was designed to use only 128 KB for the operating system. This required some serious paring down of the engine to make it fit and still have room for applications. Another issue was performance. Since the language would be used for the entire system, as opposed to just running on an existing operating system, it needed to run as fast as possible.
Finally, the inheritance system in the normal Self engine had a single parent object, whereas GUIs typically have two — one for the objects and another for the GUI layout that is typically handled via the addition of a slot in some sort of GUI-hierarchy object (like View). The syntax was also modified to allow a more text-based programming style, as opposed to Self's widespread use of a GUI environment for programming. This allowed Newton programs to be developed on a computer running the Toolkit, where the programs would be compiled and then downloaded to a Newton device for running. One of the advantages of NewtonScript's prototype-based inheritance was reduced memory usage, a key consideration in the 128 KB Newton. The prototype of a GUI object could actually be stored in ROM, so there was no need to copy default data or functions into working memory. Unlike class-based languages, where creation of an object involves memory being allocated to all of its attributes, NewtonScript's use of prototype inheritance allowed it to allocate memory to a few fields like _proto and _parent instead of
https://en.wikipedia.org/wiki/ARM%20architecture%20family
ARM (stylised in lowercase as arm, formerly an acronym for Advanced RISC Machines and originally Acorn RISC Machine) is a family of RISC instruction set architectures (ISAs) for computer processors. Arm Ltd. develops the ISAs and licenses them to other companies, who build the physical devices that use the instruction set. It also designs and licenses cores that implement these ISAs. Due to their low costs, low power consumption, and low heat generation, ARM processors are useful for light, portable, battery-powered devices, including smartphones, laptops, and tablet computers, as well as embedded systems. However, ARM processors are also used for desktops and servers, including the world's fastest supercomputer (Fugaku) from 2020 to 2022. With over 230 billion ARM chips produced, ARM is the most widely used family of instruction set architectures. There have been several generations of the ARM design. The original ARM1 used a 32-bit internal structure but had a 26-bit address space that limited it to 64 MB of main memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. Arm Ltd. has also released a series of additional instruction sets for different roles; the "Thumb" extension adds both 32- and 16-bit instructions for improved code density, while Jazelle added instructions for directly handling Java bytecode. More recent changes include the addition of simultaneous multithreading (SMT) for improved performance or fault tolerance. History BBC Micro Acorn Computers' first widely successful design was the BBC Micro, introduced in December 1981.
This was a relatively conventional machine based on the MOS Technology 6502 CPU but ran at roughly double the performance of competing designs like the Apple II due to its use of faster dynamic random-access memory (DRAM). Typical DRAM of the era ran at about 2 MHz; Acorn arranged a deal with Hitachi for a supply of faster 4 MHz parts. Machines of the era generally shared memory between the processor and the framebuffer, which allowed the processor to quickly update the contents of the screen without having to perform separate input/output (I/O). As the timing of the video display is exacting, the video hardware had to have priority access to that memory. Due to a quirk of the 6502's design, the CPU left the memory untouched for half of the time. Thus by running the CPU at 1 MHz, the video system could read data during those down times, taking up the total 2 MHz bandwidth of the RAM. In the BBC Micro, the use of 4 MHz RAM allowed the same technique to be used, but running at twice the speed. This allowed it to outperform any similar machine on the market. Acorn Business Computer 1981 was also the year that the IBM Personal Computer was introduced.
https://en.wikipedia.org/wiki/IBM%20POWER
IBM POWER (or IBM Power) may refer to: IBM POWER (software), an IBM operating system enhancement package IBM POWER instruction set architecture, a predecessor to the PowerPC/Power ISA instruction set architecture IBM Power microprocessors, a line of microprocessors implementing the IBM POWER and the PowerPC/Power ISA instruction set architectures IBM Power Systems, a family of server computers
https://en.wikipedia.org/wiki/Datasaab
Datasaab was the computer division of, and later a separate company spun off from, aircraft manufacturer Saab in Linköping, Sweden. History Its history dates back to December 1954, when Saab got a license to build its own copy of BESK, an early Swedish computer design using vacuum tubes, from Matematikmaskinnämnden (the Swedish governmental board for mathematical machinery). This clone was completed in and was named SARA. Its computing power was needed for design calculations for the next generation jet fighter Saab 37 Viggen. Intending to develop a navigational computer to place in an airplane, a team led by Viggo Wentzel came up with an all transistorized prototype computer named D2, completed in 1960, which came to define the company's activities in the following two decades. This development followed two lines. The main purpose was the development of a navigational computer for Viggen. A spinoff was the production of a line of civilian mini and mainframe computers for the commercial market. The military navigational computer CK37 was completed in 1971 and used in Viggen. The first civilian model D21 (1962) was sold to several countries and some 30 units were built. After that, several versions with names like D22 (1966), D220, D23, D5, D15, and D16 were developed. When the Swedish government needed 20 computers in the 1960s to calculate taxes, an evaluation between Saab's and IBM's machines proved Saab's better. Later the D5s were used to set up the first and largest bank terminal system for the Nordic banks, a system which was partly in use until the late 1980s. In 1971, technologies from Standard Radio & Telefon AB (SRT) and Saab were combined to form Stansaab AS, a joint venture that also included the state-owned Swedish Development Company. The company's primary focus was systems for real-time data applied to commercial and aviation applications. In 1975, the D23 system was seriously delayed and the solution was a joint company with Sperry UNIVAC. 
In 1978, this company merged with a division of Saab and became Datasaab. It was later owned by Ericsson, Nokia and ICL. When Intel sued the competitor UMC for patent infringement over technologies including microcode updates of processors and different parts of the processor working asynchronously, UMC could point to an awarded paper describing how these technologies had been used in the D23 already in 1972. Since Intel's patents were from 1978, that paper would prove prior art and imply that the patents never should have been granted at all. The case was later dropped. The academic computer society Lysator at Linköping University was founded in 1973 when a donation of an old used D21 was arranged. The company's history has been documented by members of its veteran society, Datasaabs Vänner ("Friends of Datasaab"), founded in 1993 to document and spread information about the computer history of Sweden, with focus on the region of Linköping and Datasaab. The society has documented t
https://en.wikipedia.org/wiki/Barcode
A barcode or bar code is a method of representing data in a visual, machine-readable form. Initially, barcodes represented data by varying the widths, spacings and sizes of parallel lines. These barcodes, now commonly referred to as linear or one-dimensional (1D), can be scanned by special optical scanners, called barcode readers, of which there are several types. Later, two-dimensional (2D) variants were developed, using rectangles, dots, hexagons and other patterns, called matrix codes or 2D barcodes, although they do not use bars as such. 2D barcodes can be read using purpose-built 2D optical scanners, which exist in a few different forms. 2D barcodes can also be read by a digital camera connected to a microcomputer running software that takes a photographic image of the barcode and analyzes the image to deconstruct and decode the 2D barcode. A mobile device with a built-in camera, such as a smartphone, can function as the latter type of 2D barcode reader using specialized application software (the same sort of mobile device could also read 1D barcodes, depending on the application software). The barcode was invented by Norman Joseph Woodland and Bernard Silver and patented in the US in 1952. The invention was based on Morse code that was extended to thin and thick bars. However, it took over twenty years before this invention became commercially successful. The UK magazine Modern Railways (December 1962, pages 387–389) records how British Railways had already perfected a barcode-reading system capable of correctly reading rolling stock travelling at with no mistakes. An early use of one type of barcode in an industrial context was sponsored by the Association of American Railroads in the late 1960s. Developed by General Telephone and Electronics (GTE) and called KarTrak ACI (Automatic Car Identification), this scheme involved placing colored stripes in various combinations on steel plates which were affixed to the sides of railroad rolling stock.
Two plates were used per car, one on each side, with the arrangement of the colored stripes encoding information such as ownership, type of equipment, and identification number. The plates were read by a trackside scanner located, for instance, at the entrance to a classification yard, while the car was moving past. The project was abandoned after about ten years because the system proved unreliable after long-term use. Barcodes became commercially successful when they were used to automate supermarket checkout systems, a task for which they have become almost universal. The Uniform Grocery Product Code Council had chosen, in 1973, the barcode design developed by George Laurer. Laurer's barcode, with vertical bars, printed better than the circular barcode developed by Woodland and Silver. Their use has spread to many other tasks that are generically referred to as automatic identification and data capture (AIDC). The first successful system using barcodes was in the UK supermarket group Sainsbury's in 1972
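As a small worked example of the redundancy such codes carry, the UPC-A symbology that grew out of Laurer's design appends a check digit computed from the other eleven digits: digits in odd positions (counting from the left) are weighted 3, the rest 1, and the check digit brings the total to a multiple of 10. The helper name below is ours, but the algorithm itself is the standard one.

```python
# UPC-A check digit: weight digits in odd positions (1st, 3rd, ...)
# by 3 and even positions by 1, then add whatever digit is needed to
# reach a multiple of 10.

def upc_check_digit(digits11):
    """Compute the 12th (check) digit from the first 11 UPC-A digits."""
    total = sum((3 if i % 2 == 0 else 1) * d for i, d in enumerate(digits11))
    return (10 - total % 10) % 10
```

For the first eleven digits 0-3-6-0-0-0-2-9-1-4-5 this yields 2, completing the full code 036000291452.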
https://en.wikipedia.org/wiki/PureBasic
PureBasic is a commercially distributed procedural computer programming language and integrated development environment based on BASIC and developed by Fantaisie Software for Windows, Linux, and macOS. An Amiga version is available, although it has been discontinued and some parts of it are released as open-source. The first public release of PureBasic for Windows was on 17 December 2000. It has been continually updated ever since. PureBasic has a "lifetime license model". As cited on the website, the first PureBasic user (who registered in 1998) still has free access to new updates, and this is not going to change. PureBasic compiles directly to IA-32, x86-64, PowerPC or 680x0 instruction sets, generating small standalone executables and DLLs which need no runtime libraries beyond the standard system libraries. Programs developed without using the platform-specific application programming interfaces (APIs) can be built easily from the same source file with little or no modification. PureBasic supports inline assembly, allowing the developer to include FASM assembler commands within PureBasic source code, while using the variables declared in PureBasic source code, enabling experienced programmers to improve the speed of speed-critical sections of code. PureBasic supports and has integrated the OGRE 3D Environment. Other 3D environments such as the Irrlicht Engine are unofficially supported. Programming language Characteristics PureBasic is a native cross-platform 32-bit and 64-bit BASIC compiler. Currently supported systems are Windows, Linux, and macOS. The AmigaOS version is legacy and open-source. The compiler produces native executables and the syntax of PureBasic is simple and straightforward, comparable to plain C without the brackets and with native unicode string handling and a large library of built-in support functions. It can compile console applications, GUI applications, and DLL files.
Hello World example The following single line of PureBasic code will create a standalone x86 executable (4.5 KiB (4,608 bytes) on Windows version) that displays a message box with the text "Hello World".

MessageRequester("Message Box", "Hello World")

And the following variant of the same code, which instead uses an inline Windows API call with no need for declarations or other external references, will create an even smaller 2.0 KiB (2,048 bytes) standalone x86 executable for Windows.

MessageBox_(0, "Hello World", "Message Box", 0)

The following is a console version of the Hello World example.

OpenConsole()           ; Open a console window.
Print("Hello, World!")
Delay(5000)             ; Pause for 5 seconds

Procedural programming PureBasic is a "Second generation BASIC" language, with structured conditionals and loops, and procedure-oriented programming supported. The user is not required to use procedures, so a programmer may opt for a coding style which includes , and . Below is a sample procedure for sorting an array, although SortAr
https://en.wikipedia.org/wiki/Peripheral
A peripheral device, or simply peripheral, is an auxiliary hardware device used to transfer information into and out of a computer. The term peripheral device refers to all hardware components that are attached to a computer and are controlled by the computer system, but they are not the core components of the computer. Several categories of peripheral devices may be identified, based on their relationship with the computer: An input device sends data or instructions to the computer, such as a mouse, keyboard, graphics tablet, image scanner, barcode reader, game controller, light pen, light gun, microphone and webcam; An output device provides output data from the computer, such as a computer monitor, projector, printer, headphones, and computer speakers; An input/output device performs both input and output functions, such as a computer data storage device (including a disk drive, solid-state drive, USB flash drive, memory card and tape drive), modem, network adapter and multi-function printer. Many modern electronic devices, such as Internet-enabled digital watches, video game consoles, smartphones, and tablet computers, have interfaces for use as computer peripheral devices. See also Display device Expansion card Punched card input/output Punched tape Video game accessory References External links Peripheral – Encyclopædia Britannica
https://en.wikipedia.org/wiki/Simputer
The Simputer was a self-contained, open hardware Linux-based handheld computer, first released in 2002. Developed in, and primarily distributed within India, the product was envisioned as a low-cost alternative to personal computers. With initial goals of selling 50,000 Simputers, the project had sold only about 4,000 units by 2005, and has been called a failure by news sources. Design and Hardware The device was designed by the Simputer Trust, a non-profit organization formed in November 1999 by seven Indian scientists and engineers led by Dr. Swami Manohar. The word "Simputer" is an acronym for "simple, inexpensive and multilingual people's computer", and is a trademark of the Simputer Trust. The device includes text-to-speech software and runs the Linux operating system. Similar in appearance to the PalmPilot class of handheld computers, the touch-sensitive screen is operated with a stylus; simple handwriting recognition software is provided by the program Tapatap. The Simputer Trust licensed two manufacturers to build the devices, Encore Software, which has also built the Mobilis for Corporate/Educational purposes and the SATHI for Defence purposes, and PicoPeta Simputers, which released a consumer product named the Amida Simputer. The device's features include a touchscreen, smart card, serial port, and USB connections, and an Infrared Data Association (IrDA) port. It was released in both greyscale and color versions. Software The Simputer uses the Linux kernel (2.4.18 Kernel as of July 2005), and the Alchemy Window Manager (only the Amida Simputer). Software packages include: Scheduling, Calendar, Voice Recording and Playback, simple spreadsheet application, Internet and network connectivity, Web browsing and email, an e-Library, games, and support for Java ME, DotGNU (a free software implementation of .NET), and Flash. In addition, both licensees developed custom applications for microbanking, traffic police, and medical applications.
Deployments In 2004, Simputers were used by the government of Karnataka to automate the process of land records procurement. Simputers were also used in a project in Chhattisgarh for the purpose of e-education. In 2005, they were used in a variety of applications, such as automobile engine diagnostics (Mahindra & Mahindra in Mumbai), tracking of iron-ore movement from mine pithead to shipping point (Dempo, Goa), Microcredit (Sanghamitra, Mysore), Electronic Money Transfer between UK and Ghana (XK8 Systems, UK), and others. In recent times, the Simputer has seen deployment by the police force to track traffic offenders and issue traffic tickets. Commercial production Pilot production of the Simputer started in September 2002. In 2004, the Amida Simputer became commercially available for 12450 and up (approximately US$240). The prices for Amida Simputer vary depending on the screen type (monochrome or colour). By 2006, both licensees had stopped actively marketing their Simputer devices. PicoPeta was ac
https://en.wikipedia.org/wiki/ACID
In computer science, ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. In 1983, Andreas Reuter and Theo Härder coined the acronym ACID, building on earlier work by Jim Gray who named atomicity, consistency, and durability, but not isolation, when characterizing the transaction concept. These four properties are the major guarantees of the transaction paradigm, which has influenced many aspects of development in database systems. According to Gray and Reuter, the IBM Information Management System supported ACID transactions as early as 1973 (although the acronym was created later). Characteristics The characteristics of these four properties as defined by Reuter and Härder are as follows: Atomicity Transactions are often composed of multiple statements. Atomicity guarantees that each transaction is treated as a single "unit", which either succeeds completely or fails completely: if any of the statements constituting a transaction fails to complete, the entire transaction fails and the database is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. A guarantee of atomicity prevents updates to the database from occurring only partially, which can cause greater problems than rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in progress by another database client. 
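The bank-transfer example can be sketched with Python's sqlite3 module, whose connection context manager commits on success and rolls back on error. This is an illustration of the atomicity guarantee, not of any particular production setup; the table and function names are ours.

```python
# Sketch of atomicity with sqlite3: a funds transfer wrapped in a
# transaction either applies both updates or neither.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
con.commit()

def transfer(con, src, dst, amount):
    """Move amount from src to dst atomically; roll back on overdraw."""
    try:
        with con:  # commits on success, rolls back on exception
            con.execute(
                "UPDATE account SET balance = balance - ? WHERE name = ?",
                (amount, src))
            con.execute(
                "UPDATE account SET balance = balance + ? WHERE name = ?",
                (amount, dst))
            balance = con.execute(
                "SELECT balance FROM account WHERE name = ?",
                (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")
    except ValueError:
        pass  # the partial debit/credit was rolled back
```

A failed transfer, for example one that would overdraw the source account, leaves both balances exactly as they were: the "all or nothing" behaviour that atomicity promises.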
At one moment in time, it has not yet happened, and at the next, it has already occurred in whole (or nothing happened if the transaction was canceled in progress). Consistency Consistency ensures that a transaction can only bring the database from one consistent state to another, preserving database invariants: any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This prevents database corruption by an illegal transaction. Referential integrity guarantees the primary key–foreign key relationship. Isolation Transactions are often executed concurrently (e.g., multiple transactions reading and writing to a table at the same time). Isolation ensures that concurrent execution of transactions leaves the database in the same state that would have been obtained if the transactions were executed sequentially. Isolation is the main goal of concurrency control; depending on the isolation level used, the effects of an incomplete transaction might not be visible to other tra
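The atomicity and consistency guarantees described above can be illustrated with a short sketch using Python's built-in sqlite3 module; the two-account schema, balances, and transfer amounts are invented for the example:

```python
import sqlite3

# Hypothetical two-account ledger (schema and values are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY,"
    " balance INTEGER CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Debit src and credit dst as one unit: both updates commit, or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False  # CHECK constraint failed: a balance would have gone negative

transfer(conn, "alice", "bob", 30)    # succeeds: balances become 70 and 80
transfer(conn, "alice", "bob", 500)   # fails: alice's balance would go negative
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
# balances == {'alice': 70, 'bob': 80}: the failed transfer left no partial update
```

Because the second transfer would violate the CHECK constraint, the whole transaction is rolled back and neither account changes, which is exactly the "all or nothing" behavior atomicity demands.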
https://en.wikipedia.org/wiki/Client
Client(s) or The Client may refer to: Client (business) Client (computing), hardware or software that accesses a remote service on another computer Customer or client, a recipient of goods or services in return for monetary or other valuable considerations Client (ancient Rome), an individual protected and sponsored by a patron Client (band), a British synthpop band Client (album), a 2003 album by Client Clients (album), a 2005 album by The Red Chord The Client (novel), a 1993 legal thriller by John Grisham The Client (1994 film), a film adaptation The Client (TV series), a 1995–1996 television series adaptation The Client (Star Wars), a character in The Mandalorian The Client (2011 film), a South Korean courtroom thriller "The Client" (The Office), an episode of The Office See also Client (prostitution) Client state, which is economically, politically, or militarily subordinate to another more powerful state
https://en.wikipedia.org/wiki/All%20My%20Children
All My Children (often shortened to AMC) is an American television soap opera that aired on ABC from January 5, 1970, to September 23, 2011, and on The Online Network (TOLN) from April 29 to September 2, 2013, via Hulu, Hulu Plus, and iTunes. Created by Agnes Nixon, All My Children is set in Pine Valley, Pennsylvania, a fictional suburb of Philadelphia, which is modeled on the actual Philadelphia suburb of Rosemont. The original series featured Susan Lucci as Erica Kane, one of daytime television's most popular characters. All My Children was the first new network daytime drama to debut in the 1970s. Originally owned by Creative Horizons, Inc., the company created by Nixon and her husband, Bob, the show was sold to ABC in January 1975. The series began with half-hour installments, then expanded to a full hour on April 25, 1977. Earlier, the show had experimented with the full-hour format for one week starting on June 30, 1975, after which Ryan's Hope premiered. From 1970 to 1990, All My Children was recorded at ABC's TV18 at 101 West 67th Street, now a 50-story apartment tower. From March 1990 to December 2009, it was taped at ABC's Studio TV23 at 320 West 66th Street in Manhattan, New York City. In December 2009, taping of the series moved from Manhattan to less costly Los Angeles, California. The show was then produced in Stages 1 and 2 at the Andrita Studios in Los Angeles from 2010 to 2011, and then at the Connecticut Film Center in Stamford, Connecticut. All My Children started taping in high definition on January 4, 2010, and began airing in high definition on February 3, 2010. All My Children became the third soap opera to be produced and broadcast in high definition. At one point, the program's popularity positioned it as the most widely recorded television show in the United States.
Also, in a departure from societal norms at the time, All My Children had an audience in the mid-1970s that was estimated to be 30% male. The show ranked No. 1 in the daytime Nielsen ratings in the 1978–79 season. Throughout most of the 1980s and into the early 1990s, All My Children was the No. 2 daytime soap opera on the air. However, like the rest of the soap operas in the United States, All My Children experienced unprecedented declines in its daytime ratings during the 2000s. By the 2010s, it had become one of the least watched soap operas in daytime television. On April 14, 2011, ABC announced that it was canceling All My Children after a run of 41 years. On July 7, 2011, ABC sold the licensing rights of All My Children to third-party production company Prospect Park with the show set to continue on the internet as a series of webisodes. The show taped its final scenes for ABC on August 30, 2011, and its final episode on the network aired on September 23, 2011, with a cliffhanger. On September 26, 2011, the following Monday, ABC replaced All My Children with a new talk show, The Chew. Prospect Pa
https://en.wikipedia.org/wiki/LPF
LPF may refer to: IATA code for Liupanshui Yuezhao Airport, China League for Programming Freedom, an organization promoting free software Lembaga Penapis Filem, or the Film Censorship Board of Malaysia Level playing field Libertarian Party of Florida Lietuvos plaukimo federacija, Lithuanian Swimming Federation Liga Panameña de Fútbol, a professional football league in Panama Liga Profesional de Fútbol, a professional football league in Argentina Liga Profesionistă de Fotbal, a professional football league governing body in Romania Light press fit, an interference fit in engineering Linux packet filter in computing LISA Pathfinder, a European Space Agency spacecraft Liters per flush, as shown on American urinals "1 gpf/3.7 lpf" Low-pass filter, type of signal filter in acoustics Low-power field, in microscopic examination, a wide field of view due to a low level of magnification Nissan Stadium (formerly LP Field), a stadium in Nashville, Tennessee; home of the Tennessee Titans Pim Fortuyn List, Dutch political party
https://en.wikipedia.org/wiki/Larry%20Page
Lawrence Edward Page (born March 26, 1973) is an American businessperson, computer scientist and internet entrepreneur best known for co-founding Google with Sergey Brin. Page was chief executive officer of Google from 1997 until August 2001, when he stepped down in favor of Eric Schmidt, and then again from April 2011 until July 2015, when he became CEO of its newly formed parent organization Alphabet Inc., which was created to deliver "major advancements" as Google's parent company, a post he held until December 4, 2019, when he and his co-founder Brin stepped down from all executive positions and day-to-day roles within the company. He remains an Alphabet board member, employee, and controlling shareholder. As of October 2023, Page has an estimated net worth of $118 billion according to the Bloomberg Billionaires Index, making him the sixth-richest person in the world. He has also invested in the flying car startups Kitty Hawk and Opener. Page is the co-creator and namesake of PageRank, a search ranking algorithm for Google, for which he received the Marconi Prize in 2004 along with co-writer Brin. Early life Lawrence Edward Page was born on March 26, 1973, in Lansing, Michigan. His mother is Jewish; his maternal grandfather later immigrated to Israel, though Page's household while growing up was secular. His father, Carl Victor Page Sr., earned a PhD in computer science from the University of Michigan. BBC reporter Will Smale described him as a "pioneer in computer science and artificial intelligence". Page's paternal grandparents came from a Protestant background. Page's father was a computer science professor at Michigan State University and his mother Gloria was an instructor in computer programming at Lyman Briggs College at the same institution. Larry's parents divorced when he was eight years old, but he maintained a good relationship both with his mother Gloria and with his father's long-term partner, MSU professor Joyce Wildenthal.
During an interview, Page recalled his childhood home "was usually a mess, with computers, science, and technology magazines and Popular Science magazines all over the place", an environment in which he immersed himself. Page was an avid reader during his youth, writing in his 2013 Google founders letter: "I remember spending a huge amount of time pouring [sic] over books and magazines". According to writer Nicholas Carlson, the combined influence of Page's home atmosphere and his attentive parents "fostered creativity and invention". Page also played instruments and studied music composition while growing up. His parents sent him to music summer camp—Interlochen Arts Camp at Interlochen, Michigan, and Page has mentioned that his musical education inspired his impatience and obsession with speed in computing. "In some sense, I feel like music training led to the high-speed legacy of Google for me". In an interview Page said that "In music, you're very cognizant of time. Time is like the primary thing" and
https://en.wikipedia.org/wiki/Simple%20API%20for%20XML
SAX (Simple API for XML) is an event-driven online algorithm for lexing and parsing XML documents, with an API developed by the XML-DEV mailing list. SAX provides a mechanism for reading data from an XML document that is an alternative to that provided by the Document Object Model (DOM). Where the DOM operates on the document as a whole—building the full abstract syntax tree of an XML document for convenience of the user—SAX parsers operate on each piece of the XML document sequentially, issuing parsing events while making a single pass through the input stream. Definition Unlike DOM, there is no formal specification for SAX. The Java implementation of SAX is considered to be normative. SAX processes documents state-independently, in contrast to DOM which is used for state-dependent processing of XML documents. Benefits A SAX parser only needs to report each parsing event as it happens, and normally discards almost all of that information once reported (it does, however, keep some things, for example a list of all elements that have not been closed yet, in order to catch later errors such as end-tags in the wrong order). Thus, the minimum memory required for a SAX parser is proportional to the maximum depth of the XML file (i.e., of the XML tree) and the maximum data involved in a single XML event (such as the name and attributes of a single start-tag, or the content of a processing instruction, etc.). This much memory is usually considered negligible. A DOM parser, in contrast, has to build a tree representation of the entire document in memory to begin with, thus using memory that increases with the entire document length. This takes considerable time and space for large documents (memory allocation and data-structure construction take time). The compensating advantage, of course, is that once loaded any part of the document can be accessed in any order. 
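As a rough sketch of this event model, Python's xml.sax module (an implementation of the SAX API) delivers start- and end-element events in a single pass while the handler keeps only the stack of currently open elements; the DepthCounter class and the tiny sample document below are invented for the illustration:

```python
import xml.sax

# Hypothetical handler: tracks the stack of open elements (memory proportional
# to tree depth, not document size) and counts elements by name.
class DepthCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.stack = []       # elements that have not been closed yet
        self.max_depth = 0
        self.counts = {}

    def startElement(self, name, attrs):
        self.stack.append(name)
        self.max_depth = max(self.max_depth, len(self.stack))
        self.counts[name] = self.counts.get(name, 0) + 1

    def endElement(self, name):
        self.stack.pop()

doc = b"<library><book><title>SAX</title></book><book/></library>"
handler = DepthCounter()
xml.sax.parseString(doc, handler)
# handler.max_depth == 3; handler.counts == {'library': 1, 'book': 2, 'title': 1}
```

Note that nothing of the document survives parsing except what the handler chose to keep, which is why SAX memory use stays bounded by tree depth rather than document length.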
Because of the event-driven nature of SAX, processing documents is generally far faster than DOM-style parsers, so long as the processing can be done in a start-to-end pass. Many tasks, such as indexing, conversion to other formats, very simple formatting and the like can be done that way. Other tasks, such as sorting, rearranging sections, getting from a link to its target, looking up information on one element to help process a later one and the like require accessing the document structure in complex orders and will be much faster with DOM than with multiple SAX passes. Some implementations do not neatly fit either category: a DOM approach can keep its persistent data on disk, cleverly organized for speed (editors such as SoftQuad Author/Editor and large-document browser/indexers such as DynaText do this); while a SAX approach can cleverly cache information for later use (any validating SAX parser keeps more information than described above). Such implementations blur the DOM/SAX tradeoffs, but are often very effective in practice. Due to the nature of DOM, streamed reading from dis
https://en.wikipedia.org/wiki/Portable%20Distributed%20Objects
Portable Distributed Objects (PDO) is an application programming interface (API) for creating object-oriented code that can be executed remotely on a network of computers. It was created by NeXT Computer, Inc. using their OpenStep system, whose use of Objective-C made the package very easy to write. It was characterized by its very light weight and high speed in comparison to similar systems such as CORBA. Versions of PDO were available for Solaris, HP-UX and all versions of the OPENSTEP system, although an agreement was also announced for a version to be made for Digital Unix, then still known as OSF/1, with delivery anticipated after versions for SunOS and Solaris had been released. Product licence pricing for these platforms varied from $2,500 for use on a "small server" up to $10,000 for use on a "large server". A version that worked with Microsoft OLE was also available called D'OLE, allowing distributed code written using PDO on any platform to be presented on Microsoft systems as if they were local OLE objects. PDO was one of a number of distributed object systems created in the early 1990s, a design model where "front end" applications on GUI-based microcomputers would call code running on mainframe and minicomputers for their processing and data storage. Microsoft was evolving OLE into the Component Object Model (COM) and a similar distributed version called DCOM, IBM had their System Object Model (SOM/DSOM), Sun Microsystems was promoting their Distributed Objects Everywhere, and there were a host of smaller players as well. With the exception of the limited functionality in COM, most of these systems were extremely heavyweight, tended to be very large and slow, and often were very difficult to use. PDO, on the other hand, relied on a small number of features in the Objective-C runtime to handle both portability as well as distribution. 
The key feature was the language's support for a "second chance" method in all classes; if a method call on an object failed because the object didn't support it (normally not allowed in most languages due to strong typing), the runtime would then bundle the message into a compact format and pass it back into the object's forwardInvocation method. The normal behavior for forwardInvocation was to return an error, including details taken from the message (the "invocation"). PDO instead supplied a number of new objects with forwardInvocation methods that passed the invocation object to another machine on the network, with various versions to support different networks and platforms. Calling methods on remote objects was almost invisible; after some network setup (a few lines typically) PDO objects were instantiated locally and called the same way as any other object on the system. The PDO object then forwarded the invocation to the remote computer for processing and unbundled the results when they were returned. In comparison with CORBA, PDO programs were typically 1/10 or less in size; it was common
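The forwarding mechanism described above can be loosely approximated in Python, whose __getattr__ hook plays a role similar to Objective-C's second-chance dispatch; RemoteProxy, Calculator, and the bundled tuple format are invented for this sketch, and the "network" hop is simulated locally rather than using any real PDO API:

```python
# Like forwardInvocation:, the proxy intercepts calls to methods it does not
# implement, bundles them up (name plus arguments), and forwards them to
# another object -- here locally, where PDO would send them over the network.
class RemoteProxy:
    def __init__(self, target):
        self._target = target  # stands in for the remote object

    def __getattr__(self, name):
        def forward(*args, **kwargs):
            invocation = (name, args, kwargs)   # the bundled "invocation"
            # ... in PDO this message would be serialized and sent over the wire ...
            method, a, kw = invocation
            return getattr(self._target, method)(*a, **kw)
        return forward

class Calculator:
    def add(self, x, y):
        return x + y

proxy = RemoteProxy(Calculator())
proxy.add(2, 3)  # invoked like a local object; the proxy forwards the call
```

As in PDO, the caller's code is indistinguishable from a local method call; only the proxy knows the invocation was bundled and relayed.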
https://en.wikipedia.org/wiki/Prototype-based%20programming
Prototype-based programming is a style of object-oriented programming in which behaviour reuse (known as inheritance) is performed via a process of reusing existing objects that serve as prototypes. This model can also be known as prototypal, prototype-oriented, classless, or instance-based programming. Prototype-based programming uses generalized objects, which can then be cloned and extended. Using fruit as an example, a "fruit" object would represent the properties and functionality of fruit in general. A "banana" object would be cloned from the "fruit" object, and properties specific to bananas would be appended. Each individual "banana" object would be cloned from the generic "banana" object. Compare this to the class-based paradigm, where a "fruit" class would be extended by a "banana" class. The first prototype-oriented programming language was Self, developed by David Ungar and Randall Smith in the mid-1980s to research topics in object-oriented language design. Since the late 1990s, the classless paradigm has grown increasingly popular. Some current prototype-oriented languages are JavaScript (and other ECMAScript implementations such as JScript and Flash's ActionScript 1.0), Lua, Cecil, NewtonScript, Io, Ioke, MOO, REBOL and AHK. Design and implementation Prototypal inheritance in JavaScript has been described in detail by Douglas Crockford. Advocates of prototype-based programming argue that it encourages the programmer to focus on the behavior of some set of examples and only later worry about classifying these objects into archetypal objects that are later used in a fashion similar to classes. Many prototype-based systems encourage the alteration of prototypes during run-time, whereas only very few class-based object-oriented systems (such as the dynamic object-oriented systems Common Lisp, Dylan, Objective-C, Perl, Python, Ruby, or Smalltalk) allow classes to be altered during the execution of a program.
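The fruit example above can be sketched in Python with a minimal prototype mechanism: each object holds its own slots plus a link to the object it delegates lookups to (the role played by prototype in Self and JavaScript, or proto in Io); the ProtoObject class and slot names are invented for the illustration:

```python
# A minimal prototype system sketched in Python (not a real library API).
class ProtoObject:
    def __init__(self, proto=None, **slots):
        self.proto = proto
        self.slots = dict(slots)

    def get(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.proto is not None:
            return self.proto.get(name)  # delegate up the prototype chain
        raise AttributeError(name)

    def clone(self, **overrides):
        # The new object delegates to self; overrides extend or shadow slots.
        return ProtoObject(proto=self, **overrides)

# Ex nihilo creation, then cloning and extending, as in the fruit example:
fruit = ProtoObject(edible=True, has_seeds=True)
banana = fruit.clone(color="yellow", has_seeds=False)  # shadows has_seeds
my_banana = banana.clone()
# my_banana.get("edible") -> True (inherited from fruit via banana)
# my_banana.get("has_seeds") -> False (shadowed by banana)
```

Lookup walks the chain of prototypes at run time, so altering a slot on fruit would immediately be visible to every banana that has not shadowed it.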
Almost all prototype-based systems are based on interpreted and dynamically typed languages. Systems based on statically typed languages are technically feasible, however. The Omega language discussed in Prototype-Based Programming is an example of such a system, though according to Omega's website even Omega is not exclusively static, but rather its "compiler may choose to use static binding where this is possible and may improve the efficiency of a program." Object construction In prototype-based languages there are no explicit classes. Objects inherit directly from other objects through a prototype property. The prototype property is called prototype in Self and JavaScript, or proto in Io. There are two methods of constructing new objects: ex nihilo ("from nothing") object creation, or through cloning an existing object. The former is supported through some form of object literal declarations, where objects can be defined at runtime through special syntax such as {...} and passed directly to a variable. While most systems
https://en.wikipedia.org/wiki/Display%20PostScript
Display PostScript (or DPS) is a 2D graphics engine system for computers which uses the PostScript (PS) imaging model and language (originally developed for computer printing) to generate on-screen graphics. To the basic PS system, DPS adds a number of features intended to ease working with bitmapped displays and improve performance of some common tasks. Early versions of PostScript display systems were developed at Adobe Systems. During development of the NeXT computers, NeXT and Adobe collaborated to produce the official DPS system, which was released in 1987. NeXT used DPS throughout its history, while versions from Adobe were popular on Unix workstations for a time during the 1980s and 1990s. Design In order to support interactive, on-screen use with reasonable performance, changes were needed: Multiple execution contexts: Unlike a printer environment where a PS interpreter processes one job at a time, DPS would be used in a number of windows at the same time, each with their own settings (colors, brush settings, scale, etc.). This required a modification to the system to allow it to keep several "contexts" (sets of state data) active, one for each process (window). Encoded names: Many of the procedures and data structures in PostScript are looked up by name, string identifier. In DPS these names could be replaced by integers, which are much faster for a computer to find. Interaction support: A number of procedures were defined to handle interaction, including hit detection. Halftone phase: In order to improve scrolling performance, DPS only drew the small portion of the window that became visible, shifting the rest of the image instead of re-drawing it. However this meant that the halftones might not line up, producing visible lines and boxes in the display of graphics. DPS included additional code to properly handle these cases. Modern full-color displays with no halftones have made this idea mostly obsolete. 
Incremental updates: In printing applications the PS code is interpreted until it gets a showpage, at which point it is printed out. This is not suitable for a display situation where a large number of minor updates are needed all the time. DPS included modes to allow semi-realtime display as the instructions were received from the user programs. Bitmap font support: DPS added the ability to map PS fonts onto hand-drawn bitmap fonts and change from one to the other on the fly. Adobe PS's ability to display fonts on low-resolution devices (significantly less than 300 dpi) was very poor. For example, a NeXT screen used only 96 dpi. This PS limitation was worked around by using hand-built bitmap fonts to provide passable quality. Later implementations of PS (including compatible replacements like Ghostscript) provided anti-aliased fonts on grayscale or colour displays, which significantly improved quality. However, this development was too late to be of much use. Modern displays are still around 100 dpi, but have far superior fo
https://en.wikipedia.org/wiki/Load%20balancing%20%28computing%29
In computing, load balancing is the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle. Load balancing is the subject of research in the field of parallel computers. Two main approaches exist: static algorithms, which do not take into account the state of the different machines, and dynamic algorithms, which are usually more general and more efficient but require exchanges of information between the different computing units, at the risk of a loss of efficiency. Problem overview A load-balancing algorithm always tries to answer a specific problem. Among other things, the nature of the tasks, the algorithmic complexity, the hardware architecture on which the algorithms will run, as well as the required error tolerance, must be taken into account. Therefore, a compromise must be found to best meet application-specific requirements. Nature of tasks The efficiency of load balancing algorithms critically depends on the nature of the tasks. Therefore, the more information about the tasks is available at the time of decision making, the greater the potential for optimization. Size of tasks Perfect knowledge of the execution time of each of the tasks makes it possible to reach an optimal load distribution (see the prefix sum algorithm). Unfortunately, this is in fact an idealized case. Knowing the exact execution time of each task is an extremely rare situation. For this reason, there are several techniques for getting an idea of the different execution times. First of all, in the fortunate scenario of having tasks of relatively homogeneous size, it is possible to assume that each of them will require approximately the average execution time. If, on the other hand, the execution time is very irregular, more sophisticated techniques must be used.
One technique is to add some metadata to each task. Depending on the previous execution time for similar metadata, it is possible to make inferences for a future task based on statistics. Dependencies In some cases, tasks depend on each other. These interdependencies can be illustrated by a directed acyclic graph. Intuitively, some tasks cannot begin until others are completed. Assuming that the required time for each of the tasks is known in advance, an optimal execution order must lead to the minimization of the total execution time. However, this is an NP-hard problem and can therefore be difficult to solve exactly. There are algorithms, such as job schedulers, that calculate optimal task distributions using metaheuristic methods. Segregation of tasks Another feature of the tasks critical for the design of a load balancing algorithm is their ability to be broken down into subtasks during execution. The "Tree-Shaped Computation" algorithm presented later takes great advantage of this specificity. S
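As a sketch of the prefix-sum approach mentioned above for tasks of known execution time, the durations can be accumulated and the sequence cut into contiguous chunks whose cumulative work approximates an equal share per worker; the partition function and its greedy cut rule are illustrative simplifications, not a canonical algorithm:

```python
from itertools import accumulate

def partition(durations, workers):
    """Split tasks 0..n-1 into contiguous chunks, one per worker."""
    prefix = list(accumulate(durations))  # prefix[i] = total work of tasks 0..i
    total = prefix[-1]
    chunks, start = [], 0
    for w in range(1, workers):
        target = total * w / workers      # ideal cumulative work after w chunks
        # advance to the first task whose prefix sum reaches the target
        cut = next(i for i in range(start, len(prefix)) if prefix[i] >= target)
        chunks.append(list(range(start, cut + 1)))
        start = cut + 1
    chunks.append(list(range(start, len(durations))))
    return chunks

partition([4, 2, 3, 1, 5, 3], 3)
# -> [[0, 1], [2, 3, 4], [5]], giving per-worker loads of 6, 9, and 3
```

The prefix sums make each cut an O(log n) or O(n) lookup rather than a rescan, which is the point of the technique; a production scheduler would also have to handle more workers than tasks and non-contiguous assignment.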
https://en.wikipedia.org/wiki/Warcraft%20III%3A%20Reign%20of%20Chaos
Warcraft III: Reign of Chaos is a high fantasy real-time strategy video game developed and published by Blizzard Entertainment, released in July 2002. It is the second sequel to Warcraft: Orcs & Humans, after Warcraft II: Tides of Darkness, the third game set in the Warcraft fictional universe, and the first to be rendered in three dimensions. An expansion pack, The Frozen Throne, was released in July 2003. Warcraft III is set several years after the events of Warcraft II, and tells the story of the Burning Legion's attempt to conquer the fictional world of Azeroth with the help of an army of the Undead, led by fallen paladin Arthas Menethil. It chronicles the combined efforts of the Human Alliance, Orcish Horde, and Night Elves to stop them before they can corrupt the World Tree. In the game, as in many real-time strategy (RTS) games, players collect resources, train individual units and heroes, and build bases in order to achieve various goals (in single-player mode), or to defeat the enemy player. Four playable factions can be chosen from: the Humans and the Orcs, both of which appeared in the previous games, and two new factions, the Night Elves and the Undead. Warcraft III's single-player campaign is laid out similarly to that of StarCraft, and is told through the races in a progressive manner. Players can also play matches against the computer, or against others, using local area networking (LAN) or Blizzard's Battle.net gaming platform. After Warcraft II: Beyond the Dark Portal, the last in the Warcraft II saga, was released in 1996, Blizzard began development of a point-and-click adventure game called Warcraft Adventures: Lord of the Clans, which was supposed to continue the story. Lord of the Clans was canceled in favor of Warcraft III in 1998, which was presented to the public at the European Computer Trade Show in September 1999.
The game's design and gameplay were significantly altered during development, with the final game bearing little similarity to the originally presented version (see similarities to StarCraft). The game received acclaim from critics, who praised its presentation and multiplayer features. It is considered an influential example of the RTS genre. Warcraft III was a commercial success, shipping 4.4 million copies to retail stores and selling over a million within a month. In 2020, Blizzard released a remastered version of both Warcraft III and its expansion, The Frozen Throne, called Warcraft III: Reforged. Gameplay Warcraft III takes place on a map of varying size, such as large plains and fields, with terrain features like rivers, mountains, seas, or cliffs. The map is initially hidden from view and only becomes visible through exploration. Areas no longer in sight range of an allied unit or building are covered with the fog of war, meaning that while the terrain remains visible, changes such as enemy troop movements and building construction are not. During a game, players must establish settlements to
https://en.wikipedia.org/wiki/Minnesota%20Public%20Radio
Minnesota Public Radio (MPR) is a public radio network for the state of Minnesota. With its three services, News & Information, YourClassical MPR and The Current, MPR operates a 46-station regional radio network in the upper Midwest. MPR has won more than 875 journalism awards, including the Peabody Award, both the RTNDA Edward R. Murrow Award and the Corporation for Public Broadcasting award of the same name, and the Alfred I. duPont-Columbia University Gold Baton Award. As of September 2011, MPR was tied with WNYC for the most listener support of any public radio network, and had the highest level of recurring monthly donors of any public radio network in the United States. MPR also produces and distributes national public radio programming via its subsidiary American Public Media, which is the second-largest producer of public radio programming in the United States, and the largest producer and distributor of classical music programming. History Minnesota Public Radio began on January 22, 1967, when KSJR-FM first signed on from the campus of Saint John's University in Collegeville, just outside St. Cloud. Colman Barry, then president of Saint John's, saw promise in the then-relatively-new technology of FM radio, and believed radio was an appropriate extension of Saint John's cultural and artistic functions to the broader community. He hired a 23-year-old graduate of St. John's, William H. Kling, as director of broadcasting. It soon became apparent that St. Cloud and surrounding Stearns County did not have enough listeners for the station to be viable, so Kling more than tripled KSJR's power in hopes of reaching the Twin Cities. However, this only provided grade B coverage to Minneapolis and the western portion of the metro, and completely missed St. Paul and the east. Realizing that the station needed to cover the Twin Cities to have a realistic chance of survival, St. John's started KSJN, a low-powered repeater station for the Twin Cities, in 1968.
The operation was awash in debt, and by 1969, St. John's realized it did not have the adequate financial or personnel resources to operate a full-fledged noncommercial radio station. With Barry's support, Saint John's transferred KSJR/KSJN's assets to a community corporation, St. John's University Broadcasting. This corporation later changed its name to Minnesota Educational Radio, and finally Minnesota Public Radio. Kling led MPR as president and CEO for 44 years, before retiring in 2011. MPR was a charter member of National Public Radio in 1971, and had helped lay the groundwork for forming that organization during 1969 and 1970. In 1971, the network moved its operations from Collegeville to St. Paul, funded in part with a news programming "demonstration" grant from the Corporation for Public Broadcasting. New studios were built and KSJN became the flagship station. During the 1970s, additional stations were added and the network expanded across Minnesota. It was during this period KSJN's news depar
https://en.wikipedia.org/wiki/X.500
X.500 is a series of computer networking standards covering electronic directory services. The X.500 series was developed by the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T). ITU-T was formerly known as the Consultative Committee for International Telephony and Telegraphy (CCITT). X.500 was first approved in 1988. The directory services were developed to support the requirements of X.400 electronic mail exchange and name lookup. The International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) were partners in developing the standards, incorporating them into the Open Systems Interconnection suite of protocols. ISO/IEC 9594 is the corresponding ISO/IEC identification. X.500 protocols The protocols defined by X.500 include the Directory Access Protocol (DAP), the Directory System Protocol (DSP), the Directory Information Shadowing Protocol (DISP), and the Directory Operational Binding Management Protocol (DOP). These protocols are typically defined piecemeal throughout multiple specifications and ASN.1 modules. Because these protocols used the OSI networking stack, a number of alternatives to DAP were developed to allow Internet clients to access the X.500 Directory using the TCP/IP networking stack. The most well-known alternative to DAP is the Lightweight Directory Access Protocol (LDAP). While DAP and the other X.500 protocols can now use the TCP/IP networking stack, LDAP remains a popular directory access protocol. Transport Protocols The X.500 protocols traditionally use the OSI networking stack. However, the Lightweight Directory Access Protocol (LDAP) uses TCP/IP for transport. In later versions of the ITU Recommendation X.519, the Internet Directly-Mapped (IDM) protocols were introduced to allow X.500 protocol data units (PDUs) to be transported over the TCP/IP stack. This transport involves ISO Transport over TCP as well as a simple record-based binary protocol to frame protocol datagrams.
X.500 data models The primary concept of X.500 is that there is a single Directory Information Tree (DIT), a hierarchical organization of entries which are distributed across one or more servers, called Directory System Agents (DSAs). An entry consists of a set of attributes, each attribute with one or more values. Each entry has a unique Distinguished Name, formed by combining its Relative Distinguished Name (RDN), which is constructed from one or more attributes of the entry itself, with the RDNs of each of the superior entries up to the root of the DIT. As LDAP implements a very similar data model to that of X.500, the data model is described further in the article on LDAP. X.520 and X.521 together define a set of attributes and object classes to be used for representing people and organizations as entries in the DIT. They are among the most widely deployed white-pages schemas. X.509, the portion of the standard providing an authentication framework, is now also widely used outside of the X.500 directory
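The naming model above can be illustrated with a short sketch. The attribute types used (cn, ou, o, c) are standard X.520 attributes, but the entry values are hypothetical, and the string shown is the LDAP-style leaf-first rendering of a DN rather than any normative X.500 encoding:

```python
# Illustrative sketch of the X.500 naming model: a Distinguished Name (DN)
# is formed by combining an entry's Relative Distinguished Name (RDN) with
# the RDNs of its superior entries up to the root of the DIT.
# The attribute values below are hypothetical examples.

def build_dn(rdns):
    """Join a list of (attribute, value) RDNs, leaf first, into a DN string."""
    return ",".join(f"{attr}={value}" for attr, value in rdns)

# RDNs from the entry itself up to the root of the DIT
rdns = [
    ("cn", "Jane Doe"),     # the entry's own RDN (common name)
    ("ou", "Research"),     # superior entry (organizational unit)
    ("o", "Example Corp"),  # superior entry (organization)
    ("c", "US"),            # entry directly below the root (country)
]

print(build_dn(rdns))  # cn=Jane Doe,ou=Research,o=Example Corp,c=US
```

Each prefix of this chain, read from the right, is itself the DN of one of the superior entries, which is what makes the name hierarchical.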
https://en.wikipedia.org/wiki/ObjectPAL
ObjectPAL is short for Object-Oriented Paradox Application Language, the programming language used by the Borland Paradox database application (now owned by Corel). Paradox, now in its 11th version, is a constituent of Corel's WordPerfect X3 office suite for 32-bit Microsoft Windows. The language is tightly bound to the application's forms, and provides a very rapid and robust development environment for creating database applications for Windows. ObjectPAL is not a full free-standing object-oriented language. It belongs to the family of languages inspired by HyperCard, with influences from PAL (wherever functionality could be kept the same), Smalltalk, and Garnet (a UI language created by Brad Myers). While its objects do encapsulate source code, there is no support for polymorphism, and only a very limited inheritance concept, which is tied to the form's object hierarchy: an object can be controlled by code placed on a higher object in that hierarchy. Even so, ObjectPAL provides a wide-ranging and versatile language for creating Paradox applications. The syntax and structure of the language resemble Visual Basic, but knowing Visual Basic would help someone new to ObjectPAL only in the sense that any other programming skill would be transferable. ObjectPAL was the successor to PAL, the programming language of Paradox for DOS. ObjectPAL was introduced with Paradox for Windows 1.0 in 1993, when the product was owned by Borland. Version 1.0 was quickly succeeded by version 4.5 that same year. ObjectPAL can also be used as a web server scripting language when combined with the Corel Web Server Control OCX, which implements a server API similar to CGI, and its standalone console, the Corel Web Server. See also Comparison of web servers Domain-specific programming languages Borland
https://en.wikipedia.org/wiki/Tokenization%20%28data%20security%29
Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no intrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system. The mapping from original data to a token uses methods that render tokens infeasible to reverse in the absence of the tokenization system, for example tokens created from random numbers. Alternatively, a one-way cryptographic function can be used to convert the original data into tokens, making it difficult to recreate the original data without access to the tokenization system's resources. To deliver such services, the system maintains a vault database of tokens that are connected to the corresponding sensitive data. Protecting the vault is vital to the system, and strong controls must be put in place to ensure database integrity and physical security. The tokenization system must be secured and validated using security best practices applicable to sensitive data protection, secure storage, audit, authentication and authorization. The tokenization system provides data processing applications with the authority and interfaces to request tokens, or to detokenize back to sensitive data. The security and risk reduction benefits of tokenization require that the tokenization system is logically isolated and segmented from data processing systems and applications that previously processed or stored the sensitive data replaced by tokens. Only the tokenization system can tokenize data to create tokens, or detokenize to redeem sensitive data, under strict security controls. The token generation method must be proven to have the property that there is no feasible means, through direct attack, cryptanalysis, side-channel analysis, token mapping table exposure or brute-force techniques, to reverse tokens back to live data. 
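As a rough illustration of the vault-based design described above, here is a minimal sketch in Python, assuming tokens are generated from random numbers (one of the methods the text mentions). The class name and structure are invented for the example; a real system would add the access controls, auditing, and hardened storage discussed here:

```python
# Minimal sketch of a tokenization vault. Tokens are random, so they cannot
# be reversed without the vault itself. This is an illustration only: a
# production system needs durable, secured storage, authentication,
# authorization, and audit logging around every call.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_data = {}   # the vault: token -> sensitive value
        self._data_to_token = {}   # reuse the existing token for repeat values

    def tokenize(self, sensitive_value: str) -> str:
        """Return a token for the value, creating one if none exists yet."""
        if sensitive_value in self._data_to_token:
            return self._data_to_token[sensitive_value]
        token = secrets.token_hex(16)  # random bits: infeasible to reverse
        self._token_to_data[token] = sensitive_value
        self._data_to_token[sensitive_value] = token
        return token

    def detokenize(self, token: str) -> str:
        """Redeem a token; only the vault can perform this mapping."""
        return self._token_to_data[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
assert vault.detokenize(token) == "4111 1111 1111 1111"
assert token != "4111 1111 1111 1111"  # the token carries no card data
```

The essential property is visible in the sketch: the token is unrelated to the original value, so compromising an application that holds only tokens reveals nothing without the vault.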
Replacing live data with tokens in systems is intended to minimize exposure of sensitive data to those applications, stores, people and processes, reducing risk of compromise or accidental exposure and unauthorized access to sensitive data. Applications can operate using tokens instead of live data, with the exception of a small number of trusted applications explicitly permitted to detokenize when strictly necessary for an approved business purpose. Tokenization systems may be operated in-house within a secure isolated segment of the data center, or as a service from a secure service provider. Tokenization may be used to safeguard sensitive data involving, for example, bank accounts, financial statements, medical records, criminal records, driver's licenses, loan applications, stock trades, voter registrations, and other types of personally identifiable information (PII). Tokenization is often used in credit card processing. The PCI Council defines tokenization as "a process by which the primary account number (P
https://en.wikipedia.org/wiki/Ross%20J.%20Anderson
Ross John Anderson (born 15 September 1956) is a researcher, author, and industry consultant in security engineering. He is Professor of Security Engineering at the Department of Computer Science and Technology, University of Cambridge, where he is part of the university's security group. Education Anderson was educated at the High School of Glasgow. In 1978, he graduated with a Bachelor of Arts in mathematics and natural science from the University of Cambridge, where he was an undergraduate student of Trinity College, Cambridge, and subsequently received a qualification in computer engineering. Anderson worked in the avionics and banking industries before returning to the University of Cambridge in 1992 to work on his doctorate under the supervision of Roger Needham and to start his career as an academic researcher. He received his PhD in 1995, and became a lecturer in the same year. Research and career Anderson's research interests are in security, cryptology, dependability and technology policy. In cryptography, he designed with Eli Biham the BEAR, LION and Tiger cryptographic primitives, and co-wrote with Biham and Lars Knudsen the block cipher Serpent, one of the finalists in the Advanced Encryption Standard (AES) competition. He has also discovered weaknesses in the FISH cipher and designed the stream cipher Pike. Anderson has long campaigned for computer security to be studied in a wider social context. Many of his writings emphasise the human, social, and political dimensions of security. On online voting, for example, he writes "When you move from voting in person to voting at home (whether by post, by phone or over the internet) it vastly expands the scope for vote buying and coercion", making the point that it is not just a question of whether the encryption can be cracked. In 1998, Anderson founded the Foundation for Information Policy Research, a think tank and lobbying group on information-technology policy. 
Anderson is also a founder of the UK-Crypto mailing list and of the economics-of-security research field. He is well known among Cambridge academics as an outspoken defender of academic freedoms, intellectual property, and other matters of university politics. He is engaged in the "Campaign for Cambridge Freedoms" and has been an elected member of Cambridge University Council since 2002. In January 2004, the student newspaper Varsity declared Anderson to be Cambridge University's "most powerful person". In 2002, he became an outspoken critic of trusted computing proposals, in particular Microsoft's Palladium operating system vision. Anderson's TCPA FAQ has been characterised by IBM TC researcher David R. Safford as "full of technical errors" and as "presenting speculation as fact." For years, Anderson has argued that, by their nature, large databases will never be free of abuse by breaches of security. He has said that if a large system is designed for ease of access it becomes insecure; if made watertight it becomes im
https://en.wikipedia.org/wiki/Access-control%20list
In computer security, an access-control list (ACL) is a list of permissions associated with a system resource (object or facility). An ACL specifies which users or system processes are granted access to resources, as well as what operations are allowed on given resources. Each entry in a typical ACL specifies a subject and an operation. For instance, if a file object has an ACL that contains (Alice: read, write; Bob: read), this would give Alice permission to read and write the file and give Bob permission only to read it. If the RACF profile CONSOLE CLASS(TSOAUTH) has an ACL that contains (ALICE: READ), this would give ALICE permission to use the TSO CONSOLE command. Implementations Many kinds of operating systems implement ACLs or have a historical implementation; the first implementation of ACLs was in the filesystem of Multics in 1965. Filesystem ACLs A filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights to specific system objects such as programs, processes, or files. These entries are known as access-control entries (ACEs) in Microsoft Windows NT, OpenVMS, and Unix-like operating systems such as Linux, macOS, and Solaris. Each accessible object contains an identifier to its ACL. The privileges or permissions determine specific access rights, such as whether a user can read from, write to, or execute an object. In some implementations, an ACE can control whether or not a user, or group of users, may alter the ACL on an object. One of the first operating systems to provide filesystem ACLs was Multics. PRIMOS featured ACLs at least as early as 1984. In the 1990s the ACL and RBAC models were extensively tested and used to administer file permissions. POSIX ACL The POSIX 1003.1e/1003.2c working group made an effort to standardize ACLs, resulting in what is now known as "POSIX.1e ACL" or simply "POSIX ACL". 
The POSIX.1e/POSIX.2c drafts were withdrawn in 1997 after participants lost interest in funding the project and turned to more powerful alternatives such as NFSv4 ACLs. No live sources of the draft remain on the Internet, but it can still be found in the Internet Archive. Most Unix and Unix-like operating systems (e.g. Linux since kernel 2.5.46, released in November 2002; FreeBSD; and Solaris) support POSIX.1e ACLs (not necessarily draft 17). ACLs are usually stored in the extended attributes of a file on these systems. NFSv4 ACL NFSv4 ACLs are much more powerful than POSIX draft ACLs. Unlike the draft POSIX ACLs, NFSv4 ACLs are defined by a published standard, as part of the Network File System. NFSv4 ACLs are supported by many Unix and Unix-like operating systems, including AIX, FreeBSD, Mac OS X beginning with version 10.4 ("Tiger"), and Solaris with the ZFS filesystem. There are two experimental implementations of NFSv4 ACLs for Linux: NFSv4 ACLs support for the Ext3 filesystem and the more recent Richacls, which brings NFSv4 ACLs support
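The basic entry-by-entry check that all of these ACL variants perform can be sketched in a few lines, using the Alice/Bob file example from earlier in the article. The data layout here is illustrative and does not correspond to any particular operating system's on-disk ACL format:

```python
# Sketch of an access-control check: an ACL is a list of entries, each
# pairing a subject with the set of operations granted to that subject.
# Subjects and operation names are illustrative, not any OS's ACL format.

def is_allowed(acl, subject, operation):
    """Return True if any ACL entry grants `subject` the `operation`."""
    return any(s == subject and operation in ops for s, ops in acl)

# The file's ACL from the article: Alice may read and write, Bob may only read.
file_acl = [
    ("Alice", {"read", "write"}),
    ("Bob", {"read"}),
]

assert is_allowed(file_acl, "Alice", "write")
assert is_allowed(file_acl, "Bob", "read")
assert not is_allowed(file_acl, "Bob", "write")   # operation not granted
assert not is_allowed(file_acl, "Carol", "read")  # no entry for this subject
```

Real implementations add considerable machinery on top of this, such as deny entries, group subjects, inheritance, and defined evaluation order, but the core idea of matching a (subject, operation) pair against a per-object list is the same.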
https://en.wikipedia.org/wiki/NeXT%20Computer
NeXT Computer (also called the NeXT Computer System) is a workstation computer that was developed, marketed, and sold by NeXT Inc. It was introduced in October 1988 as the company's first and flagship product, at a price of $6,500, aimed at the higher-education market. It was designed around the Motorola 68030 CPU and 68882 floating-point coprocessor, with a clock speed of 25 MHz. Its NeXTSTEP operating system is based on the Mach microkernel and BSD-derived Unix, with a proprietary GUI using a Display PostScript-based back end. The enclosure consists of a 1-foot (30.5 cm) die-cast magnesium cube-shaped black case, which led to the machine being informally referred to as "The Cube". The NeXT Computer was renamed the NeXTcube in a later upgrade. The NeXTstation, a more affordable version of the NeXTcube, was released in 1990. Launch The NeXT Computer was launched in October 1988 at a lavish invitation-only event, "NeXT Introduction – the Introduction to the NeXT Generation of Computers for Education", at the Louise M. Davies Symphony Hall in San Francisco, California. The next day, selected educators and software developers were invited to attend—for a $100 registration fee—the first public technical overview of the NeXT computer at an event called "The NeXT Day" at the San Francisco Hilton. It gave those interested in developing NeXT software an insight into the system's software architecture and object-oriented programming. Steve Jobs was the luncheon's speaker. Reception In 1989, BYTE magazine listed the NeXT Computer among the "Excellence" winners of the BYTE Awards, stating that it showed "what can be done when a personal computer is designed as a system, and not a collection of hardware elements". Citing as "truly innovative" the optical drive, DSP and object-oriented programming environment, it concluded that "the NeXT Computer is worth every penny of its $6,500 market price". 
It was, however, not a significant commercial success, failing to reach the high sales volumes of the Apple II, Commodore 64, Macintosh, or Microsoft Windows PCs. The workstations were sold to universities, financial institutions, and government agencies. Legacy A NeXT Computer and its object-oriented development tools and libraries were used by Tim Berners-Lee and Robert Cailliau at CERN to develop the world's first web server (CERN httpd) and web browser (WorldWideWeb). The NeXT platform was used by Jesse Tayler at Paget Press to develop the first electronic app store, called the Electronic AppWrapper, in the early 1990s. Issue #3 was first demonstrated to Steve Jobs at NeXTWorld Expo 1993. The pioneering PC games Doom, Doom II, and Quake (with their respective level editors) were developed by id Software on NeXT machines. Doom engine games such as Heretic, Hexen, and Strife were also developed on NeXT hardware using id's tools. NeXT technology powered the first online food delivery system, CyberSlice, using GIS-based geolocation, on which Steve Jobs performed the firs