| source | text |
|---|---|
https://en.wikipedia.org/wiki/Autodesk%203ds%20Max | Autodesk 3ds Max, formerly 3D Studio and 3D Studio Max, is a professional 3D computer graphics program for making 3D animations, models, games and images. It is developed and produced by Autodesk Media and Entertainment. It has modeling capabilities and a flexible plugin architecture and must be used on the Microsoft Windows platform. It is frequently used by video game developers, many TV commercial studios, and architectural visualization studios. It is also used for movie effects and movie pre-visualization. 3ds Max features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.
History
The original 3D Studio product was created for the DOS platform by the Yost Group, and published by Autodesk. The release of 3D Studio made Autodesk's previous 3D rendering package AutoShade obsolete. After 3D Studio DOS Release 4, the product was rewritten for the Windows NT platform, and renamed "3D Studio MAX". This version was also originally created by the Yost Group. It was released by Kinetix, which was at that time Autodesk's division of media and entertainment.
Autodesk purchased the product at the second release update of the 3D Studio MAX version and internalized development entirely over the next two releases. Later, the product name was changed to "3ds max" (all lower case) to better comply with the naming conventions of Discreet, a Montreal-based software company which Autodesk had purchased.
When it was re-released (release 7), the product was again branded with the Autodesk logo, and the short name was again changed to "3ds Max" (upper and lower case), while the formal product name became the current "Autodesk 3ds Max".
Version history
Features
MAXScript
MAXScript is a built-in scripting language that can be used to automate repetitive tasks, combine existing functionality in new ways, develop new tools and user interfaces, and much more. Plugin modules can be created entirely within MAXScript.
Character Studio
Character Studio was a plugin which, since version 4 of Max, has been integrated into 3ds Max; it helps users animate virtual characters. The system works using a character rig or "Biped" skeleton which has stock settings that can be modified and customized to fit the character meshes and animation needs. This tool also includes robust editing tools for IK/FK switching, pose manipulation, layers and keyframing workflows, and sharing of animation data across different Biped skeletons. These "Biped" objects have other useful features that help accelerate the production of walk cycles and movement paths, as well as secondary motion.
Scene Explorer
Scene Explorer, a tool that provides a hierarchical view of scene data and analysis, facilitates working with more complex scenes. Scene Explorer has the ability to sort, filter, and search a scene by any object typ |
https://en.wikipedia.org/wiki/Pamlico%20Sound | Pamlico Sound is a large estuarine lagoon in North Carolina. The largest lagoon along the North American East Coast, it extends about 80 miles (130 km) long and 15 to 20 miles (24 to 32 km) wide. It is part of a large, interconnected network of similar lagoons that includes Albemarle Sound, Currituck Sound, Croatan Sound, Roanoke Sound, Pamlico Sound, Bogue Sound, Back Sound, and Core Sound, known collectively as the Albemarle-Pamlico sound system. With over 3,000 sq mi (7,800 km²) of open water, the combined estuary is second in size only to Chesapeake Bay in the United States.
The Pamlico Sound is separated from the Atlantic Ocean by the Outer Banks, a row of low, sandy barrier islands that include Cape Hatteras National Seashore, Cape Lookout National Seashore, and Pea Island National Wildlife Refuge. The Albemarle-Pamlico Sound is one of nineteen great waters recognized by the America's Great Waters Coalition.
Hydrology
Pamlico Sound is connected to the north with Albemarle Sound through passages provided by the Roanoke Sound and Croatan Sound. Core Sound is located at the Pamlico's narrow southern end. It is fed by the Neuse and Pamlico rivers (the latter of which is the estuary of the Tar River) from the west, and from the east by Oregon Inlet, Hatteras Inlet, and Ocracoke Inlet, which also provide passage to the Atlantic Ocean. The salinity of the sound averages 20 ppt, compared to an average coastal salinity of 35 ppt in the Atlantic and 3 ppt in the Currituck Sound, which is located north of the Albemarle Sound.
The sound and its ocean inlets are noted for wide expanses of shallow water and occasional shoaling, making the area hazardous for larger vessels. While the deepest hole of the estuary can be found in the Pamlico Sound, depths are generally much shallower. In addition, the shallow waters are susceptible to wind- and barometric pressure-driven tidal fluctuations. This effect is amplified on the tributary rivers, where water levels can change by as much as two feet in three hours when winds are aligned with the rivers' axes and are blowing strongly.
History and current use
In March 1524, the Italian explorer Giovanni da Verrazzano mistook the sound for the Pacific Ocean because of its wide expanse and separation from the Atlantic Ocean by the Outer Banks barrier islands. The sound was named for the Pamlico people, who lived along the sound's mainland banks and were referred to as the Pamouik by the Raleigh expeditions (circa 1584).
Three locations of Pamlico Sound in the Outer Banks between Cape Hatteras and Cape Fear were once under serious consideration by the United States Atomic Energy Commission as an atomic bomb test site during the late 1940s and early 1950s. Portions of Pamlico Sound are used as a bombing and training range for Camp Lejeune.
In 1987, Congress declared the Albemarle-Pamlico Sound an "estuary of national significance." For vacationers to the Outer Banks, the Pamlico Sound is a "watersports playground" providing opportunities for fishing and crabbing, |
https://en.wikipedia.org/wiki/Autoroutes%20of%20Quebec | The Quebec Autoroute System or le système d'autoroute au Québec is a network of freeways within the province of Quebec, Canada, operating under the same principle of controlled access as the Interstate Highway System in the United States and the 400-series highways in neighbouring Ontario. The Autoroutes are the backbone of Quebec's highway system. The speed limit on the Autoroutes is generally 100 km/h (62 mph) in rural areas and 70–90 km/h (43–56 mph) in urban areas; most roads are made of asphalt concrete.
The word autoroute is a blend of auto and route, equivalent to "freeway" or "motorway" in English, and it became the equivalent of "expressway" in French. In the 1950s, when the first Autoroutes were being planned, the design documents called them autostrades from the Italian word autostrada.
Signage
Autoroutes are identified by blue-and-red shields, similar to the American Interstate system. The red header of the shield contains a white image representing a highway overpass, and the blue lower portion of the shield contains the Autoroute's number in white, along with a fleur-de-lis, which is a provincial symbol of Quebec.
Most Autoroute and road traffic signs in the province are in French, though English is also used on federally financed or owned routes, such as the Bonaventure Expressway in Montreal. To surmount the language barrier, most signs in Quebec use pictograms, and text is avoided in most cases, the usual exceptions being the names of control cities. Other exceptions are posted in both languages: the warning against the illegal use of radar detectors when entering the province reads "DÉTECTEURS DE RADAR INTERDITS/RADAR DETECTORS PROHIBITED", and areas where roads can be slippery due to melting ice and snow are marked "DÉGEL/THAW".
Numbering system
Autoroutes are divided into three types – principal routes, deviation routes, and collector routes – and are laid out and numbered in a fashion similar to the Interstate Highway System in the United States. The principal Autoroutes are the major highways of the province, and have single- or double-digit numbers. East-west Autoroutes running parallel to the Saint Lawrence River (for example, Autoroute 20 and Autoroute 40) are assigned even numbers, while north-south Autoroutes running perpendicular to the Saint Lawrence (such as Autoroute 5 and Autoroute 15) are given odd numbers. Deviation and collector Autoroutes both feature triple-digit numbers. Deviation routes are bypasses intended for truck traffic to circumvent urban areas, and are identified by an even number prefixing the number of the nearby Autoroute that it bypasses (for example, Autoroute 440 in Laval). Collector Autoroutes, by contrast, are spur routes into urban areas, and are identified by an odd number prefixing the number of the nearby Autoroute that it branches off of (such as Autoroute 720, a spur of Autoroute 20 into downtown Montreal).
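The numbering rules above are mechanical enough to express in a few lines of code. The following Python sketch (purely illustrative; the function name and output labels are invented for this example, though the route numbers are real) classifies a route number according to those rules:

```python
def classify_autoroute(number: int) -> str:
    """Classify a Quebec Autoroute by its route number (illustrative sketch)."""
    if number < 100:  # principal routes have one or two digits
        axis = "east-west" if number % 2 == 0 else "north-south"
        return f"principal route ({axis})"
    prefix, parent = divmod(number, 100)  # first digit, then parent route
    kind = "deviation (bypass)" if prefix % 2 == 0 else "collector (spur)"
    return f"{kind} route of Autoroute {parent}"

print(classify_autoroute(40))   # principal route (east-west)
print(classify_autoroute(440))  # deviation (bypass) route of Autoroute 40
print(classify_autoroute(720))  # collector (spur) route of Autoroute 20
```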
History
Quebec's first Autoroute was the Autoroute des Laurentides (Laurentia |
https://en.wikipedia.org/wiki/Long%20division | In arithmetic, long division is a standard division algorithm suitable for dividing multi-digit Hindu-Arabic numerals (Positional notation) that is simple enough to perform by hand. It breaks down a division problem into a series of easier steps.
As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. It enables computations involving arbitrarily large numbers to be performed by following a series of simple steps. The abbreviated form of long division is called short division, which is almost always used instead of long division when the divisor has only one digit. Chunking (also known as the partial quotients method or the hangman method) is a less mechanical form of long division prominent in the UK which contributes to a more holistic understanding of the division process.
While related algorithms have existed since the 12th century, the specific algorithm in modern use was introduced by Henry Briggs around 1600.
Education
Inexpensive calculators and computers have become the most common way to solve division problems, eliminating a traditional mathematical exercise, and decreasing the educational opportunity to show how to do so by paper and pencil techniques. (Internally, those devices use one of a variety of division algorithms, the faster ones among which rely on approximations and multiplications to achieve the tasks.) In the United States, long division has been especially targeted for de-emphasis, or even elimination from the school curriculum, by reform mathematics, though traditionally introduced in the 4th or 5th grades.
Method
In English-speaking countries, long division does not use the division slash or division sign symbols but instead constructs a tableau. The divisor is separated from the dividend by a right parenthesis or vertical bar; the dividend is separated from the quotient by a vinculum (i.e., an overbar). The combination of these two symbols is sometimes known as a long division symbol or division bracket. It developed in the 18th century from an earlier single-line notation separating the dividend from the quotient by a left parenthesis.
The process is begun by dividing the left-most digit of the dividend by the divisor. The quotient (rounded down to an integer) becomes the first digit of the result, and the remainder is calculated (this step is notated as a subtraction). This remainder carries forward when the process is repeated on the following digit of the dividend (notated as 'bringing down' the next digit to the remainder). When all digits have been processed and no remainder is left, the process is complete.
An example is shown below, representing the division of 500 by 4 (with a result of 125).
    125    (Explanations)
  4)500
    4      ( 4 × 1 =  4)
    10     ( 5 - 4 =  1)
     8     ( 4 × 2 =  8)
     20    (10 - 8 =  2)
     20    ( 4 × 5 = 20)
      0    (20 - 20 = 0)
A |
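The paper procedure above translates directly into code. Below is a minimal Python sketch (illustrative only; it assumes a non-negative integer dividend and a positive integer divisor) of the same digit-by-digit process:

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Digit-by-digit long division, mirroring the paper method above.

    Returns (quotient, remainder).
    """
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # "bring down" the next digit
        q = remainder // divisor                 # next digit of the quotient
        quotient_digits.append(str(q))
        remainder -= q * divisor                 # subtract and carry the remainder
    return int("".join(quotient_digits)), remainder

print(long_division(500, 4))  # (125, 0), matching the worked example above
```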
https://en.wikipedia.org/wiki/HIPC | HIPC may refer to:
International Conference on High Performance Computing
Heavily indebted poor countries
See also
HICP, Harmonised Index of Consumer Prices |
https://en.wikipedia.org/wiki/Scandinavian%20Multi%20Access%20Reservations%20for%20Travel%20Agents | SMART, Scandinavian Multi Access Reservations for Travel Agents, is a computerized system for ticket reservation.
History
It was created in 1979 by SAS, Braathens and Swedish State Railways. Many travel companies had computerized their systems at the time, and provided terminal interfaces for travel agencies. Each had their own system, often involving widely different codes and procedures. It was cumbersome and expensive for a travel agency to have multiple terminals, each one connected to a different provider. SMART solved this, by providing a single interface over the public data network Datex.
It worked by having a Host Interface Processor (HIP) at each travel company. These would emulate a number of terminals; translate the messages, codes and addresses; wrap them in SMART's own communications protocol; and provide the interface over Datex to the various travel agencies. There was, of course, functionality to limit access.
On the travel agency side, there would be SMART Terminal Equipment (STE) with the reverse function, emulating a server and providing interfaces for terminals. Now however, a travel agent could easily switch between screens for the different companies. The interfaces were similar to those for direct connections, but provided some standardization for codes to ease the transition between the systems.
The STE would also allow printing of documents, tickets, bills and similar, as well as interfacing with the accounting system.
SMART could utilize some of Datex's extra features, like queuing and group numbers, and a logical connection (session) was not dependent on the physical connection (which could be taken down during idle periods to save money, for instance). Parallel sessions could be held with different providers or with the same provider.
SMART spawned off into a company centered in Stockholm in 1984, SMART AB, with the subsidiaries SMART Sverige AB, SMART Danmark A/S, and SMART Norge AS (in Sweden, Denmark and Norway respectively).
SMART is still in use, though no longer over Datex. It has largely been replaced by Amadeus, from the same company: in 2003, SMART AB changed its name to Amadeus Scandinavia.
See also
Travel technology
References
Scandinavia
Travel technology |
https://en.wikipedia.org/wiki/Regional%20Bus%20and%20Rail%20Company%20of%20Ticino | The Regional Bus and Rail Company of Ticino (in Italian, Ferrovie autolinee regionali ticinesi, known by its initials FART) is a limited company in the Swiss southern canton of Ticino, which provides the urban and suburban bus network in and around Locarno in Switzerland. It operates the cable cars between Verdasio and Rasa, Ticino, and the Intragna–Pila–Costa line, on behalf of the owning companies. Together with the Italian company Società Subalpina di Imprese Ferroviarie (SSIF), it operates the railway through the Swiss Centovalli and the Italian Valle Vigezzo, which connects the Gotthard trans-Alpine rail route at Locarno railway station with the Simplon trans-Alpine route at Domodossola in Italy. There are 10 stations on the Swiss side of the frontier and 12 stations on the Italian side; the complete rail journey takes about 1 hour and 45 minutes.
A formal request for the licensing of a rail network was made by the mayor of Locarno, Francesco Balli, in 1898. Although construction of the Centovalli Railway began in May 1913, the collapse of the financing bank later that year, together with the intervention of the First World War, meant that work was not resumed until August 1921; the line was built from each end, meeting on 27 March 1923, and public service was inaugurated on 25 November 1923. On 12 November 1918, Italy and Switzerland had signed a treaty concerning the construction of a narrow-gauge railway from Locarno to Domodossola. This treaty established the mutual acceptance of Italian vehicles in Switzerland and vice versa, without the need for a second approval.
As well as an important public service, the route of the railway is extremely scenic, and the route is popular with tourists.
FART is a member of the Arcobaleno tariff network.
Rolling stock
Traffic on the Centovallina and Vigezzina is handled entirely by electric motor coaches. A series of twelve articulated ABe 4/6 EMUs, built in 1992 by Vevey Technologies, made up most through workings until 2007, often running in pairs. Eight units (ABe 4/6 51–58) are owned by FART and four (ABe 4/6 61–64) by SSIF.
Nine articulated railcars dating from 1959 to 1968, the three-element ABe 8/8 21–24 and the two-element ABe or ABDe 6/6 31–35, cannot work in multiple but can pull a trailer or two (101...111 and 201 from 1923, 120–123 from 1964, 130 from 1948).
In 2004, SSIF ordered three three-car panoramic multiple units from Officine Ferroviarie Veronesi and the Škoda Works; the order was soon changed to three four-car trains. Delivery took place in 2007.
Driving motor, Domodossola direction: ABe 4/4 Pp 81, 83, 85
Driving motor, Locarno direction: Be 4/4 Pp 82, 84, 86
Non-driving motor: Be 4/4 Pi 87, 88, 89
Trailer: Rimorchiata P 810, 811, 812
The ABe and Be classifications do not always match the accommodation classes actually available in each vehicle. In July 2008 the following consists were working:
85 - 812 - 89 - 86 1st class at the Locarno end (whole coach 86)
83 - 87 - 810 - 82 1st class at the Domodossola end (coach 83)
81 - 88 - 811 - 84 1st class at both ends (end compartment of coaches 81 and 84)
Off-season, thes |
https://en.wikipedia.org/wiki/Block%20%28data%20storage%29 | In computing (specifically data transmission and data storage), a block, sometimes called a physical record, is a sequence of bytes or bits, usually containing some whole number of records, having a maximum length: the block size. Data thus structured are said to be blocked. The process of putting data into blocks is called blocking, while deblocking is the process of extracting data from blocks. Blocked data is normally stored in a data buffer, and read or written a whole block at a time. Blocking reduces the overhead and speeds up the handling of the data stream. For some devices, such as magnetic tape and CKD disk devices, blocking reduces the amount of external storage required for the data. Blocking is almost universally employed when storing data to 9-track magnetic tape, NAND flash memory, and rotating media such as floppy disks, hard disks, and optical discs.
Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, though the block size in file systems may be a multiple of the physical block size. This leads to space inefficiency due to internal fragmentation, since file lengths are often not integer multiples of block size, and thus the last block of a file may remain partially empty. This will create slack space. Some newer file systems, such as Btrfs and FreeBSD UFS2, attempt to solve this through techniques called block suballocation and tail merging. Other file systems such as ZFS support variable block sizes.
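As a small illustration of internal fragmentation, the following Python sketch (illustrative only; the 4096-byte block size is a typical but not universal choice) computes how many blocks a file occupies and how much slack space is left in its final block:

```python
import math

def block_usage(file_size: int, block_size: int = 4096) -> tuple[int, int]:
    """Blocks consumed by a file and the slack in its last block (sketch)."""
    blocks = math.ceil(file_size / block_size)   # whole blocks allocated
    slack = blocks * block_size - file_size      # unused bytes in the last block
    return blocks, slack

print(block_usage(10_000))  # (3, 2288): 3 blocks allocated, 2288 bytes of slack
```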
Block storage is normally abstracted by a file system or database management system (DBMS) for use by applications and end users. The physical or logical volumes accessed via block I/O may be devices internal to a server, directly attached via SCSI or Fibre Channel, or distant devices accessed via a storage area network (SAN) using a protocol such as iSCSI, or AoE. DBMSes often use their own block I/O for improved performance and recoverability as compared to layering the DBMS on top of a file system.
References
Computer data storage
Data transmission |
https://en.wikipedia.org/wiki/Block%20size | Block size can refer to:
Block (data storage), the size of a block in data storage and file systems.
Block size (cryptography), the minimal unit of data for block ciphers.
Block (telecommunications)
Block size (mathematics)
The size of a city block |
https://en.wikipedia.org/wiki/Sobig | The Sobig Worm was a computer worm that infected millions of Internet-connected, Microsoft Windows computers in August 2003.
Although there were indications that tests of the worm were carried out as early as August 2002, Sobig.A was first found in the wild in January 2003. Sobig.B was released on May 18, 2003. It was first called Palyh, but was later renamed to Sobig.B after anti-virus experts discovered it was a new generation of Sobig. Sobig.C was released May 31 and fixed the timing bug in Sobig.B. Sobig.D came a couple of weeks later followed by Sobig.E on June 25. On August 19, Sobig.F became known and set a record in sheer volume of e-mails.
The worm was most widespread in its "Sobig.F" variant.
Sobig is the second-fastest computer worm ever to have entered the wild, surpassed only by Mydoom.
Sobig was not only a computer worm in the sense that it replicates by itself, but also a Trojan horse in that it masquerades as something other than malware. The Sobig.F worm would appear as an electronic mail with one of the following subjects:
Re: Approved
Re: Details
Re: Re: My details
Re: Thank you!
Re: That movie
Re: Wicked screensaver
Re: Your application
Thank you!
Your details
It would contain the text: "See the attached file for details" or "Please see the attached file for details", as well as an attachment as one of the following names:
application.pif
details.pif
document_9446.pif
document_all.pif
movie0045.pif
thank_you.pif
your_details.pif
your_document.pif
wicked_scr.scr
Technical details
The Sobig viruses infected a host computer by way of the above-mentioned attachments. When an attachment was opened, the worm replicated using its own SMTP agent engine. E-mail addresses to be targeted by the virus were gathered from files on the host computer. The file extensions searched for e-mail addresses are:
.dbx
.eml
.hlp
.htm
.html
.mht
.wab
.txt
The Sobig.F variant was programmed to contact 20 IP addresses on UDP port 8998 on August 26, 2003 to install some program or update itself. It is unclear what this program was, but earlier versions of the virus had installed the WinGate proxy server software—a legitimate product—in a configuration allowing it to be used as a backdoor for spammers to distribute unsolicited e-mail.
The Sobig worm was written using the Microsoft Visual C++ compiler, and subsequently compressed using a data compression program called tElock.
The Sobig.F worm deactivated itself on September 10, 2003. On November 5 the same year, Microsoft announced that it would pay $250,000 for information leading to the arrest of the creator of the Sobig worm. To date, the perpetrator has not been caught.
References
See also
Timeline of notable computer viruses and worms
Email worms
Computer worms |
https://en.wikipedia.org/wiki/Life%20in%20the%20United%20Kingdom%20test | The Life in the United Kingdom test is a computer-based test constituting one of the requirements for anyone seeking Indefinite Leave to Remain in the UK or naturalisation as a British citizen. It is meant to prove that the applicant has a sufficient knowledge of British life. The test is a requirement under the Nationality, Immigration and Asylum Act 2002. It consists of 24 questions covering topics such as British values, history, traditions and everyday life. The test has been frequently criticised for containing factual errors, expecting candidates to know information that would not be expected of native-born citizens as well as being just a "bad pub quiz" and "unfit for purpose".
Purpose
A pass in the test fulfils the requirements for "sufficient knowledge of life in the United Kingdom" which were introduced for naturalisation on 1 November 2005 and which were introduced for settlement on 2 April 2007.
Initially, attending the "ESOL (English for Speakers of Other Languages) with Citizenship" course was an alternative to passing the Life in the UK Test, but since 2013 applicants have been required both to meet the knowledge-of-language requirement and to pass the test. The English-language requirement can be satisfied by holding an English qualification at B1, B2, C1 or C2 level, or by completing a degree taught or researched in English. Legally, sufficient knowledge of Welsh or Scottish Gaelic can also be used to fulfil the language requirement, but the mechanism by which this can be achieved is not clear in legislation.
However, Home Office guidance states that if anyone wishes to take the Life in the UK Test in these languages (for instance Gaelic-speaking Canadians or Welsh-speaking Argentinians), arrangements will be made for them to do so. One test each in Scottish Gaelic and in Welsh has been taken as of 2020.
Plans to introduce such a test were announced in September 2002 by the then Home Secretary, David Blunkett. He appointed a "Life in the United Kingdom Advisory Group", chaired by Sir Bernard Crick, to formulate the test's content. In 2003, the Group produced a report, "The New and the Old", with recommendations for the design and administration of the test. There was dissent among the committee members on certain issues, and many of the recommendations were not adopted by the Government. In 2005, plans to require foreign-born religious ministers to take the test earlier than other immigrants were abandoned by the then Immigration Minister, Tony McNulty.
Content
The test lasts for 45 minutes, during which time the candidate is required to answer 24 multiple-choice questions. To pass the test, the candidate must receive a grade of 75% or higher, i.e. at least 18 correct answers to the 24 questions. Testing is not directly administered by UK Visas and Immigration (which replaced the UK Border Agency in 2013), but is carried out by Learndirect, a private company. As of 20 July 2021 the cost of the test |
https://en.wikipedia.org/wiki/Binary%20large%20object | A binary large object (BLOB or blob) is a collection of binary data stored as a single entity. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob. They can exist as persistent values inside some databases or version control system, or exist at runtime as program variables in some programming languages. Blobs are not to be confused with binary files stored in a file system.
Blobs were originally just big amorphous chunks of data invented by Jim Starkey at DEC, who describes them as "the thing that ate Cincinnati, Cleveland, or whatever" from "the 1958 Steve McQueen movie", referring to The Blob. Later, Terry McKiever, a marketing person for Apollo, felt that it needed to be an acronym and invented the backronym Basic Large Object. Then Informix invented an alternative backronym, Binary Large Object.
The data type and definition were introduced to describe data not originally defined in traditional computer database systems, particularly because it was too large to store practically at the time the field of database systems was first being defined in the 1970s and 1980s. The data type became practical when disk space became cheap. This definition gained popularity with IBM Db2.
The term is used in NoSQL databases, especially in key-value store databases such as Redis. The term is also used by languages that allow runtime manipulation of Blobs, like JavaScript.
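As a concrete illustration, the sketch below stores and retrieves a blob using Python's built-in sqlite3 module (the table and column names are invented for this example):

```python
import sqlite3

# Minimal sketch: round-trip binary data through a BLOB column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT, data BLOB)")

payload = bytes(range(256))  # stand-in for an image or other binary object
conn.execute("INSERT INTO files VALUES (?, ?)", ("example.bin", payload))

(stored,) = conn.execute(
    "SELECT data FROM files WHERE name = ?", ("example.bin",)
).fetchone()
assert stored == payload  # the blob round-trips byte-for-byte
conn.close()
```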
The name "blob" is further borrowed by the deep learning software Caffe to represent multi-dimensional arrays.
In the world of free and open-source software, the term is also borrowed to refer to proprietary device drivers, which are distributed without their source code, exclusively through binary code; in such use, the term binary blob is common, even though the first letter in the blob abbreviation already stands for binary.
Depending on the implementation and culture around usage, the concept might be alternately referred to as a "basic large object" or "binary data type".
See also
Binary blob
Character large object
References
Databases
Data types |
https://en.wikipedia.org/wiki/David%20Kaye%20%28voice%20actor%29 | David Kaye is a Canadian-American voice actor. He is best known for animation roles such as Megatron in five of the Transformers series (Beast Wars, Beast Machines, Armada, Energon, and Cybertron), Optimus Prime in Transformers: Animated, Professor X in X-Men: Evolution, Cronus in Class of the Titans, Khyber in Ben 10: Omniverse, several characters in Avengers Assemble, and Duckworth in the reboot of DuckTales. He is also known for anime roles including Sesshōmaru in Inuyasha and Treize Khushrenada in Mobile Suit Gundam Wing, and video game roles such as Clank in the Ratchet & Clank series and Nathan Hale in the Resistance series. He is also the announcer for Last Week Tonight with John Oliver on HBO and voiced the Celestial Arishem in the Marvel Cinematic Universe film Eternals. He did voice work for various other studios in Vancouver, British Columbia, Canada for many years while occasionally doing voice work in Los Angeles, California, US, before fully relocating there in 2007.
Early life
During the 1980s, Kaye moved to Vancouver to work in radio. He also did theater, playing George in a production of Who's Afraid of Virginia Woolf? and Elwood P. Dowd in a production of Harvey.
Career
Kaye's voice acting career began in 1989 with General Hawk in the DiC animated series G.I. Joe. Working as an on-air talent for radio station CKLG (LG73), he quickly became less interested in radio as both on-camera and voice roles started taking up more of his time. On-camera opportunities came in the form of guest roles on TV series and movies such as The X-Files, Battlestar Galactica, and Happy Gilmore. During this time, he was cast in several animation shows and video games, including a role as Megatron in Beast Wars. This began a decades-long relationship with the Transformers franchise. In 2007, Kaye became the first and (as of 2021) only actor in the franchise to play both Megatron and Optimus Prime in regular roles, voicing the latter for Cartoon Network's Transformers: Animated. He also lent his voice to the later series Transformers: Prime and Transformers: Robots in Disguise.
In anime, Kaye has been the voice behind Sesshomaru in the English-dubbed InuYasha series, Treize Khushrenada in Mobile Suit Gundam Wing, Recoome in Dragon Ball Z (1996–98), and Soun Tendo in Ranma ½. His involvement in anime has led to several appearances at conventions. Kaye later moved to Los Angeles and has since worked on Ben 10: Omniverse as the villain Khyber, on Regular Show as Reginald, and on Xiaolin Chronicles as Clay Bailey, F-Bot, and Chase Young. He has voiced several characters on the Avengers Assemble animated series and Max Tennyson in the reboot of Ben 10. He shared the role of the Stretch Monster with Miguel Ferrer in Stretch Armstrong and the Flex Fighters, and also took over the late actor's role of Vandal Savage in Young Justice: Outsiders.
Kaye also narrates film trailers and network promos for ABC, Fox, The CW, National Geographic, TLC, FX, and others.
Per |
https://en.wikipedia.org/wiki/The%20Uneasy%20Case%20for%20Copyright | "The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies, and Computer Programs" was an article in the Harvard Law Review by future United States Supreme Court Justice Stephen Breyer in 1970, while he was still a legal academic. The article was a challenge to copyright expansionism, which was just entering its modern phase, and was still largely unquestioned in the United States. It became one of the most widely cited skeptical examinations of copyright.
In this piece, Breyer made several points:
That the only defensible justification of copyright is a consequentialist economic balance between maximizing the distribution of works and encouraging their production.
That there is significant historical, logical, and anecdotal evidence which shows that exclusive rights will provide only limited increases in the volume of literary production, particularly within certain sections of the book market.
That there was limited justification for contemporary expansions in the scope and duration of copyright.
There was a formal reply by law student Barry W. Tyerman in the UCLA Law Review, and a rejoinder by Breyer, but the article appears to have had little impact on copyright policy in the lead up to the Copyright Act of 1976.
Seventeen years later, in their mathematical law and economics article "An Economic Analysis of Copyright Law" (1989), William Landes and Richard Posner systematically analyzed each of Breyer's arguments and concluded that "they do not make a persuasive case for eliminating copyright protection." In particular they noted that many of his arguments rested on imperfect copying technology, an argument which weakens with technological innovation.
References
Copyright law literature
Works originally published in the Harvard Law Review
1970 documents
1970 in American law
Works about the information economy
Economics of intellectual property |
https://en.wikipedia.org/wiki/RSTS/E | RSTS is a multi-user time-sharing operating system developed by Digital Equipment Corporation (DEC, now part of Hewlett-Packard) for the PDP-11 series of 16-bit minicomputers. The first version of RSTS (RSTS-11, Version 1) was implemented in 1970 by the DEC software engineers who had developed the TSS-8 time-sharing operating system for the PDP-8. The last version of RSTS (RSTS/E, Version 10.1) was released in September 1992. RSTS-11 and RSTS/E are usually referred to just as "RSTS", and this article will generally use the shorter form. RSTS-11 supports the BASIC programming language, an extended version called BASIC-PLUS, developed under contract by Evans Griffiths & Hart of Boston. Starting with RSTS/E version 5B, DEC added support for additional programming languages by emulating the execution environment of the RT-11 and RSX-11 operating systems.
Acronyms and abbreviations
BTSS (Basic Time Sharing System – never marketed) – The first name for RSTS.
CCL (Concise Command Language) – equivalent to a command to run a program kept in the Command Line Interpreter.
CIL (Core Image Library) – A container file format used to hold one or more standalone (bootable) programs and operating systems, such as RSTS through version 6A.
CILUS (Core Image Library Update and Save) – DOS-11 program to manipulate a CIL file.
CLI (Command Line Interpreter) – See Command-line interface.
CUSPs (Commonly Used System Programs) – System management applications like Task Manager or Registry Editor on Microsoft Windows. On RSTS-11, CUSPs were written in BASIC-Plus just like user programs.
DCL (Digital Command Language) – See DIGITAL Command Language.
DTR (DATATRIEVE) – programming language
FIP (File Information Processing) – resident area for issuing file requests
FIRQB (File Information Request Queue Block) – A data structure containing information about file requests.
KBM (Keyboard Monitor) – Analogous to Command Line Interpreter.
LAT (Local Area Transport) – Digital's predecessor to TCP/IP
MFD (Master File Directory) – Root directory of file system.
PBS (Print Batch Services)
PIP (Peripheral Interchange Program)
PPN (Project Programmer Number) – Analogous to GID and UID in Unix.
RDC (Remote Diagnostics Console) – A replacement front panel for a PDP-11 which used a serial connection to the console terminal or a modem instead of lights and toggle switches to control the CPU.
RSTS-11 (Resource Sharing Time Sharing System) – The first commercial product name for RSTS
RSTS/E (Resource Sharing Timesharing System Extended) – The current implementation of RSTS.
RTS (Run Time System) – Read only segment of code provided by the supplier which would be mapped into the high end of a 32K, 16-bit word address space that a user program would use to interface with the operating system. Only one copy of an RTS would be loaded into RAM, but would be mapped into the address space of any user program that required it. In essence, shared, re-entrant code, to reduce RAM requirements, by shari |
https://en.wikipedia.org/wiki/Local%20search%20%28optimization%29 | In computer science, local search is a heuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions (the search space) by applying local changes, until a solution deemed optimal is found or a time bound is elapsed.
Local search algorithms are widely applied to numerous hard computational problems, including problems from computer science (particularly artificial intelligence), mathematics, operations research, engineering, and bioinformatics. Examples of local search algorithms are WalkSAT, the 2-opt algorithm for the Traveling Salesman Problem and the Metropolis–Hastings algorithm.
While it is sometimes possible to substitute gradient descent for a local search algorithm, gradient descent is not in the same family: although it is an iterative method for local optimization, it relies on an objective function’s gradient rather than an explicit exploration of the solution space.
Examples
Some problems where local search has been applied are:
The vertex cover problem, in which a solution is a vertex cover of a graph, and the target is to find a solution with a minimal number of nodes
The traveling salesman problem, in which a solution is a cycle containing all nodes of the graph and the target is to minimize the total length of the cycle
The boolean satisfiability problem, in which a candidate solution is a truth assignment, and the target is to maximize the number of clauses satisfied by the assignment; in this case, the final solution is of use only if it satisfies all clauses
The nurse scheduling problem where a solution is an assignment of nurses to shifts which satisfies all established constraints
The k-medoid clustering problem and other related facility location problems for which local search offers the best known approximation ratios from a worst-case perspective
The problem of finding stable configurations in a Hopfield neural network
Description
Most problems can be formulated in terms of a search space and target in several different manners. For example, for the traveling salesman problem a solution can be a route visiting all cities and the goal is to find the shortest route. But a solution can also be a path, and being a cycle is part of the target.
A local search algorithm starts from a candidate solution and then iteratively moves to a neighbor solution; a neighborhood being the set of all potential solutions that differ from the current solution by the minimal possible extent. This requires a neighborhood relation to be defined on the search space. As an example, the neighborhood of vertex cover is another vertex cover only differing by one node. For boolean satisfiability, the neighbors of a boolean assignment are those that have a single variable |
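The following Python sketch (illustrative only; function and variable names are invented) applies this idea to the boolean satisfiability setting described above, hill-climbing through single-variable-flip neighbors until no flip improves the number of satisfied clauses:

```python
import random

def local_search_maxsat(clauses, n_vars, max_steps=1000, seed=0):
    """Hill-climbing local search for MAX-SAT (illustrative sketch).

    A clause is a list of signed 1-based literals, e.g. [1, -2] means
    (x1 OR NOT x2). Neighbors differ in a single variable assignment.
    """
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]

    def satisfied(a):
        return sum(any((lit > 0) == a[abs(lit) - 1] for lit in c) for c in clauses)

    score = satisfied(assign)
    for _ in range(max_steps):
        best_flip, best_score = None, score
        for v in range(n_vars):            # examine all single-flip neighbors
            assign[v] = not assign[v]
            s = satisfied(assign)
            if s > best_score:
                best_flip, best_score = v, s
            assign[v] = not assign[v]      # undo the trial flip
        if best_flip is None:              # no improving neighbor: local optimum
            break
        assign[best_flip] = not assign[best_flip]
        score = best_score
    return assign, score

clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
print(local_search_maxsat(clauses, 3))
```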
https://en.wikipedia.org/wiki/Task | Task may refer to:
Task (computing), in computing, a program execution context
Task (language instruction) refers to a certain type of activity used in language instruction
Task (project management), an activity that needs to be accomplished within a defined period of time
Task (teaching style)
TASK party, a series of improvisational participatory art-related events organized by artist Oliver Herring
Two-pore-domain potassium channel, a family of potassium ion channels
See also
The Task (disambiguation)
Task force (disambiguation)
Task switching (disambiguation) |
https://en.wikipedia.org/wiki/The%20Nashville%20Network | The Nashville Network, usually referred to as TNN, was an American country music-oriented cable television network. Programming included music videos, taped concerts, movies, game shows, syndicated programs, and numerous talk shows. On September 25, 2000, after an attempt to attract younger viewers failed, TNN's country music format was changed and the network was renamed The National Network, eventually becoming Spike TV in 2003 and Paramount Network in 2018.
On November 1, 2012, the network was revived as a digital broadcast television network. However, this lasted only 11 months, and the channel changed its name to Heartland on October 9, 2013.
History
Beginnings
The Nashville Network was launched as a basic cable and satellite television network on March 7, 1983, operating from the now-defunct Opryland USA theme park near Nashville, Tennessee. Country Music Television (CMT), founded by Glenn D. Daniels, beat TNN's launch by two days to become the first country music cable television network.
TNN was originally owned by WSM, Inc., a subsidiary of National Life and Accident Insurance Company that owned several broadcasting and tourism properties in Nashville and the traditional country radio and stage show The Grand Ole Opry, and initially focused on country music-related original programming. TNN's flagship shows included Nashville Now and Grand Ole Opry Live, both of which were broadcast live from Opryland USA. During TNN's first year of broadcasting, American General Corporation, parent company of NL&AIC, put the network up for sale, along with the other NL&AI properties, in an effort to focus on its core businesses.
Gaylord ownership (1983–1997)
The Gaylord Entertainment Company purchased TNN and the Opryland properties in the latter half of 1983. Much of TNN's programming during the Gaylord era was originally produced by Opryland Productions, also owned by Gaylord Entertainment. Programming included variety shows, talk shows, game shows (such as Fandango and Top Card), outdoors shows, and lifestyle shows, all centered in some way around country music or Southern U.S. culture. Some of TNN's popular on-air talent included Miss America 1983 Debra Maffett (TNN Country News), and local Nashville media personalities Ralph Emery, Dan Miller, Charlie Chase, Lorianne Crook and Gary Beaty, as well as established stars such as country music singer Bill Anderson and actresses Florence Henderson and Dinah Shore. TNN even created stars, such as wily professional fisherman Bill Dance. Grand Ole Opry singer and 1960s country star Bobby Lord, known for his skills as a sportsman and living in his native Florida, hosted the program Country Sportsman, featuring hunting and fishing excursions with various country stars. Inspired by ABC's The American Sportsman, the TNN show was later renamed Celebrity Sportsman after ABC objected to the similarity to its program. One of the most popular shows that aired on the network during this time was a variety show h |
https://en.wikipedia.org/wiki/TNN | TNN may refer to:
The National Network, a former name of the U.S. TV channel Paramount Network
The Nashville Network, an American country music-oriented cable television network
Tainan Airport (airport code TNN)
Times News Network, a news agency started by The Times of India
TNN Bass Tournament of Champions, fishing video game
TNN Motorsports Hardcore Heat, racing video game
TNN Radio, an American radio station in Anaheim, California
TNN24, a Thai news channel
Tribal News Network, a Pakistani news network |
https://en.wikipedia.org/wiki/Chemometrics | Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science, in order to address problems in chemistry, biochemistry, medicine, biology and chemical engineering. In this way, it mirrors other interdisciplinary fields, such as psychometrics and econometrics.
Background
Chemometrics is applied to solve both descriptive and predictive problems in experimental natural sciences, especially in chemistry. In descriptive applications, properties of chemical systems are modeled with the intent of learning the underlying relationships and structure of the system (i.e., model understanding and identification). In predictive applications, properties of chemical systems are modeled with the intent of predicting new properties or behavior of interest. In both cases, the datasets can be small but are often large and complex, involving hundreds to thousands of variables, and hundreds to thousands of cases or observations.
Chemometric techniques are particularly heavily used in analytical chemistry and metabolomics, and the development of improved chemometric methods of analysis also continues to advance the state of the art in analytical instrumentation and methodology. It is an application-driven discipline, and thus, while standard chemometric methodologies are very widely used industrially, academic groups are dedicated to the continued development of chemometric theory, methods and applications.
Origins
Although one could argue that even the earliest analytical experiments in chemistry involved a form of chemometrics, the field is generally recognized to have emerged in the 1970s as computers became increasingly exploited for scientific investigation. The term 'chemometrics' was coined by Svante Wold in a 1971 grant application, and the International Chemometrics Society was formed shortly thereafter by Svante Wold and Bruce Kowalski, two pioneers in the field. Wold was a professor of organic chemistry at Umeå University, Sweden, and Kowalski was a professor of analytical chemistry at University of Washington, Seattle.
Many early applications involved multivariate classification; numerous quantitative predictive applications followed, and by the late 1970s and early 1980s a wide variety of data- and computer-driven chemical analyses were occurring.
Multivariate analysis was a critical facet even in the earliest applications of chemometrics. Data from infrared and UV/visible spectroscopy are often counted in thousands of measurements per sample. Mass spectrometry, nuclear magnetic resonance, atomic emission/absorption and chromatography experiments are also all by nature highly multivariate. The structure of these data was found to be conducive to using techniques such as principal components analysis (PCA), pa |
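As an illustration of the kind of multivariate analysis involved, the following Python sketch (illustrative only; the random matrix merely stands in for real spectra) performs a principal components analysis via the singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))     # 50 samples x 200 wavelengths (synthetic)

Xc = X - X.mean(axis=0)            # mean-center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                     # sample coordinates on the principal components
loadings = Vt                      # principal component directions
explained = s**2 / np.sum(s**2)    # fraction of variance per component

print(scores[:, :2].shape, explained[:3])
```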
https://en.wikipedia.org/wiki/Polynomial%20long%20division | In algebra, polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree, a generalized version of the familiar arithmetic technique called long division. It can be done easily by hand, because it separates an otherwise complex division problem into smaller ones. Sometimes using a shorthand version called synthetic division is faster, with less writing and fewer calculations. Another abbreviated method is polynomial short division (Blomqvist's method).
Polynomial long division is an algorithm that implements the Euclidean division of polynomials, which starting from two polynomials A (the dividend) and B (the divisor) produces, if B is not zero, a quotient Q and a remainder R such that
A = BQ + R,
and either R = 0 or the degree of R is lower than the degree of B. These conditions uniquely define Q and R, which means that Q and R do not depend on the method used to compute them.
The result R = 0 occurs if and only if the polynomial A has B as a factor. Thus long division is a means for testing whether one polynomial has another as a factor, and, if it does, for factoring it out. For example, if a root r of A is known, it can be factored out by dividing A by (x – r).
Example
Polynomial long division
Find the quotient and the remainder of the division of x³ − 2x² − 4 (the dividend) by x − 3 (the divisor).
The dividend is first rewritten with an explicit zero coefficient for the missing x term: x³ − 2x² + 0x − 4.
The quotient and remainder can then be determined as follows:
Divide the first term of the dividend by the highest term of the divisor (meaning the one with the highest power of x, which in this case is x). Place the result above the bar (x³ ÷ x = x²).
Multiply the divisor by the result just obtained (the first term of the eventual quotient). Write the result under the first two terms of the dividend (x³ − 3x²).
Subtract the product just obtained from the appropriate terms of the original dividend (being careful that subtracting something having a minus sign is equivalent to adding something having a plus sign), and write the result underneath ((x³ − 2x²) − (x³ − 3x²) = x²). Then, "bring down" the next term from the dividend.
Repeat the previous three steps, except this time use the two terms that have just been written as the dividend.
Repeat step 4. This time, there is nothing to "bring down".
The polynomial above the bar is the quotient q(x), and the number left over (5) is the remainder r(x).
The long division algorithm for arithmetic is very similar to the above algorithm, in which the variable x is replaced (in base 10) by the specific number 10.
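The algorithm lends itself to a compact implementation. The following Python sketch (illustrative only; the function name is invented, and the degree of the dividend is assumed to be at least that of the divisor) performs polynomial long division on coefficient lists and reproduces the worked example above:

```python
def poly_divmod(dividend, divisor):
    """Polynomial long division over the rationals (illustrative sketch).

    Polynomials are coefficient lists from highest to lowest degree,
    e.g. x^3 - 2x^2 - 4 is [1, -2, 0, -4]. Returns (quotient, remainder).
    """
    out = list(dividend)
    dlen, dividend_len = len(divisor), len(dividend)
    for i in range(dividend_len - dlen + 1):
        coef = out[i] / divisor[0]     # divide the leading terms
        out[i] = coef
        for j in range(1, dlen):       # subtract coef * divisor
            out[i + j] -= coef * divisor[j]
    quotient = out[: dividend_len - dlen + 1]
    remainder = out[dividend_len - dlen + 1:]
    return quotient, remainder

# (x^3 - 2x^2 - 4) / (x - 3) = x^2 + x + 3, remainder 5
print(poly_divmod([1, -2, 0, -4], [1, -3]))  # ([1.0, 1.0, 3.0], [5.0])
```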
Polynomial short division
Blomqvist's method is an abbreviated version of the long division above. This pen-and-paper method uses the same algorithm as polynomial long division, but mental calculation is used to determine remainders. This requires less writing, and can therefore be a faster method once mastered.
The division is at first written in a similar way as long multiplication with the dividend at the top, and the divisor below it. The q |
https://en.wikipedia.org/wiki/NEXRAD | NEXRAD or Nexrad (Next-Generation Radar) is a network of 159 high-resolution S-band Doppler weather radars operated by the National Weather Service (NWS), an agency of the National Oceanic and Atmospheric Administration (NOAA) within the United States Department of Commerce, the Federal Aviation Administration (FAA) within the Department of Transportation, and the U.S. Air Force within the Department of Defense. Its technical name is WSR-88D (Weather Surveillance Radar, 1988, Doppler).
NEXRAD detects precipitation and atmospheric movement or wind. It returns data which when processed can be displayed in a mosaic map which shows patterns of precipitation and its movement. The radar system operates in two basic modes, selectable by the operator – a slow-scanning clear-air mode for analyzing air movements when there is little or no activity in the area, and a precipitation mode, with a faster scan for tracking active weather. NEXRAD has an increased emphasis on automation, including the use of algorithms and automated volume scans.
Deployment
In the 1970s, the U.S. Departments of Commerce, Defense, and Transportation agreed that, to better serve their operational needs, the existing national radar network needed to be replaced. The network consisted of WSR-57 radars developed in 1957 and WSR-74 radars developed in 1974. Neither system employed Doppler technology, which provides wind speed and direction information.
The Joint Doppler Operational Project (JDOP) was formed in 1976 at the National Severe Storms Laboratory (NSSL) to study the usefulness of using Doppler radar to identify severe and tornadic thunderstorms. Tests over the next three years, conducted by the National Weather Service and the Air Weather Service agency of the U.S. Air Force, found that Doppler radar provided much improved early detection of severe thunderstorms. A working group that included the JDOP published a paper providing the concepts for the development and operation of a national weather radar network. In 1979, the NEXRAD Joint System Program Office (JSPO) was formed to move forward with the development and deployment of the proposed NEXRAD radar network. That year, the NSSL completed a formal report on developing the NEXRAD system.
When the proposal was presented to the Reagan administration, two options were considered to build the radar systems: allow corporate bids to build the systems based on the schematics of the previously developed prototype radar or seek contractors to build their own systems using predetermined specifications. The JSPO group opted to select a contractor to develop and produce the radars that would be used for the national network. Radar systems developed by Raytheon and Unisys were tested during the 1980s. However, it took four years to allow the prospective contractors to develop their proprietary models. Unisys was selected as the contractor, and was awarded a full-scale production contract in January 1990.
Installation of an operational pr |
https://en.wikipedia.org/wiki/Barrel%20shifter | A barrel shifter is a digital circuit that can shift a data word by a specified number of bits without the use of any sequential logic, only pure combinational logic; i.e., it inherently provides a binary operation. It can, however, in theory also be used to implement unary operations, such as a logical shift left by a fixed amount (e.g., in an address generation unit). One way to implement a barrel shifter is as a sequence of multiplexers where the output of one multiplexer is connected to the input of the next multiplexer in a way that depends on the shift distance. A barrel shifter is often used to shift and rotate n bits in modern microprocessors, typically within a single clock cycle.
For example, take a four-bit barrel shifter, with inputs A, B, C and D. The shifter can cycle the order of the bits ABCD as DABC, CDAB, or BCDA; in this case, no bits are lost. That is, it can shift all of the outputs up to three positions to the right (and thus make any cyclic combination of A, B, C and D). The barrel shifter has a variety of applications, including being a useful component in microprocessors (alongside the ALU).
Implementation
The very fastest shifters are implemented as full crossbars, in a manner similar to the 4-bit shifter depicted above, only larger. These incur the least delay, with the output always a single gate delay behind the input to be shifted (after allowing the small time needed for the shift count decoder to settle; this penalty, however, is only incurred when the shift count changes). These crossbar shifters, however, require n² gates for n-bit shifts. Because of this, the barrel shifter is often implemented as a cascade of parallel 2×1 multiplexers instead, which allows a large reduction in gate count, now growing only with n log n; the propagation delay is however larger, growing with log n (instead of being constant, as with the crossbar shifter).
For an 8-bit barrel shifter, two intermediate signals are used which shift by four and two bits, or pass the same data, based on the values of S[2] and S[1]. This signal is then shifted by another multiplexer, which is controlled by S[0]:
int1 = IN , if S[2] == 0
= IN << 4, if S[2] == 1
int2 = int1 , if S[1] == 0
= int1 << 2, if S[1] == 1
OUT = int2 , if S[0] == 0
= int2 << 1, if S[0] == 1
Larger barrel shifters have additional stages.
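A software model can make the cascade concrete. The following Python sketch (illustrative only; a real barrel shifter is combinational hardware, and the function name is invented) implements an n-bit rotate-left as a cascade of power-of-two stages, one stage per bit of the shift distance, mirroring the S[2]/S[1]/S[0] structure above:

```python
def barrel_rotate_left(word: int, dist: int, width: int = 8) -> int:
    """Cascaded barrel shifter modeled in software (illustrative sketch)."""
    mask = (1 << width) - 1
    stage = 1
    while stage < width:                 # stages rotate by 1, 2, 4, ... bits
        if dist & stage:                 # this bit of the distance selects the shifted path
            word = ((word << stage) | (word >> (width - stage))) & mask
        stage <<= 1
    return word

assert barrel_rotate_left(0b1000_0001, 1) == 0b0000_0011
print(bin(barrel_rotate_left(0b1101_0000, 4)))  # 0b1101
```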
The cascaded shifter has the further advantage over the full crossbar shifter of not requiring any decoding logic for the shift count.
Cost
The number of multiplexers required for an n-bit word is n log₂(n). Five common word sizes and the number of multiplexers needed are listed below:
128-bit — 128 × log₂(128) = 896
64-bit — 64 × log₂(64) = 384
32-bit — 32 × log₂(32) = 160
16-bit — 16 × log₂(16) = 64
8-bit — 8 × log₂(8) = 24
Cost of critical path in FO4 (estimated, without wire delay):
32-bit: from 18 FO4 to 14 FO4
Uses
A common usage of a barrel shifter is in the hardware implementation of floating-point arithmetic. For a floating-point add or subtract opera |
https://en.wikipedia.org/wiki/Glasgow%20Central%20railway%20station | Glasgow Central is one of two principal mainline rail terminals in Glasgow, Scotland. The railway station was opened by the Caledonian Railway on 1 August 1879 and is one of 20 managed by Network Rail. It is the northern terminus of the West Coast Main Line from London Euston. As well as being Glasgow's principal inter-city terminus for services to England, Central also serves the southern suburbs of the Greater Glasgow conurbation, as well as the Ayrshire and Clyde coasts. The other main station in Glasgow is Glasgow Queen Street.
With just under 33 million passengers in 2017–18, Glasgow Central is the twelfth-busiest railway station in Britain and the busiest in Scotland. According to Network Rail, over 38 million people use it annually, 80% of whom are passengers. The station is protected as a category A listed building.
In Britain's 100 Best Railway Stations by Simon Jenkins, the station was one of only ten to be awarded five stars. In 2017, the station received a customer satisfaction score of 95.2%, the highest in the UK.
Original station
The original station, opened on 1 August 1879 on the north bank of the River Clyde, had eight platforms and was linked to Bridge Street station by a railway bridge over Argyle Street and a four-track railway bridge, built by Sir William Arrol, which crossed the Clyde to the south. The station was built over the site of Grahamston village, whose central street (Alston Street) was demolished to make way for the station platform.
The station was soon congested. In 1890, a temporary solution of widening the bridge over Argyle Street and inserting a ninth platform on Argyle Street bridge was completed. It was also initially intended to increase Bridge Street station to eight through lines and to increase Central station to 13 platforms.
Low-level station
The low-level platforms were originally a two-island separate station, and were added to serve the underground Glasgow Central Railway, authorised on 10 August 1888 and opened on 10 August 1896. The Glasgow Central Railway was taken over by the Caledonian Railway in 1890. Services ran from and from the Lanarkshire and Dunbartonshire Railway in the west through to and via Tollcross through to , Newton, and other Caledonian Railway destinations to the east of Glasgow. Other stations include Cambuslang and Motherwell.
1901–1905 station rebuild
By 1900 the station was again found to be too small, passenger numbers per annum on the high-level station having increased by 5.156 million since the first extension was completed in 1890. Passenger usage per annum in 1899 was 16.841 million on the high-level station and 6.416 million on the low-level station, a total of 23.257 million. The station is on two levels: the High-Level station at the same level as Gordon Street, which bridges over Argyle Street, and the underground Low-Level station.
Between 1901 and 1905 the original station was rebuilt. The station was extended over the top of Argyle Street, and thirteen platforms were provided.
https://en.wikipedia.org/wiki/GROMOS | GROningen MOlecular Simulation (GROMOS) is the name of a force field for molecular dynamics simulation, and a related computer software package. Both are developed at the University of Groningen, and at the Computer-Aided Chemistry Group at the Laboratory for Physical Chemistry at the Swiss Federal Institute of Technology (ETH Zurich). At Groningen, Herman Berendsen was involved in its development.
The united atom force field was optimized with respect to the condensed phase properties of alkanes.
Versions
GROMOS87
Aliphatic and aromatic hydrogen atoms were included implicitly by representing the carbon atom and attached hydrogen atoms as one group centered on the carbon atom, a united atom force field. The van der Waals force parameters were derived from calculations of the crystal structures of hydrocarbons, and on amino acids using short (0.8 nm) nonbonded cutoff radii.
GROMOS96
In 1996, a substantial rewrite of the software package was released. The force field was also improved, e.g., in the following way: aliphatic CHn groups were represented as united atoms with van der Waals interactions reparametrized on the basis of a series of molecular dynamics simulations of model liquid alkanes using long (1.4 nm) nonbonded cutoff radii. This version is continually being refined, and several different parameter sets are available. GROMOS96 supports molecular dynamics, stochastic dynamics, and energy minimization. Energy minimization was also part of the prior version, GROMOS87. GROMOS96 was planned and developed over a period of 20 months. The package is made up of 40 different programs, each with a different essential function. Two important programs within GROMOS96 are PROGMT, which constructs the molecular topology, and PROPMT, which converts a classical molecular topology into a path-integral molecular topology.
GROMOS05
An updated version of the software package was introduced in 2005.
GROMOS11
The current GROMOS release is dated May 2011.
Parameter sets
Several force field parameter sets are based on the GROMOS force field. The A-version applies to aqueous or apolar solutions of proteins, nucleotides, and sugars. The B-version applies to isolated molecules (gas phase).
54
54A7 - 53A6 taken and adjusted torsional angle terms to better reproduce helical propensities, altered N–H, C=O repulsion, new CH3 charge group, parameterisation of Na+ and Cl− to improve free energy of hydration and new improper dihedrals.
54B7 - 53B6 in vacuo taken and changed in same manner as 53A6 to 54A7.
53
53A5 - optimised by first fitting to reproduce the thermodynamic properties of pure liquids of a range of small polar molecules and the solvation free enthalpies of amino acid analogs in cyclohexane, is an expansion and renumbering of 45A3.
53A6 - 53A5 taken and adjusted partial charges to reproduce hydration free enthalpies in water, recommended for simulations of biomolecules in explicit water.
https://en.wikipedia.org/wiki/CHARMM | Chemistry at Harvard Macromolecular Mechanics (CHARMM) is the name of a widely used set of force fields for molecular dynamics, and the name for the molecular dynamics simulation and analysis computer software package associated with them. The CHARMM Development Project involves a worldwide network of developers working with Martin Karplus and his group at Harvard to develop and maintain the CHARMM program. Licenses for this software are available, for a fee, to people and groups working in academia.
Force fields
The CHARMM force fields for proteins include: united-atom (sometimes termed extended atom) CHARMM19, all-atom CHARMM22 and its dihedral potential corrected variant CHARMM22/CMAP, as well as later versions CHARMM27 and CHARMM36 and various modifications such as CHARMM36m and CHARMM36IDPSFF. In the CHARMM22 protein force field, the atomic partial charges were derived from quantum chemical calculations of the interactions between model compounds and water. Furthermore, CHARMM22 is parametrized for the TIP3P explicit water model. Nevertheless, it is often used with implicit solvents. In 2006, a special version of CHARMM22/CMAP was reparametrized for consistent use with implicit solvent GBSW.
The CHARMM22 force field has the following potential energy function (reconstructed here in standard notation from the term descriptions below):

V = \sum_{\text{bonds}} k_b (b - b_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{dihedrals}} k_\phi [1 + \cos(n\phi - \delta)] + \sum_{\text{impropers}} k_\omega (\omega - \omega_0)^2 + \sum_{\text{Urey-Bradley}} k_u (u - u_0)^2 + \sum_{\text{nonbonded}} \left( \epsilon_{ij} \left[ \left( \frac{R_{\min,ij}}{r_{ij}} \right)^{12} - 2 \left( \frac{R_{\min,ij}}{r_{ij}} \right)^{6} \right] + \frac{q_i q_j}{\epsilon r_{ij}} \right)

The bond, angle, dihedral, and nonbonded terms are similar to those found in other force fields such as AMBER. The CHARMM force field also includes an improper term accounting for out-of-plane bending (which applies to any set of four atoms that are not successively bonded), where k_\omega is the force constant and \omega is the out-of-plane angle. The Urey-Bradley term is a cross-term that accounts for 1,3 nonbonded interactions not accounted for by the bond and angle terms; k_u is the force constant and u is the distance between the 1,3 atoms.
For DNA, RNA, and lipids, CHARMM27 is used. Some force fields may be combined, for example CHARMM22 and CHARMM27 for the simulation of protein-DNA binding. Also, parameters for NAD+, sugars, fluorinated compounds, etc., may be downloaded. These force field version numbers refer to the CHARMM version where they first appeared, but may of course be used with subsequent versions of the CHARMM executable program. Likewise, these force fields may be used within other molecular dynamics programs that support them.
In 2009, a general force field for drug-like molecules (CGenFF) was introduced. It "covers a wide range of chemical groups present in biomolecules and drug-like molecules, including a large number of heterocyclic scaffolds". The general force field is designed to cover any combination of chemical groups. This inevitably comes with a decrease in accuracy for representing any particular subclass of molecules. Users are repeatedly warned in Mackerell's website not to use the CGenFF parameters for molecules for which specialized force fields already exist (as mentioned above for proteins, nucleic acids, etc.).
CHARMM also includes polarizable force field |
https://en.wikipedia.org/wiki/WSR-74 | WSR-74 radars were Weather Surveillance Radars designed in 1974 for the National Weather Service. They were added to the existing network of the WSR-57 model to improve forecasts and severe weather warnings. Some have been sold to other countries like Australia, Greece, and Pakistan.
Radar properties
There are two types in the WSR-74 series, which are almost identical except for operating frequency. The WSR-74C (used for local warnings) operates in the C band, and the WSR-74S (used in the national network) operates in the S band (like the WSR-57 and the current WSR-88D). S band frequencies are better suited because they are not significantly attenuated in heavy rain, while the C band is strongly attenuated and has a generally shorter maximum effective range.
The WSR-74C uses a wavelength of 5.4 cm. It has a dish diameter of 8 feet and a long maximum range of 579 km (313 nm), possible because it was used only for reflectivity measurements (see Doppler dilemma).
History
The WSR-57 network was very spread out, with 66 radars to cover the entire country, so there was little to no overlap in case one of these vacuum-tube radars went down for maintenance. The WSR-74 was introduced as a "gap filler", as well as an updated radar that, among other things, was transistor-based. In the early 1970s, Enterprise Electronics Corporation (EEC), based in Enterprise, Alabama, won the contract to design, manufacture, test, and deliver the entire WSR-74 radar network (both C- and S-band versions).
WSR-74C radars were generally local-use radars that did not operate unless severe weather was expected, while WSR-74S radars generally replaced WSR-57 radars in the national weather surveillance network. When a network radar went down, a nearby local radar might have to supply updates in its place. NWS Lubbock received the first WSR-74C in August 1973, following the widespread attention drawn by the F5 Lubbock tornado of 1970.
128 of the WSR-57 and WSR-74 model radars were spread across the country as the National Weather Service's radar network until the 1990s. They were gradually replaced by the WSR-88D model (Weather Surveillance Radar - 1988, Doppler), constituting the NEXRAD network. The WSR-74 had served the NWS for two decades.
The last WSR-74C used by the NWS was located in Williston, ND, before being decommissioned at the end of 2012.
No WSR-74S's are in the NWS inventory today, having been replaced by the WSR-88D, but some of these radars are in commercial use.
Radar sites in the US
WSR-74 sites include the following two categories:
https://en.wikipedia.org/wiki/WSR-57 | WSR-57 radars were the USA's main weather surveillance radar for over 35 years. The National Weather Service operated a network of this model radar across the country, watching for severe weather.
History
The WSR-57 (Weather Surveillance Radar - 1957) was the first 'modern' weather radar. Initially commissioned at the Miami Hurricane Forecast Center, the WSR-57 was installed in other parts of the CONUS (continental United States). The WSR-57 was the first generation of radars designed expressly for a national warning network.
The WSR-57 was designed in 1957 by Dewey Soltow using World War II technology. It gave only coarse reflectivity data and no velocity data, which made it extremely difficult to predict tornadoes. Weather systems were traced across the radar screen using grease pencils. Forecasters had to manually turn a crank to adjust the radar's scan elevation, and needed considerable skill to judge the intensity of storms based on green blotches on the radar scope.
The military designation for the WSR-57 is AN/FPS-41.
NOAA has pictures of the Charleston, SC, WSR-57 radar image of the 1989 Hurricane Hugo. A WSR-57 dish, located on the roof of the National Hurricane Center (NHC), was blown away by Hurricane Andrew. The NHC report on Hurricane Andrew shows its last radar image, as well as images from nearby WSR-88D radars. As the network of WSR-57 radars aged, some were replaced with WSR-74S models of similar performance but with better reliability. WSR-57 operators sometimes had to scramble for spare parts no longer manufactured in this country. 128 of the WSR-57 and WSR-74 model radars were spread across the country as the National Weather Service's radar network until the 1990s. The WSR-57 radars were gradually replaced by the Weather Surveillance Radar - 1988, Doppler, WSR-88D, which NOAA named the NEXRAD network.
The last WSR-57 radar in the United States was decommissioned on December 2, 1996.
Radar sites
The 66 former sites of the WSR-57 include the following:
Radar properties
The radar uses a wavelength of 10.3 cm. This corresponds to an operating frequency of 2890 MHz. This frequency is in the S band, which is also used by today's weather radar network.
WSR-57 radars had the following interesting statistics:
Dish diameter:
Power output: 410,000 watts
Maximum range: 915 km (494 nm)
https://en.wikipedia.org/wiki/Reflective%20programming | In computer science, reflective programming or reflection is the ability of a process to examine, introspect, and modify its own structure and behavior.
Historical background
The earliest computers were programmed in their native assembly languages, which were inherently reflective, as these original architectures could be programmed by defining instructions as data and using self-modifying code. As the bulk of programming moved to higher-level compiled languages such as Algol, Cobol, Fortran, Pascal, and C, this reflective ability largely disappeared until new programming languages with reflection built into their type systems appeared.
Brian Cantwell Smith's 1982 doctoral dissertation introduced the notion of computational reflection in procedural programming languages and the notion of the meta-circular interpreter as a component of 3-Lisp.
Uses
Reflection helps programmers make generic software libraries to display data, process different formats of data, perform serialization or deserialization of data for communication, or do bundling and unbundling of data for containers or bursts of communication.
Effective use of reflection almost always requires a plan: A design framework, encoding description, object library, a map of a database or entity relations.
Reflection makes a language more suited to network-oriented code. For example, it helps languages such as Java operate well in networks by enabling libraries for serialization, bundling, and varying data formats. Languages without reflection, such as C, must use auxiliary compilers for tasks like Abstract Syntax Notation One (ASN.1) to produce code for serialization and bundling.
Reflection can be used for observing and modifying program execution at runtime. A reflection-oriented program component can monitor the execution of an enclosure of code and can modify itself according to a desired goal of that enclosure. This is typically accomplished by dynamically assigning program code at runtime.
In object-oriented programming languages such as Java, reflection allows inspection of classes, interfaces, fields and methods at runtime without knowing the names of the interfaces, fields, methods at compile time. It also allows instantiation of new objects and invocation of methods.
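A minimal sketch of these capabilities in Java, using the standard java.lang.reflect API (java.lang.String is chosen here arbitrarily as the inspected class):

import java.lang.reflect.Method;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Look up a class by name at runtime, with no compile-time reference to it.
        Class<?> clazz = Class.forName("java.lang.String");

        // Enumerate its declared methods without knowing their names in advance.
        for (Method m : clazz.getDeclaredMethods()) {
            System.out.println(m.getName());
        }

        // Instantiate an object and invoke a method reflectively.
        Object s = clazz.getConstructor(String.class).newInstance("hello");
        System.out.println(clazz.getMethod("toUpperCase").invoke(s)); // prints HELLO
    }
}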
Reflection is often used as part of software testing, such as for the runtime creation/instantiation of mock objects.
Reflection is also a key strategy for metaprogramming.
In some object-oriented programming languages such as C# and Java, reflection can be used to bypass member accessibility rules. For C#-properties this can be achieved by writing directly onto the (usually invisible) backing field of a non-public property. It is also possible to find non-public methods of classes and types and manually invoke them. This works for project-internal files as well as external libraries such as .NET's assemblies and Java's archives.
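For instance, a minimal Java sketch of such a bypass (the Secret class here is hypothetical; on Java 9 and later, module access rules may additionally restrict this):

import java.lang.reflect.Field;

public class AccessDemo {
    static class Secret {
        private int code = 42; // inaccessible to outside code under normal rules
    }

    public static void main(String[] args) throws Exception {
        Secret s = new Secret();
        Field f = Secret.class.getDeclaredField("code");
        f.setAccessible(true);           // suppress the usual access checks
        f.setInt(s, 7);                  // write the private backing field directly
        System.out.println(f.getInt(s)); // prints 7, not 42
    }
}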
Implementation
A language supporting reflection provides a number of features available at runtime that would otherwise be difficult to accomplish in a lower-level language.
https://en.wikipedia.org/wiki/Sheffield%20Supertram | The Sheffield Supertram is a tram and tram-train network covering Sheffield and Rotherham in South Yorkshire, England. The infrastructure is owned by the South Yorkshire Passenger Transport Executive (SYPTE), with Stagecoach responsible for the operation and maintenance of rolling stock under a concession until 2024, under the brand name Stagecoach Supertram.
Interest in building a modern tram system for Sheffield had mounted during the 1980s. After detailed planning by SYPTE, the Supertram proposal was approved by Act of Parliament in 1991. Construction of the network, incorporating several existing heavy rail sections as well as new track, was carried out in sections, allowing revenue services to start during 1994. Early operations, hindered by a complex ticketing system and the initially small coverage area, had disappointing ridership figures. In an effort to turn performance around, operations were privatised to Stagecoach in 1997, at a price of £1.15 million; Stagecoach took over from South Yorkshire Supertram Limited. After management and operational changes, and further expansion of the system, ridership numbers rose considerably.
From 2008, interest had been expressed in hybrid tram-train operations, which would be able to use sections of the mainline rail network as well as tramways. During 2012 an experimental trial was planned, as this was to be the first deployment of tram-trains anywhere in the United Kingdom. The start of tram-train operations, using a purpose-built fleet of new Class 399 Stadler Citylink electric multiple units, was repeatedly delayed, but on 25 October 2018, operations of the new tram-train line commenced.
The Supertram network now consists of 50 stations across four colour-coded lines, the Blue, Purple, Yellow and Tram-Train (Black) routes, which connect with local and national bus and rail services and six park and ride sites.
History
Background and initial launch
In common with many British cities, Sheffield used to have an extensive tram network, the Sheffield Tramway (1873-1960). It finally closed in October 1960, when it was argued that motorised buses offered superior economics.
The new Supertram network arose from ambitions held by the South Yorkshire Passenger Transport Executive (SYPTE), which had been assigned the role of public transport co-ordination in the area. SYPTE refined an earlier and more expansive light rail proposal to include pre-existing heavy rail alignments, in order to gain the required permissions to proceed, and deposited several Bills to Parliament in 1985–1990 to gain the necessary powers. Financial approval was given by the Department of Transport towards the end of 1990, allowing the £240 million construction of the initial line to commence in 1991.
This line was opened in stages between 1994 and 1995. The first section, located along a former heavy rail alignment to Meadowhall, opened on 21 March 1994. The network was operated by South Yorkshire Supertram Limited, a wholly owned subsidiary of the SYPTE.
https://en.wikipedia.org/wiki/Magnetoresistive%20RAM | Magnetoresistive random-access memory (MRAM) is a type of non-volatile random-access memory which stores data in magnetic domains. Developed in the mid-1980s, proponents have argued that magnetoresistive RAM will eventually surpass competing technologies to become a dominant or even universal memory. Currently, memory technologies in use such as flash RAM and DRAM have practical advantages that have so far kept MRAM in a niche role in the market.
Description
Unlike conventional RAM chip technologies, data in MRAM is not stored as electric charge or current flows, but by magnetic storage elements. The elements are formed from two ferromagnetic plates, each of which can hold a magnetization, separated by a thin insulating layer. One of the two plates is a permanent magnet set to a particular polarity; the other plate's magnetization can be changed to match that of an external field to store memory. This configuration is known as a magnetic tunnel junction and is the simplest structure for an MRAM bit. A memory device is built from a grid of such "cells".
The simplest method of reading is accomplished by measuring the electrical resistance of the cell. A particular cell is (typically) selected by powering an associated transistor that switches current from a supply line through the cell to ground. Because of tunnel magnetoresistance, the electrical resistance of the cell changes with the relative orientation of the magnetization in the two plates. By measuring the resulting current, the resistance inside any particular cell can be determined, and from this the magnetization polarity of the writable plate. Typically if the two plates have the same magnetization alignment (low resistance state) this is considered to mean "1", while if the alignment is antiparallel the resistance will be higher (high resistance state) and this means "0".
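The read decision described above can be sketched schematically as follows (a deliberately simplified model in Java; the reference resistance is an invented value, and real designs compare the cell against reference cells in analog sense-amplifier circuitry):

public class MramReadModel {
    static final double REFERENCE_OHMS = 1500.0; // invented value between the two states

    // Parallel alignment -> low resistance -> "1"; antiparallel -> high resistance -> "0".
    static int readBit(double measuredOhms) {
        return (measuredOhms < REFERENCE_OHMS) ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(readBit(1000.0)); // low-resistance cell: prints 1
        System.out.println(readBit(2000.0)); // high-resistance cell: prints 0
    }
}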
Data is written to the cells using a variety of means. In the simplest "classic" design, each cell lies between a pair of write lines arranged at right angles to each other, parallel to the cell, one above and one below the cell. When current is passed through them, an induced magnetic field is created at the junction, which the writable plate picks up. This pattern of operation is similar to magnetic-core memory, a system commonly used in the 1960s.
However, due to process and material variations, an array of memory cells has a distribution of switching fields with a deviation σ. Therefore, to program all the bits in a large array with the same current, the applied field needs to exceed the mean "selected" switching field by more than 6σ. In addition, the applied field must be kept below a maximum value. Thus, this "conventional" MRAM must keep these two distributions well separated. As a result, there is a narrow operating window for programming fields, and only inside this window can all the bits be programmed without errors or disturbs. In 2005, a "Savtchenko switching" technique, relying on the unique behavior of a synthetic antiferromagnet free layer, was introduced to address this problem.
https://en.wikipedia.org/wiki/VM%20%28operating%20system%29 | VM (often: VM/CMS) is a family of IBM virtual machine operating systems used on IBM mainframes System/370, System/390, zSeries, System z and compatible systems, including the Hercules emulator for personal computers.
The following versions are known:
Virtual Machine Facility/370
VM/370, released in 1972, is a System/370 reimplementation of the earlier CP/CMS operating system.
VM/370 Basic System Extensions Program Product
VM/BSE (BSEPP) is an enhancement to VM/370 that adds support for more devices (such as 3370-type fixed-block-architecture DASD drives), improvements to the CMS environment (such as an improved editor), and some stability enhancements to CP.
VM/370 System Extensions Program Product
VM/SE (SEPP) is an enhancement to VM/370 that includes the facilities of VM/BSE, as well as a few additional fixes and features.
Virtual Machine/System Product
VM/SP, a milestone version, replaces VM/370, VM/BSE and VM/SE. Release 1 added EXEC2 and the XEDIT System Product Editor; Release 3 added REXX; Release 6 added the shared filesystem.
Virtual Machine/System Product High Performance Option
VM/SP HPO adds additional device support and functionality to VM/SP, and allows certain S/370 machines that can utilize more than 16 MB of real storage to do so, up to 64 MB. This version was intended for users that would be running multiple S/370 guests at once.
Virtual Machine/Extended Architecture Migration Aid
VM/XA MA is intended to ease the migration from MVS/370 to MVS/XA by allowing both to run concurrently on the same processor complex.
Virtual Machine/Extended Architecture System Facility
VM/XA SF is an upgraded VM/XA MA with improved functionality and performance.
Virtual Machine/Extended Architecture System Product
VM/XA SP is an upgraded VM/XA MA with improved functionality and performance, offered as a replacement for VM/SP HPO on machines supporting S/370-XA. It includes a version of CMS that can run in either S/370 or S/370-XA mode.
Virtual Machine/Enterprise Systems Architecture
VM/ESA provides the facilities of VM/SP, VM/SP HPO and VM/XA SP. VM/ESA version 1 can run in S/370, ESA/370 or ESA/390 mode; it does not support S/370 XA mode. Version 2 only runs in ESA/390 mode. The S/370-capable versions of VM/ESA were actually their own separate version from the ESA/390 versions of VM/ESA, as the S/370 versions are based on the older VM/SP HPO codebase, and the ESA/390 versions are based on the newer VM/XA codebase.
z/VM
z/VM, the last version still widely used as one of the main full virtualization solutions for the mainframe market. z/VM 4.4 was the last version that could run in ESA/390 mode; subsequent versions only run in z/Architecture mode.
The CMS in the name refers to the Conversational Monitor System, a component of the product that is a single-user operating system that runs in a virtual machine and provides conversational time-sharing in VM.
Overview
The heart of the VM architecture is the Control Program, or hypervisor, abbreviated CP; it runs on the physical hardware and creates the virtual machine environment.
https://en.wikipedia.org/wiki/David%20Filo | David Robert Filo (born April 20, 1966) is an American billionaire businessman and the co-founder of Yahoo! with classmate Jerry Yang. His Filo Server Program, written in the C programming language, was the server-side software used to dynamically serve variable web pages, called Filo Server Pages, on visits to early versions of the Yahoo! website.
Early life
Filo was born in Wisconsin and was raised in Moss Bluff, Louisiana. He earned a B.S. degree in computer engineering at Tulane University (through the Dean's Honor Scholarship) and an M.S. degree in 1990 at Stanford University.
Career
In February 1994, he co-created with Jerry Yang an Internet website called "Jerry and David's Guide to the World Wide Web", consisting of a directory of other websites. It was renamed "Yahoo!" (an exclamation). Yahoo! became very popular, and Filo and Yang realized the business potential and co-founded Yahoo! Inc.
Yahoo! started off as a web portal with a web directory providing an extensive range of products and services for various online activities. Yahoo was one of the pioneers of the early Internet era in the 1990s. It is still one of the leading internet brands and, due to partnerships with telecommunications firms, is one of the most visited websites on the internet.
Personal life
Filo is married to photographer and teacher Angela Buenning, a graduate of Stanford (1993) and Berkeley (1999). They have one child, and live in Palo Alto, California.
In 2005, he gave $30 million to his alma mater, Tulane University, for use in its School of Engineering.
The Filos have been major benefactors of both Stanford, especially its schools of sustainability and education, and Berkeley, primarily its graduate school of journalism.
As of September 2019, Forbes estimated Filo to be worth $4.3 billion, ranking him the 379th-richest person in the world.
External links
MetroActive: A Couple of Yahoos
Michael Moritz interviews David Filo et al. at TechCrunch40 conference video
https://en.wikipedia.org/wiki/Douglas%20Rain | Douglas James Rain (May 9, 1928 – November 11, 2018) was a Canadian actor and narrator. Although primarily a stage actor, he is perhaps best known for his voicing of the HAL 9000 computer in the film 2001: A Space Odyssey (1968) and its sequel 2010: The Year We Make Contact (1984).
Early life
Rain was born in Winnipeg, Manitoba, the son of Mary, a nurse, and James Rain, a rail yard switchman, both from Glasgow, Scotland.
Career
Rain graduated with a B.A. from the University of Manitoba in 1950, then studied acting at the Banff School of Fine Arts in Banff, Alberta and the Old Vic Theatre School in London, England. He was a founding member of the Stratford Festival of Canada in 1953 and was associated with it as an actor until 1998.
He performed a wide variety of theatrical roles, such as a production of Henry V staged in Stratford, Ontario, that was adapted for television in 1966. In 1972, he was nominated for the Tony Award for Best Supporting or Featured Actor (Dramatic) for his performance in Vivat! Vivat Regina!
Voice of the HAL 9000 computer
Stanley Kubrick cast Rain as the voice of the HAL 9000 computer for the film 2001: A Space Odyssey (1968) after hearing his narration of a short documentary titled Universe and later chose him as "the creepy voice of HAL". In the film, his voice was also sometimes processed with an electronic device called the Eltro information rate changer.
Rain reprised the role for the sequel 2010: The Year We Make Contact (1984). He also briefly parodied it in a sketch on Second City Television where Merv Griffin (played by Rick Moranis) takes his talk show into outer space.
Death
Rain died of natural causes on November 11, 2018, at the age of 90, at St. Mary's Memorial Hospital in St. Marys, Ontario. He was married twice and had three children and a grandchild.
Filmography
Christmas in the Market Place (1956, TV movie) - Joey
Oedipus Rex (1957) — Messenger
The Hill (1960, TV movie) - Jesus
Just Mary (1960, TV series) — voice
The Night They Killed Joe Howe (1960, TV drama, co-starring Austin Willis and James Doohan) — Joseph Howe
Universe (1960, short film) — Narrator
One Plus One (1961) — segment "The Divorcee"
William Lyon Mackenzie: A Friend to His Country (1961, short) — William Lyon Mackenzie
Robert Baldwin: A Matter of Principle (1961, short) — William Lyon Mackenzie
The Other Man (1963, TV miniseries) — David Henderson
Twelfth Night (1964, TV movie)
Fields of Sacrifice (1964) — Narrator
Henry V (1966, TV movie) — Henry V
2001: A Space Odyssey (1968) — voice of HAL 9000
Talking to a Stranger (1971, TV miniseries) — Alan
Sleeper (1973) — voice of Evil Computer and Various Robot Butlers
The Russian-German War (1973, documentary) — Narrator
The Man Who Skied Down Everest (1974) — Narrator
One Canadian: The Political Memoirs of the Rt. Hon. John G. Diefenbaker (1976, TV miniseries, voice)
SCTV (1982, "The Merv Griffin Show") — voice of HAL 9000
2010: The Year We Make Contact (1984) — voice of HAL 9000
Lo
https://en.wikipedia.org/wiki/NorthPoint%20Communications | NorthPoint Communications Group, Inc. was a competitive local exchange carrier focused on data transmission via digital subscriber lines. The company had relationships with Microsoft, Tandy Corporation, Intel, Verio, Cable & Wireless, Frontier Corporation, Concentric Network, ICG Communications, Enron, Network Plus, and Netopia. The company had investments from The Carlyle Group, Accel Partners, Benchmark, and Greylock Partners.
History
The company was founded in 1997 by Michael W. Malaga and five other former executives of Metropolitan Fiber Systems.
On May 5, 1999, during the dot-com bubble, the company became a public company via an initial public offering in which it sold 15 million shares at $24 per share. Malaga, then 34 years old, was worth $300 million on paper.
In September 2000, Verizon agreed to acquire a 55% interest in the company and merge the companies' DSL businesses.
In November 2000, as its customers failed to pay their bills, NorthPoint restated downwards its financial performance for the third quarter of 2000, lowering revenue from $30 million to $24 million. After the earnings restatement, Verizon terminated its acquisition agreement, claiming that a material adverse change had occurred. Northpoint sued Verizon to force it to complete the transaction. The lawsuit was settled out of court in July 2002, with Verizon agreeing to pay $175 million to Northpoint. NorthPoint stated that "it would cut its workforce by 19%, or 248 jobs, to lower expenses after the collapse of its merger with Verizon."
Bankruptcy
In January 2001, NorthPoint filed for bankruptcy. Some internet service providers, which faced a disruption in service, blamed the banks for failing to work out a deal to save the company. In March 2001, AT&T Corporation acquired the assets of NorthPoint for $135 million in a liquidation.
References
1997 establishments in California
1999 initial public offerings
Defunct companies based in the San Francisco Bay Area
Defunct telecommunications companies of the United States
Defunct networking companies
Dot-com bubble
AT&T subsidiaries
Internet service providers of the United States |
https://en.wikipedia.org/wiki/Therac-25 | The Therac-25 was a computer-controlled radiation therapy machine produced by Atomic Energy of Canada Limited (AECL) in 1982 after the Therac-6 and Therac-20 units (the earlier units had been produced in partnership with CGR of France).
It was involved in at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation. Because of concurrent programming errors (also known as race conditions), it sometimes gave its patients radiation doses that were hundreds of times greater than normal, resulting in death or serious injury. These accidents highlighted the dangers of software control of safety-critical systems, and they have become a standard case study in health informatics, software engineering, and computer ethics. The case is also highlighted as an extreme example of how engineers' overconfidence in their initial work, failure to believe end users' claims, and lack of proper due diligence in resolving reported software bugs can cause drastic repercussions.
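The following Java sketch illustrates the general shape of such a race condition, a check-then-act bug between concurrent tasks; it is purely illustrative and is not the Therac-25 code, which was written in PDP-11 assembly language:

public class RaceConditionSketch {
    private volatile boolean setupComplete = false;
    private volatile int beamSetting = 0;

    // Runs on the operator-interface task.
    void operatorEdits(int newSetting) {
        setupComplete = false;
        beamSetting = newSetting; // the hardware takes time to follow this value
        setupComplete = true;
    }

    // Runs on the treatment task.
    void fireBeam() {
        if (setupComplete) {
            // Between the check above and the action below, the operator task
            // may begin a new edit, so the beam can fire with stale settings.
            System.out.println("Firing with setting " + beamSetting);
        }
    }

    public static void main(String[] args) {
        RaceConditionSketch m = new RaceConditionSketch();
        new Thread(() -> m.operatorEdits(25_000)).start(); // edit in flight
        m.fireBeam(); // the outcome depends on thread interleaving
    }
}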
History
The French company CGR manufactured the Neptune and Sagittaire linear accelerators.
In the early 1970s, CGR and the Canadian public company Atomic Energy Commission Limited (AECL) collaborated on the construction of linear accelerators controlled by a DEC PDP-11 minicomputer: the Therac-6, which produced X-rays of up to 6 MeV, and the Therac-20, which could produce X-rays or electrons of up to 20 MeV. The computer added some ease of use because the accelerator could operate without it. CGR developed the software for the Therac-6 and reused some subroutines for the Therac-20.
In 1981, the two companies ended their collaboration agreement. AECL developed a new double pass concept for electron acceleration in a more confined space, changing its energy source from klystron to magnetron. In certain techniques, the electrons produced are used directly, while in others they are made to collide against a tungsten anode to produce X-ray beams. This dual accelerator concept was applied to the Therac-20 and Therac-25, with the latter being much more compact, versatile, and easy to use. It was also more economical for a hospital to have a dual machine that could apply treatments of electrons and X-rays, instead of two machines.
The Therac-25 was designed as a machine controlled by a computer, and some safety mechanisms switched from hardware to software. AECL decided not to duplicate some safety mechanisms. AECL reused modules and code routines from the Therac-20 for the Therac-25.
The first prototype of the Therac-25 was built in 1976. It began to be marketed at the end of 1982.
The software for the Therac-25 was developed by one person over several years using PDP-11 assembly language. It was an evolution of the Therac-6 software. In 1986, the programmer left AECL. In a lawsuit, lawyers could not identify the programmer or learn about his qualifications and experience.
Five machines were installed in the United States and six in Canada.
https://en.wikipedia.org/wiki/In%C5%8D%20Tadataka | Inō Tadataka was a Japanese surveyor and cartographer. He is known for completing the first map of Japan using modern surveying techniques.
Early life
Inō was born in the small village of Ozeki in the middle of Kujūkuri beach, in Kazusa Province (in what is now Chiba Prefecture). He was born to the Jimbō family and his childhood name was Sanjirō. His mother died when he was seven and after a somewhat tumultuous childhood (not uncommon at the time), he was adopted (age 17) by the prosperous Inō family of Sawara (now a district of Katori, Chiba), a town in Shimōsa Province. He ran the family business, expanding its sake brewing and rice-trading concerns, until he retired at the age of 49.
After retirement, he moved to Edo and became a pupil of astronomer Takahashi Yoshitoki, from whom he learned Western astronomy, geography, and mathematics.
Mission
In 1800, after nearly five years of study, the Tokugawa shogunate authorized Inō to perform a survey of the country using his own money. This task, which consumed the remaining 17 years of his life, covered the entire coastline and some of the interior of each of the Japanese home islands. During this period Inō reportedly spent 3,736 days making measurements (and traveled 34,913 kilometres), stopping regularly to present the Shōgun with maps reflecting his survey's progress. He produced detailed maps (some at a scale of 1:36,000, others at 1:216,000) of select parts of Japan, mostly in Kyūshū and Hokkaidō.
Inō's magnum opus, his 1:216,000 map of the entire coastline of Japan, remained unfinished at his death in 1818 but was completed by his surveying team in 1821. An atlas collecting all of his survey work, Dai Nihon Enkai Yochi Zenzu (:ja:大日本沿海輿地全図 Maps of Japan's Coastal Area), was published that year. It had three pages of large-scale maps at 1:432,000, showed the entire country on eight pages at 1:216,000, and had 214 pages of select coastal areas in fine detail at 1:36,000. The Inō-zu (Inō's maps), many of which are accurate to 1/1000 of a degree, remained the definitive maps of Japan for nearly a century, and maps based on his work were in use as late as 1924.
Expeditions
Inō's surveys were done in ten expeditions. The first survey started on June 11, 1800 and included five members. This survey was mainly to begin charting the coast of Hokkaidō (where Russian ships had come to open trading houses). It was done almost entirely by measuring walking steps and taking astronomical observations. They made it as far as Bekkai (別海) in far northeastern Hokkaido. In total they walked and surveyed 3,244 km.
The results of the first survey, paid for almost entirely by Inō's own funds, helped the shogunal government understand the significance of the work. For this reason, starting with the second expedition (departing Edo in the summer of 1801) he received more support, and the route was more ambitious, covering most of the eastern seaboard from just south of Edo to the far northern tip of Honshū, and then the interior.
https://en.wikipedia.org/wiki/Iram | Iram or IRAM may refer to:
Computing
i-RAM, a solid-state drive based on volatile electronic memory (RAM)
Berkeley IRAM project, research into intelligent random access memory
Internal RAM, the memory range internal to a CPU
Organisations
Institut de radioastronomie millimétrique, operates two radio astronomical observatories
Institut Reine Astrid Mons, a school in Belgium
Instituto Argentino de Normalización y Certificación, an Argentine institute known as IRAM
Other
Erum (name), a Muslim name commonly spelt in English as Iram.
Iram of the Pillars, a lost city located on the Arabian Peninsula
Improvised rocket-assisted munition
Information Risk Assessment Methodology, which provides business-focused information risk assessment
https://en.wikipedia.org/wiki/CP-67 | CP-67 is a hypervisor, or Virtual Machine Monitor, from IBM for its System/360 Model 67 computer.
CP-67 is the control program portion of CP/CMS, a virtual machine operating system developed by IBM's Cambridge Scientific Center in Cambridge, Massachusetts. It was a reimplementation of their earlier research system CP-40, which ran on a one-off customized S/360-40. CP-67 was later reimplemented (again) as CP-370, which IBM released as VM/370 in 1972, when virtual memory was added to the System/370 series.
CP and CMS are usually grouped together as a unit, but the "components are independent of each other. CP-67 can be used on an appropriate configuration without CMS, and CMS can be run on a properly configured System/360 as a single-user system without CP-67."
Minimum hardware configuration
The minimum configuration for CP-67 is:
2067 CPU, model 1 or 2
2365 Processor Storage model 1—262,144 bytes of magnetic core memory with an access time of 750 ns (nanoseconds) per eight bytes.
IBM 1052 printer/keyboard
IBM 1403 printer
IBM 2540 card read/punch
Three IBM 2311 disk storage units, 7.5 MB each, 22.5 MB total
IBM 2400 magnetic tape data storage unit
IBM 270x Transmission Control unit
Installation
Disks to be used by CP have to be formatted by a standalone utility called FORMAT, loaded from tape or punched cards. CP disks are formatted with fixed-length 829-byte records.
Following formatting, a second stand-alone utility, DIRECT, partitions the disk space between permanent (system and user files) and temporary (paging and spooling) space. DIRECT also creates the user directory identifying the virtual machines (users) available in the system. For each user the directory contains identifying information (user id and password) and lists the resources (core, devices, etc.) that the user can access. Although a user may be allowed access to physical devices, it is more common to specify virtual devices, such as a spooled card reader, card punch, and printer. A user can be allocated one or more virtual disk units, "mini disks" [sic], which resemble a real disk of the same device type, except that they occupy a subset of the space on the real device.
Family tree
See also
History of CP/CMS
https://en.wikipedia.org/wiki/CP/CMS | CP/CMS (Control Program/Cambridge Monitor System) is a discontinued time-sharing operating system of the late 1960s and early 1970s, known for its excellent performance and advanced features. Among its three versions, CP-40/CMS was an important "one-off" research system that established the CP/CMS virtual machine architecture. It was followed by CP-67/CMS, a reimplementation of CP-40/CMS for the IBM System/360-67, and the primary focus of this article. Finally, CP-370/CMS was a reimplementation of CP-67/CMS for the System/370. While it was never released as such, it became the foundation of IBM's VM/370 operating system, announced in 1972.
Each implementation was a substantial redesign of its predecessor and an evolutionary step forward. CP-67/CMS was the first widely available virtual machine architecture. IBM pioneered this idea with its research systems M44/44X (which used partial virtualization) and CP-40 (which used full virtualization).
In addition to its role as the predecessor of the VM family, CP/CMS played an important role in the development of operating system (OS) theory, the design of IBM's System/370, the time-sharing industry, and the creation of a self-supporting user community that anticipated today's free software movement.
History
Fundamental CP/CMS architectural and strategic parameters were established in CP-40, which began production use at IBM's Cambridge Scientific Center in early 1967. This effort occurred in a complex political and technical milieu, discussed at some length and supported by first-hand quotes in the Wikipedia article History of CP/CMS.
In a nutshell:
In the early 1960s, IBM sought to maintain dominance over scientific computing, where time-sharing efforts such as CTSS and MIT's Project MAC gained focus. But IBM had committed to a huge project, the System/360, which took the company in a different direction.
The time-sharing community was disappointed with the S/360's lack of time-sharing capabilities. This led to key IBM sales losses at Project MAC and Bell Laboratories. IBM's Cambridge Scientific Center (CSC), originally established to support Project MAC, began an effort to regain IBM's credibility in time-sharing, by building a time-sharing operating system for the S/360. This system would eventually become CP/CMS. In the same spirit, IBM designed and released a S/360 model with time-sharing features, the IBM System/360-67, and a time-sharing operating system, TSS/360. TSS failed; but the 360-67 and CP/CMS succeeded, despite internal political battles over time-sharing, and concerted efforts at IBM to scrap the CP/CMS effort.
In 1967, CP/CMS production use began, first on CSC's CP-40, then later on CP-67 at Lincoln Laboratories and other sites. It was made available via the IBM Type-III Library in 1968. By 1972, CP/CMS had gone through several releases; it was a robust, stable system running on 44 systems; it could support 60 timesharing users on an S/360-67; and at least two commercial timesharing vendors offered services based on it.
https://en.wikipedia.org/wiki/QFE | QFE is a three letter acronym which can have meanings in aviation, in software development, and in network usage. It can refer to
QFE, a Q code used by pilots and air traffic controllers that refers to atmospheric pressure and altimeter settings
Quick Fix Engineering, also known as "hotfix".
Quoted for emphasis, used on internet forums when someone wants to reiterate a previously-made point.
Qualifying Financial Entities, companies or organisations registered on the Financial Service Providers Register in New Zealand.
https://en.wikipedia.org/wiki/NetKernel | NetKernel is a British software company and software platform by the same name that is used for High Performance Computing, Enterprise Application Integration, and Energy Efficient Computation.
It allows developers to cleanly separate code from architecture. It can be used as an application server, embedded in a Java container or employed as a cloud computing platform.
As a platform, it is an implementation of the resource-oriented computing (ROC) abstraction. ROC is a logical computing model that resides on top of, but is completely isolated from, the physical realm of code and objects. In ROC, information and services are identified by logical addresses, which are resolved to physical endpoints for the duration of a request and then released. Logical indirect addressing results in flexible systems that can be changed while the system is in operation. In NetKernel, the boundary between the logical and physical layers is mediated by an operating-system-caliber microkernel that can perform various transparent optimizations.
The idea of using resources to model abstract information stems from the REST architectural style and the World Wide Web. The idea of using a uniform addressing model stems from the Unix operating system. NetKernel can be considered a unification of the Web and Unix, implemented as a software operating system running on a monolithic microkernel within a single computer.
NetKernel was developed by 1060 Research and is offered under a dual open-source software and commercial software license.
History
NetKernel was started at Hewlett-Packard Labs in 1999. It was conceived by Dr. Russ Perry, Dr. Royston Sellman and Dr. Peter Rodgers as a general purpose XML operating environment that could address the needs of the exploding interest in XML dialects for intra-industry XML messaging.
Rodgers saw the web as an implementation of a general abstraction which he extrapolated as ROC, but whereas the web is limited to publishing information; he set about conceiving a solution that could perform computation using similar principles. Working in close partnership with co-founder Tony Butterfield, they discovered a method for writing software that could be executed across a logical model, separated from the physical realm of code and objects. Recognising the potential for this approach, they spun out of HP Labs.
Rodgers and Butterfield began their company as "1060 Research Limited" in Chipping Sodbury, a small market town on the edge of the Cotswolds region of England, in 2002, and over a number of years developed the platform that became NetKernel.
In early 2018, 1060 Research announced that it was appointing a new CEO, Charles Radclyffe. Radclyffe announced to the NetKernel community in February 2018 that the team were working on a new platform based on NKEE 6 which would be fully hosted, programmable and accessible via the web - NetKernel Cloud. Radclyffe resigned after six months.
Concepts
Resource
A resource is identifiable information within a computational system.
https://en.wikipedia.org/wiki/EOD | EOD, EoD, or Eod may refer to:
Earth Overshoot Day
Education Opens Doors, in Dallas, Texas
Electric organ discharge
End of data, a control character in telecommunications
End of day, in business
End of days (disambiguation)
Esoteric Order of Dagon, a fictional cult in the Cthulhu mythos of H. P. Lovecraft
Eves of Destruction, a Canadian roller derby team
Evolution of Dance
Explosive Ordnance Disposal
https://en.wikipedia.org/wiki/City%20network | City networks can either refer to a membership organization city leaders join to connect their city to other municipalities, or to a geographical concept used to describe inter-connectivity of cities on different levels (trade, railways, culture etc.).
City networks in international cooperation
In the perspective of international cooperation, the term "city network" refers to a membership organization that cities join either to take part to specific projects, to be represented by geographical specificity, or to assert political commitments. One of the main reasons why cities join a city network is to learn good practices from peer cities that have similar characteristics and face comparable challenges (infrastructure, urban transport, water and sanitation, smart city, etc.).
City networks as a geographical concept
City networks are a geographical concept studying connections between cities by placing the cities as nodes on a network. In modern conceptions of cities, these networks play an important role in understanding the nature of cities. City networks can identify physical connections to other places, such as railways, canals, scheduled flights, or telecommunication networks, typically done using graph theory. City networks also exist in immaterial form, such as trade, global finance, markets, migration, cultural links, shared social spaces or shared histories. There are also networks of religious nature, in particular through pilgrimage.
The city itself is then regarded as the node where different networks run together. Some urban thinkers have argued that cities can only be understood if the context of the city's connections is understood.
It has been argued that city networks are a key ingredient of what defines a city, along with the number of people (density) and the particular way of life in cities.
https://en.wikipedia.org/wiki/Virtual%20Storage%20Access%20Method | Virtual Storage Access Method (VSAM) is an IBM DASD file storage access method, first used in the OS/VS1, OS/VS2 Release 1 (SVS) and Release 2 (MVS) operating systems, later used throughout the Multiple Virtual Storage (MVS) architecture and now in z/OS. Originally a record-oriented filesystem, VSAM comprises four data set organizations: key-sequenced (KSDS), relative record (RRDS), entry-sequenced (ESDS) and linear (LDS). The KSDS, RRDS and ESDS organizations contain records, while the LDS organization (added later to VSAM) simply contains a sequence of pages with no intrinsic record structure, for use as a memory-mapped file.
Overview
An IBM Redbook named "VSAM PRIMER" (especially when used with the "Virtual Storage Access Method (VSAM) Options for Advanced Applications" manual) explains the concepts needed to make use of VSAM. IBM uses the term data set in official documentation as a synonym of file, and the term direct access storage device (DASD) rather than disk drive because VSAM supported device types other than disk drives.
VSAM records can be of fixed or variable length. They are organised in fixed-size blocks called Control Intervals (CIs), and then into larger divisions called Control Areas (CAs). Control Interval sizes are measured in bytes, for example 4 kilobytes, while Control Area sizes are measured in disk tracks or cylinders. Control Intervals are the units of transfer between disk and computer, so a read request will read one complete Control Interval. Control Areas are the units of allocation, so when a VSAM data set is defined, an integral number of Control Areas will be allocated.
The Access Method Services utility program IDCAMS is commonly used to manipulate ("delete and define") VSAM data sets. Custom programs can access VSAM datasets through Data Definition (DD) statements in Job Control Language (JCL), via dynamic allocation or in online regions such as in Customer Information Control System (CICS).
Both IMS/DB and Db2 are implemented on top of VSAM and use its underlying data structures.
VSAM files
The physical organization of VSAM data sets differs considerably from the organizations used by other access methods, as follows.
A VSAM file is defined as a cluster of VSAM components, e.g., for KSDS a DATA component and an INDEX component.
Control Intervals and Control Areas
VSAM components consist of fixed length physical blocks grouped into fixed length control intervals (CI) and control areas (CA). The size of the CI and CA is determined by the Access Method Services (AMS), and the way in which they are used is normally not visible to the user. There will be a fixed number of control intervals in each control area.
A control interval normally contains multiple records. The records are stored within the control interval starting from the low address upwards. Control information is stored at the other end of the control interval, starting from the high address and moving downwards. The space between the records and the control information is free space.
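As a rough illustration of this layout, the following Java sketch estimates how many fixed-length records fit in a control interval. The control-field sizes used (a 4-byte CIDF and 3-byte RDFs, with fixed-length records sharing two RDFs) are the commonly documented values, assumed here for illustration; variable-length records follow additional rules:

public class ControlIntervalMath {
    static final int CIDF_BYTES = 4; // control interval definition field (assumed size)
    static final int RDF_BYTES  = 3; // record definition field (assumed size)

    // Fixed-length records in a CI typically share just two RDFs.
    static int recordsPerCi(int ciSizeBytes, int recordLengthBytes) {
        int usable = ciSizeBytes - CIDF_BYTES - 2 * RDF_BYTES;
        return usable / recordLengthBytes;
    }

    public static void main(String[] args) {
        // A 4 KB control interval holding 80-byte fixed-length records:
        System.out.println(recordsPerCi(4096, 80)); // prints 51
    }
}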
https://en.wikipedia.org/wiki/Sleepycat%20Software | Sleepycat Software, Inc. was the software company primarily responsible for maintaining the Berkeley DB packages from 1996 to 2006.
Berkeley DB is freely licensed database software originally developed at the University of California, Berkeley for 4.4BSD Unix. Developers from that project founded Sleepycat in 1996 to provide commercial support after a request by Netscape to provide new features in the software. In February 2006, Sleepycat was acquired by Oracle Corporation, which continued developing Berkeley DB.
The founders of the company were spouses Margo Seltzer and Keith Bostic, who are also original authors of Berkeley DB. Another original author, Michael Olson, was the President and CEO of Sleepycat. They were all at University of California, Berkeley, where they developed the software that grew to become Berkeley DB. Sleepycat was originally based in Carlisle, Massachusetts and moved to Lincoln, Massachusetts.
Sleepycat distributed Berkeley DB under a proprietary software license that included standard commercial features, and simultaneously under the newly created Sleepycat License, which allows open source use and distribution of Berkeley DB with a copyleft redistribution condition similar to the GNU General Public License.
Sleepycat had offices in California, Massachusetts and the United Kingdom, and was profitable during its entire existence.
See also
Berkeley Software Design
Computer Systems Research Group
External links
Oracle Berkeley DB — successor to Sleepycat's web site
https://en.wikipedia.org/wiki/Network%20society | Network society is the expression coined in 1991 related to the social, political, economic and cultural changes caused by the spread of networked, digital information and communications technologies. The intellectual origins of the idea can be traced back to the work of early social theorists such as Georg Simmel who analyzed the effect of modernization and industrial capitalism on complex patterns of affiliation, organization, production and experience.
Origins
The term network society was coined by Jan van Dijk in his 1991 Dutch book De Netwerkmaatschappij (The Network Society) and by Manuel Castells in The Rise of the Network Society (1996), the first part of his trilogy The Information Age. In 1978 James Martin used the related term 'The Wired Society' indicating a society that is connected by mass- and telecommunication networks.
Van Dijk defines the network society as a society in which a combination of social and media networks shapes its prime mode of organization and most important structures at all levels (individual, organizational and societal). He compares this type of society to a mass society that is shaped by groups, organizations and communities ('masses') organized in physical co-presence.
Barry Wellman, Hiltz and Turoff
Wellman studied the network society as a sociologist at the University of Toronto. His first formal work was in 1973, "The Network City" with a more comprehensive theoretical statement in 1988. Since his 1979 "The Community Question", Wellman has argued that societies at any scale are best seen as networks (and "networks of networks") rather than as bounded groups in hierarchical structures. More recently, Wellman has contributed to the theory of social network analysis with an emphasis on individualized networks, also known as "networked individualism". In his studies, Wellman focuses on three main points of the network society: community, work and organizations. He states that with recent technological advances an individual's community can be socially and spatially diversified. Organizations can also benefit from the expansion of networks in that having ties with members of different organizations can help with specific issues.
In 1978, Roxanne Hiltz and Murray Turoff's The Network Nation explicitly built on Wellman's community analysis, taking the book's title from Craven and Wellman's "The Network City". The book argued that computer supported communication could transform society. It was remarkably prescient, as it was written well before the advent of the Internet. Turoff and Hiltz were the progenitors of an early computer supported communication system, called EIES.
Manuel Castells
According to Castells, networks constitute the new social morphology of our societies. When interviewed by Harry Kreisler from the University of California Berkeley, Castells said "...the definition, if you wish, in concrete terms of a network society is a society where the key social structures and activities are orga |
https://en.wikipedia.org/wiki/CBS%20Radio%20Mystery%20Theater | CBS Radio Mystery Theater (a.k.a. Radio Mystery Theater and Mystery Theater, sometimes abbreviated as CBSRMT) is a radio drama series created by Himan Brown that was broadcast on CBS Radio Network affiliates from 1974 to 1982, and later in the early 2000s was repeated by the NPR satellite feed.
The format was similar to that of classic old-time radio shows like The Mysterious Traveler and The Whistler, in that the episodes were introduced by host E. G. Marshall, who provided pithy wisdom and commentary throughout. Unlike the hosts of those earlier programs, Marshall is fully mortal, merely someone whose heightened insight and erudition plunge the listener into the world of the macabre.
As with Himan Brown's prior Inner Sanctum Mysteries, each episode of CBS Radio Mystery Theater opened and closed with the ominous sound of a creaking door. This sound effect is accompanied by Marshall's greeting, "Come in!… Welcome. I'm E. G. Marshall." At each show's conclusion, the door swings shut, and Marshall signs off with: "Until next time, pleasant… dreams?" This is followed by an extended variation of the show's theme music.
CBSRMT was broadcast each weeknight, at first with a new program each night. Later in the run, three or four episodes were new originals each week, and the remainder repeats. There were 1,399 original episodes. The total number of broadcasts, including repeats, was 2,969. Each episode was allotted a full hour of airtime, but after commercials and newscasts, each episode typically ran for around 45 minutes.
Hosts
E. G. Marshall hosted the program from January 1974 until February 1, 1982, when actress Tammy Grimes took over for the remainder of the series' final season, maintaining the format. Himan Brown re-recorded E. G. Marshall's original host segments for NPR's broadcast of the show in the 2000s.
Music
The series' theme music features three descending notes from double basses, a stopped horn sting and timpani roll, then a low, eerie theme played by the bass clarinet. The opening and closing themes for CBSRMT are excerpted from the music from the score for Twilight Zone episode "Two", composed by Nathan Van Cleave. Series listeners will immediately recognize the "RMT Theme" beginning about 1:35 on the "Two" soundtrack selection from the Twilight Zone CD boxed set. Other background tracks from the Twilight Zone music library, to which CBS owned full rights, were featured repeatedly in episodes of CBSRMT. The theme song and the other music was also previously used in the 1950s and 1960s in other CBS-owned radio and television dramas (Perry Mason; Rawhide; The Fugitive; Gunsmoke; Have Gun Will Travel; Suspense; Yours Truly, Johnny Dollar; etc.), in addition to Twilight Zone, as it was all owned by CBS.
Scope
Despite the show's title, Brown expanded its scope beyond mysteries to include horror, science fiction, historical drama, westerns and comedy, along with seasonal dramas during the Christmas season: An adaptation of A Christma |
https://en.wikipedia.org/wiki/Fragile%20base%20class | The fragile base class problem is a fundamental architectural problem of object-oriented programming systems where base classes (superclasses) are considered "fragile" because seemingly safe modifications to a base class, when inherited by the derived classes, may cause the derived classes to malfunction. The programmer cannot determine whether a base class change is safe simply by examining in isolation the methods of the base class.
One possible solution is to make instance variables private to their defining class and force subclasses to use accessors to modify superclass states. A language could also make it so that subclasses can control which inherited methods are exposed publicly. These changes prevent subclasses from relying on implementation details of superclasses and allow subclasses to expose only those superclass methods that are applicable to themselves.
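For illustration, a minimal Java sketch of the accessor approach (the class and method names here are invented for the example):

class Base {
    private int state; // hidden: subclasses cannot access it directly

    protected int getState() { return state; }
    protected void setState(int s) { state = s; }
}

class Derived extends Base {
    void bump() {
        // Forced through the accessors, Derived cannot come to depend on
        // how Base happens to represent its state internally.
        setState(getState() + 1);
    }
}

Because Derived never touches the field itself, Base remains free to change its internal representation without breaking the subclass.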
Another alternative is to use an interface instead of a superclass.
The fragile base class problem has been blamed on open recursion (dynamic dispatch of methods on this). One suggested fix is for method invocations on this to default to closed recursion (static dispatch, early binding) rather than open recursion (dynamic dispatch, late binding), using open recursion only when it is specifically requested; external calls (not using this) would be dynamically dispatched as usual.
Java example
The following trivial example is written in the Java programming language and shows how a seemingly safe modification of a base class can cause an inheriting subclass to malfunction by entering an infinite recursion which will result in a stack overflow.
class Super {
    private int counter = 0;

    void inc1() {
        counter++;
    }

    void inc2() {
        counter++;
    }
}

class Sub extends Super {
    @Override
    void inc2() {
        inc1();
    }
}
Calling the dynamically bound method inc2() on an instance of Sub will correctly increase the field counter by one. If, however, the code of the superclass is changed in the following way:
class Super {
    private int counter = 0;

    void inc1() {
        inc2();
    }

    void inc2() {
        counter++;
    }
}
a call to the dynamically bound method inc2() on an instance of Sub will cause an infinite recursion between itself and the method inc1() of the superclass and eventually cause a stack overflow. This problem could have been avoided by declaring the methods in the superclass as final, which would make it impossible for a subclass to override them. However, this is not always desirable or possible. Therefore, it is good practice for superclasses to avoid changing calls to dynamically bound methods.
Solutions
Objective-C has categories as well as non-fragile instance variables.
Component Pascal deprecates superclass calls.
Java, C++ (Since C++11) and D allow inheritance or overriding a class method to be prohibited by labeling a declaration of a class or method, respectively, with the keyword "final". |
https://en.wikipedia.org/wiki/Macintosh%20IIx | The Macintosh IIx is a personal computer designed, manufactured, and sold by Apple Computer from September 1988 to October 1990. This model was introduced as an update to the original Macintosh II, replacing the 16 MHz Motorola 68020 CPU and 68881 FPU with a 68030 CPU and 68882 FPU running at the same clock speed. The IIx launched at two price points, the higher for the version with a 40 MB hard drive.
The 800 KB floppy drive was replaced with a 1.44 MB SuperDrive; the IIx is the first Macintosh to include this as standard.
The Mac IIx included 0.25 KiB of L1 instruction CPU cache, 0.25 KiB of L1 data cache, a 16 MHz bus (1:1 with CPU speed), and supported up to System 7.5.5.
The IIx was the second of three Macintosh models to use the original Macintosh II case, which allowed dual floppy drives and six NuBus slots; the last model was the Macintosh IIfx. Apple's nomenclature of the time used the "x" to indicate the presence of the 68030 CPU as used in the Macintosh IIcx and IIvx.
Support and spare parts for the IIx were discontinued on August 31, 1998.
Timeline
References
External links
Macintosh IIx technical specification at apple.com
Macintosh IIx technical specifications at EveryMac (Accessed 9/11/2015)
Computer-related introductions in 1988
Products and services discontinued in 1990 |
https://en.wikipedia.org/wiki/Macintosh%20Centris | Macintosh Centris is a family of personal computers designed, manufactured and sold by Apple Computer, Inc. in 1992 and 1993. They were introduced as a replacement for the six-year-old Macintosh II family of computers; the name was chosen to indicate that the consumer was selecting a Macintosh in the center of Apple's product line. Centris machines were the first to offer Motorola 68040 CPUs at a price point around US$2,500, making them significantly less expensive (albeit slower) than Quadra computers, but also offering higher performance than the Macintosh LC computers of the time.
Apple released three computers bearing the Centris name: the Centris 610 (replacing the Macintosh IIsi) and Centris 650 (replacing the Macintosh IIci in form and the Quadra 700 in function), both of which were introduced in March 1993, and the Centris 660AV which followed in July. Apple also considered the Macintosh IIvx to be part of the Centris line. The IIvx was released in October of the previous year but, according to Apple, their lawyers were unable to complete the trademark check on the "Centris" name in time for the IIvx's release.
The retirement of the Centris name was announced in September 1993, with the 610, 650 and 660AV all being rebranded the following month as Macintosh Quadra machines as part of Apple's effort to reposition their product families to correlate with customer markets instead of price ranges and features. The IIvx was also discontinued in favor of the newly announced Quadra 605.
Overview
The Centris 610 uses a 20 MHz 68LC040 CPU, which has no math coprocessor functions. It used a new "pizza box" case that was intended to be placed under the user's computer monitor. The Centris 610 also provided the basis for the Workgroup Server 60. This case was also used for the Power Macintosh 6100 lines of computers and, when these later computers were introduced, Apple offered consumers a product upgrade path by letting them buy a new motherboard. Apple's motherboard upgrades of this type were considered expensive, however, and were not a popular option.
The base-configuration Centris 650 uses a 25 MHz 68LC040 CPU; while more expensive configurations with built-in Ethernet use the 25 MHz 68040 allowing it to succeed the Quadra 700. It uses the Macintosh IIvx-style desktop case.
The Centris 660AV uses a 25 MHz 68040 and also includes an AT&T 3210 digital signal processor. Like other "AV" computers from Apple, it supports both video input and output. It uses the "pizza box" case which debuted earlier in the Centris 610.
The Centris 610 and 650 were replaced about six months after their introduction by the Quadra 610 and 650 models, which kept the same case and designs but raised the CPU speeds from 20 MHz and 25 MHz to 25 MHz and 33 MHz, respectively – while the Centris 660AV was renamed as the Quadra 660AV without any actual design change. These Macs also existed during Apple's transition from auto-inject floppy drives to manual-inject drives. |
https://en.wikipedia.org/wiki/Macintosh%20Quadra | The Macintosh Quadra is a family of personal computers designed, manufactured and sold by Apple Computer, Inc. from October 1991 to October 1995. The Quadra, named for the Motorola 68040 central processing unit, replaced the Macintosh II family as the high-end Macintosh model.
The first models were the Quadra 700 and Quadra 900, both introduced in October 1991. The Quadra 800, 840AV and 605 were added through 1993. The Macintosh Centris line was merged with the Quadra in October 1993, adding the 610, 650 and 660AV to the range. After the introduction of the Power Macintosh line in early 1994, Apple continued to produce and sell new Quadra models; the 950 continued to be sold until October, 1995.
The product manager for the Quadra family was Frank Casanova who was also the Product Manager for the Macintosh IIfx.
Models
The first computers bearing the Macintosh Quadra name were the Quadra 700 and Quadra 900, both introduced in 1991 with a central processing unit (CPU) speed of 25 MHz. The 700 was a compact model using the same case dimensions as the Macintosh IIci, with a Processor Direct Slot (PDS) expansion slot, while the 900 used a newly designed tower case with five NuBus expansion slots and one PDS slot. The 900 was replaced in 1992 with the Quadra 950, with a CPU speed of 33 MHz. The line was joined by a number of "800-series" machines in a new minitower case design, starting with the Quadra 800, and the "600-series" pizza box desktop cases with the Quadra 610.
In 1993, the Quadra 840AV and 660AV were introduced at 40 MHz and 25 MHz respectively. They included an AT&T 3210 Digital signal processor and S-Video and composite video input/output ports, as well as CD-quality microphone and audio output ports. The AV models also introduced PlainTalk, consisting of the text-to-speech software MacinTalk Pro and speech control (although not dictation). However all of these features were poorly supported in software and a DSP was not installed in later AV Macs, which were based on the more powerful PowerPC 601 - a CPU powerful enough to handle the coprocessor's duties on its own.
Branding
Apple hired marketing firm Lexicon Branding to come up with the name. Lexicon chose the name Quadra hoping to appeal to engineers by evoking technical terms like quadrant and quadriceps.
The Quadra name was also used for the successors to the Centris models that briefly existed during 1993: The 610, the 650 and the 660AV. Centris was a "mid-range" line of systems between the Quadra on the high end and the LC on the low end, but it was later decided that there were too many product lines and the name was dropped. Some machines of this era including the Quadra 605 were also sold as Performas.
The last use of the name was for the Quadra 630, which was a variation of the LC 630 using a "full" Motorola 68040 instead of the LC's 68LC040, and introduced together with it in 1994. The 630 was the first Mac to use an IDE-based drive bus for the internal hard disk drive. |
https://en.wikipedia.org/wiki/Destructor | Destructor may refer to:
Destructor (computer programming), in object-oriented programming, a method which is automatically invoked when an object is destroyed
Euronymous (1968–1993), guitarist and co-founder of the Norwegian black metal band Mayhem
Spanish warship Destructor (1886), a fast ocean-going torpedo gunboat
Destructor, a Marvel Comics character; see Advanced Idea Mechanics
Destructor, a Futurama character
Cherax destructor, the scientific name for the Common Yabby
a municipal incinerator; the term destructor was used well into the 20th century. |
https://en.wikipedia.org/wiki/Temperature%20record%20of%20the%20last%202%2C000%20years | The temperature record of the last 2,000 years is reconstructed using data from climate proxy records in conjunction with the modern instrumental temperature record which only covers the last 170 years at a global scale. Large-scale reconstructions covering part or all of the 1st millennium and 2nd millennium have shown that recent temperatures are exceptional: the Intergovernmental Panel on Climate Change Fourth Assessment Report of 2007 concluded that "Average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the past 1,300 years." The curve shown in graphs of these reconstructions is widely known as the hockey stick graph because of the sharp increase in temperatures during the last century. This broad pattern was supported by more than two dozen reconstructions, using various statistical methods and combinations of proxy records, with variations in how flat the pre-20th-century "shaft" appears. Sparseness of proxy records results in considerable uncertainty for earlier periods.
Individual proxy records, such as tree ring widths and densities used in dendroclimatology, are calibrated against the instrumental record for the period of overlap. Networks of such records are used to reconstruct past temperatures for regions: tree ring proxies have been used to reconstruct Northern Hemisphere extratropical temperatures (within the tropics trees do not form rings) but are confined to land areas and are scarce in the Southern Hemisphere which is largely ocean. Wider coverage is provided by multiproxy reconstructions, incorporating proxies such as lake sediments, ice cores and corals which are found in different regions, and using statistical methods to relate these sparser proxies to the greater numbers of tree ring records. The "Composite Plus Scaling" (CPS) method is widely used for large-scale multiproxy reconstructions of hemispheric or global average temperatures; this is complemented by Climate Field Reconstruction (CFR) methods which show how climate patterns have developed over large spatial areas, making the reconstruction useful for investigating natural variability and long-term oscillations as well as for comparisons with patterns produced by climate models.
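As a rough illustration of the Composite Plus Scaling idea, here is a toy sketch in Java (it is not any published group's actual procedure; the array layout and the assumption that the instrumental record covers exactly the years from overlapStart onward are invented for the example):

import java.util.Arrays;

// Standardize each proxy series, average them into a composite, then
// rescale the composite so that its mean and variance over the
// instrumental overlap match those of the instrumental record.
class Cps {
    static double[] reconstruct(double[][] proxies, double[] instrumental, int overlapStart) {
        int years = proxies[0].length; // assumes all series share one time axis
        double[] composite = new double[years];
        for (double[] p : proxies) {
            double m = mean(p), s = stdDev(p, m);
            for (int t = 0; t < years; t++) composite[t] += (p[t] - m) / s;
        }
        for (int t = 0; t < years; t++) composite[t] /= proxies.length;

        // Calibrate against the instrumental record over the overlap period.
        double[] overlap = Arrays.copyOfRange(composite, overlapStart, years);
        double cm = mean(overlap), cs = stdDev(overlap, cm);
        double im = mean(instrumental), is = stdDev(instrumental, im);

        double[] result = new double[years];
        for (int t = 0; t < years; t++) result[t] = (composite[t] - cm) / cs * is + im;
        return result;
    }

    static double mean(double[] x) { double s = 0; for (double v : x) s += v; return s / x.length; }

    static double stdDev(double[] x, double m) {
        double s = 0; for (double v : x) s += (v - m) * (v - m);
        return Math.sqrt(s / x.length);
    }
}

The pre-overlap part of the rescaled composite is then read as the reconstructed temperature; real reconstructions add uncertainty estimates and proxy weighting that this sketch omits.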
During the 1,900 years before the 20th century, it is likely that the next warmest period was from 950 to 1100, with peaks at different times in different regions. This has been called the Medieval Warm Period, and some evidence suggests widespread cooler conditions during a period around the 17th century known as the Little Ice Age. In the "hockey stick controversy", climate change deniers have asserted that the Medieval Warm Period was warmer than at present, and have disputed the data and methods of climate reconstructions.
Temperature change in the last 2,000 years
According to IPCC Sixth Assessment Report, in the last 170 years, human |
https://en.wikipedia.org/wiki/Anthology | In book publishing, an anthology is a collection of literary works chosen by the compiler; it may be a collection of plays, poems, short stories, songs or excerpts by different authors.
In genre fiction, the term anthology typically categorizes collections of shorter works, such as short stories and short novels, by different authors, each featuring unrelated casts of characters and settings, and usually collected into a single volume for publication. Alternatively, it can also be a collection of selected writings (short stories, poems etc.) by one author.
Complete collections of works are often called "complete works" or "opera omnia" (the Latin equivalent).
Etymology
The word entered the English language in the 17th century, from the Greek word ἀνθολογία (anthologia, literally "a collection of blossoms", from ἄνθος, ánthos, "flower"), a reference to one of the earliest known anthologies, the Garland (stéphanos), the introduction to which compares each of its anthologized poets to a flower. That Garland by Meléagros of Gadara formed the kernel for what has become known as the Greek Anthology.
Florilegium, a Latin derivative for a collection of flowers, was used in medieval Europe for an anthology of Latin proverbs and textual excerpts. Shortly before anthology had entered the language, English had begun using florilegium as a word for such a collection.
Early anthologies
The Palatine Anthology, discovered in the Palatine Library, Heidelberg in 1606, is a collection of Greek poems and epigrams that was based on the lost 10th-century Byzantine collection of Constantinus Cephalas, which in turn was based on older anthologies. In the Middle Ages, European collections of florilegia became popular, bringing together extracts from various Christian and pagan philosophical texts. These evolved into commonplace books and miscellanies, including proverbs, quotes, letters, poems and prayers.
Songes and Sonettes, usually called Tottel's Miscellany, was the first printed anthology of English poetry. It was published by Richard Tottel in 1557 in London and ran to many editions in the sixteenth century. A widely read series of political anthologies, Poems on Affairs of State, began its publishing run in 1689, finishing in 1707.
In Britain, one of the earliest national poetry anthologies to appear was The British Muse (1738), compiled by William Oldys. Thomas Percy's influential Reliques of Ancient English Poetry (1765) was the first of the great ballad collections, responsible for the ballad revival in English poetry that became a significant part of the Romantic movement. William Enfield's The Speaker; Or, Miscellaneous Pieces was published in 1774 and was a mainstay of 18th-century schoolrooms. Important nineteenth-century anthologies included Palgrave's Golden Treasury (1861), Edward Arber's Shakespeare Anthology (1899) and the first edition of Arthur Quiller Couch's Oxford Book of English Verse (1900).
Traditional
In East Asian tradition, an anthology was a recog |
https://en.wikipedia.org/wiki/Cliff%20Shaw | John Clifford Shaw (February 23, 1922 – February 9, 1991) was a systems programmer at the RAND Corporation. He is a coauthor of the first artificial intelligence program, the Logic Theorist, and was one of the developers of General Problem Solver (universal problem solver machine) and Information Processing Language (a programming language of the 1950s). Information Processing Language is considered the true "father" of the JOSS language. One of the most significant events in the history of programming was the development of the concept of list processing by Allen Newell, Herbert A. Simon and Cliff Shaw during the development of the language IPL-V. He invented the linked list, which remains fundamental in many strands of modern computing technology.
References
External links
Simon, Herbert A. Allen Newell - a referenced biography of Newell and Shaw at the National Academy of Sciences.
People in information technology
Artificial intelligence researchers
Carnegie Mellon University faculty
1922 births
1991 deaths |
https://en.wikipedia.org/wiki/List%20of%20airports%20by%20IATA%20airport%20code%3A%20O |
O
Notes
OSA is the common IATA code for Kansai International Airport, Osaka International Airport and Kobe Airport.
References
- includes IATA codes
Aviation Safety Network - IATA and ICAO airport codes
Great Circle Mapper - IATA, ICAO and FAA airport codes
O |
https://en.wikipedia.org/wiki/List%20of%20airports%20by%20IATA%20airport%20code%3A%20Q |
Q
References
- includes IATA codes
Aviation Safety Network - IATA and ICAO airport codes
Great Circle Mapper - IATA, ICAO and FAA airport codes
Datahub complete list of IATA codes
Q |
https://en.wikipedia.org/wiki/List%20of%20airports%20by%20IATA%20airport%20code%3A%20U |
U
References
- includes IATA codes
Aviation Safety Network - IATA and ICAO airport codes
Great Circle Mapper - IATA, ICAO and FAA airport codes
U |
https://en.wikipedia.org/wiki/List%20of%20airports%20by%20IATA%20airport%20code%3A%20V |
V
References
- includes IATA codes
Aviation Safety Network - IATA and ICAO airport codes
Great Circle Mapper - IATA, ICAO and FAA airport codes
V |
https://en.wikipedia.org/wiki/List%20of%20airports%20by%20IATA%20airport%20code%3A%20X |
X
References
- includes IATA codes
Aviation Safety Network - IATA and ICAO airport codes
Great Circle Mapper - IATA, ICAO and FAA airport codes
X |
https://en.wikipedia.org/wiki/List%20of%20airports%20by%20IATA%20airport%20code%3A%20Z |
Z
References
- includes IATA codes
Aviation Safety Network - IATA and ICAO airport codes
Great Circle Mapper - IATA, ICAO and FAA airport codes
Z |
https://en.wikipedia.org/wiki/Quine%E2%80%93McCluskey%20algorithm | The Quine–McCluskey algorithm (QMC), also known as the method of prime implicants, is a method used for minimization of Boolean functions that was developed by Willard V. Quine in 1952 and extended by Edward J. McCluskey in 1956. As a general principle this approach had already been demonstrated by the logician Hugh McColl in 1878, was proved by Archie Blake in 1937, and was rediscovered by Edward W. Samson and Burton E. Mills in 1954 and by Raymond J. Nelson in 1955. Also in 1955, Paul W. Abrahams and John G. Nordahl as well as Albert A. Mullin and Wayne G. Kellner proposed a decimal variant of the method.
The Quine–McCluskey algorithm is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function has been reached. It is sometimes referred to as the tabulation method.
The method involves two steps:
Finding all prime implicants of the function (a code sketch of this step follows the list).
Using those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as other prime implicants that are necessary to cover the function.
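A minimal sketch of the first step in Java (the Long-based encoding, with the bit values in the low 32 bits and the dash mask in the high 32 bits, is an arbitrary choice made for this example; it limits the sketch to at most 32 variables):

import java.util.*;

class QmcStep1 {
    // An implicant packs its defined bit values (low word) together with a
    // mask of positions already merged into a dash '-' (high word).
    static long implicant(long value, long mask) { return value | (mask << 32); }

    static Set<Long> primeImplicants(Set<Integer> minterms) {
        Set<Long> current = new HashSet<>();
        for (int m : minterms) current.add(implicant(m, 0));
        Set<Long> primes = new HashSet<>();
        while (!current.isEmpty()) {
            Set<Long> next = new HashSet<>();
            Set<Long> merged = new HashSet<>();
            for (long a : current)
                for (long b : current) {
                    long diff = (a ^ b) & 0xFFFFFFFFL; // differing value bits
                    // Combine implicants with identical dashes that differ
                    // in exactly one defined bit position.
                    if ((a >>> 32) == (b >>> 32) && Long.bitCount(diff) == 1) {
                        next.add(implicant((a & b) & 0xFFFFFFFFL, (a >>> 32) | diff));
                        merged.add(a);
                        merged.add(b);
                    }
                }
            for (long a : current)
                if (!merged.contains(a)) primes.add(a); // cannot merge: prime
            current = next;
        }
        return primes;
    }
}

Don't-care terms are fed in alongside the minterms here; they are excluded again when the prime implicant chart of step two is built.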
Complexity
Although more practical than Karnaugh mapping when dealing with more than four variables, the Quine–McCluskey algorithm also has a limited range of use since the problem it solves is NP-complete. The running time of the Quine–McCluskey algorithm grows exponentially with the number of variables. For a function of n variables the number of prime implicants can be as large as 3^n/ln(n), e.g. for 32 variables there may be over 534 × 10^12 prime implicants. Functions with a large number of variables have to be minimized with potentially non-optimal heuristic methods, of which the Espresso heuristic logic minimizer was the de facto standard in 1995.
Step two of the algorithm amounts to solving the set cover problem; NP-hard instances of this problem may occur in this algorithm step.
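Exact solutions to this covering step use Petrick's method or branch and bound; a simple greedy approximation, sketched below in Java, is often good enough in practice (illustrative only; each prime implicant is represented here just by the set of minterms it covers):

import java.util.*;

class CoverStep {
    // Repeatedly pick the prime implicant covering the most still-uncovered
    // minterms. Greedy set cover: fast, but not guaranteed minimal.
    static List<Set<Integer>> greedyCover(List<Set<Integer>> implicants, Set<Integer> minterms) {
        Set<Integer> uncovered = new HashSet<>(minterms);
        List<Set<Integer>> chosen = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            Set<Integer> best = null;
            int bestGain = 0;
            for (Set<Integer> imp : implicants) {
                Set<Integer> gain = new HashSet<>(imp);
                gain.retainAll(uncovered);
                if (gain.size() > bestGain) { bestGain = gain.size(); best = imp; }
            }
            if (best == null) break; // remaining minterms cannot be covered
            chosen.add(best);
            uncovered.removeAll(best);
        }
        return chosen;
    }
}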
Example
Input
In this example, the input is a Boolean function in four variables, which evaluates to 1 on the values 4, 8, 10, 11, 12 and 15, evaluates to an unknown value on 9 and 14, and to 0 everywhere else (where these integers are interpreted in their binary form for input to the function for succinctness of notation). The inputs that evaluate to 1 are called 'minterms'. We encode all of this information by writing

f(A,B,C,D) = Σm(4,8,10,11,12,15) + d(9,14)

This expression says that the output function f will be 1 for the minterms 4, 8, 10, 11, 12 and 15 (denoted by the 'm' term) and that we don't care about the output for the 9 and 14 combinations (denoted by the 'd' term). The summation symbol denotes the logical sum (logical OR, or disjunction) of all the terms being summed over.
Step 1: finding prime implicants
First, we write the function as a table (where 'x' stands for don't care):
{| class="wikitable"
|-
! !! A !! B !! C !! D !! f
|-
| m0 || 0 || 0 || 0 || 0 || 0
|-
| m1 || 0 || 0 || 0 || 1 || 0
|-
| m2 || 0 || 0 || 1 || 0 || 0
|-
| m3 || 0 || 0 || 1 || 1 || 0
|-
| m4 || 0 || |
https://en.wikipedia.org/wiki/NSS | NSS may refer to:
Arts and entertainment
New Star Soccer, a computer game
Newsstand Specials, a spinoff of Playboy magazine
Nintendo Super System, an arcade game cabinet that plays Super NES games
Northern Sound System, a youth music education centre and venue in Adelaide, South Australia
NSS, a 1958 computer chess program
Organizations
Government bodies
National Security Secretariat (United Kingdom), part of the Cabinet Office
National Security Service (United States), an office in the American FBI
National Security Service (Maldives), former name of the Maldivian armed forces
National Security Service (Somalia), a former primary intelligence agency
National Seismological Service, an agency of the Mexican government
National Service Scheme, a Government of India-sponsored program for community service and social activities
National Statistical Service of the Republic of Armenia, former name of the Statistical Committee of Armenia
NHS National Services Scotland, the central support agency to the NHS in Scotland
Political parties
People's Socialist Party of Montenegro, a political party in the Republic of Montenegro
National Power Unity, a far-right party in Latvia
People's Peasant Party (Narodna seljačka stranka), a political party in Serbia
Schools
National Sport School (Canada), a high school in Alberta, Canada
Nauru Secondary School, a high school in Nauru
National Socialist Schoolchildren's League (Nationalsozialistischer Schülerbund), the organization for school pupils in the Third Reich
Northbrooks Secondary School, a secondary school in Yishun, Singapore
Northern Secondary School, a high school in Toronto, Canada
Companies
National Screen Service, an American company involved in cinema advertising
New Skies Satellites, the former name of the Dutch spacecraft operator SES NEW SKIES
Norsk Spisevognselskap, a dining car company of Norway
Other organizations
Nair Service Society, a community welfare organization for the Nair community from Kerala, India
National Space Society, an international space advocacy organization
National Sculpture Society, US
National Secular Society, a British organization
National Speleological Society, an American organization
Nilachala Saraswata Sangha, a religious organization
National Service Scheme, Indian government-sponsored public service programme
Politics and government
National Security Strategy (United States)
National Shelter System, a database of emergency shelters in the US administered by the Federal Emergency Management Agency
National Student Survey, a survey of final-year degree students in the UK
Nuclear Security Summit, a summit on nuclear security established by US President Obama
Science and technology
Computing
Name Service Switch, a technology used in UNIX
Namespace Specific String, a part of the syntax of a Uniform Resource Name
Network Security Services, cryptographic software libraries for client and server security
Network switching subsystem, the component of a GSM network that performs call switching |
https://en.wikipedia.org/wiki/WebObjects | WebObjects was a Java web application server and a server-based web application framework originally developed by NeXT Software, Inc.
WebObjects' hallmark features are its object-orientation, database connectivity, and prototyping tools. Applications created with WebObjects can be deployed as web sites, Java WebStart desktop applications, and/or standards-based web services.
The deployment runtime is pure Java, allowing developers to deploy WebObjects applications on platforms that support Java. One can use the included WebObjects Java SE application server or deploy on third-party Java EE application servers such as JBoss, Apache Tomcat, WebLogic Server or IBM WebSphere.
WebObjects was maintained by Apple for a number of years; after Apple stopped maintaining the software, it came to be maintained by an online community of volunteers known as "Project Wonder".
WebObjects also now has several competitors: see below.
History
NeXT creates WebObjects
WebObjects was created by NeXT Software, Inc., first publicly demonstrated at the Object World conference in 1995 and released to the public in March 1996. The time and cost benefits of rapid, object-oriented development attracted major corporations to WebObjects in the early days of e-commerce, with clients including BBC News, Dell Computer, Disney, DreamWorks SKG, Fannie Mae, GE Capital, Merrill Lynch, and Motorola.
Apple acquires NeXT, and continues to maintain the software
Following NeXT's merger into Apple Inc. in 1997, WebObjects' public profile languished. Many early adopters later switched to alternative technologies, and currently Apple remains the biggest client for the software, relying on it to power parts of its online Apple Store and the iTunes Store — WebObjects' highest-profile implementation.
WebObjects was part of Apple's strategy of using software to drive hardware sales, and in 2000 the price was lowered from $50,000 (for the full deployment license) to $699. From May 2001, WebObjects was included with Mac OS X Server, and no longer required a license key for development or deployment.
WebObjects transitioned from a stand-alone product to be a part of Mac OS X with the release of version 5.3 in June 2005. The developer tools and frameworks, which previously sold for US$699, were bundled with Apple's Xcode IDE. Support for other platforms, such as Windows, was then discontinued. Apple said that it would further integrate WebObjects development tools with Xcode in future releases. This included a new EOModeler Plugin for Xcode. This strategy, however, was not pursued further.
In 2006, Apple announced the deprecation of Mac OS X's Cocoa-Java bridge with the release of Xcode 2.4 at the August 2006 Worldwide Developers Conference, and with it all dependent features, including the entire suite of WebObjects developer applications: EOModeler, EOModeler Plugin, WebObjects Builder, WebServices Assistant, RuleEditor and WOALauncher. Apple had decided t |
https://en.wikipedia.org/wiki/Object%20request%20broker | In distributed computing, an object request broker (ORB) is a middleware concept which allows program calls to be made from one computer to another via a computer network, providing location transparency through remote procedure calls. ORBs promote interoperability of distributed object systems, enabling such systems to be built by piecing together objects from different vendors, while different parts communicate with each other via the ORB. The Common Object Request Broker Architecture (by the Object Management Group) standardizes the way an ORB may be implemented.
Overview
An ORB handles the transformation of in-process data structures to and from the raw byte sequence that is transmitted over the network. This is called marshalling or serialization. In addition to marshalling data, ORBs often expose many more features, such as distributed transactions, directory services or real-time scheduling. Some ORBs, such as CORBA-compliant systems, use an interface description language to describe the data that is to be transmitted on remote calls.
In object-oriented languages (e.g., Java), an ORB provides a framework which enables remote objects to be used over the network as if they were local and part of the same process. On the client side, so-called stub objects are created and invoked, serving as the only part visible to and used by the client application. When a stub's method is invoked, the client-side ORB marshals the invocation data and forwards the request to the server-side ORB. There, the ORB locates the targeted object, executes the requested operation, and returns the results. The client's ORB then demarshals the results and passes them back to the invoked stub, making them available to the client application. The whole process is transparent, so remote objects appear as if they were local.
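As a minimal illustration of the stub idea, the sketch below uses Java's dynamic proxies; the "remote" side is faked in-process (the Greeter interface, FakeOrb class and servant are invented for the example, and a real ORB would marshal the call over a network instead):

import java.lang.reflect.Proxy;

interface Greeter { String greet(String name); }

class FakeOrb {
    // Stand-in for the server-side object a real ORB would locate.
    static final Greeter servant = name -> "Hello, " + name;

    @SuppressWarnings("unchecked")
    static <T> T stubFor(Class<T> iface) {
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(), new Class<?>[] { iface },
            (proxy, method, args) -> {
                // A real ORB would marshal the method name and arguments
                // here, send them to the server-side ORB, and demarshal
                // the reply; this sketch just forwards in-process.
                return method.invoke(servant, args);
            });
    }

    public static void main(String[] args) {
        Greeter stub = stubFor(Greeter.class);
        System.out.println(stub.greet("world")); // looks like a local call
    }
}

The calling code sees only the Greeter interface, so replacing the in-process forwarding with real network marshalling would not change it; that is the transparency the ORB provides.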
Implementations
CORBA - Common Object Request Broker Architecture.
ICE - the Internet Communications Engine
.NET Remoting - object remoting library within Microsoft's .NET Framework
Windows Communication Foundation (WCF)
ORBexpress - Real-time and Enterprise ORBs by Objective Interface Systems
Orbix - An Enterprise-level CORBA ORB from IONA Technologies
DCOM - the Distributed Component Object Model from Microsoft
RMI - the Remote Method Invocation Protocol from Sun Microsystems
ORBit - an open-source CORBA ORB used as middleware for GNOME
The ACE ORB - a CORBA implementation from the Distributed Object Computing (DOC) Group
omniORB - Free CORBA ORB
See also
References
Middleware |
https://en.wikipedia.org/wiki/UCPH%20Department%20of%20Computer%20Science | The UCPH Department of Computer Science () is a department in the Faculty of Science at the University of Copenhagen (UCPH). It is the longest established department of Computer Science in Denmark and was founded in 1970 by Turing Award winner Peter Naur. As of 2021, it employs 82 academic staff, 126 research staff and 38 support staff. It is consistently ranked the top Computer Science department in the Nordic countries, and in 2017 was placed 9th worldwide by the Academic Ranking of World Universities.
History
DIKU has its roots at the Institute for Mathematical Sciences, where the first computer was bought in 1963.
In 1969, Peter Naur became the first professor in Computer Science at the University of Copenhagen, and in 1970, DIKU was officially established as its own department.
Research
As of 2021, the department is home to 82 academic staff, 126 research staff and 38 support staff. Research is organised into seven research sections:
The Algorithms and Complexity Section, headed by Mikkel Thorup, who conduct basic algorithms research, as well as research on data structures and computational complexity
The Human‐Centered Computing Section, headed by Kasper Hornbæk, who research human-computer interaction, computer-supported cooperative work, as well as health informatics
The Image Section, headed by Kim Steenstrup Pedersen, who work on image processing including medical image processing, computer vision, physics-based animation and robotics.
The Machine Learning Section, headed by Christina Lioma, researching theoretical machine learning, information retrieval, and machine learning in biology
The Natural Language Processing Section, headed by Isabelle Augenstein, who conduct research on core natural language processing, natural language understanding, computational linguistics, as well as multimodal learning
The Programming Languages and Theory of Computation section, headed by Martin Elsman, researching programming languages, theory of computation, computer security, and approaches to financial transparency
The Software, Data, People & Society Section, headed by Thomas Troels Hildebrandt, who work on decentralised systems, data management systems, and process modelling
Teaching
The department offers programmes at BSc as well as MSc level, both in core computer science and in interdisciplinary subjects. Bachelor's programmes are 3-year programmes and mostly taught in Danish, whereas Master's programmes are 2-year programmes and taught in English. In 2020, DIKU enrolled 610 new Bachelor's students and 136 new Master's students.
As of 2021, DIKU offers the following study programmes:
Bachelor of Science (BSc) in Computer Science
Bachelor of Science (BSc) in Machine Learning and Data Science
Bachelor of Science (BSc) in Computer Science and Economy
Bachelor of Science (BSc) in Communication and IT
Bachelor of Science (BSc) in Health and IT
Master of Science (MSc) in Computer Science
Part-time Master of Science (MSc) in Computer Science |
https://en.wikipedia.org/wiki/EMac | The eMac (short for education Mac) is a discontinued all-in-one Mac desktop computer that was produced and designed by Apple Computer. Released in 2002, it was originally aimed at the education market but was later made available as a cheaper mass-market alternative to Apple's "Sunflower" iMac G4. The eMac was pulled from retail on October 12, 2005, and was again sold exclusively to educational institutions thereafter. It was discontinued by Apple on July 5, 2006, and replaced by a cheaper, low-end iMac G5 that, like the eMac, was exclusively sold to educational institutions.
The eMac design closely resembles the Snow iMac G3, though the eMac was only available in white, slightly larger in size, did not include a carry handle, and was heavier than the preceding G3. The unique shape of the computer was also similar to the 17-inch CRT Studio Display from 2000 (the last standalone CRT monitor Apple made). The eMac features a PowerPC 7450 (G4e) processor that is significantly faster than the previous-generation PowerPC 750 (G3) processor, as well as a 17-inch flat Trinitron CRT display; a CRT was chosen because an LCD of that size would have been too expensive for the education market.
Background
In 1998, Apple released the iMac G3, an all-in-one computer built around a cathode-ray tube display. The iMac was a major success for Apple, selling more than five million units; it also sold for as low as US$799, making it the most affordable Mac model Apple offered. In January 2002, Apple announced a successor to the iMac G3, the iMac G4. This iMac was built around a floating flat-panel display, and started at a higher price than the previous generation. While a few models of the iMac G3 remained at lower price points, they lacked power for educational tasks like video. Education customers made up nearly a quarter of Apple's sales, and with Windows-based computers eating into Apple's market share of the sector, Apple consulted with educators to build a cheaper G4-powered successor for the price-conscious market.
Apple announced the eMac on April 29, 2002, to be sold only to education markets. Apple had previously created education-only computer models, including the iMac predecessor Power Macintosh G3 All-In-One. The machine's CRT screen made it cheaper than the iMac G4 (the most expensive configuration was still cheaper than the cheapest iMac G4), and its bulk was intended to make it more resilient to wear and tear in a school setting than the fragile hinge and flat screen of the iMac.
Design and release
The eMac had a substantially similar design to the iMac G3, but featured a larger (16-inch viewable) flat-screen CRT monitor. The larger screen offered 40 percent more viewing area than the iMac. Thanks to the short-necked CRT, it took up the same space as the iMac (in fact, it was a few millimeters shorter) but was also heavier. The computer was powered by a PowerPC G4 processor much faster than the G3-powered iMacs. The machine's serial number and netw |
https://en.wikipedia.org/wiki/Data%20East | , also abbreviated as DECO, was a Japanese video game, pinball and electronic engineering company. The company was in operation from 1976 to 2003, and released 150 video game titles. Its main headquarters were located in Suginami, Tokyo. The American subsidiary, Data East USA, was headquartered in San Jose, California.
The majority of Data East's video games, its trademark and logo, are owned today by the mobile gaming company G-Mode, a subsidiary of Marvelous. A small number of Data East video games are owned by other companies, notably Paon DP.
History
Data East was founded on April 20, 1976, by Tokai University alumnus Tetsuo Fukuda. In July 1977, it developed and released its first arcade product, Jack Lot, a medal game based on blackjack for business use. This was followed in January 1978 by Super Break, its first actual video game. More than 15 arcade games were released by Data East in the 1970s.
Data East established its U.S. division in June 1979. In 1980, Data East published Astro Fighter, which became its first major arcade game title. While making games, Data East released a series of interchangeable systems compatible with its arcade games, notably the DECO Cassette System. Developed in 1979 and released in 1980, it was the first interchangeable arcade system board, and it inspired later arcade conversion systems such as Sega's Convert-a-Game in 1981 and the Nintendo VS. System in 1984. However, it soon became infamous among users due to technical problems, and Data East dropped it by 1985 in favor of dedicated arcade cabinets, which brought the company greater success over the next several years, starting with the hit title BurgerTime (1982).
In 1981, three staff members of Data East founded Technōs Japan, who then supported Data East for a while before becoming completely independent.
In 1983, the company moved its headquarters to a new building in Ogikubo, Suginami, where it stayed for the remainder of its lifespan. In March 1985, Data East Europe was established in London. Data East continued to release arcade video games over the next 15 years following the video game crash of 1983.
Data East distributed three major arcade hits in North America between 1984 and 1985: the fighting game Karate Champ (1984), the beat 'em up title Kung-Fu Master (1984), and the run and gun video game Commando (1985). These three titles catapulted Data East to the forefront of the amusement arcade industry in the mid-1980s. Karate Champ, Kung-Fu Master and Commando were the top three highest-grossing arcade games of 1985 in the United States. Karate Champ was the first successful fighting game, and one of the most influential to modern fighting game standards. Some of Data East's other most famous coin-op arcade games from its 1980s heyday include Heavy Barrel, Bad Dudes Vs. Dragon Ninja, Sly Spy, RoboCop, Bump 'n' Jump, Trio The Punch – Never Forget Me..., Karnov and Atomic Runner Chelnov.
Da |
https://en.wikipedia.org/wiki/ABC%20News | ABC News is the news division of the American television network ABC. Its flagship program is the daily evening newscast ABC World News Tonight with David Muir; other programs include morning news-talk show Good Morning America, Nightline, Primetime, and 20/20, and Sunday morning political affairs program This Week with George Stephanopoulos.
In addition to the division's television programs, ABC News has radio and digital outlets, including ABC News Radio and ABC News Live, plus various podcasts hosted by ABC News personalities.
History
Early years
ABC began in 1943 as the NBC Blue Network, a radio network that was spun off from NBC, as ordered by the Federal Communications Commission (FCC) in 1942. The reason for the order was to expand competition in radio broadcasting in the United States, specifically news and political broadcasting, and broaden the projected points of view. Only a few companies, such as NBC and CBS, dominated the radio market. NBC conducted the split voluntarily in case its appeal of the ruling was denied, and it was forced to split its two networks into separate companies.
Regular television news broadcasts on ABC began soon after the network signed on its initial owned-and-operated television station (WJZ-TV, now WABC-TV) and production center in New York City in August 1948. Broadcasts continued as the ABC network expanded nationwide. Until the early 1970s, ABC News programs and ABC in general consistently ranked third in viewership behind CBS and NBC news programs. ABC had fewer affiliate stations and a weaker prime-time programming slate to support the network's news operations compared to the two larger networks, each of which had established their radio news operations during the 1930s.
Roone Arledge
By the 1970s, the network had effectively turned around, with its prime-time entertainment programs achieving more substantial ratings and drawing in higher advertising revenue and profits for ABC overall. With the appointment of the president of ABC Sports, Roone Arledge as president of ABC News in 1977, ABC invested the resources to make it a significant source of news content. Arledge, known for experimenting with the broadcast "model", created many of ABC News' most popular and enduring programs, including 20/20, World News Tonight, This Week, Nightline, and Primetime Live. ABC News' longtime slogan, "More Americans get their news from ABC News than from any other source." (introduced in the late 1980s), was a claim referring to the number of people who watch, listen to and read ABC News content on television, radio and (eventually) the Internet, and not necessarily to the telecasts alone.
In June 1998, ABC News (which owned an 80% stake in the service), Nine Network and ITN sold their respective interests in Worldwide Television News to the Associated Press. Additionally, ABC News signed a multi-year content deal with AP for its affiliate video service, Associated Press Television News (APTV), while providing |
https://en.wikipedia.org/wiki/OpenOffice%20Basic | OpenOffice Basic (formerly known as StarOffice Basic or StarBasic or OOoBasic) is a dialect of the programming language BASIC that originated with the StarOffice office suite and spread through OpenOffice.org and derivatives such as Apache OpenOffice and LibreOffice (where it is known as LibreOffice Basic). The language is a domain-specific programming language which specifically serves the OpenOffice application suite.
Example
Although OpenOffice Basic is similar to other dialects of BASIC, such as Microsoft's Visual Basic for Applications (VBA), the application programming interface (API) is very different, as the example below of a macro illustrates. While there is a much easier way to obtain the "paragraph count" document property, the example shows the fundamental methods for accessing each paragraph in a text document, sequentially.
Sub ParaCount
    '
    ' Count number of paragraphs in a text document
    '
    Dim Doc As Object, Enum As Object, TextEl As Object, Count As Long
    Doc = ThisComponent
    ' Is this a text document?
    If Not Doc.SupportsService("com.sun.star.text.TextDocument") Then
        MsgBox "This macro must be run from a text document", 64, "Error"
        Exit Sub
    End If

    Count = 0
    ' Examine each component - paragraph or table?
    Enum = Doc.Text.CreateEnumeration
    While Enum.HasMoreElements
        TextEl = Enum.NextElement
        ' Is the component a paragraph?
        If TextEl.SupportsService("com.sun.star.text.Paragraph") Then
            Count = Count + 1
        End If
    Wend

    ' Display result
    MsgBox Count, 0, "Paragraph Count"
End Sub
See also
Comparison of office suites
Further reading
External links
OpenOffice.org BASIC Programming Guide wiki
LibreOffice Basic Help
Automating Open Office in VB.NET
Articles with example BASIC code
BASIC programming language family
https://en.wikipedia.org/wiki/French%20Institute%20for%20Research%20in%20Computer%20Science%20and%20Automation | The National Institute for Research in Digital Science and Technology (Inria) () is a French national research institution focusing on computer science and applied mathematics.
It was created under the name French Institute for Research in Computer Science and Automation (IRIA) () in 1967 at Rocquencourt near Paris, part of Plan Calcul. Its first site was the historical premises of SHAPE (central command of NATO military forces), which is still used as Inria's main headquarters. In 1980, IRIA became INRIA. Since 2011, it has been styled Inria.
Inria is a Public Scientific and Technical Research Establishment (EPST) under the double supervision of the French Ministry of National Education, Advanced Instruction and Research and the Ministry of Economy, Finance and Industry.
Administrative status
Inria has nine research centers distributed across France (in Bordeaux, Grenoble-Inovallée, Lille, Lyon, Nancy, Paris-Rocquencourt, Rennes, Saclay, and Sophia Antipolis) and one center abroad in Santiago de Chile, Chile. It also contributes to academic research teams outside of those centers.
Inria Rennes is part of the joint Institut de recherche en informatique et systèmes aléatoires (IRISA) with several other entities.
Before December 2007, the three centers of Bordeaux, Lille and Saclay formed a single research center called INRIA Futurs.
In October 2010, Inria, with Pierre and Marie Curie University (now Sorbonne University) and Paris Diderot University, started IRILL, a center for innovation and research on free software.
Inria employs 3,800 people. Among them are 1,300 researchers, 1,000 Ph.D. students and 500 postdoctoral researchers.
Research
Inria does both theoretical and applied research in computer science. In the process, it has produced many widely used programs, such as
Bigloo, a Scheme implementation
CADP, a tool box for the verification of asynchronous concurrent systems
Caml, a language from the ML family
Caml Light and OCaml implementations
Chorus, microkernel-based distributed operating system
CompCert, verified C compiler for PowerPC, ARM and x86_32
Contrail
Coq, a proof assistant
CYCLADES, a packet-switching network that pioneered the use of datagrams, functional layering, and the end-to-end strategy.
Eigen (C++ library)
Esterel, a programming language for State Automata
Geneauto — code-generation from model
Graphite, a research platform for computer graphics, 3D modeling and numerical geometry
Gudhi — A C++ library with Python interface for computational topology and topological data analysis
Le Lisp, a portable Lisp implementation
medInria, a medical image processing software, popularly used for MRI images.
GNU MPFR, an arbitrary-precision floating-point library
OpenViBE, a software platform dedicated to designing, testing and using brain–computer interfaces.
Pharo, an open-source Smalltalk derived from Squeak.
scikit-learn, a machine learning software package
Scilab, a numerical computation software package
SimGrid
SmartEiffel, |
https://en.wikipedia.org/wiki/Text%20mining | Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005) we can distinguish between three different perspectives of text mining: information extraction, data mining, and a knowledge discovery in databases (KDD) process. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via the application of natural language processing (NLP), different types of algorithms and analytical methods. An important phase of this process is the interpretation of the gathered information.
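As a toy example of one such building block, lexical analysis for a word-frequency distribution can be sketched in a few lines of Java (the tokenization rule, splitting on anything that is not a letter, is a simplification chosen for the example):

import java.util.*;

class WordFrequency {
    static Map<String, Integer> count(String text) {
        Map<String, Integer> freq = new HashMap<>();
        for (String token : text.toLowerCase().split("[^a-z]+"))
            if (!token.isEmpty()) freq.merge(token, 1, Integer::sum);
        return freq;
    }

    public static void main(String[] args) {
        // Prints counts such as to=2, be=2, or=1, not=1 (map order may vary).
        System.out.println(count("To be, or not to be"));
    }
}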
A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted. The document is the basic element when starting with text mining. Here, we define a document as a unit of textual data, which normally exists in many types of collections.
Text analytics
Text analytics describes a set of linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for business intelligence, exploratory data analysis, research, or investigation. The term is roughly synonymous with text mining; indeed, Ronen Feldman modified a 2000 description of "text mining" in 2004 to describe "text analytics". The latter term is now used more frequently in business settings while "text mining" is used in some of the earliest application areas, dating to the 1980s, notably life-sciences research and government intelligence.
The term text analytics also describes the application of text analytics to respond to business problems, whether independently or in conjunction with query and analysis of fielded, numerical data. |
https://en.wikipedia.org/wiki/Commodore%20BASIC | Commodore BASIC, also known as PET BASIC or CBM-BASIC, is the dialect of the BASIC programming language used in Commodore International's 8-bit home computer line, stretching from the PET (1977) to the Commodore 128 (1985).
The core is based on 6502 Microsoft BASIC, and as such it shares many characteristics with other 6502 BASICs of the time, such as Applesoft BASIC. Commodore licensed BASIC from Microsoft in 1977 on a "pay once, no royalties" basis after Jack Tramiel turned down Bill Gates' offer of a per-unit fee, stating, "I'm already married," and would pay no more than a one-time flat fee for a perpetual license.
The original PET version was very similar to the original Microsoft implementation with few modifications. BASIC 2.0 on the C64 was also similar, and was also seen on C128s (in C64 mode) and other models. Later PETs featured BASIC 4.0, similar to the original but adding a number of commands for working with floppy disks.
BASIC 3.5 was the first to really deviate, adding a number of commands for graphics and sound support on the C16 and Plus/4. BASIC 7.0 was included with the Commodore 128, and included structured programming commands from the Plus/4's BASIC 3.5, as well as keywords designed specifically to take advantage of the machine's new capabilities. A sprite editor and machine language monitor were added. The last, BASIC 10.0, was part of the unreleased Commodore 65.
History
Commodore took the source code of the flat-fee BASIC and further developed it internally for all their other 8-bit home computers. It was not until the Commodore 128 (with V7.0) that a Microsoft copyright notice was displayed. However, Microsoft had built an easter egg into the version 2 or "upgrade" Commodore BASIC that proved its provenance: typing the (obscure) command WAIT 6502, 1 would result in Microsoft! appearing on the screen. (The easter egg was well-obfuscated—the message did not show up in any disassembly of the interpreter.)
The popular Commodore 64 came with BASIC v2.0 in ROM even though the computer was released after the PET/CBM series that had version 4.0 because the 64 was intended as a home computer, while the PET/CBM series were targeted at business and educational use where their built-in programming language was presumed to be more heavily used. This saved manufacturing costs, as the V2 fit into smaller ROMs.
Technical details
Program editing
A convenient feature of Commodore's ROM-resident BASIC interpreter and KERNAL was the full-screen editor. Although Commodore keyboards only have two cursor keys which alternated direction when the shift key was held, the screen editor allowed users to enter direct commands or to input and edit program lines from anywhere on the screen. If a line was prefixed with a line number, it was tokenized and stored in program memory. Lines not beginning with a number were executed by pressing the RETURN key whenever the cursor happened to be on the line. This marked a significant upgrade in program entry interfaces compar |
https://en.wikipedia.org/wiki/IBM%201013 | The IBM 1013 Card Transmission Terminal was a device manufactured by IBM in 1961 which transmitted the data held on 80-column cards to a remote computer or another 1013.
The nominal speed was 100 cards per minute, but transfers could be faster if the terminal was programmed to send or receive only a portion of each card when not all 80 columns were used. It required a full-duplex circuit to operate, but at any given time it could only transmit or receive.
References
External links
"IBM 1013 Card Transmission Terminal / Communicating Reader-Punch" at Computer History Museum
1013
https://en.wikipedia.org/wiki/F-test | An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact "F-tests" mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor, in honour of Ronald Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.
Common examples
Common examples of the use of F-tests include the study of the following cases:
The hypothesis that the means of a given set of normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known F-test, and plays an important role in the analysis of variance (ANOVA).
The hypothesis that a proposed regression model fits the data well. See Lack-of-fit sum of squares.
The hypothesis that a data set in a regression analysis follows the simpler of two proposed linear models that are nested within each other.
In addition, some statistical procedures, such as Scheffé's method for multiple comparisons adjustment in linear models, also use F-tests.
F-test of the equality of two variances
The F-test is sensitive to non-normality. In the analysis of variance (ANOVA), alternative tests include Levene's test, Bartlett's test, and the Brown–Forsythe test. However, when any of these tests are conducted to test the underlying assumption of homoscedasticity (i.e. homogeneity of variance), as a preliminary step to testing for mean effects, there is an increase in the experiment-wise Type I error rate.
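For reference, the test statistic itself is the ratio of the two sample variances (a standard textbook formula, stated here because the article's displayed mathematics did not survive extraction):

$$F = \frac{s_X^2}{s_Y^2}$$

Under the null hypothesis of equal variances, with independent normal samples of sizes $n_X$ and $n_Y$, this ratio follows an F-distribution with $n_X - 1$ and $n_Y - 1$ degrees of freedom.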
Formula and calculation
Most F-tests arise by considering a decomposition of the variability in a collection of data in terms of sums of squares. The test statistic in an F-test is the ratio of two scaled sums of squares reflecting different sources of variability. These sums of squares are constructed so that the statistic tends to be greater when the null hypothesis is not true. In order for the statistic to follow the F-distribution under the null hypothesis, the sums of squares should be statistically independent, and each should follow a scaled χ²-distribution. The latter condition is guaranteed if the data values are independent and normally distributed with a common variance.
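As a concrete instance, the one-way ANOVA F-statistic for K groups with group sizes $n_i$ and N observations in total takes this form (a standard result, included for illustration):

$$F = \frac{\sum_{i=1}^{K} n_i\,(\bar{Y}_{i\cdot} - \bar{Y})^2 / (K-1)}{\sum_{i=1}^{K}\sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_{i\cdot})^2 / (N-K)}$$

The numerator measures between-group variability and the denominator within-group variability; each is a scaled sum of squares of the kind described above.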
Multiple-comparison ANOVA problems
The F-test in one-way analysis of variance (ANOVA) is used to assess whether the expected values of a quantitative variable within several pre-defined groups differ from each other. For example, suppose that a medical trial compares four treatments. The ANOVA F-test can be used to assess whether any of the treatments are on average superior, or inferior, to the others versus the null hypothesis that all four treatments yield the same mean response. This is an example of an "omnibus" test, meaning that a single test is performed to detect any of |
https://en.wikipedia.org/wiki/Pulse%20computation | Pulse computation is a hybrid of digital & analog computation that uses aperiodic electrical spikes, as opposed to the periodic voltages in a digital computer or the continuously varying voltages in an analog computer. Pulse streams are unclocked, so they can arrive at arbitrary times and can be generated by analog processes, although each spike is allocated a binary value, as it would be in a digital computer.
Pulse computation is primarily studied as part of the field of neural networks. The processing unit in such a network is called a "neuron".
References
Computational neuroscience |
https://en.wikipedia.org/wiki/Strict%20function | In computer science and computer programming, a function f is said to be strict if, when applied to a non-terminating expression, it also fails to terminate. A strict function in the denotational semantics of programming languages is a function f where f(⊥) = ⊥. The entity ⊥, called bottom, denotes an expression that does not return a normal value, either because it loops endlessly or because it aborts due to an error such as division by zero. A function that is not strict is called non-strict. A strict programming language is one in which user-defined functions are always strict.
Intuitively, non-strict functions correspond to control structures. Operationally, a strict function is one that always evaluates its argument; a non-strict function is one that might not evaluate some of its arguments. Functions having more than one parameter can be strict or non-strict in each parameter independently, as well as jointly strict in several parameters simultaneously.
As an example, the if-then-else expression of many programming languages, called ?: in languages inspired by C, may be thought of as a function of three parameters. This function is strict in its first parameter, since the function must know whether its first argument evaluates to true or to false before it can return; but it is non-strict in its second parameter, because (for example) if(false,⊥,1) = 1, as well as non-strict in its third parameter, because (for example) if(true,2,⊥) = 2. However, it is jointly strict in its second and third parameters, since if(true,⊥,⊥) = ⊥ and if(false,⊥,⊥) = ⊥.
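A minimal Python sketch of this distinction, with thunks (zero-argument functions) standing in for unevaluated arguments; the function names are invented for the example:

def strict_if(cond, then_val, else_val):
    # Strict in every parameter: the caller has already evaluated both branches.
    return then_val if cond else else_val

def nonstrict_if(cond, then_thunk, else_thunk):
    # Non-strict in the branches: only the selected thunk is ever forced.
    return then_thunk() if cond else else_thunk()

def bottom():
    # Stands in for a non-terminating argument (an error, rather than a real loop).
    raise RuntimeError("does not terminate")

print(nonstrict_if(True, lambda: 2, bottom))   # prints 2; bottom is never forced
# strict_if(True, 2, bottom()) would fail before strict_if is even entered.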
In a non-strict functional programming language, strictness analysis refers to any algorithm used to prove the strictness of a function with respect to one or more of its arguments. Such functions can be compiled to a more efficient calling convention, such as call by value, without changing the meaning of the enclosing program.
See also
Eager evaluation
Lazy evaluation
Short-circuit evaluation
References
Formal methods
Denotational semantics
Evaluation strategy |
https://en.wikipedia.org/wiki/Strict%20programming%20language | A strict programming language is a programming language which employs a strict programming paradigm, allowing only strict functions (functions whose parameters must be evaluated completely before they may be called) to be defined by the user. A non-strict programming language allows the user to define non-strict functions, and hence may allow lazy evaluation.
Examples
Nearly all programming languages in common use today are strict. Examples include C#, Java, Perl (all versions, i.e. through version 5 and version 7), Python, Ruby, Common Lisp, and ML.
Some strict programming languages include features that mimic laziness. Raku, formerly known as Perl 6, has lazy lists. Python has generator functions. Julia provides a macro system to build non-strict functions, as does Scheme.
Examples for non-strict languages are Haskell, R, Miranda, and Clean.
Explanation
In most non-strict languages the non-strictness extends to data constructors. This allows conceptually infinite data structures (such as the list of all prime numbers) to be manipulated in the same way as ordinary finite data structures. It also allows for the use of very large but finite data structures such as the complete game tree of chess.
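In a strict language with generators, such conceptually infinite structures can be approximated; a minimal Python sketch of the list of all primes (illustrative trial division, not an efficient sieve):

import itertools

def primes():
    # Yield prime numbers indefinitely; only the demanded prefix is ever computed.
    found = []
    candidate = 2
    while True:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

print(list(itertools.islice(primes(), 10)))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]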
Non-strictness has several disadvantages which have prevented widespread adoption:
Because of the uncertainty regarding if and when expressions will be evaluated, non-strict languages generally must be purely functional to be useful.
All hardware architectures in common use are optimized for strict languages, so the best compilers for non-strict languages produce slower code than the best compilers for strict languages.
Space complexity of non-strict programs is difficult to understand and predict.
Strict programming languages are often associated with eager evaluation, and non-strict languages with lazy evaluation, but other evaluation strategies are possible in each case. The terms "eager programming language" and "lazy programming language" are often used as synonyms for "strict programming language" and "non-strict programming language" respectively.
In many strict languages, some advantages of non-strict functions can be obtained through the use of macros or thunks.
Citations
References
Programming paradigms
Evaluation strategy |
https://en.wikipedia.org/wiki/Viewpoint | Viewpoint may refer to:
Scenic viewpoint, a high place where people can gather to view scenery
In computing
Viewpoint model, a computer science technique for making complex systems more comprehensible to human engineers
Viewpoint Corporation, a digital media company known for its subsidiary Fotomat
Viewpoint Media Player, a software product made by Viewpoint Corporation, and the associated file format
ViewPoint, the operating system of the Xerox Daybreak computer
In arts and entertainment
Viewpoints, an acting technique based on improvisation
Camera angle, in photography, filmmaking, and other visual arts
Games
Viewpoint (video game), shooter video game
Viewpoint (card game), dedicated deck card game
Television
Viewpoint (Australian TV program), a 2012–2017 Australian current affairs television program broadcast on Sky News Australia
Viewpoint (British TV series), a 2021 British drama television series
Viewpoint (Canadian TV program), a 1957–1976 Canadian current affairs television program which aired on CBC
Viewpoint (Philippine TV program), a 1984–1994 Philippine late night public affairs talk show and television program that was broadcast on GMA network
Viewpoint (talk show), an American television talk show, mostly hosted by Eliot Spitzer, that aired on Current TV
Other uses
ViewPoint, a skyscraper in the American city of Atlanta
Viewpoint School, a K-12 school in Calabasas, California, US
Viewpoints Research Institute, a nonprofit organization focused on education, systems research, and personal computing
See also
Point of view (disambiguation)
View Point, a land form on Antarctica |
https://en.wikipedia.org/wiki/Rendezvous%20%28Plan%209%29 | Rendezvous is a data synchronization mechanism in Plan 9 from Bell Labs. It is a system call that allows two processes to exchange a single datum while synchronizing.
The rendezvous call takes a tag and a value as its arguments. The tag is typically an address in memory shared by both processes. Calling rendezvous causes a process to sleep until a second rendezvous call with a matching tag occurs. Then, the values are exchanged and both processes are awakened.
More complex synchronization mechanisms can be created from this primitive operation. See also mutual exclusion.
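A minimal sketch of these semantics in Python using a condition variable; the class and method names are hypothetical and only imitate the behaviour described above, not the actual Plan 9 interface:

import threading

class Rendezvous:
    def __init__(self):
        self._cond = threading.Condition()
        self._waiting = {}   # tag -> [value, exchanged flag]

    def rendezvous(self, tag, value):
        with self._cond:
            if tag in self._waiting:
                # A partner is already asleep on this tag: swap values and wake it.
                slot = self._waiting.pop(tag)
                partner_value = slot[0]
                slot[0] = value
                slot[1] = True
                self._cond.notify_all()
                return partner_value
            # No partner yet: record our value and sleep until one arrives.
            slot = [value, False]
            self._waiting[tag] = slot
            while not slot[1]:
                self._cond.wait()
            return slot[0]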
See also
Synchronous rendezvous
Communicating sequential processes
References
External links
Process Sleep and Wakeup on a Shared-memory Multiprocessor by Rob Pike, Dave Presotto, Ken Thompson and Gerard Holzmann.
Plan 9 from Bell Labs
Parallel computing
Inter-process communication |
https://en.wikipedia.org/wiki/Zero-configuration%20networking | Zero-configuration networking (zeroconf) is a set of technologies that automatically creates a usable computer network based on the Internet Protocol Suite (TCP/IP) when computers or network peripherals are interconnected. It does not require manual operator intervention or special configuration servers. Without zeroconf, a network administrator must set up network services, such as Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS), or configure each computer's network settings manually.
Zeroconf is built on three core technologies: automatic assignment of numeric network addresses for networked devices, automatic distribution and resolution of computer hostnames, and automatic location of network services, such as printing devices.
Background
Computer networks use numeric network addresses to identify communications endpoints in a network of participating devices. This is similar to the telephone network which assigns a string of digits to identify each telephone. In modern networking protocols, information to be transmitted is divided into a series of network packets. Every packet contains the source and destination addresses for the transmission. Network routers examine these addresses to determine the best network path in forwarding the data packet at each step toward its destination.
Similarly to telephones being labeled with their telephone number, it was a common practice in early networks to attach an address label to networked devices. The dynamic nature of modern networks, especially residential networks in which devices are powered up only when needed, calls for dynamic address assignment mechanisms that do not require user involvement for initialization and management. These systems automatically give themselves common names chosen either by the equipment manufacturer, such as a brand and model number, or chosen by users for identifying their equipment. The names and addresses are then automatically entered into a directory service.
Early computer networking was built upon technologies of the telecommunications networks and thus protocols tended to fall into two groups: those intended to connect local devices into a local area network (LAN), and those intended primarily for long-distance communications. The latter wide area network (WAN) systems tended to have centralized setup, where a network administrator would manually assign addresses and names. LAN systems tended to provide more automation of these tasks so that new equipment could be added to a LAN with a minimum of operator and administrator intervention.
An early example of a zero-configuration LAN system is AppleTalk, a protocol introduced by Apple Inc. for the early Macintosh computers in the 1980s. Macs, as well as other devices supporting the protocol, could be added to the network by simply plugging them in; all further configuration was automated. Network addresses were automatically selected by each device using a protocol known as AppleTalk Addr |
https://en.wikipedia.org/wiki/Speculative%20execution | Speculative execution is an optimization technique where a computer system performs some task that may not be needed. Work is done before it is known whether it is actually needed, so as to prevent a delay that would have to be incurred by doing the work after it is known that it is needed. If it turns out the work was not needed after all, most changes made by the work are reverted and the results are ignored.
The objective is to provide more concurrency if extra resources are available. This approach is employed in a variety of areas, including branch prediction in pipelined processors, value prediction for exploiting value locality, prefetching memory and files, and optimistic concurrency control in database systems.
Speculative multithreading is a special case of speculative execution.
Overview
Modern pipelined microprocessors use speculative execution to reduce the cost of conditional branch instructions using schemes that predict the execution path of a program based on the history of branch executions. In order to improve performance and utilization of computer resources, instructions can be scheduled at a time when it has not yet been determined that the instructions will need to be executed, ahead of a branch.
Variants
Speculative computation was a related earlier concept.
Eager execution
Eager execution is a form of speculative execution where both sides of the conditional branch are executed; however, the results are committed only if the predicate is true. With unlimited resources, eager execution (also known as oracle execution) would in theory provide the same performance as perfect branch prediction. With limited resources, eager execution should be employed carefully, since the number of resources needed grows exponentially with each level of branching executed eagerly.
Predictive execution
Predictive execution is a form of speculative execution where some outcome is predicted and execution proceeds along the predicted path until the actual result is known. If the prediction is true, the predicted execution is allowed to commit; however, if there is a misprediction, execution has to be unrolled and re-executed. Common forms of this include branch predictors and memory dependence prediction. A generalized form is sometimes referred to as value prediction.
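One textbook predictor of this kind is a two-bit saturating counter kept per branch; the Python sketch below shows only the prediction bookkeeping (all names are illustrative, and real hardware implements this in silicon, not software):

class TwoBitPredictor:
    # Counter values 0-1 predict "not taken"; 2-3 predict "taken".
    def __init__(self):
        self.counters = {}   # branch address -> saturating counter

    def predict(self, addr):
        return self.counters.get(addr, 0) >= 2

    def update(self, addr, taken):
        c = self.counters.get(addr, 0)
        # Saturation means a single anomalous outcome cannot flip a stable prediction.
        self.counters[addr] = min(3, c + 1) if taken else max(0, c - 1)

predictor = TwoBitPredictor()
for outcome in [True, True, False, True]:    # observed history of one branch
    guess = predictor.predict(0x400)
    predictor.update(0x400, outcome)
    print(guess, outcome)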
Runahead
Related concepts
Lazy execution
Lazy execution is the opposite of eager execution, and does not involve speculation. The incorporation of speculative execution into implementations of the Haskell programming language, a lazy language, is a current research topic. Eager Haskell, a variant of the language, is designed around the idea of speculative execution. A 2003 PhD thesis made GHC support a kind of speculative execution, called optimistic execution, with an abort mechanism to back out in case of a bad choice. It was deemed too complicated.
Security vulnerabilities
Starting in 2017, a series of security vulnerabilities were found in the implementations of sp |
https://en.wikipedia.org/wiki/Verisign | Verisign Inc. is an American company based in Reston, Virginia, United States, that operates a diverse array of network infrastructure, including two of the Internet's thirteen root nameservers, the authoritative registry for the .com, .net, and .name generic top-level domains and the .cc and .tv country-code top-level domains, and the back-end systems for several sponsored top-level domains.
In 2010, Verisign sold its authentication business unit – which included Secure Sockets Layer (SSL) certificate, public key infrastructure (PKI), Verisign Trust Seal, and Verisign Identity Protection (VIP) services – to Symantec for $1.28 billion. The deal capped a multi-year effort by Verisign to narrow its focus to its core infrastructure and security business units. Symantec later sold this unit to DigiCert in 2017. On October 25, 2018, NeuStar, Inc. acquired VeriSign’s Security Service Customer Contracts. The acquisition effectively transferred Verisign Inc.’s Distributed Denial of Service (DDoS) protection, Managed DNS, DNS Firewall and fee-based Recursive DNS services customer contracts.
Verisign's former chief financial officer (CFO) Brian Robins announced in August 2010 that the company would move from its original location of Mountain View, California, to Dulles in Northern Virginia by 2011 due to 95% of the company's business being on the East Coast. The company is incorporated in Delaware.
History
Verisign was founded in 1995 as a spin-off of the RSA Security certification services business. The new company received licenses to key cryptographic patents held by RSA (set to expire in 2000) and a time-limited non-compete agreement. The new company served as a certificate authority (CA) and its initial mission was "providing trust for the Internet and Electronic Commerce through our Digital Authentication services and products". Prior to selling its certificate business to Symantec in 2010, Verisign had more than 3 million certificates in operation for everything from military to financial services and retail applications, making it the largest CA in the world.
In 2000, Verisign acquired Network Solutions for $21 billion; Network Solutions operated the .com, .net, and .org TLDs under agreements with the Internet Corporation for Assigned Names and Numbers (ICANN) and the United States Department of Commerce. Those core registry functions formed the basis for Verisign's naming division, which by then had become the company's largest and most significant business unit. In 2002, Verisign was charged with violation of the Securities Exchange Act. Verisign divested the Network Solutions retail (domain name registrar) business in 2003 for $100 million, retaining the domain name registry (wholesale) function as its core Internet addressing business.
For the year ended December 31, 2010, Verisign reported revenue of $681 million, up 10% from $616 million in 2009. Verisign operates two businesses, Naming Services, which encompasses the operation of top-level domains and critical Internet infrastructu |
https://en.wikipedia.org/wiki/Smart%20pointer | In computer science, a smart pointer is an abstract data type that simulates a pointer while providing added features, such as automatic memory management or bounds checking. Such features are intended to reduce bugs caused by the misuse of pointers, while retaining efficiency. Smart pointers typically keep track of the memory they point to, and may also be used to manage other resources, such as network connections and file handles. Smart pointers were first popularized in the programming language C++ during the first half of the 1990s as rebuttal to criticisms of C++'s lack of automatic garbage collection.
Pointer misuse can be a major source of bugs. Smart pointers prevent most situations of memory leaks by making the memory deallocation automatic. More generally, they make object destruction automatic: an object controlled by a smart pointer is automatically destroyed (finalized and then deallocated) when the last (or only) owner of an object is destroyed, for example because the owner is a local variable, and execution leaves the variable's scope. Smart pointers also eliminate dangling pointers by postponing destruction until an object is no longer in use.
If a language supports automatic garbage collection (for example, Java or C#), then smart pointers are unneeded for reclaiming and safety aspects of memory management, yet are useful for other purposes, such as cache data structure residence management and resource management of objects such as file handles or network sockets.
Several types of smart pointers exist. Some work with reference counting, others by assigning ownership of an object to one pointer.
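A hedged Python sketch of the reference-counting variety (Python manages memory itself, so the example only simulates what a C++-style shared pointer does; every name here is illustrative):

class RefCounted:
    # All copies share one [resource, count, destroy] box; the resource is
    # destroyed exactly once, when the count reaches zero.
    def __init__(self, resource, destroy):
        self._box = [resource, 1, destroy]

    def clone(self):
        self._box[1] += 1
        copy = RefCounted.__new__(RefCounted)
        copy._box = self._box
        return copy

    def release(self):
        self._box[1] -= 1
        if self._box[1] == 0:
            self._box[2](self._box[0])   # last owner: finalize the resource
            self._box[0] = None

p = RefCounted("file-handle", destroy=lambda r: print("closing", r))
q = p.clone()
p.release()    # one owner remains; nothing happens
q.release()    # prints "closing file-handle"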
History
Even though C++ popularized the concept of smart pointers, especially the reference-counted variety, the immediate predecessor of one of the languages that inspired C++'s design had reference-counted references built into the language. C++ was inspired in part by Simula67. Simula67's ancestor was Simula I. Insofar as Simula I's element is analogous to C++'s pointer without null, and insofar as Simula I's process with a dummy-statement as its activity body is analogous to C++'s struct (which itself is analogous to C. A. R. Hoare's record in then-contemporary 1960s work), Simula I had reference counted elements (i.e., pointer-expressions that house indirection) to processes (i.e., records) no later than September 1965, as shown in the quoted paragraphs below.
Processes can be referenced individually. Physically, a process reference is a pointer to an area of memory containing the data local to the process and some additional information defining its current state of execution. However, for reasons stated in the Section 2.2 process references are always indirect, through items called elements. Formally a reference to a process is the value of an expression of type element.
…
element values can be stored and retrieved by assignments and references to element variables and by other means.
The language contains a mechanism for |
https://en.wikipedia.org/wiki/Automatic%20label%20placement | Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels.
The typical features depicted on a geographic map are line features (e.g. roads), area features (countries, parcels, forests, lakes, etc.), and point features (villages, cities, etc.). In addition to depicting the map's features in a geographically accurate manner, it is of critical importance to place the names that identify these features, in a way that the reader knows instantly which name describes which feature.
Automatic text placement is one of the most difficult, complex, and time-consuming problems in mapmaking and GIS (Geographic Information System). Other kinds of computer-generated graphics – like charts, graphs etc. – require good placement of labels as well, not to mention engineering drawings, and professional programs which produce these drawings and charts, like spreadsheets (e.g. Microsoft Excel) or computational software programs (e.g. Mathematica).
Naively placed labels overlap excessively, resulting in a map that is difficult or even impossible to read. Therefore, a GIS must allow a few possible placements of each label, and often also an option of resizing, rotating, or even removing (suppressing) the label. Then, it selects a set of placements that results in the least overlap, and has other desirable properties. For all but the most trivial setups, the problem is NP-hard.
Rule-based algorithms
Rule-based algorithms try to emulate an experienced human cartographer. Over centuries, cartographers have developed the art of mapmaking and label placement. For example, an experienced cartographer repeats road names several times for long roads, instead of placing them once, or in the case of Ocean City depicted by a point very close to the shore, the cartographer would place the label "Ocean City" over the land to emphasize that it is a coastal town.
Cartographers work based on accepted conventions and rules, such as those itemized by Swiss cartographer Eduard Imhof in 1962. For example, New York City, Vienna, Berlin, Paris, or Tokyo must show up on country maps because they are high-priority labels. Once those are placed, the cartographer places the next most important class of labels, for example major roads, rivers, and other large cities. In every step they ensure that (1) the text is placed in a way that the reader easily associates it with the feature, and (2) the label does not overlap with those already placed on the map.
However, if a particular label placement problem can be formulated as a mathematical optimization problem, using mathematics to solve the problem is usually better than using a rule-based algorithm.
Local optimization algorithms
The simplest greedy algorithm places consecutive labels on the map in positions that result in minimal overlap of labels. Its results are not perfect even fo |
https://en.wikipedia.org/wiki/Index%20notation | In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing a computer program.
In mathematics
It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases are vectors (1d arrays) and matrices (2d arrays).
The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation of tensor operations). See the main article for further details.
One-dimensional arrays (vectors)
A vector is treated as an array of numbers and may be written as a row vector or a column vector (whichever is used depends on convenience or context), for example a = (a1, a2, …, an).
Index notation allows indication of the elements of the array by simply writing ai, where the index i is known to run from 1 to n because the vector has n dimensions.
For example, given such a vector a = (a1, a2, …, an), individual entries such as a1 (the first), a2 (the second), and an (the last) are picked out by their subscripts.
The notation can be applied to vectors in mathematics and physics. The following vector equation
a + b = c
can also be written in terms of the elements of the vectors (also called components), that is
ai + bi = ci
where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each have n elements, meaning i = 1,2,…n, then the equations are explicitly
Hence, index notation serves as an efficient shorthand for representing the general structure of an equation while remaining applicable to the individual components.
Two-dimensional arrays
More than one index is used to describe arrays of numbers, in two or more dimensions, such as the elements of a matrix, (see also image to right);
The entry of a matrix A is written using two indices, say i and j, with or without commas to separate the indices: aij or ai,j, where the first subscript is the row number and the second is the column number. Juxtaposition is also used as notation for multiplication; this may be a source of confusion. For example, if
A is a given matrix with entries aij, then some entries are a11 (row 1, column 1), a12 (row 1, column 2), and a21 (row 2, column 1).
For indices larger than 9, the comma-based notation may be preferable (e.g., a3,12 instead of a312).
Matrix equations are written similarly to vector equations, such as
A + B = C
in terms of the elements of the matrices (also called components), that is
aij + bij = cij
for all values of i and j. Again this expression represents a set of equations, one for each index. If the matrices each have m rows and n columns, meaning i = 1, 2, …, m and j = 1, 2, …, n, then there are mn equations.
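The componentwise reading translates directly into code; a small Python sketch of the matrix equation A + B = C written out index by index (the concrete numbers are invented for the example):

A = [[1, 2, 3], [4, 5, 6]]            # m = 2 rows, n = 3 columns
B = [[10, 20, 30], [40, 50, 60]]

m, n = len(A), len(A[0])
C = [[0] * n for _ in range(m)]
for i in range(m):                    # one equation per index pair (i, j)
    for j in range(n):
        C[i][j] = A[i][j] + B[i][j]   # cij = aij + bij

print(C)   # [[11, 22, 33], [44, 55, 66]]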
Multi-dimensional arrays
The notation allows a clear generalization to multi-dimensional arrays of elements: tensors. For example,
repres |
https://en.wikipedia.org/wiki/Waterkeeper%20Alliance | Waterkeeper Alliance is a worldwide network of environmental organizations founded in 1999 that work to protect bodies of water around the United States and the world. By December 2019, the group said it had grown to 350 members in 46 countries, with half the membership outside the U.S.; the alliance had added 200 groups in the last five years.
In 1983, the founding Riverkeeper organization formed around the Hudson River in New York, in response to the untreated sewage and industrial water pollution that was degrading water quality in the river. Today, Waterkeeper Alliance, based in Manhattan, unites all Waterkeeper organizations. The group helps to coordinate and cover issues affecting Waterkeepers that work to protect rivers, lakes, bays, sounds, and other water bodies around the world. In the United States, only 52 of the 180 groups cover watersheds west of the Mississippi River.
In June 2019, the group announced a project with online travel website Culture Trip called "Waterkeeper Warriors." They named 20 activists who "represent the impact one person can make on an issue that affects us all."
Notes
External links
Water organizations in the United States
Environmental organizations based in New York City
Water resource policy
Environmental organizations established in 1999
Water conservation in the United States
1999 establishments in New York (state)
Robert F. Kennedy Jr. |
https://en.wikipedia.org/wiki/Programmed%20input%E2%80%93output | Programmed input–output (also programmable input/output, programmed input/output, programmed I/O, PIO) is a method of data transmission, via input/output (I/O), between a central processing unit (CPU) and a peripheral device, such as a Parallel ATA storage device. Each data item transfer is initiated by an instruction in the program, involving the CPU for every transaction. In contrast, in direct memory access (DMA) operations, the CPU is uninvolved in the data transfer.
The term can refer to either memory-mapped I/O (MMIO) or port-mapped I/O (PMIO). PMIO refers to transfers using a special address space outside of normal memory, usually accessed with dedicated instructions, such as IN and OUT in x86 architectures. MMIO refers to transfers to I/O devices that are mapped into the normal address space available to the program. PMIO was very useful for early microprocessors with small address spaces, since the scarce address space was not consumed by the I/O devices.
The best known example of a PC device that uses programmed I/O is the Parallel AT Attachment (PATA) interface; however, the AT Attachment interface can also be operated in any of several DMA modes. Many older devices in a PC also use PIO, including legacy serial ports, legacy parallel ports when not in ECP mode, keyboard and mouse PS/2 ports, legacy MIDI and joystick ports, the interval timer, and older network interfaces.
PIO mode in the ATA interface
The PIO interface is grouped into different modes that correspond to different transfer rates. The electrical signaling among the different modes is similar — only the cycle time between transactions is reduced in order to achieve a higher transfer rate. All ATA devices support the slowest mode — Mode 0. By accessing the information registers (using Mode 0) on an ATA drive, the CPU is able to determine the maximum transfer rate for the device and configure the ATA controller for optimal performance.
The PIO modes require a great deal of CPU overhead to configure a data transaction and transfer the data. Because of this inefficiency, the DMA (and eventually Ultra Direct Memory Access, UDMA) interface was created to increase performance. The simple digital logic needed to implement a PIO transfer still makes this transfer method useful today, especially if high transfer rates are unneeded as in embedded systems, or with field-programmable gate array (FPGA) chips, where PIO mode can be used with no significant performance loss.
Two additional advanced timing modes have been defined in the CompactFlash specification 2.0. Those are PIO modes 5 and 6. They are specific to CompactFlash.
PIO Mode 5
A PIO Mode 5 was proposed with operation at 22 MB/s, but was never implemented on hard disks because CPUs of the time would have been crippled waiting for the hard disk at the proposed PIO 5 timings, and the DMA standard ultimately obviated it. While no hard disk drive was ever manufactured to support this mode, some motherboard manufacturers pree |
https://en.wikipedia.org/wiki/Delta%20encoding | Delta encoding is a way of storing or transmitting data in the form of differences (deltas) between sequential data rather than complete files; more generally this is known as data differencing. Delta encoding is sometimes called delta compression, particularly where archival histories of changes are required (e.g., in revision control software).
The differences are recorded in discrete files called "deltas" or "diffs". In situations where differences are small – for example, the change of a few words in a large document or the change of a few records in a large table – delta encoding greatly reduces data redundancy. Collections of unique deltas are substantially more space-efficient than their non-encoded equivalents.
From a logical point of view the difference between two data values is the information required to obtain one value from the other – see relative entropy. The difference between identical values (under some equivalence) is often called 0 or the neutral element.
Simple example
Perhaps the simplest example is storing values of bytes as differences (deltas) between sequential values, rather than the values themselves. So, instead of 2, 4, 6, 9, 7, we would store 2, 2, 2, 3, −2. This reduces the variance (range) of the values when neighbor samples are correlated, enabling a lower bit usage for the same data. The IFF 8SVX sound format applies this encoding to raw sound data before compressing it. However, not all 8-bit sound samples compress better when delta encoded, and the benefit is smaller still for 16-bit and higher-resolution samples. Compression algorithms therefore often apply delta encoding only when it improves the result. In video compression, by contrast, delta frames can considerably reduce frame size and are used in virtually every video compression codec.
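A minimal Python sketch of this byte-level scheme, reproducing the sequence from the example above:

def delta_encode(values):
    # Keep the first value; store every later value as its difference from the previous one.
    return values[:1] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    # A running sum inverts the encoding.
    out, total = [], 0
    for d in deltas:
        total += d
        out.append(total)
    return out

print(delta_encode([2, 4, 6, 9, 7]))    # [2, 2, 2, 3, -2]
print(delta_decode([2, 2, 2, 3, -2]))   # [2, 4, 6, 9, 7]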
Definition
A delta can be defined in two ways: the symmetric delta and the directed delta. A symmetric delta can be expressed as
Δ(v1, v2) = (v1 \ v2) ∪ (v2 \ v1),
where v1 and v2 represent two versions.
A directed delta, also called a change, is a sequence of (elementary) change operations which, when applied to one version v1, yields another version v2 (note the correspondence to transaction logs in databases). In computer implementations, they typically take the form of a language with two commands: copy data from v1 and write literal data.
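A sketch of applying such a directed delta in Python, assuming a toy two-command encoding of ("copy", offset, length) and ("literal", data) operations; this tuple format is invented for illustration and is not any particular delta format:

def apply_delta(v1, delta):
    out = []
    for command in delta:
        if command[0] == "copy":
            _, offset, length = command
            out.append(v1[offset:offset + length])   # reuse a span of the old version
        else:
            out.append(command[1])                   # write new literal data
    return "".join(out)

v1 = "the quick brown fox"
delta = [("copy", 0, 10), ("literal", "red"), ("copy", 15, 4)]
print(apply_delta(v1, delta))   # "the quick red fox"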
Variants
A variation of delta encoding which encodes differences between the prefixes or suffixes of strings is called incremental encoding. It is particularly effective for sorted lists with small differences between strings, such as a list of words from a dictionary.
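A brief Python sketch of incremental encoding for a sorted word list, storing each word as a (shared-prefix length, suffix) pair:

import os

def incremental_encode(words):
    encoded, previous = [], ""
    for word in words:
        shared = len(os.path.commonprefix([previous, word]))
        encoded.append((shared, word[shared:]))   # reuse the prefix shared with the previous word
        previous = word
    return encoded

print(incremental_encode(["car", "cart", "cat", "dog"]))
# [(0, 'car'), (3, 't'), (2, 't'), (0, 'dog')]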
Implementation issues
The nature of the data to be encoded influences the effectiveness of a particular compression algorithm.
Delta encoding performs best when data has small or constant variation; for an unsorted data set, there may be little to no compression possible with this method.
In delta encoded transmission over a network where only a single copy of the file is availabl |
https://en.wikipedia.org/wiki/North%20American%20Network%20Operators%27%20Group | The North American Network Operators' Group (NANOG) is an educational and operational forum for the coordination and dissemination of technical information related to backbone/enterprise networking technologies and operational practices. It runs meetings, talks, surveys, and an influential mailing list for Internet service providers. The main method of communication is the NANOG mailing list (known informally as nanog-l), a free mailing list to which anyone may subscribe or post.
Meetings
NANOG meetings are held three times each year, and include presentations, tutorials, and BOFs (Birds of a Feather meetings). There are also 'lightning talks', where speakers can submit brief presentations (no longer than 10 minutes) on very short notice. The meetings are informal, and membership is open. Conference participants typically include senior engineering staff from tier 1 and tier 2 ISPs. Participating researchers present short summaries of their work for operator feedback. In addition to the conferences, NANOG On the Road events offer single-day professional development and networking events touching on current NANOG discussion topics.
Organization
NANOG meetings are organized by NewNOG, Inc., a Delaware non-profit organization, which took over responsibility for NANOG from the Merit Network in February 2011. Meetings are hosted by NewNOG and other organizations from the U.S. and Canada. Overall leadership is provided by the NANOG Steering Committee, established in 2005, and a Program Committee.
History
NANOG evolved from the NSFNET "Regional-Techs" meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and with the Merit engineering staff. At the February 1994 regional techs meeting in San Diego, the group revised its charter to include a broader base of network service providers, and subsequently adopted NANOG as its new name.
NANOG was organized by Merit Network, a non-profit Michigan organization, from 1994 through 2011 when it was transferred to NewNOG.
Funding
Funding for NANOG originally came from the National Science Foundation, as part of two projects Merit undertook in partnership with NSF and other organizations: the NSFNET Backbone Service and the Routing Arbiter project. All NANOG funds now come from conference registration fees and donations from vendors, and starting in 2011, membership dues.
Scope
NANOG meetings provide a forum for the exchange of technical information, and promote discussion of implementation issues that require community cooperation. Coordination among network service providers helps ensure the stability of overall service to network users. The group's charter is available on the official NANOG website.
Topics
The NANOG Program Committee publishes a Call for Presentations as well as proposes topics that address current operational issues. The committee's criteria for selecting talks are outlined on the Call for Presentations: the t |
https://en.wikipedia.org/wiki/Charles%20H.%20Moore | Charles Havice Moore II (born 9 September 1938), better known as Chuck Moore, is an American computer engineer and programmer, best known for inventing the Forth programming language in 1968. He cofounded FORTH, Inc., with Elizabeth Rather in 1971 and continued to evolve the language with an emphasis on simplicity. Beginning in the early 1980s, he shifted focus to designing stack machines in hardware conjoined with Forth-like languages to run on them. He developed the Novix NC4000 and Sh-Boom, then the minimal instruction set MuP21, and i21. In the 2000s he created a series of low-power chips containing up to 144 individual stack processors. He has implemented his own tools for processor design.
Early career
Moore began programming at the Smithsonian Astrophysical Observatory by the late 1950s. He attended the Massachusetts Institute of Technology and received a bachelor's degree in physics in 1961. He entered Stanford University for graduate school to study mathematics, but in 1965 he left and moved to New York City to become a freelance programmer.
Forth
In 1968, while employed at the United States National Radio Astronomy Observatory (NRAO), Moore invented the initial version of the Forth language to help control radio telescopes. In 1971 he co-founded (with Elizabeth Rather) FORTH, Inc., the first, and still one of the leading, purveyors of Forth solutions. During the 1970s he ported Forth to dozens of computer architectures.
Hardware design
In the 1980s, Moore turned his attention and Forth development techniques to CPU design, developing several stack machine microprocessors and gaining several microprocessor-related patents along the way. His designs have all emphasized high performance at low power usage. He also explored alternate Forth architectures such as cmForth and machine Forth, which more closely matched his chips' machine languages.
In 1983 Moore founded Novix, Inc., where he developed the NC4000 processor. This design was licensed to Harris Semiconductor which marketed an enhanced version as the RTX2000, a radiation hardened stack processor which has been used in numerous NASA missions. In 1985 at his consulting firm Computer Cowboys, he developed the Sh-Boom processor. Starting in 1990, he developed his own VLSI CAD system, OKAD, to overcome limitations in existing CAD software. He used these tools to develop several multi-core minimal instruction set computer (MISC) chips: the MuP21 in 1990 and the F21 in 1993.
Moore was a founder of iTv Corp, one of the first companies to work on internet appliances. In 1996 he designed another custom chip for this system, the i21.
Moore developed the colorForth dialect of Forth, a language derived from the scripting language for his custom VLSI CAD system, OKAD. In 2001, he rewrote OKAD in colorForth and designed the c18 processor.
In 2005, Moore co-founded and became Chief Technology Officer of IntellaSys, which develops and markets his chip designs, such as the seaForth-24 multi-core pro |
https://en.wikipedia.org/wiki/SmallBASIC | SmallBASIC is a BASIC programming language dialect with interpreters released as free software under the GNU General Public License version 3 for Microsoft Windows, Linux and Android.
Description
The dialect is described by the authors as a second-generation BASIC and has much in common with QBasic. SmallBASIC includes trigonometric, matrix, and algebra functions, a built-in IDE, a string library, and system, sound, and graphics commands, along with structured programming syntax.
Intended application
The "Small" prefix in the name SmallBASIC reflects the project's original intention of being used with the Palm, a small hand-held device. SmallBASIC was designed for portability, and is written in C with separate modules containing any code that is unique to a particular platform.
SmallBASIC is intended to support the same sorts of applications supported by GW-BASIC and QBasic on the IBM PC, with support for drawing graphic primitives to the screen, creating sounds, manipulating strings, and displaying text in various fonts. SmallBASIC also adds functions such as "File Save", "Save As", "Close File", and "Open File" to the Palm, a device with no native filesystem. SmallBASIC is also intended as a tool for mathematics, with built-in functions for unit conversion, algebra, matrix math, trigonometry, statistics, and for two- and three-dimensional equation graphing.
History
SmallBASIC was designed to run on minimal hardware. One of the primary platforms supported was Palm OS, where memory, CPU cycles, and screen space were limited. The SmallBASIC graphics engine could use ASCII graphics (similar to ASCII art) and therefore ran many programs on pure text devices. SmallBASIC runs even on Palm OS wristwatches made by Fossil, Inc.
Platforms
SmallBASIC is available for all POSIX-compliant operating systems (including Linux, BSD, and UNIX), DOS/DJGPP, Win32, FLTK, VTOS, Franklin eBookMan, Cygwin/MinGW, Helio/VT-OS, Android, the Nokia N770 Internet Tablet, and on any system that supports SDL, FLTK, SVGALib, the Linux framebuffer, or the Windows GUI.
Syntax
The syntax of SmallBASIC has a lot in common with QBasic. Line numbers are not required, and statements are terminated by newlines. Multiple statements may be written on a single line by separating each statement with a colon (:)
An example "Hello, World!" program is:
PRINT "Hello, World!"
An example of how SmallBASIC loads an image file and displays it:
I = IMAGE("image_name.png") 'Loads a png file
I.SHOW(100,100) 'shows the image on screen at the coordinates 100,100
Loadable modules
External modules can be written in C to extend the functionality provided by SmallBASIC. Since version 12.20 modules for Raylib, Nuklear and WebSockets are included in the release. Additionally a loadable module to access the GPIO connector of the Raspberry Pi exists.
Reception
Tech Republic calls it "an excellent tool to begin programming with."
ASCII-World says "SmallBASIC is an excellent tool for math |
https://en.wikipedia.org/wiki/Adventure%20%281980%20video%20game%29 | Adventure is a video game developed by Warren Robinett for the Atari Video Computer System (later renamed Atari 2600) and released in 1980 by Atari, Inc. The player controls a square avatar whose quest is to explore an open-ended environment to find a magical chalice and return it to the golden castle. The game world is populated by roaming enemies: three dragons that can eat the avatar and a bat that randomly steals and hides items around the game world. Adventure introduced new elements to console games, including enemies that continue to move when offscreen.
The game was conceived as a graphical version of the 1977 text adventure Colossal Cave Adventure. Warren Robinett spent approximately one year designing and coding the game, while overcoming a variety of technical limitations in the Atari 2600 console hardware, as well as difficulties with management within Atari. As a result of conflicts with Atari's management, which denied public credit to programmers, Robinett programmed a secret room containing his name within the game, found by players only after the game shipped and Robinett had left Atari. While not the first such Easter egg, Robinett's secret room pioneered the idea within video games and other forms of media, and has since transcended into popular culture, such as in the climax of Ernest Cline's book and its film adaptation Ready Player One.
Adventure received mostly positive reviews at the time of its release and in the decades since, often named as one of the industry's most influential games and among the greatest video games of all time. It is considered the first action-adventure and console fantasy game, and inspired other games in the genres. More than one million cartridges of Adventure were sold, and the game has been included in numerous Atari 2600 game collections for modern computer hardware. The game's prototype code was used as the basis for the 1979 Superman game, and a planned sequel eventually formed the basis for the Swordquest games.
Gameplay
In Adventure, the player's goal is to recover the Enchanted Chalice that an evil magician has stolen and hidden in the kingdom and return it to the Golden Castle. The kingdom is made of a total of thirty rooms, with various obstacles, enemies, and mazes located in and around the Golden, White, and Black Castles. The kingdom is guarded by three dragons (the yellow Yorgle, the green Grundle, and the red Rhindle) that protect or flee from various items and attack the player's avatar. An enemy bat can roam the kingdom freely, carrying an item or a dragon around; the bat was to be named Knubberrub but the name is not in the manual. The bat's two states are agitation and non-agitation. When in the agitated state, the bat will either pick up or swap what it currently carries with an object in the present room, eventually returning to the non-agitated state where it will not pick up an object. The bat continues to fly around even offscreen, swapping objects.
The player's avata |
https://en.wikipedia.org/wiki/Computer-mediated%20communication | Computer-mediated communication (CMC) is defined as any human communication that occurs through the use of two or more electronic devices. While the term has traditionally referred to those communications that occur via computer-mediated formats (e.g., instant messaging, email, chat rooms, online forums, social network services), it has also been applied to other forms of text-based interaction such as text messaging. Research on CMC focuses largely on the social effects of different computer-supported communication technologies. Many recent studies involve Internet-based social networking supported by social software.
Forms
Computer-mediated communication can be broken down into two forms: synchronous and asynchronous. Synchronous computer-mediated communication refers to communication that occurs in real-time. All parties are engaged in the communication simultaneously; however, they are not necessarily all in the same location. Examples of synchronous communication are video chats and FaceTime audio calls. On the contrary, asynchronous computer-mediated communication refers to communication that takes place when the parties engaged are not communicating in unison. In other words, the sender does not receive an immediate response from the receiver. Most forms of computer-mediated technology are asynchronous. Examples of asynchronous communication are text messages and emails.
Scope
Scholars from a variety of fields study phenomena that can be described under the umbrella term of computer-mediated communication (CMC) (see also Internet studies). For example, many take a sociopsychological approach to CMC by examining how humans use "computers" (or digital media) to manage interpersonal interaction, form impressions and maintain relationships. These studies have often focused on the differences between online and offline interactions, though contemporary research is moving towards the view that CMC should be studied as embedded in everyday life. Another branch of CMC research examines the use of paralinguistic features such as emoticons, pragmatic rules such as turn-taking and the sequential analysis and organization of talk, and the various sociolects, styles, registers or sets of terminology specific to these environments (see Leet). The study of language in these contexts is typically based on text-based forms of CMC, and is sometimes referred to as "computer-mediated discourse analysis".
The way humans communicate in professional, social, and educational settings varies widely, depending upon not only the environment but also the method of communication in which the communication occurs, which in this case is through computers or other information and communication technologies (ICTs). The study of communication to achieve collaboration—common work products—is termed computer-supported collaboration and includes only some of the concerns of other forms of CMC research.
Popular forms of CMC include e-mail, video, audio or text chat (t |
https://en.wikipedia.org/wiki/Wes%20Clark | Wes Clark may refer to:
Wes Clark (basketball), American basketball player
Wesley Clark, American army general, retired
Wesley A. Clark, computer scientist |
https://en.wikipedia.org/wiki/Packard%20Bell | Packard Bell Electronics, Inc., was an American computer company independently active from 1986 to 1996, now a Dutch-registered computer manufacturing brand and subsidiary of Acer Inc. The company was originally founded in 1986, after Israeli-American investors bought the trademark rights to the Packard Bell Corporation from Teledyne, wanting the name for their newly formed company, which produced discount personal computers for the North American market. In the late 1990s, Packard Bell became a subsidiary of the Japanese electronics conglomerate NEC, as Packard Bell NEC. In 1999, NEC stopped its North American operations and focused the division squarely on the European market, where it continued to sell PCs and laptops under the Packard Bell name. In 2006, NEC divested Packard Bell, and in 2008, the brand was acquired by the Taiwanese consumer electronics firm Acer, in the aftermath of its takeover of Gateway, Inc.
History
1986–1987: Foundation
The Packard Bell computer company was incorporated in Chatsworth, Los Angeles, California, in 1986 by Beny Alagem, Jason Barzilay, and Alex Sandel, three Israeli-born American businessmen based in California. Packard Bell was previously the namesake of an American consumer electronics company founded in the 1920s, Packard Bell Electronics. The latter made a name for itself with its radios before branching out to television sets in the 1950s. In the 1960s, as with many American consumer electronics companies, Packard Bell encountered difficulty in the marketplace due to increasing competition from Japanese companies like Sony, Sanyo, and Panasonic. In 1968, Packard Bell was acquired by Teledyne Technologies, an electronics conglomerate. Teledyne let the Packard Bell trademark languish in the following years; by the mid-1970s, it was all but retired. Alagem, fascinated by Packard Bell's history as a once-beloved consumer electronics brand, bargained with Teledyne for the rights to the trademark for just under $100,000. Alagem later found that brand recognition for Packard Bell was at 70 percent among a random sampling of adults in the United States.
Alagem and Barzilay met in the 1970s after the former had graduated from Cal Poly. Together, they founded a semiconductor distribution company. In 1983, they merged with another electronics supplier owned by Sandel to form Cal Circuit Abco, Inc., in Woodland Hills, California. Cal Circuit Abco sold computer peripherals on top of semiconductors and had generated over $500 million in annual revenues by 1985. Seeing the increasing commodification of the IBM Personal Computer standard by way of clone makers, or companies that manufactured systems which were plug- and software-compatible with the IBM PC architecture, the three businessmen decided to enter the market and reincorporated Cal Circuit Abco as Packard Bell Electronics in 1986, after Alagem had bought the Packard Bell name from Teledyne. The three leveraged their business connections with Asi |
https://en.wikipedia.org/wiki/Model%20checking | In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash).
In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure.
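To make this simplest case concrete, here is a minimal Python sketch (the formula encoding and function names are invented for illustration) that checks whether a propositional formula is satisfied by a given structure, here just a truth assignment to the atomic propositions:

```python
# A minimal propositional model check: does the structure (a truth
# assignment) satisfy the formula? Formulas are nested tuples; all
# names here are illustrative, not taken from any particular tool.

def holds(formula, assignment):
    """Evaluate a propositional formula against a truth assignment."""
    op = formula[0]
    if op == "var":                      # atomic proposition, e.g. ("var", "p")
        return assignment[formula[1]]
    if op == "not":
        return not holds(formula[1], assignment)
    if op == "and":
        return holds(formula[1], assignment) and holds(formula[2], assignment)
    if op == "or":
        return holds(formula[1], assignment) or holds(formula[2], assignment)
    raise ValueError(f"unknown operator: {op}")

# Check whether the structure {p=True, q=False} satisfies (p and not q).
formula = ("and", ("var", "p"), ("not", ("var", "q")))
print(holds(formula, {"p": True, "q": False}))   # True
```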
Overview
Property checking is used for verification when two descriptions are not equivalent: during refinement, the specification is augmented with details that are absent from the higher-level specification, so the newly introduced properties cannot be verified against the original specification. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.
An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing Award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke and E. A. Emerson, and of J. P. Queille and J. Sifakis. Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking.
Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory), the approach cannot simultaneously be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a specification delivered, e.g., by means of UML activity diagrams or control-interpreted Petri nets.
The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transit |
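As a rough sketch of the idea (not the algorithm of any particular tool), the following Python fragment represents such an FSM as a directed graph whose states carry sets of atomic propositions, and performs an explicit-state check of a simple safety property: no reachable state may carry a "crash" label. Returning the offending path mirrors the counterexample traces that model checkers produce; all names are illustrative.

```python
from collections import deque

# A toy finite state machine as described above: a directed graph of
# states, each labeled with a set of atomic propositions.
transitions = {"init": ["run"], "run": ["run", "done"], "done": []}
labels = {"init": {"ready"}, "run": {"busy"}, "done": {"ok"}}

def check_invariant(initial, bad_prop):
    """Explicit-state check of the safety property "bad_prop never holds":
    no reachable state may be labeled with bad_prop. Returns a
    counterexample path if the property fails, else None."""
    queue = deque([[initial]])
    visited = {initial}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if bad_prop in labels[state]:
            return path                      # counterexample trace
        for succ in transitions[state]:
            if succ not in visited:
                visited.add(succ)
                queue.append(path + [succ])
    return None                              # property holds

print(check_invariant("init", "crash"))      # None: no crash state reachable
```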
https://en.wikipedia.org/wiki/Preview%20%28macOS%29 | Preview is the built-in image viewer and PDF viewer of the macOS operating system. In addition to viewing and printing digital images and Portable Document Format (PDF) files, it can also edit these media types. It employs the Aqua graphical user interface, the Quartz graphics layer, and the ImageIO and Core Image frameworks.
History
Like macOS, Preview originated in the NeXTSTEP operating system by NeXT, where it was part of every release since 1989. Between 2003 and 2005, Apple claimed Preview was the "fastest PDF viewer on the planet."
Supported file types
Preview can open the following file types.
AI – Adobe Illustrator artwork files (if PDF content included in file)
BMP – Windows bitmap files
CR2 – Raw image file used by Canon cameras
DAE – Collada 3D files
DNG – Digital negative files
FAX – Faxes
FPX – FlashPix files
GIF – Graphics Interchange Format files
HDR – High-dynamic-range image files
ICNS – Apple Icon Image files
ICO – Windows icon files
JPEG – Joint Photographic Experts Group files
JPEG 2000 – JPEG 2000 files
OBJ – Wavefront 3D file
OpenEXR – OpenEXR files
PDF – Portable Document Format version 1.5 + some additional features
PICT – QuickDraw image files
PNG – Portable Network Graphics files
PPM – Netpbm Color Image files
PNTG – MacPaint Bitmap Graphic files
PPT – PowerPoint files
PSD – Adobe Photoshop files
QTIF – QuickTime image files
RAD – Radiance 3D Scene Description files
RAW – Raw image files
SGI – Silicon Graphics Image files
STL – STereoLithography 3D format
TGA – TARGA image files
TIF (TIFF) – Tagged Image File Format files
XBM – X BitMap files
In macOS Monterey and earlier, Preview supported the display of EPS and PostScript documents using on-the-fly conversion to PDF format. This functionality was removed in macOS Ventura, although users can continue to print such files by dragging them into the printer queue.
The version of Preview included with OS X 10.3 (Panther) could play animated GIF images, for which an optional button could be added to the toolbar. As of OS X 10.4 (Tiger), Preview lost this playback functionality, and animated GIF files are displayed as individual frames in a numbered sequence.
Features
Editing PDF documents
Preview can encrypt PDF documents and restrict their use; for example, it is possible to save an encrypted PDF so that a password is required to copy data from the document, or to print it. However, encrypted PDFs cannot be edited further, so the original author should always keep an unencrypted version. Version 7 introduced an "edit button" that opens editing tools, with options to insert shapes and lines, crop the image, and more.
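Preview's encryption is a built-in macOS feature; purely to illustrate the encrypt-on-save idea described above in scriptable form, the sketch below uses the third-party pypdf library (an assumption: pypdf must be installed, and the file names are hypothetical). This is not how Preview itself works.

```python
# Illustrative only: producing a password-protected copy of a PDF with
# the third-party pypdf library. File names are hypothetical.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("report.pdf")
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

# Require a password to open the document. Keep the unencrypted
# original, since the encrypted copy cannot easily be edited further.
writer.encrypt(user_password="s3cret")
with open("report-encrypted.pdf", "wb") as f:
    writer.write(f)
```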
Preview provides some features that are otherwise only available in professional PDF editing software: it is possible to extract single pages from multi-page documents (e.g. PDF files), sort pages, and drag and drop single or multiple pages between several opened mult |
https://en.wikipedia.org/wiki/High-level%20assembler | A high-level assembler in computing is an assembler for assembly language that incorporates features found in high-level programming languages.
The earliest high-level assembler was probably Burroughs' Executive Systems Problem Oriented Language (ESPOL) in about 1960, which provided an ALGOL-like syntax around explicitly specified Burroughs B5000 machine instructions. This was followed by Niklaus Wirth's PL360 in 1968; this replicated the Burroughs facilities, with which he was familiar, on an IBM System/360. More recent high-level assemblers include Borland's Turbo Assembler (TASM), Netwide Assembler (NASM), Microsoft's Macro Assembler (MASM), IBM's High Level Assembler (HLASM) for z/Architecture systems, Alessandro Ghignola's Linoleum, X# used in Cosmos, and Ziron.
High-level assemblers typically provide instructions that directly assemble one-to-one into low-level machine code as in any assembler, plus control statements such as IF, WHILE, REPEAT...UNTIL, and FOR, macros, and other enhancements. This allows the use of high-level control statement abstractions wherever maximal speed or minimal space is not essential; low-level statements that assemble directly to machine code can be used to produce the fastest or shortest code. The end result is assembly source code that is far more readable than standard assembly code while preserving the efficiency inherent in using assembly language.
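To make this lowering concrete, here is a small Python sketch of how a high-level assembler might expand a WHILE construct into the compare-and-branch instructions that assemble one-to-one into machine code; the mnemonics and label scheme are invented for illustration.

```python
# Sketch of how a high-level assembler might expand a structured
# WHILE statement into plain compare-and-branch instructions.
label_counter = 0

def expand_while(condition, body):
    """Lower `WHILE condition != 0 { body }` to branch instructions."""
    global label_counter
    label_counter += 1
    top, end = f"L{label_counter}_top", f"L{label_counter}_end"
    lines = [f"{top}:",
             f"    CMP {condition}",        # test the loop condition
             f"    JZ  {end}"]              # exit when the condition fails
    lines += [f"    {instr}" for instr in body]
    lines += [f"    JMP {top}",             # back to the test
              f"{end}:"]
    return "\n".join(lines)

print(expand_while("ecx, 0", ["DEC ecx", "ADD eax, 2"]))
```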
High-level assemblers generally provide information-hiding facilities and the ability to call functions and procedures using a high-level-like syntax (i.e., the assembler automatically produces code to push parameters on the call stack rather than the programmer having to manually write the code to do this).
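In the same spirit, a sketch of how such a call might be lowered into explicit stack pushes and a CALL, under an invented caller-cleanup convention (real calling conventions vary; mnemonics are again illustrative):

```python
# Sketch of lowering a high-level call `proc(arg1, arg2, ...)` into
# stack pushes and a CALL, as a high-level assembler would emit them.
def expand_call(proc, args):
    lines = [f"    PUSH {a}" for a in reversed(args)]  # push right to left
    lines.append(f"    CALL {proc}")
    lines.append(f"    ADD esp, {4 * len(args)}")      # caller cleans stack
    return "\n".join(lines)

print(expand_call("draw_point", ["x", "y", "color"]))
```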
High-level assemblers also provide data abstractions normally found in high-level languages. Examples include data structures, unions, classes, and sets. Some high-level assemblers (e.g., TASM and High Level Assembly (HLA)) support object-oriented programming.