https://en.wikipedia.org/wiki/His
His or HIS may refer to:

Computing
- Hightech Information System, a Hong Kong graphics card company
- Honeywell Information Systems
- Hybrid intelligent system
- Microsoft Host Integration Server

Education
- Hangzhou International School, in China
- Harare International School, in Zimbabwe
- Hokkaido International School, in Japan
- Hsinchu International School, in Taiwan
- Hollandsch-Inlandsche School, a Dutch school for native Indonesians in the Dutch East Indies

Science
- Bundle of His, a collection of specialized heart cells
- Health information system
- Hospital information system
- Human identical sequence
- His-Tag, a polyhistidine motif in proteins
- Histidine, an amino acid abbreviated as His or H
- His 1 virus, a synonym of Halspiviridae
- HIS-1, a long non-coding RNA, also known as VIS1

People
- Wilhelm His Sr. (1831–1904), Swiss anatomist
- Wilhelm His Jr. (1863–1934), Swiss anatomist

Places
- His, Agder, a village in Arendal municipality in Agder county, Norway
- His, Haute-Garonne, a commune in the Haute-Garonne department, France

Other uses
- His, the possessive form of the English-language pronoun he
- H.I.S. (travel agency), a Japanese travel agency
- B sharp, known as His in some European countries

See also
- Hiss (disambiguation)
https://en.wikipedia.org/wiki/Jini
Jini, also called Apache River, is a network architecture for the construction of distributed systems in the form of modular co-operating services. JavaSpaces is a part of Jini. Originally developed by Sun Microsystems, Jini was released under the Apache License 2.0. Responsibility for Jini has been transferred to Apache under the project name "River". History Sun Microsystems introduced Jini in July 1998. In November 1998, Sun announced that several firms were supporting Jini. The Jini team at Sun has always stated that Jini is not an acronym. Ken Arnold has joked that it means "Jini Is Not Initials", making it a recursive anti-acronym, but it has always been just Jini. The word 'jini' means "the devil" in Swahili, borrowed from the Arabic word for a mythological spirit (jinn); the English word 'genie', which comes from the Latin genius, is conventionally used to translate it. Jini provides the infrastructure for the service-object-oriented architecture (SOOA). Using a service Locating services is done through a lookup service. Services try to contact a lookup service (LUS), either by unicast interaction, when the service knows the actual location of the lookup service, or by dynamic multicast discovery. The lookup service returns an object called the service registrar, which services use to register themselves so they can be found by clients. Clients can use the lookup service to retrieve a proxy object to the service; the proxy translates each call into a service request, performs the request on the service, and returns the result to the client. This strategy is more convenient than Java remote method invocation, which requires the client to know the location of the remote service in advance. Limitations Jini uses a lookup service to broker communication between the client and service. This appears to be a centralized model (though the communication between client and service can be seen as decentralized) that does not scale well to very large systems.
However, the lookup service can be horizontally scaled by running multiple instances that listen to the same multicast group.

See also
- Jim Waldo, lead architect of Jini
- Ken Arnold, one of the original Jini architects
- Juxtapose (JXTA)
- SORCER
- Java Management Extensions (JMX)
- Simple Network Management Protocol (SNMP)
- Zero Configuration Networking
- OSGi Alliance
- Service Location Protocol
- Universal Plug and Play (UPnP)
- Devices Profile for Web Services (DPWS)
- Tuple space
- CORBA
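The lookup-and-proxy interaction described under "Using a service" can be sketched in miniature. The Python sketch below uses hypothetical class names (LookupService, ServiceProxy, EchoService — none of them the real net.jini API) to show the shape of the pattern: services register with a lookup service, and clients call through a proxy without knowing the service's location.

```python
class LookupService:
    """Plays the role of the Jini lookup service (LUS) / registrar."""
    def __init__(self):
        self._registry = {}                  # service name -> service object

    def register(self, name, service):
        # In Jini, services register via the service registrar object.
        self._registry[name] = service

    def lookup(self, name):
        # Return a proxy; the client never needs the service's location.
        return ServiceProxy(self._registry[name])

class ServiceProxy:
    """Translates client calls into requests on the backing service."""
    def __init__(self, service):
        self._service = service

    def call(self, method, *args):
        # Forward the call to the service and return the result.
        return getattr(self._service, method)(*args)

class EchoService:
    def echo(self, msg):
        return msg

lus = LookupService()
lus.register("echo", EchoService())          # service registers itself
proxy = lus.lookup("echo")                   # client discovers via the LUS
print(proxy.call("echo", "hello"))           # -> hello
```

In the real system the proxy is a serialized Java object shipped to the client, which is what lets clients remain ignorant of the service's actual network location.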
https://en.wikipedia.org/wiki/Mary%20Kay%20and%20Johnny
Mary Kay and Johnny is an American situation comedy starring real-life married couple Mary Kay and Johnny Stearns. It was the first sitcom broadcast on network television in the United States. Mary Kay and Johnny initially aired live on the DuMont Television Network before moving to CBS and then NBC. Format Plots centered on a bank employee and his "zany, but not dumb" wife and the problems that they encountered. Much of the activity occurred in the couple's apartment in Greenwich Village. A review in the March 6, 1948, issue of the trade publication Billboard began, "This program comes close to being a model tele[vision] show. In detailing the adventures, mainly domestic, of a young married couple, Johnny and Mary Kay Stearns have come up with charming and fresh material, which always takes into consideration that there are cameras taking everything in." Later in the review, however, the author wrote, "At times the show got just a bit too cute" with the female star squealing too much and the story falling into familiar family sitcom patterns. Cast In addition to the Stearnses, the cast included their son, Christopher Stearns, as himself. Mary Kay's mother was played by Nydia Westman, and Johnny's friend Howie was played by Howard Thomas. Jim Stevenson was the announcer. Broadcast history The first 15-minute episode debuted on the DuMont Television Network on Tuesday, November 18, 1947. The Stearnses created and wrote all the scripts. The program was broadcast live, most of the action taking place on a set representing the New York City apartment of the title characters, a young married couple. Mary Kay and Johnny was the first program to show a couple sharing a bed, and the first series to show a woman's pregnancy on television: Mary Kay became pregnant in 1948 and, after unsuccessfully trying to hide her pregnancy, the producers wrote it into the show. On December 31, 1948, the Stearnses' weeks-old son Christopher appeared on the show and became a character.
After a year on DuMont, the show moved to CBS for half a year, much of the time being broadcast every weeknight, then ran for another year each Saturday night on NBC, where it debuted on October 10, 1948. The final episode aired on March 11, 1950. Viewership At a time when there were no TV ratings (the A.C. Nielsen Company would not begin measuring TV ratings until 1950), Anacin decided to take a chance and sponsor the show. This decision worried the advertising executives at Anacin, who thought that they might be wasting money by sponsoring a show with a sparse audience. A simple, non-scientific scheme to gauge the size of the audience was hatched. During one commercial spot, Anacin offered a free pocket mirror to the first 200 viewers who wrote in requesting one. As a precaution, they purchased a total of 400 mirrors in case the audience was twice as large as they expected. Although the free mirror was offered only during that one spot, Anacin received nearly 9000 re
https://en.wikipedia.org/wiki/Compression%20artifact
A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user. The most common digital compression artifacts are DCT blocks, caused by the discrete cosine transform (DCT) compression algorithm used in many digital media standards, such as JPEG, MP3, and MPEG video file formats. These compression artifacts appear when heavy compression is applied, and occur often in common digital media, such as DVDs, common computer file formats such as JPEG, MP3 and MPEG files, and some alternatives to the compact disc, such as Sony's MiniDisc format. Uncompressed media (such as on Laserdiscs, Audio CDs, and WAV files) or losslessly compressed media (such as FLAC or PNG) do not suffer from compression artifacts. The minimization of perceivable artifacts is a key goal in implementing a lossy compression algorithm. However, artifacts are occasionally intentionally produced for artistic purposes, a style known as glitch art or datamoshing. Technically speaking, a compression artifact is a particular class of data error that is usually the consequence of quantization in lossy data compression. Where transform coding is used, it typically assumes the form of one of the basis functions of the coder's transform space. 
Images When performing block-based discrete cosine transform (DCT) coding for quantization, as in JPEG-compressed images, several types of artifacts can appear:
- Ringing
- Contouring
- Posterizing
- Staircase noise (aliasing) along curving edges
- Blockiness in "busy" regions (block boundary artifacts, sometimes called macroblocking, quilting, or checkerboarding)
Other lossy algorithms, which use pattern matching to deduplicate similar symbols, are prone to introducing hard-to-detect errors in printed text. For example, the numbers "6" and "8" may get replaced. This has been observed to happen with JBIG2 in certain photocopier machines. Block boundary artifacts At low bit rates, any lossy block-based coding scheme introduces visible artifacts in pixel blocks and at block boundaries. These boundaries can be transform block boundaries, prediction block boundaries, or both, and may coincide with macroblock boundaries. The term macroblocking is commonly used regardless of the artifact's cause. Other names include tiling, mosaicing, pixelating, quilting, and checkerboarding. Block artifacts are a result of the very principle of block transfo
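The block artifacts described above arise because each block's DCT coefficients are quantized independently, so the reconstruction error jumps at block edges. A minimal NumPy sketch of one 8×8 block (the quantization step is illustrative, not a real JPEG table):

```python
import numpy as np

N = 8
# Build the orthonormal DCT-II matrix: C[k, n] = sqrt(2/N) cos(pi(2n+1)k / 2N),
# with the k=0 row scaled by 1/sqrt(2) so that C @ C.T == I.
n, k = np.meshgrid(np.arange(N), np.arange(N))
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (N, N))      # one 8x8 "image" block

coeffs = C @ block @ C.T                 # forward 2-D DCT
step = 64                                # coarse quantization step (heavy compression)
quantized = np.round(coeffs / step) * step
reconstructed = C.T @ quantized @ C      # inverse 2-D DCT

# The per-block quantization error is what appears as ringing within a
# block and, since neighbouring blocks err differently, as visible edges
# along block boundaries.
err = np.abs(block - reconstructed).mean()
print(f"mean absolute error per pixel: {err:.1f}")
```

Without the quantization step the round trip is exact; all visible distortion is introduced by rounding the coefficients.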
https://en.wikipedia.org/wiki/Sami%20Al-Arian
Sami Amin Al-Arian (; born January 14, 1958) is a Kuwaiti-born political activist of Palestinian origin who was a computer engineering professor at University of South Florida. During the Clinton administration and Bush administration, he was invited to the White House. He actively campaigned for the Bush presidential campaign in the United States presidential election in 2000. After a contentious interview with Bill O'Reilly on The O'Reilly Factor following the September 11 attacks, Al-Arian's tenure at University of South Florida came under public scrutiny. He was indicted in February 2003 on 17 counts under the Patriot Act. A jury acquitted him on 8 counts and deadlocked on the remaining 9 counts. He later struck a plea bargain and admitted to one of the remaining charges in exchange for being released and deported by April 2007. However, as his release date approached, a federal prosecutor in Virginia demanded he testify before a grand jury in a separate case, which he refused to do, claiming it would violate his plea deal. He was held under house arrest in Northern Virginia from 2008 until 2014 when federal prosecutors filed a motion to dismiss charges against him. Al-Arian's activities and connections became a factor in multiple political campaigns, including the 2004 United States Senate election in Florida and the 2010 United States Senate election in California. He was deported to Turkey on February 4, 2015. Early life and education Kuwait and Egypt Al-Arian was born on January 14, 1958, in Kuwait. His parents, Amin and Laila Al-Arian, were Palestinian refugees who left after the creation of Israel in 1948. After the 1948 Palestine war, Amin had to leave behind the family soap factory in Jaffa and flee towards the Gaza Strip's refugee camps. Amin's family migrated to Kuwait in 1957 where Sami Al-Arian was born. Under Kuwaiti law, his parents had legal resident status but he was not eligible for citizenship. 
In 1966, his family left Kuwait and went back to Egypt. He received his primary and secondary education in Cairo, Egypt. He left Egypt in 1975, and returned in 1979 for a visit, during which he married Nahla Al-Najjar. United States Sponsored by his father, Al-Arian came to the United States in 1975 to study engineering at Southern Illinois University. In 1978, he graduated with a major in Electrical Sciences and Systems Engineering. At North Carolina State University, he earned his master's degree in 1980 and doctorate in 1985. He worked with Professor Dharma P. Agrawal on physical failures and fault models of CMOS circuits. Tenured at University of South Florida He moved to Temple Terrace after he was hired as an assistant professor to teach computer engineering at the University of South Florida (USF) on January 22, 1986. He was granted permanent resident status in the United States in March 1989. He was promoted from assistant professor to associate professor with tenure. He received many accolad
https://en.wikipedia.org/wiki/Superfamily
Superfamily may refer to: Protein superfamily Superfamily database Superfamily (taxonomy), a taxonomic rank Superfamily (linguistics), also known as macrofamily Font superfamily, a large typographic family Superfamily (band), a Norwegian pop band "Super Family", a group of comic characters
https://en.wikipedia.org/wiki/National%20Educational%20Television
National Educational Television (NET) was an American educational broadcast television network owned by the Ford Foundation and later co-owned by the Corporation for Public Broadcasting. It operated from May 16, 1954, to October 4, 1970, and was succeeded by the Public Broadcasting Service (PBS), whose member stations include many that were formerly part of NET. The Council on Library and Information Resources (CLIR) provided funds for cataloging the NET collection, and as part of an ongoing preservation effort with the Library of Congress, over 10,000 digitized television programs from non-commercial TV stations and producers, spanning the 20 years from 1952 to 1972, have been contributed to the American Archive of Public Broadcasting. History The network was founded as the Educational Television and Radio Center (ETRC) in November 1952 by a grant from the Ford Foundation's Fund for Adult Education (FAE). It was originally a limited service for exchanging and distributing educational television programs produced by local television stations to other stations; it did not produce any material itself. In the spring of 1954, ETRC moved its operations to Ann Arbor, Michigan, and on May 16 of that year, it began operating as a "network". It put together a weekly five-hour package of television programs, distributing them primarily on kinescope film to the affiliated stations by mail. By 1956, ETRC had 22 affiliated stations, expected to grow to 26 by March 1957. The programming was noted for treating subjects in depth, including hour-long interviews with people of literary and historical importance. The programming was also noted for being dry and academic, with little consideration given to entertainment value, a marked contrast to commercial television.
Many of the shows were designed as adult education, and ETRC was nicknamed the "University of the Air" (or, less kindly, "The Bicycle Network", both for its low budget and for the way NET supposedly sent programs to its affiliates, by distributing its program films and videotapes via non-electronic means such as by mail, termed in the television industry as "bicycling"). The center's headquarters moved from Ann Arbor to New York City in 1958, and the organization became known as the National Educational Television and Radio Center (NETRC). The center became more aggressive at this time, aiming to ascend to the role of the U.S.' fourth television network. Among its efforts, the network began importing programs from the BBC into the United States, starting with An Age of Kings in 1961. It increased its programming output to ten hours a week. Most NETRC network programs were produced by the affiliate stations because the NETRC had no production staff or facilities of its own. NETRC also contracted programs from independent producers and acquired foreign material from countries like Canada, the United Kingdom, Australia, Yugoslavia, the USSR, France, Italy and West Germany. Start
https://en.wikipedia.org/wiki/Felicific%20calculus
The felicific calculus is an algorithm formulated by utilitarian philosopher Jeremy Bentham (1748–1832) for calculating the degree or amount of pleasure that a specific action is likely to induce. Bentham, an ethical hedonist, believed the moral rightness or wrongness of an action to be a function of the amount of pleasure or pain that it produced. The felicific calculus could, in principle at least, determine the moral status of any considered act. The algorithm is also known as the utility calculus, the hedonistic calculus and the hedonic calculus. To be included in this calculation are several variables (or vectors), which Bentham called "circumstances". These are:
- Intensity: How strong is the pleasure?
- Duration: How long will the pleasure last?
- Certainty or uncertainty: How likely or unlikely is it that the pleasure will occur?
- Propinquity or remoteness: How soon will the pleasure occur?
- Fecundity: The probability that the action will be followed by sensations of the same kind.
- Purity: The probability that it will not be followed by sensations of the opposite kind.
- Extent: How many people will be affected?
Bentham's instructions To take an exact account of the general tendency of any act, by which the interests of a community are affected, proceed as follows. Begin with any one person of those whose interests seem most immediately to be affected by it: and take an account,
- Of the value of each distinguishable pleasure which appears to be produced by it in the first instance.
- Of the value of each pain which appears to be produced by it in the first instance.
- Of the value of each pleasure which appears to be produced by it after the first. This constitutes the fecundity of the first pleasure and the impurity of the first pain.
- Of the value of each pain which appears to be produced by it after the first. This constitutes the fecundity of the first pain, and the impurity of the first pleasure.
- Sum up all the values of all the pleasures on the one side, and those of all the pains on the other. The balance, if it be on the side of pleasure, will give the good tendency of the act upon the whole, with respect to the interests of that individual person; if on the side of pain, the bad tendency of it upon the whole.
- Take an account of the number of persons whose interests appear to be concerned; and repeat the above process with respect to each.
- Sum up the numbers expressive of the degrees of good tendency, which the act has, with respect to each individual, in regard to whom the tendency of it is good upon the whole. Do this again with respect to each individual, in regard to whom the tendency of it is bad upon the whole.
- Take the balance, which, if on the side of pleasure, will give the general good tendency of the act, with respect to the total number or community of individuals concerned; if on the side of pain, the general evil tendency, with respect to the same community.
To make his proposal easier to remember, Bentham devised wh
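Bentham's instructions are mechanical enough to express directly as code. The sketch below uses a hypothetical data model (per-person ledgers of pleasure and pain values) and purely illustrative numbers; Bentham himself gave no numeric scale.

```python
def personal_balance(pleasures, pains):
    """Balance of an act for one person: sum of pleasures minus sum of pains."""
    return sum(pleasures) - sum(pains)

def general_tendency(community):
    """Sum the per-person balances over everyone affected (Bentham's 'extent')."""
    return sum(personal_balance(p, q) for p, q in community)

# Two people affected by the same act (illustrative values only):
community = [
    ([7, 2], [1]),    # person A: pleasures 7 and 2, pain 1 -> balance +8
    ([3], [5, 4]),    # person B: pleasure 3, pains 5 and 4 -> balance -6
]

tendency = general_tendency(community)
print("good" if tendency > 0 else "bad", tendency)   # -> good 2
```

A fuller model would weight each pleasure by intensity, duration, certainty, and propinquity before summing, per the circumstances listed above; the structure of the computation, a signed sum over persons, stays the same.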
https://en.wikipedia.org/wiki/PCCW
PCCW Limited (formerly known as Pacific Century CyberWorks Limited) is a Hong Kong-based information and communication technology (ICT) company. The company is the major owner of telecommunications company HKT Limited, and also holds a major interest in Pacific Century Premium Developments Limited. PCCW is headquartered in Hong Kong and operates in Europe, the Middle East, Africa, the Americas, mainland China, and other parts of Asia. Main business and subsidiaries HKT Limited Subsidiaries and services
- Netvigator
- CSL Mobile ("csl", "1O1O" and "Club SIM")
- Sun Mobile (majority stake)
- PPS (Payment by Phone Service) – a bill payment service provided by HKT and EPS
- The Club – a Hong Kong customer loyalty programme
- HKT Teleservices – formerly PCCW Teleservices, a contact-centre and business process outsourcing provider
- HKT Payment Limited – the developer of "Tap & Go", a prepaid mobile payment service for Hong Kong users
- HKT-eye – over-the-top media services and an Internet Protocol TV service delivered to firmware-modified tablet computers
PCCW Global
- Console Connect
- Crypteia Networks
HKT Interactive Media (Now TV/MOOV) Moov is a lossless digital music streaming service based in Hong Kong. It provides music content, including songs, concert videos, MVs, and other music shows, for a monthly fee. Former service: iTV, an interactive television service. Viu Viu is an over-the-top (OTT) video service operated by PCCW Media, providing popular Korean dramas and variety shows. ViuTV ViuTV and ViuTVsix are general entertainment television channels in Hong Kong operated by HK Television Entertainment (HKTVE). The channels serve as a free-to-air outlet for television programmes shown on the channels operated by Now TV. In 2015, HK Television Entertainment was granted a 12-year free-to-air television broadcast license by the Hong Kong Government. Lenovo PCCW Solutions PCCW Solutions is the information technology services and business process outsourcing (BPO) division of PCCW.
Press releases prior to February 2006 refer to PCCW Solutions by the name Unihub. Unihub was a re-branding of PCCW's Business eSolutions division, effective 1 September 2003. The Business eSolutions division formed a venture with China Telecom in 2002 to provide IT solutions to major business organisations. This was in addition to PCCW's PCITC alliance with Sinopec, formed to serve Sinopec and other players in China's petrochemical sector. The division also contributed to the new Hong Kong Identity Card system in 2003. In early 2003, Business eSolutions entered a contract to provide services for Bank of China's credit card back-office processing system in China. It also extended a 2002 enterprise resource planning (ERP) project into more provinces for China Mobile, completed the flight information display system (FIDS) for Xiamen Airport, and delivered a human resource management and financial management system for the Hong Kong Council of Social Service. On August 15, 2022, PCCW and
https://en.wikipedia.org/wiki/Microsoft%20Personal%20Web%20Server
Microsoft Personal Web Server (PWS) is scaled-down web server software for Windows operating systems. It has fewer features than Microsoft's Internet Information Services (IIS), and its functions have been superseded by IIS and Visual Studio. Microsoft officially supports PWS on Windows 95–98, Windows 98 SE, and Windows NT 4.0. Prior to the release of Windows 2000, PWS was available as a free download as well as included on the Windows distribution CDs. PWS 4 was the last version; it can be found on the Windows 98 CD and in the Windows NT 4.0 Option Pack. Personal Web Server was originally created by Vermeer Technologies, the company that created FrontPage, before both were acquired by Microsoft. It was also installed by FrontPage versions 1.1 to 98. NT Workstation 4.0 shipped with Peer Web Services, which was based on IIS 2.0 and 3.0. With IIS 4.0, this was renamed Personal Web Server to be consistent with the name used in Windows 95/98. Starting with Windows 2000, PWS was superseded by IIS, under the same name used in server versions of Windows, as a standard Windows component. Windows ME and Windows XP Home Edition support neither PWS nor IIS, although PWS can be installed on Windows ME. In other editions of Windows XP, IIS is included as standard. Before Microsoft Visual Studio 2005, PWS was useful for developing web applications on the localhost before deploying them to a production web server. The IDE of Visual Studio 2005 (and later versions) contains a built-in lightweight web server for such development purposes. PWS supports FTP, SMTP, HTTP, and the usual web languages such as PHP and Perl. It also supports basic CGI (Common Gateway Interface) conventions and a subset of Classic ASP. Using these technologies, web applications running on PWS are capable of performing and interpreting database queries and results.
Microsoft also produced a version of Personal Web Server for the Macintosh based on code acquired in its acquisition of ResNova Software in November 1996. References Home servers Web server software Personal Web Server
https://en.wikipedia.org/wiki/VA%20Kernel
The VA Kernel is a set of programs, developed by the United States Department of Veterans Affairs, which provide an operating-system- and MUMPS-implementation-independent abstraction layer for the VistA Hospital Information System. These programs (called 'routines' in MUMPS) are the only ones expected not to be written in ANSI Standard MUMPS. The MUMPS language used in the kernel is remarkably simple, consisting of a single language (MUMPS), a single data type (the string), a single data storage mechanism (global arrays stored on disk), 19 commands, and 22 functions. MUMPS is a symbolic language with linguistic roots closer to LISP than to Fortran or COBOL. Because of this simple software layer, the VistA software architecture has been able to adapt to changing hardware environments over the decades with minimal software changes at higher levels of abstraction. The CHCS and RPMS systems have a Kernel as well, which provides a similar degree of support to those systems as the VA Kernel does to VistA. The VA Kernel provides abstractions for:
- Menu management (MenuMan)
- Electronic mail, group conferencing, and transaction processing (MailMan)
- Login and access security
- Task scheduling and batch processing
- Input/output devices
- Protocol and event processing
- Date processing and manipulation
- Mathematical and common library functions
References Video interview of Tom Munnecke on the design of the kernel
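The "global arrays stored on disk" storage model mentioned above can be illustrated by analogy. This Python sketch emulates a MUMPS global with an in-memory dict (an analogy only, not an actual MUMPS implementation): a global is a sparse array addressed by arbitrary subscripts, where subscripts and values are all strings, as in SET ^PATIENT(42,"NAME")="SMITH,JOHN".

```python
class Global:
    """Dict-backed stand-in for a MUMPS global array (e.g. ^PATIENT)."""
    def __init__(self):
        self._data = {}                          # subscript tuple -> string value

    def set(self, *args):
        # Last argument is the value; the rest are subscripts. Everything
        # is coerced to a string, mirroring MUMPS's single data type.
        *subscripts, value = args
        self._data[tuple(str(s) for s in subscripts)] = str(value)

    def get(self, *subscripts):
        # An unset node reads as the empty string, as in MUMPS.
        return self._data.get(tuple(str(s) for s in subscripts), "")

patient = Global()                               # plays the role of ^PATIENT
patient.set(42, "NAME", "SMITH,JOHN")            # SET ^PATIENT(42,"NAME")="SMITH,JOHN"
patient.set(42, "DOB", "19350315")
print(patient.get(42, "NAME"))                   # -> SMITH,JOHN
```

A real MUMPS global is additionally persistent (written to disk) and keeps subscripts in collating order for $ORDER traversal, which this sketch omits.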
https://en.wikipedia.org/wiki/Hiroo%20Yamagata
Hiroo Yamagata (born 1964) is a Japanese author, critic, economist and translator. He has translated important works on computer technology into Japanese, including "The Cathedral and the Bazaar" by Eric S. Raymond and "Code and Other Laws of Cyberspace" by Lawrence Lessig. He is also the founder and chairman of Project Sugita Genpaku, a volunteer effort to translate free-content texts into Japanese.

See also
- Japanese literature
- List of Japanese authors
- Paul Krugman

External links
- YAMAGATA Hiroo: The Official Page
- YAMAGATA Hiroo: The Official Page (in Japanese)
- "Project Sugita Genpaku" (in Japanese)
https://en.wikipedia.org/wiki/Jimmy%20Swaggart
Jimmy Lee Swaggart (born March 15, 1935) is an American Pentecostal televangelist. Jimmy Swaggart Ministries owns and operates the SonLife Broadcasting Network (SBN). Swaggart is the senior pastor of the Family Worship Center in Baton Rouge, Louisiana. Swaggart was defrocked by the Assemblies of God in 1988 after a prostitution scandal. In 1991, he was pulled over by police with a prostitute in his car. Early life Jimmy Lee Swaggart was born on March 15, 1935, in Ferriday, Louisiana, to fiddle player and Pentecostal preacher Willie Leon (known as "Sun" or "Son") Swaggart and Minnie Bell Herron, daughter of sharecropper William Herron. Swaggart's parents were related by marriage, as Son Swaggart's maternal uncle, Elmo Lewis, was married to Minnie Herron's sister, Mamie. The extended family had a complex network of interrelationships: "cousins and in-laws and other relatives married each other until the clan was entwined like a big, tight ball of rubber bands". Swaggart is the cousin of rockabilly pioneer Jerry Lee Lewis and country music star Mickey Gilley. He had a sister, Jeanette Ensminger (1942–1999). With his parents, Swaggart attended small Assemblies of God churches in Ferriday and Wisner. In 1952, aged 17, Swaggart married 15-year-old Frances Anderson, whom he met in church in Wisner, Louisiana, while he was playing music with his father, who pastored the Assembly of God church there. They have a son named Donnie. Swaggart worked several part-time odd jobs to support his young family and also began singing Southern Gospel music at various churches. According to his autobiography To Cross a River, Swaggart, along with his wife and son, lived in poverty during the 1950s as he preached throughout rural Louisiana, struggling to survive on an income of $30 a week. Being too poor to own a home, the Swaggarts lived in church basements, homes of pastors, and small motels.
Sun Records producer Sam Phillips wanted to start a gospel line of music for the label (perhaps to remain in competition with RCA Victor and Columbia, who also had gospel lines at the time) and wanted Swaggart for Sun as the first gospel artist for the label. Swaggart's cousin, Jerry Lee Lewis, had previously signed with Sun and was reportedly earning $20,000 per week at the time. Although the offer meant a promise for significant income for him and his family, Swaggart turned Phillips down, stating that he was called to preach the gospel. Career Ordination and early career Preaching from a flatbed trailer donated to him, Swaggart began full-time evangelistic work in 1955. He began developing a revival-meeting following throughout the American South. In 1960, he began recording gospel music record albums and transmitting on Christian radio stations. In 1961, Swaggart was ordained by the Assemblies of God; a year later he began his radio ministry. In the late 1960s, Swaggart founded what was then a small church named the Family Worship Center in Baton Rouge, Louis
https://en.wikipedia.org/wiki/CBBS
CBBS ("Computerized Bulletin Board System") was a computer program created by Ward Christensen and Randy Suess to allow them and other computer hobbyists to exchange information with each other. In January 1978, Chicago was hit by the Great Blizzard of 1978, which dumped record amounts of snow throughout the Midwest. Among those caught in the storm were Christensen and Suess, who were members of CACHE, the Chicago Area Computer Hobbyists' Exchange. They had met at that computer club in the mid-1970s and become friends. Christensen had created a file transfer protocol for sending binary computer files through modem connections, which was called, simply, MODEM. Later improvements to the program motivated a name change to the now-familiar XMODEM. The success of this project encouraged further experiments. CACHE members frequently shared programs and had long been discussing some form of file transfer using modems, and Christensen was naturally at the center of these discussions; however, Suess in particular was skeptical of accomplishing such a project by a volunteer committee. Christensen and Suess became enamored of the extended idea of creating a computerized answering machine and message center, which would allow members to call in with their then-new modems and leave announcements for upcoming meetings. However, they needed some quiet time to set aside for such a project, and the blizzard gave them that time. Christensen worked on the software and Suess cobbled together an S-100 computer to run the program on. They had a working version within two weeks, but claimed soon afterwards that it had taken four so that it wouldn't seem like a "rushed" project. Time and tradition have settled that date to be February 16, 1978. Christensen and Suess described their innovation in an article entitled "Hobbyist Computerized Bulletin Board" in the November 1978 issue of Byte magazine.
Because the Internet was still small and not available to most computer users, users had to dial CBBS directly using a modem. Also because the CBBS hardware and software supported only a single modem for most of its existence, users had to take turns accessing the system, each hanging up when done to let someone else have access. Despite these limitations, the system was seen as very useful, and ran for many years and inspired the creation of many other bulletin board systems. Ward & Randy would often watch the users while they were online and comment or go into chat if the subject warranted. At times, online users wondered if Ward & Randy actually existed. The program had many forward thinking ideas, now accepted as canonical in the creation of message bases or "forums". As Christensen and Suess went their separate ways, the CBBS name lived on, and survives to an extent as a web-based forum on Suess' website, chinet.com. Christensen's version of CBBS, called "Ward's Board", closed in the early 1990s. On February 16, 2003, Chicago's Mayor Richard M. Daley declare
https://en.wikipedia.org/wiki/Point-to-Point%20Tunneling%20Protocol
The Point-to-Point Tunneling Protocol (PPTP) is an obsolete method for implementing virtual private networks. PPTP has many well-known security issues. PPTP uses a TCP control channel and a Generic Routing Encapsulation tunnel to encapsulate PPP packets. Many modern VPNs use various forms of UDP for this same functionality. The PPTP specification does not describe encryption or authentication features and relies on the Point-to-Point Protocol being tunneled to implement any and all security functionality. The PPTP implementation that ships with the Microsoft Windows product families implements various levels of authentication and encryption natively as standard features of the Windows PPTP stack. The intended use of this protocol is to provide security levels and remote access levels comparable with typical VPN products. History A specification for PPTP was published in July 1999 as RFC 2637 and was developed by a vendor consortium formed by Microsoft, Ascend Communications (today part of Nokia), 3Com, and others. PPTP has not been proposed nor ratified as a standard by the Internet Engineering Task Force. Description A PPTP tunnel is instantiated by communication with the peer on TCP port 1723. This TCP connection is then used to initiate and manage a GRE tunnel to the same peer. The PPTP GRE packet format is non-standard, including a new acknowledgement number field replacing the typical routing field in the GRE header. However, as in a normal GRE connection, those modified GRE packets are directly encapsulated into IP packets, and seen as IP protocol number 47. The GRE tunnel is used to carry encapsulated PPP packets, allowing the tunnelling of any protocols that can be carried within PPP, including IP, NetBEUI and IPX. In the Microsoft implementation, the tunneled PPP traffic can be authenticated with PAP, CHAP, or MS-CHAP v1/v2. Security PPTP has been the subject of many security analyses, and serious security vulnerabilities have been found in the protocol. 
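Before turning to the vulnerabilities, the enhanced GRE encapsulation described above can be sketched concretely. The following Python fragment is an illustrative layout of RFC 2637's modified GRE header in the common case where both a sequence number and an acknowledgement number are present; the function names are invented for the example, and a real implementation must handle the optional fields being absent:

```python
import struct

GRE_PROTO_PPP = 0x880B  # protocol type PPTP uses for PPP payloads in enhanced GRE

def pack_pptp_gre(payload: bytes, call_id: int, seq: int, ack: int) -> bytes:
    """Build an enhanced GRE packet as used by PPTP (illustrative sketch)."""
    flags = 0x20 | 0x10   # K (key present) | S (sequence number present)
    ver = 0x80 | 0x01     # A (acknowledgement present) | GRE version 1
    header = struct.pack("!BBHHHII",
                         flags, ver, GRE_PROTO_PPP,
                         len(payload),   # payload length
                         call_id,        # call ID replaces half of the GRE key
                         seq, ack)       # seq/ack replace the routing field
    return header + payload

def unpack_pptp_gre(packet: bytes) -> dict:
    """Parse the 16-byte header back out (assumes both seq and ack present)."""
    flags, ver, proto, length, call_id, seq, ack = struct.unpack("!BBHHHII", packet[:16])
    assert proto == GRE_PROTO_PPP
    return {"call_id": call_id, "seq": seq, "ack": ack,
            "payload": packet[16:16 + length]}
```

On the wire, this whole structure would then sit directly inside an IP packet with protocol number 47, as the text notes.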
The known vulnerabilities relate to the underlying PPP authentication protocols used, the design of the MPPE protocol, and the integration between MPPE and PPP authentication for session key establishment. A summary of these vulnerabilities is below: MS-CHAP-v1 is fundamentally insecure. Tools exist to trivially extract the NT password hashes from a captured MS-CHAP-v1 exchange. When using MS-CHAP-v1, MPPE uses the same RC4 session key for encryption in both directions of the communication flow. This can be cryptanalysed with standard methods by XORing the streams from each direction together. MS-CHAP-v2 is vulnerable to dictionary attacks on the captured challenge-response packets. Tools exist to perform this process rapidly. In 2012, it was demonstrated that the complexity of a brute-force attack on a MS-CHAP-v2 key is equivalent to a brute-force attack on a single DES key. An online service was also demonstrated which is capable of decrypting a MS-CHAP-v2 MD4 passp
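The bidirectional key-reuse flaw is easy to demonstrate. The sketch below implements textbook RC4 and shows that when the same keystream encrypts traffic in both directions, XORing the two ciphertexts cancels the keystream entirely, leaving the XOR of the two plaintexts; the key and messages are invented for the demonstration, and this is a toy setup rather than an MPPE implementation:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream (standard KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"shared-session-key"
p1, p2 = b"client->server", b"server->client"
# Both directions encrypted with the SAME keystream, as in MS-CHAP-v1 + MPPE:
c1 = xor(p1, rc4_keystream(key, len(p1)))
c2 = xor(p2, rc4_keystream(key, len(p2)))
# The keystream cancels, exposing the XOR of the plaintexts to an eavesdropper:
assert xor(c1, c2) == xor(p1, p2)
```

From the XOR of two plaintexts, standard cryptanalysis (crib dragging, known protocol headers) recovers both messages, which is why a stream-cipher key must never be reused.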
https://en.wikipedia.org/wiki/Rail%20transport%20in%20Germany
, Germany had a railway network of , of which were electrified and were double track. Germany is a member of the International Union of Railways (UIC). The UIC Country Code for Germany is 80. Germany was ranked fourth among national European rail systems in the 2017 European Railway Performance Index, which assesses intensity of use, quality of service and safety. Germany had a very good rating for intensity of use, by both passengers and freight, and good ratings for quality of service and safety. Germany also captured relatively high value in return for public investment, with cost-to-performance ratios that outperform the average for all European countries. Germany's rail freight volume of 117 billion ton-kilometers meant it carried 17.6% of all inland German cargo in 2015. Overview In 2018, railways in Germany transported the following amounts of passengers and freight. In 2014 (local passenger) and 2015 (other), there were the following numbers of railway cars in Germany. Deutsche Bahn, a state-owned company organized under private law, is the main provider of railway service. In recent years a number of competitors have entered the market. They mostly offer state-funded regional services, but some companies offer long-distance services as well. In 2016, Deutsche Bahn had a share of 67% of the regional railway market and 68.6% of the inland freight market. As of October 2016, there were 452 railway operators registered in Germany, among them 124 regional passenger operators, 20 long-distance operators, and 163 freight operators. In 2018, public-sector funding accounted for 25.6% of the cost of short-distance passenger transport, including all rail and bus services. The long-distance market generally does not require government funding. Special scheme In June, July and August 2022, there was a special ticket called the 9-Euro-Ticket, with which passengers could travel for 9 euros per month on local and regional transport throughout Germany. 
The initiative aimed to reduce energy consumption during the 2021–2022 global energy crisis and to ease the cost of living. Some criticized the scheme, saying it led to overcrowded trains at times. Long distance Deutsche Bahn services InterCity-Express – high-speed train, largely national but with some routes to the Netherlands, Belgium, Switzerland, Austria, France, and Denmark EuroCity – international long-distance trains InterCity – national long-distance trains EuroNight – international night trains InterRegio services, introduced in 1988 to replace the former Schnellzug and InterCity, were abolished in 2003. UrlaubsExpress, national night trains to the Alps and the Baltic Sea during vacation times, were abolished in 2007. Deutsche Bahn is gradually increasing the share of InterCity-Express services and downgrading the remaining InterCity services to the role formerly played by InterRegio. Other long distance services Thalys – high-speed services to Belgium and France, us
https://en.wikipedia.org/wiki/ADL
Adl is an Arabic word meaning justice. Adl or ADL may also refer to: Computing Action description language, a formal language for automatic planning systems Adventure Development Language, created by On-Line Systems Alder Lake series Intel CPUs Archetype definition language, as used in openEHR archetypes Architecture description language, a formal language for architecture description and representation Argument-dependent name lookup, a lookup for function names in the C++ programming language Assertion definition language, a specification language Organizations ADL astronomical society, Slovenian astronomical society Akademiska Damkören Lyran or The Academic Female Voice Choir Lyran, a Finnish choir Alexander Dennis Limited, bus manufacturer in Scotland Animal Defense League, animal rights organisation in North America Anti-Defamation League, a Jewish non-governmental organization based in the US Armenian Democratic Liberal Party, a political party Arthur D. Little, a management consulting firm Places Adelaide Airport, Australia, IATA airport code ADL A demonym for the city of Adelaide, South Australia, where the above airport is located Adlington (Lancashire) railway station, England, National Rail code ADL Other uses Activities of daily living, a term used in medicine and nursing, especially in the care of the elderly Advance–decline line, a stock market indicator Advanced Distributed Learning, part of an effort to standardize and modernize training and education management and delivery Arena Developmental League, American football league that became the National Arena League Gallong language, a Tibeto-Burman language of India New Zealand ADL class diesel electric unit, a type of diesel railway vehicle used on Auckland's suburban network A driver's license issued by a jurisdiction whose name begins with the letter A People Aurelio De Laurentiis, filmmaker and president of S.S.C. Napoli See also ADLS (disambiguation) ALD (disambiguation)
https://en.wikipedia.org/wiki/SSN
SSN may refer to: Broadcasting Setanta Sports News, a former 24-hour sports news network in the United Kingdom Sky Sports News, a 24-hour sports news network in the United Kingdom Soul of the South Network, an African-American oriented TV Network that launched May 28, 2013 Scoil an Spioraid Naoimh, a primary school in Cork City Ireland Scholars Strategy Network, an association of academics and researchers SSN College of Engineering, an engineering institution located in the suburbs of Chennai, Tamil Nadu, India Entertainment Tom Clancy's SSN, a 1996 game by Clancy Interactive Entertainment describing the operations of a U.S. Navy attack submarine SSN (novel), a 1996 novel by Tom Clancy based upon the game by the same name Government Servizio Sanitario Nazionale, Italy's national health service Social Security number, an identification number used by the U.S. Social Security Administration SSN (hull classification symbol), the United States Navy's hull classification symbol for a fast attack submarine, one that is propelled by nuclear energy Servicio Sismológico Nacional, the Mexican National Seismological Service, UNAM, Mexico South Sudan Superintendencia de Seguros de la Nación, a government agency of Argentina overseeing insurance companies Technology Secure Service Network, a network behind a firewall or IPS containing systems which can be accessed from both the internal and external networks, but cannot reach the internal network Semantic Sensor Network, an ontology describing sensors and observations, and related concepts SIM Serial Number, used to identify a mobile phone's SIM card Subsystem number, used in the Signaling Connection and Control Part of Signaling System #7 routing Surgical Segment Navigator, a system for computer-assisted surgery Other Space Surveillance Network, a multinational cooperative effort to monitor space activity and track earth-orbiting objects and satellites Species Survival Network, a coalition of conservation organizations The 
IATA code for Seoul Airbase Stoom Stichting Nederland, a Dutch Railway Museum The Seed Savers' Network, an international not-for-profit seed-saving organisation founded in Australia. Socialist Solidarity Network, a Trotskyist group in the UK Superior salivatory nucleus, a cranial nerve nucleus See also SN (disambiguation) SS (disambiguation)
https://en.wikipedia.org/wiki/Mutual%20Broadcasting%20System
The Mutual Broadcasting System (commonly referred to simply as Mutual; sometimes referred to as MBS, Mutual Radio or the Mutual Radio Network) was an American commercial radio network in operation from 1934 to 1999. In the golden age of U.S. radio drama, Mutual was best known as the original network home of The Lone Ranger and The Adventures of Superman and as the long-time radio residence of The Shadow. For many years, it was a national broadcaster for Major League Baseball (including the All-Star Game and World Series), the National Football League, and Notre Dame Fighting Irish football. From the mid-1930s until the network's retirement in 1999, Mutual ran a highly respected news service accompanied by a variety of popular commentary shows. Mutual pioneered the nationwide late-night call-in talk radio program in the late 1970s, introducing the country to Larry King and later Jim Bohannon. In the 1970s, acting in much the same style as rival ABC Radio had when it split its network in 1968, Mutual launched four sister radio networks: Mutual Black Network (MBN) (initially launched as "Mutual Reports"), which evolved into today's American Urban Radio Networks (AURN); Mutual Cadena Hispánica (in English, "Mutual Spanish Network"); Mutual Southwest Network; and Mutual Progressive Network (re-branded "Mutual Lifestyle Radio" in 1980, then retired in 1983). Of the four national networks of American radio's classic era, Mutual had for decades the largest number of affiliates but the least certain financial position (which prevented Mutual from expanding into television broadcasting after World War II, as NBC, CBS and ABC did). For the first 18 years of its existence, Mutual was owned and operated as a cooperative (a system similar to that of today's National Public Radio), setting the network apart from its corporate-owned competitors. 
Mutual's member stations shared their own original programming, transmission and promotion expenses, and advertising revenues. From December 30, 1936, when it debuted in the West, the Mutual Broadcasting System had affiliates from coast to coast. Its business structure would change after General Tire assumed majority ownership in 1952 through a series of regional and individual station acquisitions. Once General Tire sold the network in 1957 to a syndicate led by Dr. Armand Hammer, Mutual's ownership was largely disconnected from the stations it served, leading to a more conventional, top-down model of program production and distribution. Due to the multiple sales of the network that followed, Mutual was once described in Broadcasting magazine as "often traded". After a group that involved Hal Roach Studios purchased Mutual from Hammer's group, the new executive team was charged with accepting money to use Mutual as a vehicle for foreign propaganda while the network suffered significant financial losses and affiliate defections. Concurrently filing for Chapter 11 bankruptcy and sold twice in the span of
https://en.wikipedia.org/wiki/State%20diagram
A state diagram is a type of diagram used in computer science and related fields to describe the behavior of systems. State diagrams require that the system described is composed of a finite number of states; sometimes this is indeed the case, while at other times it is a reasonable abstraction. Many forms of state diagram exist, which differ slightly and have different semantics. Overview State diagrams are used to give an abstract description of the behavior of a system. This behavior is analyzed and represented by a series of events that can occur in one or more possible states. Here, "each diagram usually represents objects of a single class and track the different states of its objects through the system". State diagrams can be used to graphically represent finite-state machines (also called finite automata). This was introduced by Claude Shannon and Warren Weaver in their 1949 book The Mathematical Theory of Communication. Another source is Taylor Booth in his 1967 book Sequential Machines and Automata Theory. Another possible representation is the state-transition table. Directed graph A classic form of state diagram for a finite automaton (FA) is a directed graph with the following elements (Q, Σ, Z, δ, q0, F): Vertices Q: a finite set of states, normally represented by circles and labeled with unique designator symbols or words written inside them Input symbols Σ: a finite collection of input symbols or designators Output symbols Z: a finite collection of output symbols or designators The output function ω represents the mapping of ordered pairs of input symbols and states onto output symbols, denoted mathematically as ω : Σ × Q → Z. Edges δ: represent transitions from one state to another as caused by the input (identified by their symbols drawn on the edges). An edge is usually drawn as an arrow directed from the present state to the next state. This mapping describes the state transition that is to occur on input of a particular symbol. 
This is written mathematically as δ : Q × Σ → Q, so δ (the transition function) in the definition of the FA is given by both the pair of vertices connected by an edge and the symbol on an edge in a diagram representing this FA. Item δ(q, a) = p in the definition of the FA means that from the state named q under input symbol a, the transition to the state p occurs in this machine. In the diagram representing this FA, this is represented by an edge labeled by a pointing from the vertex labeled by q to the vertex labeled by p. Start state q0: (not shown in the examples below). The start state q0 ∈ Q is usually represented by an arrow with no origin pointing to the state. In older texts, the start state is not shown and must be inferred from the text. Accepting state(s) F: If used, for example for accepting automata, F ⊆ Q is the set of accepting states. It is usually drawn as a double circle. Sometimes the accept state(s) function as "Final" (halt, trapped) states. For a deterministic finite autom
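The tuple above translates directly into code. A minimal Python sketch of a deterministic finite automaton, here one accepting binary strings containing an even number of 1s (the output alphabet Z and output function ω are omitted for brevity, as in a plain accepting automaton):

```python
# DFA recognizing binary strings with an even number of 1s.
Q = {"even", "odd"}                 # finite set of states (the vertices)
SIGMA = {"0", "1"}                  # input alphabet
delta = {                           # transition function δ: Q × Σ → Q (the edges)
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
q0 = "even"                         # start state
F = {"even"}                        # accepting states (drawn as double circles)

def accepts(word: str) -> bool:
    """Run the DFA on a word: follow one labeled edge per input symbol."""
    state = q0
    for symbol in word:
        state = delta[(state, symbol)]
    return state in F               # accept iff we halt in an accepting state
```

Each entry of `delta` corresponds to exactly one labeled edge in the state diagram, and running `accepts` traces a path through the directed graph.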
https://en.wikipedia.org/wiki/Cisco%20IOS
The Internetworking Operating System (IOS) is a family of proprietary network operating systems used on several router and network switch models manufactured by Cisco Systems. The system is a package of routing, switching, internetworking, and telecommunications functions integrated into a multitasking operating system. Although the IOS code base includes a cooperative multitasking kernel, most IOS features have been ported to other kernels, such as Linux and QNX, for use in Cisco products. Not all Cisco networking products run IOS. Exceptions include some Cisco Catalyst switches, which run IOS XE, and Cisco ASR routers, which run either IOS XE or IOS XR; both are Linux-based operating systems. For data center environments, Cisco Nexus switches (Ethernet) and Cisco MDS switches (Fibre Channel) both run Cisco NX-OS, also a Linux-based operating system. History The IOS network operating system was created from code written by William Yeager at Stanford University, which was developed in the 1980s for routers with 256 kB of memory and low CPU processing power. Through modular extensions, IOS has been adapted to increasing hardware capabilities and new networking protocols. When IOS was developed, Cisco Systems' main product line was routers. The company acquired a number of young companies focused on network switches, such as Kalpana, inventor of the first Ethernet switch, and as a result Cisco switches did not initially run IOS. Prior to IOS, the Cisco Catalyst series ran CatOS. Command-line interface The IOS command-line interface (CLI) provides a fixed set of multiple-word commands. The set available is determined by the "mode" and the privilege level of the current user. "Global configuration mode" provides commands to change the system's configuration, and "interface configuration mode" provides commands to change the configuration of a specific interface. 
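As an illustration of how these modes nest, a typical CLI session might look like the transcript below. The prompt suffix signals the mode: ">" for User EXEC, "#" for Privileged EXEC, "(config)#" for global configuration, and "(config-if)#" for interface configuration. The interface name and address are illustrative and vary by platform:

```
Router> enable
Router# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address 192.0.2.1 255.255.255.0
Router(config-if)# no shutdown
Router(config-if)# end
Router#
```

The `enable` command moves from User EXEC to Privileged EXEC mode, `configure terminal` enters global configuration mode, naming an interface drops into interface configuration mode, and `end` returns to Privileged EXEC.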
All commands are assigned a privilege level, from 0 to 15, and can only be accessed by users with the necessary privilege. Through the CLI, the commands available to each privilege level can be defined. Most builds of IOS include a Tcl interpreter. Using the embedded event manager feature, the interpreter can be scripted to react to events within the networking environment, such as interface failure or periodic timers. Available command modes include: User EXEC Mode Privileged EXEC Mode Global Configuration Mode ROM Monitor Mode Setup Mode And more than 100 configuration modes and submodes. Architecture Cisco IOS has a monolithic architecture, owing to the limited hardware resources of routers and switches in the 1980s. This means that all processes have direct hardware access to conserve CPU processing time. There is no memory protection between processes, and IOS has a run-to-completion scheduler, which means that the kernel does not pre-empt a running process. Instead, the process must make a kernel call before other processes get a chance to run. IOS considers each pr
https://en.wikipedia.org/wiki/Software%20metric
In software engineering and development, a software metric is a standard of measure of the degree to which a software system or process possesses some property. Even though a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by applying metrics), the two terms are often used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is to obtain objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, software debugging, software performance optimization, and optimal personnel task assignments. Common software measurements Common software measurements include: ABC Software Metric Balanced scorecard Bugs per line of code Code coverage Cohesion Comment density Connascent software components Constructive Cost Model Coupling Cyclomatic complexity (McCabe's complexity) Cyclomatic complexity density Defect density - defects found in a component Defect potential - expected number of defects in a particular component Defect removal rate DSQI (design structure quality index) Function Points and Automated Function Points, an Object Management Group standard Halstead Complexity Instruction path length Maintainability index Source lines of code - number of lines of code Program execution time Program load time Program size (binary) Weighted Micro Function Points Cycle time (software) First pass yield Corrective Commit Probability Limitations As software development is a complex process, with high variance in both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to detailed design. 
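A few of the simpler metrics above can be computed mechanically. The following Python sketch estimates source lines of code, comment density, and a crude approximation of cyclomatic complexity (1 plus the number of decision points) for a small snippet; real metric tools use proper parsers rather than the line matching shown here:

```python
# Illustrative sample whose metrics we compute below.
source = '''\
def classify(n):
    # negative numbers are invalid
    if n < 0:
        return "invalid"
    if n % 2 == 0:
        return "even"
    return "odd"
'''

lines = [ln.strip() for ln in source.splitlines()]

# Source lines of code: non-blank, non-comment lines.
sloc = sum(1 for ln in lines if ln and not ln.startswith("#"))

# Comment density: comment lines as a fraction of all meaningful lines.
comments = sum(1 for ln in lines if ln.startswith("#"))
comment_density = comments / (sloc + comments)

# Rough cyclomatic complexity: one path, plus one per decision keyword.
decision_keywords = ("if ", "elif ", "for ", "while ")
cyclomatic = 1 + sum(ln.startswith(decision_keywords) for ln in lines)
```

For the sample function this yields 6 source lines, 1 comment line, and a cyclomatic complexity of 3 (the base path plus two `if` branches).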
Another source of difficulty and debate is in determining which metrics matter, and what they mean. The practical utility of software measurements has therefore been limited to the following domains: Scheduling Software sizing Programming complexity Software development effort estimation Software quality A specific measurement may target one or more of the above aspects, or the balance between them, for example as an indicator of team motivation or project performance. Additionally, metrics vary between static and dynamic program code, as well as for object-oriented software (systems). Acceptance and public opinion Some software development practitioners point out that simplistic measurements can cause more harm than good. Others have noted that metrics have become an integral part of the software development process. The impact of measurement on programmer psychology has raised concerns about harmful effects on performance due to stress, performance anxiety, and a
https://en.wikipedia.org/wiki/Dpkg
dpkg is the software at the base of the package management system in the free operating system Debian and its numerous derivatives. dpkg is used to install, remove, and provide information about .deb packages. dpkg (Debian Package) itself is a low-level tool. APT (Advanced Package Tool), a higher-level tool, is more commonly used than dpkg as it can fetch packages from remote locations and deal with complex package relations, such as dependency resolution. Frontends for APT, like aptitude (ncurses) and synaptic (GTK), are used for their friendlier interfaces. The Debian package "dpkg" provides the dpkg program, as well as several other programs necessary for run-time functioning of the packaging system, including dpkg-deb, dpkg-split, dpkg-query, dpkg-statoverride, dpkg-divert and dpkg-trigger. It also includes programs such as update-alternatives and start-stop-daemon. The install-info program used to be included as well, but was later removed as it is now developed and distributed separately. The Debian package "dpkg-dev" includes the numerous build tools described below. History The first attempt at a package management system was possibly the development of StopAlop by Greg Wettstein at the Roger Maris Cancer Center in Fargo, North Dakota. It provided inspiration for the creation of dpkg. dpkg was originally created by Ian Murdock in January 1994 as a shell script. Matt Welsh, Carl Streeter and Ian Murdock then rewrote it in Perl, and later the main part was rewritten in C by Ian Jackson in 1994. The name dpkg was originally a shortening of "Debian package", but the meaning of that phrase has evolved significantly, as dpkg the software is orthogonal to the deb package format as well as to the Debian Policy Manual, which defines how Debian packages behave in Debian. Example use To install a .deb package: dpkg -i filename.deb where filename.deb is the name of the Debian package (such as pkgname_0.00-1_amd64.deb). 
The list of installed packages can be obtained with: dpkg -l [optional pattern] To remove an installed package: dpkg -r packagename Development tools dpkg-dev contains a series of development tools required to unpack, build and upload Debian source packages. These include: dpkg-source packs and unpacks the source files of a Debian package. dpkg-gencontrol reads information from an unpacked Debian source tree and generates a binary package control file, creating an entry for it in debian/files. dpkg-shlibdeps calculates executable dependencies with respect to shared libraries. dpkg-genchanges reads information from an unpacked and built Debian source tree and generates an upload control file (.changes). dpkg-buildpackage is a control script that can be used to build the package automatically. dpkg-distaddfile adds a file entry to debian/files. dpkg-parsechangelog reads the changelog of an unpacked Debian source tree and creates a conveniently prepared output with the information for those c
https://en.wikipedia.org/wiki/Lifespan
Lifespan or life span may refer to: Lifespan (film), 1976 film starring Klaus Kinski Lifespan (video game), 1983 Atari 8-bit computer game Lifespan (album), 2004 album by Kris Davis Lifespan: Why We Age - and Why We Don't Have To, 2019 book by David Andrew Sinclair Lifespan.io, non-profit crowdfunding platform of the Lifespan Extension Advocacy Foundation See also Maximum life span, the maximum lifespan observed in a group Life expectancy, the average lifespan expected of a group Longevity, the average lifespan expected under ideal conditions Lifetime (disambiguation)
https://en.wikipedia.org/wiki/Serving%20channel
A serving channel (sometimes called a depot channel) is a slang term for a file sharing channel found on an IRC network. Here, users can share and download files including photos, videos, audio files, books, programs, etc. Users that actively share their files are generally referred to as 'servers', whereas users that download without sharing their own files are generally referred to as 'leeches'. While serving normally implies pirated or questionable material, some channels are used for fully legitimate purposes. There are two styles of servers: Fserves, and serving scripts like OmenServe. Fserve type channels Using an Fserve script, a server is set up like an FTP server. Using CTCP commands and server triggers, a user can initiate a connection with the server. Once connected, the user is given access to the server's file archive. ex.: "/CTCP <username> <trigger>" Searching and requesting with Fserves Once a leech has gained access to a server's Fserve, they can navigate through folders using commands similar to DOS. Once inside a folder, the user can retrieve a listing of the files found there. ex.: "cd <foldername>" & "dir" (to display files) To request a file, the user enters a filename from the folder listing, along with the "get" command. ex.: "get <filename.ext>" Serving script type channels Using a serving script, servers can send files directly to another user using remote commands. The serving script compiles a listing of available files, and also listens for a leech to request a file. Serving scripts also allow a user to search all of the servers in a channel at the same time with a single command. Searching and requesting with serving scripts A user initiates a search by typing a 'search command' followed by a 'search string' within the channel window. Various search commands exist, including '@find', '@search', and '@seek', depending on which serving script is being used. 
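A toy sketch of the server side of such a search: the file names and the exact matching rules below are invented for illustration, but the core idea, matching a wildcard pattern from an '@find'-style query against the server's advertised file list, is what a serving script does:

```python
import fnmatch

# Hypothetical file list a serving script might advertise.
served_files = [
    "The_Art_of_Unix_Programming.pdf",
    "holiday_photos_2003.zip",
    "meeting_notes_jan.txt",
]

def find(search_string: str) -> list:
    """Emulate an '@find'-style lookup: case-insensitive substring/wildcard match."""
    pattern = "*" + search_string.lower() + "*"
    return [f for f in served_files if fnmatch.fnmatch(f.lower(), pattern)]
```

A request such as "@find unix" would then return the matching file names to the leech's query window, from which a file can be requested by name.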
Wildcard characters such as * can also be used in the search string to simplify a search. The search command will then return a list of files to the user's query window if any servers have a file that matches the search string. ex.: "@find <keyword>" If there are any matches for the user's search string, the next step is to request those files from the server. The user can copy and paste the returned match, along with a short trigger command, from the query window directly into the channel window. The request is then placed in a file queue within the serving script, and downloaded on a first-come, first-served basis. ex.: "!<username> <filename.ext>" Users also have the ability to download the complete archive of a server's available files, commonly called a "list" due to the .txt format that the script's output code creates. To request a server's list, there is a separate 'list trigger' used. ex.: "@<username>" See also DCC XDCC File sharing Peer-to-peer Peer-to-peer file sharing Internet Rel
https://en.wikipedia.org/wiki/CTAN
CTAN (an acronym for "Comprehensive TeX Archive Network") is the authoritative place where TeX-related material and software can be found for download. Repositories for other projects, such as the MiKTeX distribution of TeX, constantly mirror most of CTAN. History Before CTAN, a number of people made TeX materials available for public download, but there was no systematic collection. At a podium discussion that Joachim Schrod organized at the 1991 EuroTeX conference, the idea arose to bring the separate collections together. (Joachim was interested in this topic because he had been active in the TeX community since 1983 and ran one of the largest FTP servers in Germany at that time.) CTAN was built in 1992 by Rainer Schöpf and Joachim Schrod in Germany, Sebastian Rahtz in the UK, and George Greenwade in the U.S. (George came up with the name). Today, there are still only four people who maintain the archives and the TeX catalogue updates: Erik Braun, Ina Dau, Manfred Lotz, and Petra Ruebe-Pugliese. The site structure was put together at the start of 1992 – Sebastian did the main work – and synchronized at the start of 1993. The TeX Users Group provided a framework, a Technical Working Group, for organizing this task. CTAN was officially announced at the EuroTeX conference at Aston University in 1993. The web server itself is maintained by Gerd Neugebauer. The English site has been stable since the beginning, but the American and German sites have each moved three times. The American site was first at Sam Houston State University under George Greenwade; in 1995 it moved to UMass Boston, where it was run by Karl Berry. In 1999 it moved to Saint Michael's College in Colchester, Vermont, where it was announced that the site would go offline at the end of January 2011. Since January 2013, a mirror has been hosted by the University of Utah (no upload node). 
The German site was first at the University of Heidelberg, operated by Rainer; in 1999 it moved to the University of Mainz, also operated by Rainer; in 2002 to the University of Hamburg, operated by Reinhard Zierke; and finally in 2005 it moved to a commercial hosting company, because the volume of traffic had grown too high to be sponsored by a university. The German site is subsidized by DANTE, the Deutschsprachige Anwendervereinigung TeX. Today, the main CTAN nodes serve downloads of more than 6 TB per month, not counting the 94 mirror sites worldwide. See also CPAN CRAN CEAN CKAN References External links The TeX Catalogue Online TeX Archive networks
https://en.wikipedia.org/wiki/Feature
Feature may refer to: Computing Feature recognition, could be a hole, pocket, or notch Feature (computer vision), could be an edge, corner or blob Feature (software design) is an intentional distinguishing characteristic of a software item (in performance, portability, or—especially—functionality) Feature (machine learning), in statistics: individual measurable properties of the phenomena being observed Science and analysis Feature data, in geographic information systems, comprise information about an entity with a geographic location Features, in audio signal processing, an aim to capture specific aspects of audio signals in a numeric way Feature (archaeology), any dug, built, or dumped evidence of human activity Media Feature film, a film with a running time long enough to be considered the principal or sole film to fill a program Feature length, the standardized length of such films Feature story, a piece of non-fiction writing about news Radio documentary (feature), a radio program devoted to covering a particular topic in some depth, usually with a mixture of commentary and sound pictures A feature as a guest appearance Music Feature (band), a British punk trio. The Features, an American rock band Linguistics Feature (linguistics), a property of a class of linguistic items which describes individual members of that class Distinctive feature, the most basic unit of structure that can be analyzed by phonetics and phonology Other uses The Feature, a film collaboration between filmmakers Michel Auder and Andrew Neel The Feature (originally named Give Me Something to Read), a standalone website that features a few high-quality, long-form, nonfiction articles every day from Instapaper's most frequently saved articles See also Featurette
https://en.wikipedia.org/wiki/Percolation%20theory
In statistical physics and mathematics, percolation theory describes the behavior of a network when nodes or links are added. This is a geometric type of phase transition, since at a critical fraction of addition the network's small, disconnected clusters merge into significantly larger connected, so-called spanning clusters. The applications of percolation theory to materials science and in many other disciplines are discussed here and in the articles Network theory and Percolation (cognitive psychology). Introduction A representative question (and the source of the name) is as follows. Assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? This physical question is modelled mathematically as a three-dimensional network of vertices, usually called "sites", in which the edges or "bonds" between each two neighbors may be open (allowing the liquid through) with probability p, or closed with probability 1 − p, and they are assumed to be independent. Therefore, for a given p, what is the probability that an open path (meaning a path, each of whose links is an "open" bond) exists from the top to the bottom? The behavior for large n is of primary interest. This problem, now called bond percolation, was introduced in the mathematics literature by Broadbent and Hammersley (1957), and has been studied intensively by mathematicians and physicists since then. In a slightly different mathematical model for obtaining a random graph, a site is "occupied" with probability p or "empty" (in which case its edges are removed) with probability 1 − p; the corresponding problem is called site percolation. The question is the same: for a given p, what is the probability that a path exists between top and bottom? Similarly, one can ask, given a connected graph, at what fraction 1 − p of failures the graph will become disconnected (no large component). The same questions can be asked for any lattice dimension. 
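The top-to-bottom crossing question above can be simulated directly. A minimal sketch (function names are illustrative, not from any standard library): site percolation on an n × n square grid, with a breadth-first search testing whether an open path connects the top row to the bottom row.

```python
import random
from collections import deque

def percolates(n, p, seed=None):
    """Site percolation on an n x n grid: each site is open with
    probability p; return True if an open path connects the top
    row to the bottom row (4-neighbour connectivity)."""
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    # Breadth-first search starting from every open site in the top row.
    queue = deque((0, c) for c in range(n) if open_site[0][c])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True          # reached the bottom row
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n
                    and open_site[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def crossing_probability(n, p, trials=200):
    """Estimate the probability of a top-to-bottom crossing at a given p."""
    return sum(percolates(n, p, seed=t) for t in range(trials)) / trials
```

Plotting `crossing_probability` against p for a moderately sized grid shows the sharp rise near the critical value described below.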
As is quite typical, it is actually easier to examine infinite networks than just large ones. In this case the corresponding question is: does an infinite open cluster exist? That is, is there a path of connected points of infinite length "through" the network? By Kolmogorov's zero–one law, for any given p, the probability that an infinite cluster exists is either zero or one. Since this probability is an increasing function of p (proof via coupling argument), there must be a critical p (denoted by pc) below which the probability is always 0 and above which the probability is always 1. In practice, this criticality is very easy to observe. Even for n as small as 100, the probability of an open path from the top to the bottom increases sharply from very close to zero to very close to one in a short span of values of p. History The Flory–Stockmayer theory was the first theory investigating percolation processes. The history of the percolation model as we know it has its root in the coal industry. Since the industrial revoluti
https://en.wikipedia.org/wiki/End-to-end
End-to-end or End to End may refer to: End-to-end auditable voting systems, a voting system End-to-end delay, the time for a packet to be transmitted across a network from source to destination End-to-end encryption, a cryptographic paradigm involving uninterrupted protection of data traveling between two communicating parties End-to-end data integrity End-to-end principle, a principal design element of the Internet End-to-end reinforcement learning End-to-end vector, points from one end of a polymer to the other end Land's End to John o' Groats, the journey from "End to End" across Great Britain End-to-end testing (see also: Verification and validation) See also E2E (disambiguation) Point-to-point (telecommunications)
https://en.wikipedia.org/wiki/Transport%20Layer%20Security
Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible. The TLS protocol aims primarily to provide security, including privacy (confidentiality), integrity, and authenticity through the use of cryptography, such as the use of certificates, between two or more communicating computer applications. It runs in the presentation layer and is itself composed of two layers: the TLS record and the TLS handshake protocols. The closely related Datagram Transport Layer Security (DTLS) is a communications protocol that provides security to datagram-based applications. In technical writing, references to "(D)TLS" are often seen when it applies to both versions. TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999, and the current version is TLS 1.3, defined in August 2018. TLS builds on the now-deprecated SSL (Secure Sockets Layer) specifications (1994, 1995, 1996) developed by Netscape Communications for adding the HTTPS protocol to their Navigator web browser. Description Client-server applications use the TLS protocol to communicate across a network in a way designed to prevent eavesdropping and tampering. Since applications can communicate either with or without TLS (or SSL), it is necessary for the client to request that the server set up a TLS connection. One of the main ways of achieving this is to use a different port number for TLS connections. Port 80 is typically used for unencrypted HTTP traffic while port 443 is the common port used for encrypted HTTPS traffic. Another mechanism is to make a protocol-specific STARTTLS request to the server to switch the connection to TLS – for example, when using the mail and news protocols. 
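The port-number convention above can be seen in a client sketch using Python's standard ssl module (the hostname and helper name are illustrative): the TCP connection is made to port 443 and then wrapped in TLS, with certificate and hostname validation enabled by the default context.

```python
import socket
import ssl

def open_tls(hostname: str, port: int = 443, timeout: float = 10.0) -> ssl.SSLSocket:
    """Connect over TCP, then upgrade the socket to TLS.

    ssl.create_default_context() validates the server certificate
    against the system trust store and checks the server's hostname,
    so a tampered or impersonated endpoint is rejected during the
    handshake.  The caller is responsible for closing the socket."""
    context = ssl.create_default_context()
    sock = socket.create_connection((hostname, port), timeout=timeout)
    return context.wrap_socket(sock, server_hostname=hostname)

# Example (requires network access):
#   tls = open_tls("www.example.org")
#   print(tls.version())   # e.g. 'TLSv1.3'
#   tls.close()
```

The default context is the important design choice here: it turns on exactly the eavesdropping and tampering protections the protocol is meant to provide, rather than leaving verification to the caller.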
Once the client and server have agreed to use TLS, they negotiate a stateful connection by using a handshaking procedure (see below). The protocols use a handshake with an asymmetric cipher to establish not only cipher settings but also a session-specific shared key with which further communication is encrypted using a symmetric cipher. During this handshake, the client and server agree on various parameters used to establish the connection's security: The handshake begins when a client connects to a TLS-enabled server requesting a secure connection and the client presents a list of supported cipher suites (ciphers and hash functions). From this list, the server picks a cipher and hash function that it also supports and notifies the client of the decision. The server usually then provides identification in the form of a digital certificate. The certificate contains the server name, the trusted certificate authority (CA) that vouches for the authenticity of the certificate, and the server's public encryption key. The client confirms the validity of the certifica
https://en.wikipedia.org/wiki/DL
DL, dL, or dl may stand for: In science and technology In electronics and computing , an HTML element used for a definition list Deep learning, a branch of algorithm-based machine learning Description logics, a family of knowledge representation languages Delete Line (ANSI), an ANSI X3.64 escape sequence Digital library, a library in which collections are stored in digital formats Diode logic, a logic family using diodes DVD-R DL, a DVD Dual Layer engineering method DL register, the low byte of an X86 16-bit DX register Dynamic loading, a mechanism for a computer program to load a library In telecommunications Data link, a computer connection for transmitting data Distribution list, a function of e-mail clients Downlink, the link from a satellite to a ground station Download, a transfer of electronic data Vehicles Subaru DL, an automobile Australian National DL class, a class of diesel locomotives built by Clyde Engineering New Zealand DL class locomotive, a diesel-electric class built by Dalian Locomotive and Rolling Stock Company Destroyer leader, a large naval vessel capable of leading a flotilla Other uses in science and technology Dimensionless quantity (dl, in lower case), the 'per unit' system of measurement Decilitre (or deciliter, dL), a unit of measurement of capacity or volume Discrete logarithms, in mathematics Distance learning, Internet-based education HPE ProLiant DL, density line servers Dextrorotation and levorotation or D/L nomenclature, used in naming chemical compounds In arts and entertainment D.L. (play), a Bulgarian play DL series, a series of adventures and some supplementary material for the Advanced Dungeons & Dragons role playing game D. L. Hawkins, fictional character on the American television series Heroes D. L. 
Hughley (born 1963), American actor and comedian Directors Lounge, an ongoing Berlin-based film and media-art platform and/or its annual film festival Dominant leittonwechselklänge In business Dai-ichi Life, an insurance company Delta Air Lines (IATA airline code: DL) Railroads Delaware–Lackawanna Railroad in Pennsylvania, U.S. District line, a London Underground line Places DL postcode area, Darlington, north-east England County Donegal, Ireland Delanggu railway station, a railway station in Indonesia (station code) Detroit Lakes, Minnesota, US In politics Democracy is Freedom – The Daisy, a former political party in Italy Drinking Liberally, a social organization that discusses politics in bars Liberal Democracy (France) (Démocratie Libérale), a former political party in France Rights and Freedom, a political party in Italy In sports Defensive lineman, an American/Canadian football player position Disabled list, the former name of the injured list, a list of injured baseball players Duckworth–Lewis method, a mathematical way to calculate the target score in cricket Deadlift, a powerlifting move Detroit Lions, a football team in the NFL Other uses Deputy Lieutenant, a British title Diaper lo
https://en.wikipedia.org/wiki/WNBC%20%28disambiguation%29
WNBC (channel 4) is the flagship station of the NBC television network, located in New York City. WNBC may also refer to: WFAN, a radio station (660 AM) in New York City, which held the call sign WNBC from 1946 to 1954 and from 1960 to 1988 WQHT, a radio station (97.1 FM) in New York City, which held the call sign WNBC-FM from 1946 to 1954 and from 1960 to 1975 WVIT, a television station (channel 30) in New Britain, Connecticut, which held the call sign WNBC from 1956 to 1960 WPOP, a radio station (1410 AM) in Hartford, Connecticut, which held the call sign WNBC from 1935 to 1944
https://en.wikipedia.org/wiki/Control%20key
In computing, a Control key is a modifier key which, when pressed in conjunction with another key, performs a special operation (for example, Ctrl-C). Similarly to the Shift key, the Control key rarely performs any function when pressed by itself. The Control key is located on or near the bottom left side of most keyboards (in accordance with the international standard ISO/IEC 9995-2), with many featuring an additional one at the bottom right. On keyboards that use English abbreviations for key labeling, it is usually labeled Ctrl (Control or Ctl are sometimes used, but this is uncommon). Abbreviations in the language of the keyboard layout also are in use, e.g., the German keyboard layout uses Strg as required by the German standard DIN 2137:2012-06. Also, there is a standardized keyboard symbol (to be used when Latin lettering is not preferred), given in ISO/IEC 9995-7 as symbol 26, and in ISO 7000 "Graphical symbols for use on equipment" as symbol ISO-7000-2028. This symbol is encoded in Unicode as U+2388 (⎈). History On teletypewriters and computer terminals, holding down the Control key while pressing another key would send an ASCII C0 control character, instead of directly reporting a key press to the system. The control characters were used as non-printing characters that signal the terminal or teletypewriter to perform a special action, such as ringing a bell, ejecting a page or erasing the screen, or controlling where the next character will display. The first 32 ASCII characters are the control characters, representable by a 5-bit binary number. Because ASCII characters were represented as 7 bits, if a key was pressed while the Control key was held down, teletypewriters and terminals would simply clear the two high-order bits of the character, converting it into a control character. For example, the character "a" has a binary ASCII code of 110 0001. This code would be converted to 000 0001, corresponding to the ASCII character with code 1 (the SOH character). 
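The bit-clearing behaviour described above amounts to a one-line mask. A sketch (the helper name is illustrative):

```python
def control_char(key: str) -> int:
    """ASCII code a teletypewriter would send for Control plus `key`:
    clear the two high-order bits of the 7-bit code, i.e. keep only
    the low five bits (0x1F mask)."""
    return ord(key) & 0b0011111

# 'a' = 110 0001 -> 000 0001, the SOH character (code 1).
```

Control-C thus yields code 3 (ETX), which some command-line interfaces interpret as an interrupt, and the mask makes Control-a and Control-A equivalent, since 'a' and 'A' differ only in a high bit.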
The standard table of ASCII control characters has a "Caret notation" column, showing a caret (^) followed by the character to press while the Control key is held down to generate the character. If a teletypewriter or terminal is connected to a computer, the software on the computer can interpret the control characters it receives in whatever way it is written to do; a given control character can be interpreted differently from how it would be interpreted by a teletypewriter or terminal that receives it. For example, Control-C, received from a teletypewriter or terminal, is interpreted as "interrupt the current program" in some command-line interfaces, and Control-E is interpreted by the Emacs text editor as "move the editor cursor to the end of the line". Computer keyboards directly attached to a computer, as is the case for a personal computer or workstation, distinguish each physical key from every other and report all keypresses and releases to the controlling software. This allows the software
https://en.wikipedia.org/wiki/WPIX
WPIX (channel 11) is a television station in New York City, serving as the de facto flagship of The CW Television Network. Owned by Mission Broadcasting, the station is operated by CW majority owner Nexstar Media Group under a local marketing agreement (LMA). Since its inception in 1948, WPIX's studios and offices have been located in the Daily News Building on East 42nd Street (also known as "11 WPIX Plaza") in Midtown Manhattan. The station's transmitter is located at the Empire State Building. WPIX is also available as a regional superstation via satellite and cable in the United States and Canada. It is the largest Nexstar-operated station by population of market size. History As an independent station (1948–1995) The station first signed on the air on June 15, 1948; it was the fifth television station to sign on in New York City and was the market's second independent station. It was also the second of three stations to launch in the New York market during 1948, debuting one month after Newark, New Jersey–based independent WATV (channel 13, now WNET) and two months before WJZ-TV (channel 7, now WABC-TV). WPIX's call letters come from the slogan of the newspaper which founded the station, the New York Daily News, whose slogan was "New York's Picture Newspaper". The Daily News's partial corporate parent was the Chicago-based Tribune Company, publishers of the Chicago Tribune. Until becoming owned outright by Tribune in 1991, WPIX operated separately from the company's other television and radio outlets (including WGN-TV in Chicago, which signed on two months before WPIX in April 1948) through the News-owned license holder, WPIX, Incorporated – which in 1963, purchased New York radio station WBFM (101.9 FM) and soon changed that station's call letters to WPIX-FM. British businessman Robert Maxwell bought the Daily News in 1991. Tribune retained WPIX and WQCD; the radio station was sold to Emmis Communications in 1997 (it is now WFAN-FM). 
WPIX initially featured programming that was standard among independents: children's programs, movies, syndicated reruns of network programs, public affairs programming, religious programs and sports – specifically, the New York Yankees, whose baseball games WPIX carried from 1951 to 1998. To generations of New York children, channel 11 was also the home of memorable personalities. In 1955, original WPIX staffer and weather forecaster Joe Bolton donned a policeman's uniform and became "Officer Joe," hosting several programs based around Little Rascals, Three Stooges, and later Popeye shorts. Another early WPIX personality, Jack McCarthy, also hosted Popeye and Dick Tracy cartoons as "Captain Jack" in the early 1960s, though he was also the longtime host of channel 11's St. Patrick's Day parade coverage from 1949 to 1992. WPIX aired a local version of Bozo the Clown (with Bill Britten in the role) from 1959 to 1964; comic performers Chuck McCann and Allen Swift also hosted programs on WPIX during the mid-1
https://en.wikipedia.org/wiki/Arc%20%28programming%20language%29
Arc is a programming language, a dialect of the language Lisp, developed by Paul Graham and Robert Morris. It is free and open-source software released under the Artistic License 2.0. History In 2001, Paul Graham announced that he was working on a new dialect of Lisp named Arc. Over the years since, he has written several essays describing features or goals of the language, and some internal projects at Graham's startup business incubator named Y Combinator have been written in Arc, most notably the Hacker News web forum and news aggregator program. Arc is written in Racket. Motives In the essay "Being Popular", Graham describes a few of his goals for the language. While many of the goals are very general ("Arc should be hackable", "there should be good libraries"), he did give some specifics. For example, he believes it is important for a language to be terse: He also stated that it is better for a language to only implement a small number of axioms, even when that means the language may not have features that large organizations want, such as object-orientation (OO). Further, Graham thinks that OO is not useful as its methods and patterns are just "good design", and he views the language features used to implement OO as partly mistaken. At Arc's introduction in 2008, Graham stated one of its benefits was its brevity. A controversy among Lisp programmers is whether, and how much, the s-expressions of the language should be complemented by other forms of syntax. Graham thinks that added syntax should be used in situations where pure s-expressions would be overly verbose, saying, "I don't think we should be religiously opposed to introducing syntax into Lisp." Graham also thinks that efficiency problems should be solved by giving the programmer a good profiler. Reception When released in 2008, Arc generated mixed reactions, with some calling it simply an extension to Lisp or Scheme and not a programming language in its own right. 
Others applauded Arc for stripping Lisp down to bare essentials. Shortly after its release, Arc was ported to JavaScript, and was being supported by Schemescript, an integrated development environment (IDE) based on Eclipse. Examples Hello world in Arc: (prn "Hello, World") To illustrate Arc's terseness, Graham uses a brief program. It produces a form with one field at the URL "/said". When the form is submitted, it leads to a page with a link that says "click here", which then leads to a page with the value of the original input field. (defop said req (aform [onlink "click here" (pr "you said: " (arg _ "foo"))] (input "foo") (submit))) Versions Official version The first publicly released version of Arc was made available on 29 January 2008, implemented on Racket (named PLT-Scheme then). The release comes in the form of a .tar archive, containing the Racket source code for Arc. A tutorial and a discussion forum are also available. The forum uses the same program that Hacker News does, and is writte
https://en.wikipedia.org/wiki/NSD
In Internet computing, NSD (for "name server daemon") is an open-source Domain Name System (DNS) server. It was developed by NLnet Labs of Amsterdam in cooperation with the RIPE NCC, from scratch as an authoritative name server (i.e., not implementing the recursive caching function by design). The intention of this development is to add variance to the "gene pool" of DNS implementations used by higher level name servers and thus increase the resilience of DNS against software flaws or exploits. NSD uses BIND-style zone-files (zone-files used under BIND can usually be used unmodified in NSD, once entered into the NSD configuration). NSD uses zone information compiled via zonec into a binary database file (nsd.db) which allows fast startup of the NSD name-service daemon, and allows syntax-structural errors in Zone-Files to be flagged at compile-time (before being made available to NSD service itself). The collection of programs/processes that make up NSD are designed so that the NSD daemon itself runs as a non-privileged user and can be easily configured to run in a Chroot jail, such that security flaws in the NSD daemon are not so likely to result in system-wide compromise as without such measures. As of May 2018, four of the Internet root nameservers are using NSD: k.root-servers.net was switched to NSD on February 19, 2003. One of the two load-balanced servers for h.root-servers.net (called "H1", "H2") was switched to NSD, and now there are 3 servers all running NSD (called "H1", "H2", "H3"). l.root-servers.net switched to NSD on February 6, 2007. d.root-servers.net was switched to NSD in May 2018. Several other TLDs use NSD for part of their servers. See also Unbound, a recursive DNS server, also developed by NLnet Labs Comparison of DNS server software References External links NSD License NSD DNS Tutorial with examples and explanations DNS software Free network-related software DNS server software for Linux Software using the BSD license
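As noted above, zone files written for BIND can typically be loaded by NSD unchanged. A minimal illustrative zone file (all names and addresses are placeholders drawn from the reserved documentation ranges):

```
; example.com zone -- BIND-style syntax, usable by both BIND and NSD
$TTL 86400
@       IN  SOA ns1.example.com. hostmaster.example.com. (
            2024010101 ; serial
            7200       ; refresh
            3600       ; retry
            1209600    ; expire
            3600 )     ; negative-caching TTL
        IN  NS  ns1.example.com.
ns1     IN  A   192.0.2.1
www     IN  A   192.0.2.80
```

Under the NSD versions described here, such a file would be compiled with zonec into nsd.db before the daemon serves it, which is when any syntax errors are flagged.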
https://en.wikipedia.org/wiki/Reconfigurable%20computing
Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the datapath itself in addition to the control flow. On the other hand, the main difference from custom hardware, i.e. application-specific integrated circuits (ASICs) is the possibility to adapt the hardware during runtime by "loading" a new circuit on the reconfigurable fabric. History The concept of reconfigurable computing has existed since the 1960s, when Gerald Estrin's paper proposed the concept of a computer made of a standard processor and an array of "reconfigurable" hardware. The main processor would control the behavior of the reconfigurable hardware. The latter would then be tailored to perform a specific task, such as image processing or pattern matching, as quickly as a dedicated piece of hardware. Once the task was done, the hardware could be adjusted to do some other task. This resulted in a hybrid computer structure combining the flexibility of software with the speed of hardware. In the 1980s and 1990s there was a renaissance in this area of research with many proposed reconfigurable architectures developed in industry and academia, such as: Copacobana, Matrix, GARP, Elixent, NGEN, Polyp, MereGen, PACT XPP, Silicon Hive, Montium, Pleiades, Morphosys, and PiCoGA. Such designs were feasible due to the constant progress of silicon technology that let complex designs be implemented on one chip. Some of these massively parallel reconfigurable computers were built primarily for special subdomains such as molecular evolution, neural or image processing. The world's first commercial reconfigurable computer, the Algotronix CHS2X4, was completed in 1991. 
It was not a commercial success, but was promising enough that Xilinx (the inventor of the Field-Programmable Gate Array, FPGA) bought the technology and hired the Algotronix staff. Later machines enabled first demonstrations of scientific principles, such as the spontaneous spatial self-organisation of genetic coding with MereGen. Theories Tredennick's Classification The fundamental model of the reconfigurable computing machine paradigm, the data-stream-based anti machine is well illustrated by the differences to other machine paradigms that were introduced earlier, as shown by Nick Tredennick's following classification scheme of computing paradigms (see "Table 1: Nick Tredennick's Paradigm Classification Scheme"). Hartenstein's Xputer Computer scientist Reiner Hartenstein describes reconfigurable computing in terms of an anti-machine that, according to him, represents a fundamental paradigm shift away from the more conventional von Neumann machine. Hartenstein calls it Reconfigurable Computing Paradox, that software-to-co
https://en.wikipedia.org/wiki/List%20of%20file%20formats
This is a list of file formats used by computers, organized by type. The filename extension is usually noted in parentheses if it differs from the file format name or abbreviation. Many operating systems do not limit filenames to one extension shorter than 4 characters, as was common with some operating systems that supported the File Allocation Table (FAT) file system. Examples of operating systems that do not impose this limit include Unix-like systems, and Microsoft Windows NT, 95-98, and ME which have no three character limit on extensions for 32-bit or 64-bit applications on file systems other than pre-Windows 95 and Windows NT 3.5 versions of the FAT file system. Some filenames are given extensions longer than three characters. While MS-DOS and NT always treat the suffix after the last period in a file's name as its extension, in UNIX-like systems, the final period does not necessarily mean that the text after the last period is the file's extension. Some file formats, such as .txt or .text, may be listed multiple times. Archive and compressed .?Q? – files that are compressed, often by the SQ program. 7z – 7z: 7-Zip compressed file A – An external file extension for C/C++ AAC – Advanced Audio Coding ace – ace: ACE compressed file ALZ – ALZip compressed file APK – Android package: Applications installable on Android; package format of the Alpine Linux distribution APPX – Microsoft Application Package (.appx) APP - HarmonyOS APP Packs file format for HarmonyOS apps installable from AppGallery and third party OpenHarmony based app distribution stores. AT3 – Sony's UMD data compression ARC – ARC: pre-Zip data compression ARC – Nintendo U8 Archive (mostly Yaz0 compressed) ARJ – ARJ compressed file ASS, SSA – ASS (also SSA): a subtitles file created by Aegisub, a video typesetting application (also a Halo game engine file) B – (B file) Similar to .a, but less compressed. 
BA – BA: Scifer Archive (.ba), Scifer External Archive Type BIN – compressed archive, can be read and used by CD-ROMs and Java, extractable by 7-zip and WINRAR .bkf – Microsoft backup created by NTBackup Blend – An external 3D file format used by the animation software, Blender. .bz2 – bzip2 BMP – Bitmap image cab – A cabinet (.cab) file is a library of compressed files stored as one file. Cabinet files are used to organize installation files that are copied to the user's system. c4 – JEDMICS image files, a DOD system cals – JEDMICS image files, a DOD system xaml – Extensible Application Markup Language, an XML-based markup used by programs such as Visual Studio to describe user interfaces CPT, SEA – Compact Pro (Macintosh) DAA – DAA: Closed-format, Windows-only compressed disk image deb – deb: Debian install package DMG – an Apple compressed/encrypted format DDZ – a file which can only be used by the "daydreamer engine" created by "fever-dreamer", a program similar to RAGS, it's mainly used to make somewhat short
https://en.wikipedia.org/wiki/ZIP%20%28file%20format%29
ZIP is an archive file format that supports lossless data compression. A ZIP file may contain one or more files or directories that may have been compressed. The ZIP file format permits a number of compression algorithms, though DEFLATE is the most common. This format was originally created in 1989 and was first implemented in PKWARE, Inc.'s PKZIP utility, as a replacement for the previous ARC compression format by Thom Henderson. The ZIP format was then quickly supported by many software utilities other than PKZIP. Microsoft has included built-in ZIP support (under the name "compressed folders") in versions of Microsoft Windows since 1998 via the "Plus! 98" addon for Windows 98. Native support was added as of the year 2000 in Windows ME. Apple has included built-in ZIP support in Mac OS X 10.3 (via BOMArchiveHelper, now Archive Utility) and later. Most free operating systems have built in support for ZIP in similar manners to Windows and Mac OS X. ZIP files generally use the file extensions or and the MIME media type . ZIP is used as a base file format by many programs, usually under a different name. When navigating a file system via a user interface, graphical icons representing ZIP files often appear as a document or other object prominently featuring a zipper. History The file format was designed by Phil Katz of PKWARE and Gary Conway of Infinity Design Concepts. The format was created after Systems Enhancement Associates (SEA) filed a lawsuit against PKWARE claiming that the latter's archiving products, named PKARC, were derivatives of SEA's ARC archiving system. The name "zip" (meaning "move at high speed") was suggested by Katz's friend, Robert Mahoney. They wanted to imply that their product would be faster than ARC and other compression formats of the time. By distributing the zip file format within APPNOTE.TXT, compatibility with the zip file format proliferated widely on the public Internet during the 1990s. 
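The structure described above can be exercised with Python's standard zipfile module; this sketch writes a small archive in memory using DEFLATE and reads it back (the file names are illustrative):

```python
import io
import zipfile

# Build a ZIP archive in memory with DEFLATE compression (the most
# common of the methods the format permits), then read it back.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "Hello, ZIP!")
    zf.writestr("docs/readme.txt", "Entries may sit in directories.")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()                 # entry names, in insertion order
    text = zf.read("hello.txt").decode()  # decompressed file contents

# Every ZIP archive begins with the local-file-header signature "PK"
# (Phil Katz's initials).
signature = buf.getvalue()[:2]
```

The "PK" signature is how most tools and file managers recognize ZIP data regardless of the file extension it carries.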
PKWARE and Infinity Design Concepts made a joint press release on February 14, 1989, releasing the file format into the public domain. Version history The .ZIP File Format Specification has its own version number, which does not necessarily correspond to the version numbers for the PKZIP tool, especially with PKZIP 6 or later. At various times, PKWARE has added preliminary features that allow PKZIP products to extract archives using advanced features, but PKZIP products that create such archives are not made available until the next major release. Other companies or organizations support the PKWARE specifications at their own pace. The .ZIP file format specification is formally named "APPNOTE - .ZIP File Format Specification" and it is published on the PKWARE.com website since the late 1990s. Several versions of the specification were not published. Specifications of some features such as BZIP2 compression, strong encryption specification and others were published by PKWARE a few years after their creation. The UR
https://en.wikipedia.org/wiki/Spice%20World%20%28video%20game%29
Spice World is a music video game starring English pop girl group the Spice Girls as animated characters. It was developed by SCE Studios Soho and published by Sony Computer Entertainment exclusively for the PlayStation. Content With tracks like "Wannabe", "Who Do You Think You Are", "Move Over", "Spice Up Your Life" and "Say You'll Be There", each animated Spice Girl will offer a few comments as the player tours the game's stages, experiencing a DJ and dance instructor that speak in stereotypical fashions. There are eleven dance moves applicable, each one a different combination of four buttons: six "basic" ones (the sway, shoulder shimmy, point and sway, knee wiggle, twirl, and shuffle) and five "special" moves (freestyle point, freestyle wave, hip wiggle, and side-jump). There is one button combination which triggers a backflip for Mel C and a walk and wave for the other Spice Girls. The game also contains a dozen interviews along with other entertaining moments, such as Geri Halliwell groping the buttocks of the then-Prince Charles, and the girls wreaking havoc on a Japanese talk-show. In the game, players go through different stages to prepare the animated Spice Girls for a live television performance. The game starts out in the Mixing Room, where the player chooses the song the group will perform and the order each of its nine sections will be played. From the Mixing Room, the game then moves into Dance Practice, where the player gets to choreograph the dance routines for the group's performance by hitting button combinations as they appear on the screen. The player then records the routines by programming each animated Spice Girl's dance steps one by one; routines recorded in one member of the group can also be copied to another member. 
When it is time for the show at the TV Studio, the player acts as the camera-person, choosing from eight different camera shots that can be moved in four directions; the player gets to watch the animated group sing and dance as the player has directed them to, with the camera shots selected by the player. This is followed by 20 minutes of video footage of the actual Spice Girls being interviewed in the South of France. Throughout the game, the player is instructed by a disco king on what to do. Up to 15 mixes, dance routines, and TV studio recordings can be saved on a single memory card. Songs "Wannabe" "Say You'll Be There" "Who Do You Think You Are" "Spice Up Your Life" "Move Over" "If U Can't Dance" (featured only in the intro) "2 Become 1" (only in the Spice Network) "Naked" (only in the Spice Network) Development After seeing PaRappa the Rapper (1997) attract new types of users to the PlayStation market in Japan, Sony Computer Entertainment Europe thought they could do the same with the European market by creating and releasing a music video game; this inspired them to convince 19 Entertainment to produce a game featuring the girl group Spice Girls, a brand with enough leverage to be endo
https://en.wikipedia.org/wiki/TI%20Advanced%20Scientific%20Computer
The Advanced Scientific Computer (ASC) is a supercomputer designed and manufactured by Texas Instruments (TI) between 1966 and 1973. The ASC's central processing unit (CPU) supported vector processing, a performance-enhancing technique that was key to its high performance. The ASC and the Control Data Corporation STAR-100 supercomputer, introduced in the same year, were the first computers to feature vector processing. However, this technique's potential was not fully realized by either the ASC or STAR-100 due to an insufficient understanding of the technique; it was the Cray Research Cray-1 supercomputer, announced in 1975, that would fully realize and popularize vector processing. The more successful implementation of vector processing in the Cray-1 would demarcate the ASC (and STAR-100) as first-generation vector processors, with the Cray-1 belonging in the second. History TI began as a division of Geophysical Service Incorporated (GSI), a company that performed seismic surveys for oil exploration companies. GSI had by then become a subsidiary of TI, and TI wanted to apply the latest computer technology to the processing and analysis of seismic datasets. The ASC project started as the Advanced Seismic Computer. As the project developed, TI decided to expand its scope. "Seismic" was replaced by "Scientific" in the name, allowing the project to retain the designation ASC. Originally the software, including an operating system and a FORTRAN compiler, was developed under contract by Computer Usage Company, under the direction of George R. Trimble, Jr., but the work was later taken over by TI itself. Southern Methodist University in Dallas developed an ALGOL compiler for the ASC. Architecture The ASC was based around a single high-speed shared memory, which was accessed by the CPU and eight I/O channel controllers, in an organization similar to Seymour Cray's groundbreaking CDC 6600. Memory was accessed solely under the control of the memory control unit (MCU). 
The MCU was a two-way, 256-bit per channel parallel network that could support up to eight independent processors, with a ninth channel for accessing "main memory" (referred to as "extended memory"). The MCU also acted as a cache controller, offering high-speed access to a semiconductor-based memory for the eight processor ports, and handling all communications to the 24-bit address space in main memory. The MCU was designed to operate asynchronously, allowing it to work at a variety of speeds and scale across a number of performance points. For instance, main memory could be constructed out of slower but less expensive core memory, although this was not used in practice. At the fastest, it could sustain transfer rates of 80 million 32-bit words per second per port, for a total transfer rate of 640 million words per second. This was well beyond the capabilities of even the fastest memories of the era. The CPU had a 60 ns clock cycle (16.67 MHz clock frequency) and its logic was built from 20-gate emi
https://en.wikipedia.org/wiki/Apollo%20Guidance%20Computer
The Apollo Guidance Computer (AGC) was a digital computer produced for the Apollo program that was installed on board each Apollo command module (CM) and Apollo Lunar Module (LM). The AGC provided computation and electronic interfaces for guidance, navigation, and control of the spacecraft. The AGC was the first computer based on silicon integrated circuits. The computer's performance was comparable to the first generation of home computers from the late 1970s, such as the Apple II, TRS-80, and Commodore PET. The AGC has a 16-bit word length, with 15 data bits and one parity bit. Most of the software on the AGC is stored in a special read-only memory known as core rope memory, fashioned by weaving wires through and around magnetic cores, though a small amount of read/write core memory is available. Astronauts communicated with the AGC using a numeric display and keyboard called the DSKY (for "display and keyboard", pronounced "DIS-kee"). The AGC and its DSKY user interface were developed in the early 1960s for the Apollo program by the MIT Instrumentation Laboratory and first flew in 1966. Operation Astronauts manually flew Project Gemini with control sticks, but computers flew most of Project Apollo except briefly during lunar landings. Each Moon flight carried two AGCs, one each in the command module and the Apollo Lunar Module, with the exception of Apollo 7, which was an Earth orbit mission, and Apollo 8, which did not need a lunar module for its lunar orbit mission. The AGC in the command module was the center of its guidance, navigation and control (GNC) system. The AGC in the lunar module ran its Apollo PGNCS (primary guidance, navigation and control system), with the acronym pronounced as "pings". Each lunar mission had two additional computers: the Launch Vehicle Digital Computer (LVDC) on the Saturn V booster instrumentation ring, and the Abort Guidance System (AGS, pronounced "ags") of the lunar module, to be used in the event of failure of the LM PGNCS. 
The AGS could be used to take off from the Moon, and to rendezvous with the command module, but not to land. Design The AGC was designed at the MIT Instrumentation Laboratory under Charles Stark Draper, with hardware design led by Eldon C. Hall. Early architectural work came from J. H. Laning Jr., Albert Hopkins, Richard Battin, Ramon Alonso, and Hugh Blair-Smith. The flight hardware was fabricated by Raytheon, whose Herb Thaler was also on the architectural team. According to Kurinec et al, the chips were welded onto the boards rather than soldered as might be expected. Logic hardware Following the use of integrated circuit (IC) chips in the Interplanetary Monitoring Platform (IMP) in 1963, IC technology was later adopted for the AGC. The Apollo flight computer was the first computer to use silicon IC chips. While the Block I version used 4,100 ICs, each containing a single three-input NOR gate, the later Block II version (used in the crewed flights) used about 2,800 ICs, mostly dual
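As an illustration of the 15-data-bit-plus-parity word format mentioned above: the AGC used odd parity, meaning the sixteenth bit is chosen so that the whole word contains an odd number of 1-bits, which makes any single flipped bit detectable. The sketch below places the parity bit in the lowest position for simplicity; this is an illustrative encoding, not the actual AGC bit layout.

```python
def add_parity(word15):
    """Append an odd-parity bit to a 15-bit data word (AGC-style).

    The parity bit is chosen so the resulting 16-bit word has an
    odd number of 1-bits; note the bit position here is a
    simplification, not the real AGC word layout.
    """
    assert 0 <= word15 < (1 << 15)
    ones = bin(word15).count("1")
    parity = 0 if ones % 2 == 1 else 1   # force an odd total
    return (word15 << 1) | parity

def check_parity(word16):
    """A stored word is considered valid when its 1-bit count is odd."""
    return bin(word16).count("1") % 2 == 1

w = add_parity(0b000000000000101)  # two data ones, so parity bit = 1
assert check_parity(w)             # any single bit flip would fail this
```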
https://en.wikipedia.org/wiki/Acting%20Sheriff
Acting Sheriff is an unsold, half-hour television pilot sitcom created by Walt Disney Television for television network CBS that aired across the United States on Saturday, August 17, 1991. Identified as episode number 895 in Walt Disney Television season number 35, the 30-minute comedy drama featured Robert Goulet as B movie actor Brent McCord who is elected to the unlikely job of sheriff in a small Northern California town. With only an actor's knowledge and experience of what a sheriff does, the McCord character clashes with the local district attorney, character Donna Singer, and eventually lets a bank robber-prisoner escape. Character Mike Swanson, a deputy who is loyal to McCord, captures the escaped prisoner and helps cover for McCord's mistake by informing news reporters that McCord made the capture. Response Initially, Acting Sheriff was thought to have a good chance of filling the Saturday, 10:30 PM slot in the CBS 1991 fall television schedule. In addition to the draw of noted actor Robert Goulet, the show was developed by the writing team of Larry Strawther and Gary Murphy, who were the writers of Night Court, a then-widely popular American television situation comedy, and the writers of Without a Clue, a 1988 comedy film starring Michael Caine and Ben Kingsley. However, the one-time-only, August 17, 1991, presentation of Acting Sheriff received poor ratings. In the August 21, 1991 Prime time ratings for the week of August 12 to August 18, Acting Sheriff received a 4.6 share and was ranked as number 83 out of a total of 90 prime time television shows. The 4.6 share represented 4.3 million TV homes out of a possible 93.1 million TV homes. Despite the poor showing by Acting Sheriff, CBS tied television network ABC for first place in the August 12 to August 18 network ratings battle. 
CBS eventually filled the Saturday 10:30 PM to 11:00 PM primetime slot with 48 Hours, a documentary and news program broadcast on the CBS television network since January 19, 1988. Critic reactions were mixed. The Florida daily newspaper St. Petersburg Times rated Acting Sheriff a "best bet." However, the weekly entertainment trade newspaper Variety found the Brent McCord character too cartoonish to support the show as a series. In describing Goulet's performance as Brent McCord, Variety stated that it was "a goof on Ronald Reagan by way of Ted Baxter" and came across as a "trigger-happy, ACLU-bashing boob whose disregard for the law is equaled only by his vanity." Variety also faulted the show's appearance and other characters as too closely resembling the look, feel, and characters of Night Court. Fourteen years later, American actor Lee Tergesen, who was on Acting Sheriff with Robert Goulet, characterized Goulet's performance as "quite good." Reaction to response The August 17, 1991, airing of the show was its only airing. In a December 1991 effort to raise more than $200 million to finance Disney's network television business, Disney created Zero Cou
https://en.wikipedia.org/wiki/Compilers%3A%20Principles%2C%20Techniques%2C%20and%20Tools
Compilers: Principles, Techniques, and Tools is a computer science textbook by Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman about compiler construction for programming languages. First published in 1986, it is widely regarded as the classic definitive compiler technology text. It is known as the Dragon Book to generations of computer scientists as its cover depicts a knight and a dragon in battle, a metaphor for conquering complexity. This name can also refer to Aho and Ullman's older Principles of Compiler Design. First edition The first edition (1986) is informally called the "red dragon book" to distinguish it from the second edition and from Aho & Ullman's 1977 Principles of Compiler Design sometimes known as the "green dragon book". Topics covered in the first edition include: Compiler structure Lexical analysis (including regular expressions and finite automata) Syntax analysis (including context-free grammars, LL parsers, bottom-up parsers, and LR parsers) Syntax-directed translation Type checking (including type conversions and polymorphism) Run-time environment (including parameter passing, symbol tables and register allocation) Code generation (including intermediate code generation) Code optimization Second edition Following in the tradition of its two predecessors, the second edition (2006) features a dragon and a knight on its cover, and is informally known as the purple dragon. Monica S. Lam of Stanford University became a co-author with this edition. The second edition includes several additional topics, including: Directed translation New data flow analyses Parallel machines Garbage collection New case studies See also Structure and Interpretation of Computer Programs References Further reading External links Book Website at Stanford with link to Errata 1986 books 2006 non-fiction books Compiler construction Computer science books Engineering textbooks Compiler theory
https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin%20primality%20test
The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test. It is of historical significance in the search for a polynomial-time deterministic primality test. Its probabilistic variant remains widely used in practice, as one of the simplest and fastest tests known. Gary L. Miller discovered the test in 1976; Miller's version of the test is deterministic, but its correctness relies on the unproven extended Riemann hypothesis. Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm in 1980. Mathematical concepts Similarly to the Fermat and Solovay–Strassen tests, the Miller–Rabin primality test checks whether a specific property, which is known to hold for prime values, holds for the number under testing. Strong probable primes The property is the following. For a given odd integer n > 2, let's write n − 1 as 2^s·d, where s is a positive integer and d is an odd positive integer. Let's consider an integer a, called a base, which is coprime to n. Then, n is said to be a strong probable prime to base a if one of these congruence relations holds: a^d ≡ 1 (mod n); or a^(2^r·d) ≡ −1 (mod n) for some 0 ≤ r < s. The idea beneath this test is that when n is an odd prime, it passes the test because of two facts: by Fermat's little theorem, a^(n−1) ≡ 1 (mod n) (this property alone defines the weaker notion of probable prime to base a, on which the Fermat test is based); the only square roots of 1 modulo n are 1 and −1. Hence, by contraposition, if n is not a strong probable prime to base a, then n is definitely composite, and a is called a witness for the compositeness of n. However, this property is not an exact characterization of prime numbers. If n is composite, it may nonetheless be a strong probable prime to base a, in which case it is called a strong pseudoprime, and a is a strong liar. 
Choices of bases Thankfully, no composite number is a strong pseudoprime to all bases at the same time (contrary to the Fermat primality test for which Fermat pseudoprimes to all bases exist: the Carmichael numbers). However no simple way of finding a witness is known. A naïve solution is to try all possible bases, which yields an inefficient deterministic algorithm. The Miller test is a more efficient variant of this (see section Miller test below). Another solution is to pick a base at random. This yields a fast probabilistic test. When n is composite, most bases are witnesses, so the test will detect n as composite with a reasonably high probability (see section Accuracy below). We can quickly reduce the probability of a false positive to an arbitrarily small rate, by combining the outcome of as many independently chosen bases as necessary to achieve the said rate. This is the Miller–Rabin test. The number of bases to try does not depend on n. There seems to be diminishing returns in trying many bases, because if n is a
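A minimal Python sketch of the procedure described above: decompose n − 1 as 2^s·d, check the strong probable prime congruences for one base, then repeat with independently chosen random bases (function names are illustrative).

```python
import random

def is_strong_probable_prime(n, a):
    """Check whether odd n > 2 is a strong probable prime to base a."""
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)                # a^d mod n
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):          # successive squarings give a^(2^r * d)
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False                    # a is a witness: n is composite

def miller_rabin(n, k=20):
    """Probabilistic primality test using k independently chosen bases."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(2, n - 1)
        if not is_strong_probable_prime(n, a):
            return False            # definitely composite
    return True                     # probably prime
```

Since a composite n passes a randomly chosen base with probability at most 1/4, running k independent rounds leaves a false-positive probability of at most 4^−k.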
https://en.wikipedia.org/wiki/Code%20generation%20%28compiler%29
In computing, code generation is part of the process chain of a compiler and converts intermediate representation of source code into a form (e.g., machine code) that can be readily executed by the target system. Sophisticated compilers typically perform multiple passes over various intermediate forms. This multi-stage process is used because many algorithms for code optimization are easier to apply one at a time, or because the input to one optimization relies on the completed processing performed by another optimization. This organization also facilitates the creation of a single compiler that can target multiple architectures, as only the last of the code generation stages (the backend) needs to change from target to target. (For more information on compiler design, see Compiler.) The input to the code generator typically consists of a parse tree or an abstract syntax tree. The tree is converted into a linear sequence of instructions, usually in an intermediate language such as three-address code. Further stages of compilation may or may not be referred to as "code generation", depending on whether they involve a significant change in the representation of the program. (For example, a peephole optimization pass would not likely be called "code generation", although a code generator might incorporate a peephole optimization pass.) Major tasks In addition to the basic conversion from an intermediate representation into a linear sequence of machine instructions, a typical code generator tries to optimize the generated code in some way. Tasks which are typically part of a sophisticated compiler's "code generation" phase include: Instruction selection: which instructions to use. Instruction scheduling: in which order to put those instructions. Scheduling is a speed optimization that can have a critical effect on pipelined machines. 
Register allocation: the allocation of variables to processor registers Debug data generation if required so the code can be debugged. Instruction selection is typically carried out by doing a recursive postorder traversal on the abstract syntax tree, matching particular tree configurations against templates; for example, the tree W := ADD(X,MUL(Y,Z)) might be transformed into a linear sequence of instructions by recursively generating the sequences for t1 := X and t2 := MUL(Y,Z), and then emitting the instruction ADD W, t1, t2. In a compiler that uses an intermediate language, there may be two instruction selection stages—one to convert the parse tree into intermediate code, and a second phase much later to convert the intermediate code into instructions from the instruction set of the target machine. This second phase does not require a tree traversal; it can be done linearly, and typically involves a simple replacement of intermediate-language operations with their corresponding opcodes. However, if the compiler is actually a language translator (for example, one that converts Java to C++), then t
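The recursive postorder emission for the W := ADD(X,MUL(Y,Z)) example above can be sketched as follows; the tuple encoding of the tree and the three-address mnemonics are illustrative, not from any particular compiler.

```python
counter = 0  # for generating fresh temporaries

def gen(tree, code):
    """Emit three-address instructions for an expression tree, postorder.

    A tree is either a string (a variable name) or a tuple
    (op, left, right). Returns the name of the location holding
    the result.
    """
    global counter
    if isinstance(tree, str):           # leaf: already a named location
        return tree
    op, left, right = tree
    l = gen(left, code)                 # generate code for operands first
    r = gen(right, code)
    counter += 1
    t = f"t{counter}"                   # fresh temporary for the result
    code.append(f"{op} {t}, {l}, {r}")  # then emit the operator itself
    return t

# W := ADD(X, MUL(Y, Z))
code = []
result = gen(("ADD", "X", ("MUL", "Y", "Z")), code)
code.append(f"MOV W, {result}")
# code is now:
#   MUL t1, Y, Z
#   ADD t2, X, t1
#   MOV W, t2
```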
https://en.wikipedia.org/wiki/System%20software
System software is software designed to provide a platform for other software. Examples of system software include operating systems (OS) (like macOS, Linux, Android, and Microsoft Windows). Application software is software that allows users to do user-oriented tasks such as create text documents, play or develop games, create presentations, listen to music, draw pictures, or browse the web. Examples are: computational science software, game engines, search engines, industrial automation, and software as a service applications. In the late 1940s, application software was custom-written by computer users to fit their specific hardware and requirements. System software was usually supplied by the manufacturer of the computer hardware and was intended to be used by most or all users of that system. Many operating systems come pre-packaged with basic application software. Such software is not considered system software when it can be uninstalled without affecting the functioning of other software. Examples of such software are games and simple editing tools supplied with Microsoft Windows, or software development toolchains supplied with many Linux distributions. Some of the grayer areas between system and application software are web browsers integrated deeply into the operating system such as Internet Explorer in some versions of Microsoft Windows, or ChromeOS where the browser functions as the only user interface and the only way to run programs (and other web browsers cannot be installed in their place). Cloud-based software is another example of systems software, providing services to a software client (usually a web browser or a JavaScript application running in the web browser), not to the user directly. It is developed using system programming methodologies and systems programming languages. 
Operating systems or system control program The operating system (prominent examples being Microsoft Windows, macOS, Linux, and z/OS), allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It provides a platform (hardware abstraction layer) to run high-level system software and application software. A kernel is the core part of the operating system that defines an API for applications programs (including some system software) and an interface to device drivers. Device drivers and devices firmware, including computer BIOS, provide basic functionality to operate and control the hardware connected to or built into the computer. A user interface "allows users to interact with a computer." Either a command-line interface (CLI) or, since the 1980s a graphical user interface (GUI). This is the part of the operating system the user directly interacts with, it is considered an application and not system software. Utility software or system support programs Some organizations use the term systems programmer to describe a job function that is more accurately termed syste
https://en.wikipedia.org/wiki/Agnatha
Agnatha (; ) is an infraphylum of jawless fish in the phylum Chordata, subphylum Vertebrata, consisting of both present (cyclostomes) and extinct (conodonts and ostracoderms) species. Among recent animals, cyclostomes are sister to all vertebrates with jaws, known as gnathostomes. Recent molecular data, both from rRNA and from mtDNA as well as embryological data, strongly supports the hypothesis that living agnathans, the cyclostomes, are monophyletic. The oldest fossil agnathans appeared in the Cambrian, and two groups still survive today: the lampreys and the hagfish, comprising about 120 species in total. Hagfish are considered members of the subphylum Vertebrata, because they secondarily lost vertebrae; before this event was inferred from molecular and developmental data, the group Craniata was created by Linnaeus (and is still sometimes used as a strictly morphological descriptor) to reference hagfish plus vertebrates. While a few scientists still regard the living agnathans as only superficially similar, and argue that many of these similarities are probably shared basal characteristics of ancient vertebrates, recent taxonomic studies clearly place hagfish (the Myxini or Hyperotreti) with the lampreys (Hyperoartii) as being more closely related to each other than either is to the jawed fishes. Metabolism Agnathans are ectothermic, meaning they do not regulate their own body temperature. Agnathan metabolism is slow in cold water, and therefore they do not have to eat very much. They have no distinct stomach, but rather a long gut, more or less homogeneous throughout its length. Lampreys feed on other fish and mammals. Anticoagulant fluids preventing blood clotting are injected into the host, causing the host to yield more blood. Hagfish are scavengers, eating mostly dead animals. They use a row of sharp teeth to break down the animal. The fact that Agnathan teeth are unable to move up and down limits their possible food types. 
Morphology In addition to the absence of jaws, modern agnathans are characterised by the absence of paired fins; the presence of a notochord both in larvae and adults; and seven or more paired gill pouches. Lampreys have a light-sensitive pineal eye (homologous to the pineal gland in mammals). All living and most extinct Agnatha do not have an identifiable stomach or any appendages. Fertilization and development are both external. There is no parental care in the Agnatha class. The Agnatha are ectothermic or cold-blooded, with a cartilaginous skeleton, and the heart contains two chambers. Body covering In modern agnathans, the body is covered in skin, with neither dermal nor epidermal scales. The skin of hagfish has copious slime glands, the slime constituting their defense mechanism. The slime can sometimes clog up enemy fishes' gills, causing them to die. In direct contrast, many extinct agnathans sported extensive exoskeletons composed of either massive, heavy dermal armour or small mineralized scales. Appendages
https://en.wikipedia.org/wiki/Semantic%20analysis%20%28compilers%29
Semantic analysis or context sensitive analysis is a process in compiler construction, usually after parsing, to gather necessary semantic information from the source code. It usually includes type checking, or checks such as making sure a variable is declared before use, properties that are impossible to describe in the extended Backus–Naur form and thus not easily detected during parsing. See also Attribute grammar Context-sensitive language Semantic analysis (computer science) References Compiler construction Program analysis
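As a sketch of the declared-before-use check mentioned here, using a symbol table over a toy statement encoding rather than a real abstract syntax tree (the encoding is illustrative):

```python
def check_declared_before_use(statements):
    """Report uses of variables that were not previously declared.

    Each statement is ('decl', name) or ('use', name); a real
    compiler would walk an AST and handle nested scopes.
    """
    symbols = set()   # a minimal symbol table
    errors = []
    for kind, name in statements:
        if kind == "decl":
            symbols.add(name)
        elif kind == "use" and name not in symbols:
            errors.append(f"'{name}' used before declaration")
    return errors

program = [("decl", "x"), ("use", "x"), ("use", "y")]
# check_declared_before_use(program) reports that 'y' is undeclared
```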
https://en.wikipedia.org/wiki/Online%20analytical%20processing
Online analytical processing, or OLAP, is an approach in computing to quickly answer multi-dimensional analytical (MDA) queries. OLAP is part of the broader category of business intelligence, which also encompasses relational databases, report writing and data mining. Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas, with new applications emerging, such as agriculture. The term OLAP was created as a slight modification of the traditional database term online transaction processing (OLTP). OLAP tools enable users to analyze multidimensional data interactively from multiple perspectives. OLAP consists of three basic analytical operations: consolidation (roll-up), drill-down, and slicing and dicing. Consolidation involves the aggregation of data that can be accumulated and computed in one or more dimensions. For example, all sales offices are rolled up to the sales department or sales division to anticipate sales trends. By contrast, the drill-down is a technique that allows users to navigate through the details. For instance, users can view the sales by individual products that make up a region's sales. Slicing and dicing is a feature whereby users can take out (slicing) a specific set of data from the OLAP cube and view (dicing) the slices from different viewpoints. These viewpoints are sometimes called dimensions (such as looking at the same sales by salesperson, or by date, or by customer, or by product, or by region, etc.). Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad hoc queries with a rapid execution time. They borrow aspects of navigational databases, hierarchical databases and relational databases. 
OLAP is typically contrasted to OLTP (online transaction processing), which is generally characterized by much less complex queries, in a larger volume, to process transactions rather than for the purpose of business intelligence or reporting. Whereas OLAP systems are mostly optimized for read, OLTP has to process all kinds of queries (read, insert, update and delete). Overview of OLAP systems At the core of any OLAP system is an OLAP cube (also called a 'multidimensional cube' or a hypercube). It consists of numeric facts called measures that are categorized by dimensions. The measures are placed at the intersections of the hypercube, which is spanned by the dimensions as a vector space. The usual interface to manipulate an OLAP cube is a matrix interface, like Pivot tables in a spreadsheet program, which performs projection operations along the dimensions, such as aggregation or averaging. The cube metadata is typically created from a star schema or snowflake schema or fact constellation of tables in a relational database. Measures are derived from the records in the fact table and dimensions are derived from the dimension tab
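The consolidation (roll-up) and slicing operations described above can be sketched over a toy fact table; the dimension names and data below are illustrative, and a real OLAP engine would precompute such aggregates over a star or snowflake schema.

```python
from collections import defaultdict

# Fact table: each record carries dimension values and one measure.
facts = [
    {"region": "East", "product": "A", "quarter": "Q1", "sales": 100},
    {"region": "East", "product": "B", "quarter": "Q1", "sales": 50},
    {"region": "West", "product": "A", "quarter": "Q1", "sales": 70},
    {"region": "West", "product": "A", "quarter": "Q2", "sales": 30},
]

def roll_up(facts, dimension, measure="sales"):
    """Consolidate: aggregate the measure along one dimension."""
    totals = defaultdict(int)
    for row in facts:
        totals[row[dimension]] += row[measure]
    return dict(totals)

def slice_cube(facts, dimension, value):
    """Slice: fix one dimension to a single value, keeping the rest."""
    return [row for row in facts if row[dimension] == value]

roll_up(facts, "region")            # {'East': 150, 'West': 100}
slice_cube(facts, "quarter", "Q2")  # only the Q2 records remain
```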
https://en.wikipedia.org/wiki/MSN%20TV
MSN TV (formerly WebTV) was a web access product consisting of a thin client device that used a television for display (instead of using a computer monitor), and the online service that supported it. The original WebTV device design and service were developed by WebTV Networks, Inc., a company started in 1995. The WebTV product was announced in July 1996 and later released on September 18, 1996. In April 1997, the company was purchased by Microsoft Corporation and in July 2001, was rebranded to MSN TV and absorbed into MSN. While most thin clients developed in the mid-1990s were positioned as diskless workstations for corporate intranets, WebTV was positioned as a consumer product, primarily targeting those looking for a low-cost alternative to a computer for Internet access. The WebTV and MSN TV devices allowed a television set to be connected to the Internet, mainly for web browsing and e-mail. The WebTV/MSN TV service, however, also offered its own exclusive services such as a "walled garden" newsgroup service, news and weather reports, storage for user bookmarks (Favorites), IRC (and for a time, MSN Chat) chatrooms, a Page Builder service that let WebTV users create and host webpages that could later be shared to others via a link if desired, the ability to play background music from a predefined list of songs while surfing the web, dedicated sections for aggregated content covering various topics (entertainment, romance, stocks, etc.), and a few years after Microsoft bought out WebTV, integration with MSN Messenger and Hotmail. The setup included a thin client in the form of a set-top box, a remote, a network connection using dial-up, or with the introduction of Rogers Interactive TV and the MSN TV 2, the option to use broadband, and a wireless keyboard, which was sold optionally up until the 2000s. The MSN TV service lasted for 18 years, shutting down on September 30, 2013, and allowing subscribers to migrate their data well before that date arrived. 
The original WebTV network relied on a Solaris backend network and telephone lines to deliver service to customers via dial-up, with "frontend servers" that talk directly to boxes using a custom protocol, the WebTV Protocol (WTVP), to authenticate users and deliver content to boxes. For the MSN TV 2, however, a completely new service based on IIS servers and regular HTTP/HTTPS services was used. History Concept Co-founder Steve Perlman is credited with the idea for the device. He first combined computer and television as a high-school student when he decided his home PC needed a graphics display. He went on to build software for companies such as Apple and Atari. While working at General Magic, the idea of bringing TVs and computers together resurfaced. One night, Perlman was browsing the web and came across a Campbell's soup website with recipes. He thought that the people who might be interested in what the site had to offer were not using the web. It occurred to him that if the tel
https://en.wikipedia.org/wiki/CDM
CDM may refer to: Organizations College of Dental Medicine, US dental schools DePaul University College of Computing and Digital Media, in Chicago, Illinois, US Corona del Mar High School, a high school located in Newport Beach, California, US Science and technology Cash deposit machine, in banks Clean Development Mechanism, a mechanism in the Kyoto Protocol for reducing emissions Ceramic metal-halide lamp, a lamp/light source Charged-device model, used in electrostatic discharge testing Clinical data management Code division multiplexing Cold dark matter, a scientific theory Combining Diacritical Marks, for keyboards Combustion detection module Common Diagnostic Model, a standard of the Distributed Management Task Force Conceptual data model Content Decryption Module Customer data management, software and behaviors for businesses to handle customer data Continuous Diagnostics and Mitigation, a program of the Department of Homeland Security; see Federal Systems Integration and Management Center Congenital dermal melanocytosis, a benign birthmark Transport Chidambaram railway station, Cuddalore, Tamil Nadu, India (Indian Railways station code) Other uses Celebrity Deathmatch, a claymation TV series CD maxi, a maxi single Charge description master, a comprehensive price list of items billable to patients in a US hospital Construction (Design and Management) Construction (Design and Management) Regulations 2007, UK construction regulations Construction (Design and Management) Regulations 2015, UK construction regulations Corona del Mar, Newport Beach, a beach and community in California, US Clergy Discipline Measure 2003, a legal measure in the Church of England Central defensive midfielder, a position in association football See also Lambda-CDM model or ΛCDM model, standard cosmological model of the universe
https://en.wikipedia.org/wiki/LCARS
In the Star Trek fictional universe, LCARS (an acronym for Library Computer Access/Retrieval System) is a computer operating system. Within Star Trek chronology, the term was first used in the Star Trek: The Next Generation series. Production The LCARS graphical user interface was designed by scenic art supervisor and technical consultant Michael Okuda. The original design concept was influenced by a request from Gene Roddenberry that the instrument panels not have a great deal of activity on them. This minimalist look was designed to give a sense that the technology was much more advanced than in the original Star Trek. On Star Trek: The Next Generation, many of the buttons were labeled with the initials of members of the production crew and were referred to as "Okudagrams." PADD The LCARS interface is often seen used on a PADD (Personal Access Display Device), a hand-held computer. Similarly sized modern tablet computers such as the Nexus 7, Amazon Fire, BlackBerry PlayBook, and iPad Mini have been compared with the PADD. Several mobile apps were created which offered an LCARS-style interface. Legal CBS Television Studios claims to hold the copyright on LCARS. Google was sent a DMCA letter requesting removal of the Android app called Tricorder, since its use of the LCARS interface was unlicensed. The application was later re-uploaded under a different title, but it was removed again.
https://en.wikipedia.org/wiki/Java%20APIs%20for%20Integrated%20Networks
Java APIs for Integrated Networks (JAIN) is an activity within the Java Community Process, developing APIs for the creation of telephony (voice and data) services. Originally, JAIN stood for Java APIs for Intelligent Network. The name was later changed to Java APIs for Integrated Networks to reflect the widening scope of the project. The JAIN activity consists of a number of "Expert Groups", each developing a single API specification. Trend JAIN is part of a general trend to open up service creation in the telephony network so that, by analogy with the Internet, openness should result in a growing number of participants creating services, in turn creating more demand and better, more targeted services. Goal A goal of the JAIN APIs is to abstract the underlying network, so that services can be developed independent of network technology, be it traditional PSTN or Next Generation Network. API The JAIN effort has produced around 20 APIs, in various stages of standardization, ranging from Java APIs for specific network protocols, such as SIP and TCAP, to more abstract APIs such as for call control and charging, and even including a non-Java effort for describing telephony services in XML. Parlay X There is overlap between JAIN and Parlay/OSA because both address similar problem spaces. However, as originally conceived, JAIN focused on APIs that would make it easier for network operators to develop their own services within the framework of Intelligent Network (IN) protocols. As a consequence, the first JAIN APIs focused on methods for building and interpreting SS7 messages and it was only later that JAIN turned its attention to higher-level methods for call control. Meanwhile, at about the same time JAIN was getting off the ground, work on Parlay began with a focus on APIs to enable development of network services by non-operator third parties. 
Standardized APIs From around 2001 to 2003, there was an effort to harmonize the not-yet-standardized JAIN APIs for call control with the comparable and by then standardized Parlay APIs. A number of difficulties were encountered, but perhaps the most serious was not technical but procedural. The Java Community Process requires that a reference implementation be built for every standardized Java API; Parlay does not have this requirement. Not surprisingly, given the effort that would have been needed to build a reference implementation of JAIN call control, the standards community decided, implicitly if not explicitly, that the Parlay call control APIs were adequate, and work on JAIN call control faded away. Nonetheless, the work on JAIN call control did have an important impact on Parlay, since it helped to drive the definition of an agreed-upon mapping of Parlay to the Java language. See also NGIN Parlay Group External links The JAIN APIs. JAIN-SIP. JAIN-SIP (new site).
https://en.wikipedia.org/wiki/Brad%20Cox
Brad J. Cox (May 2, 1944 – January 2, 2021) was an American computer scientist who was known mostly for creating the Objective-C programming language with his business partner Tom Love and for his work in software engineering (specifically software reuse) and software componentry. Biography Cox received his Bachelor of Science degree in organic chemistry and mathematics from Furman University, and his Ph.D. from the Department of Mathematical Biology at the University of Chicago. Among his first known software projects, he wrote a PDP-8 program for simulating clusters of neurons. He worked at the National Institutes of Health and the Woods Hole Oceanographic Institution before moving into the software profession. Although Cox invented his own programming language, Objective-C, which he used in his early career, he stated in an interview for the Masterminds of Programming book that he wasn't interested in programming languages but rather in software components, and he regarded languages as mere tools for building and combining parts of software. Cox was also an entrepreneur, having founded the Stepstone company together with Tom Love, established to release the first Objective-C implementation. Stepstone folded in 1994, and in April 1995 NeXT acquired the Objective-C trademark and rights from Stepstone. At the same time, Stepstone licensed back from NeXT the right to continue selling their Objective-C based products. As Apple Computer acquired NeXT a year later, Apple now holds the rights to Objective-C. Stepstone appears to have gone out of business in the early 2000s. Awards Online course "Taming the Electronic Frontier" won a Paul Allen Distance Education Award ($25,000) in 1998. External links Belaboring the Obvious – personal blog Virtual School (historical)
https://en.wikipedia.org/wiki/Douglas%20McIlroy
Malcolm Douglas McIlroy (born 1932) is a mathematician, engineer, and programmer. As of 2019 he is an Adjunct Professor of Computer Science at Dartmouth College. McIlroy is best known for having originally proposed Unix pipelines and developed several Unix tools, such as spell, diff, sort, join, graph, speak, and tr. He was also one of the pioneering researchers of macro processors and programming language extensibility. He participated in the design of multiple influential programming languages, particularly PL/I, SNOBOL, ALTRAN, TMG and C++. His seminal work on software componentization and code reuse makes him a pioneer of component-based software engineering and software product line engineering. Biography McIlroy earned his bachelor's degree in engineering physics from Cornell University, and a Ph.D. in applied mathematics from MIT in 1959 for his thesis On the Solution of the Differential Equations of Conical Shells (advisor Eric Reissner). He taught at MIT from 1954 to 1958. McIlroy joined Bell Laboratories in 1958; from 1965 to 1986 he was head of its Computing Techniques Research Department (the birthplace of the Unix operating system), and thereafter was a Distinguished Member of Technical Staff. From 1967 to 1968, McIlroy also served as a visiting lecturer at Oxford University. In 1997, McIlroy retired from Bell Labs and took a position as an Adjunct Professor in the Dartmouth College Computer Science Department. He has previously served the Association for Computing Machinery as national lecturer, Turing Award chairman, member of the publications planning committee, and associate editor for the Communications of the ACM, the Journal of the ACM, and ACM Transactions on Programming Languages and Systems. He also served on the executive committee of CSNET. Research and contributions Macro processors McIlroy is considered to be a pioneer of macro processors. In 1959, together with Douglas E. 
Eastwood of Bell Labs, he introduced conditional and recursive macros into the popular SAP assembler, creating what is known as Macro SAP. His 1960 paper was also seminal in the area of extending any programming language (including high-level ones) through macro processors. These contributions started the macro-language tradition at Bell Labs ("everything from L6 and AMBIT to C"). McIlroy's macro processing ideas were also the main inspiration for the TRAC macro processor. He also coauthored the M6 macro processor in FORTRAN IV, which was used in ALTRAN and later was ported to and included in early versions of Unix. Contributions to Unix Throughout the 1960s and 1970s McIlroy contributed programs for the Multics (such as RUNOFF) and Unix operating systems (such as diff, echo, tr, join and look), versions of which are widespread to this day through adoption of the POSIX standard and Unix-like operating systems. He introduced the idea of Unix pipelines. He also implemented the TMG compiler-compiler in PDP-7 and PDP-11 assembly, which became the first high-level prog
https://en.wikipedia.org/wiki/High-level%20programming%20language
In computer science, a high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate (or even hide entirely) significant areas of computing systems (e.g. memory management), making the process of developing a program simpler and more understandable than when using a lower-level language. The amount of abstraction provided defines how "high-level" a programming language is. In the 1960s, a high-level programming language using a compiler was commonly called an autocode. Examples of autocodes are COBOL and Fortran. The first high-level programming language designed for computers was Plankalkül, created by Konrad Zuse. However, it was not implemented in his time, and his original contributions were largely isolated from other developments due to World War II, aside from the language's influence on the "Superplan" language by Heinz Rutishauser and also to some degree ALGOL. The first significantly widespread high-level language was Fortran, a machine-independent development of IBM's earlier Autocode systems. The ALGOL family, with ALGOL 58 defined in 1958 and ALGOL 60 defined in 1960 by committees of European and American computer scientists, introduced recursion as well as nested functions under lexical scope. ALGOL 60 was also the first language with a clear distinction between value and name-parameters and their corresponding semantics. ALGOL also introduced several structured programming concepts, such as the while-do and if-then-else constructs and its syntax was the first to be described in formal notation – Backus–Naur form (BNF). During roughly the same period, COBOL introduced records (also called structs) and Lisp introduced a fully general lambda abstraction in a programming language for the first time. 
Features "High-level language" refers to the higher level of abstraction from machine language. Rather than dealing with registers, memory addresses, and call stacks, high-level languages deal with variables, arrays, objects, complex arithmetic or boolean expressions, subroutines and functions, loops, threads, locks, and other abstract computer science concepts, with a focus on usability over optimal program efficiency. Unlike low-level assembly languages, high-level languages have few, if any, language elements that translate directly into a machine's native opcodes. Other features, such as string handling routines, object-oriented language features, and file input/output, may also be present. A further point is that high-level programming languages detach and separate the programmer from the machine: unlike low-level languages such as assembly or machine language, high-level code can amplify the programmer's instructions and trigger many data movements in the background without the programmer's knowledge. The responsib
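The kind of abstraction described above is easy to see in a short fragment (Python is used here purely as an illustration; the strings and names are arbitrary):

```python
# A few lines of a high-level language: variables, strings, and a hash map,
# with memory allocation, registers, and the call stack handled implicitly.
words = "the quick brown fox".split()
longest = max(words, key=len)          # no explicit loop or pointer arithmetic
lengths = {w: len(w) for w in words}   # dictionary storage managed automatically
```

None of these lines corresponds to a single machine instruction; each expands into many allocations and data movements the programmer never sees.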
https://en.wikipedia.org/wiki/Low-level%20programming%20language
A low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture—commands or functions in the language are structurally similar to a processor's instructions. Generally, this refers to either machine code or assembly language. Because of the low (hence the word) abstraction between the language and machine language, low-level languages are sometimes described as being "close to the hardware". Programs written in low-level languages tend to be relatively non-portable, due to being optimized for a certain type of system architecture. Low-level languages can be converted to machine code without a compiler or interpreter—second-generation programming languages use a simpler translator called an assembler—and the resulting code runs directly on the processor. A program written in a low-level language can be made to run very quickly, with a small memory footprint. An equivalent program in a high-level language can be less efficient and use more memory. Low-level languages are simple, but considered difficult to use, due to the numerous technical details that the programmer must remember. By comparison, a high-level programming language isolates the execution semantics of a computer architecture from the specification of the program, which simplifies development. Machine code Machine code is the only language a computer can process directly without a previous transformation. Currently, programmers almost never write programs directly in machine code, because it requires attention to numerous details that a high-level programming language handles automatically. Furthermore, unlike programming in an assembly language, it requires memorizing or looking up numerical codes for every instruction, and is extremely difficult to modify. True machine code is a stream of raw, usually binary, data. 
A programmer coding in "machine code" normally codes instructions and data in a more readable form such as decimal, octal, or hexadecimal, which is translated to internal format by a program called a loader or toggled into the computer's memory from a front panel. Although few programs are written in machine languages, programmers often become adept at reading it through working with core dumps or debugging from the front panel. Example of a function in hexadecimal representation of x86-64 machine code to calculate the nth Fibonacci number, with each line corresponding to one instruction:

89 f8
85 ff
74 26
83 ff 02
76 1c
89 f9
ba 01 00 00 00
be 01 00 00 00
8d 04 16
83 f9 02
74 0d
89 d6
ff c9
89 c2
eb f0
b8 01 00 00 00
c3

Assembly language Second-generation languages provide one abstraction level on top of the machine code. In the early days of coding on computers like TX-0 and PDP-1, the first thing MIT hackers did was to write assemblers. Assembly language has little semantics or formal specification, being only a mapping of human-readable symbols, including symbolic addresse
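For comparison, an iterative Fibonacci routine with the same behavior as the machine code above might look like this in a high-level language (a sketch in Python, not a disassembly of those exact bytes):

```python
def fib(n: int) -> int:
    # Mirrors the machine-code routine: returns 0 for n == 0, 1 for n <= 2,
    # and otherwise iterates while keeping only the last two values.
    if n == 0:
        return 0
    if n <= 2:
        return 1
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return b
```

The point of the contrast is that the high-level version names its quantities and control flow, while the machine-code version encodes the same loop as registers, flags, and jump offsets.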
https://en.wikipedia.org/wiki/Programming%20paradigm
Programming paradigms are a way to classify programming languages based on their features. Languages can be classified into multiple paradigms. Some paradigms are concerned mainly with implications for the execution model of the language, such as allowing side effects, or whether the sequence of operations is defined by the execution model. Other paradigms are concerned mainly with the way that code is organized, such as grouping code into units along with the state that is modified by the code. Yet others are concerned mainly with the style of syntax and grammar. Some common programming paradigms are:

Imperative, in which the programmer instructs the machine how to change its state:
procedural, which groups instructions into procedures;
object-oriented, which groups instructions with the part of the state they operate on.
Declarative, in which the programmer merely declares properties of the desired result, but not how to compute it:
functional, in which the desired result is declared as the value of a series of function applications;
logic, in which the desired result is declared as the answer to a question about a system of facts and rules;
reactive, in which the desired result is declared with data streams and the propagation of change.

Symbolic techniques such as reflection, which allow the program to refer to itself, might also be considered a programming paradigm. However, this is compatible with the major paradigms and thus is not a real paradigm in its own right. For example, languages that fall into the imperative paradigm have two main features: they state the order in which operations occur, with constructs that explicitly control that order, and they allow side effects, in which state can be modified at one point in time, within one unit of code, and then later read at a different point in time inside a different unit of code. The communication between the units of code is not explicit. 
Meanwhile, in object-oriented programming, code is organized into objects that contain a state that is only modified by the code that is part of the object. Most object-oriented languages are also imperative languages. In contrast, languages that fit the declarative paradigm do not state the order in which to execute operations. Instead, they supply a number of available operations in the system, along with the conditions under which each is allowed to execute. The implementation of the language's execution model tracks which operations are free to execute and chooses the order independently. More at Comparison of multi-paradigm programming languages. Overview Just as software engineering (as a process) is defined by differing methodologies, so the programming languages (as models of computation) are defined by differing paradigms. Some languages are designed to support one paradigm (Smalltalk supports object-oriented programming, Haskell supports functional programming), while other programming languages support multiple paradigms
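The imperative/declarative distinction can be seen in two versions of the same computation (a small illustration in Python, which supports both styles; the numbers are arbitrary):

```python
nums = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative style: explicit control flow and a mutable accumulator;
# the order of operations and every state change are spelled out.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Declarative (functional) style: state *what* the result is — the sum of
# squares of the even elements — and leave the "how" to the language.
total_decl = sum(n * n for n in nums if n % 2 == 0)

assert total == total_decl == 56
```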
https://en.wikipedia.org/wiki/Constraint%20programming
Constraint programming (CP) is a paradigm for solving combinatorial problems that draws on a wide range of techniques from artificial intelligence, computer science, and operations research. In constraint programming, users declaratively state the constraints on the feasible solutions for a set of decision variables. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found. In addition to constraints, users also need to specify a method to solve these constraints. This typically draws upon standard methods like chronological backtracking and constraint propagation, but may use customized code like a problem-specific branching heuristic. Constraint programming takes its root from and can be expressed in the form of constraint logic programming, which embeds constraints into a logic program. This variant of logic programming is due to Jaffar and Lassez, who extended in 1987 a specific class of constraints that were introduced in Prolog II. The first implementations of constraint logic programming were Prolog III, CLP(R), and CHIP. Instead of logic programming, constraints can be mixed with functional programming, term rewriting, and imperative languages. Programming languages with built-in support for constraints include Oz (functional programming) and Kaleidoscope (imperative programming). Mostly, constraints are implemented in imperative languages via constraint solving toolkits, which are separate libraries for an existing imperative language. Constraint logic programming Constraint programming is an embedding of constraints in a host language. The first host languages used were logic programming languages, so the field was initially called constraint logic programming. The two paradigms share many important features, like logical variables and backtracking. 
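The declarative statement of constraints plus a generic search can be sketched in plain Python; here a simple chronological-backtracking routine stands in for a real constraint solver, and the map-colouring instance (regions of Australia) is a standard textbook example chosen for illustration, not taken from the text above:

```python
# Declarative model: variables, domains, and constraints. The generic search
# below, not the model, decides how a solution is found.
variables = ["WA", "NT", "SA", "Q", "NSW", "V"]   # regions to colour
colors = ["red", "green", "blue"]
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
            ("NSW", "V")]                          # constraint: endpoints differ

def consistent(var, color, assignment):
    """Check every constraint mentioning `var` against the partial assignment."""
    return all(assignment.get(b if a == var else a) != color
               for a, b in adjacent if var in (a, b))

def solve(assignment=None):
    """Chronological backtracking: extend the assignment one variable at a time."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for color in colors:
        if consistent(var, color, assignment):
            result = solve({**assignment, var: color})
            if result is not None:
                return result
    return None

solution = solve()
```

Production solvers add propagation and clever branching heuristics on top of this skeleton, but the division of labour is the same: the user states properties of a solution, the solver finds one.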
Today most Prolog implementations include one or more libraries for constraint logic programming. The difference between the two is largely in their styles and approaches to modeling the world. Some problems are more natural (and thus, simpler) to write as logic programs, while some are more natural to write as constraint programs. The constraint programming approach is to search for a state of the world in which a large number of constraints are satisfied at the same time. A problem is typically stated as a state of the world containing a number of unknown variables. The constraint program searches for values for all the variables. Temporal concurrent constraint programming (TCC) and non-deterministic temporal concurrent constraint programming (MJV) are variants of constraint programming that can deal with time. Constraint satisfaction problem A constraint is a relation between multiple variables that limits the values these variables can take simultaneously. Three categories of constraints exist: extensional constraints: constraints are
https://en.wikipedia.org/wiki/Universally%20unique%20identifier
A Universally Unique Identifier (UUID) is a 128-bit label used for information in computer systems. The term Globally Unique Identifier (GUID) is also used, mostly in Microsoft systems. When generated according to the standard methods, UUIDs are, for practical purposes, unique. Their uniqueness does not depend on a central registration authority or coordination between the parties generating them, unlike most other numbering schemes. While the probability that a UUID will be duplicated is not zero, it is generally considered close enough to zero to be negligible. Thus, anyone can create a UUID and use it to identify something with near certainty that the identifier does not duplicate one that has already been, or will be, created to identify something else. Information labeled with UUIDs by independent parties can therefore be later combined into a single database or transmitted on the same channel, with a negligible probability of duplication. Adoption of UUIDs is widespread, with many computing platforms providing support for generating them and for parsing their textual representation. History In the 1980s Apollo Computer originally used UUIDs in the Network Computing System (NCS). Later, the Open Software Foundation (OSF) used UUIDs for their Distributed Computing Environment (DCE). The design of the DCE UUIDs was partly based on the NCS UUIDs, whose design was in turn inspired by the (64-bit) unique identifiers defined and used pervasively in Domain/OS, an operating system designed by Apollo Computer. Later, the Microsoft Windows platforms adopted the DCE design as "Globally Unique IDentifiers" (GUIDs). registered a URN namespace for UUIDs and recapitulated the earlier specifications, with the same technical content. When in July 2005 was published as a proposed IETF standard, the ITU had also standardized UUIDs, based on the previous standards and early versions of . 
Standards UUIDs are standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE). UUIDs are documented as part of ISO/IEC 11578:1996 "Information technology – Open Systems Interconnection – Remote Procedure Call (RPC)" and more recently in ITU-T Rec. X.667 | ISO/IEC 9834-8:2014. The Internet Engineering Task Force (IETF) published the Standards-Track , technically equivalent to ITU-T Rec. X.667 | ISO/IEC 9834-8. The "Revise Universally Unique Identifier Definitions Working Group" is working on an update which will introduce additional versions. Binary wire format A UUID is a 128-bit label. Initially, Apollo Computer designed the UUID with the following legacy wire format:

Name       Offset  Length              Description
time_high  0x00    4 octets / 32 bits  The first 6 octets are the number of 4 µs units of time that have passed since 1980-01-01 00:00 UTC.
time_low   0x04    2 octets / 16 bits  See above.
reserved   0x06    2 octets / 16 bits  These octets are reserved
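Many platforms expose standard generators directly; for instance, Python's standard-library `uuid` module (shown here purely as an illustration) implements both the random and the name-based versions:

```python
import uuid

# Version 4: 122 random bits plus fixed version/variant fields.
u = uuid.uuid4()
print(u)                 # textual form: 8-4-4-4-12 hexadecimal digits
assert u.version == 4 and u.variant == uuid.RFC_4122

# Version 5: deterministic, derived from a namespace and a name via SHA-1,
# so independent parties hashing the same name obtain the same UUID.
n = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")
assert n == uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")
```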
https://en.wikipedia.org/wiki/Guggenheim
Guggenheim may refer to: Buildings Guggenheim Building, in Rochester, Minnesota Guggenheim Museums, global network of museums established by the Solomon R. Guggenheim Foundation Murry Guggenheim House, also known as the Guggenheim Library of Monmouth University, Monmouth County, New Jersey Other uses Guggenheim (surname), including a list of people with the name Guggenheim Exploration Company, notable for Beatty v. Guggenheim Exploration Co. Guggenheim family, an American family of Swiss Jewish ancestry Guggenheim Fellowship, an American grant awarded by the John Simon Guggenheim Memorial Foundation Guggenheim Partners, a financial services firm "Guggenheim", a song on the 2012 album Sounds from Nowheresville by The Ting Tings John Simon Guggenheim Memorial Foundation, founded in 1925 Solomon R. Guggenheim Foundation, founded in 1937 Martin Guggenheim, a character in the Amazon Prime video original Mozart in the Jungle Guggenheim, a variant of the word game Categories
https://en.wikipedia.org/wiki/Open%20Software%20Foundation
The Open Software Foundation (OSF) was a not-for-profit industry consortium for creating an open standard for an implementation of the operating system Unix. It was formed in 1988 and merged with X/Open in 1996, to become The Open Group. Despite the similarities in name, OSF was unrelated to the Free Software Foundation (FSF, also based in Cambridge, Massachusetts), or the Open Source Initiative (OSI). History The organization was first proposed by Armando Stettner of Digital Equipment Corporation (DEC) at an invitation-only meeting hosted by DEC for several Unix system vendors in January 1988 (called the "Hamilton Group", since the meeting was held at DEC's offices on Palo Alto's Hamilton Avenue). It was intended as an organization for joint development, mostly in response to a perceived threat of "merged UNIX system" efforts by AT&T Corporation and Sun Microsystems. After discussion during the meeting, the proposal was tabled so that members of the Hamilton Group could broach the idea of a joint development effort with Sun and AT&T. In the meantime, Stettner was asked to write an organization charter. That charter was formally presented to Apollo, HP, IBM and others after Sun and AT&T rejected the overture by the Hamilton Group members. The foundation's original sponsoring members were Apollo Computer, Groupe Bull, Digital Equipment Corporation, Hewlett-Packard, IBM, Nixdorf Computer, and Siemens AG, sometimes called the "Gang of Seven". Later sponsor members included Philips and Hitachi with the broader general membership growing to more than a hundred companies. It was registered under the U.S. National Cooperative Research Act of 1984, which reduces potential antitrust liabilities of research joint ventures and standards development organizations. The sponsors gave OSF significant funding, a broad mandate (the so-called "Seven Principles"), substantial independence, and support from sponsor senior management. 
Senior operating executives from the sponsoring companies served on OSF's initial Board of Directors. One of the Seven Principles was declaration of an "Open Process" whereby OSF staff would create Request for Proposals for source technologies to be selected by OSF, in a vendor neutral process. The selected technology would be licensed by the OSF to the public. Membership in the organization gave member companies a voice in the process for requirements. At the founding, five Open Process projects were named. The organization was seen as a response to the collaboration between AT&T and Sun on UNIX System V Release 4, and a fear that other vendors would be locked out of the standardization process. This led Scott McNealy of Sun to quip that "OSF" really stood for "Oppose Sun Forever". The competition between the opposing versions of Unix systems became known as the Unix wars. AT&T founded the Unix International (UI) project management organization later that year as a counter-response to the OSF. UI was led by Peter Cunningham, form
https://en.wikipedia.org/wiki/Evolutionary%20algorithm
In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators. Evolutionary algorithms often perform well approximating solutions to all types of problems because they ideally do not make any assumption about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor; in fact, this computational complexity is due to fitness function evaluation. Fitness approximation is one of the solutions to overcome this difficulty. However, a seemingly simple EA can often solve complex problems; therefore, there may be no direct link between algorithm complexity and problem complexity. Implementation The following is an example of a generic single-objective genetic algorithm.

Step One: Generate the initial population of individuals randomly. (First generation)
Step Two: Repeat the following regenerational steps until termination (time limit, sufficient fitness achieved, etc.):
Evaluate the fitness of each individual in the population.
Select the fittest individuals for reproduction. (Parents)
Breed new individuals through crossover and mutation operations to give birth to offspring.
Replace the least-fit individuals of the population with new individuals. 
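The steps above can be sketched in code. The following is a minimal illustration in Python; the OneMax bit-counting fitness function, tournament selection, and all parameter values are arbitrary choices made for the example, not prescribed by the generic algorithm:

```python
import random

def fitness(bits):                       # toy objective: count of 1-bits (OneMax)
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60,
           crossover_rate=0.9, mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    # Step One: generate the initial population randomly.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):         # Step Two: regenerational loop
        def tournament():                # select the fitter of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:            # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Mutation: flip each bit with small probability.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            offspring.append(child)
        # Replace the least fit: keep the best pop_size of parents + offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = evolve()
```

Termination here is a fixed generation count; a real application would also stop on a time limit or when sufficient fitness is reached, as the steps note.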
Types Similar techniques differ in genetic representation and other implementation details, and the nature of the particular applied problem. Genetic algorithm – This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems. Genetic programming – Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem. There are many variants of Genetic Programming, including Cartesian genetic programming, gene expression programming, grammatical evolution, linear genetic programming, multi expression programming etc. Evolutionary programming – Similar to genetic programming, but the structure of the program is fixed and its numerical parameters are allowed
https://en.wikipedia.org/wiki/Library%20catalog
A library catalog (or library catalogue in British English) is a register of all bibliographic items found in a library or group of libraries, such as a network of libraries at several locations. A catalog for a group of libraries is also called a union catalog. A bibliographic item can be any information entity (e.g., books, computer files, graphics, realia, cartographic materials, etc.) that is considered library material (e.g., a single novel in an anthology), or a group of library materials (e.g., a trilogy), or linked from the catalog (e.g., a webpage) as far as it is relevant to the catalog and to the users (patrons) of the library. The card catalog was a familiar sight to library users for generations, but it has been effectively replaced by the online public access catalog (OPAC). Some still refer to the online catalog as a "card catalog". Some libraries with OPAC access still have card catalogs on site, but these are now strictly a secondary resource and are seldom updated. Many libraries that retain their physical card catalog will post a sign advising the last year that the card catalog was updated. Some libraries have eliminated their card catalog in favor of the OPAC for the purpose of saving space for other use, such as additional shelving. The largest international library catalog in the world is the WorldCat union catalog managed by the non-profit library cooperative OCLC. In January 2021, WorldCat had over half a billion catalog records and three billion library holdings. Goal Antonio Genesio Maria Panizzi in 1841 and Charles Ammi Cutter in 1876 undertook pioneering work in the definition of early cataloging rule sets formulated according to theoretical models. Cutter made an explicit statement regarding the objectives of a bibliographic system in his Rules for a Printed Dictionary Catalog. According to Cutter, those objectives were 1. 
to enable a person to find a book of which any of the following is known (Identifying objective): the author, the title, the subject, or the date of publication; 2. to show what the library has (Collocating objective): by a given author, on a given subject, or in a given kind of literature; 3. to assist in the choice of a book (Evaluating objective): as to its edition (bibliographically) or as to its character (literary or topical). These objectives can still be recognized in more modern definitions formulated throughout the 20th century. Other influential pioneers in this area were Shiyali Ramamrita Ranganathan and Seymour Lubetzky. Cutter's objectives were revised by Lubetzky and the Conference on Cataloging Principles (CCP) in Paris in 1960/1961, resulting in the Paris Principles (PP). A more recent attempt to describe a library catalog's functions was made in 1998 with Functional Requirements for Bibliographic Records (FRBR), which defines four user tasks: find, identify, select, and obtain. A catalog helps to serve as an inventory or bookkeeping of the library's contents. If an item is not
https://en.wikipedia.org/wiki/John%20George%20Hohman
John George Hohman (also spelled Johann Georg Hohman, and his surname sometimes misspelled as Hoffman), who was active between 1802 and 1846, was a German-American printer, book seller and compiler of collections of herbal remedies, magical healings, and charms. He immigrated to the USA from Germany in 1802, settled in the area around Reading, Pennsylvania, in the Pennsylvania Dutch community, where he printed and sold broadsides, chapbooks and books and practised and instructed in the arts of folk magic and folk religion which became known as pow-wow. Hohman's best known work is the collection of prayers and recipes for folk-healing titled Pow-Wows, or the Long Lost Friend, published in German in 1820 as Der Lange Verborgene Freund (The Long-Hidden Friend) and in two English translations—the first in 1846 in a rather crude translation by Hohman himself ("The Long Secreted Friend or a True and Christian Information for Every Body") and the second in 1856 by a different and more fluent translator ("The Long Lost Friend; a Collection of Mysterious and Invaluable Arts and Remedies for Man as well as Animals"). The name "Pow-Wows" was only added to the book in late 19th century reprints in the wake of the sudden popularity of Spiritualism in the United States, in which "Indian Spirit Guides" were frequently seen during seances. In addition to "The Long-Lost-Friend," Hohman also wrote and published, or at least had attributed to him, a number of further books in German, including Unsers Herran Jesu Christi Kinderbuch, oder, Merkwurdige Historische Beschreibung Von Joachim Und Anna (Our Lord Jesus Christ's Childhood-Book, or, The Strange Historical Description of Joachim and Anna), and Albertus Magnus, oder, Der Lange Verborgene und Getreuer und Christlicher Unterricht fur Jedermann (Albertus Magnus, or, Long Lost and True and Christian Instructions for Everyone). The last book attributed to Hohman was published in 1857. 
External links Excerpt from The Pennsylvania German Broadside: A History and Guide by Don Yoder
https://en.wikipedia.org/wiki/Schoolhouse%20Rock%21
Schoolhouse Rock! is an American interstitial programming series of animated musical educational short films (and later, music videos) which aired during the Saturday morning children's programming block on the U.S. television network ABC. The themes covered included grammar, science, economics, history, mathematics, and civics. The series' original run lasted from 1973 to 1985; it was later revived from 1993 to 1996. Additional episodes were produced in 2009 for direct-to-video release. History Development The series was the idea of David McCall, an advertising executive at McCaffrey and McCall, who noticed that his young son was struggling to learn his multiplication tables despite being able to memorize the lyrics of many Rolling Stones songs. McCall hired musician Bob Dorough to write a song that would teach multiplication, which became "Three Is a Magic Number." Tom Yohe, an illustrator at McCaffrey and McCall, heard the song and created visuals to accompany it. Radford Stone, producer and writer at ABC, suggested they pitch it as a television series, which caught the attention of Michael Eisner, then the senior vice president in charge of programming and development at ABC, and cartoon director Chuck Jones. Original series The first video of the series, "Three Is a Magic Number," first aired during the debut episode of Curiosity Shop on September 2, 1971. The Curiosity Shop version is an extended cut which includes an additional scene/verse that explains the pattern of each set of ten containing three multiples of three, animated in the form of a carnival shooting game. This scene has never been rebroadcast on ABC, nor has it been included in any home media releases. Schoolhouse Rock! debuted as a series in January 1973 with Multiplication Rock, a collection of animated music videos adapting the multiplication tables to songs written by Bob Dorough. 
Dorough also performed most of the songs, with Grady Tate performing two and Blossom Dearie performing one during this season. General Foods was the series' first sponsor; later sponsors of the Schoolhouse Rock! segments also included Nabisco, Kenner Toys, Kellogg's, and McDonald's. During the early 1970s, Schoolhouse Rock was one of several animated educational shorts that aired on ABC's children's lineup; others included Time for Timer and The Bod Squad. Of the three, Schoolhouse Rock was the longest running. George Newall and Tom Yohe were the executive producers and creative directors of every episode, along with Bob Dorough as musical director. This first season was followed in short order by a second season, Grammar Rock, which ran from 1973 to 1975 and covered nouns, verbs, adjectives, and other parts of speech (such as conjunctions, explained in "Conjunction Junction"). For this second season, the show added the services of Jack Sheldon, a member of The Merv Griffin Show house band, as well as Lynn Ahrens; both of them contributed to the series through the res
https://en.wikipedia.org/wiki/Multiple%20dispatch
Multiple dispatch or multimethods is a feature of some programming languages in which a function or method can be dynamically dispatched based on the run-time (dynamic) type or, in the more general case, some other attribute of more than one of its arguments. This is a generalization of single-dispatch polymorphism where a function or method call is dynamically dispatched based on the derived type of the object on which the method has been called. Multiple dispatch routes the dynamic dispatch to the implementing function or method using the combined characteristics of one or more arguments. Understanding dispatch Developers of computer software typically organize source code into named blocks variously called subroutines, procedures, subprograms, functions, or methods. The code in the function is executed by calling it – executing a piece of code that references its name. This transfers control temporarily to the called function; when the function's execution has completed, control is typically transferred back to the instruction in the caller that follows the reference. Function names are usually selected so as to be descriptive of the function's purpose. It is sometimes desirable to give several functions the same name, often because they perform conceptually similar tasks, but operate on different types of input data. In such cases, the name reference at the function call site is not sufficient for identifying the block of code to be executed. Instead, the number and type of the arguments to the function call are also used to select among several function implementations. In more conventional, i.e., single-dispatch object-oriented programming languages, when invoking a method (sending a message in Smalltalk, calling a member function in C++), one of its arguments is treated specially and used to determine which of the (potentially many) classes of methods of that name is to be applied. 
In many languages, the special argument is indicated syntactically; for example, a number of programming languages put the special argument before a dot in making a method call: special.method(other, arguments, here), so that lion.sound() would produce a roar, whereas sparrow.sound() would produce a chirp. In contrast, in languages with multiple dispatch, the selected method is simply the one whose arguments match the number and type of the function call. There is no special argument that owns the function/method carried out in a particular call. The Common Lisp Object System (CLOS) is an early and well-known example of multiple dispatch. Another notable example of the use of multiple dispatch is the Julia programming language. Multiple dispatch should be distinguished from function overloading, in which static typing information, such as a term's declared or inferred type (or base type in a language with subtyping) is used to determine which of several possibilities will be used at a given call site, and that determination is made at compile or link time
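In a language without built-in multiple dispatch, the behavior described above can be emulated with a registry keyed on the runtime types of all arguments. The following Python sketch is illustrative only; the collide example and the Asteroid/Spaceship classes echo the classic CLOS example rather than any real library.

```python
_registry = {}

def register(*types):
    """Register an implementation for an exact combination of argument types."""
    def decorator(fn):
        _registry[types] = fn
        return fn
    return decorator

def collide(a, b):
    # Dispatch on the runtime types of *both* arguments, not just the first.
    fn = _registry.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)

class Asteroid: pass
class Spaceship: pass

@register(Asteroid, Spaceship)
def _(a, b):
    return "asteroid hits spaceship"

@register(Spaceship, Spaceship)
def _(a, b):
    return "spaceship hits spaceship"

print(collide(Asteroid(), Spaceship()))   # asteroid hits spaceship
```

Unlike single dispatch, neither argument "owns" the call: swapping in a different pair of types selects a different implementation from the registry.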
https://en.wikipedia.org/wiki/Common%20Lisp%20Object%20System
The Common Lisp Object System (CLOS) is the facility for object-oriented programming which is part of ANSI Common Lisp. CLOS is a powerful dynamic object system which differs radically from the OOP facilities found in more static languages such as C++ or Java. CLOS was inspired by earlier Lisp object systems such as MIT Flavors and CommonLoops, although it is more general than either. Originally proposed as an add-on, CLOS was adopted as part of the ANSI standard for Common Lisp and has been adapted into other Lisp dialects such as EuLisp or Emacs Lisp. Features The basic building blocks of CLOS are methods, classes, instances of those classes, and generic functions. CLOS provides macros to define those: defclass, defmethod, and defgeneric. Instances are created with the method make-instance. Classes can have multiple superclasses, a list of slots (member variables in C++/Java parlance) and a special metaclass. Slots can be allocated by class (all instances of a class share the slot) or by instance. Each slot has a name and the value of a slot can be accessed by that name using the function slot-value. Additionally special generic functions can be defined to write or read values of slots. Each slot in a CLOS class must have a unique name. CLOS is a multiple dispatch system. This means that methods can be specialized upon any or all of their required arguments. Most OO languages are single-dispatch, meaning that methods are only specialized on the first argument. Another unusual feature is that methods do not "belong" to classes; classes do not provide a namespace for generic functions or methods. Methods are defined separately from classes, and they have no special access (e.g. "this", "self", or "protected") to class slots. Methods in CLOS are grouped into generic functions. 
A generic function is an object which is callable like a function and which associates a collection of methods with a shared name and argument structure, each specialized for different arguments. Since Common Lisp provides non-CLOS classes for structures and built-in data types (numbers, strings, characters, symbols, ...), CLOS dispatch also works with these non-CLOS classes. CLOS also supports dispatch over individual objects (eql specializers). CLOS does not by default support dispatch over all Common Lisp data types (for example, dispatch does not work for fully specialized array types or for types introduced by deftype). However, most Common Lisp implementations provide a metaobject protocol which allows generic functions to provide application-specific specialization and dispatch rules. Dispatch in CLOS is also different from most OO languages: Given a list of arguments, a list of applicable methods is determined. This list is sorted according to the specificity of their parameter specializers. Selected methods from this list are then combined into an effective method using the method combination used by the generic function. The effective method is then called.
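The dispatch procedure just described, collecting the applicable methods and ordering them by the specificity of their parameter specializers, can be illustrated outside Lisp. This Python sketch is a simplified model, not CLOS itself: it dispatches on a single argument only and uses the argument class's method-resolution order as the specificity ranking.

```python
class Object: pass
class Vehicle(Object): pass
class Spaceship(Vehicle): pass

# A "generic function" modeled as (specializer, implementation) pairs.
methods = [
    (Object,    lambda x: "object method"),
    (Vehicle,   lambda x: "vehicle method"),
    (Spaceship, lambda x: "spaceship method"),
]

def applicable_sorted(arg):
    """Return the applicable methods, most specific first (CLOS-style)."""
    cls = type(arg)
    applicable = [(spec, fn) for spec, fn in methods if isinstance(arg, spec)]
    # A specializer is more specific the earlier it appears in the MRO.
    applicable.sort(key=lambda m: cls.__mro__.index(m[0]))
    return applicable

ordered = applicable_sorted(Spaceship())
print([spec.__name__ for spec, _ in ordered])  # ['Spaceship', 'Vehicle', 'Object']
print(ordered[0][1](Spaceship()))              # spaceship method
```

In real CLOS the whole sorted list feeds into the method combination (e.g. call-next-method walks down it); here the most specific entry simply plays the role of the effective method.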
https://en.wikipedia.org/wiki/Generic%20function
In computer programming, a generic function is a function defined for polymorphism. In statically typed languages In statically typed languages (such as C++ and Java), the term generic functions refers to a mechanism for compile-time polymorphism (static dispatch), specifically parametric polymorphism. These are functions defined with TypeParameters, intended to be resolved with compile-time type information. The compiler uses these types to instantiate suitable versions, resolving any function overloading appropriately. In Common Lisp Object System In some systems for object-oriented programming such as the Common Lisp Object System (CLOS) and Dylan, a generic function is an entity made up of all methods having the same name. Typically a generic function is an instance of a class that inherits both from function and standard-object. Thus generic functions are both functions (that can be called with and applied to arguments) and ordinary objects. The book The Art of the Metaobject Protocol explains the implementation and use of CLOS generic functions in detail. One of the early object-oriented programming extensions to Lisp is Flavors. It used the usual message-sending paradigm influenced by Smalltalk. The Flavors syntax to send a message is:

(send object :message)

With New Flavors, it was decided the message should be a real function and the usual function calling syntax should be used:

(message object)

message now is a generic function, an object and function in its own right. Individual implementations of the message are called methods. The same idea was implemented in CommonLoops. New Flavors and CommonLoops were the main influence for the Common Lisp Object System. Example Common Lisp Define a generic function with two parameters object-1 and object-2. The name of the generic function is collide.

(defgeneric collide (object-1 object-2))

Methods belonging to the generic function are defined outside of classes. Here we define a method for the generic function collide which is specialized for the classes asteroid (first parameter object-1) and spaceship (second parameter object-2). The parameters are used as normal variables inside the method body. There is no special namespace that has access to class slots.

(defmethod collide ((object-1 asteroid) (object-2 spaceship))
  (format t "asteroid ~a collides with spaceship ~a" object-1 object-2))

Calling the generic function:

? (collide (make-instance 'asteroid) (make-instance 'spaceship))
asteroid #<ASTEROID 4020003FD3> collides with spaceship #<SPACESHIP 40200048CB>

Common Lisp can also retrieve individual methods from the generic function. FIND-METHOD finds the method from the generic function collide specialized for the classes asteroid and spaceship.

? (find-method #'collide nil (list (find-class 'asteroid) (find-class 'spaceship)))
#<STANDARD-METHOD COLLIDE NIL (ASTEROID SPACESHIP) 4150015E43>

Comparison to other languages Generic functions correspond roughly to what Sma
https://en.wikipedia.org/wiki/86-DOS
86-DOS (known internally as QDOS, for Quick and Dirty Operating System) is a discontinued operating system developed and marketed by Seattle Computer Products (SCP) for its Intel 8086-based computer kit. 86-DOS shared a few of its commands with other operating systems like OS/8 and CP/M, which made it easy to port programs from the latter. Its application programming interface was very similar to that of CP/M. The system was licensed and then purchased by Microsoft and developed further as MS-DOS and PC DOS. History Origins 86-DOS was created because sales of the Seattle Computer Products 8086 computer kit, demonstrated in June 1979 and shipped in November, were languishing due to the absence of an operating system. The only software that SCP could sell with the board was Microsoft's Standalone Disk BASIC-86, which Microsoft had developed on a prototype of SCP's hardware. SCP wanted to offer the 8086 version of CP/M that Digital Research had initially announced for November 1979, but it was delayed and its release date was uncertain. This was not the first time Digital Research had lagged behind hardware developments; two years earlier it had been slow to adapt CP/M for new floppy disk formats and hard disk drives. In April 1980, SCP assigned 24-year-old Tim Paterson to develop a substitute for CP/M-86. Using a CP/M-80 manual as reference, Paterson modeled 86-DOS after its architecture and interfaces, but adapted to meet the requirements of Intel's 8086 16-bit processor, for easy (and partially automated) source-level translatability of the many existing 8-bit CP/M programs; porting them to either DOS or CP/M-86 was about equally difficult, and eased by the fact that Intel had already published a method that could be used to automatically translate software from the Intel 8080 processor, for which CP/M had been designed, to the new 8086 instruction set. At the same time he made a number of changes and enhancements to address what he saw as CP/M's shortcomings. 
CP/M cached file system information in memory for speed, but this required a user to force an update to a disk before removing it; if the user forgot, the disk would become corrupt. Paterson took the safer, but slower approach of updating the disk with each operation. CP/M's PIP command, which copied files, supported several special file names that referred to hardware devices such as printers and communication ports. Paterson built these names into the operating system as device files so that any program could use them. He gave his copying program the more intuitive name COPY. Rather than implementing CP/M's file system, he drew on Microsoft Standalone Disk BASIC-86's File Allocation Table (FAT) file system. By mid-1980 SCP advertised 86-DOS, priced at for owners of its 8086-board and for others. It touted the software's ability to read Zilog Z80 source code from a CP/M disk and translate it to 8086 source code, and promised that only "minor hand correction and optimization" was nee
https://en.wikipedia.org/wiki/No%20Border%20network
The No Border Network (in the United Kingdom also called "No Borders Network" or "Noborders Network") refers to loose associations of autonomous organisations, groups, and individuals in Western Europe, Central Europe, Eastern Europe and beyond. They support freedom of movement and resist human migration control by coordinating international border camps, demonstrations, direct actions, and anti-deportation campaigns. The Western European network opposes what it says are increasingly restrictive harmonisation of asylum and immigration policy in Europe, and aims to build alliances among migrant laborers and refugees. Common slogans used by the Network include: "No Border, No Nation, Stop Deportations!" and "No one is illegal." No Border Network has existed since 1999, and its website since 2000. The No Borders Network in the United Kingdom claims to have local groups in 11 cities. No Border Camps Groups from the No Border network have been involved in organising a number of protest camps (called "No Border Camps" or sometimes "Border Camps" or "Transborder Camps"), e.g. in Strasbourg, France (2002), Otranto, Italy (2003), Cologne (2003, 2012), Gatwick Airport (2007), United Kingdom, at Patras, Greece, Dikili, Turkey (2008), Calais, France (2009, 2015), Lesvos, Greece (2009), Brussels, Belgium (2010), Siva Reka, Bulgaria (2011), Stockholm, Sweden (2012), Rotterdam, the Netherlands (2013), Ventimiglia, Italy (2015), Thessaloniki, Greece (2016), near Nantes, France (2019), in Wassenaar, Netherlands (2019), near Nantes, France (2022), and in Rotterdam, Netherlands (2022). Activities On 18 December 2007, to coincide with the UN International Migrants Day, the network carried out a coordinated blockade of Border and Immigration Agency (now UK Border Agency) offices in Bristol, Portsmouth, Newcastle and Glasgow to prevent dawn raids by immigration officers from taking place. This form of action has been repeated across the UK by the network several times since. 
On 24 October 2008, Phil Woolas, UK Minister of State for Borders and Immigration, was pied by No Borders activists following his remarks on population control. On 10 August 2013, No Border groups from the Netherlands squatted a large site in Rotterdam to gather and held several demonstrations. In February 2010 No Borders groups from the UK and France opened a large centre for refugees sleeping rough in Calais, France, under the name "Kronstadt Hangar". Calais authorities have accused "extremist activists" within the No Borders network of being "driven by an anarchist ideology of hatred of all laws and frontiers" and engaging in, and encouraging, violence and harassment against French police and social workers at the Calais Jungle migrant camp, as well as "manipulating" and "misleading" the migrants living there. After the intercultural philosophy journal "polylog" demanded in connection with the book "Global Freedom of Movement: A Philosophical Plea for Open Borders" that the "debat
https://en.wikipedia.org/wiki/Name%20binding
In programming languages, name binding is the association of entities (data and/or code) with identifiers. An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name-object bindings, as a service and notation for the programmer, are implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which one of the possible execution paths (temporally). Use of an identifier in a context that establishes a binding for it is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences. Binding time Static binding (or early binding) is name binding performed before the program is run. Dynamic binding (or late binding or virtual binding) is name binding performed as the program is running. An example of static binding is a direct C function call: the function referenced by the identifier cannot change at runtime. An example of dynamic binding is dynamic dispatch, as in a C++ virtual method call. Since the specific type of a polymorphic object is not known before runtime (in general), the executed function is dynamically bound. Take, for example, the following Java code:

public void foo(java.util.List<String> list) {
    list.add("bar");
}

List is an interface, so list must refer to a subtype of it. list may reference a LinkedList, an ArrayList, or some other subtype of List. The method referenced by add is not known until runtime. In C, which does not have dynamic binding, a similar goal may be achieved by a call to a function pointed to by a variable or expression of a function pointer type, whose value is unknown until it is evaluated at run-time. 
Rebinding and mutation Rebinding should not be confused with mutation or assignment. Rebinding is a change to the referencing identifier. Assignment is a change to (the referenced) variable. Mutation is a change to an object in memory, possibly referenced by a variable or bound to an identifier. Consider the following Java code:

LinkedList<String> list;
list = new LinkedList<String>();
list.add("foo");
list = null;
{
    LinkedList<Integer> list = new LinkedList<Integer>();
    list.add(Integer.valueOf(2));
}

The identifier list is bound to a variable in the first line; in the second, a reference to an object (a linked list of strings) is assigned to the variable. The linked list referenced by the variable is then mutated, adding a string to the list. Next, the variable is assigned the constant null. In the last line, the identifier is rebound for the scope of the block. Operations within the block access a new variable and not the variable previously bound to list. Late static Late static binding is a variant of binding somewhere between static and dynamic binding.
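The same distinctions are easy to observe in a language like Python; the short sketch below (illustrative, not from the article) separates mutation of a shared object from rebinding of a name:

```python
a = ["foo"]       # bind the name a to a new list object
b = a             # bind b to the very same object (no copy is made)
b.append("bar")   # mutation: the one shared list changes, visible via both names
assert a == ["foo", "bar"]

b = ["baz"]       # rebinding: b now names a different object...
assert a == ["foo", "bar"]   # ...while the original list is unaffected
assert b == ["baz"]
```

Mutation through either name is visible through the other, because both names reference the same object; rebinding one name leaves the other untouched.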
https://en.wikipedia.org/wiki/Compile%20time
In computer science, compile time (or compile-time) describes the time window during which a language's statements are converted into binary instructions for the processor to execute. The term is used as an adjective to describe concepts related to the context of program compilation, as opposed to concepts related to the context of program execution (runtime). For example, compile-time requirements are programming language requirements that must be met by source code before compilation and compile-time properties are properties of the program that can be reasoned about during compilation. The actual length of time it takes to compile a program is usually referred to as compilation time. Compile time/Early binding vs Run time The execution model is fixed at compile time; the method of execution and allocation is determined at run time and depends on run-time conditions. Overview Most compilers have at least the following compiler phases (which therefore occur at compile-time): syntax analysis, semantic analysis, and code generation. During optimization phases, constant expressions in the source code can also be evaluated at compile-time using compile-time execution, which reduces the constant expressions to a single value. This is not necessary for correctness, but improves program performance during runtime. Programming language definitions usually specify compile time requirements that source code must meet to be successfully compiled. For example, languages may stipulate that the amount of storage required by types and variables can be deduced. Properties of a program that can be reasoned about at compile time include range-checks (e.g., proving that an array index will not exceed the array bounds), deadlock freedom in concurrent languages, or timings (e.g., proving that a sequence of code takes no more than an allocated amount of time). 
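Compile-time evaluation of constant expressions can be observed directly in CPython, whose bytecode compiler folds constant arithmetic before the program runs. This is an implementation detail of CPython's optimizer, shown here purely as an illustration:

```python
# Compile an expression to a code object without executing it.
code = compile("2 * 3 + 4", "<expr>", "eval")

# The optimizer has already reduced the arithmetic to a single constant,
# so the code object carries the folded result rather than 2, 3 and 4.
print(code.co_consts)  # (10,) on recent CPython versions

print(eval(code))      # 10
```

As the text notes, this folding is not required for correctness (evaluating the expression at runtime gives the same answer); it simply moves work from runtime to compile time.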
Compile time occurs before link time (when the outputs of one or more compiled files are joined) and runtime (when a program is executed), although in the case of dynamic compilation the final transformations into machine language happen at runtime. There is a trade-off between compile time and link time in that many compile-time operations can be deferred to link time without incurring run-time cost. See also Link time Run time (program lifecycle phase) Compiling Type system Dynamic compilation Just in time compilation
https://en.wikipedia.org/wiki/Handheld%20Device%20Markup%20Language
The Handheld Device Markup Language (HDML) is a markup language intended for display on handheld computers, information appliances, smartphones, etc. It is similar to HTML, but targets wireless and handheld devices with small displays, such as PDAs and mobile phones. It was originally developed in about 1996 by Unwired Planet, the company that became Phone.com and then Openwave. HDML was submitted to the W3C for standardization, but was not turned into a standard. Instead it became an important influence on the development and standardization of WML, which then replaced HDML in practice. Unlike WML, HDML has no support for scripts. See also Wireless Application Protocol List of document markup languages Comparison of document markup languages
https://en.wikipedia.org/wiki/Camino%20%28web%20browser%29
Camino (from the Spanish word meaning "path") is a discontinued free, open source, GUI-based Web browser based on Mozilla's Gecko layout engine and specifically designed for the OS X operating system. In place of an XUL-based user interface used by most Mozilla-based applications, Camino used Mac-native Cocoa APIs. On May 30, 2013, the Camino Project announced that the browser was no longer being developed. As Camino's aim was to integrate as well as possible with OS X, it used the Aqua user interface and integrated a number of OS X services and features such as the Keychain for password management and Bonjour for scanning available bookmarks across the local network. Other notable features included an integrated pop-up blocker and ad blocker, and tabbed browsing that included an overview feature allowing tabs to be viewed all at once as pages. The browser was developed by the Camino Project, a community organization. Mike Pinkerton had been the technical lead of the Camino project since Dave Hyatt moved to the Safari team at Apple Inc. in mid-2002. History In late 2001, Mike Pinkerton and Vidur Apparao started a project within Netscape to prove that Gecko could be embedded in a Cocoa application. In early 2002 Dave Hyatt, one of the co-creators of Firefox (then called Phoenix), joined the team and built Chimera, a small, lightweight browser wrapper, around their work. "Chimera" is a mythological beast with parts taken from various animals and as the new browser represented an early example of Carbon/C++ code interacting with Cocoa/Objective-C code, the name must have seemed apt. The first downloadable build of Chimera 0.1 was released on February 13, 2002. The early releases became popular due to their fast page-loading speeds (as compared with the then-dominant Mac browser, Microsoft's Internet Explorer version 5, or OmniGroup's OmniWeb, which then used the Cocoa text system as its rendering engine). 
Hyatt was hired by Apple Computer in mid-2002 to start work on what would become Safari. Meanwhile, the Chimera developers got a small team together within Netscape, with dedicated development and QA, to put together a Netscape-branded technology preview for the January 2003 Macworld Conference. However, two days before the show, AOL management decided to abandon the entire project. Despite this setback, a skeleton crew of QA and developers released Camino 0.7 on March 3, 2003. The name was changed from Chimera to Camino for legal reasons. Because of its roots in Greek mythology, Chimera has been a popular choice of name for hypermedia systems. One of the first graphical web browsers was called Chimera, and researchers at the University of California, Irvine, have also developed a complete hypermedia system of the same name. Camino is Spanish for "path" or "road" (as in El Camino Real, aka the Royal Road), and the name was chosen to continue the "Navigator" motif. While version 0.7 was primarily a Netscape-driven release kept afloat at the end
https://en.wikipedia.org/wiki/RADIUS
Remote Authentication Dial-In User Service (RADIUS) is a networking protocol that provides centralized authentication, authorization, and accounting (AAA) management for users who connect and use a network service. RADIUS was developed by Livingston Enterprises in 1991 as an access server authentication and accounting protocol. It was later brought into IEEE 802 and IETF standards. RADIUS is a client/server protocol that runs in the application layer, and can use either TCP or UDP. Network access servers, which control access to a network, usually contain a RADIUS client component that communicates with the RADIUS server. RADIUS is often the back-end of choice for 802.1X authentication. A RADIUS server is usually a background process running on UNIX or Microsoft Windows. Protocol components RADIUS is an AAA (authentication, authorization, and accounting) protocol that manages network access. RADIUS uses two types of packets to manage the full AAA process: Access-Request, which manages authentication and authorization; and Accounting-Request, which manages accounting. Authentication and authorization are defined in RFC 2865 while accounting is described by RFC 2866. Authentication and authorization The user or machine sends a request to a Network Access Server (NAS) to gain access to a particular network resource using access credentials. The credentials are passed to the NAS device via the link-layer protocol—for example, Point-to-Point Protocol (PPP) in the case of many dialup or DSL providers or posted in an HTTPS secure web form. In turn, the NAS sends a RADIUS Access Request message to the RADIUS server, requesting authorization to grant access via the RADIUS protocol. This request includes access credentials, typically in the form of username and password or security certificate provided by the user. 
Additionally, the request may contain other information which the NAS knows about the user, such as its network address or phone number, and information regarding the user's physical point of attachment to the NAS. The RADIUS server checks that the information is correct using authentication schemes such as PAP, CHAP or EAP. The user's proof of identification is verified, along with, optionally, other information related to the request, such as the user's network address or phone number, account status, and specific network service access privileges. Historically, RADIUS servers checked the user's information against a locally stored flat file database. Modern RADIUS servers can do this, or can refer to external sources—commonly SQL, Kerberos, LDAP, or Active Directory servers—to verify the user's credentials. The RADIUS server then returns one of three responses to the NAS: 1) Access Reject, 2) Access Challenge, or 3) Access Accept. Access Reject The user is unconditionally denied access to all requested network resources. Reasons may include failure to provide proof of identification or an unknown or inactive user account. Access Ch
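The Access-Request flow described above can be sketched in code. The helper below builds a minimal RADIUS Access-Request as defined in RFC 2865: a 20-byte header (code 1 = Access-Request, identifier, length, 16-byte Request Authenticator) followed by User-Name (type 1) and User-Password (type 2) attributes, with the PAP password obfuscated by XORing 16-byte blocks against MD5(shared secret + authenticator). The username, password, and shared secret here are illustrative; a real deployment would use a maintained implementation such as FreeRADIUS or the pyrad library rather than this sketch.

```python
import hashlib
import os
import struct

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Obfuscate a PAP password per RFC 2865 section 5.2 (User-Password)."""
    # Pad the password with NUL bytes to a multiple of 16.
    padded = password + b"\x00" * (-len(password) % 16)
    result = b""
    prev = authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ d for p, d in zip(padded[i:i + 16], digest))
        result += block
        prev = block  # chaining: next block keys off the previous ciphertext
    return result

def build_access_request(identifier: int, username: str,
                         password: str, secret: bytes) -> bytes:
    """Assemble a minimal Access-Request packet (header + two attributes)."""
    authenticator = os.urandom(16)  # 16 random bytes: the Request Authenticator
    attrs = b""
    uname = username.encode()
    attrs += bytes([1, 2 + len(uname)]) + uname  # User-Name, type 1
    hidden = hide_password(password.encode(), secret, authenticator)
    attrs += bytes([2, 2 + len(hidden)]) + hidden  # User-Password, type 2
    length = 20 + len(attrs)  # header (4) + authenticator (16) + attributes
    header = struct.pack("!BBH", 1, identifier, length)  # code 1 = Access-Request
    return header + authenticator + attrs
```

The NAS would send this datagram to the RADIUS server (conventionally UDP port 1812) and match the server's Access-Accept, Access-Reject, or Access-Challenge reply by identifier.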
https://en.wikipedia.org/wiki/Timeline%20of%20quantum%20computing%20and%20communication
This is a timeline of quantum computing. 1960s 1968 Stephen Wiesner invented conjugate coding (published in ACM SIGACT News 15(1):78–88). 1970s 1970 James Park articulated the no-cloning theorem. 1973 Alexander Holevo published a paper showing that n qubits can carry more than n classical bits of information, but at most n classical bits are accessible (a result known as "Holevo's theorem" or "Holevo's bound"). Charles H. Bennett showed that computation can be done reversibly. 1975 R. P. Poplavskii published "Thermodynamical models of information processing" (in Russian) which showed the computational infeasibility of simulating quantum systems on classical computers, due to the superposition principle. 1976 Polish mathematical physicist Roman Stanisław Ingarden published the paper "Quantum Information Theory" in Reports on Mathematical Physics, vol. 10, 43–72, 1976 (The paper was submitted in 1975). It is one of the first attempts at creating a quantum information theory, showing that Shannon information theory cannot directly be generalized to the quantum case, but rather that it is possible to construct a quantum information theory, which is a generalization of Shannon's theory, within the formalism of a generalized quantum mechanics of open systems and a generalized concept of observables (the so-called semi-observables). 1980s 1980 Paul Benioff described the first quantum mechanical model of a computer. In this work, Benioff showed that a computer could operate under the laws of quantum mechanics by describing a Schrödinger equation description of Turing machines, laying a foundation for further work in quantum computing. The paper was submitted in June 1979 and published in April 1980. Yuri Manin briefly motivated the idea of quantum computing. Tommaso Toffoli introduced the reversible Toffoli gate, which (together with initialized ancilla bits) is functionally complete for reversible classical computation. 
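Toffoli's 1980 result can be illustrated in a few lines of Python. The gate flips its target bit only when both control bits are 1; it is its own inverse (hence reversible), and with the target fixed to an initialized ancilla of 0 it computes AND, while pinning both controls to 1 yields NOT, which together suffice for any classical Boolean function. This is an illustrative sketch of the well-known construction, not code from any particular source.

```python
def toffoli(a: int, b: int, c: int) -> tuple:
    """CCNOT gate: flip target c iff both controls a and b are 1."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice restores every input.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)

# With the target initialized to 0 (an ancilla bit), the gate computes AND.
def and_gate(a: int, b: int) -> int:
    return toffoli(a, b, 0)[2]

# With both controls pinned to 1, the gate computes NOT of the target.
def not_gate(c: int) -> int:
    return toffoli(1, 1, c)[2]

# NAND = NOT(AND): a functionally complete gate built from Toffoli alone.
def nand_gate(a: int, b: int) -> int:
    return not_gate(and_gate(a, b))
```

Because every output determines its input, no information is erased, which is what makes the gate compatible with reversible (and hence quantum) computation.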
1981 At the First Conference on the Physics of Computation, held at the Massachusetts Institute of Technology (MIT) in May, Paul Benioff and Richard Feynman gave talks on quantum computing. Benioff's built on his earlier 1980 work showing that a computer can operate under the laws of quantum mechanics. The talk was titled “Quantum mechanical Hamiltonian models of discrete processes that erase their own histories: application to Turing machines”. In Feynman's talk, he observed that it appeared to be impossible to efficiently simulate an evolution of a quantum system on a classical computer, and he proposed a basic model for a quantum computer. 1982 Paul Benioff further developed his original model of a quantum mechanical Turing machine. William Wootters and Wojciech Zurek, and independently Dennis Dieks rediscovered the no-cloning theorem of James Park. 1984 Charles Bennett and Gilles Brassard employed Wiesner's conjugate coding for distribution of cryptographic keys. 1985 David Deutsch, at the University of Ox
https://en.wikipedia.org/wiki/Electric%20Image%20Animation%20System
The Electric Image Animation System (EIAS) is a 3D computer graphics package published by EIAS3D. It currently runs on the macOS and Windows platforms. History Electric Image, Inc. was initially a visual effects production company. They developed their own in-house 3D animation and rendering package for the Macintosh beginning in the late 1980s, calling it ElectricImage Animation System. (To avoid confusion with the similarly named current product, this initial incarnation of the product is referred to here simply as ElectricImage.) When the company later decided to offer their software for sale externally, it quickly gained a customer base that lauded the developers for the software's exceptionally fast rendering engine and high image quality. Because it was capable of film-quality output on commodity hardware, ElectricImage was popular in the movie and television industries throughout the decade. It was used extensively by the "Rebel Unit" at Industrial Light and Magic and appeared in a variety of games, such as Bad Mojo and Bad Day on the Midway. However, only such high-end effects companies could afford it: Electric Image initially sold for US$7,500.
EIAS has been used in numerous film and television productions, such as: Piranha 3D, Alien Trespass, Pirates of the Caribbean: The Curse of the Black Pearl, Daddy Day Care, K-19: The Widowmaker, Gangs of New York, Austin Powers: Goldmember, Men In Black II, The Bourne Identity, Behind Enemy Lines, Time Machine, Ticker, JAG - Pilot Episode, Spawn, Star Trek: First Contact, Star Trek: Insurrection, Galaxy Quest, Mission to Mars, Austin Powers: The Spy Who Shagged Me, Star Wars Episode 1: The Phantom Menace, Titan A.E., U-571, Dinosaur, Terminator 2: Judgment Day, Terminator 2: Judgment Day - DVD Intro, Jungle Book 2, American President, Sleepers, Star Wars Special Edition, Empire Strikes Back Special Edition, Return of Jedi Special Edition, Bicentennial Man, Vertical Limit, Elf, Blade Trinity, and Lost In Space. TV shows: Revolution, Breaking Bad, Alcatraz, Pan Am, The Whole Truth, Lost, FlashForward, Fringe, Surface, Weeds, Pushing Daisies, The X-Files, Alias, Smallville, Star Trek: The Next Generation, Babylon 5, Young Indiana Jones, Star Trek: Voyager, Mists of Avalon, and Star Trek: Enterprise. Electric Image, Inc. was always a small company that produced software on the Mac platform and so never had a large market share. Play, Inc. purchased the Electric Image corporation in November 1998. The first version of EIAS released under the Play moniker was version 2.9. Play later released the 3.0 version. This was the first version to run on Windows, and to mark this move, Play renamed the package Electric Image Universe. Play was never a greatly successful company, and so Electric Image Universe stagnated during the time they owned it. In 2000, Dwight Parscale (former CEO of Newtek) and original Electric Image founders Markus Houy and Jay Roth bought back the original company f
https://en.wikipedia.org/wiki/CD%20player
A CD player is an electronic device that plays audio compact discs, which are a digital optical disc data storage format. CD players were first sold to consumers in 1982. CDs typically contain recordings of audio material such as music or audiobooks. CD players may be part of home stereo systems, car audio systems, personal computers, or portable CD players such as CD boomboxes. Most CD players produce an output signal via a headphone jack or RCA jacks. To use a CD player in a home stereo system, the user connects an RCA cable from the RCA jacks to a hi-fi (or other amplifier) and loudspeakers for listening to music. To listen to music using a CD player with a headphone output jack, the user plugs headphones or earphones into the headphone jack. Modern units can play audio formats other than the original CD PCM audio coding, such as MP3, AAC and WMA. DJs playing dance music at clubs often use specialized players with an adjustable playback speed to alter the pitch and tempo of the music. Audio engineers using CD players to play music for an event through a sound reinforcement system use professional audio-grade CD players. CD playback functionality is also available on CD-ROM/DVD-ROM drive equipped computers as well as on DVD players and most optical disc-based home video game consoles. History American inventor James T. Russell is known for inventing the first system to record digital video information on an optically transparent foil that is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966, and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's recording patents (then held by a Canadian company, Optical Recording Corp.) in the 1980s. The compact disc is not based on Russell's invention; rather, it is an evolution of LaserDisc technology, whose focused laser beam enables the high information density required for high-quality digital audio signals.
Prototypes were developed by Philips and Sony independently in the late 1970s. In 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. After a year of experimentation and discussion, the Red Book CD-DA standard was published in 1980. After their commercial release in 1982, compact discs and their players were extremely popular. Despite costing up to $1,000, over 400,000 CD players were sold in the United States between 1983 and 1984. The success of the compact disc has been credited to the cooperation between Philips and Sony, who came together to agree upon and develop compatible hardware. The unified design of the compact disc allowed consumers to purchase any disc or player from any company, and allowed the CD to dominate the at-home music market unchallenged. The Sony CDP-101, released in 1982, was the world's first commercially released compact disc player. Unlike early LaserDisc players, the first CD players already used laser diodes instead of larger heliu
https://en.wikipedia.org/wiki/Self-hosting
Self-hosting may refer to: Self-hosting (compilers), a computer program that produces new versions of that same program Self-hosting (web services), the practice of running and maintaining a website using a private web server See also Self-booting disk
https://en.wikipedia.org/wiki/Runtime%20%28program%20lifecycle%20phase%29
In computer science, runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program. A runtime error is detected after or during the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed. Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Many other runtime errors exist and are handled differently by different programming languages: division by zero, domain errors, array subscripts out of bounds, arithmetic underflow, several types of overflow, and many other errors generally considered software bugs, which may or may not be caught and handled by any particular computer language. Implementation details When a program is to be executed, a loader first performs the necessary memory setup and links the program with any dynamically linked libraries it needs, and then the execution begins starting from the program's entry point. In some cases, a language or implementation will have these tasks done by the language runtime instead, though this is unusual in mainstream languages on common consumer operating systems. Some program debugging can only be performed (or is more efficient or accurate when performed) at runtime. Logic errors and array bounds checking are examples. For this reason, some programming bugs are not discovered until the program is tested in a production environment with real data, despite sophisticated compile-time checking and pre-release testing. In this case, the end-user may encounter a "runtime error" message.
Application errors (exceptions) Exception handling is one language feature designed to handle runtime errors, providing a structured way to catch completely unexpected situations as well as predictable errors or unusual results without the amount of inline error checking required of languages without it. More recent advancements in runtime engines enable automated exception handling which provides "root-cause" debug information for every exception of interest and is implemented independent of the source code, by attaching a special software product to the runtime engine. See also Compile time and compiling Interpreter (computing) Runtime type information Runtime system Runtime library
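The contrast between compile-time and runtime detection, and the structured handling described above, can be sketched in Python, where division by zero and out-of-bounds indexing only surface when the offending line actually executes (the function names below are illustrative):

```python
def safe_divide(numerator, denominator):
    """Division by zero is a runtime error: it is raised only on execution."""
    try:
        return numerator / denominator
    except ZeroDivisionError:
        return None  # structured recovery instead of terminating the program

def safe_index(items, i):
    """Out-of-bounds subscripts likewise raise only at runtime."""
    try:
        return items[i]
    except IndexError:
        return None

assert safe_divide(10, 2) == 5.0
assert safe_divide(10, 0) is None       # caught at runtime; execution continues
assert safe_index([1, 2, 3], 99) is None
```

A statically checked language would reject some of these errors at compile time, but the general case (e.g. a denominator computed from user input) can only be detected while the program runs.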
https://en.wikipedia.org/wiki/Wireless%20access%20point
In computer networking, a wireless access point, or more generally just access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a wired network. As a standalone device, the AP may have a wired connection to a router, but, in a wireless router, it can also be an integral component of the router itself. An AP is differentiated from a hotspot, which is a physical location where Wi-Fi access is available. Although the abbreviation WAP is sometimes used incorrectly for an access point, WAP properly stands for Wireless Application Protocol, which describes a protocol rather than a physical device. Connections An AP connects directly to a wired local area network, typically Ethernet, and the AP then provides wireless connections using wireless LAN technology, typically Wi-Fi, for other devices to use that wired connection. APs support the connection of multiple wireless devices through their one wired connection. Wireless data standards There are many wireless data standards that have been introduced for wireless access point and wireless router technology. New standards have been created to accommodate the increasing need for faster wireless connections. Some wireless routers provide backward compatibility with older Wi-Fi technologies, as many devices were manufactured for use with older standards. 802.11a 802.11b 802.11g 802.11n (Wi-Fi 4) 802.11ac (Wi-Fi 5) 802.11ax (Wi-Fi 6) Wireless access point vs. ad hoc network Some people confuse wireless access points with wireless ad hoc networks. An ad hoc network uses a connection between two or more devices without using a wireless access point; the devices communicate directly when in range. Because setup is easy and does not require an access point, an ad hoc network is used in situations such as a quick data exchange or a multiplayer video game. Due to its peer-to-peer layout, ad hoc Wi-Fi connections are similar to connections available using Bluetooth.
Ad hoc connections are generally not recommended for a permanent installation. Internet access via ad hoc networks, using features like Windows' Internet Connection Sharing, may work well with a small number of devices that are close to each other, but ad hoc networks do not scale well. Internet traffic will converge to the nodes with direct internet connection, potentially congesting these nodes. For internet-enabled nodes, access points have a clear advantage, with the possibility of having a wired LAN. Limitations It is generally recommended that one IEEE 802.11 AP should have, at a maximum, 10 to 25 clients. However, the actual maximum number of clients that can be supported can vary significantly depending on several factors, such as the type of APs in use, the density of the client environment, and the desired client throughput. The range of communication can also vary significantly, depending on such variables as indoor or outdoor placement, height above ground, nearby obstructions, other electronic devices that might acti
https://en.wikipedia.org/wiki/Deprecation
In several fields, especially computing, deprecation is the discouragement of use of some terminology, feature, design, or practice, typically because it has been superseded or is no longer considered efficient or safe, without completely removing it or prohibiting its use. Typically, deprecated materials are not completely removed, in order to ensure legacy compatibility or to provide a fallback in case new methods fail in unusual scenarios. It can also imply that a feature, design, or practice will be removed or discontinued entirely in the future. Etymology In general English usage, the infinitive "to deprecate" means "to express disapproval of (something)". It derives from the Latin verb deprecari, meaning "to ward off (a disaster) by prayer". An early documented usage of "deprecate" in this sense is in Usenet posts in 1984, referring to obsolete features in 4.2BSD and the C programming language. An expanded definition of "deprecate" was cited in the Jargon File in its 1991 revision, and similar definitions are found in commercial software documentation from 2014 and 2023. Software While a deprecated software feature remains in the software, its use may raise warning messages recommending alternative practices. Deprecated status may also indicate the feature will be removed in the future. Features are deprecated, rather than immediately removed, to provide backward compatibility and to give programmers time to bring affected code into compliance with the new standard. Among the most common reasons for deprecation are: The feature has been replaced by a more powerful alternative feature. For instance, the Linux kernel contains two modules to communicate with Windows networks: smbfs and cifs. The latter provides better security, supports more protocol features, and integrates better with the rest of the kernel. Since the inclusion of cifs, smbfs has been deprecated.
The feature contains a design flaw, frequently a security flaw, and so should be avoided, but existing code depends upon it. The simple C standard function gets() is an example, because using this function can introduce a buffer overflow into the program that uses it. The Java API methods Thread.stop, .suspend and .resume are further examples. The feature is considered extraneous, and will be removed in the future in order to simplify the system as a whole. Early versions of the Web markup language HTML included a FONT element to allow page designers to specify the font in which text should be displayed. With the release of Cascading Style Sheets and HTML 4.0, the FONT element became extraneous, and detracted from the benefits of noting structural markup in HTML and graphical formatting in CSS. Thus, the FONT element was deprecated in the Transitional HTML 4.0 standard, and eliminated in the Strict variant. A future version of the software will make major structural changes, making it impossible (or impractical) to support older features. For instance, when Apple Inc. planned the
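In code, deprecation typically looks like the sketch below, using Python's standard `warnings` module (the function names `old_api` and `new_api` are hypothetical): the old entry point keeps working for backward compatibility, but emits a `DeprecationWarning` steering callers toward the replacement.

```python
import warnings

def new_api(x):
    """The replacement feature."""
    return x * 2

def old_api(x):
    """Deprecated: retained so existing callers keep working."""
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return new_api(x)

# The deprecated function still runs, but the warning is observable:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)

assert result == 42
assert caught[0].category is DeprecationWarning
```

A later release can then remove `old_api` entirely, after callers have had time to migrate, which mirrors the deprecate-then-remove life cycle described above.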
https://en.wikipedia.org/wiki/DHT
DHT may refer to: Science and technology Discrete Hartley transform, in mathematics Distributed hash table, a lookup service in computing Chemistry Dihydrotestosterone, a hormone derived from testosterone Dihydrotachysterol, a synthetic vitamin D analog Other DHT (band), Belgian dance duo Dr Hadwen Trust, a UK charity promoting alternatives to animal experiments Dalhart Municipal Airport (IATA code DHT), an airport near Dalhart, Texas Grande Prairie Daily Herald-Tribune, a newspaper in Canada David Hume Tower, the former name of 40 George Square, a University of Edinburgh building See also
https://en.wikipedia.org/wiki/Financial%20Crimes%20Enforcement%20Network
The Financial Crimes Enforcement Network (FinCEN) is a bureau of the United States Department of the Treasury that collects and analyzes information about financial transactions in order to combat domestic and international money laundering, terrorist financing, and other financial crimes. Mission FinCEN's director expressed its mission in November 2013 as "to safeguard the financial system from illicit use, combat money laundering and promote national security." FinCEN serves as the U.S. Financial Intelligence Unit (FIU) and is one of 147 FIUs making up the Egmont Group of Financial Intelligence Units. FinCEN's self-described motto is "follow the money." The website states: "The primary motive of criminals is financial gain, and they leave financial trails as they try to launder the proceeds of crimes or attempt to spend their ill-gotten profits." It is a network bringing people and information together, by coordinating information sharing with law enforcement agencies, regulators and other partners in the financial industry. History FinCEN was established by order of the Secretary of the Treasury (Treasury Order Numbered 105-08) on April 25, 1990. In May 1994, its mission was broadened to include regulatory responsibilities, and in October 1994 the Treasury Department's precursor of FinCEN, the Office of Financial Enforcement, was merged with FinCEN. On September 26, 2002, after Title III of the PATRIOT Act was passed, Treasury Order 180-01 made it an official bureau in the Department of the Treasury. Since 1995, FinCEN has employed the FinCEN Artificial Intelligence System (FAIS). In September 2012, FinCEN's information technology system, the FinCEN Portal and Query System, migrated with 11 years of data into FinCEN Query, a search engine similar to Google. It is a "one stop shop" accessible via the FinCEN Portal, allowing broad searches across more fields than before and returning more results.
Since September 2012, FinCEN has generated four new reports: the Suspicious Activity Report (FinCEN SAR), the Currency Transaction Report (FinCEN CTR), the Designation of Exempt Person (DOEP), and the Registered Money Service Business (RMSB). Organization As of November 2013, FinCEN employed approximately 340 people, mostly intelligence professionals with expertise in the financial industry, illicit finance, financial intelligence, the AML/CFT (anti-money laundering/countering the financing of terrorism) regulatory regime, computer technology, and enforcement. The majority of the staff are permanent FinCEN personnel, with about 20 long-term detailees assigned from 13 different regulatory and law enforcement agencies. FinCEN shares information with dozens of intelligence agencies, including the Bureau of Alcohol, Tobacco, and Firearms; the Drug Enforcement Administration; the Federal Bureau of Investigation; the U.S. Secret Service; the Internal Revenue Service; the Customs Service; and the U.S. Postal Inspection Service. FinCEN directors Brian M. Bruh (1990–1993) Stanley E. Morris (1994–1998) J
https://en.wikipedia.org/wiki/IBM%20701
The IBM 701 Electronic Data Processing Machine, known as the Defense Calculator while in development, was IBM’s first commercial scientific computer and its first series production mainframe computer, announced to the public on May 21, 1952. It was designed and developed by Jerrier Haddad and Nathaniel Rochester, based on the IAS machine at Princeton. The IBM 701 was the first computer in the IBM 700/7000 series, which were IBM’s high-end computers until the arrival of the IBM System/360 in 1964. The business-oriented sibling of the 701 was the IBM 702, and a lower-cost general-purpose sibling was the IBM 650, which gained fame as the first mass-produced computer. History The IBM 701 competed in the scientific computation market with Remington Rand's UNIVAC 1103, which had been developed for the NSA and so was held secret until permission to market it was obtained in 1951. In early 1954, a committee of the Joint Chiefs of Staff requested that the two machines be compared for the purpose of using them for a Joint Numerical Weather Prediction project. Based on the trials, the two machines had comparable computational speed, with a slight advantage for IBM's machine, but the UNIVAC was favored unanimously for its significantly faster input-output equipment. Nineteen IBM 701 systems were installed. The first 701 was delivered to IBM's world headquarters in New York. Eight went to aircraft companies. At the Lawrence Livermore National Laboratory, having an IBM 701 meant that scientists could run nuclear explosives computations faster. "I think there is a world market for maybe five computers" is often attributed to Thomas Watson Sr., chairman and CEO of IBM, in 1943. This misquote may stem from a statement by his son, Thomas Watson Jr., at the 1953 IBM annual stockholders' meeting. Watson Jr. was describing the market acceptance of the IBM 701 computer. Before production began, Watson visited 20 companies that were potential customers.
This is what he said at the stockholders' meeting: "As a result of our trip, on which we expected to get orders for five machines, we came home with orders for 18." Aviation Week for 11 May 1953 says the 701 rental charge was about $12,000 a month; American Aviation for 9 Nov 1953 says "$15,000 a month per 40-hour shift. A second 40-hour shift ups the rental to $20,000 a month". The successor of the 701 was the index register-equipped IBM 704, introduced four years after the 701. The 704 was not compatible with the 701, however, as the 704 increased the size of instructions from 18 bits to 36 bits to support the extra features. The 704 also marked the transition to magnetic core memory. Social impact In 1952, IBM paired with language scholars from Georgetown University to develop translation software for use on computers. On January 7, 1954, the team demonstrated an experimental software program that allowed the IBM 701 computer to translate from Russian to English. The Mark 1 Translating Device, which was develop
https://en.wikipedia.org/wiki/History%20of%20the%20British%20canal%20system
The canal network of the United Kingdom played a vital role in the Industrial Revolution. The UK was the first country to develop a nationwide canal network which, at its peak, expanded to nearly in length. The canals allowed raw materials to be transported to a place of manufacture, and finished goods to be transported to consumers, more quickly and cheaply than by a land based route. The canal network was extensive and included feats of civil engineering such as the Anderton Boat Lift, the Manchester Ship Canal, the Worsley Navigable Levels and the Pontcysyllte Aqueduct. In the post-medieval period, some rivers were canalised for boat traffic. The Exeter Ship Canal was completed in 1567. The Sankey Canal was the first British canal of the Industrial Revolution, opening in 1757. The Bridgewater Canal followed in 1761 and proved to be highly profitable. The majority of the network was built in the "Golden Age" of canals, between the 1770s and the 1830s. From 1840 the canals began to decline, because the growing railway network was a more efficient means of transporting goods. From the beginning of the 20th century the road network became progressively more important; canals became uneconomic and were abandoned. In 1948, much of the network was nationalised. Since then, canals have been increasingly used for recreation and tourism. Different types of boat used the canals: the most common was the traditional narrowboat, painted in the Roses and Castles design. At the outset the boats were towed by horses, but later they were driven by diesel engines. Some closed canals have been restored, and canal museums have opened. History Post-medieval transport systems In the post-medieval period, some natural waterways were "canalised" or improved for boat traffic in the 16th century. 
The first Act of Parliament was obtained by the City of Canterbury in 1515, to extend navigation on the River Stour in Kent, followed by the River Exe in 1539, which led to the construction in 1566 of a new channel, the Exeter Canal. Simple flash locks were provided to regulate the flow of water and allow loaded boats to pass through shallow waters by admitting a rush of water, but these were not purpose-built canals as we understand them today. The transport system that existed before the canals were built consisted of coastal shipping and horses and carts struggling along mostly unsurfaced mud roads (although there were some surfaced turnpike roads). There was also a small amount of traffic carried along navigable rivers. In the 17th century, as early industry started to expand, this transport situation was highly unsatisfactory. The restrictions of coastal shipping and river transport were obvious, and horses and carts could only carry one or two tons of cargo at a time. The poor state of most of the roads meant that they could often become unusable after heavy rain. Because of the small loads that could be carried, supplies of essential commodities such as coal and i
https://en.wikipedia.org/wiki/Terry%20Winograd
Terry Allen Winograd (born February 24, 1946) is an American professor of computer science at Stanford University, and co-director of the Stanford Human–Computer Interaction Group. He is known within the philosophy of mind and artificial intelligence fields for his work on natural language using the SHRDLU program. Education Winograd grew up in Colorado and graduated from Colorado College in 1966. He wrote SHRDLU as a PhD thesis at MIT between 1968 and 1970. In making the program, Winograd was concerned with the problem of providing a computer with sufficient "understanding" to be able to use natural language. Winograd built a blocks world, restricting the program's intellectual world to a simulated "world of toy blocks". The program could accept commands such as, "Find a block which is taller than the one you are holding and put it into the box" and carry out the requested action using a simulated block-moving arm. The program could also respond verbally, for example, "I do not know which block you mean." The SHRDLU program can be viewed historically as one of the classic examples of how difficult it is for a programmer to build up a computer's semantic memory by hand and how limited or "brittle" such programs are. Research In 1973, Winograd moved to Stanford University and developed an AI-based framework for understanding natural language which was to give rise to a series of books, but only the first volume (Syntax) was ever published. "What I came to realize is that the success of the communication depends on the real intelligence on the part of the listener, and that there are many other ways of communicating with a computer that can be more effective, given that it doesn’t have the intelligence." His approach shifted away from classical Artificial Intelligence after encountering the critique of cognitivism by Hubert Dreyfus and meeting with the Chilean philosopher Fernando Flores.
They published a critical appraisal from a perspective based in phenomenology as Understanding Computers and Cognition: a new foundation for design in 1986. In the latter part of the 1980s, Winograd worked with Flores on an early form of groupware. Their approach was based on conversation-for-action analysis. In the early 1980s, Winograd was a founding member and national president of Computer Professionals for Social Responsibility, a group of computer scientists concerned about nuclear weapons, SDI, and increasing participation by the U.S. Department of Defense in the field of computer science. In general, Winograd's work at Stanford has focused on software design in a broader sense than software engineering. In 1991 he founded the "Project on People, Computers and Design" in order to promote teaching and research into software design. The book "Bringing Design to Software" describes some of this work. His thesis is that software design is a distinct activity from both analysis and programming, but it should be informed by both, as well as by design
https://en.wikipedia.org/wiki/Fahrenheit%20%28graphics%20API%29
Fahrenheit was an effort to create a unified high-level API for 3D computer graphics to unify Direct3D and OpenGL. It was designed primarily by Microsoft and SGI and also included work from an HP-MS joint effort. Direct3D and OpenGL are low-level APIs that concentrate primarily on the rendering steps of the 3D rendering pipeline. Programs that use these APIs have to supply a considerable amount of code to handle the rest of the pipeline. Fahrenheit hoped to provide a single API that would do most of this work, and then call either Direct3D or OpenGL for the last steps. Much of the original Fahrenheit project was abandoned, and Microsoft and SGI eventually gave up on attempts to work together. In the end, only the scene graph portion of the Fahrenheit system, known as XSG, saw a release and was discontinued shortly afterwards. History Background In the 1990s SGI's OpenGL was the de facto standard for 3D computer graphics. Prior to the mid-90s different platforms had used various custom solutions, but SGI's power in the graphics market, combined with the efforts of the OpenGL Architecture Review Board (ARB), led to the rapid standardization of OpenGL across the majority of the graphics workstation market. In the mid-1990s, Microsoft licensed OpenGL for their Windows NT operating system as its primary 3D system; Microsoft was positioning NT as a workstation-class system, and OpenGL was required in order to be a real competitor in this space. Initial support was released in Windows NT Workstation version 3.5 in 1994. Confusing matters was Microsoft's February 1995 purchase of RenderMorphics. Their Reality Lab product was a 3D library written specifically for gaming purposes, aimed primarily at the "low end" market. After renaming it to Direct3D 3.0, Microsoft released it as the primary 3D API for Windows 95 and game programming. 
This sparked off a massive debate, both within Microsoft and outside, about the merits of the two APIs and whether or not Direct3D should be promoted. Through the mid-90s SGI had been working on a series of efforts to provide a higher level API on top of OpenGL to make programming easier. By 1997 this had evolved into their OpenGL++ system, a retained-mode C++ API on top of OpenGL. They proposed that a modified version be used as a single API on top of either OpenGL or a new high-performance low-level API that Microsoft was known to be working on (not based on Reality Lab). This would not only hide the implementation details and make the OpenGL/DirectX war superfluous, but at the same time offer considerably better high-level interfaces for a more robust object oriented development environment. The OpenGL++ effort dragged on in the ARB through 1997. Although SGI committed resources to the project in order to provide a sample implementation, it appears they were unhappy with progress overall and complained "There's been lots of work, but relatively little communication." Microsoft in particular had stated in no uncert
https://en.wikipedia.org/wiki/Rate%E2%80%93distortion%20theory
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D. Introduction Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate–distortion functions. Rate–distortion theory was created by Claude Shannon in his foundational work on information theory. In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of on-going discussion. In the simplest case (which is also the most common in practice), the distortion is defined as the expected value of the square of the difference between input and output signal (i.e., the mean squared error). However, since most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video), the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory.
In image and video compression, the human perception models are less well developed and inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrix. Distortion functions Distortion functions measure the cost of representing a symbol $x$ by an approximated symbol $\hat{x}$. Typical distortion functions are the Hamming distortion and the squared-error distortion. Hamming distortion: $d(x,\hat{x}) = 0$ if $x = \hat{x}$, and $1$ otherwise. Squared-error distortion: $d(x,\hat{x}) = (x - \hat{x})^2$. Rate–distortion functions The functions that relate the rate and distortion are found as the solution of the following minimization problem: $R(D) = \min_{Q_{Y|X}(y|x):\, E[d(X,Y)] \le D} I(X;Y)$. Here $Q_{Y|X}(y|x)$, sometimes called a test channel, is the conditional probability density function (PDF) of the communication channel output (compressed signal) $Y$ for a given input (original signal) $X$, and $I(X;Y)$ is the mutual information between $Y$ and $X$ defined as $I(X;Y) = H(Y) - H(Y|X)$, where $H(Y)$ and $H(Y|X)$ are the entropy of the output signal $Y$ and the conditional entropy of the output signal given the input signal, respectively: $H(Y) = -\int P_Y(y) \log_2 P_Y(y)\,dy$ and $H(Y|X) = -\iint Q_{Y|X}(y|x) P_X(x) \log_2 Q_{Y|X}(y|x)\,dy\,dx$. The problem can also be formulated as a distortion–rate functio
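The rate–distortion function rarely has a closed form, so points on the R(D) curve are usually computed numerically; the classic method is the Blahut–Arimoto algorithm. The sketch below (function and variable names are illustrative, not from the article) alternates between the optimal test channel for a fixed output marginal and the marginal induced by that channel, for a binary symmetric source with Hamming distortion, where the analytic answer R(D) = 1 − Hb(D) is available for comparison:

```python
import math

def blahut_arimoto(p_x, dist, s, iters=500):
    """One point on the rate-distortion curve for slope parameter s < 0.

    p_x  : source distribution p(x)
    dist : distortion matrix d[x][y]
    Returns (rate in bits, expected distortion) of the test channel found.
    """
    nx, ny = len(p_x), len(dist[0])
    q = [1.0 / ny] * ny                      # output marginal, start uniform
    for _ in range(iters):
        # Optimal test channel Q(y|x) given the current marginal q(y)
        Q = []
        for x in range(nx):
            w = [q[y] * math.exp(s * dist[x][y]) for y in range(ny)]
            z = sum(w)
            Q.append([wi / z for wi in w])
        # Marginal induced by the channel: q(y) = sum_x p(x) Q(y|x)
        q = [sum(p_x[x] * Q[x][y] for x in range(nx)) for y in range(ny)]
    D = sum(p_x[x] * Q[x][y] * dist[x][y]
            for x in range(nx) for y in range(ny))
    R = sum(p_x[x] * Q[x][y] * math.log2(Q[x][y] / q[y])
            for x in range(nx) for y in range(ny) if Q[x][y] > 0)
    return R, D

# Binary symmetric source, Hamming distortion: theory says R(D) = 1 - Hb(D).
p = [0.5, 0.5]
d = [[0, 1], [1, 0]]
R, D = blahut_arimoto(p, d, s=-2.0)
Hb = -D * math.log2(D) - (1 - D) * math.log2(1 - D)
print(R, D, 1 - Hb)   # R should coincide with 1 - Hb(D)
```

Each value of the slope parameter s traces out one (R, D) point; sweeping s sweeps the whole curve.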
https://en.wikipedia.org/wiki/Driver%20and%20Vehicle%20Licensing%20Agency
The Driver and Vehicle Licensing Agency (DVLA; ) is the organisation of the UK government responsible for maintaining a database of drivers in Great Britain and a database of vehicles for the entire United Kingdom. Its counterpart for drivers in Northern Ireland is the Driver and Vehicle Agency (DVA). The agency issues driving licences, organises collection of vehicle excise duty (also known as road tax and road fund licence) and sells personalised registrations. The DVLA is an executive agency of the Department for Transport (DfT). The current Chief Executive of the agency is Julie (Karen) Lennard. The DVLA is based in Swansea, Wales, with a prominent 16-storey building in Clase and offices in Swansea Vale. It was previously known as the Driver and Vehicle Licensing Centre (DVLC). The agency previously had a network of 39 offices around Great Britain, known as the Local Office Network, where users could attend to apply for licences and transact other business, but throughout the course of 2013, the local offices were gradually closed down, and all had been closed by December 2013. The agency's work is consequently fully centralised in Swansea, with the majority of users having to transact remotely - by post or (for some transactions) by phone. DVLA introduced Electronic Vehicle Licensing (EVL) in 2004, allowing customers to pay vehicle excise duty online and by telephone. However, customers still have the option to tax their vehicles via the Post Office. A seven-year contract enabling the Post Office to continue to process car tax applications was agreed in November 2012, with the option of a three-year extension. History Originally, vehicle registration was the responsibility of Borough and County councils throughout Great Britain, a system created by the Motor Car Act 1903. In 1965 a centralised licensing system was set up at a new Swansea Driver and Vehicle Licensing Centre (DVLC), taking over licences issued from County/Borough councils. 
A new purpose-built centre was then built on the site of the old Clase Farm on Longview Road, Swansea, in 1969. In April 1990, the DVLC was renamed the Driver and Vehicle Licensing Agency (DVLA), becoming an executive agency of the Department for Transport. British Forces Germany civilian vehicles Civilian vehicles used in Germany by members of British Forces Germany or their families are registered with the DVLA on behalf of the Ministry of Defence. Diplomatic and consular vehicles Official diplomatic and consular vehicles are registered with the DVLA on behalf of the Foreign and Commonwealth Office. DVLA database The vehicle register held by DVLA is used in many ways. For example, it is used by the DVLA itself to identify untaxed vehicles, and by outside agencies to identify the keepers of cars entering central London without paying the congestion charge, or exceeding speed limits on a road that has speed cameras, by matching the cars to their keepers using the DVLA database. The current DVLA vehicle regist
https://en.wikipedia.org/wiki/Gnus
Gnus (), or Gnus Network User Services, is a message reader which is part of GNU Emacs. It supports reading and composing both e-mail and news and can also act as an RSS reader, web processor, and directory browser for both local and remote filesystems. Gnus blurs the distinction between news and e-mail, treating them both as "articles" that come from different sources. News articles are kept separate by group, and e-mail can be split into arbitrary groups, similar to folders in other mail readers. In addition, Gnus is able to use a number of web-based sources as inputs for its groups. Features Some Gnus features:
- a range of backends that support any or all of: reading email from the local filesystem, or over a network via IMAP or POP3; reading web pages via an RSS feed; treating a directory of files, either local or remote (via FTP or another method), as articles to browse; reading Usenet News, including the Gmane and Gwene mail-to-news archives of mailing lists; and searching local or remote indices of emails or news items, e.g. via Notmuch
- simple or advanced mail splitting (automatic sorting of incoming mail to user-defined groups)
- incoming mail can be set to expire instead of just being deleted
- custom posting styles (e.g. a different From address, .signature, etc.) for each group
- virtual groups (e.g., a directory on the computer can be read as a group)
- an advanced message scoring system
- user-defined hooks for almost any method (in Emacs Lisp)
- many of the parameters (e.g., expiration, posting style) can be specified individually for all of the groups
- integration with the Insidious Big Brother Database (BBDB) to handle contacts in a highly automated fashion
- integration with other Emacs packages, such as the W3 web browser, LDAP lookup code, etc.
As part of Emacs, Gnus' features can be extended indefinitely through Emacs Lisp. To quote the Gnus Manual: "You know that Gnus gives you all the opportunity you'd ever want for shooting yourself in the foot.
Some people call it flexibility. Gnus is also customizable to a great extent, which means that the user has a say on how Gnus behaves. Other newsreaders might unconditionally shoot you in your foot, but with Gnus, you have a choice!" Note that the composition of HTML email messages (as users of more WYSIWYG editors may be used to) is not included by default; the lack of this "ability" is counted as a feature by Gnus' traditional user base. History Gnus is a rewrite of GNUS by Masanobu Umeda, which ceased to be developed in 1992. In autumn 1994, Lars Magne Ingebrigtsen started the rewrite under the name (ding), a recursive acronym for "ding is not Gnus", intending to produce a version for which the interface and configuration would work almost exactly the same, but the internals would be completely revamped and improved. The new version proved to be popular and has undergone constant expansion and enhancement. Ingebrigtsen is also the programmer of eww. Versions In general, users r
https://en.wikipedia.org/wiki/Parlay%20Group
The Parlay Group was a technical industry consortium (founded 1998, ended around 2007) that specified APIs for the telephone network. These APIs enable the creation of services by organizations both inside and outside of the traditional carrier environment. The hope was that services could be created by IT developers rather than telephony experts. Parlay/OSA Parlay/OSA was an open API for the telephone network. It was developed by The Parlay Group, which worked closely with ETSI and 3GPP; all three co-published it. Within 3GPP, Parlay is part of Open Services Access. Parlay APIs include: call control, conferencing, user interaction (audio and text messaging, SMS/MMS), and billing. The APIs are specified in the CORBA Interface Definition Language and WSDL. The use of CORBA enables remote access between the Parlay gateway and the application code. A set of Java mappings allows the APIs to be invoked locally as well. A major goal of the APIs is to be independent of the underlying telephony network technology (e.g. CDMA, GSM, landline SS7). Parlay X In 2003 the Parlay Group released a new set of web services called Parlay X. These are a much simpler set of APIs intended to be used by a larger community of developers. The Parlay X web services include Third Party Call Control (3PCC), location and simple payment. The Parlay X specifications complement the more powerful but more complex Parlay APIs. Parlay X implementations were (as of September 2004) in commercial service from BT and Sprint. Parlay work historically stems from the TINA effort. Parlay is somewhat related to JAIN, and was (as of early 2003) completely unrelated to the Service Creation Community. Parlay Technology The objective of Parlay/OSA is to provide an API that is independent of the underlying networking technology and of the programming technology used to create new services. As a result, the Parlay/OSA APIs are specified in UML.
There is then a set of realizations for specific programming environments: CORBA/IDL, Java, and Web services specified by WSDL. Parlay Framework The role of the Parlay/OSA Framework was to provide a way for the network to authenticate applications using the Parlay/OSA API. The Framework also allows applications to discover the capabilities of the network, and provides management functions for handling fault and overload situations. This assures a telecom network operator that any application using the Parlay API cannot affect the security or integrity of the network. Implementing Parlay The Parlay/OSA specifications define an API; they do not say how the API is to be implemented. The typical Parlay/OSA implementation adds a new network element - the Parlay/OSA Gateway, which implements the Framework. It may implement the individual service APIs, or may interact with other network elements such as switches to provide individual service capabilities such as call control or location. Some vendors treat the Parlay/OSA Gateway as a st
https://en.wikipedia.org/wiki/Punched%20card%20sorter
A punched card sorter is a machine for sorting decks of punched cards. Sorting was a major activity in most facilities that processed data on punched cards using unit record equipment. The work flow of many processes required decks of cards to be put into some specific order as determined by the data punched in the cards. The same deck might be sorted differently for different processing steps. A popular family of sorters, the IBM 80 series sorters, sorted input cards into one of 13 pockets depending on the holes punched in a selected column and the sorter's settings. Basic operation The basic operation of a card sorter is to take a punched card, examine a single column, and place the card into a selected pocket. There are twelve rows on a punched card, and thirteen pockets in the sorter; one pocket is for blanks, rejects, and errors. Cards are normally passed through the sorter face down with the bottom edge ("9-edge") first. A small metal brush or optical sensor is positioned so that, as each card goes through the sorter, one column passes under the brush or optical sensor. The holes sensed in that column together with the settings of the sorter controls determine which pocket the card is to be directed to. This directing is done by slipping the card into a stack of metal strips (or chute blades) that run the length of the sorter feed mechanism. Each blade ends above one of the output pockets, and the card is thus routed to the designated pocket. Sorting operations Multiple column sorting was commonly done by first sorting the least significant column, then proceeding, column by column, to the most significant column. This is called a least significant digit radix sort. Numeric columns have one punch in rows 0-9, possibly a sign overpunch in rows 11-12, and can be sorted in a single pass through the sorter. 
Alphabetic columns have a zone punch in rows 12, 11, or 0, a digit punch in one of the rows 1-9, and can be sorted by passing some or all of the cards through the sorter twice on that column. For more details of punched card codes see Punched card#IBM 80-column format and character codes. There were several methods used for alphabetical sorting, depending on the features provided by the particular sorter and the characteristics of the data to be sorted. A commonly used method on the 082 and earlier sorters was to sort the cards twice on the same column, first on digit rows 1-9, then on the zone rows 12, 11, and 0 (or vice versa, zone rows first then digit rows). Operator switches allow zone-sorting by "switching off" rows 1-9 for the second pass of the card for each column. Other special characters and punctuation marks were added to the card code, involving as many as three punches per column (and in 1964 with the introduction of EBCDIC as many as six punches per column). The 083 and 084 sorters recognized these multiple digit or multiple zone punches, sorting them to the error pocket. Earlier sorters Original census sort
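The multi-pass procedure described above is exactly a least-significant-digit radix sort, and the pocket mechanics are easy to simulate. A minimal sketch (function names are illustrative, not taken from any sorter manual): one `sorter_pass` per column distributes cards into pockets 0-9, with a thirteenth reject pocket for blanks and non-digit punches, and the multi-column sort repeats this from the rightmost column to the leftmost.

```python
def sorter_pass(cards, column):
    """One pass through the sorter: distribute cards into pockets 0-9
    by the digit in the selected column, then stack the pockets in
    order. Cards keep their relative order within a pocket (stable),
    which is what makes the multi-pass radix sort work."""
    pockets = {digit: [] for digit in range(10)}
    rejects = []                          # the 13th pocket: blanks/errors
    for card in cards:
        ch = card[column]
        if ch.isdigit():
            pockets[int(ch)].append(card)
        else:
            rejects.append(card)
    stacked = [c for d in range(10) for c in pockets[d]]
    return stacked, rejects

def radix_sort(cards, width):
    """Multi-column numeric sort: least significant column first."""
    for column in reversed(range(width)):
        cards, rejects = sorter_pass(cards, column)
        assert not rejects                # a clean numeric deck has no blanks
    return cards

deck = ["0419", "0042", "1203", "0007", "0042"]
print(radix_sort(deck, 4))   # ['0007', '0042', '0042', '0419', '1203']
```

Each pass corresponds to one physical trip of the deck through the machine, so sorting an n-column field takes n passes regardless of deck size.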
https://en.wikipedia.org/wiki/I486SL
The Intel i486SL is the power-saving variant of the i486DX microprocessor. The SL was designed for use in mobile computers. It was produced between November 1992 and June 1993. Clock speeds available were 20, 25 and 33 MHz. The i486SL contained all features of the i486DX. In addition, the System Management Mode (SMM) (the same mode introduced with i386SL) was included with this processor. The system management mode makes it possible to shut down the processor without losing data. To achieve this, the processor state is saved in an area of static RAM (SMRAM). In mid-1993, Intel incorporated the SMM feature in all its new 80486 processors and discontinued the SL series. Refer to the respective section of the list of Intel microprocessors for technical details.
https://en.wikipedia.org/wiki/Faxanadu
Faxanadu is an action role-playing platform video game developed by Hudson Soft for the Nintendo Entertainment System. The name was licensed by computer game developer Nihon Falcom ("Falcom") and the game was developed and released in Japan by Hudson Soft for the Famicom in 1987. Nintendo released the game in the United States and Europe as a first-party title under license from Hudson Soft. Faxanadu is a spin-off or side-story of Xanadu, the second installment of Falcom's long-running RPG series, Dragon Slayer. The title Faxanadu is a portmanteau formed from the names Famicom and Xanadu. The game uses side-scrolling and platforming gameplay, while employing role-playing elements with an expansive story and medieval setting. Story The player-controlled protagonist of Faxanadu is an unnamed wanderer who returns home; the Japanese version allows the player to choose a name for him. The game begins when he approaches Eolis, his hometown, after an absence to find it in disrepair and virtually abandoned. Worse still, the town is under attack by Dwarves. The Elven king explains that the Elves' fountain water, their life source, has been stopped and all other remaining water has been poisoned, and provides the protagonist with 1500 gold, the game's currency, to prepare for his journey to uncover the cause. As the story unfolds, it is revealed that Elves and Dwarves lived in harmony among the World Tree until The Evil One emerged from a fallen meteorite. The Evil One then transformed the Dwarves into monsters against their will and set them against the Elves. The Dwarf King, Grieve, swallowed his magical sword before he was transformed, hiding it in his own body to prevent The Evil One from acquiring it. Only with this sword can The Evil One be destroyed. The protagonist's journey takes him to four overworld areas: the tree's buttress, the inside of the trunk, the tree's branches and finally the Dwarves' mountain stronghold.
Gameplay Faxanadu is a side scrolling action role playing game, sometimes classified as a metroidvania. Players guide the hero through a screen-by-screen series of fields, towns, and dungeons. The hero can walk, jump, and climb ladders – all typical characteristics of a platform game. Along the way, he may also purchase usable items with gold, equip and use bladed weapons against enemies, equip armor, and cast magic projectiles. In addition, he can access information regarding the game's events by speaking with townsfolk or by consulting other sources. The limits of physical damage the hero can sustain from enemies is tracked by a life bar, and the magical power he can exert is tracked by a magic bar. These are listed on the top of the screen along with total experience, total gold, time (for items with a timed duration), and the currently held item. When the hero defeats an enemy, it usually leaves behind gold or life-giving bread. The hero also gains a set amount of experience. Experience points help increase the hero's ra
https://en.wikipedia.org/wiki/Fuji%20Television
Fuji Television, with the call sign JOCX-DTV, is a Japanese television station based in Odaiba, Minato, Tokyo, Japan. It is the key station of the Fuji News Network (FNN) and the Fuji Network System (FNS), and one of the five private broadcasters based in Tokyo. Fuji Television also operates three premium television stations, known as "Fuji Television One" ("Fuji Television 739"—sports/variety, including all Tokyo Yakult Swallows home games), "Fuji Television Two" ("Fuji Television 721"—drama/anime, including all Saitama Seibu Lions home games), and "Fuji Television Next" ("Fuji Television CSHD"—live premium shows), together called "Fuji Television OneTwoNext", all available in high-definition. Fuji Television is owned by Fuji Media Holdings, a certified broadcasting holding company under the Japanese Broadcasting Act, and is affiliated with the Fujisankei Communications Group. The current Fuji Television was established in October 2008; Fuji Media Holdings is the former Fuji Television, founded in 1957. In its early days, Fuji TV's ratings long sat in the middle of the Tokyo stations. In the early 1980s its ratings rose sharply: in 1982 it won the ratings "Triple Crown" among the flagship stations for the first time, and it produced many famous TV dramas and variety shows. In 1997, Fuji Television moved from Kawata-cho, Shinjuku to Odaiba, the Rinkai sub-center of Tokyo, which spurred the development of the Odaiba area, almost empty at that time. Since the 2010s, Fuji TV's ratings have dropped sharply, and its household ratings now rank fifth among the Tokyo stations. On the other hand, Fuji TV has one of the more diversified operations in the Japanese TV industry, with a higher proportion of its income coming from departments outside the main broadcasting business. In addition, Fuji TV was the first TV station in Japan to broadcast and produce locally-made animated series.
Offices The headquarters are located at 2-4-8, Daiba, Minato, Tokyo. The Kansai office is at Aqua Dojima East, Dojima, Kita-ku, Osaka, and the Nagoya office is at Telepia, Higashi-sakura, Higashi-ku, Nagoya. The station also has 12 bureau offices in other parts of the world, in countries such as France, Russia, the United States, South Korea, China, Thailand and the UK. Branding The first logo of Fuji TV was designed by Kamekura Yusaku. Its design concept comes from the channel's frequency, "8", and it is commonly known as the "8 mark" (8マーク). After Fuji TV adopted the "eyeball logo" (described later) as a trademark, the 8 mark did not entirely disappear from use. For example, there is a sculpture of the 8 mark at the entrance of the FCG building, and the program logo of the variety show "Grand Slam of Performing Arts" also uses it. In April 1985, in order to strengthen the unity of the group, Fujisankei Group chairman Haruo Kanai decided to formulate a new unified group trademark
https://en.wikipedia.org/wiki/SBN
SBN can mean: Naval Aircraft Factory SBN, a scout/torpedo bomber from the mid-1930s Sehar Broadcasting Network, a television channel in Pakistan Servizio bibliotecario nazionale, the National Library Service of Italy Small Business Network, by PCM, Inc. Society for Behavioral Neuroendocrinology Sonlife Broadcasting Network of the Jimmy Swaggart Ministries South Bend International Airport, IATA code Southern Broadcasting Network, in the Philippines Standard Beatbox Notation Standard Book Numbering, which developed into ISBN Strontium barium niobate Former Student Broadcast Network, UK Subtract and branch if negative, computer opcode Supervision Broadcasting Network, Mongolia
https://en.wikipedia.org/wiki/John%20Sherren%20Brewer
John Sherren Brewer, Jr. (March 1809 – February 1879) was an English clergyman, historian and scholar. He was a brother of E. Cobham Brewer, compiler of Brewer's Dictionary of Phrase & Fable. Birth and education Brewer was born in Norwich, the son of a Baptist schoolmaster. He matriculated at Queen's College, Oxford in 1827, graduating B.A. in 1833, M.A. 1835. He was ordained in the Church of England in 1837, and became chaplain to a central London workhouse. In 1839 he was appointed lecturer in classical literature at King's College London, and in 1858 he became professor of English language and literature and lecturer in modern history, succeeding FD Maurice. In 1854, Maurice invited him to teach at the newly opened Working Men's College; from 1869 to 1872 he was the College's Vice Principal. Brewer's son Henry William Brewer (1836-1903) was a noted architectural artist. Henry William Brewer's sons were the artist Henry Charles Brewer and the creator of etchings James Alphege Brewer. Journalism and history From 1854 onwards, Brewer combined these duties with journalistic work on the Morning Herald, Morning Post and Standard. In 1856 he was commissioned by the Master of the Rolls to prepare a calendar of the state papers of King Henry VIII, work demanding a vast amount of research. He was also made Reader, and subsequently Preacher, at the Rolls Chapel. He edited Fr. Rogeri Bacon Opera Quædam Hactenus Inedita (the edited Works of Roger Bacon) in 1859. In 1877 Benjamin Disraeli obtained for him the crown living of Toppesfield, Essex. There Brewer had time to continue his task of preparing his Letters and Papers of the Reign of Henry VIII, the "Introductions" to which (published separately, under the title The Reign of Henry VIII, in 1884) form a scholarly and authoritative history of Henry VIII's reign. New editions of several standard historical works were also produced under Brewer's direction. 
Primary sources Letters and papers, foreign and domestic, of the reign of Henry VIII: preserved in the Public Record Office, the British Museum and elsewhere, Volume 1 edited by John S. Brewer, Robert H. Brodie, James Gairdner. (1862), full text online vol 1; full text vol 3