https://en.wikipedia.org/wiki/TGN
TGN may refer to: Tarragona, abbreviation of the city of Tarragona, in Catalonia; Thai Global Network, a Thai satellite television channel; Texas Government Newsletter, for college students; Tyco Global Network, a fiber optic network by Tyco International; the trans-Golgi network, in biology; IEEE 802.11n Task Group N; Thyroglobulin, a protein; the Getty Thesaurus of Geographic Names; TGN (AM), a radio station in Guatemala; Latrobe Regional Airport, IATA airport code "TGN"
https://en.wikipedia.org/wiki/Endosome
Endosomes are a collection of intracellular sorting organelles in eukaryotic cells. They are part of the endocytic membrane transport pathway originating from the trans-Golgi network. Molecules or ligands internalized from the plasma membrane can follow this pathway all the way to lysosomes for degradation or can be recycled back to the cell membrane in the endocytic cycle. Molecules are also transported to endosomes from the trans-Golgi network and either continue to lysosomes or recycle back to the Golgi apparatus. Endosomes can be classified as early, sorting, or late depending on their stage post-internalization. Endosomes represent a major sorting compartment of the endomembrane system in cells. Function Endosomes provide an environment for material to be sorted before it reaches the degradative lysosome. For example, low-density lipoprotein (LDL) is taken into the cell by binding to the LDL receptor at the cell surface. Upon reaching early endosomes, the LDL dissociates from the receptor, and the receptor can be recycled to the cell surface. The LDL remains in the endosome and is delivered to lysosomes for processing. LDL dissociates because of the slightly acidified environment of the early endosome, generated by the vacuolar membrane proton pump V-ATPase. On the other hand, epidermal growth factor (EGF) and the EGF receptor have a pH-resistant bond that persists until the complex is delivered to lysosomes for degradation. The mannose 6-phosphate receptor carries ligands from the Golgi destined for the lysosome by a similar mechanism. Types There are three different types of endosomes: early endosomes, late endosomes, and recycling endosomes. They are distinguished by the time it takes for endocytosed material to reach them, and by markers such as Rabs. They also have different morphology. Once endocytic vesicles have uncoated, they fuse with early endosomes. Early endosomes then mature into late endosomes before fusing with lysosomes.
Early endosomes mature in several ways to form late endosomes. They become increasingly acidic mainly through the activity of the V-ATPase. Many molecules that are recycled are removed by concentration in the tubular regions of early endosomes. Loss of these tubules to recycling pathways means that late endosomes mostly lack tubules. They also increase in size due to the homotypic fusion of early endosomes into larger vesicles. Molecules are also sorted into smaller vesicles that bud from the perimeter membrane into the endosome lumen, forming intraluminal vesicles (ILVs); this leads to the multivesicular appearance of late endosomes and so they are also known as multivesicular endosomes or multivesicular bodies (MVBs). Removal of recycling molecules such as transferrin receptors and mannose 6-phosphate receptors continues during this period, probably via budding of vesicles out of endosomes. Finally, the endosomes lose RAB5A and acquire RAB7A, making them competent for fusion with lysosomes. Fusion of late
https://en.wikipedia.org/wiki/Stochastic%20programming
In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain, but follow known probability distributions. This framework contrasts with deterministic optimization, in which all problem parameters are assumed to be known exactly. The goal of stochastic programming is to find a decision which both optimizes some criteria chosen by the decision maker, and appropriately accounts for the uncertainty of the problem parameters. Because many real-world decisions involve uncertainty, stochastic programming has found applications in a broad range of areas ranging from finance to transportation to energy optimization. Two-stage problems The basic idea of two-stage stochastic programming is that (optimal) decisions should be based on data available at the time the decisions are made and cannot depend on future observations. The two-stage formulation is widely used in stochastic programming. The general formulation of a two-stage stochastic programming problem is given by

\min_{x \in X} \; \{ g(x) = f(x) + E_\xi[Q(x,\xi)] \},

where Q(x,\xi) is the optimal value of the second-stage problem

\min_{y} \; \{ q(y,\xi) : T(\xi)x + W(\xi)y = h(\xi) \}.

The classical two-stage linear stochastic programming problems can be formulated as

\min_{x} \; c^T x + E_\xi[Q(x,\xi)] \quad \text{subject to} \quad Ax = b, \; x \ge 0,

where Q(x,\xi) is the optimal value of the second-stage problem

\min_{y} \; q^T y \quad \text{subject to} \quad Tx + Wy = h, \; y \ge 0.

In such formulation x is the first-stage decision variable vector, y is the second-stage decision variable vector, and \xi = (q, T, W, h) contains the data of the second-stage problem. In this formulation, at the first stage we have to make a "here-and-now" decision x before the realization of the uncertain data \xi, viewed as a random vector, is known. At the second stage, after a realization of \xi becomes available, we optimize our behavior by solving an appropriate optimization problem. At the first stage we optimize (minimize in the above formulation) the cost of the first-stage decision plus the expected cost of the (optimal) second-stage decision.
We can view the second-stage problem simply as an optimization problem which describes our supposedly optimal behavior when the uncertain data is revealed, or we can consider its solution as a recourse action, in which the second-stage decision compensates for any inconsistency in the second-stage constraints at the cost of the recourse objective. The considered two-stage problem is linear because the objective functions and the constraints are linear. Conceptually this is not essential and one can consider more general two-stage stochastic programs. For example, integrality constraints could be added to the first-stage problem so that the feasible set is discrete, and non-linear objectives and constraints could also be incorporated if needed. Distributional assumption The formulation of the above two-stage problem assumes that the second-stage data is modeled as a random vector with a known probability distribution. This would be justified in many situations. For example, the
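The two-stage structure described above can be sketched numerically. The following is a minimal toy instance of my own (a newsvendor-style problem, not taken from the article): the first stage orders x units at unit cost c before demand is known, and the recourse action buys any shortfall at a higher unit cost q after the demand scenario is revealed.

```python
# Toy two-stage stochastic program (illustrative; all numbers are my own assumptions).
# First stage: order x units at unit cost c before demand is known.
# Second stage (recourse): after demand d is revealed, buy shortfall at unit cost q > c.

c, q = 1.0, 1.5
scenarios = [(0.3, 80.0), (0.4, 100.0), (0.3, 120.0)]  # (probability, demand)

def second_stage_cost(x, d):
    """Optimal value Q(x, d) of the recourse problem: min q*y s.t. x + y >= d, y >= 0."""
    return q * max(d - x, 0.0)

def expected_total_cost(x):
    """First-stage cost plus the expected optimal recourse cost."""
    return c * x + sum(p * second_stage_cost(x, d) for p, d in scenarios)

# The objective is piecewise linear in x, so an optimum lies at a demand breakpoint.
best_x = min((d for _, d in scenarios), key=expected_total_cost)
print(best_x, expected_total_cost(best_x))  # 100.0 109.0
```

Note how the "here-and-now" decision x is a single number chosen before the scenario is known, while the recourse y is solved separately per scenario; larger instances replace the breakpoint search with a linear programming solver on the deterministic equivalent.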
https://en.wikipedia.org/wiki/The%20Operational%20Art%20of%20War
The Operational Art of War (TOAW) is a series of computer wargames noted for their scope, detail, and flexibility in recreating, at an operational level, the major land battles of the 20th century. A Norm Koger design, TalonSoft published the first of the series in 1998. Matrix Games bought the rights to the franchise and created a new game in 2006, TOAW 3, which was the first non-Norm Koger designed game in the series. Gameplay Games in the series TalonSoft published: The Operational Art of War Vol. 1: 1939–1955 (1998) The Operational Art of War II: Modern Battles 1956–2000 (1999) The Operational Art of War II: Flashpoint Kosovo (1999) The Operational Art of War II: Elite Edition (2000) The Operational Art of War: Century of Warfare (2000) Matrix Games published: The Operational Art of War III (2006) The Operational Art of War IV (2017) Concept The basic appearance of the game is the traditional view onto a hexagonal grid, although the player may choose a map-like overhead view with military symbols and basic info for the units, or an isometric view that depicts the units with small pictures of soldiers, tanks, etc. Gameplay is turn-based. The scale of the game is variable, with distances ranging from 2.5 km per hex to 50 km per hex, and each turn simulating from 1/4 day to 1 week of time, but is fundamentally "operational", focusing on battalion, division, and corps combat. The option of scale is left to a maker of a particular scenario to choose, resulting in a wide range of user-made scenarios; ranging from, for example, a small engagement in northern Germany between several companies to an entire World War II on division scale. The maximum number of units that can be made in a scenario was 2,000 per side until TOAW IV, although managing more than 200 can often be complicated. Each unit is assigned unique equipment (types of infantry, tanks, aircraft, etc.) and given its own name, info and color code. 
The game also includes "events": programmable occurrences which display a message and can have several different causes and effects. The variability of these events makes each scenario, when properly designed, very complex and variable. The maximum number of in-game events is 500 (or 1,000 in TOAW III). The games include a scenario editor, and much of the content in the follow-up games consists of designs developed by the community of avid players. Version IV Version IV was released November 2017, and included a large number of changes, among which are: unit limit increased from 2,000 to 10,000 event limit increased from 999 to 10,000 significant changes to naval combat significant changes to how combat uses turn time supply system Scenario depots These locations contain many user-made scenarios for the game: Wargamer.com The-strategist.net GameSquad.com Reception Volume 1 In the United States, The Operational Art of War sold 12,789 copies during 1998. These sales accounted for $555,681 in revenue that
https://en.wikipedia.org/wiki/Comma-separated%20values
Comma-separated values (CSV) is a text file format that uses commas to separate values. A CSV file stores tabular data (numbers and text) in plain text, where each line of the file typically represents one data record. Each record consists of the same number of fields, and these are separated by commas in the CSV file. If the field delimiter itself may appear within a field, fields can be surrounded with quotation marks. The CSV file format is one type of delimiter-separated file format. Delimiters frequently used include the comma, tab, space, and semicolon. Delimiter-separated files are often given a ".csv" extension even when the field separator is not a comma. Many applications or libraries that consume or produce CSV files have options to specify an alternative delimiter. The lack of adherence to the CSV standard RFC 4180 necessitates the support for a variety of CSV formats in data input software. Despite this drawback, CSV remains widespread in data applications and is widely supported by a variety of software, including common spreadsheet applications such as Microsoft Excel. Benefits cited in favor of CSV include human readability and the simplicity of the format. Applications CSV is a common data exchange format that is widely supported by consumer, business, and scientific applications. Among its most common uses is moving tabular data between programs that natively operate on incompatible (often proprietary or undocumented) formats. For example, a user may need to transfer information from a database program that stores data in a proprietary format, to a spreadsheet that uses a completely different format. Most database programs can export data as CSV. Most spreadsheet programs can read CSV data, allowing CSV to be used as an intermediate format when transferring data from a database to a spreadsheet. CSV is also used for storing data. Common data science tools such as Pandas include the option to export data to CSV for long-term storage. 
Benefits of CSV for data storage include its simplicity, which makes CSV files easy and fast to parse and create compared to other data formats; its human readability, which makes editing or fixing data simpler; and its high compressibility, which leads to smaller data files. On the other hand, CSV does not support more complex data relations and makes no distinction between null and empty values, so in applications where these features are needed other formats are preferred. Specification RFC 4180 proposes a specification for the CSV format; however, actual practice often does not follow the RFC and the term "CSV" might refer to any file that: is plain text using a character encoding such as ASCII, various Unicode character encodings (e.g. UTF-8), EBCDIC, or Shift JIS; consists of records (typically one record per line), with the records divided into fields separated by delimiters (typically a single reserved character such as comma, semicolon, or tab; sometimes the delimiter may include optional spaces),
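The quoting rule described above (wrapping fields that contain the delimiter or quotation marks) can be seen in a short sketch using Python's standard csv module; the table content here is invented for illustration:

```python
# Round-trip a table whose second field contains both a comma and embedded quotes.
import csv
import io

rows = [["name", "note"], ["Ada", 'says "hi", then leaves']]

buf = io.StringIO()
csv.writer(buf).writerows(rows)  # fields containing the delimiter or quotes get quoted
text = buf.getvalue()

# On writing, embedded quotes are doubled and the field is wrapped in quotation marks;
# on reading, the same rules are applied in reverse, so the data survives unchanged.
parsed = list(csv.reader(io.StringIO(text)))
print(parsed == rows)  # True
```

The same module accepts an alternative delimiter (e.g. `csv.writer(buf, delimiter=';')`), matching the observation that ".csv" files are not always comma-separated.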
https://en.wikipedia.org/wiki/Xargs
xargs (short for "extended arguments") is a command on Unix and most Unix-like operating systems used to build and execute commands from standard input. It converts input from standard input into arguments to a command. Some commands such as grep and awk can take input either as command-line arguments or from the standard input. However, others such as cp and echo can only take input as arguments, which is why xargs is necessary. A port of an older version of GNU xargs is available for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. A ground-up rewrite is part of the open-source TextTools project. The command has also been ported to the IBM i operating system. Examples One use case of the xargs command is to remove a list of files using the rm command. POSIX systems have a limit on the maximum total length of the command line, so a command such as rm /path/* or rm $(find /path -type f) may fail with an error message of "Argument list too long" (meaning that the exec system call's limit on the length of a command line was exceeded). (The latter invocation is also incorrect, as it may expand globs in the output.) This can be rewritten using the xargs command to break the list of arguments into sublists small enough to be acceptable: $ find /path -type f -print | xargs rm In the above example, the find utility feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist. Some implementations of xargs can also be used to parallelize operations with the -P maxprocs argument to specify how many parallel processes should be used to execute the commands over the input argument lists. However, the output streams may not be synchronized. This can be overcome by using an --output file argument where possible, and then combining the results after processing. The following example queues 24 processes and waits on each to finish before launching another.
$ find /path -name '*.foo' | xargs -P 24 -I '{}' /cpu/bound/process '{}' -o '{}'.out xargs often covers the same functionality as the command substitution feature of many shells, denoted by the backquote notation (`...` or $(...)). xargs is also a good companion for commands that output long lists of files such as find, locate and grep, but only if one uses -0 (or equivalently --null), since xargs without -0 deals badly with file names containing ', " and space. GNU Parallel is a similar tool that offers better compatibility with find, locate and grep when file names may contain ', ", and space (newline still requires -0). Placement of arguments option: single argument The xargs command offers options to insert the listed arguments at some position other than the end of the command line. The -I option to xargs takes a string that will be replaced with the supplied input before the command is executed. A common choice is %. $ mkdir ~/backups $ find /path -type f -name '*~' -pri
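As a concrete sketch of the -print0/-0 pairing mentioned above, the following (using an invented /tmp path) removes files whose names contain spaces and quote characters, names that would break a plain find | xargs pipeline:

```shell
# Create awkwardly named files, then remove them with a null-delimited pipeline.
mkdir -p /tmp/xargs_demo
touch "/tmp/xargs_demo/a file.foo" "/tmp/xargs_demo/it's odd.foo"

# -print0 / -0: names are separated by NUL bytes, so spaces and quotes pass through safely.
find /tmp/xargs_demo -name '*.foo' -print0 | xargs -0 rm -f
```

Without -print0 and -0, xargs would split "a file.foo" into two arguments and choke on the unbalanced quote in "it's odd.foo".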
https://en.wikipedia.org/wiki/David%20Leestma
David Cornell Leestma (born May 6, 1949) is a former American astronaut and retired Captain in the United States Navy. Personal data Born May 6, 1949, in Muskegon, Michigan. He and his wife have six children. He enjoys golfing, tennis, aviation, and fishing. Education Graduated from Tustin High School in Tustin, California, in 1967; received a Bachelor of Science degree in Aeronautical Engineering from the United States Naval Academy in 1971, and a Master of Science degree in Aeronautical Engineering from the U.S. Naval Postgraduate School in 1972. Organizations Associate Fellow, American Institute of Aeronautics and Astronautics (AIAA); Life Member, Association of Naval Aviation. Special honors The Distinguished Flying Cross, Legion of Merit, Defense Superior Service Medal, Defense Meritorious Service Medal, Navy Commendation Medal, Navy Achievement Medal, Meritorious Unit Commendation (VX-4), National Defense Service Medal, Battle "E" Award (VF-32), the Rear Admiral Thurston James Award (1973), the NASA Space Flight Medal (1984, 1989, 1992), the NASA Exceptional Service Medal (1985, 1988, 1991, 1992), and the NASA Outstanding Leadership Medal (1993, 1994). He was awarded the Presidential Rank of Meritorious Executive Award in 1998 and again in 2004. Experience Leestma graduated first in his class from the U.S. Naval Academy in 1971. As a first lieutenant afloat, he was assigned to in Long Beach, California, before reporting in January 1972 to the U.S. Naval Postgraduate School. He completed United States Naval Flight Officer training and received his NFO wings in October 1973. He was assigned to VF-124 in San Diego, California, for initial flight training in the F-14A Tomcat and then transferred to VF-32 in June 1974 and was stationed at Virginia Beach, Virginia. Leestma made three overseas deployments to the Mediterranean/North Atlantic areas while flying aboard the aircraft carrier . 
In 1977, he was reassigned to Air Test and Evaluation Squadron Four (VX-4) at Naval Air Station Point Mugu, California. As an operational test director with the F-14A, he conducted the first operational testing of new tactical software for the F-14 and completed the follow-on test and evaluation of new F-14A avionics, including the programmable signal processor. He also served as fleet model manager for the F-14A tactical manual. He has logged over 3,500 hours of flight time, including nearly 1,500 hours in the F-14A. Leestma retired from the Navy as a Captain. NASA experience He was selected by NASA to become an astronaut in 1980 and was the first member of NASA Astronaut Group 9 to go into space. Following his first flight, Leestma served as a capsule communicator (CAPCOM) for STS-51-C through STS-61-A. He was then assigned as the Chief, Mission Development Branch, responsible for assessing the operational integration requirements of payloads that will fly aboard the Space Shuttle. From February 1990 to September 1991, when he started training for his t
https://en.wikipedia.org/wiki/Inverse%20kinematics
In computer animation and robotics, inverse kinematics is the mathematical process of calculating the variable joint parameters needed to place the end of a kinematic chain, such as a robot manipulator or animation character's skeleton, in a given position and orientation relative to the start of the chain. Given joint parameters, the position and orientation of the chain's end, e.g. the hand of the character or robot, can typically be calculated directly using multiple applications of trigonometric formulas, a process known as forward kinematics. However, the reverse operation is, in general, much more challenging. Inverse kinematics is also used to recover the movements of an object in the world from some other data, such as a film of those movements, or a film of the world as seen by a camera which is itself making those movements. This occurs, for example, where a human actor's filmed movements are to be duplicated by an animated character. Robotics In robotics, inverse kinematics makes use of the kinematics equations to determine the joint parameters that provide a desired configuration (position and rotation) for each of the robot's end-effectors. This is important because robot tasks are performed with the end effectors, while control effort applies to the joints. Determining the movement of a robot so that its end-effectors move from an initial configuration to a desired configuration is known as motion planning. Inverse kinematics transforms the motion plan into joint actuator trajectories for the robot. Similar formulas determine the positions of the skeleton of an animated character that is to move in a particular way in a film, or of a vehicle such as a car or boat containing the camera which is shooting a scene of a film. 
Once a vehicle's motions are known, they can be used to determine the constantly-changing viewpoint for computer-generated imagery of objects in the landscape such as buildings, so that these objects change in perspective while themselves not appearing to move as the vehicle-borne camera goes past them. The movement of a kinematic chain, whether it is a robot or an animated character, is modeled by the kinematics equations of the chain. These equations define the configuration of the chain in terms of its joint parameters. Forward kinematics uses the joint parameters to compute the configuration of the chain, and inverse kinematics reverses this calculation to determine the joint parameters that achieve a desired configuration. Kinematic analysis Kinematic analysis is one of the first steps in the design of most industrial robots. Kinematic analysis allows the designer to obtain information on the position of each component within the mechanical system. This information is necessary for subsequent dynamic analysis along with control paths. Inverse kinematics is an example of the kinematic analysis of a constrained system of rigid bodies, or kinematic chain. The kinematic equations of a robot can be used to
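For a concrete instance of the forward/inverse pair described above, a planar two-link arm admits a closed-form inverse solution via the law of cosines. This is a standard textbook sketch; the link lengths and target point are my own choices, not from the article:

```python
# Analytic inverse kinematics for a planar two-link arm (illustrative toy example).
import math

def two_link_ik(x, y, l1, l2):
    """Return joint angles (theta1, theta2) placing the end effector at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle; clamp for numerical safety at the workspace edge.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    theta2 = math.acos(c2)  # one of the two branches ("elbow" configurations)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics: map joint angles back to the end-effector position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

t1, t2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
print(forward(t1, t2, 1.0, 1.0))  # recovers (approximately) the target (1.0, 1.0)
```

The two acos branches correspond to the "elbow-up" and "elbow-down" solutions, a small illustration of the general point that inverse kinematics problems can have multiple (or no) solutions while forward kinematics is always single-valued.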
https://en.wikipedia.org/wiki/Mary%20Ann%20Horton
Mary Ann Horton (born Mark R. Horton, on November 21, 1955), is a Usenet and Internet pioneer. Horton contributed to Berkeley UNIX (BSD), including the vi editor and terminfo database, created the first email binary attachment tool uuencode, and led the growth of Usenet in the 1980s. Horton successfully requested the first transgender-inclusive language added to the Equal Employment Policy in a large American company, and championed the language and insurance coverage of transgender health benefits at other companies. Horton is a computer scientist and a transgender educator and activist. Education Horton was born in Richland, Washington, and raised in the Pacific Northwest. Finding an interest in computer programming in 1970, Horton moved to San Diego County in 1971, and quickly fell in love with California. She graduated from San Dieguito High School in 1973. Earning a BSCS from the University of Southern California in 1976, Horton went on to obtain an MSCS at the University of Wisconsin, and transfer to the University of California at Berkeley in 1978, earning a PhD in Computer Science in 1981. Horton was introduced to UNIX at Wisconsin, creating an enhanced UNIX text editor called hed. At Berkeley, she contributed to the development of Berkeley UNIX, including the vi text editor, uuencode (the first mechanism for binary Email attachments), w and load averages, termcap, and curses. Her PhD dissertation was the creation of a new type of syntax-directed editor with a textual interface. This technology was later used to create computer-aided software engineering tools. In 1980, Horton brought Usenet's A News system to Berkeley and began to champion its growth from a 10-site network. To Usenet's original dialup UUCP technology, she added support for Berknet and ARPANET, and added a gateway between several popular ARPANET mailing lists and usenet "fa" newsgroups. 
In 1981, high school student Matt Glickman asked Horton for a spring break project, and the two designed and implemented B News, which offered major performance and user interface improvements needed to keep up with the explosive growth of Usenet traffic volume. UNIX and Internet work In 1981, Horton became a Member of Technical Staff of Bell Labs in Columbus, Ohio. At Bell Labs she brought parts of Berkeley UNIX to UNIX System V, including vi and curses; as part of the work on curses, she developed terminfo as a replacement for termcap (most of this work shipped as part of SVR2). In 1987 she joined the Bell Labs Computation Center to bring official support for Usenet and Email to Bell Labs. Horton continued to lead Usenet until 1988. During this time she promoted rapid growth by arranging news feeds for new sites. Each new site agreed to be the feed for two more new sites as the need arose. This policy contributed to the growth of Usenet to over 5000 sites by 1987. Horton recruited membership in and designed the original physical topology of the Usenet Backbone in 1983. Gene "Spa
https://en.wikipedia.org/wiki/Passwd
passwd is a command on Unix, Plan 9, Inferno, and most Unix-like operating systems used to change a user's password. The password entered by the user is run through a key derivation function to create a hashed version of the new password, which is saved. Only the hashed version is stored; the entered password is not saved for security reasons. When the user logs on, the password entered by the user during the log on process is run through the same key derivation function and the resulting hashed version is compared with the saved version. If the hashes are identical, the entered password is considered to be correct, and the user is authenticated. In theory, it is possible for two different passwords to produce the same hash. However, cryptographic hash functions are designed in such a way that finding any password that produces the same hash is very difficult and practically infeasible, so if the produced hash matches the stored one, the user can be authenticated. The passwd command may be used to change passwords for local accounts, and on most systems, can also be used to change passwords managed in a distributed authentication mechanism such as NIS, Kerberos, or LDAP. Password file The /etc/passwd file is a text-based database of information about users that may log into the system or other operating system user identities that own running processes. In many operating systems this file is just one of many possible back-ends for the more general passwd name service. The file's name originates from one of its initial functions as it contained the data used to verify passwords of user accounts. However, on modern Unix systems the security-sensitive password information is instead often stored in a different file using shadow passwords, or other database implementations. 
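The derive-and-compare flow described above can be sketched in Python. PBKDF2 stands in here for whatever key derivation function the system actually uses (real passwd implementations use crypt(3) schemes such as bcrypt or yescrypt); the function names and work factor are illustrative:

```python
# Sketch of password verification by re-running the key derivation function.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password, salt=None):
    """Derive a hash from the password; only (salt, digest) is stored, never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, stored):
    """Re-run the same KDF on the entered password and compare the resulting hashes."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse")
print(verify("correct horse", salt, stored), verify("wrong guess", salt, stored))  # True False
```

The constant-time comparison avoids leaking, through timing, how many leading bytes of the candidate hash matched.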
The /etc/passwd file typically has file system permissions that allow it to be readable by all users of the system (world-readable), although it may only be modified by the superuser or by using a few special purpose privileged commands. The /etc/passwd file is a text file with one record per line, each describing a user account. Each record consists of seven fields separated by colons. The ordering of the records within the file is generally unimportant. An example record may be: The fields, in order from left to right, are: User name: the string a user would type in when logging into the operating system: the logname. Must be unique across users listed in the file. Password: information used to validate a user's password. The format is the same as that of the analogous field in the shadow password file, with the additional convention that setting it to "x" means the actual password is found in the shadow file, a common occurrence on modern systems. User ID: user identifier number, used by the operating system for internal purposes. It must be unique as it identifies users uniquely. Group ID: group identifier number, which identifies the primary group of the user; all files that are
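To make the seven-field layout concrete, here is a hypothetical record (the account name and paths are invented, not from the article) split with Python:

```python
# Hypothetical /etc/passwd record: name, password field, UID, GID, GECOS, home, shell.
record = "alice:x:1001:1000:Alice Example:/home/alice:/bin/sh"

FIELDS = ["name", "password", "uid", "gid", "gecos", "home", "shell"]
entry = dict(zip(FIELDS, record.split(":")))

# "x" in the password field means the real hash lives in the shadow file.
print(entry["name"], entry["uid"], entry["password"])  # alice 1001 x
```

A real parser would use the platform's pwd module (pwd.getpwnam) rather than splitting the file by hand, since the passwd database may be backed by services other than the flat file.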
https://en.wikipedia.org/wiki/Joule%20%28programming%20language%29
Joule is a capability-secure massively-concurrent dataflow programming language, designed for building distributed applications. It is so concurrent that the order of statements within a block is irrelevant to the operation of the block. Statements are executed whenever possible, based on their inputs. Everything in Joule happens by sending messages. There is no control flow. Instead, the programmer describes the flow of data, making it a dataflow programming language. Joule development started in 1994 at Agorics in Palo Alto, California. It is considered the precursor to the E programming language. Language syntax Numerals consist of ASCII digits 0–9; identifiers are Unicode sequences of digits, letters, and operator characters that begin with a letter. It is also possible to form identifiers by using Unicode sequences (including whitespace) enclosed by either straight (' ') or standard (‘ ’) single quotes, where the backslash is the escape character. Keywords have to start with a letter, except the • keyword to send information. Operators consist of Unicode sequences of digits, letters, and operator characters, beginning with an operator character. Labels are identifiers followed by a colon (':'). At the root, Joule is an imperative language and because of that a statement-based language. It has a rich expression syntax, which transforms easily to its relational syntax underneath. Complex expressions become separate statements, where the site of the original expression is replaced by a reference to the acceptor of the results channel. Therefore, nested expressions still compute completely concurrently with their embedding statement.

If amount <= balance
 • account withdraw: amount
else
 • account report-bounce:
end

An identifier may name a channel to communicate with the server. If this is the case, it is said to be bound to that channel.
https://en.wikipedia.org/wiki/College%20Democrats
College Democrats are Democratic student organizations on college campuses. Their main focus is to elect Democratic Party candidates and provide networking and leadership opportunities for student members. The chapters have served as a way for college students to connect with the Democratic Party and Democratic campaigns, and have produced many prominent liberal and progressive activists. Many of these chapters are organized under the College Democrats of America, the official youth outreach arm of the Democratic National Committee, which claims over 100,000 college and university student members from across the United States. Other chapters are organized under the Young Democrats of America and its College Caucus. Activities Immediately leading up to election day, chapters are expected to participate in get out the vote (GOTV) activities, both on-campus and in surrounding communities. Other activities are not centrally determined, and thus vary from chapter to chapter. Typical activities might include inviting guest speakers (often elected officials or party activists) to campus, organizing issue advocacy and lobbying efforts (like letter-writing campaigns or phone banks), and arranging service activities for members to attend. College Democrats chapters also often organize social events (like sporting competitions with College Republicans chapters) and other recruitment activities. During the election season, campus chapters typically partake in campaign work. These efforts generally include voter registration drives and dorm storms to register youth voters who have just gained voter eligibility. They also include providing youth manpower to campaigns for canvassing and phone banks. During presidential election years, chapters have organized proxy debates and run mock elections. Presidential Primaries Many chapters of the College Democrats took part in the 2016 Democratic Primary between Secretary Hillary Clinton (D-NY) and Senator Bernie Sanders (I-VT).
In the run-up to the campaign's launch, many students participated in Ready for Hillary PAC's efforts to build support for Clinton. The statewide organization in California actively supported the PAC and recruited supporters while many chapters hosted the PAC's Hillary Bus on their campuses to build support for Clinton. Notable College Democrats See also College Republicans Democratic Party Democratic National Committee High School Democrats of America References Democratic Party (United States) organizations Factions in the Democratic Party (United States) Student wings of political parties in the United States Student organizations established in 1932
https://en.wikipedia.org/wiki/PubMed
PubMed is a free search engine accessing primarily the MEDLINE database of references and abstracts on life sciences and biomedical topics. The United States National Library of Medicine (NLM) at the National Institutes of Health maintains the database as part of the Entrez system of information retrieval. From 1971 to 1997, online access to the MEDLINE database was primarily through institutional facilities, such as university libraries. PubMed, first released in January 1996, ushered in the era of private, free, home- and office-based MEDLINE searching. The PubMed system was offered free to the public starting in June 1997. Content In addition to MEDLINE, PubMed provides access to:
- older references from the print version of Index Medicus, back to 1951 and earlier
- references to some journals before they were indexed in Index Medicus and MEDLINE, for instance Science, BMJ, and Annals of Surgery
- very recent entries to records for an article before it is indexed with Medical Subject Headings (MeSH) and added to MEDLINE
- a collection of books available full-text and other subsets of NLM records
- PMC citations
- NCBI Bookshelf
Many PubMed records contain links to full-text articles, some of which are freely available, often in PubMed Central and local mirrors, such as Europe PubMed Central. Information about the journals indexed in MEDLINE, and available through PubMed, is found in the NLM Catalog. PubMed has more than 35 million citations and abstracts dating back to 1966, selectively to the year 1865, and very selectively to 1809. Of PubMed's records, 24.6 million are listed with their abstracts, and 26.8 million records have links to full-text versions (of which 10.9 million articles are available full-text for free). Over the last 10 years (ending 31 December 2019), an average of nearly one million new records were added each year.
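The Entrez retrieval system mentioned above also exposes a public web API (NCBI's E-utilities), through which PubMed can be queried programmatically. A minimal sketch of building an `esearch` query URL follows; the endpoint and the `db`, `term`, `retmax`, and `retmode` parameters are as documented by NCBI, while the search term itself is just an illustrative example:

```python
from urllib.parse import urlencode

# Base endpoint of NCBI's Entrez E-utilities "esearch" service.
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term: str, retmax: int = 20) -> str:
    """Build an esearch URL against the PubMed database.

    db, term, retmax and retmode are standard E-utilities parameters;
    the query term passed in is only an example.
    """
    params = {
        "db": "pubmed",     # search the PubMed database
        "term": term,       # the query, in PubMed's search syntax
        "retmax": retmax,   # maximum number of record IDs to return
        "retmode": "json",  # request a JSON response
    }
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

url = build_esearch_url("breast cancer treatment")
print(url)
```

Fetching the resulting URL returns a list of PubMed identifiers (PMIDs), which can then be passed to the companion `efetch` utility to retrieve the records themselves.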
In 2016, NLM changed the indexing system so that publishers are able to directly correct typos and errors in PubMed-indexed articles. PubMed has been reported to include some articles published in predatory journals. MEDLINE and PubMed policies for the selection of journals for database inclusion are slightly different. Weaknesses in the criteria and procedures for indexing journals in PubMed Central may allow publications from predatory journals to leak into PubMed. The National Library of Medicine has responded that individual journal articles can be included in PMC to support the public access policies of research funders and that rigorous policies about journals and publishers ensure the integrity of NLM literature databases. Characteristics Website design A new PubMed interface was launched in October 2009 and encouraged the use of quick, Google-like search formulations; they have also been described as 'telegram' searches. By default the results are sorted by Most Recent, but this can be changed to Best Match, Publication Date, First Author, Last Author, Journal, or Title. The PubMed website d
https://en.wikipedia.org/wiki/VISCII
VISCII is an unofficially-defined modified ASCII character encoding for using the Vietnamese language with computers. It should not be confused with the similarly-named officially registered VSCII encoding. VISCII keeps the 95 printable characters of ASCII unmodified, but it replaces 6 of the 33 control characters with printable characters. It adds 128 precomposed characters. Unicode and the Windows-1258 code page are now used for virtually all Vietnamese computer data, but legacy VSCII and VISCII files may need conversion. History and naming VISCII was designed by the Vietnamese Standardization Working Group (Viet-Std Group) led by Christopher Cuong T. Nguyen, Cuong M. Bui, and Hoc D. Ngo based in Silicon Valley, California in 1992 while they were working with the Unicode consortium to include pre-composed Vietnamese characters in the Unicode standard. VISCII, along with VIQR, was first published in a bilingual report in September 1992, in which it was dubbed the "Vietnamese Standard Code for Information Interchange". The report noted a proliferation in computer usage in Vietnam and the increasing volume of computer-based communications among Vietnamese abroad, that existing applications used vendor-specific encodings which were unable to interoperate with one another, and that standardisation between vendors was therefore necessary. The successful inclusion of composed and precomposed Vietnamese in Unicode 1.0 was the result of the lessons learned from the development of 8-bit VISCII and 7-bit VIQR. The next year, in 1993, Vietnam adopted TCVN 5712, its first national standard in the information technology domain. This defined a character encoding named VSCII, which had been developed by the TCVN Technical Committee on Information Technology (TCVN/TC1), and with its name standing for "Vietnamese Standard Code for Information Interchange". VSCII is incompatible with, and otherwise unrelated to, the earlier-published VISCII. 
Unlike VISCII, VSCII is a "Vietnamese Standard" in the sense of a national standard. VISCII and VIQR were approved as the informational-status RFC 1456, attributed to the Viet-Std group and dated May 1993. As is the case with IETF RFCs, RFC 1456 notes them to be "conventions" used by overseas Vietnamese speakers on Usenet, and that it "specifies no level of standard". In spite of this, it continues to call VISCII the "VIetnamese Standard Code for Information Interchange" (the same name taken by VSCII). The labels VISCII and csVISCII are registered with the IANA for VISCII, with reference to RFC 1456. (There is, on the other hand, no official IANA label for TCVN 5712 / VSCII, although x-viet-tcvn5712 was previously supported by Mozilla Firefox.) Design A traditional extended ASCII character set consists of the ASCII set plus up to 128 characters. Vietnamese requires 134 additional letter-diacritic combinations, which is six too many. There are (short of dropping tone mark support for capital letters, as in VSCII-3) essentially fo
https://en.wikipedia.org/wiki/Spoofing%20attack
In the context of information security, and especially network security, a spoofing attack is a situation in which a person or program successfully identifies as another by falsifying data, to gain an illegitimate advantage. Internet Spoofing and TCP/IP Many of the protocols in the TCP/IP suite do not provide mechanisms for authenticating the source or destination of a message, leaving them vulnerable to spoofing attacks when extra precautions are not taken by applications to verify the identity of the sending or receiving host. IP spoofing and ARP spoofing in particular may be used to leverage man-in-the-middle attacks against hosts on a computer network. Spoofing attacks which take advantage of TCP/IP suite protocols may be mitigated with the use of firewalls capable of deep packet inspection or by taking measures to verify the identity of the sender or recipient of a message. Domain name spoofing The term 'Domain name spoofing' (or simply, though less accurately, 'Domain spoofing') is used generically to describe one or more of a class of phishing attacks that depend on falsifying or misrepresenting an internet domain name. These are designed to persuade unsuspecting users to visit a web site other than the one intended, or to open an email that is not in reality from the address shown (or apparently shown). Although website and email spoofing attacks are more widely known, any service that relies on domain name resolution may be compromised. Referrer spoofing Some websites, especially pornographic paysites, allow access to their materials only from certain approved (login) pages. This is enforced by checking the referrer header of the HTTP request. This referrer header, however, can be changed (known as "referrer spoofing" or "Ref-tar spoofing"), allowing users to gain unauthorized access to the materials.
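The weakness behind referrer spoofing is that the referrer header is supplied entirely by the client. A minimal sketch using Python's standard library shows how a request can carry any referrer value the client chooses (both URLs here are illustrative placeholders, and the request is never actually sent):

```python
import urllib.request

# A client controls every header it sends, including the referrer.
# Both URLs below are placeholder examples, not real endpoints.
req = urllib.request.Request(
    "https://example.com/protected/resource",
    headers={"Referer": "https://example.com/login"},  # forged value
)

# The request object now carries the forged header; a server that
# trusts this header cannot distinguish it from a genuine navigation.
print(req.get_header("Referer"))
```

This is why referrer checks are, at best, a weak access-control mechanism: the check relies on data the untrusted party generates.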
Poisoning of file-sharing networks "Spoofing" can also refer to copyright holders placing distorted or unlistenable versions of works on file-sharing networks. E-mail address spoofing The sender information shown in e-mails (the From: field) can be spoofed easily. This technique is commonly used by spammers to hide the origin of their e-mails and leads to problems such as misdirected bounces (i.e. e-mail spam backscatter). E-mail address spoofing is done in much the same way as writing a forged return address using snail mail. As long as the letter fits the protocol (i.e. stamp, postal code), the Simple Mail Transfer Protocol (SMTP) will send the message. It can be done using a mail server with telnet. Geolocation Geolocation spoofing occurs when a user applies technologies to make their device appear to be located somewhere other than where it is actually located. The most common geolocation spoofing is through the use of a Virtual Private Network (VPN) or DNS Proxy in order for the user to appear to be located in a different country, state or territory other than where they are actually located. According to a study
https://en.wikipedia.org/wiki/List%20of%20High%20Priests%20of%20Israel
This article gives a list of the High Priests (Kohen Gadol) of Ancient Israel up to the destruction of the Second Temple in 70 AD. Because of a lack of historical data, this list is incomplete and there may be gaps. The High Priests, like all Jewish priests, belonged to the Aaronic line. The Bible mentions the majority of high priests before the Babylonian captivity, but does not give a complete list of office holders. Reconstructed lists are based on various historical sources. In several periods of non-Jewish rule, high priests were appointed and removed by kings, but still most high priests came from the Aaronic line. One exception is Menelaus, who may not have been from the Tribe of Levi at all, but from the Tribe of Benjamin. List From the Exodus to Solomon's Temple The following lineage appears in the biblical record:
- Aaron
- Eleazar, son of Aaron
- Phinehas, son of Eleazar
- Abishua, son of Phinehas
The Samaritans insert Sashai as the son of Abishua and father of Bukki.
- Bukki, son of Abishua
- Uzzi, son of Bukki
Although Phinehas and his descendants are not directly attested as high priests, this portion of the genealogy is assumed by other sources to give the succession of the high priesthood from father to son. At some point, the office was transferred from descendants of Eleazar to those of his brother Ithamar. The first known and most notable high priest of Ithamar's line was Eli, a contemporary of Samuel.
- Eli, descendant of Ithamar, son of Aaron
- Ahitub, son of Phinehas and grandson of Eli
- Ahijah, son of Ahitub
- Ahimelech, son of Ahijah (or brother of Ahijah and son of Ahitub)
- Abiathar, son of Ahimelech
Abiathar was removed from the high priesthood for conspiring against King Solomon, and was replaced by Zadok, son of Ahitub, who oversaw the construction of the First Temple. According to the biblical genealogies, Zadok was a descendant of Uzzi (through Zerahiah, Meraioth, Amariah and Ahitub) and thus belonged to the line of Eleazar.
First Temple period Priestly lists for this period appear in the Bible, Josephus and the Seder Olam Zutta, but with differences. While Josephus and the Seder Olam Zutta each mention 18 high priests, the biblical genealogy gives 12 names, culminating in the last high priest Seraiah, father of Jehozadak. However, it is unclear whether all those mentioned in the genealogy between Zadok and Jehozadak were high priests, and whether high priests mentioned elsewhere (such as Jehoiada and Jehoiarib) are simply omitted or did not belong to the male line in this genealogy. Some name Jehozadak, son of Seraiah, as a high priest prior to being sent into captivity in Babylonia, based on the biblical references to "Joshua, son of Jehozadak, the high priest". According to the commentary attributed to Rashi, this is a misreading of the phrase, as "the high priest" does not refer to Jehozadak (who was exiled without having served as high priest), but to his son Joshua. After the Babylonian captivity The high priests following the exile were: Jos
https://en.wikipedia.org/wiki/EMI%20%28protocol%29
External Machine Interface (EMI), an extension to Universal Computer Protocol (UCP), is a protocol primarily used to connect to short message service centres (SMSCs) for mobile telephones. The protocol was developed by CMG Wireless Data Solutions, now part of Mavenir. Syntax A typical EMI/UCP exchange looks like this:
^B01/00045/O/30/66677789///1//////68656C6C6F/CE^C
^B01/00041/R/30/A//66677789:180594141236/F3^C
The start of the packet is signaled by ^B (STX, hex 02) and the end by ^C (ETX, hex 03). Fields within the packet are separated by / characters. The first four fields form the mandatory header: the first is the transaction reference number, the second the packet length, the third the operation type (O for operation, R for result), and the fourth the operation itself (here 30, "short message transfer"). The subsequent fields depend on the operation. In the first line above, '66677789' is the recipient's address (telephone number) and '68656C6C6F' is the content of the message, in this case the ASCII string "hello". The second line is the response with a matching transaction reference number, where 'A' indicates that the message was successfully acknowledged by the SMSC, and a timestamp is suffixed to the phone number to show time of delivery. The final field is the checksum, calculated simply by summing all bytes in the packet (including slashes) and taking the 8 least significant bits of the result. The full specification is available on the LogicaCMG website developers' forum, but registration is required. Technical limitations The two-digit transaction reference number means that an entity sending text messages can only have 100 outstanding messages (per session); this can limit performance, but only over a slow network and with an incorrectly configured application or SMSC (for example, a single session with a window size greater than 100). In practice it does not have any impact on delivery throughput.
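The checksum rule described above (sum every byte of the packet body, slashes included, then keep the 8 least significant bits) can be sketched in a few lines. This is an illustration of the stated algorithm only; real packets must also satisfy the UCP length field and field layout:

```python
def ucp_checksum(body: str) -> str:
    """Checksum of a UCP packet body: everything between STX and ETX
    up to and including the '/' that precedes the checksum field.

    All bytes are summed (the '/' separators included) and the 8 least
    significant bits are kept, rendered as two uppercase hex digits.
    """
    return f"{sum(body.encode('ascii')) & 0xFF:02X}"

# Tiny worked example: 'A' (0x41) + 'B' (0x42) = 0x83.
print(ucp_checksum("AB"))  # -> 83
```

Because only the low byte of the sum survives, the checksum detects simple corruption but offers no cryptographic protection.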
The EMI UCP documentation does not specify a default alphabet for alphanumeric messages after decoding from hex digits. (It specifies an alphabet of IRA for the encoded message, which is the same as 7 bit ASCII as 0-9 and A-Z are invariant characters). The related ETS 300 133-3 standard specifies the GSM-7 alphabet, which accommodates more languages than ASCII by replacing unprintable control codes with additional printable characters. In practice the GSM-7 alphabet is used. Other encodings, such as UCS-2, can be sent by using a transparent message and specifying the Data Coding Scheme. Alternatives Short message peer-to-peer protocol (SMPP) also provides SMS over TCP/IP. Computer Interface for Message Distribution (CIMD) developed by Nokia External links ETS 300 133-3 LogicaCMG: Downloads for developers (link no longer active as of 2007-12-24) UCP Specification (Vodafone Germany) A more detailed UCP Specification UCP Perl implementation (for developers) Kannel, Open-Source WAP and SMS Gateway with UCP/EMI 4.0 support. GSM standard Mobile technology Network p
https://en.wikipedia.org/wiki/CIMD
Computer Interface to Message Distribution (CIMD) is a proprietary short message service centre protocol developed by Nokia for its SMSC (now: Nokia Networks). Syntax An example CIMD exchange looks like the following:
<STX>03:007<TAB>021:12345678<TAB>033:hello<TAB><ETX>
<STX>53:007<TAB>021:12345678<TAB>060:971107131212<TAB><ETX>
Each packet starts with STX (hex 02) and ends with ETX (hex 03). The content of the packet consists of fields separated by TAB (hex 09). Each field, in turn, consists of a parameter type, a colon (:), and the parameter value. Note that the last field must also be terminated with a TAB before the ETX. Two-digit parameter types are operation codes, and each message must have exactly one. The number after the operation code is the sequence number used to match an operation to its response. The response code (acknowledgement) of a message is equal to the operation code plus 50. In the example above, the operation code 03 means submit message. Field 021 defines the destination address (telephone number), and field 033 is the user data (content) of the message. Response code 53 with a field 060 time stamp indicates that the message was accepted; if the message failed, the SMSC would reply with field 900 (error code) instead. Supporting software for building CIMD clients is available from Nokia's website; these client tools can be used to submit SMS messages through the message centre. See also Universal Computer Protocol/External Machine Interface (UCP/EMI) Short message peer-to-peer protocol (SMPP) External links Nokia: CIMD specification for SC v7.0 Nokia: CIMD specification for SC v8.0 Software Kannel, Open-Source WAP and SMS Gateway with CIMD 1.3 and CIMD 2.0 support. Ixonos MISP CIMD simulator, Open-Source CIMD v2 compliant server for testing CIMD client applications GSM standard Mobile technology Network protocols
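The framing rules in the Syntax section above (STX/ETX delimiters, TAB-separated `parameter:value` fields, and a trailing TAB before ETX) can be sketched as follows. This is a simplified illustration of the framing only; it ignores the checksum and other details of the full Nokia specification:

```python
STX, ETX, TAB = "\x02", "\x03", "\x09"

def build_cimd_packet(op: str, seq: str, fields: dict) -> str:
    """Assemble a CIMD packet: STX, 'op:seq', TAB-separated
    'param:value' fields, a trailing TAB, then ETX."""
    parts = [f"{op}:{seq}"] + [f"{p}:{v}" for p, v in fields.items()]
    return STX + TAB.join(parts) + TAB + ETX

def parse_cimd_packet(packet: str) -> dict:
    """Split a CIMD packet back into its 'param: value' fields; the
    first entry pairs the operation code with the sequence number."""
    body = packet.strip(STX + ETX).rstrip(TAB)
    return dict(field.split(":", 1) for field in body.split(TAB))

# Rebuild the submit-message example from the text: operation 03,
# sequence 007, destination 021, user data 033.
pkt = build_cimd_packet("03", "007", {"021": "12345678", "033": "hello"})
fields = parse_cimd_packet(pkt)
print(fields)  # {'03': '007', '021': '12345678', '033': 'hello'}
```

A conforming SMSC would acknowledge this packet with operation code 53 (03 plus 50) and the same sequence number, as in the second line of the example exchange.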
https://en.wikipedia.org/wiki/LRP
LRP can refer to:
- Lateralized readiness potential, an electrophysiological brain response
- Layerwise Relevance Propagation, a method for understanding how artificial neural networks work
- Lead replacement petrol
- League for the Revolutionary Party
- The Linux Router Project
- Lipoprotein receptor-related proteins
- Lithuanian Regions Party
- Live action role-playing game
- Living free-radical polymerization
- Livestock risk protection, a type of crop insurance for livestock growers
- Long Range Patrol (disambiguation), military units that operate behind enemy lines
- LRP ration, a lightweight military food ration
- Lower riser package, for well intervention on a subsea oil well
- Long-range plan, business forecast
- Lo Rat Penat
https://en.wikipedia.org/wiki/Bloomberg%20Terminal
The Bloomberg Terminal is a computer software system provided by the financial data vendor Bloomberg L.P. that enables professionals in the financial services sector and other industries to access Bloomberg Professional Services, through which users can monitor and analyze real-time financial market data and place trades on the electronic trading platform. It was developed by employees working for businessman Michael Bloomberg. The system also provides news, price quotes, and messaging across its proprietary secure network. It is well known among the financial community for its black interface, which has become a recognizable trait of the service. The first version of the terminal was released in December 1982. Most large financial firms have subscriptions to Bloomberg Professional Services. Many exchanges charge their own additional fees for access to real-time price feeds across the terminal. The same applies to various news organizations. All Bloomberg Terminals are leased in two-year cycles (in the late 1990s and early 2000s, three-year contracts were an option), with leases originally based on how many displays were connected to each terminal (this predated the move to Windows-based applications). Most Bloomberg setups have between two and six displays. As a data analytics and electronic trading platform, the Bloomberg Terminal is available for an annual fee of around $24k per user, or $27k per year for subscribers who use only one terminal. As of 2022, there were 325,000 Bloomberg Terminal subscribers worldwide. History In 1981, Michael Bloomberg was fired from Salomon Brothers. He was given no severance package, but owned $10 million worth of equity as a partner at the firm.
Using this money, Bloomberg, having designed in-house computerized financial systems for Salomon, set up a data services company named Innovative Market Systems (IMS) based on his belief that Wall Street would pay a premium for high-quality business information, delivered instantaneously on computer terminals in a variety of usable formats. The company sold customized computer terminals that delivered real-time market data, financial calculations and other analytics to Wall Street firms. At first, the machine was called the Market Master terminal, but later became known as the Bloomberg Terminal or simply "The Bloomberg." The terminal was released to market in December 1982. Merrill Lynch became the company's first customer, purchasing a 30% stake in IMS for $30 million in exchange for a five-year restriction on marketing the terminals to Merrill Lynch's competitors. In 1984, Merrill Lynch released IMS from the restriction. In 1990, the Bloomberg keyboard was released with a trackball and built-in voice-chat features. In 1991, the first color edition of the terminal was released. Michael Bloomberg stepped away from working on the terminal in 2001 to run for New York City mayor, but returned to lead the project in 2014. Starting in 2012, Bloomberg Terminal had a gre
https://en.wikipedia.org/wiki/Gare%20de%20l%27Est%20%28Paris%20M%C3%A9tro%29
Gare de l'Est – Verdun () is a station of the Paris Métro, serving Lines 4, 5, and 7, located in the 10th arrondissement of Paris, France. It is the fifth busiest station on the network. Location The metro station, serving three lines, is located in front of the Gare de l'Est at the intersection of Rue du 8-Mai-1945 and Boulevard de Strasbourg. Line 4 follows a north-south axis, and Lines 5 and 7 follow an east-west axis. History The station was opened on 15 November 1907 as part of the extension of line 5 from Lancry (now Jacques Bonsergent) to Gare du Nord. The line 4 platforms were opened on 21 April 1908 as part of the first section of the line from Châtelet to Porte de Clignancourt. The line 7 platforms were opened on 5 November 1910 as part of the first section of the line from Opéra to Porte de la Villette. Lines 5 and 7 are parallel, running as four tracks with an island platform and two side platforms. Line 4 runs perpendicularly beneath Lines 5 and 7. The station bears the name of Gare de l'Est, the railway station under which it is built. Its full name is Gare de l'Est-Verdun, named after the nearby Avenue de Verdun. The name Verdun commemorates World War I's Battle of Verdun, to which French soldiers were sent from the railway station. From September 2006 to June 2007, Gare de l'Est and its metro station underwent a major renovation under the Gares en mouvement and Un métro + beau projects, to modernise the station ahead of the arrival of the LGV Est line. On the platforms of Lines 5 and 7, orange tiles and blue paint gave way to traditional white tiles. The lights were also replaced, and the latest model of smiley-style seating was installed. Finally, the Parisine typeface replaced the Motte typeface, symbolizing the end of work on the platforms. The new standard signage was installed throughout the station. Only slight changes occurred on the platforms of Line 4.
These were limited to the replacement of the orange Motte tiles at the ends of the platforms and a fresh coat of paint on the damaged tiling of the vault. In 2018, it was the fifth busiest metro station in the network, with 21,432,041 passengers passing through the station. Passenger services Access The station has eight entrances:
- Access 1: Rue d'Alsace
- Access 2: SNCF Gare de l'Est
- Access 3: Place du 11-Novembre-1918
- Access 4: Rue du Faubourg-Saint-Martin
- Access 5: Rue du 8-Mai-1945
- Access 6: Boulevard de Strasbourg
- Access 7: Landing
- Access 8: Boulevard Magenta
Station layout Bus and RER connections The station is served by Lines 31, 32, 35, 38, 39, 46, 56, 91 and the OpenTour tourist line of the RATP Bus Network and, at night, by Lines N01, N02, N13, N14, N41, N42, N43, N44, N45, N140, N141, N142, N143, N144 and N145 of the Noctilien network. As the name suggests, the metro station is connected to the train station Gare de Paris-Est. The maps for metro Line 7 indicate a connection with the RER E at the Gare de Magenta, although it is necessary to
https://en.wikipedia.org/wiki/Martian%20canals
During the late 19th and early 20th centuries, it was erroneously believed that there were "canals" on the planet Mars. These were a network of long straight lines in the equatorial regions from 60° north to 60° south latitude on Mars, observed by astronomers using early telescopes without photography. They were first described by the Italian astronomer Giovanni Schiaparelli during the opposition of 1877, and confirmed by later observers. Schiaparelli called these canali ("channels"), which was mis-translated into English as "canals". The Irish astronomer Charles E. Burton made some of the earliest drawings of straight-line features on Mars, although his drawings did not match Schiaparelli's. Around the turn of the century there was even speculation that they were engineering works, irrigation canals constructed by a civilization of intelligent aliens indigenous to Mars. By the early 20th century, improved astronomical observations revealed the "canals" to be an optical illusion, and modern high-resolution mapping of the Martian surface by spacecraft shows no such features. Supposed "discoveries" The Italian word canale (plural canali) can mean "canal", "channel", "duct" or "gully". The first person to use the word canale in connection with Mars was Angelo Secchi in 1858, although he did not see any straight lines and applied the term to large features—for example, he used the name "Canale Atlantico" for what later came to be called Syrtis Major Planum. The canals were named by Schiaparelli and others after both real and legendary rivers of various places on Earth, or the mythological underworld. At this time in the late 19th century, astronomical observations were made without photography. Astronomers had to stare for hours through their telescopes, waiting for a moment of still air when the image was clear, and then draw a picture of what they had seen. Astronomers believed at the time that Mars had a relatively substantial atmosphere. 
They knew that the rotation period of Mars (the length of its day) was almost the same as Earth's, and they knew that Mars' axial tilt was also almost the same as Earth's, which meant it had seasons in the astronomical and meteorological sense. They could also see Mars' polar ice caps shrinking and growing with these changing seasons. The similarities with Earth led them to interpret darker albedo features (for instance Syrtis Major) on the lighter surface as oceans. By the late 1920s, however, it was known that Mars is very dry and has a very low atmospheric pressure. In 1889, American astronomer Charles A. Young reported that Schiaparelli's canal discovery of 1877 had been confirmed in 1881, though new canals had appeared where there had not been any before, prompting "very important and perplexing" questions as to their origin. During the favourable opposition of 1892, W. H. Pickering observed numerous small circular black spots occurring at every intersection or starting-point of the "canals". Many of
https://en.wikipedia.org/wiki/CREN
The acronym CREN may refer to:
- Christian Real Estate Network, a private real estate association
- Club de Radioexperimentadores de Nicaragua, an amateur radio organization in Nicaragua
- Corporation for Research and Educational Networking, organizational home for the computer networks Bitnet and later CSNET
https://en.wikipedia.org/wiki/Silent%20Service%20%28video%20game%29
Silent Service is a submarine simulator video game designed by Sid Meier and published by MicroProse for various 8-bit home computers in 1985 and for 16-bit systems like the Amiga in 1987. A Nintendo Entertainment System version developed by Rare was published in 1989 by Konami in Europe and by Konami's Ultra Games subsidiary in North America. Silent Service II was released in 1990. Tommo purchased the rights to this game and published it online through its Retroism brand in 2015. Gameplay Silent Service is set in the Pacific Ocean during World War II, with the player assuming control of a U.S. Gato-class submarine for various war patrols against Japanese shipping. "Silent Service" was a nickname for the US Navy's submarine force in the Pacific during World War II. The player can choose when to attack from a range of realistic tactics, including the End Around and near invisibility at night (if the sub's profile is kept to a minimum). It allows four projectiles concurrently, a limitation that poses a challenge when battling multiple destroyers. Real time is accelerated when not in combat. Sid Meier described several key factors that influenced the design of the game: the size of the theater, the variety of tactical situations, and evolving technology, such as the use of surface radar and torpedoes that did or did not leave trails of bubbles on the surface—only simulations set after their real-life introduction had access to these. Tasks such as navigation, damage repair, and firing were compartmentalized into different screens to allow players access to a great deal of information, but also to focus on the immediate task. Development The game was designed by Sid Meier, with art by Michael O. Haire. Silent Service was in development for 8 months and its creation was inspired by a fractal technological trick. Reception Silent Service was MicroProse's second best-selling Commodore game as of late 1987. The company sold 250,000 copies by March 1987, and roughly 400,000 overall.
Info in 1985 rated Silent Service for the Commodore 64 four stars out of five, stating that its quality and graphics "are all unmistakably MicroProse" and "ensure a satisfying level of play for any wargamer". Antic wrote in 1986 that "Sid Meier and his team of simulation experts at MicroProse have outdone themselves". The magazine approved of how the game offered both beginner modes and "complex, historically accurate, and challenging war patrol scenarios" for experts, and noted the Atari 8-bit version's "superb" graphics and "well done" manual. Antic in 1987 also liked the Atari ST version's graphics, sound, adjustable difficulty levels, and documentation, concluding: "It's a traditional MicroProse product and it's nice to see that they've remained dedicated to detail". Compute! wrote in 1986 that "like F-15 Strike Eagle, Silent Service is both intriguing and addictive... a superior product". Computer Gaming World in 1986 called Silent Service "easily the best [submarine simula
https://en.wikipedia.org/wiki/Meta-information
Meta-information may refer to:
- Metadata
- Knowledge tagging
https://en.wikipedia.org/wiki/Phantom%20Entertainment
Phantom Entertainment, Inc. (known as Infinium Labs, Inc. until 2006) was a company founded in 2002 by Tim Roberts that made computer keyboards. However, Phantom was best known for the Phantom, a video game console advertised for Internet gaming on demand in 2004; it was never marketed, leading to suggestions that it was vaporware. The company's website was last updated in late 2011. History Infinium Labs was founded by Tim Roberts in 2002 as a private company. In January 2003 it issued a press release saying that it would soon release a "revolutionary new gaming platform" with an on-demand video-game service, delivering games through an online subscription. The press release had no specific information, but included a computer-generated prototype design. Due to the use of buzzwords and the lack of details, the product was derided nearly from the beginning by news sites such as IGN and Slashdot and in the Penny Arcade webcomic. The hardware and gaming site HardOCP researched and wrote an extensive article on the company and its operation, and was sued in turn. The Phantom placed first in Wired News's "Vaporware 2004" list. In 2004, Infinium Labs went public. Roberts left the company in summer 2005 (with millions of shares of stock) before any products had been delivered. He later rejoined as chairman of the board, but in a July 2007 press release he again resigned from the company. Subsequent CEOs included Kevin Bachus (who took the post in August 2005), Greg Koler (in January 2006) and John Landino, who was appointed CEO and interim chief financial officer in July 2008. In September 2006 the company (which had changed its name from Infinium Labs) promised to introduce its Phantom Lapboard product in November 2006, with a gaming service to follow in March 2007. In June 2008, the company released the Lapboard. In August 2007, Phantom Entertainment signed an agreement with ProGames Network to provide Lapboards and "game-service content" in hotels worldwide.
The Phantom The Phantom is a cancelled home video game console whose development was supposedly begun by Phantom Entertainment–then known as Infinium Labs–in 2003. The device was said to be capable of playing current and future PC games, giving the system a large initial game library and making it easier for developers to produce games for the system. The system was said to feature a direct-download content delivery service, instead of the discs and cartridges used by most game consoles at the time. Press releases said in 2003 that the console would be released that year, and the digital rights management software would be provided by DiStream. A prototype Phantom was first seen at the May 2004 Electronic Entertainment Expo (E3), although it was rumored to be fake. Robrady Design was hired to develop the first Phantom prototype, and Synopse ID was later retained to develop second- and third-generation prototypes. Two units of the first-generation prototype were known to exist, one publicly des
https://en.wikipedia.org/wiki/Point%20location
The point location problem is a fundamental topic of computational geometry. It finds applications in areas that deal with processing geometrical data: computer graphics, geographic information systems (GIS), motion planning, and computer aided design (CAD). In its most general form, the problem is, given a partition of the space into disjoint regions, to determine the region where a query point lies. For example, the problem of determining which window of a graphical user interface contains a given mouse click can be formulated as an instance of point location, with a subdivision formed by the visible parts of each window, although specialized data structures may be more appropriate than general-purpose point location data structures in this application. Another special case is the point in polygon problem, in which one needs to determine whether a point is inside, outside, or on the boundary of a single polygon. In many applications, one needs to determine the location of several different points with respect to the same partition of the space. To solve this problem efficiently, it is useful to build a data structure that, given a query point, quickly determines which region contains the query point (e.g. Voronoi Diagram). Planar case In the planar case, we are given a planar subdivision S, formed by multiple polygons called faces, and need to determine which face contains a query point. A brute force search of each face using the point-in-polygon algorithm is possible, but usually not feasible for subdivisions of high complexity. Several different approaches lead to optimal data structures, with O(n) storage space and O(log n) query time, where n is the total number of vertices in S. For simplicity, we assume that the planar subdivision is contained inside a square bounding box. Slab decomposition The simplest and earliest data structure to achieve O(log n) time was discovered by Dobkin and Lipton in 1976. 
It is based on subdividing S using vertical lines that pass through each vertex in S. The region between two consecutive vertical lines is called a slab. Notice that each slab is divided by non-intersecting line segments that completely cross the slab from left to right. The region between two consecutive segments inside a slab corresponds to a unique face of S. Therefore, we reduce our point location problem to two simpler problems: Given a subdivision of the plane into vertical slabs, determine which slab contains a given point. Given a slab subdivided into regions by non-intersecting segments that completely cross the slab from left to right, determine which region contains a given point. The first problem can be solved by binary search on the x coordinate of the vertical lines in O(log n) time. The second problem can also be solved in O(log n) time by binary search. To see how, notice that, as the segments do not intersect and completely cross the slab, the segments can be sorted vertically inside each slab. While this algorith
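The two binary searches described above can be sketched in a few lines. This is an illustrative sketch under assumed data layouts (slab boundaries as a sorted list of x-coordinates, and each slab's segments pre-sorted from bottom to top), not the full Dobkin–Lipton structure:

```python
import bisect

# A segment is ((x1, y1), (x2, y2)) with x1 < x2, spanning its whole slab.

def y_on_segment(seg, x):
    """Linearly interpolate the segment's y-coordinate at abscissa x."""
    (x1, y1), (x2, y2) = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def locate(slab_xs, slab_segments, point):
    """Return (slab index, region index) for a query point.

    slab_xs: sorted x-coordinates of the vertical lines.
    slab_segments[i]: segments of slab i, sorted bottom to top.
    The region index is the number of segments strictly below the point.
    """
    px, py = point
    # First binary search: which slab contains px?
    i = bisect.bisect_right(slab_xs, px) - 1
    # Second binary search: which region of slab i contains py?
    segs = slab_segments[i]
    lo, hi = 0, len(segs)
    while lo < hi:
        mid = (lo + hi) // 2
        if y_on_segment(segs[mid], px) < py:
            lo = mid + 1
        else:
            hi = mid
    return i, lo
```

Both searches cost O(log n), matching the query bound, though storing every slab's segment list explicitly is what drives the structure's worst-case O(n²) space.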
https://en.wikipedia.org/wiki/Probabilistically%20checkable%20proof
In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way. Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof. Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n),O(1)] = NP. Definition Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x with length n, which might be false, the prover produces a proof π which states x solves L (x ∈ L, the proof is a string ∈ Σ*). 
The verifier is a randomized oracle Turing machine V that checks the proof π for the statement that x solves L (or x ∈ L) and decides whether to accept the statement. The system has the following properties: Completeness: For any x ∈ L, given the proof π produced by the prover of the system, the verifier accepts the statement with probability at least c(n). Soundness: For any x ∉ L, for any proof π, the verifier mistakenly accepts the statement with probability at most s(n). For the computational complexity of the verifier, the randomness complexity r(n) measures the maximum number of random bits that V uses over all x of length n, and the query complexity q(n) is the maximum number of queries that V makes to π over all x of length n. In the above definition, the length of proof is not mentioned since usually it includes the alphabet set and all the witnesses. For the prover, we do not care how it arrives at the solution to the problem; we care only about the proof it gives of the solution's membership in the language. The verifier is said to be non-adaptive if it makes all its queries before it receive
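In symbols, writing V^π for the verifier with oracle access to the proof π and r for its random bits, the two conditions above read:

```latex
\begin{aligned}
x \in L &\implies \exists\,\pi :\ \Pr_{r}\big[\,V^{\pi}(x; r) \text{ accepts}\,\big] \ge c(n),\\
x \notin L &\implies \forall\,\pi :\ \Pr_{r}\big[\,V^{\pi}(x; r) \text{ accepts}\,\big] \le s(n).
\end{aligned}
```

For the class PCP[r(n), q(n)] as defined above, the standard setting is perfect completeness c(n) = 1 and soundness s(n) = 1/2.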
https://en.wikipedia.org/wiki/Grands%20Boulevards%20station
Grands Boulevards (), formerly named Rue Montmartre (1931–1998), is a station on Lines 8 and 9 of the Paris Métro. In 2019, it was the 44th busiest station of the Métro network, with 6,807,424 yearly users. The section of both lines from just east of Richelieu–Drouot to west of République was built under the Grands Boulevards, which replaced the Louis XIII wall, partly on the border between the 2nd and 9th arrondissements, and is in soft ground which was once the course of the Seine. The lines are built on two levels, with Line 8 on the higher level and Line 9 on the lower level. The platforms are at the sides and the box containing the lines and supporting the road above is strengthened by a central wall between the tracks. There is no interconnection between the lines at Grands Boulevards, with each level having different accesses to the street. History Opening The station was opened on 5 May 1931 with the extension of Line 8 from Richelieu–Drouot to Porte de Charenton. The Line 9 platforms were opened on 10 December 1933 with the extension of the line from Richelieu–Drouot to Porte de Montreuil. Name change The station was originally called "Rue Montmartre," but the tiled nameplates read simply "Montmartre." This caused confusion for non-Parisians and tourists, as the Montmartre neighborhood lies significantly north of the station. In 1966, an attempt was made to improve clarity by covering the original nameplates with signs reading "Rue Montmartre," but confusion continued. The station was renamed "Grands Boulevards" in 1998 to reflect the programme of the former Mayor of Paris, Jean Tiberi, to upgrade the main boulevards of Paris and because the old name continued to be misleading. 
Passenger services Access The station has six entrances: Entrance 1 - Rue du Faubourg-Montmartre Entrance 2 - Boulevard Montmartre, Musée Grévin Entrance 3 - Rue Montmartre Entrance 4 - Boulevard Poissonnière Entrance 5 - Rue Saint-Fiacre Entrance 6 - Rue Rougemont Station layout Platforms The platforms of the two lines, 105 meters long, have an unusual configuration. There are two platforms per line, isolated in half-stations separated by a central wall, a consequence of the station's construction in unstable ground. Those of line 8 have an elliptical vault while those of line 9, arranged below, have vertical side walls and a horizontal reinforced concrete ceiling. Their decoration is in the Andreu-Motte style in both cases. Those of line 8 have two red light strips (one per half-station), a bench in flat red tiles and orange Motte seats. Those of line 9 have two green light strips (one per half-station, offset on the side opposite the track), benches in flat green tiles and green Motte seats. These fittings are combined with the flat white ceramic tiles, which are placed horizontally and in staggered rows on the side walls and the vaults of line 8, while they are placed vertically and aligned on the side walls of line 9, the ceiling of the latter be
https://en.wikipedia.org/wiki/Subsim
SUBSIM is an online publication founded by Neal Stevens in Jan. 1997 that focuses on naval and submarine computer game reviews, articles, and news. Subsim is short for Submarine simulator. Subsim's forums have been online since 1999, with archives back to 2001. Membership totals were 117,023 as of August 2016, with approximately 7,700 active members daily. International meets have been held in London, Houston, Amsterdam, Copenhagen, Groton, Germany, and Tokyo. Subsim is a media outlet for game publishers that feature naval content, a resource for military strategists, as well as a source of editorial comment on the state of PC computer simulations and games. Subsim members have been consultants and testers for games such as Enigma: Rising Tide, Silent Hunter II, Silent Hunter III, Dangerous Waters, Sub Command, Fleet Command, Destroyer Command, among others. Shortly after the release of the ill-fated Silent Hunter II, Ubisoft agreed to provide Subsim with the game source code to replace the rTime multiplayer engine with DirectPlay. Subsim members raised $10,000 to pay a programmer to make the improvements. Significant events 1998 - Subsim gets its first industry interview: Mike Jones of Aces of the Deep answers a series of questions about his groundbreaking U-boat sim. Also, the Subsim Fix My 688(I) Campaign begins, designed to urge EA/Sonalysts into producing a no-cheat patch for Jane's 688(I). 2002 - Subsim works with Ubisoft producer Carl Norman to initiate a broad-reaching mod that will convert Silent Hunter II and Destroyer Command from the original MP engine (rTime) to Microsoft DirectPlay. The mod is dubbed Projekt Messerwetzer (messerwetzer (ms-r-vt-zr-) n. [German] - A craftsman who sharpens old knives.) To avoid legal constraints, Projekt Messerwetzer is funded purely by donations and will be free to all. Subsim donates $1000 and Subsim players eventually raise $10,000 for the project. Multiplayer was significantly improved with this unofficial patch. 
2003 - Subsim and Sub Club members meet in London for the first Intercontinental Meeting. 2004 - Subsim Editor Neal Stevens attends the 2004 E3 convention in Los Angeles. First looks at Sid Meier's Pirates, Akella's PT Boats, and Ubisoft's Silent Hunter III reveal that 2005 will be a very good year for subsim players. 2006 - Neal Stevens of Subsim is invited aboard the new nuclear submarine USS Texas for a media tour. The Sept. 9 commissioning ceremony drew more than 10,000 guests and visitors to Galveston, including ship sponsor First Lady Laura Bush and Sens. Kay Bailey Hutchison and John Cornyn. 2007 - Subsim publishes the 2007 Submarine Almanac, celebrating 10 years on the web with a flotilla of stories, articles, and art from naval historians, subsim players, game developers, and Navy men. Foreword by bestselling author Joe Buff. Abraham Zeegers of the Netherlands, moderator and host of the 2005 Subsim meet, is found dead in his Amsterdam apartment, less than one week after he had
https://en.wikipedia.org/wiki/D%26C
D&C or D and C may refer to: Dilation and curettage, a medical procedure involving the dilation of the cervix to remove uterine contents Divide and conquer algorithm, an algorithm design paradigm that recursively breaks a problem into smaller subproblems Doctrine and Covenants, part of the scripture of the Latter Day Saint movement Drill & Ceremony, a term used in the U.S. Army for a method that enables leaders to direct the movement of soldiers in an orderly manner Dennis and Callahan, an American morning radio show Democrat and Chronicle, a Rochester, New York, daily newspaper See also Divide and conquer (disambiguation) DNC (disambiguation) DC (disambiguation) D (disambiguation) C (disambiguation) C&D (disambiguation)
https://en.wikipedia.org/wiki/Quantum%20Corporation
Quantum Corporation is a data storage, management, and protection company that provides technology to store, manage, archive, and protect video and unstructured data throughout the data lifecycle. Their products are used by enterprises, media and entertainment companies, government agencies, big data companies, and life science organizations. Quantum is headquartered in San Jose, California and has offices around the world, supporting customers globally in addition to working with a network of distributors, VARs, DMRs, OEMs and other suppliers. History Quantum was founded in 1980 as a manufacturer of hard disk drives. By 1984, it led the market for mid-capacity 5.25-inch drives. That year, a subsidiary was launched called Plus Development to focus on the development of hardcards. Plus Development became a successful designer of 3.5-inch drives with Matsushita Kotobuki Electronics (now Panasonic) as the contract manufacturer. By 1989, Quantum led the compact drive market. The company had 11 new models of 3.5-inch and 2.5-inch drives. It signed distribution agreements with Rein Electronik in Germany and Inelco Peripheriques in France. It also merged its subsidiary Plus Development Corporation into its Commercial Products Division. Quantum was the largest drive producer worldwide in 1994. In 2000, Maxtor agreed to acquire Quantum’s hard disk drive group. In 2004, Quantum became a member of the LTO Consortium after acquiring Certance. In 2012, Quantum announced Q-Cloud, which combines on-premise storage with cloud storage. In 2015, the company released a multi-tier storage product, StorNext 5.3, which supports Q-Cloud and powers the company’s Xcellis workflow storage technology. In 2018, Jamie Lerner became CEO and Quantum shifted focus from hard drives/tape to providing data storage, management, and protection for video and other unstructured data. In 2019, the company added a subscription for cloud-based device management and product, calling it Distributed Cloud Services. 
In 2020-2021 Quantum acquired technology to support the shift in focus including ActiveScale and CatDV. They also acquired video surveillance software from Pivot3. During the COVID-19 pandemic, Quantum was a recipient of a government loan of US$10 million as part of the Paycheck Protection Program (PPP). In October 2021, Quantum and IBM announced they would work together to develop LTO-10, the next generation of Linear Tape-Open (LTO) technology. In April 2023, Quantum announced a new scale-out unstructured data storage platform called Myriad. The solution is software-defined, initially running on Quantum-supplied hardware but capable of running on public cloud infrastructure. Acquisitions Prior to the 2000 merger of the hard drive division, Quantum began a series of tape technology acquisitions: 1998 – ATL Products, a manufacturer of automated tape libraries. 1999 – Meridian Data, a network-attached storage supplier 2001 – M4 Data (Holdings) Ltd., a manufacturer of
https://en.wikipedia.org/wiki/Actor%E2%80%93network%20theory
Actor–network theory (ANT) is a theoretical and methodological approach to social theory where everything in the social and natural worlds exists in constantly shifting networks of relationships. It posits that nothing exists outside those relationships. All the factors involved in a social situation are on the same level, and thus there are no external social forces beyond what and how the network participants interact at present. Thus, objects, ideas, processes, and any other relevant factors are seen as just as important in creating social situations as humans. ANT holds that social forces do not exist in themselves, and therefore cannot be used to explain social phenomena. Instead, strictly empirical analysis should be undertaken to "describe" rather than "explain" social activity. Only after this can one introduce the concept of social forces, and only as an abstract theoretical concept, not something which genuinely exists in the world. Although it is best known for its controversial insistence on the capacity of nonhumans to act or participate in systems or networks or both, ANT is also associated with forceful critiques of conventional and critical sociology. Developed by science and technology studies (STS) scholars Michel Callon, Madeleine Akrich and Bruno Latour, the sociologist John Law, and others, it can more technically be described as a "material-semiotic" method. This means that it maps relations that are simultaneously material (between things) and semiotic (between concepts). It assumes that many relations are both material and semiotic. The term actor-network theory was coined by John Law in 1992 to describe the work being done across case studies in different areas at the Centre de Sociologie de l'Innovation at the time. The theory holds that everything in the social and natural worlds, human and nonhuman, interacts in shifting networks of relationships, with nothing existing outside those networks. 
ANT challenges many traditional approaches by defining nonhumans as actors equal to humans. This claim provides a new perspective when applying the theory in practice. Broadly speaking, ANT is a constructivist approach in that it avoids essentialist explanations of events or innovations (i.e. ANT explains a successful theory by understanding the combinations and interactions of elements that make it successful, rather than saying it is true and the others are false). Likewise, it is not a cohesive theory in itself. Rather, ANT functions as a strategy that assists people in being sensitive to terms and the often unexplored assumptions underlying them. It is distinguished from many other STS and sociological network theories for its distinct material-semiotic approach. Background and context ANT was first developed at the Centre de Sociologie de l'Innovation (CSI) of the École nationale supérieure des mines de Paris in the early 1980s by staff (Michel Callon, Madeleine Akrich, Bruno Latour) and visitors (including Joh
https://en.wikipedia.org/wiki/Continuation
In computer science, a continuation is an abstract representation of the control state of a computer program. A continuation implements (reifies) the program control state, i.e. the continuation is a data structure that represents the computational process at a given point in the process's execution; the created data structure can be accessed by the programming language, instead of being hidden in the runtime environment. Continuations are useful for encoding other control mechanisms in programming languages such as exceptions, generators, coroutines, and so on. The "current continuation" or "continuation of the computation step" is the continuation that, from the perspective of running code, would be derived from the current point in a program's execution. The term continuations can also be used to refer to first-class continuations, which are constructs that give a programming language the ability to save the execution state at any point and return to that point at a later point in the program, possibly multiple times. History The earliest description of continuations was made by Adriaan van Wijngaarden in September 1964. Van Wijngaarden spoke at the IFIP Working Conference on Formal Language Description Languages held in Baden bei Wien, Austria. As part of a formulation for an Algol 60 preprocessor, he called for a transformation of proper procedures into continuation-passing style, though he did not use this name, and his intention was to simplify a program and thus make its result more clear. Christopher Strachey, Christopher P. Wadsworth and John C. Reynolds brought the term continuation into prominence in their work in the field of denotational semantics that makes extensive use of continuations to allow sequential programs to be analysed in terms of functional programming semantics. Steve Russell invented the continuation in his second Lisp implementation for the IBM 704, though he did not name it. Reynolds gives a complete history of the discovery of continuations. 
First-class continuations First-class continuations are a language's ability to completely control the execution order of instructions. They can be used to jump to a function that produced the call to the current function, or to a function that has previously exited. One can think of a first-class continuation as saving the execution state of the program. It is important to note that true first-class continuations do not save program data – unlike a process image – only the execution context. This is illustrated by the "continuation sandwich" description: Say you're in the kitchen in front of the refrigerator, thinking about a sandwich. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwich, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwich. But fortunately, ther
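The idea that a continuation is "the rest of the computation", reified as a value the program can invoke, can be sketched in continuation-passing style. This is an illustrative Python sketch (Python has no first-class call/cc; the continuation is modeled explicitly as a callback):

```python
def fact_cps(n, k):
    """Factorial in continuation-passing style.

    k is the continuation: a function representing everything that
    remains to be done with the result of this computation step.
    """
    if n == 0:
        return k(1)  # hand the base value to the rest of the computation
    # Build a new continuation that multiplies by n, then defers to k.
    return fact_cps(n - 1, lambda r: k(n * r))

# The "top-level" continuation simply returns the final value.
print(fact_cps(5, lambda r: r))  # → 120
```

Because control flow is made explicit as a value, a language with true first-class continuations can capture the current k at any point and re-invoke it later, which is exactly the behavior the sandwich story describes.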
https://en.wikipedia.org/wiki/Register%20machine
In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All the models are Turing equivalent. Overview The register machine gets its name from its use of one or more "registers". In contrast to the tape and head used by a Turing machine, the model uses multiple, uniquely addressed registers, each of which holds a single positive integer. There are at least four sub-classes found in the literature, here listed from the most primitive to the most like a computer: Counter machine – the most primitive and reduced theoretical model of computer hardware. Lacks indirect addressing. Instructions are in the finite state machine in the manner of the Harvard architecture. Pointer machine – a blend of counter machine and RAM models. Less common and more abstract than either model. Instructions are in the finite state machine in the manner of the Harvard architecture. Random-access machine (RAM) – a counter machine with indirect addressing and, usually, an augmented instruction set. Instructions are in the finite state machine in the manner of the Harvard architecture. Random-access stored-program machine model (RASP) – a RAM with instructions in its registers analogous to the Universal Turing machine; thus it is an example of the von Neumann architecture. But unlike a computer, the model is idealized with effectively infinite registers (and if used, effectively infinite special registers such as an accumulator). Compared to a computer, the instruction set is much reduced in number and complexity. Any properly defined register machine model is Turing equivalent. Computational speed is very dependent on the model specifics. In practical computer science, a similar concept known as a virtual machine is sometimes used to minimise dependencies on underlying machine architectures. Such machines are also used for teaching. 
The term "register machine" is sometimes used to refer to a virtual machine in textbooks. Formal definition A register machine consists of: A number of labeled, discrete registers, each unbounded in extent (capacity): a finite (or infinite in some models) set of registers, each considered to be of infinite extent and each of which holds a single non-negative integer (0, 1, 2, ...). The registers may do their own arithmetic, or there may be one or more special registers that do the arithmetic, e.g. an "accumulator" and/or "address register". See also Random-access machine. Tally counters or marks: discrete, indistinguishable objects or marks of only one sort suitable for the model. In the most-reduced counter machine model, for each arithmetic operation only one object/mark is either added to or removed from its location/tape. In some counter machine models (e.g. Melzak, Minsky) and most RAM and RASP models more than one object/mark can be added or removed in one operation with "addition" and usually "subtraction"; sometim
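A counter machine of the kind described above can be simulated in a few lines. The two-instruction set used here (increment; test-for-zero-else-decrement) is one common minimal choice in the literature, not the only formalization; instruction names and encoding are illustrative:

```python
def run(program, registers):
    """Interpret a minimal counter-machine program.

    Instruction forms (illustrative encoding):
      ("INC", r, nxt)       - add 1 to register r, go to instruction nxt
      ("JZDEC", r, z, nz)   - if register r is 0 go to z,
                              else subtract 1 and go to nz
      ("HALT",)
    """
    pc = 0
    while program[pc][0] != "HALT":
        op = program[pc]
        if op[0] == "INC":
            registers[op[1]] += 1
            pc = op[2]
        else:  # JZDEC
            if registers[op[1]] == 0:
                pc = op[2]
            else:
                registers[op[1]] -= 1
                pc = op[3]
    return registers

# Add r1 into r0, one tally at a time (destroys r1):
prog = [
    ("JZDEC", 1, 2, 1),  # 0: if r1 == 0 halt, else r1 -= 1 and continue
    ("INC", 0, 0),       # 1: r0 += 1, loop back
    ("HALT",),           # 2
]
print(run(prog, {0: 3, 1: 4}))  # → {0: 7, 1: 0}
```

The instructions live in the program list rather than in the registers, mirroring the Harvard-architecture separation the overview attributes to the counter machine model.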
https://en.wikipedia.org/wiki/GNN
GNN can stand for: GNN (news network), a platform news agency started in 2023, building a news network in 13,000 cities around the world through the 'newsg' platform Global News Network, a global news network that the Japanese government tried and failed to establish about 40 years ago GNNradio, a Christian radio network in the southeastern United States GNN (news channel), Pakistani news channel Garde Nationale et Nomade du Tchad (National and Nomadic Guard), a state security force in Chad Genome News Network, an online magazine focused on genomics news Global Network Navigator, an early commercial Web publication Global News Network, a news channel in the Philippines Goodnight Nurse, a New Zealand alternative rock band Graph neural network, a class of neural network for processing data best represented by graph data structures Guerrilla News Network, a defunct news website and TV studio GNN Web News, a Japanese Web newsmagazine
https://en.wikipedia.org/wiki/Guerrilla%20News%20Network
Guerrilla News Network, Inc. (GNN) was a privately owned news website and television production company that operated from 2000 to 2009. It declared as its mission to "expose people to important global issues through cross-platform guerrilla programming." This was accomplished through the production of original articles, reporting and multimedia, as well as republishing of commentary and news articles from a number of sources including other progressive commentary sites, mainstream news agencies, and blogs. GNN also hosted blogs for registered users and a discussion forum, and featured collaborative user-driven investigations and user-submitted photo- and video journalism. The company also produced feature documentaries, books and music videos. History GNN was founded in 2000 by Josh Shore and Stephen Marshall. Their headquarters were in New York City and they had production facilities in Berkeley, California. GNN produced a series of award-winning short web films about such subjects as the CIA's involvement in the drug trade during the 1980s and genetically engineered foods. They also produced two feature documentaries, numerous music videos, and a book. GNN's website, GNN.tv, was user driven; users/contributors received a free blog page. GNN allowed submissions of original content in the form of articles. These had to be wholly original, sourced, and accompanied by a photograph or illustration. GNN published submissions based on a voting system, in which users/contributors who had more publications on GNN carried more voting weight. Submissions with enough votes were published to the front page, while everything else remained on its creator's page until getting enough points. GNN also published headlines. The site was shut down in 2009, and the domain has since been occupied by a spam website. 
See also The Real News Network Adbusters Shooting War References External links Internet Archive: Collection of GNN−Guerrilla News Network videos The Guerrilla Underground Network (a project of GNN refugees) Stephen Marshall interviewed News agencies based in the United States Alternative journalism organizations Citizen journalism American companies established in 2000 Mass media companies established in 2000 Mass media companies disestablished in 2009 Internet properties established in 2000 Publishing companies established in 2000
https://en.wikipedia.org/wiki/HAKMEM
HAKMEM, alternatively known as AI Memo 239, is a February 1972 "memo" (technical report) of the MIT AI Lab containing a wide variety of hacks, including useful and clever algorithms for mathematical computation, some number theory and schematic diagrams for hardware – in Guy L. Steele's words, "a bizarre and eclectic potpourri of technical trivia". Contributors included about two dozen members and associates of the AI Lab. The title of the report is short for "hacks memo", abbreviated to six upper case characters that would fit in a single PDP-10 machine word (using a six-bit character set). History HAKMEM is notable as an early compendium of algorithmic technique, particularly for its practical bent, and as an illustration of the wide-ranging interests of AI Lab people of the time, which included almost anything other than AI research. HAKMEM contains original work in some fields, notably continued fractions. Introduction Compiled with the hope that a record of the random things people do around here can save some duplication of effort -- except for fun. Here is some little known data which may be of interest to computer hackers. The items and examples are so sketchy that to decipher them may require more sincerity and curiosity than a non-hacker can muster. Doubtless, little of this is new, but nowadays it's hard to tell. So we must be content to give you an insight, or save you some cycles, and to welcome further contributions of items, new or used. See also Hacker's Delight AI Memo References External links HAKMEM facsimile (PDF) (searchable version) Algorithms Computer science papers 1972 in Massachusetts Memoranda February 1972 events in the United States History of the Massachusetts Institute of Technology
https://en.wikipedia.org/wiki/Richard%20Greenblatt%20%28programmer%29
Richard D. Greenblatt (born December 25, 1944) is an American computer programmer. Along with Bill Gosper, he may be considered to have founded the hacker community, and holds a place of distinction in the communities of the programming language Lisp and of the Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory. Early life Greenblatt was born in Portland, Oregon on December 25, 1944. His family moved to Philadelphia, Pennsylvania when he was a child. He later moved to Columbia, Missouri with his mother and sister when his parents divorced. Career Becoming a hacker Greenblatt enrolled in MIT in the fall of 1962, and around his second term as an undergraduate student, he found his way to MIT's famous Tech Model Railroad Club. At that time, Peter Samson had written a program in Fortran for the IBM 709 series machines, to automate the tedious business of writing the intricate timetables for the Railroad Club's vast model train layout. Greenblatt felt compelled to implement a Fortran compiler for the PDP-1, which then lacked one. There was no computer time available to debug the compiler, or even to type it into the computer. Years later, elements of this compiler (combined with some ideas from fellow TMRC member Steven Piner, the author of a very early PDP-4 Fortran compiler while working for Digital Equipment Corporation) were typed in and "showed signs of life". However, the perceived need for a Fortran compiler had evaporated by then, so the compiler was not pursued further. 
This and other experiences at TMRC, especially the influence of Alan Kotok, who worked at DEC and was the junior partner of the design team for the PDP-6 computer, led Greenblatt to the AI Lab, where he proceeded to become a "hacker's hacker" noted for his programming acumen as described in Steven Levy's Hackers: Heroes of the Computer Revolution, and as acknowledged by Gerald Jay Sussman and Harold Abelson when they said they were fortunate to have been apprentice programmers at the feet of Bill Gosper and Richard Greenblatt. Indeed, he spent so much time programming the Programmed Data Processor (PDP) machines there that he failed out of MIT as a first-term junior and had to take a job at a firm, Charles Adams Associates, until the AI Lab hired him about 6 months later. Lisp Machines, Inc. In 1979, he and Tom Knight were the main designers of the MIT Lisp machine. He founded Lisp Machines, Inc. (later renamed Gigamos Systems), according to his vision of an ideal hacker-friendly computer company, as opposed to the more commercial ideals of Symbolics. Significant software developed He was the main implementor of Maclisp on the PDP-6. He wrote Mac Hack, the first computer program to play tournament-level chess and the first to compete in a human chess tournament. AI skeptic Hubert Dreyfus, who famously made the claim that computers would not be able to play high-quality chess, was beaten by the program, marking the start of "respectable
https://en.wikipedia.org/wiki/Image%20segmentation
In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions differ significantly with respect to the same characteristic(s). When applied to a stack of images, typical in medical imaging, the resulting contours after image segmentation can be used to create 3D reconstructions with the help of geometry reconstruction algorithms like marching cubes. Applications Some of the practical applications of image segmentation are: Content-based image retrieval Machine vision Medical imaging, including volume rendered images from computed tomography and magnetic resonance imaging. Locate tumors and other pathologies Measure tissue volumes Diagnosis, study of anatomical structure Surgery planning Virtual surgery simulation Intra-surgery navigation Radiotherapy Object detection Pedestrian detection Face detection Brake light detection Locate objects in satellite images (roads, forests, crops, etc.)
Recognition Tasks Face recognition Fingerprint recognition Iris recognition Prohibited item detection at airport security checkpoints Traffic control systems Video surveillance Video object co-segmentation and action localization Several general-purpose algorithms and techniques have been developed for image segmentation. To be useful, these techniques must typically be combined with domain-specific knowledge in order to effectively solve the domain's segmentation problems. Classes of segmentation techniques There are two classes of segmentation techniques. Classical computer vision approaches AI based techniques Groups of image segmentation Semantic segmentation is an approach that detects, for every pixel, the class it belongs to. For example, in a figure with many people, all the pixels belonging to persons will have the same class id and the pixels in the background will be classified as background. Instance segmentation is an approach that identifies, for every pixel, the specific instance of the object it belongs to. It detects each distinct object of interest in the image. For example, when each person in a figure is segmented as an individual object. Panoptic
https://en.wikipedia.org/wiki/Eurodicautom
Eurodicautom was the pioneering terminology database of the European Commission, created in 1975, initially for use by translators and other Commission staff. By 1980 it was consultable online within the Commission. As the European Community grew, it was expanded from six to seven, nine and finally eleven languages (plus Latin for scientific names). Public user interfaces were added later, providing the general public with free access to multilingual terminology in the fields of activity of the European Union. The students of Rennes University UFR2, LEA, technical translator and terminologist department, regularly worked on reviewing and creating entries to the existing database in several languages. In 2007, Eurodicautom was replaced by Interactive Terminology for Europe (IATE). External links IATE - European Terminology Database Government databases of the European Union Language policy of the European Union Translation databases
https://en.wikipedia.org/wiki/Centrum%20Wiskunde%20%26%20Informatica
The Centrum Wiskunde & Informatica (abbr. CWI; English: "National Research Institute for Mathematics and Computer Science") is a research centre in the field of mathematics and theoretical computer science. It is part of the institutes organization of the Dutch Research Council (NWO) and is located at the Amsterdam Science Park. This institute is famous as the creation site of the programming language Python. It was a founding member of the European Research Consortium for Informatics and Mathematics (ERCIM). Early history The institute was founded in 1946 by Johannes van der Corput, David van Dantzig, Jurjen Koksma, Hendrik Anthony Kramers, Marcel Minnaert and Jan Arnoldus Schouten. It was originally called Mathematical Centre (in Dutch: Mathematisch Centrum). One early mission was to develop mathematical prediction models to assist large Dutch engineering projects, such as the Delta Works. During this early period, the Mathematics Institute also helped with designing the wings of the Fokker F27 Friendship airplane, voted in 2006 as the most beautiful Dutch design of the 20th century. The computer science component developed soon after. Adriaan van Wijngaarden, considered the founder of computer science (or informatica) in the Netherlands, was the director of the institute for almost 20 years. Edsger Dijkstra did most of his early influential work on algorithms and formal methods at CWI. The first Dutch computers, the Electrologica X1 and Electrologica X8, were both designed at the centre, and Electrologica was created as a spinoff to manufacture the machines. In 1983, the name of the institute was changed to Centrum Wiskunde & Informatica (CWI) to reflect a governmental push for emphasizing computer science research in the Netherlands. Recent research The institute is known for its work in fields such as operations research, software engineering, information processing, and mathematical applications in life sciences and logistics.
More recent examples of research results from CWI include the development of scheduling algorithms for the Dutch railway system (the Nederlandse Spoorwegen, one of the busiest rail networks in the world) and the development of the Python programming language by Guido van Rossum. Python has played an important role in the development of the Google search platform from the beginning, and it continues to do so as the system grows and evolves. Many information retrieval techniques used by packages such as SPSS were initially developed by Data Distilleries, a CWI spinoff. Work at the institute was recognized by national or international research awards, such as the Lanchester Prize (awarded yearly by INFORMS), the Gödel Prize (awarded by ACM SIGACT) and the Spinoza Prize. Most of its senior researchers hold part-time professorships at other Dutch universities, with the institute producing over 170 full professors during the course of its history. Several CWI researchers have been recognized as members of the Royal Netherlands Academy of Arts an
https://en.wikipedia.org/wiki/Tom%20Duff
Thomas Douglas Selkirk Duff (born December 8, 1952) is a computer programmer. Early life Duff was born in Toronto, Ontario, Canada, and was named for his putative ancestor, the fifth Earl of Selkirk. He grew up in Toronto and Leaside. In 1974 he graduated from the University of Waterloo with a B.Math and, two years later, was awarded an M.Sc. from the University of Toronto. Career Duff worked at the New York Institute of Technology Computer Graphics Lab and the Mark Williams Company in Chicago before moving to Lucasfilm's Computer Research and Development Division. He and Thomas Porter, another Lucasfilm employee, developed a new approach to compositing images; their 1984 paper, "Compositing Digital Images", is "[t]he seminal work on an algebra for image compositing", according to Keith Packard, and "Porter-Duff compositing" is now a key technique in computer graphics. (See, for example, XRender and Glitz.) Duff later worked for 12 years at Bell Labs Computing Science Research Center, where he worked on computer graphics, wireless networking, and Plan 9; in the course of his work there, he authored the well-known "rc" shell for the Version 10 Unix operating system. Duff worked at Pixar Animation Studios from 1996 until his retirement in 2021. Achievements In 1995, he was awarded (with others) the Academy Scientific and Engineering Award for his work on digital image compositing. With Bill Reeves he designed the first version of Pixar's Marionette 3-D animation system, which won the same award in 1997. While working at Lucasfilm, he created Duff's device, a loop unrolling mechanism in C. On August 22, 2006, the United States Patent and Trademark Office issued a patent to Pixar for a "Shot shading method and apparatus" invented by Tom Duff and Robert L. Cook. On October 31, 2006, the United States Patent and Trademark Office issued a patent to Pixar for a "Shot rendering method and apparatus" invented by Tom Duff and Robert L. Cook. In 2015 he became the 21st awardee of the J.W.
Graham Medal, named in honor of Wes Graham, an early and influential professor of computer science at the University of Waterloo, and annually awarded to an influential alumnus of the University's Faculty of Mathematics. Appearances Tom Duff makes a cameo appearance in the Niven/Pournelle science fiction novel Footfall as a co-discoverer of the invading spaceship: "Chap named Tom Duff, a computer type, spotted it." Tom Duff appears briefly in the documentary film "Noisy People" (dir. Tim Perkis, 2006) playing the banjo. See also Mothra — a Web browser Tom Duff wrote for Plan 9 Duff's device — a C programming language trick attributed to Tom Duff List of people by Erdős number — Duff has an Erdős number of 2 List of Pixar staff List of University of Waterloo people References External links Tom Duff (short cv on family web site) iq0 (personal web site) rc - The Plan 9 Shell A Quick Introduction to the Plan 9 Panel Library, a GUI toolkit by Tom Duff Tom Duff "Viral At
https://en.wikipedia.org/wiki/Cryptosystem
In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, such as confidentiality (encryption). Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term cryptosystem is most often used when the key generation algorithm is important. For this reason, the term cryptosystem is commonly used to refer to public key techniques; however, both "cipher" and "cryptosystem" are used for symmetric key techniques. Formal definition Mathematically, a cryptosystem or encryption scheme can be defined as a tuple (P, C, K, E, D) with the following properties. P is a set called the "plaintext space". Its elements are called plaintexts. C is a set called the "ciphertext space". Its elements are called ciphertexts. K is a set called the "key space". Its elements are called keys. E = {E_k : k ∈ K} is a set of functions E_k : P → C. Its elements are called "encryption functions". D = {D_k : k ∈ K} is a set of functions D_k : C → P. Its elements are called "decryption functions". For each e ∈ K, there is d ∈ K such that D_d(E_e(p)) = p for all p ∈ P. Note: typically this definition is modified in order to distinguish an encryption scheme as being either a symmetric-key or public-key type of cryptosystem. Examples A classical example of a cryptosystem is the Caesar cipher. A more contemporary example is the RSA cryptosystem. Another example of a cryptosystem is the Advanced Encryption Standard (AES). AES is a widely used symmetric encryption algorithm that has become the standard for securing data in various applications. References Cryptography
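The Caesar cipher mentioned above can illustrate the three-algorithm structure of a cryptosystem. Below is a minimal Python sketch (the function names and the lowercase-only alphabet are illustrative assumptions, not part of any standard): one key-generation routine, one encryption function, and one decryption function, satisfying decrypt(k, encrypt(k, p)) == p for every key k and plaintext p.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def keygen():
    # Key space K = {0, ..., 25}: pick a shift uniformly at random.
    return random.randrange(26)

def encrypt(key, plaintext):
    # E_k shifts each letter forward by `key` positions (mod 26).
    return "".join(ALPHABET[(ALPHABET.index(c) + key) % 26] for c in plaintext)

def decrypt(key, ciphertext):
    # D_k shifts each letter back, undoing E_k.
    return "".join(ALPHABET[(ALPHABET.index(c) - key) % 26] for c in ciphertext)

key = keygen()
assert decrypt(key, encrypt(key, "attackatdawn")) == "attackatdawn"
```

Note that the key-generation algorithm is what makes this a cryptosystem rather than just a cipher pair: the same encrypt/decrypt code works for any of the 26 keys.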
https://en.wikipedia.org/wiki/ODCI
ODCI may refer to: Office of the Director of Central Intelligence, a historical post (1946-2005) in the United States of America Oracle Data Cartridge Interface, a component of Oracle Database
https://en.wikipedia.org/wiki/Robert%20Caesar%20Childers
Robert Caesar Childers (1838 – 25 July 1876) was a British Orientalist and the compiler of the first Pali–English dictionary to be published. He was the father of the Irish nationalist Erskine Childers and the paternal grandfather of the fourth president of Ireland, Erskine Hamilton Childers. Life Early years Childers was born in 1838 in Cantley, South Yorkshire, the son of Reverend Charles Childers, an English chaplain in Nice. In 1857, at the age of nineteen, he was admitted to Wadham College, Oxford, where he studied Hebrew. Ceylon From 1860 to 1864, Childers was employed by the civil service in Ceylon, first as private secretary to the governor, Charles Justin MacCarthy, and then as office assistant to the government agent in Kandy. During his time in Ceylon, he studied Sinhala and Pali with Ven. Yātrāmulle Śrī Dhammārāma Thera at Bentota Vanavāsa Vihāra, and established a firm friendship with Ven. Waskaḍuwe Śrī Subhūti. His time there was brought to an end when ill health forced him to return to England. Pali dictionary Upon his return to England, Childers continued his study of Pali, influenced by Reinhold Rost and Viggo Fausböll. In 1870, he published the text of the Khuddaka Pāṭha with an English translation and notes in the Journal of the Royal Asiatic Society. This was the first Pali text ever printed in England. The first volume of his Pali dictionary was published in 1872. In the autumn of that year, he was appointed sub-librarian at the India Office under Reinhold Rost, and early in the following year became the first professor of Pali and Buddhist literature at University College, London. The second and concluding volume was published in 1875. A few months later, the dictionary was awarded the Prix Volney for 1876 by the Institut de France. Family Childers was married to Anna Mary Henrietta Barton, who came from an Anglo-Irish family with an estate in Glendalough, County Wicklow. Childers and his wife had five children (two sons and three daughters).
Death Childers died from tuberculosis on 25 July 1876, at the age of thirty-eight. Thomas William Rhys Davids states in the Dictionary of National Biography that Childers died in Weybridge, but the Encyclopædia Britannica records his place of death as London. Notable works Papers Books References 1838 births 1876 deaths Academics of University College London Alumni of Wadham College, Oxford British Indologists British lexicographers British orientalists British scholars of Buddhism Burials at Highgate Cemetery Linguists of Pali Pali–English translators People from South Yorkshire
https://en.wikipedia.org/wiki/Bit%20%28disambiguation%29
A bit is a symbol used for communication or, equivalently, a unit of information storage on a computer. A bit is also used as a unit of information. Bit or BIT may also refer to: Tools and engineering Drill bit, for drilling holes Screwdriver bit Tool bit, for lathe turning Bit key, a key with a distinct part that engages the locking mechanism; also the shape described by the bitting Bit (horse), part of horse tack placed in the mouth The cutting edge of an axe The heated part of a soldering iron Arts and entertainment Unit of action or bit, in acting Bit part, a minor role Bit, material from a standup comedian's repertoire Bit, in the list of Tron characters Bit (film), a 2019 vampire film Organisations Behavioural Insights Team, a social engineering organization Bit Corporation, a video game company Bipolar Integrated Technology, a former American semiconductor company BIT Teatergarasjen, Norwegian theatre and dance company Bright Ideas Trust, British social enterprise Education Bearys Institute of Technology, a private technical co-educational college, Mangalore, India Bangalore Institute of Technology, an institution of higher learning in India Beijing Institute of Technology, a university in China Birla Institute of Technology, Mesra, an engineering institute in Ranchi, India Birla Institute of Technology, Patna, an engineering institute in Patna, India BIT International College, formerly the Bohol Institute of Technology or BIT, in Bohol, Philippines BIT Sathy or Bannari Amman Institute of Technology, India Bhilai Institute of Technology – Durg, in Central India Bangladesh International Tutorial, a school in Dhaka Science and technology Built-in test, in electronics BIT (alternative information centre), a communal information service which derived its name from the smallest unit of computer information Benzisothiazolinone, a biocide BIT Numerical Mathematics, a journal of mathematics Binary indexed tree, a data structure Other uses Bit people, an ethnic group in
Laos Bit language, spoken by the Bit people Bit (money) Bachelor of Information Technology, a degree Bilateral investment treaty Bit or Bitburger, the beer of the Bitburger brewery Bit, magazine for owners of home computers published by Ultrasoft BIT, the National Rail station code for Bicester Village railway station, Oxford, England Bit, a traditional valtelinese cheese See also BITS (disambiguation) Bitts, paired vertical posts used to secure a ship's mooring lines, ropes, hawsers, or cables Bitten (disambiguation) Language and nationality disambiguation pages
https://en.wikipedia.org/wiki/Slot%20time
Slot time is a concept in computer networking. It is at least twice the time it takes for an electronic pulse (OSI Layer 1 - Physical) to travel the length of the maximum theoretical distance between two nodes. In CSMA/CD networks such as Ethernet, the slot time is an upper limit on the acquisition of the medium, a limit on the length of a packet fragment generated by a collision, and the scheduling quantum for retransmission. Since a pulse's runtime will never exceed the slot time (the maximum theoretical time for a frame to travel the network), the network interface controller (NIC) waits a minimum of one slot time before retransmitting after a collision, in order to allow any pulse that was initiated while the waiting NIC was requested to send to reach all other nodes. By allowing the pulse to reach the waiting NIC, a local collision occurs (i.e. while still sending) rather than a late collision (after sending may or may not have ended). Because the collision occurs at the NIC (local) and not late, CSMA/CD implementations can recover by retransmitting later. Some times for Ethernet slot time include: See DIFS for information on 802.11x slot times. References Ethernet Computer network analysis
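The retransmission scheduling described above uses the slot time as its quantum, typically via truncated binary exponential backoff. The following Python sketch assumes the classic 10 Mbit/s Ethernet slot time of 512 bit times (51.2 µs); the function name and the delay-in-microseconds representation are illustrative, not taken from any standard API.

```python
import random

SLOT_TIME_US = 51.2  # 512 bit times at 10 Mbit/s (classic Ethernet)

def backoff_delay(attempt, max_attempts=16):
    """Pick a retransmission delay after the nth successive collision.

    The NIC waits k slot times, with k drawn uniformly from
    0 .. 2**min(attempt, 10) - 1 (truncated binary exponential backoff);
    after max_attempts collisions the frame is dropped.
    """
    if attempt > max_attempts:
        raise RuntimeError("excessive collisions: frame dropped")
    k = random.randrange(2 ** min(attempt, 10))
    return k * SLOT_TIME_US

# After the first collision a NIC waits 0 or 1 slot times; the window
# doubles with each further collision, capping at 1023 slots.
delays_after_first_collision = {backoff_delay(1) for _ in range(200)}
```

The doubling window is why the slot time is called the scheduling quantum: all candidate delays are integer multiples of it, so two NICs that pick different multiples are guaranteed not to collide with each other again on that retry.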
https://en.wikipedia.org/wiki/Upgrade
Upgrading is the process of replacing a product with a newer version of the same product. In computing and consumer electronics an upgrade is generally a replacement of hardware, software or firmware with a newer or better version, in order to bring the system up to date or to improve its characteristics. Computing and consumer electronics Examples of common hardware upgrades include installing additional memory (RAM), adding larger hard disks, replacing microprocessor cards or graphics cards, and installing new versions of software. Many other upgrades are possible as well. Common software upgrades include changing the version of an operating system, of an office suite, of an anti-virus program, or of various other tools. Common firmware upgrades include the updating of the iPod control menus, the Xbox 360 dashboard, or the non-volatile flash memory that contains the embedded operating system for a consumer electronics device. Users can often download software and firmware upgrades from the Internet. Often the download is a patch—it does not contain the new version of the software in its entirety, just the changes that need to be made. Software patches usually aim to improve functionality or solve problems with security. Rushed patches can cause more harm than good and are therefore sometimes regarded with skepticism for a short time after release. Patches are generally free. A software or firmware upgrade can be major or minor and the release version code-number increases accordingly. A major upgrade will change the version number, whereas a minor update will often append a ".01", ".02", ".03", etc. For example, "version 10.03" might designate the third minor upgrade of version 10. In commercial software, the minor upgrades (or updates) are generally free, but the major versions must be purchased. Companies usually make software upgrades for the following reasons: (1) to support industry regulatory requirements; (2) to access emerging technologies with new features and tools; (3) to meet the demands of changing markets; and (4) to continue to receive comprehensive product support. Risks Although developers usually produce upgrades in order to improve a product, there are risks involved—including the possibility that the upgrade will worsen the product. Upgrades of hardware involve a risk that new hardware will not be compatible with other pieces of hardware in a system. For example, an upgrade of RAM may not be compatible with existing RAM in a computer. Other hardware components may not be compatible after either an upgrade or downgrade, due to the non-availability of compatible drivers for the hardware with a specific operating system. Conversely, the same compatibility risk exists when software is upgraded or downgraded: previously functioning hardware may no longer function. Upgrades of software introduce the risk that the new version (or patch) will contain a bug, causing the program to malfunction in some way or not to funct
https://en.wikipedia.org/wiki/Address%20space
In computing, an address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity. For software programs to save and retrieve stored data, each datum must have an address where it can be located. The number of address spaces available depends on the underlying address structure, which is usually limited by the computer architecture being used. Often an address space in a system with virtual memory corresponds to a highest level translation table, e.g., a segment table in IBM System/370. Address spaces are created by combining enough uniquely identified qualifiers to make an address unambiguous within the address space. For a person's physical address, the address space would be a combination of locations, such as a neighborhood, town, city, or country. Some elements of a data address space may be the same, but if any element in the address is different, addresses in said space will reference different entities. For example, there could be multiple buildings at the same address of "32 Main Street" but in different towns, demonstrating that different towns have different, although similarly arranged, street address spaces. An address space usually provides (or allows) a partitioning to several regions according to the mathematical structure it has. In the case of total order, as for memory addresses, these are simply chunks. Like the hierarchical design of postal addresses, some nested domain hierarchies appear as a directed ordered tree, such as with the Domain Name System or a directory structure. In the Internet, the Internet Assigned Numbers Authority (IANA) allocates ranges of IP addresses to various registries so each can manage their parts of the global Internet address space. 
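The delegation of address ranges described above can be sketched with Python's standard ipaddress module. The registry names and CIDR blocks below are illustrative assumptions, not actual IANA allocations; the point is that a partitioned address space lets any address be located in the region responsible for it.

```python
import ipaddress

# Hypothetical delegations: each registry manages a disjoint region
# of the IPv4 address space (these blocks are illustrative only).
registries = {
    "registry-a": ipaddress.ip_network("10.0.0.0/8"),
    "registry-b": ipaddress.ip_network("172.16.0.0/12"),
}

def responsible_registry(address):
    """Return the registry whose block contains the address, if any."""
    addr = ipaddress.ip_address(address)
    for name, block in registries.items():
        if addr in block:  # membership test against the CIDR range
            return name
    return None

assert responsible_registry("10.1.2.3") == "registry-a"
assert responsible_registry("8.8.8.8") is None
```

Because the blocks are disjoint, every address maps to at most one registry, mirroring how a hierarchical address space makes each address unambiguous.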
Examples Uses of addresses include, but are not limited to the following: Memory addresses for main memory, memory-mapped I/O, as well as for virtual memory; Device addresses on an expansion bus; Sector addressing for disk drives; File names on a particular volume; Various kinds of network host addresses in computer networks; Uniform resource locators in the Internet. Address mapping and translation Another common feature of address spaces is mappings and translations, often forming numerous layers. This usually means that some higher-level address must be translated to lower-level ones in some way. For example, a file system on a logical disk operates using linear sector numbers, which have to be translated to absolute LBA sector addresses, in simple cases via addition of the partition's first sector address. Then, for a disk drive connected via Parallel ATA, each of them must be converted to a logical cylinder-head-sector address, due to the interface's historical shortcomings. It is converted back to LBA by the disk controller and then, finally, to physical cylinder, head and sector numbers. The Domain Name System maps its names to and from
https://en.wikipedia.org/wiki/Ken%20Forbus
Kenneth Dale "Ken" Forbus is an American computer scientist working as the Walter P. Murphy Professor of Computer Science and Professor of Education at Northwestern University. Education Forbus earned a Bachelor of Science in computer science, Master of Science in computer science, and PhD in artificial intelligence from the Massachusetts Institute of Technology. Career Forbus is notable for his work in qualitative process theory, automated sketch understanding, and automated analogical reasoning. He also developed the structure mapping engine based on the structure-mapping theory of Dedre Gentner. He is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and the Cognitive Science Society. References Year of birth missing (living people) Living people Northwestern University faculty Artificial intelligence researchers Fellows of the Association for the Advancement of Artificial Intelligence Fellows of the Cognitive Science Society Massachusetts Institute of Technology alumni
https://en.wikipedia.org/wiki/Scott%20Fahlman
Scott Elliott Fahlman (born March 21, 1948) is an American computer scientist and Professor Emeritus at Carnegie Mellon University's Language Technologies Institute and Computer Science Department. He is notable for early work on automated planning and scheduling in a blocks world, on semantic networks, on neural networks (especially the cascade correlation algorithm), on the programming languages Dylan, and Common Lisp (especially CMU Common Lisp), and he was one of the founders of Lucid Inc. During the period when it was standardized, he was recognized as "the leader of Common Lisp." From 2006 to 2015, Fahlman was engaged in developing a knowledge base named Scone, based in part on his thesis work on the NETL Semantic Network. He also is credited with coining the use of the emoticon. Life and career Fahlman was born in Medina, Ohio, the son of Lorna May (Dean) and John Emil Fahlman. He attended the Massachusetts Institute of Technology (MIT), where he received a Bachelor of Science (B.S.) and Master of Science (M.S.) degree in electrical engineering and computer science in 1973, and a Doctor of Philosophy (Ph.D.) in artificial intelligence in 1977. He has noted that his doctoral diploma says the degree was awarded for "original research as demonstrated by a thesis in the field of Artificial Intelligence" and suggested that it may be the first doctorate to use that term. He is a fellow of the American Association for Artificial Intelligence. Fahlman acted as thesis advisor for Donald Cohen, David B. McDonald, David S. Touretzky, Skef , Justin Boyan, Michael Witbrock, and Alicia Tribble Sagae. From May 1996 to July 2001, Fahlman directed the Justsystem Pittsburgh Research Center. Emoticons Fahlman was not the first to suggest the concept of the emoticon – a similar concept for a marker appeared in an article of Reader's Digest in May 1967, although that idea was never put into practice. 
In an interview printed in The New York Times in 1969, Vladimir Nabokov noted: "I often think there should exist a special typographical sign for a smile – some sort of concave mark, a supine round bracket." Fahlman is credited with originating the first smiley emoticon, which he thought would help people on a message board at Carnegie Mellon to distinguish serious posts from jokes. He proposed the use of :-) and :-( for this purpose, and the symbols caught on. The original message from which these symbols originated was posted on 19 September 1982. The message was recovered by Jeff Baird on 10 September 2002 and read: 19-Sep-82 11:44 Scott E Fahlman :-) From: Scott E Fahlman <Fahlman at Cmu-20c> I propose that the following character sequence for joke markers: :-) Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use :-( References External links 1948 births Living people American computer scientists Artificial intelligence researchers Fellows of the A
https://en.wikipedia.org/wiki/Geoffrey%20Hinton
Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. From 2013 to 2023, he divided his time working for Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence (AI) technology. In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto. With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularised the backpropagation algorithm for training multi-layer neural networks, although they were not the first to propose the approach. Hinton is viewed as a leading figure in the deep learning community. The dramatic image-recognition milestone of the AlexNet designed in collaboration with his students Alex Krizhevsky and Ilya Sutskever for the ImageNet challenge 2012 was a breakthrough in the field of computer vision. Hinton received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the "Godfathers of Deep Learning", and have continued to give public talks together. In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I." He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence. Education Hinton was educated at King's College, Cambridge. After repeatedly changing his degree between different subjects like natural sciences, history of art, and philosophy, he eventually graduated in 1970 with a bachelor of arts in experimental psychology. 
He continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins. Career and research After his PhD, Hinton worked at the University of Sussex and, after difficulty finding funding in Britain, at the University of California, San Diego, and Carnegie Mellon University. He was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London and a professor in the computer science department at the University of Toronto. He holds a Canada Research Chair in Machine Learning and is currently an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research. Hinton taught a free online course on Neural Networks on the education platform Coursera in 2012. He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, and was at that time planning to "divide his time between his university research and his work at Google". Hinton's research concerns ways of using neural networks for machin
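The backpropagation algorithm popularised by the 1986 paper mentioned above can be illustrated with a minimal sketch. The following Python fragment is purely illustrative (a one-input, one-hidden-unit network with made-up weights, not code from the paper); it computes gradients by the chain rule and checks them against finite differences:

```python
# Minimal sketch of backpropagation for a single-hidden-layer network.
# Network: one input x, one hidden unit (weight w1), one output (weight w2).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w1, w2, x):
    """Forward pass; returns the hidden and output activations."""
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    return h, y

def backprop(w1, w2, x, target):
    """Gradients of the squared error 0.5*(y-target)**2 w.r.t. w1 and w2."""
    h, y = forward(w1, w2, x)
    delta_y = (y - target) * y * (1 - y)   # error signal at the output layer
    delta_h = delta_y * w2 * h * (1 - h)   # error propagated back to the hidden layer
    return delta_h * x, delta_y * h        # dE/dw1, dE/dw2

def loss(w1, w2, x, t):
    return 0.5 * (forward(w1, w2, x)[1] - t) ** 2

# Verify the analytic gradients against central finite differences.
eps = 1e-6
g1, g2 = backprop(0.5, -0.3, 1.2, 1.0)
n1 = (loss(0.5 + eps, -0.3, 1.2, 1.0) - loss(0.5 - eps, -0.3, 1.2, 1.0)) / (2 * eps)
n2 = (loss(0.5, -0.3 + eps, 1.2, 1.0) - loss(0.5, -0.3 - eps, 1.2, 1.0)) / (2 * eps)
```

The gradient check is the standard way to confirm that a hand-derived backward pass matches the loss it claims to differentiate.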
https://en.wikipedia.org/wiki/Wizards%20Play%20Network
The Wizards Play Network (WPN) is the official sanctioning body for competitive play in Magic: The Gathering (Magic) and various other games produced by Wizards of the Coast and its subsidiaries, such as Avalon Hill. Originally, it was known as the DCI (formerly Duelists' Convocation International) but was rebranded in 2008. The WPN provided game rules, tournament operating procedures, and other materials to private tournament organizers and players. It also operated a judge certification program to provide consistent rules enforcement and promote fair play. The DCI's name was still commonly used, however, to refer to the player registration number ("DCI number") until 2020. History The DCI was formed in late 1993, and developed Magic's first tournament sanctioning and deckbuilding rules. Over the next decades, it filled several roles in Magic's organized play. It maintained policy documents as changes were needed, addressed new questions and supported new product releases. It maintained the registration systems for both players and sanctioned tournaments. It also developed and operated a certification program for tournament officials, known as Judges. Over time, the roles of the DCI were gradually absorbed by other organizations, such as Wizards of the Coast itself through its Wizards Play Network (WPN) program, or through the independent Judge Program. Part of the 2008 Wizards Play Network rebrand was "in response to feedback from organizers, particularly retailers". This also opened up Magic pre-release tournaments to participating WPN stores. Per the industry trade ICv2, the WPN was designed to include "a wider range of casual formats, including leagues, multi-player, and team play. Current sanctioned programs will remain; the new programs will be in addition to those that already exist"; previously, "the entire array of Magic organized play events was all one-on-one sanctioned tournament play". 
In May 2009, Wizards of the Coast announced that 138,500 active Magic players had registered in the new organized play program since the launch of the WPN. Also in 2009, stores at the WPN Core level or higher were allowed to release Dungeons & Dragons (D&D) products before the official publication date. In 2010, Wizards of the Coast restricted organized play not associated with a participating store; many sanctioned Magic and D&D events were now required to be hosted at or sponsored by a participating store. Wizards of the Coast began to advertise the D&D Encounters program, a D&D equivalent of Friday Night Magic, under the WPN umbrella in 2010. From 2014 to 2016, the D&D Adventurers League could only be run at participating WPN locations. Scott Thorne, for ICv2 in 2014, wrote that the WPN organized play is highly structured with stores expected, or at least encouraged, to run OP events, either provided by WotC itself (Friday Night Magic, Magic Game Days, the late Kaijudo Draft program, D&D Encounters and so forth and so on), o
https://en.wikipedia.org/wiki/PL/0
PL/0 is a programming language, intended as an educational programming language, that is similar to but much simpler than Pascal, a general-purpose programming language. It serves as an example of how to construct a compiler. It was originally introduced in the book Algorithms + Data Structures = Programs by Niklaus Wirth in 1976. It features quite limited language constructs: there are no real numbers, very few basic arithmetic operations, and no control-flow constructs other than "if" and "while" blocks. While these limitations make writing real applications in this language impractical, it helps the compiler remain compact and simple.

Features

All constants and variables used must be declared explicitly. The only data type is the integer. The only operators are arithmetic and comparison operators. There is an odd function that tests whether the argument is odd. In the original implementation presented by Wirth, there are no input and output routines; the compiler prints the new value of each variable as it changes. So the program:

var i, s;
begin
   i := 0; s := 0;
   while i < 5 do
   begin
      i := i + 1;
      s := s + i * i
   end
end.

gives the output:

0 0
1 1
2 5
3 14
4 30
5 55

However, most implementations have single input and single output routines. Flow control structures are if-then and while-do constructs and user-defined procedures. Procedures cannot accept parameters.

Grammar

The following are the syntax rules of the model language, defined in EBNF:

program = block "." ;

block =
    [ "const" ident "=" number { "," ident "=" number } ";" ]
    [ "var" ident { "," ident } ";" ]
    { "procedure" ident ";" block ";" } statement ;

statement =
    [ ident ":=" expression
    | "call" ident
    | "?" ident
    | "!"
    expression
    | "begin" statement { ";" statement } "end"
    | "if" condition "then" statement
    | "while" condition "do" statement
    ] ;

condition =
    "odd" expression
    | expression ( "=" | "#" | "<" | "<=" | ">" | ">=" ) expression ;

expression = [ "+" | "-" ] term { ( "+" | "-" ) term } ;

term = factor { ( "*" | "/" ) factor } ;

factor = ident | number | "(" expression ")" ;

It is rather easy for students to write a recursive descent parser for such a simple syntax. Therefore, the PL/0 compiler is still widely used in courses on compiler construction throughout the world. Due to the lack of features in the original specification, students usually spend most of their time extending the language and their compiler. They usually start by introducing REPEAT .. UNTIL and continue with more advanced features such as parameter passing to procedures, or data structures like arrays, strings, or floating-point numbers. Use in education The main article on compilers honours PL/0 for introducing several influential concepts (stepwise refinement, recursive descent parsing, EBNF, P-code, T-di
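As a sketch of the recursive-descent technique the text describes, the following Python fragment implements just the expression/term/factor rules of the grammar above as an evaluator. The class and helper names are illustrative, not taken from Wirth's compiler:

```python
# Recursive-descent evaluator for PL/0's expression grammar only
# (expression / term / factor); a sketch, not a full PL/0 compiler.
import re

def tokenize(src):
    # Numbers, identifiers, and single-character operators/parentheses.
    # (A sketch: unrecognized characters are silently skipped.)
    return re.findall(r"\d+|[A-Za-z]\w*|[-+*/()]", src)

class ExprParser:
    def __init__(self, src):
        self.toks = tokenize(src)
        self.i = 0

    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.i += 1
        return tok

    # expression = [ "+" | "-" ] term { ( "+" | "-" ) term } ;
    def expression(self, env):
        sign = 1
        if self.peek() in ("+", "-"):
            sign = -1 if self.eat() == "-" else 1
        value = sign * self.term(env)
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term(env)
            value = value + rhs if op == "+" else value - rhs
        return value

    # term = factor { ( "*" | "/" ) factor } ;
    def term(self, env):
        value = self.factor(env)
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor(env)
            value = value * rhs if op == "*" else value // rhs  # PL/0 has integers only
        return value

    # factor = ident | number | "(" expression ")" ;
    def factor(self, env):
        if self.peek() == "(":
            self.eat("(")
            value = self.expression(env)
            self.eat(")")
            return value
        tok = self.eat()
        return int(tok) if tok.isdigit() else env[tok]
```

Each nonterminal of the EBNF becomes one method, which is exactly the one-to-one correspondence that makes PL/0 a convenient classroom exercise; for example, `ExprParser("(1+2)*x").expression({"x": 4})` evaluates to 12.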
https://en.wikipedia.org/wiki/PAE
PAE may refer to: Science and technology Predicted Aligned Error, AlphaFold output file format for errors of protein structure prediction Physical Address Extension, an x86 computer processor feature for accessing more than 4 gigabytes of RAM Power added efficiency, a percentage that rates the efficiency of a power amplifier Post Antibiotic Effect, the period of time following removal of an antibiotic drug during which there is no growth of the target organism Port Access Entity, in the IEEE 802.1X networking environment Primary amoebic encephalitis, another name for primary amoebic meningoencephalitis Prostatic artery embolization, a treatment for benign prostatic hypertrophy. Places City of Port Adelaide Enfield, South Australia Paine Field (IATA airport code), an airport in Everett, Washington Other uses Pacific Architects and Engineers, a United States defense contractor Post-autistic economics, a criticism of neoclassical economics Provisional Admission Exercise, an interim exercise/period in Singapore education Patent assertion entity, a patent troll company See also Large Physical Address Extension (LPAE), in the ARM architecture
https://en.wikipedia.org/wiki/Observational%20astronomy
Observational astronomy is a division of astronomy that is concerned with recording data about the observable universe, in contrast with theoretical astronomy, which is mainly concerned with calculating the measurable implications of physical models. It is the practice and study of observing celestial objects with the use of telescopes and other astronomical instruments. As a science, the study of astronomy is somewhat hindered in that direct experiments with the properties of the distant universe are not possible. However, this is partly compensated by the fact that astronomers have a vast number of visible examples of stellar phenomena that can be examined. This allows for observational data to be plotted on graphs, and general trends recorded. Nearby examples of specific phenomena, such as variable stars, can then be used to infer the behavior of more distant representatives. Those distant yardsticks can then be employed to measure other phenomena in that neighborhood, including the distance to a galaxy. Galileo Galilei turned a telescope to the heavens and recorded what he saw. Since that time, observational astronomy has made steady advances with each improvement in telescope technology. Subdivisions of observational Astronomy A traditional division of observational astronomy is based on the region of the electromagnetic spectrum observed: Radio astronomy detects radiation of millimetre to decametre wavelength. The receivers are similar to those used in radio broadcast transmission but much more sensitive. See also Radio telescopes. Infrared astronomy deals with the detection and analysis of infrared radiation (this typically refers to wavelengths longer than the detection limit of silicon solid-state detectors, about 1 μm wavelength). The most common tool is the reflecting telescope, but with a detector sensitive to infrared wavelengths. 
Space telescopes are used at certain wavelengths where the atmosphere is opaque, or to eliminate noise (thermal radiation from the atmosphere). Optical astronomy is the part of astronomy that uses optical instruments (mirrors, lenses, and solid-state detectors) to observe light from near-infrared to near-ultraviolet wavelengths. Visible-light astronomy, using wavelengths detectable with the human eyes (about 400–700 nm), falls in the middle of this spectrum. High-energy astronomy includes X-ray astronomy, gamma-ray astronomy, and extreme UV astronomy. Occultation astronomy is the observation of the instant one celestial object occults or eclipses another. Multi-chord asteroid occultation observations measure the profile of the asteroid to the kilometre level. Methods In addition to using electromagnetic radiation, modern astrophysicists can also make observations using neutrinos, cosmic rays or gravitational waves. Observing a source using multiple methods is known as multi-messenger astronomy. Optical and radio astronomy can be performed with ground-based observatories, because the atmosphere i
https://en.wikipedia.org/wiki/Draco%20%28programming%20language%29
Draco was a shareware programming language created by Chris Gray. It was first developed for CP/M systems; an Amiga version followed in 1987. Although Draco, a blend of Pascal and C, was well suited for general purpose programming, its uniqueness as a language was its main weak point. Gray used Draco for the Amiga to create a port of Peter Langston's game Empire. References External links CP/M distribution Draco Author Chris Gray's compiler page covering Draco Freeware Draco-to-C converter at Aminet Source code of Draco at Aminet Algol programming language family Amiga development software CP/M software
https://en.wikipedia.org/wiki/PL-11
PL-11 is a high-level machine-oriented programming language for the PDP-11, developed by R.D. Russell of CERN in 1971. Written in Fortran IV, it is similar to PL360 and is cross-compiled on other machines. PL-11 was originally developed as part of the Omega project, a particle physics facility operational at CERN (Geneva, Switzerland) during the 1970s. The first version was written for the CII 10070, a clone of the XDS Sigma 7 built in France. Towards the end of the 1970s it was ported to the IBM 370/168, then part of CERN's computer centre. A report describing the language is available from CERN. References Procedural programming languages Programming languages created in 1971 CERN software
https://en.wikipedia.org/wiki/Film%20recorder
A film recorder is a graphical output device for transferring images to photographic film from a digital source. In a typical film recorder, an image is passed from a host computer to a mechanism to expose film through a variety of methods, historically by direct photography of a high-resolution cathode ray tube (CRT) display. The exposed film can then be developed using conventional developing techniques, and displayed with a slide or motion picture projector. The use of film recorders predates the current use of digital projectors, which eliminate the time and cost involved in the intermediate step of transferring computer images to film stock, instead directly displaying the image signal from a computer. Motion picture film scanners are the opposite of film recorders, copying content from film stock to a computer system. Film recorders can be thought of as modern versions of Kinescopes. Design Operation All film recorders typically work in the same manner. The image is fed from a host computer as a raster stream over a digital interface. A film recorder exposes film through various mechanisms; flying spot (early recorders); photographing a high resolution video monitor; electron beam recorder (Sony HDVS); a CRT scanning dot (Celco); focused beam of light from a light valve technology (LVT) recorder; a scanning laser beam (Arrilaser); or recently, full-frame LCD array chips. For color image recording on a CRT film recorder, the red, green, and blue channels are sequentially displayed on a single gray scale CRT, and exposed to the same piece of film as a multiple exposure through a filter of the appropriate color. This approach yields better resolution and color quality than possible with a tri-phosphor color CRT. The three filters are usually mounted on a motor-driven wheel. The filter wheel, as well as the camera's shutter, aperture, and film motion mechanism are usually controlled by the recorder's electronics and/or the driving software. 
CRT film recorders are further divided into analog and digital types. The analog film recorder uses the native video signal from the computer, while the digital type uses a separate display board in the computer to produce a digital signal for a display in the recorder. Digital CRT recorders provide higher resolution at a higher cost compared to analog recorders due to the additional specialized hardware. Typical resolutions for digital recorders were quoted as 2K and 4K, referring to 2048×1366 and 4096×2732 pixels, respectively, while analog recorders provided a resolution of 640×428 pixels in comparison. Higher-quality LVT film recorders use a focused beam of light to write the image directly onto film loaded on a spinning drum, one pixel at a time. In one example, the light valve was a liquid-crystal shutter, the light beam was steered with a lens, and text was printed using a pre-cut optical mask. An LVT recorder can record pixels finer than the film grain; some machines can write up to 120 lines per millimeter. The L
https://en.wikipedia.org/wiki/PC%20speaker
A PC speaker is a loudspeaker built into some IBM PC compatible computers. The first IBM Personal Computer, model 5150, employed a standard 2.25 inch magnetic driven (dynamic) speaker. More recent computers use a tiny moving-iron or piezo speaker instead. The speaker allows software and firmware to provide auditory feedback to a user, such as to report a hardware fault. A PC speaker generates waveforms using the programmable interval timer, an Intel 8253 or 8254 chip. Use cases BIOS/UEFI error codes The PC speaker is used during the power-on self-test (POST) sequence to indicate errors during the boot process. Since it is active before the graphics card, it can be used to communicate "beep codes" related to problems that prevent the much more complex initialization of the graphics card to take place. For example, the Video BIOS usually cannot activate a graphics card unless working RAM is present in the system while beeping the speaker is possible with just ROM and the CPU registers. Usually, different error codes will be signalled by specific beeping patterns, such as e.g. "one beep; pause; three beeps; pause; repeat". These patterns are specific to the BIOS/UEFI manufacturer and are usually documented in the technical manual of the motherboard. Software Several programs, including music software, operating systems or games, could play pulse-code modulation (PCM) sound through the PC speaker using special Pulse-width Modulation techniques explained later in this article. Games The PC speaker was often used in very innovative ways to create the impression of polyphonic music or sound effects within computer games of its era, such as the LucasArts series of adventure games from the mid-1980s, using swift arpeggios. Several games such as Space Hulk and Pinball Fantasies were noted for their elaborate sound effects; Space Hulk, in particular, even had full speech. 
However, because the method used to reproduce PCM was very sensitive to timing issues, these effects either caused noticeable sluggishness on slower PCs or sometimes failed on faster PCs (that is, significantly faster than the program was originally developed for). Also, it was difficult for programs to do much else, even update the display, during the playing of such sounds. Thus, when sound cards (which can output complex sounds independent from the CPU once initiated) became mainstream in the PC market after 1990, they quickly replaced the PC speaker as the preferred output device for sound effects. Most newly-released PC games stopped supporting the speaker during the second half of the 1990s. Other programs Several programs, including MP (Module Player, 1989), Scream Tracker, Fast Tracker, Impulse Tracker, and even device drivers for Linux and Microsoft Windows, could play PCM sound through the PC speaker. Modern Microsoft Windows systems have PC speaker support as a separate device with special capabilities – that is, it cannot be configured as a normal audio output dev
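The tone-generation mechanism described earlier (the 8253/8254 programmable interval timer dividing its fixed input clock by a 16-bit value) can be sketched numerically. The helper names below are illustrative; the actual port I/O on ports 0x42/0x43/0x61 is omitted because it requires privileged hardware access:

```python
# How the PC speaker's pitch is derived: the PIT divides a fixed input
# clock by a 16-bit reload value to produce a square wave (sketch only).
PIT_CLOCK_HZ = 1_193_182  # PIT input clock on IBM PC compatibles, ~1.19318 MHz

def pit_divisor(freq_hz):
    """Reload value that would be written to PIT channel 2 (port 0x42)."""
    if not 19 <= freq_hz <= PIT_CLOCK_HZ:
        # Below ~19 Hz the divisor no longer fits in 16 bits.
        raise ValueError("frequency outside the 16-bit divisor range")
    return PIT_CLOCK_HZ // freq_hz

def actual_freq(freq_hz):
    """Frequency actually produced after the integer division of the clock."""
    return PIT_CLOCK_HZ / pit_divisor(freq_hz)

# Concert A: requesting 440 Hz yields divisor 2711, i.e. roughly 440.13 Hz.
```

The integer division means the speaker can only approximate most pitches, one reason PCM playback through the speaker relied on the pulse-width modulation tricks described above rather than on simple tone programming.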
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%201996
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. The year started out with "One Sweet Day" by Mariah Carey and Boyz II Men and ended with "Un-Break My Heart" by Toni Braxton. There were nine singles that peaked atop the charts, but if "One Sweet Day" is excluded from the count (as the song had its peak in the previous year), the total would be eight, the second lowest for a single year, along with 2005 and 2015. The longest-running number-one single of 1996 is "Macarena" (Bayside Boys Mix), which stayed at the top for 14 weeks. That year, eight acts earned their first number-one song: Bone Thugs-n-Harmony, 2Pac, K-Ci & JoJo, Dr. Dre, Roger Troutman, Toni Braxton, Los del Río, and Blackstreet. Mariah Carey, Dr. Dre, and Toni Braxton were the only acts to hit number one more than once, with each of them hitting number one twice. Chart history Number-one artists See also 1996 in music List of Billboard number-one singles References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition () Joel Whitburn's Top Pop Singles 1955-2008, 12 Edition () Joel Whitburn Presents the Billboard Hot 100 Charts: The Nineties () Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine. United States Hot 100 1996
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%201995
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. Twelve singles topped the chart during the year. "On Bended Knee" by Boyz II Men began its run at the top in 1994. The longest-running number-one single of 1995 is "Fantasy" by Mariah Carey, which logged eight weeks atop the chart. The second longest-reigning number-one single is a three-way tie between "Take a Bow" by Madonna, "This Is How We Do It" by Montell Jordan, and "Waterfalls" by TLC, with seven weeks each. "One Sweet Day" by Mariah Carey and Boyz II Men held the record for the longest-running song at number one on the Billboard Hot 100 with 16 weeks, breaking the 14-week record held by Whitney Houston's "I Will Always Love You" and Boyz II Men's "I'll Make Love to You". Five of those weeks were logged in 1995 and the other 11 weeks were logged in 1996. Michael Jackson's "You Are Not Alone" became the first single in the chart's 37-year history to debut at number one, a feat recognized by Guinness World Records. In addition, "Fantasy", "Exhale (Shoop Shoop)" by Whitney Houston, and "One Sweet Day" also debuted at number one on the Billboard Hot 100. That year, five acts earned their first number-one song: TLC, Montell Jordan, Seal, Coolio, and L.V. Mariah Carey and TLC were the only acts to hit number one more than once, with two songs each. 
Chart history Number-one artists See also 1995 in music List of Billboard number-one singles References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition () Joel Whitburn's Top Pop Singles 1955-2008, 12 Edition () Joel Whitburn Presents the Billboard Hot 100 Charts: The Nineties () Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine. United States Hot 100 1995
https://en.wikipedia.org/wiki/Andrew%20Project
The Andrew Project was a distributed computing environment developed at Carnegie Mellon University beginning in 1982. It was an ambitious project for its time and resulted in an unprecedentedly vast and accessible university computing infrastructure. The project was named after Andrew Carnegie and Andrew Mellon, the founders of the institutions that eventually became Carnegie Mellon University. History The Information Technology Center, a partnership of Carnegie Mellon University (CMU) and the International Business Machines Corporation (IBM), began work on the Andrew Project in 1982. In its initial phase, the project involved both software and hardware, including wiring the campus for data and developing workstations to be distributed to students and faculty at CMU and elsewhere. The proposed "3M computer" workstations included a million pixel display and a megabyte of memory, running at a million instructions per second. Unfortunately, a cost on the order of US made the computers beyond the reach of students' budgets. The initial hardware deployment in 1985 established a number of university-owned "clusters" of public workstations in various academic buildings and dormitories. The campus was fully wired and ready for the eventual availability of inexpensive personal computers. Early development within the Information Technology Center, originally called VICE (Vast Integrated Computing Environment) and VIRTUE (Virtue Is Reached Through Unix and Emacs), focused on centralized tools, such as a file server, and workstation tools including a window manager, editor, email, and file system client code. Initially the system was prototyped on Sun Microsystems machines, and then to IBM RT PC series computers running a special IBM Academic Operating System. People involved in the project included James H. Morris, Nathaniel Borenstein, James Gosling, and David S. H. Rosenthal. 
The project was extended several times after 1985 in order to complete the software, and was renamed "Andrew" for Andrew Carnegie and Andrew Mellon, the founders of the institutions that eventually became Carnegie Mellon University. Mostly rewritten as a result of experience from early deployments, Andrew had four major software components: The Andrew Toolkit (ATK), a set of tools that allows users to create and distribute documents containing a variety of formatted and embedded objects, The Andrew Messaging System (AMS), an email and bulletin board system based on ATK, and The Andrew File System (AFS), a distributed file system emphasizing scalability for an academic and research environment. The Andrew Window Manager (WM), a tiled (non-overlapping windows) window system that allowed remote display of windows on a workstation display. It was one of the first network-oriented window managers to run on Unix as a graphical display. As part of the CMU's partnership with IBM, IBM retained the licensing rights to WM. WM was meant to be licensed under reasonable terms, which CMU t
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%201997
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. There were ten singles that peaked atop the charts, but if "Un-Break My Heart" is excluded from the count (for the song started its peak in the previous year), the total would be nine. The longest running number-one single of 1997 is "Candle in the Wind 1997"/"Something About the Way You Look Tonight", which logged 14 weeks at the top of the Billboard Hot 100. Two of those weeks were logged in 1998 while the remaining 12 were attained in 1997. With "Honey" becoming her 12th #1 single, Mariah Carey broke the record for most #1 songs by a female artist, surpassing Madonna and Whitney Houston with 11 each. That year, 7 acts earned their first number one song, such as Spice Girls, Puff Daddy, Mase, The Notorious B.I.G., Hanson, Faith Evans, and 112. The Notorious B.I.G. became the fifth artist to hit number one posthumously, after his death in March 1997. Puff Daddy, Mase, and The Notorious B.I.G. were the only artists to hit number one more than once, with Puff Daddy hitting the most with three, while Mase and The Notorious B.I.G. hit twice. Chart history Number-one artists See also 1997 in music References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition () Joel Whitburn's Top Pop Singles 1955-2008, 12 Edition () Joel Whitburn Presents the Billboard Hot 100 Charts: The Nineties () Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine. 1997 record charts 1997
https://en.wikipedia.org/wiki/List%20of%20cities%20and%20towns%20in%20Uganda
This is a list of cities and towns in Uganda. The population data are for 2014, except where otherwise indicated. The sources for the population estimates are listed in the article for each city or town. Twenty largest cities by population The following population numbers are from the August 2014 national census, as documented in the final report of November 2016, by the Uganda Bureau of Statistics (UBOS). Cities In May 2019, the Cabinet of Uganda approved the creation of 15 cities, in a phased manner, over the course of the next one to three years, as illustrated in the table below. Seven of the 15 cities began operations on 1 July 2020, as approved by the Parliament of Uganda. Cities and towns References External links Uganda: Regions, Major Cities & Towns - Population as per 2014 Census Uganda, List of cities in Uganda Cities
https://en.wikipedia.org/wiki/DECstation
The DECstation was a brand of computers used by DEC, and refers to three distinct lines of computer systems—the first released in 1978 as a word processing system, and the latter (more widely known) two both released in 1989. These comprised a range of computer workstations based on the MIPS architecture and a range of PC compatibles. The MIPS-based workstations ran ULTRIX, a DEC-proprietary version of UNIX, and early releases of OSF/1. DECstation 78 The first line of computer systems given the DECstation name were word processing systems based on the PDP-8. These systems, built into a VT52 terminal, were also known as the VT78. DECstation RISC workstations History The second (and completely unrelated) line of DECstations began with the DECstation 3100, which was released on 11 January 1989. The DECstation 3100 was the first commercially available RISC-based machine built by DEC. This line of DECstations was the fruit of an advanced development skunkworks project carried out in DEC's Palo Alto Hamilton Ave facility. Known as the PMAX project, its focus was to produce a computer systems family with the economics and performance to compete against the likes of Sun Microsystems and other RISC-based UNIX platforms of the day. The brainchild of James Billmaier, Mario Pagliaro, Armando Stettner and Joseph DiNucci, the systems family was to also employ a truly RISC-based architecture when compared to the heavier and very CISC VAX or the then still under development PRISM architectures. At the time DEC was mostly known for their CISC systems including the successful PDP-11 and VAX lines. Several architectures were considered from Intel, Motorola and others but the group quickly selected the MIPS line of microprocessors. The (early) MIPS microprocessors supported both big- and little-endian modes (configured during hardware reset). Little-endian mode was chosen both to match the byte ordering of VAX-based systems and the growing number of Intel-based PCs and computers. 
In contrast to the VAX and the later DEC Alpha architectures, the DECstation 3100 and family were specifically designed and built to run a UNIX system, ULTRIX, and no version of the VMS operating system was ever released for DECstations. One of the issues being debated at the project's inception was whether or not DEC could sustain, grow, and compete with an architecture it did not invent or own (manage). As the core advocates later left the company, the MIPS-based line of computers was shut down in favor of the Alpha-based computers, a DEC invented and owned architecture, descended from the PRISM development work. The first generation of commercially marketed DEC Alpha systems, the DEC 3000 AXP series, were similar in some respects to contemporaneous MIPS-based DECstations, which were sold alongside the Alpha systems as the DECstation line was gradually phased out. Both used the TURBOchannel expansion bus for video and network cards, as well as being sold with the same TURBOc
https://en.wikipedia.org/wiki/Inferno%20%28operating%20system%29
Inferno is a distributed operating system started at Bell Labs and now developed and maintained by Vita Nuova Holdings as free software under the MIT License. Inferno was based on the experience gained with Plan 9 from Bell Labs, and the further research of Bell Labs into operating systems, languages, on-the-fly compilers, graphics, security, networking and portability. The name of the operating system, many of its associated programs, and that of the current company, were inspired by Dante Alighieri's Divine Comedy. In Italian, Inferno means "hell", of which there are nine circles in Dante's Divine Comedy. Design principles Inferno was created in 1995 by members of Bell Labs' Computer Science Research division to bring ideas derived from their previous operating system, Plan 9 from Bell Labs, to a wider range of devices and networks. Inferno is a distributed operating system based on three basic principles: Resources as files: all resources are represented as files within a hierarchical file system Namespaces: a program's view of the network is a single, coherent namespace that appears as a hierarchical file system but may represent physically separated (locally or remotely) resources Standard communication protocol: a standard protocol, called Styx, is used to access all resources, both local and remote To handle the diversity of network environments it was intended to be used in, the designers decided a virtual machine (VM) was a necessary component of the system. This is the same conclusion of the Oak project that became Java, but arrived at independently. The Dis virtual machine is a register machine intended to closely match the architecture it runs on, in contrast to the stack machine of the Java virtual machine. An advantage of this approach is the relative simplicity of creating a just-in-time compiler for new architectures. 
The virtual machine provides memory management designed to be efficient on devices with as little as 1 MiB of memory and without memory-mapping hardware. Its garbage collector is a hybrid of reference counting and a real-time coloring collector that gathers cyclic data. The Inferno kernel contains the virtual machine, on-the-fly compiler, scheduler, devices, protocol stacks, the name space evaluator for each process's file name space, and the root of the file system hierarchy. The kernel also includes some built-in modules that provide interfaces to the virtual operating system, such as system calls, graphics, security, and math modules. The Bell Labs Technical Journal paper introducing Inferno listed several dimensions of portability and versatility provided by the OS: Portability across processors: it currently runs on ARM, SGI MIPS, HP PA-RISC, IBM PowerPC, Sun SPARC, and Intel x86 architectures and is readily portable to others. Portability across environments: it runs as a stand-alone operating system on small terminals, and also as a user application under Bell Labs' Plan 9, MS Windows NT, Windows 95,
https://en.wikipedia.org/wiki/Linksys
Linksys Holdings, Inc., is an American brand of data networking hardware products mainly sold to home users and small businesses. It was founded in 1988 by the couple Victor and Janie Tsao, both Taiwanese immigrants to the United States. Linksys products include Wi-Fi routers, mesh Wi-Fi systems, Wi-Fi extenders, access points, and network switches. It is headquartered in Irvine, California. Linksys products are sold direct-to-consumer from its website, through online retailers and marketplaces, as well as off-the-shelf in consumer electronics and big-box retail stores. As of 2020, Linksys products were sold through retail locations and value-added resellers in 64 countries, and Linksys was the first router company to ship 100 million products. History In 1988, spouses Janie and Victor Tsao founded DEW International, later renamed Linksys, in the garage of their Irvine, California home. The Tsaos were immigrants from Taiwan who held second jobs as consultants specializing in pairing American technology vendors with manufacturers in Taiwan. The founders used Taiwanese manufacturing to achieve the company's early success. The company's first products were printer sharers that connected multiple PCs to printers. The company expanded into Ethernet hubs, network cards, and cords. In 1992, the Tsaos began running Linksys full time and moved the company and its growing staff to a formal office. By 1994, it had grown to 55 employees with annual revenues of $6.5 million. Linksys received a major boost in 1995, when Microsoft released Windows 95 with built-in networking functions that expanded the market for its products. Linksys established its first U.S. retail channels with Fry's Electronics (1995) and Best Buy (1996). In the late 1990s, Linksys released the first affordable multiport router, popularizing Linksys as a home networking brand. By 2003, when the company was acquired by Cisco, it had 305 employees and revenues of more than $500 million. 
Cisco expanded the company's product line, acquiring VoIP maker Sipura Technology in 2005 and selling its products under Linksys Voice System or later Linksys Business Series brands. In July 2008, Cisco acquired Seattle-based Pure Networks, a vendor of home networking-management software. Cisco announced in January 2013 that it would sell its home networking division and Linksys to Belkin, giving Belkin 30% of the home router market. In 2018, Belkin and its subsidiaries, including Linksys, were acquired by Foxconn, a Taiwanese multinational electronics firm and the largest provider of electronics manufacturing services, for $866 million. On June 4, 2021, Harry Dewhirst was appointed as CEO. In September, cybersecurity firm Fortinet made a $75 million investment in Linksys. Their focus is on the security of home networks for remote workplaces. On September 24, 2021, Fortinet invested an additional $85 million in cash for shares of Series A Preferred Stock of Linksys. Mark Sanders became CFO in October. Produ
https://en.wikipedia.org/wiki/Caddo
The Caddo people comprise the Caddo Nation of Oklahoma, a federally recognized tribe headquartered in Binger, Oklahoma. They speak the Caddo language. The Caddo Confederacy was a network of Indigenous peoples of the Southeastern Woodlands, who historically inhabited much of what is now northeast Texas, west Louisiana, southwestern Arkansas, and southeastern Oklahoma. Prior to European contact, they were the Caddoan Mississippian culture, who constructed huge earthwork mounds at several sites in this territory, flourishing about 800 to 1400 CE. In the early 19th century, Caddo people were forced to a reservation in Texas. In 1859, they were removed to Indian Territory. Government and civic institutions The Caddo Nation of Oklahoma was previously known as the Caddo Tribe of Oklahoma. The tribal constitution provides for election of an eight-person council, with a chairperson. Some 6,000 people are enrolled in the nation, with 3,044 living within the state of Oklahoma. Individuals are required to document at least 1/16 Caddo ancestry in order to enroll as citizens. In July 2016, Tamara M. Francis was re-elected as the Chairman of the Caddo Nation. Chairman Tamara Francis is the daughter of the first elected female chairman, Mary Pat Francis. She was the fourth elected female leader of the Caddo Nation. As of 2021 the tribal council consists of: Chairman: Bobby Gonzalez Vice-chairman: Kelly Howell Factor Secretary: Jennifer Reeder Treasurer: Verna Castillo Anadarko Representative: Phillip Martin Binger Representative: Travis Threlkeld Fort Cobb Representative: Arlene Kay O'Neal Oklahoma City Representative: Jennifer Wilson. The tribe has several programs to invigorate Caddo culture. It sponsors a summer culture camp for children. The Hasinai Society and Caddo Culture Club both teach and perform Caddo songs and dances to keep the culture alive and pass it on to the next generations. 
The Kiwat Hasinay Foundation is dedicated to preserving and increasing use of the Caddo language. Precontact history Archaeology The Caddo are thought to be an extension of Woodland period peoples, the Fourche Maline and Mossy Grove cultures, whose members were living in the area of Arkansas, Louisiana, Oklahoma, and Texas between 200 BCE and 800 CE. The Wichita and Pawnee are also related to the Caddo, since both tribes historically spoke Caddoan languages. By 800 CE, this society had begun to coalesce into the Caddoan Mississippian culture. Some villages began to gain prominence as ritual centers. Leaders directed the construction of major earthworks known as platform mounds, which served as temple mounds and platforms for residences of the elite. The flat-topped mounds were arranged around leveled, large, open plazas, which were usually kept swept clean and were often used for ceremonial occasions. As complex religious and social ideas developed, some people and family lineages gained prominence over others. By 1000 CE, a society that is defined by
https://en.wikipedia.org/wiki/Microsoft%20Exchange%20Server
Microsoft Exchange Server is a mail server and calendaring server developed by Microsoft. It runs exclusively on Windows Server operating systems. The first version was called Exchange Server 4.0, to position it as the successor to the related Microsoft Mail 3.5. Exchange initially used X.400 messaging and an X.500-based directory service, but switched to Active Directory later. Until version 5.0, it came bundled with an email client called Microsoft Exchange Client. This was discontinued in favor of Microsoft Outlook. Exchange Server primarily uses a proprietary protocol called MAPI to talk to email clients, but subsequently added support for POP3, IMAP, and EAS. The standard SMTP protocol is used to communicate to other Internet mail servers. Exchange Server is licensed both as on-premises software and software as a service (SaaS). In the on-premises form, customers purchase client access licenses (CALs); as SaaS, Microsoft charges a monthly service fee instead. History Microsoft had sold a number of simpler email products before, but the first release of Exchange (Exchange Server 4.0 in April 1996) was an entirely new X.400-based client–server groupware system with a single database store, which also supported X.500 directory services. The directory used by Exchange Server eventually became Microsoft's Active Directory service, an LDAP-compliant directory service which was integrated into Windows 2000 as the foundation of Windows Server domains. As of 2020, there have been ten releases. Current version The current version, Exchange Server 2019, was released in October 2018. Unlike other Office Server 2019 products such as SharePoint and Skype for Business, Exchange Server 2019 could only be deployed on Windows Server 2019 when it was released. Since the 2022 H1 cumulative update, Exchange 2019 has been supported on Windows Server 2022. One of the key features of the new release is that Exchange Server can be deployed onto Windows Server Core for the first time. 
Additionally, Microsoft has retired the Unified Messaging feature of Exchange, meaning that Skype for Business on-premises customers will have to use alternative solutions for voicemail, such as Azure cloud voicemail. New features Security: support for installing Exchange Server 2019 onto Windows Server Core Performance: supports running Exchange Server with up to 48 processor cores and 256 GB of RAM Removed features Unified Messaging Clustering and high availability Exchange Server Enterprise Edition supports clustering of up to 4 nodes when using Windows 2000 Server, and up to 8 nodes with Windows Server 2003. Exchange Server 2003 also introduced active-active clustering, but for two-node clusters only. In this setup, both servers in the cluster are allowed to be active simultaneously. This is opposed to Exchange's more common active-passive mode in which the failover servers in any cluster node cannot be used at all while their corresponding home servers are active. They must wait, inactive, for the h
https://en.wikipedia.org/wiki/Apache%20Groovy
Apache Groovy is a Java-syntax-compatible object-oriented programming language for the Java platform. It is both a static and dynamic language with features similar to those of Python, Ruby, and Smalltalk. It can be used as both a programming language and a scripting language for the Java Platform, is compiled to Java virtual machine (JVM) bytecode, and interoperates seamlessly with other Java code and libraries. Groovy uses a curly-bracket syntax similar to Java's. Groovy supports closures, multiline strings, and expressions embedded in strings. Much of Groovy's power lies in its AST transformations, triggered through annotations. Groovy 1.0 was released on January 2, 2007, and Groovy 2.0 in July 2012. Since version 2, Groovy can be compiled statically, offering type inference and performance near that of Java. Groovy 2.4 was the last major release under Pivotal Software's sponsorship, which ended in March 2015. Groovy has since changed its governance structure to a Project Management Committee in the Apache Software Foundation. History James Strachan first talked about the development of Groovy on his blog in August 2003. In March 2004, Groovy was submitted to the JCP as JSR 241 and accepted by ballot. Several versions were released between 2004 and 2006. After the Java Community Process (JCP) standardization effort began, the version numbering changed, and a version called "1.0" was released on January 2, 2007. After various betas and release candidates numbered 1.1, on December 7, 2007, Groovy 1.1 Final was released and immediately renumbered as Groovy 1.5 to reflect the many changes made. In 2007, Groovy won first prize at the JAX 2007 innovation award. In 2008, Grails, a Groovy web framework, won second prize at the JAX 2008 innovation award. In November 2008, SpringSource acquired the Groovy and Grails company (G2One). In August 2009 VMware acquired SpringSource. 
In April 2012, after eight years of inactivity, the Spec Lead changed the status of JSR 241 to dormant. Strachan had left the project silently a year before the Groovy 1.0 release in 2007. In October 2016, Strachan stated "I still love groovy (jenkins pipelines are so groovy!), java, go, typescript and kotlin". On July 2, 2012, Groovy 2.0 was released, which, among other new features, added static compiling and static type checking. When the Pivotal Software joint venture was spun off by EMC Corporation (EMC) and VMware in April 2013, Groovy and Grails formed part of its product portfolio. Pivotal ceased sponsoring Groovy and Grails from April 2015. That same month, Groovy changed its governance structure from a Codehaus repository to a Project Management Committee (PMC) in the Apache Software Foundation via its incubator. Groovy graduated from Apache's incubator and became a top-level project in November 2015. Features Most valid Java files are also valid Groovy files. Although the two languages are similar, Groovy code can be more compact, because it does not need all the e
https://en.wikipedia.org/wiki/Clarion%20%28programming%20language%29
Clarion is a commercial, proprietary, multi-paradigm fourth-generation programming language (4GL) and integrated development environment (IDE) from SoftVelocity, used to program database applications. It is compatible with indexed sequential access method (ISAM), Structured Query Language (SQL), and ActiveX Data Objects (ADO) data access methods. It reads and writes several flat-file desktop database formats, including ASCII, comma-separated values (CSV), DOS (binary), FoxPro, Clipper, and dBase, as well as XML, and it can access some relational databases via ODBC, and Microsoft SQL Server, Sybase SQL Anywhere, and Oracle Database through accelerated native database drivers. Clarion can output to HTML, XML, plain text, and Portable Document Format (PDF), among others. The Clarion development environment (IDE) is itself written in the Clarion language. The IDE provides code generation facilities via a system of templates which allow programmers to describe the program at an abstract level higher than code statements. The generator then turns this higher-level description into code, which in turn is compiled and linked using a normal compiler and linker. This generation layer is sometimes referred to as 4GL programming. Using the generation layer is optional. It is possible to create programs fully at the code level (the so-called 3GL level), bypassing all code generation facilities. If the templates are used to generate code, then programmers are able to inject their own code into the generated code to alter, or extend, the functions offered by the template layer. This process of embedding code can be done while viewing the surrounding generated code. This mixing of template code and generated code allows the template settings to be updated, and the code regenerated, without loss of the embedded code. The templates (from which the code is generated) are provided in source form, and developers are free to create their own templates. 
Many templates have been written by various developers: some are offered as commercial add-ons, and some are free. Three main Clarion products exist: Professional Edition, Enterprise Edition, and .NET. History The first release of the Clarion language was a DOS product named Clarion 1.0, first released in April 1986. Clarion was created by Bruce Barrington, one of the founders of healthcare firm "HBO & Company" (later acquired by McKesson Corporation), and a small team of developers. Barrington's goal was to create a language that would be compact and expressive, and would maximize the use of the memory-mapped screen of the IBM PC by creating a screen designer. Version 1 produced pseudocode; the initial release included a screen designer, an interpreter, an editor, and a debugger. Initially it supported databases composed of DAT files, Clarion's proprietary ISAM file format. Bruce Barrington formed Barrington Systems and released version 1.0. Clarion 1.0 required use of a dongle, at a time when industry sent
https://en.wikipedia.org/wiki/NESN
New England Sports Network, popularly known as NESN, is an American regional sports cable and satellite television network owned by a joint venture of Fenway Sports Group (which owns a controlling 80% interest, and is the owner of the Boston Red Sox, Liverpool Football Club, and the Pittsburgh Penguins) and Delaware North (which owns the remaining 20% interest in the network as well as the Boston Bruins and TD Garden, home of the Bruins and the Boston Celtics). Headquartered in Watertown, Massachusetts, the network is primarily carried on cable providers throughout New England (except in Fairfield County, Connecticut, which is part of the greater New York City media market). NESN is also distributed nationally on satellite provider DirecTV, and as NESN National via select cable providers. NESN is the primary broadcaster of the Boston Red Sox and the Boston Bruins, serving as the exclusive home for all games that are not televised by a national network. NESN also carries minor league baseball games, regional college sports events, and various outdoor and sports talk shows. The network has become synonymous with local sports in New England, and is considered a local institution. History The New England Sports Network launched on April 4, 1984, originally operating as a joint venture of the Boston Red Sox, Boston Bruins, and Storer Communications (the owner of WSBK-TV). The new service, which featured 90 Red Sox and 40 Bruins games during its first year, was sold as a premium channel with prices ranging from $7.50 to $10 per month. A number of these games had previously aired on WSBK. In 1996, NESN became the New England affiliate of Fox Sports Net (FSN), carrying the network's national sports and magazine programs; this lasted until 1999. In January 1998, then-FSN parent News Corporation acquired partial ownership of Cablevision-owned SportsChannel New England (and its sister networks), turning it into Fox Sports Net New England (now NBC Sports Boston). 
However, despite the name change Fox Sports New England was blocked from carrying any FSN programming due to NESN's existing affiliation agreement. Fox had hoped to negotiate an early termination of that agreement, but had to wait until it expired on December 31, 1999. NESN converted into a basic cable service in 2001, a model that has since been copied by other companies through their respective launches of new regional sports networks as well as similar conversions (many of which predate NESN's transition) of those that began as pay services. Afterwards, until early 2006, NESN carried simulcasts of ESPNews during the afternoon and overnight hours. NESN has carried regional Atlantic Coast Conference college basketball games since Boston College joined the conference, including games distributed for national broadcast for and by Fox Sports Networks. In September 2003, NESN began producing Red Sox games in high definition. In April 2006, NESN launched a full-time HD feed, after having re-located i
https://en.wikipedia.org/wiki/Richard%20Wallace%20%28scientist%29
Richard S. Wallace is an American author of AIML and Botmaster of A.L.I.C.E. (Artificial Linguistic Internet Computer Entity). He is also the founder of the A.L.I.C.E Artificial Intelligence Foundation. Dr. Wallace's work has appeared in the New York Times, WIRED, CNN, ZDTV and in numerous foreign language publications across Asia, Latin America and Europe. Wallace began work on A.L.I.C.E. in 1995, and the project has gained contributions from over 500 developers from around the world. A.L.I.C.E. won the Loebner Prize in 2000, 2001, and 2004. In 2002, Wallace began a collaboration with Franz, Inc. which resulted in Pandorabots, an AIML server and interpreter implemented in Common Lisp. Wallace then became the Chief Science Officer of Pandorabots, Inc. Richard Wallace was born in Portland, Maine in 1960. He earned his Ph.D. in computer science from Carnegie Mellon University in 1989. References External links
https://en.wikipedia.org/wiki/Broad%20Street%20railway%20station%20%28England%29
Broad Street was a major rail terminal in the City of London, adjacent to Liverpool Street station. It served as the main terminus of the North London Railway (NLR) network, running from 1865 to 1986. During its lifetime, it catered for mainly local suburban services around London, and over time struggled to compete with other modes of transport, leading to its closure. The station was built as a joint venture by the NLR and the London and North Western Railway (LNWR) in order to have a station serving freight closer to the City. It was immediately successful for both goods and passenger services, and saw a significant increase in NLR traffic. Usage peaked in the early 20th century, after which it suffered from competition from London trams, buses and, especially, the London Underground network. Patronage gradually fell and services decreased, while the building became increasingly dilapidated. Freight services were withdrawn towards the end of the 1960s and the station closed in 1986. The station building was replaced by Broadgate, an office and retail complex, while part of the connecting line to the station was reinstated in 2010 as part of the London Overground. Location The station was sited at the junction of Broad Street and Liverpool Street in the Broad Street ward of the City of London, with Liverpool Street station immediately to the east. It was near Liverpool Street and tube stations. History The station was proposed by the North London Railway (NLR). The line originally opened as the East & West India Docks & Birmingham Junction Railway in 1850, in order to transport freight between the London and Birmingham Railway and the London Docklands. By the time it had been renamed to the NLR in 1853, passenger traffic had grown in equal importance, so it was decided to build a station with direct access to the City. 
Opening The London and North Western Railway (LNWR) was also keen to have a goods depot in the City, and agreed to help the NLR fund the new extension. The connecting line to Broad Street (via the Kingsland Viaduct) was authorised by the North London Railway Act of 22 July 1861. The work involved an extension from Kingsland down towards Broad Street, and required the demolition of numerous properties in Shoreditch and Haggerston. During construction of the terminus, a large burial ground was unearthed, exposing human remains. This may have been a result of the plague, or burial pits from Bethlehem Hospital. The overall cost of the station and extension was £1.2m. The station was opened on 1 November 1865 as the terminus of a network of commuter railways linking east and west London via the looping route of the NLR, originally with seven platforms and three approach tracks. The main building was designed by William Baker and constructed in an Italianate style with a Second Empire style roof. The frontage was long and wide, constructed from white Suffolk brick and Peterhead granite, with a clock tower as a centre
https://en.wikipedia.org/wiki/E-FIT
Electronic Facial Identification Technique (E-FIT, e-fit, efit) is a computer-based method of producing facial composites of wanted criminals, based on eyewitness descriptions. Uses The system first appeared in the late 1980s, programmed by John Platten and has since been progressively refined by Platten and latterly by Dr Matthew Maylin. E-FIT has developed a reputation as a highly reliable and flexible system for feature-based composite construction. Customers for this system exist in over 30 countries around the world. These include the Metropolitan Police Service, the Bureau of Alcohol, Tobacco and Firearms (ATF), the New York Police Department, the Stockholm Police, the Royal Canadian Mounted Police and the Jamaica Constabulary Force. E-FIT is used both for minor and serious crimes. In the United Kingdom, it is an ever-present feature on the BBC's Crimewatch television programme. The system is available in Spanish, German, English (both US and UK), French, Italian, Portuguese and Swedish. The widespread use of the original E-FIT approach is gradually being superseded by a new version of the program called EFIT-V. EFIT-V is a full-colour, hybrid system that offers increased flexibility and speed, allowing the face to be constructed using both evolutionary and systematic construction techniques. Efficacy The E-FIT, Pro-fit, and similar systems used in the UK have been subjected to a number of formal academic examinations. In these studies, volunteers were able to identify the person in the composite about 20% of the time if the composite was prepared immediately after viewing the subject. However, one study found that if witnesses were required to wait two days before constructing a composite, which matches real-life applications more closely, success rates fell to between 3 and 8 per cent. References External links VisionMetric Ltd
https://en.wikipedia.org/wiki/Gerard%20Salton
Gerard A. "Gerry" Salton (8 March 1927 – 28 August 1995) was a professor of Computer Science at Cornell University. Salton was perhaps the leading computer scientist working in the field of information retrieval during his time, and "the father of Information Retrieval". His group at Cornell developed the SMART Information Retrieval System, which he initiated when he was at Harvard. It was the very first system to use the now popular vector space model for Information Retrieval. Salton was born Gerhard Anton Sahlmann on March 8, 1927, in Nuremberg, Germany. He received a Bachelor's (1950) and Master's (1952) degree in mathematics from Brooklyn College, and a Ph.D. from Harvard in applied mathematics in 1958, the last of Howard Aiken's doctoral students, and taught there until 1965, when he joined Cornell University and co-founded its department of Computer Science. Salton was perhaps most well known for developing the now widely used vector space model for Information Retrieval. In this model, both documents and queries are represented as vectors of term counts, and the similarity between a document and a query is given by the cosine between the query vector and the document vector. He also introduced TF-IDF, or term frequency–inverse document frequency, a weighting scheme in which the score of a term in a document increases with the term's frequency in that document and decreases with the number of documents in the collection that contain the term. (The concept of inverse document frequency, a measure of term specificity, had been introduced in 1972 by Karen Spärck Jones.) Later in life, he became interested in automatic text summarization and analysis, as well as automatic hypertext generation. He published over 150 research articles and 5 books during his life. Salton was editor-in-chief of the Communications of the ACM and the Journal of the ACM, and chaired the Special Interest Group on Information Retrieval (SIGIR). 
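The vector space model and TF-IDF weighting described above can be illustrated with a short Python sketch. The toy corpus, query, and function names here are our own illustrative choices, not taken from Salton's SMART system:

```python
import math
from collections import Counter

# Toy corpus: each document is represented as a bag (multiset) of terms.
corpus = [Counter(text.split()) for text in [
    "information retrieval systems",
    "vector space model for retrieval",
    "cooking with garlic",
]]

def tfidf_vector(doc, corpus):
    """Weight each term by term frequency times inverse document frequency.

    Assumes every term of `doc` occurs in at least one corpus document.
    """
    n = len(corpus)
    return {t: tf * math.log(n / sum(1 for d in corpus if t in d))
            for t, tf in doc.items()}

def cosine(u, v):
    """Cosine of the angle between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values()))
    denom = norm(u) * norm(v)
    return dot / denom if denom else 0.0

# Rank all documents against the query by cosine similarity.
query = Counter("retrieval model".split())
q_vec = tfidf_vector(query, corpus)
scores = [cosine(q_vec, tfidf_vector(d, corpus)) for d in corpus]
```

The second document shares both query terms and ranks first, while the unrelated third document scores zero, which is the behaviour the cosine measure is meant to capture.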
He was an associate editor of the ACM Transactions on Information Systems. He was an ACM Fellow (elected 1995), received an Award of Merit from the American Society for Information Science (1989), and was the first recipient of the SIGIR Award for outstanding contributions to the study of Information Retrieval (1983), now called the Gerard Salton Award. Bibliography Salton, Automatic Information Organization and Retrieval, 1968. --- and Michael J. McGill, Introduction to Modern Information Retrieval, 1983. G. Salton, A. Wong, and C. S. Yang (1975), "A Vector Space Model for Automatic Indexing," Communications of the ACM, vol. 18, nr. 11, pages 613–620. (Article in which the vector space model was presented) See also List of pioneers in computer science References External links In Memoriam Fractals of Change: Search Down Memory Lane The Most Influential Paper Gerard Salton Never Wrote - This 2004 Library Trends paper by David Dubin serves as a historical review of the metamorphosis of the term discrimination value model (
https://en.wikipedia.org/wiki/Pizza%20%28programming%20language%29
Pizza is an open-source superset of Java 1.4, created prior to the introduction of generics for the Java programming language. In addition to its own solution for adding generics to the language, Pizza also added function pointers and algebraic types with case classes and pattern matching. In August 2001, the developers made a compiler capable of working with Java. Most Pizza applications can run in a Java environment, but certain cases will cause problems. Pizza's last version was released in January 2002. Its main developers turned their focus afterwards to the Generic Java project: another attempt to add generics to Java that was officially adopted as of version 5 of the language. The pattern matching and other functional programming-like features have been further developed in the Scala programming language. Martin Odersky remarked, "we wanted to integrate the functional and object-oriented parts in a cleaner way than what we were able to achieve before with the Pizza language. [...] In Pizza we did a clunkier attempt, and in Scala I think we achieved a much smoother integration between the two." Example The following program sums the integers read from standard input, one per line, using Pizza's function-literal (fun) syntax:

public final class Main {
    public static void main(String[] args) {
        System.out.println(
            new Lines(new DataInputStream(System.in))
                .takeWhile(nonEmpty)
                .map(fun(String s) -> int { return Integer.parseInt(s); })
                .reduceLeft(0, fun(int x, int y) -> int { return x + y; }));
    }
}

References External links
https://en.wikipedia.org/wiki/Gibbs%20sampling
In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations that are approximately drawn from a specified multivariate probability distribution, when direct sampling is difficult. This sequence can be used to approximate the joint distribution (e.g., to generate a histogram of the distribution); to approximate the marginal distribution of one of the variables, or some subset of the variables (for example, the unknown parameters or latent variables); or to compute an integral (such as the expected value of one of the variables). Typically, some of the variables correspond to observations whose values are known, and hence do not need to be sampled. Gibbs sampling is commonly used as a means of statistical inference, especially Bayesian inference. It is a randomized algorithm (i.e. an algorithm that makes use of random numbers), and is an alternative to deterministic algorithms for statistical inference such as the expectation-maximization algorithm (EM). As with other MCMC algorithms, Gibbs sampling generates a Markov chain of samples, each of which is correlated with nearby samples. As a result, care must be taken if independent samples are desired. Generally, samples from the beginning of the chain (the burn-in period) may not accurately represent the desired distribution and are usually discarded. Introduction Gibbs sampling is named after the physicist Josiah Willard Gibbs, in reference to an analogy between the sampling algorithm and statistical physics. The algorithm was described by brothers Stuart and Donald Geman in 1984, some eight decades after the death of Gibbs, and became popularized in the statistics community for calculating marginal probability distributions, especially the posterior distribution. In its basic version, Gibbs sampling is a special case of the Metropolis–Hastings algorithm. 
However, in its extended versions (see below), it can be considered a general framework for sampling from a large set of variables by sampling each variable (or in some cases, each group of variables) in turn, and can incorporate the Metropolis–Hastings algorithm (or methods such as slice sampling) to implement one or more of the sampling steps. Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least, easier) to sample from. The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution. Gibbs sampling is particularly well-adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of con
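The variable-at-a-time update described above can be illustrated with the standard two-variable textbook case (an illustrative example, not from the source): for a standard bivariate normal with correlation ρ, each full conditional is itself normal, x | y ~ N(ρy, 1 − ρ²) and symmetrically for y | x, so each Gibbs step reduces to a single univariate normal draw. A minimal C++ sketch with invented names, including the burn-in discard mentioned above:

```cpp
#include <cmath>
#include <random>
#include <utility>
#include <vector>

// Gibbs sampler for a standard bivariate normal with correlation rho.
// Each coordinate is resampled in turn from its conditional distribution
// given the current value of the other; the first `burn_in` sweeps are
// discarded because early states may not represent the target distribution.
std::vector<std::pair<double, double>> gibbs_bivariate_normal(
        double rho, int n_samples, int burn_in, unsigned seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> std_normal(0.0, 1.0);
    const double sd = std::sqrt(1.0 - rho * rho);  // conditional std. dev.
    double x = 0.0, y = 0.0;                       // arbitrary starting state
    std::vector<std::pair<double, double>> out;
    out.reserve(n_samples);
    for (int i = 0; i < burn_in + n_samples; ++i) {
        x = rho * y + sd * std_normal(gen);        // draw x | y
        y = rho * x + sd * std_normal(gen);        // draw y | x
        if (i >= burn_in) out.emplace_back(x, y);  // keep post-burn-in states
    }
    return out;
}
```

With enough retained samples, the empirical correlation of the chain's output approaches ρ, even though no joint draw was ever made directly.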
https://en.wikipedia.org/wiki/Ruth%20Aylett
Ruth S. Aylett (born 1951) is a British author, computer scientist, professor, poet, and political activist. She is a professor of computer science at Heriot-Watt University in Edinburgh, where she specialises in affective computing, social computing, software agents, and human–robot interaction. Research Aylett's research involves affective computing, social computing, software agents, and human–robot interaction. She is the leader of Socially Competent Robots (SoCoRo), a project of the Engineering and Physical Sciences Research Council that studies whether robots can assist autistic people in learning to recognize facial expressions and other social cues. She has also studied the use of "emotionally literate" robots for tutoring schoolchildren, developed interactive role-playing software intended to combat bullying, and performed with a robot poet named Sarah the Poetic Robot as part of the Edinburgh Free Fringe. Education and career Aylett earned a degree in mathematical economics at the London School of Economics, and began her career in computing by working in technical support at International Computers Limited for three years, before moving to the University of Sheffield to work in computing and robotics. She was a lecturer at Sheffield Hallam University for five years, and then moved to the University of Edinburgh in 1989, as part of the Artificial Intelligence Applications Institute there. She moved again to the University of Salford in 1992, first as part of the IT Institute there and later in the Centre for Virtual Environments. There, she became Professor of Intelligent Virtual Environments in 2000. In 2004, she moved to her present position at Heriot-Watt University. Publications Aylett's book Robots: Bringing Intelligent Machines to Life? (2002) is a historical exploration of robots, the history of robotics, and research problems confronting roboticists. 
Aimed at high-school age students, it consists of a sequence of two-page illustrated spreads on each of its topics. She is the coauthor, with Judy Robertson, Lisa Gjedde, Rose Luckin and Paul Brna, of the self-published book Inside Stories: A Narrative Journey (2008), on the use of story-telling in education and the use of technology to assist in storytelling. Aylett is also the coauthor of the poetry pamphlet Handfast: Poetry Duets with Beth McDonough (Mothers Milk, 2016). It has a challenge-response format, in which a poem by one of the authors inspires a poem by the other author exploring the same theme. Family and politics Aylett was born in London. She is the mother of writer and activist Owen Jones. She met her husband, union organiser Robert Jones, through their membership in the Militant tendency, a Trotskyist group within the Labour Party. He developed prostate cancer, and died in 2018. As a political activist, Aylett has advocated for the Labour Party, for improved labour security for university staff, for permanent residency for European Union citizens al
https://en.wikipedia.org/wiki/Function%20object
In computer programming, a function object is a construct allowing an object to be invoked or called as if it were an ordinary function, usually with the same syntax (a function parameter that can also be a function). In some languages, particularly C++, function objects are often called functors (not related to the functional programming concept).

Description

A typical use of a function object is in writing callback functions. A callback in procedural languages, such as C, may be performed by using function pointers. However, it can be difficult or awkward to pass a state into or out of the callback function. This restriction also inhibits more dynamic behavior of the function. A function object solves those problems since the function is really a façade for a full object, carrying its own state. Many modern (and some older) languages, e.g. C++, Eiffel, Groovy, Lisp, Smalltalk, Perl, PHP, Python, Ruby, Scala, and many others, support first-class function objects and may even make significant use of them. Functional programming languages additionally support closures, i.e. first-class functions that can 'close over' variables in their surrounding environment at creation time. During compilation, a transformation known as lambda lifting converts the closures into function objects.

In C and C++

Consider the example of a sorting routine that uses a callback function to define an ordering relation between a pair of items. The following C/C++ program uses function pointers:

#include <stdlib.h>

/* qsort() callback function, returns < 0 if a < b, > 0 if a > b, 0 if a == b */
int compareInts(const void* a, const void* b)
{
    return ( *(int *)a - *(int *)b );
}
...
// prototype of qsort is
// void qsort(void *base, size_t nel, size_t width, int (*compar)(const void *, const void *));
...
int main(void)
{
    int items[] = { 4, 3, 1, 2 };
    qsort(items, sizeof(items) / sizeof(items[0]), sizeof(items[0]), compareInts);
    return 0;
}

In C++, a function object may be used instead of an ordinary function by defining a class that overloads the function call operator (an operator() member function). This may appear as follows:

#include <algorithm>
#include <vector>

// comparator predicate: returns true if a < b, false otherwise
struct IntComparator
{
    bool operator()(const int &a, const int &b) const
    {
        return a < b;
    }
};

int main()
{
    std::vector<int> items { 4, 3, 1, 2 };
    std::sort(items.begin(), items.end(), IntComparator());
    return 0;
}

Notice that the syntax for providing the callback to the std::sort() function is identical, but an object is passed instead of a function pointer. When invoked, the callback function is executed just as any other member function, and therefore has full access to the other members (data or functions) of the object. Of course, this is just a trivial example. To understand what power a functor provides more than a regular function, consider the common use case of sorting objects by a particular field. In the follow
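The passage above points toward the usual motivating case: a comparator that carries its own state, which a plain function pointer cannot do without resorting to globals. The following is a hypothetical sketch (the Employee type, its fields, and the function names are invented for illustration) in which the sort direction is stored in the functor and consulted on every comparison:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Employee {
    std::string name;
    int salary;
};

// A stateful comparator: the sort direction lives inside the object,
// so the same comparator type can produce either ordering.
struct BySalary {
    bool ascending;
    explicit BySalary(bool asc) : ascending(asc) {}
    bool operator()(const Employee& a, const Employee& b) const {
        return ascending ? a.salary < b.salary : b.salary < a.salary;
    }
};

// Returns a copy of `staff` sorted by salary in the requested direction.
std::vector<Employee> sort_by_salary(std::vector<Employee> staff, bool asc) {
    std::sort(staff.begin(), staff.end(), BySalary(asc));
    return staff;
}
```

The caller chooses the ordering at runtime by constructing the functor with different state, e.g. sort_by_salary(staff, false) for a descending sort; with a bare function pointer, each ordering would need its own separately written function.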
https://en.wikipedia.org/wiki/Function%20pointer
A function pointer, also called a subroutine pointer or procedure pointer, is a pointer referencing executable code, rather than data. Dereferencing the function pointer yields the referenced function, which can be invoked and passed arguments just as in a normal function call. Such an invocation is also known as an "indirect" call, because the function is being invoked indirectly through a variable instead of directly through a fixed identifier or address. Function pointers allow different code to be executed at runtime. They can also be passed to a function to enable callbacks. Function pointers are supported by third-generation programming languages (such as PL/I, COBOL, Fortran, dBASE dBL, and C) and object-oriented programming languages (such as C++, C#, and D). Simple function pointers The simplest implementation of a function (or subroutine) pointer is as a variable containing the address of the function within executable memory. Older third-generation languages such as PL/I and COBOL, as well as more modern languages such as Pascal and C generally implement function pointers in this manner. Example in C The following C program illustrates the use of two function pointers: func1 takes one double-precision (double) parameter and returns another double, and is assigned to a function which converts centimeters to inches. func2 takes a pointer to a constant character array as well as an integer and returns a pointer to a character, and is assigned to a C string handling function which returns a pointer to the first occurrence of a given character in a character array. 
#include <stdio.h>  /* for printf */
#include <string.h> /* for strchr */

double cm_to_inches(double cm) {
    return cm / 2.54;
}

// "strchr" is part of the C string handling (i.e., no need for declaration)
// See https://en.wikipedia.org/wiki/C_string_handling#Functions

int main(void) {
    double (*func1)(double) = cm_to_inches;
    char * (*func2)(const char *, int) = strchr;
    printf("%f %s", func1(15.0), func2("Wikipedia", 'p')); /* prints "5.905512 pedia" */
    return 0;
}

The next program uses a function pointer to invoke one of two functions (sin or cos) indirectly from another function (compute_sum, computing an approximation of the function's Riemann integral). The program operates by having function main call function compute_sum twice, passing it a pointer to the library function sin the first time, and a pointer to function cos the second time. Function compute_sum in turn invokes one of the two functions indirectly by dereferencing its function pointer argument funcp multiple times, adding together the values that the invoked function returns and returning the resulting sum. The two sums are written to the standard output by main.

#include <math.h>
#include <stdio.h>

// Function taking a function pointer as an argument
double compute_sum(double (*funcp)(double), double lo, double hi)
{
    double sum = 0.0;

    // Add values returned by the pointed-to funct
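The listing above is cut off mid-definition. One plausible completion, matching the description given (summing values returned by the pointed-to function to approximate a Riemann integral), is sketched below; the choice of 100 sample points and the midpoint rule are assumptions, not from the source:

```cpp
#include <cmath>

// Sketch of the truncated compute_sum above: approximate the Riemann
// integral of the pointed-to function over [lo, hi] by summing the
// function's value at 100 evenly spaced midpoints, weighted by the
// step width. Every evaluation is an indirect call through funcp.
double compute_sum(double (*funcp)(double), double lo, double hi)
{
    const int n = 100;
    const double step = (hi - lo) / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += funcp(lo + (i + 0.5) * step) * step;  // indirect call
    return sum;
}
```

Passed a pointer to sin with bounds 0 and π, such a routine returns a value close to 2, the exact integral; with cos over the same interval, a value close to 0.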
https://en.wikipedia.org/wiki/Stack%20machine
In computer science, computer engineering and programming language implementations, a stack machine is a computer processor or a virtual machine in which the primary interaction is moving short-lived temporary values to and from a push down stack. In the case of a hardware processor, a hardware stack is used. The use of a stack significantly reduces the required number of processor registers. Stack machines extend push-down automata with additional load/store operations or multiple stacks and hence are Turing-complete.

Design

Most or all stack machine instructions assume that operands will be from the stack, and results placed in the stack. The stack easily holds more than two inputs or more than one result, so a rich set of operations can be computed. In stack machine code (sometimes called p-code), instructions will frequently have only an opcode commanding an operation, with no additional fields identifying a constant, register or memory cell, known as a zero address format. This greatly simplifies instruction decoding. Branches, load immediates, and load/store instructions require an argument field, but stack machines often arrange that the frequent cases of these still fit together with the opcode into a compact group of bits. The selection of operands from prior results is done implicitly by ordering the instructions. Some stack machine instruction sets are intended for interpretive execution of a virtual machine, rather than driving hardware directly. Integer constant operands are pushed by Push or Load Immediate instructions. Memory is often accessed by separate Load or Store instructions containing a memory address or calculating the address from values in the stack. All practical stack machines have variants of the load–store opcodes for accessing local variables and formal parameters without explicit address calculations. This can be by offsets from the current top-of-stack address, or by offsets from a stable frame-base register. 
The instruction set carries out most ALU actions with postfix (reverse Polish notation) operations that work only on the expression stack, not on data registers or main memory cells. This can be very convenient for executing high-level languages, because most arithmetic expressions can be easily translated into postfix notation.

For example, consider the expression A*(B-C)+(D+E), written in reverse Polish notation as A B C - * D E + +. Compiling and running this on a simple imaginary stack machine would take the form:

                 # stack contents (leftmost = top = most recent):
push A           # A
push B           # B A
push C           # C B A
subtract         # B-C A
multiply         # A*(B-C)
push D           # D A*(B-C)
push E           # E D A*(B-C)
add              # D+E A*(B-C)
add              # A*(B-C)+(D+E)

The arithmetic operations 'subtract', 'multiply', and 'add' act on the two topmost operands of the stack. The computer takes both
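The listing above can be executed mechanically by a tiny interpreter. The sketch below is illustrative only (the function name, opcode spellings, and the use of a variable-environment map are inventions matching the example, not any real instruction set): each "push" looks a named operand up and pushes it, and each arithmetic opcode pops the two topmost values and pushes the result, with the second-from-top as the left operand.

```cpp
#include <map>
#include <sstream>
#include <stack>
#include <stdexcept>
#include <string>

// Toy stack-machine interpreter for a space-separated program of
// "push <var>", "add", "subtract", and "multiply" tokens. Named
// operands are resolved through the environment map `env`.
double run_stack_machine(const std::string& program,
                         const std::map<std::string, double>& env) {
    std::stack<double> s;
    std::istringstream in(program);
    std::string op;
    while (in >> op) {
        if (op == "push") {
            std::string name;
            in >> name;
            s.push(env.at(name));          // push the named operand
        } else {
            double b = s.top(); s.pop();   // topmost (most recent) value
            double a = s.top(); s.pop();   // second-from-top value
            if      (op == "add")      s.push(a + b);
            else if (op == "subtract") s.push(a - b);
            else if (op == "multiply") s.push(a * b);
            else throw std::runtime_error("unknown opcode: " + op);
        }
    }
    return s.top();                        // the result is left on the stack
}
```

Running the article's program with, say, A=2, B=5, C=3, D=4, E=1 leaves 2*(5-3)+(4+1) = 9 on the stack, matching the hand trace in the listing.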
https://en.wikipedia.org/wiki/1971%20in%20video%20games
At the beginning of the 1970s, video games existed almost entirely as novelties passed around by programmers and technicians with access to computers, primarily at research institutions and large companies. The history of video games transitioned into a new era early in the decade, however, with the rise of the commercial video game industry. November Computer Space is released in North America. Galaxy Game is released. December December 3 – The Oregon Trail is first demonstrated to students at Carleton College in Northfield, Minnesota. See also Early history of video games 1971 in games References Video games by year Video games
https://en.wikipedia.org/wiki/Geomatics
Geomatics is defined in the ISO/TC 211 series of standards as the "discipline concerned with the collection, distribution, storage, analysis, processing, presentation of geographic data or geographic information". Under another definition, it consists of products, services and tools involved in the collection, integration and management of geographic (geospatial) data. It is also known as geomatic(s) engineering (geodesy and geoinformatics engineering or geospatial engineering). Surveying engineering was the widely used name for geomatic(s) engineering in the past. History and etymology The term was proposed in French ("géomatique") at the end of the 1960s by scientist Bernard Dubuisson to reflect the then-recent changes in the jobs of surveyor and photogrammetrist. The term was first employed in a French Ministry of Public Works memorandum dated 1 June 1971 instituting a "standing committee of geomatics" in the government. The term was popularised in English by French-Canadian surveyor Michel Paradis in his 1981 article The little Geodesist that could, and in a keynote address at the centennial congress of the Canadian Institute of Surveying (now known as the Canadian Institute of Geomatics) in April 1982. He claimed that at the end of the 20th century the needs for geographical information would reach a scope without precedent in history and that, in order to address these needs, it was necessary to integrate in a new discipline both the traditional disciplines of land surveying and the new tools and techniques of data capture, manipulation, storage and diffusion. Geomatics includes the tools and techniques used in land surveying, remote sensing, cartography, geographic information systems (GIS), global navigation satellite systems (GPS, GLONASS, Galileo, BeiDou), photogrammetry, geophysics, geography, and related forms of earth mapping. 
The term was originally used in Canada but has since been adopted by the International Organization for Standardization, the Royal Institution of Chartered Surveyors, and many other international authorities, although some (especially in the United States) have shown a preference for the term geospatial technology, which may be defined as synonym of "geospatial information and communications technology". Although many definitions of geomatics, such as the above, appear to encompass the entire discipline relating to geographic information – including geodesy, geographic information systems, remote sensing, satellite navigation, and cartography –, the term is almost exclusively restricted to the perspective of surveying and engineering toward geographic information. Geoinformatics has been proposed as an alternative comprehensive term, but its use is only common in some parts of the world, especially Europe. The related field of hydrogeomatics covers the area associated with surveying work carried out on, above or below the surface of the sea or other areas of water. The older term of hydrographics w
https://en.wikipedia.org/wiki/Wireframe
Wireframe or wire-frame may refer to: Wire-frame model, visual model of a three-dimensional object in computer graphics Website wireframe, a basic visual guide used in web design See also Wire sculpture, used in plastic arts
https://en.wikipedia.org/wiki/PayPoint
PayPoint plc is a British business offering a system for paying bills in the United Kingdom, Ireland and Romania. It is listed on the London Stock Exchange. History The PayPoint network was set up in 1996 with the aim of enabling customers to load gas and electricity onto their pre-paid energy meters in cash at their local convenience store. Prepayment meters are intended to help customers to manage energy use, thereby helping the environment, and control their spending, thereby enabling them to live within their limited means. Typically about 40% of customers use prepayment meters for their electricity and gas: this percentage has remained roughly constant over the last five years. First tested in Northern Ireland, the system was expanded to London in 1997, and in 1998, British Gas prepayment meter customers were able to charge their Quantum smart cards at PayPoint retailers. Following continued growth and public listing, in 2006, the company became the exclusive cash payment network for the BBC's TV Licence fee. In November 2006 and February 2007, PayPoint acquired online payment service providers Metacharge and SECPay respectively. In September 2010, PayPoint completed the acquisition of Verrus, a pay-by-phone parking payment provider, and re-branded it in North America and Europe under the brand name PayByPhone. In May 2014, PayPoint.net and PayByPhone were merged under a single identity, PayPoint Mobile and Online. Operations PayPoint allows cash payments at any one of 28,200 United Kingdom PayPoint outlets, 500 in Ireland and 9,000 in Romania. It also provides multi-channel payment for retailers – desktop, laptop, tablet, mobile, mPOS. In most cases, PayPoint's fees are paid by the payee organisation rather than by the payer, the notable exception being deposits into Monzo bank accounts, for which Monzo deducts a £1 fee from the deposited amount. 
Collect+ In February 2011, Collect+ a parcel sending and collection service, was launched as a joint venture between PayPoint and Yodel. This service is available through almost 6,000 of the PayPoint retail network in the United Kingdom and allows customers to collect and send packages at their local convenience store. On 6 April 2020, PayPoint announced an agreement with Yodel to take full ownership of Collect+. References External links Financial services companies of the United Kingdom Payment service providers Companies based in Welwyn Hatfield Financial services companies established in 1996 Companies listed on the London Stock Exchange 1996 establishments in the United Kingdom Companies established in 1996 Payment systems British companies established in 1996
https://en.wikipedia.org/wiki/Computational%20number%20theory
In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to diophantine equations, and explicit methods in arithmetic geometry. Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program. Software packages Magma computer algebra system SageMath Number Theory Library PARI/GP Fast Library for Number Theory Further reading References External links Number theory Number theory
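Primality testing, mentioned above, is the simplest entry point into the field. As an illustrative example (not drawn from the source or from any package listed above), the naive algorithm is trial division: n is composite exactly when it has a divisor in [2, √n], and checking only 2, 3, and numbers of the form 6k ± 1 skips the obviously composite candidates. A C++ sketch:

```cpp
#include <cstdint>

// Trial-division primality test. Every prime greater than 3 has the
// form 6k - 1 or 6k + 1, so after handling 2 and 3 explicitly it is
// enough to test divisors 5, 7, 11, 13, ... up to the square root of n.
bool is_prime(std::uint64_t n) {
    if (n < 2) return false;
    if (n < 4) return true;                     // 2 and 3 are prime
    if (n % 2 == 0 || n % 3 == 0) return false; // strip multiples of 2, 3
    for (std::uint64_t d = 5; d * d <= n; d += 6)
        if (n % d == 0 || n % (d + 2) == 0)     // test 6k - 1 and 6k + 1
            return false;
    return true;
}
```

Trial division runs in O(√n) divisions, which is exponential in the bit length of n; the practical algorithms alluded to above (e.g. probabilistic tests such as Miller–Rabin) are what make primality testing feasible for the cryptographic key sizes mentioned in this article.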
https://en.wikipedia.org/wiki/CityLink
CityLink is a network of tollways in Melbourne, Victoria, Australia, linking the Tullamarine, West Gate and Monash Freeways and incorporating Bolte Bridge, Burnley Tunnel and other works. In 1996, Transurban was awarded the contract to augment two existing freeways and construct two new toll roads – labelled the Western and Southern Links– directly linking a number of existing freeways to provide a continuous, high-capacity road route to, and around, the central business district. CityLink uses a free-flow tolling electronic toll collection system, called e-TAG. CityLink is currently maintained by Lendlease Services. History The first mention of a southern and western inner city bypass was in the 1969 Melbourne Transportation Plan. The plan advocated for reservations and set aside sinking funds for the new inner city freeway system. It was one of the few freeways connecting to the inner city (along with the Eastern Freeway to Clifton Hill) which was not later abandoned. The proposal to build CityLink was first announced in May 1992 and received the State Government's formal approval in mid-1994. The contract was awarded in 1995 to a consortium of Australia's Transfield Holdings and Japan's Obayashi Corporation, named Transurban Consortium. Transurban was formed in March 1996 to operate CityLink when completed. The total value of the project was estimated in 1996 at about $1.8 billion, and the concession to operate the road was initially due to expire in 2034. This concession has since been extended, and is now due to expire in 2045. CityLink was built by the Transfield Obayashi joint venture under contract to Transurban between 1996 and 2000. The design and construction of the Western Link was subcontracted to Baulderstone Hornibrook, and the supply of the electronic tolling system was subcontracted to Translink Systems, a company jointly owned by Transfield Holdings and Transroute of France. 
The ongoing operation and maintenance of CityLink was subcontracted by Transurban to Translink Operations, also jointly owned by Transfield and Transroute, which would manage the performance of CityLink assets. In May 1999, the operations were reorganised, with Transurban taking over the customer service operations from Translink Operations, which would retain responsibility for management of the tolling system, roadside assistance and maintenance. The CityLink project was eight times larger than any other road project in Melbourne of that time. Toll plazas for manual tolling were deemed impractical, and delays associated with plaza operations would have decreased the advantages of using the new road. The decision to use only electronic toll collection was made in 1992, at a time when there was little practical experience of such systems. The first of the sections opened to traffic on 15 August 1999, with tolling commencing on 3 January 2000, before final completion occurred on 28 December 2000 with tolling commencing the same year. When CityLink opened
https://en.wikipedia.org/wiki/Mac%20OS%208
Mac OS 8 is an operating system that was released by Apple Computer on July 26, 1997. It includes the largest overhaul of the classic Mac OS experience since the release of System 7, approximately six years before. It places a greater emphasis on color than prior versions. Released over a series of updates, Mac OS 8 represents an incremental integration of many of the technologies which had been developed from 1988 to 1996 for Apple's overly ambitious OS named Copland. Mac OS 8 helped modernize the Mac OS while Apple developed its next-generation operating system, Mac OS X (renamed in 2012 to OS X and then in 2016 to macOS). Mac OS 8 is one of Apple's most commercially successful software releases, selling over 1.2 million copies in the first two weeks. As it came at a difficult time in Apple's history, many pirate groups refused to traffic in the new OS, encouraging people to buy it instead. Mac OS 8.0 introduces the most visible changes in the lineup, including the Platinum interface and a native PowerPC multithreaded Finder. Mac OS 8.1 introduces a new, more efficient file system named HFS Plus. Mac OS 8.5 is the first version of the Mac OS to require a PowerPC processor. It features PowerPC native versions of QuickDraw, AppleScript, and the Sherlock search utility. Its successor, Mac OS 9, was released on October 23, 1999. Copland Starting in 1988, Apple's next-generation operating system, which it originally envisioned to be "System 8" was codenamed Copland. It was announced in March 1994 alongside the introduction of the first PowerPC Macs. Apple intended Copland as a fully modern system, including native PowerPC code, intelligent agents, a microkernel, a customizable interface named Appearance Manager, a hardware abstraction layer, and a relational database integrated into the Finder. Copland was to be followed by Gershwin, which promised memory protection spaces and full preemptive multitasking. 
The system was intended to be a full rewrite of the Mac OS, and Apple hoped to beat Microsoft Windows 95 to market with a development cycle of only one year. The Copland development was hampered by many missed deadlines. The release date was first pushed back to the end of 1995, then to mid-1996, late 1996, and finally to the end of 1997. With a dedicated team of 500 software engineers and an annual budget of $250 million, Apple executives began to grow impatient with the project continually falling behind schedule. In August 1996, Apple chief technology officer Ellen Hancock froze development of Copland and Apple began a search for an operating system developed outside the company. This ultimately led to Apple buying NeXT and developing Rhapsody, which would eventually evolve into Mac OS X in 2001 (now named macOS). At the Macworld Expo in January 1997, Apple chief executive officer (CEO) Gil Amelio announced that, rather than release Copland as one monolithic release, Copland features would be phased into the Mac OS follow
https://en.wikipedia.org/wiki/Buddy%20Jewell
Buddy Jewell Jr. (born April 2, 1961) is an American country music singer who was the first winner on the USA Network talent show Nashville Star. Signed to Columbia Records in 2003, Jewell made his debut on the American country music scene with the release of his self-titled album, which produced the singles "Help Pour Out the Rain" and "Sweet Southern Comfort". Another album, Times Like These, followed in 2005. Biography Buddy Jewell was born in Lepanto, Arkansas, on April 2, 1961. He began playing guitar after buying one from a schoolmate during childhood, and saved the money that he earned bagging groceries to buy guitar lesson books. Jewell also listened to the music that his father, also named Buddy, played for him, and was taught by his uncle Clyde how to play "What a Friend We Have in Jesus". By age 15, Jewell had also taught himself how to play Johnny Cash's "I Still Miss Someone." After graduating from Osceola High School, he attended Arkansas State University where he was a member of Pi Kappa Alpha. Jewell majored in television and radio in college, although he left in his junior year to marry, despite the marriage only lasting two-and-a-half years. Jewell later moved to Camden, Arkansas, at age 21 in pursuit of a musical career. There, he discovered a band called White Oak, which was seeking a new lead singer. This band was sponsored by a booking agency whose roster also included Canyon and a band founded by a then-unknown Trace Adkins. After touring with White Oak for four years, he moved to Dallas, Texas, where he took a role in a gunfighting show at Six Flags over Texas. He later entered a singing competition that was sponsored by the band Alabama, whose music was also an inspiration to him. He won the competition's top prize, which was an opening slot for the band. After winning the competition, he competed on Star Search where he won Male Vocalist on several episodes. 
He later decided to move to Nashville, Tennessee, in 1993, and found work two years later as a demo singer. As a demo singer, he recorded more than 5,000 demos. Among the songs that Jewell recorded demos for were "Write This Down" for George Strait, "A Little Past Little Rock" for Lee Ann Womack, "The One" for Gary Allan and "You're Beginning to Get to Me" for Clay Walker. Jewell also self-released albums entitled One in a Row and Far Enough Away in 2001 and 2002 respectively. Nashville Star and major-label music career In 2003, Jewell competed in the first season of the television singing competition Nashville Star. He became the show's first winner that season, and was soon signed to a recording contract with Columbia Records Nashville. 2003–2004: Buddy Jewell On May 5, 2003, two days after his win, Jewell's debut single "Help Pour Out the Rain" was shipped to radio. It became the highest-debuting single by a new country artist since the singles charts were first tabulated via Nielsen SoundScan in 1990. This song reached number three on the country charts an
https://en.wikipedia.org/wiki/Nashville%20Star
Nashville Star is an American reality television singing competition program that aired for six seasons, from 2003 to 2008. Its first five seasons aired on USA Network, while the last season aired on NBC. Its five seasons on USA made it the longest-running competition series on cable television at the time. In Canada, the show aired on CMT through season 5, but moved to E! beginning with season 6. CMT in the United States reaired each episode in season 6. It was similar to American Idol, in that performers had to sing to impress both celebrity judges and the public via call-in and/or internet votes. Unlike American Idol, however, the performers were limited to country music. This restriction was relaxed for Season 6, allowing the finalists to choose from many genres of music, but the songs were arranged to maintain a country sound. The show is credited with jump-starting the careers of singers Buddy Jewell, Miranda Lambert, Chris Young, and Kacey Musgraves, among others. A Nashville Star-themed gifts and souvenirs shop featuring local items and city souvenirs opened in July 2008 at Nashville International Airport, one month before the show's final episode, and closed after over a decade. Show format Comparisons to American Idol In a format nearly identical to the final round of American Idol, finalists performed one song per week individually and faced criticism and/or praise from a panel of three judges. At the end of the show, voting opened to the viewing public, who cast votes by calling a toll-free telephone number or logging on to the show's official website (texting was added as a voting option in 2008). The performer with the fewest votes was eliminated. However, because Nashville Star aired only once per week, eliminations were not announced until the following week. The finalists who had not been eliminated were called in random order to the stage one by one to perform until only two remained. 
At that point, one was called to perform and the other was eliminated for receiving the fewest votes from the previous week. The finalists did not know the order in which they would perform and had less than one minute to prepare once their names were called. No votes were tallied on the season finale. Much like American Idol, the judges were present to offer criticism to the finalists in an attempt to sway the voting public. Unlike Idol, however, Nashville Star's judges did not participate in the preliminary auditions (leaving that task to the show's producers), but they did act as mentors to the finalists (beginning with the 2008 season). The audition process was not seen on-air on USA Network versions, except for the first season (2003 season), but portions of it were seen in a montage during the premiere of the NBC version. Beginning with the 2008 move to NBC, the judges did assist producers in narrowing the field from 50 to 12. Each season (except for 2005), the judges eliminated finalists based on consensus on the premiere
https://en.wikipedia.org/wiki/PTH
PTH may refer to: Biology and Medicine Parathyroid hormone phenylthiohydantoin, an amino acid derivative formed by the Edman degradation Computing GNU Portable Threads in computing Pass the hash attack in computing Languages Pataxó language, by ISO 639 code Standard Chinese, also known as putonghua and abbreviated PTH Places Port Huron (Amtrak station), Michigan, US, station code Perth railway station, Scotland, station code Provincial Trunk Highway in list of Manitoba provincial highways Port Heiden Airport, by IATA code Other Plated through-hole in PCB through-hole technology Polskie Towarzystwo Historyczne, the Polish Historical Society
https://en.wikipedia.org/wiki/TasWireless
TasWireless is a group of wireless networking enthusiasts in Tasmania, Australia. Between them they have set up wireless community networks in both Hobart and Launceston. The group has gone through many names: tas.air, www.tas.air.net.au, TPAN (Tasmanian Public Airwave Network), and now TasWireless. With users from several different backgrounds, including computer networking, amateur radio, amateur television, programming, Linux/BSD server administration, and antenna and satellite dish installation, they are willing to assist with community networks in any part of the state. Introduction The TasWireless site was first started in 1999. It started as a splinter group from TasLUG, the Tasmanian Linux Users Group. Only a small number of people were interested in wireless networking at this time, fewer than five each in Hobart and Launceston. A node database for Tasmanian regions was started and the mailing list was put online, but due to the lack of practical experience and knowledge, very little happened. The cost of Wi-Fi cards and wireless access points was also a problem. In early 2002, cheap SkyNet Global 802.11b PC Cards flooded the market. These cards were liquidated stock and cost around A$50-60 each, while the average retail price was still around A$200. A lot of these cards were shipped to the state and distributed (both by TasWireless admins and otherwise). Wireless networks in Hobart The predominant network in Hobart is called StarNet. This was started as a private network by a small group of amateur radio enthusiasts around April 2002. It included around six or seven sites. In April 2003, an operator of the TasWireless website stumbled upon one of their nodes, with SSID StarNet, and posted his find to the mailing list. As a result, all users involved were able to share knowledge and make some minor changes to the network routing. 
Another network, RexNet, based in Kingston, was also found; they had already been working with the StarNet group to eventually join the networks. In mid-2003, various 802.11b wireless access points appeared on the market at low prices: Svec and Minitar brand access points were selling for around $100. This made setting up nodes easier, as the compatibility issues between various brands of PCI cradles, PC cards, and operating systems had caused some problems. By the start of March 2004, there were around 25 nodes on StarNet, spanning Tea Tree, Otago, Rosetta, Lutana, Glenorchy, Moonah, Lindisfarne, Lenah Valley, Bellerive, Acton, Tranmere, Sandy Bay, and Kingston. Wireless networks in Launceston Wireless networks in Launceston were a lot slower to take off than in Hobart, but this is to be expected given the smaller population. Several small peer-to-peer links were tested, but no major infrastructure was rolled out. At the end of January 2004, two groups appeared at around the same time. One group was unnamed (though was working under the TasWireless name), t
https://en.wikipedia.org/wiki/Gross-Rosen%20concentration%20camp
Gross-Rosen was a network of Nazi concentration camps built and operated by Nazi Germany during World War II. The main camp was located in the German village of Gross-Rosen, now the modern-day Rogoźnica in Lower Silesian Voivodeship, Poland, directly on the rail line between the towns of Jawor (Jauer) and Strzegom (Striegau). Its prisoners were mostly Jews, Poles and Soviet citizens. At its peak activity in 1944, the Gross-Rosen complex had up to 100 subcamps located in eastern Germany and in German-occupied Czechoslovakia and Poland. The population of all Gross-Rosen camps at that time accounted for 11% of the total number of inmates incarcerated in the Nazi concentration camp system. The camp KZ Gross-Rosen was set up in the summer of 1940 as a satellite camp of the Sachsenhausen concentration camp from Oranienburg. Initially, the slave labour was carried out in a huge stone quarry owned by the SS-Deutsche Erd- und Steinwerke GmbH (SS German Earth and Stone Works). In the fall of 1940 the use of labour in Upper Silesia was taken over by the new Organization Schmelt, formed on the orders of Heinrich Himmler and named after its leader, SS-Oberführer Albrecht Schmelt. The organization was put in charge of employing labour from the camps, with Jewish prisoners intended to work for food alone. Gross-Rosen's location close to occupied Poland was a considerable advantage. Prisoners were put to work in the construction of a system of subcamps for expellees from the annexed territories. Gross-Rosen became an independent camp on 1 May 1941. As the complex grew, the majority of inmates were put to work in the new Nazi enterprises attached to these subcamps. In October 1941 the SS transferred about 3,000 Soviet POWs to Gross-Rosen for execution by shooting. Gross-Rosen was known for its brutal treatment of the so-called Nacht und Nebel prisoners, who vanished without a trace from targeted communities. Most died in the granite quarry. 
The brutal treatment of the political and Jewish prisoners came not only from the guards and the German criminal prisoners brought in by the SS but also, to a lesser extent, from the German administration of the stone quarry, which was responsible for starvation rations and the denial of medical help. In 1942, the average survival time-span for political prisoners was less than two months. Due to a change of policy in August 1942, prisoners were likely to survive longer because they were needed as slave workers in German war industries. Among the companies that benefited from the slave labour of the concentration camp inmates were German electronics manufacturers such as Blaupunkt and Siemens, as well as Krupp, IG Farben, and Daimler-Benz, among others. Some prisoners who were not able to work but not yet dying were sent to the Dachau concentration camp in so-called invalid transports. The largest population of inmates, however, were Jews, initially from the Dachau and Sachsenhausen camps, and later from Buchenwald. During the camp's existence, the Jewi
https://en.wikipedia.org/wiki/IDL%20%28programming%20language%29
IDL, short for Interactive Data Language, is a programming language used for data analysis. It is popular in particular areas of science, such as astronomy, atmospheric physics and medical imaging. IDL shares a common syntax with PV-Wave and originated from the same codebase, though the languages have subsequently diverged in detail. There are also free or costless implementations, such as GNU Data Language (GDL) and Fawlty Language (FL). Overview IDL is vectorized, numerical, and interactive, and is commonly used for interactive processing of large amounts of data (including image processing). The syntax includes many constructs from Fortran and some from C. IDL originated from early VMS Fortran, and its syntax still shows its heritage: x = findgen(100)/10 y = sin(x)/x plot,x,y The findgen function in the above example returns a one-dimensional array of floating point numbers, with values equal to a series of integers starting at 0. Note that the operation in the second line applies in a vectorized manner to the whole 100-element array created in the first line, analogous to the way general-purpose array programming languages (such as APL, J or K) would do it. This example contains a division by zero; IDL will report an arithmetic overflow, and store a NaN value in the corresponding element of the array (the first one), but the other array elements will be finite. The NaN is excluded from the visualization generated by the plot command. As with most other array programming languages, IDL is very fast at doing vector operations (sometimes as fast as a well-coded custom loop in Fortran or C) but quite slow if elements need processing individually. Hence part of the art of using IDL (or any other array programming language, for that matter) for numerically heavy computations is to make use of the built-in vector operations. 
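The vectorized style described above can be illustrated with a short NumPy sketch that mirrors the IDL snippet. NumPy is used here only as an analogue of IDL's array semantics (it is not part of IDL), and the variable names are chosen to match the example:

```python
import numpy as np

# Analogue of the IDL snippet:
#   x = findgen(100)/10   -> 0.0, 0.1, ..., 9.9
#   y = sin(x)/x          -> applied element-wise to the whole array
x = np.arange(100, dtype=np.float32) / 10
with np.errstate(invalid="ignore"):   # silence the 0/0 warning at x[0]
    y = np.sin(x) / x                 # vectorized: no explicit loop

# As in the IDL example, the first element is NaN (0/0); the rest are finite.
print(np.isnan(y[0]), np.isfinite(y[1:]).all())
```

Writing the same computation as an explicit per-element loop would be markedly slower on large arrays, which parallels the advice above to lean on built-in vector operations.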
History The predecessor versions of IDL were developed in the 1970s at the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado at Boulder. At LASP, David Stern was involved in efforts to allow scientists to test hypotheses without employing programmers to write or modify individual applications. The first program in the evolutionary chain to IDL that Stern developed was named Rufus; it was a simple vector-oriented calculator that ran on the PDP-12. It accepted two-letter codes that specified an arithmetic operation, the input registers to serve as operands, and the destination register. A version of Rufus developed on the PDP-8 was the Mars Mariner Spectrum Editor (MMED). MMED was used by LASP scientists to interpret data from Mariner 7 and Mariner 9. Later, Stern wrote a program named SOL, which also ran on the PDP-8. Unlike its predecessors, it was a true programming language with a FORTRAN-like syntax. SOL was an array-oriented language with some primitive graphics capabilities. Stern left LASP to found Research Systems Inc. (RSI) in 1977. The first RSI product was IDL for the PDP-11. In thi
https://en.wikipedia.org/wiki/RPL%20%28programming%20language%29
RPL is a handheld calculator operating system and application programming language used on Hewlett-Packard's scientific graphing RPN (Reverse Polish Notation) calculators of the HP 28, 48, 49 and 50 series, but it is also usable on non-RPN calculators, such as the 38, 39 and 40 series. Internally, it was also utilized by the 17B, 18C, 19B and 27S. RPL is a structured programming language based on RPN, but equally capable of processing algebraic expressions and formulae, implemented as a threaded interpreter. RPL has many similarities to Forth, both languages being stack-based, as well as to the list-based LISP. Unlike previous HP RPN calculators, which had a fixed four-level stack, the dynamic stack used by RPL is limited only by available RAM, with the calculator displaying an error message when running out of memory rather than silently dropping arguments off the stack as in fixed-sized RPN stacks. RPL originated from HP's Corvallis, Oregon development facility in 1984 as a replacement for the previous practice of implementing the operating systems of calculators in assembly language. The first calculator utilizing it internally was the HP-18C and the first calculator making it available to users was the HP-28C, both from 1986. The last pocket calculator supporting RPL, the HP 50g, was discontinued in 2015. However, multiple emulators that can emulate HP's RPL calculators exist that run on a range of operating systems and devices, including iOS and Android smartphones. There are also a number of community projects to recreate and extend RPL on newer calculators, like newRPL or DB48X, which may add features or improve performance. Variants The internal low- to medium-level variant of RPL, called System RPL (or SysRPL), is used on some earlier HP calculators as well as the aforementioned ones, as part of their operating system implementation language. 
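The stack discipline described above can be sketched with a toy RPN evaluator in Python. This is a hypothetical illustration, not HP's implementation and not RPL syntax: the stack grows dynamically as operands are pushed, and an operator that finds too few operands raises an explicit error rather than silently dropping arguments.

```python
def rpn_eval(tokens):
    """Evaluate a list of RPN tokens with a dynamic, unbounded stack."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []                          # grows as needed, like RPL's stack
    for tok in tokens:
        if tok in ops:
            if len(stack) < 2:          # explicit error, not silent dropping
                raise RuntimeError("Too Few Arguments")
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))    # operands are pushed
    return stack

# "3 4 + 2 *" computes (3 + 4) * 2, leaving one value on the stack.
print(rpn_eval("3 4 + 2 *".split()))    # [14.0]
```

The dynamic list here captures the contrast drawn above with fixed four-level RPN stacks, where deep expressions could push earlier operands off the bottom.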
In the HP 48 series this variant of RPL is not accessible to the calculator user without the use of external tools, but in the HP 49/50 series there is a compiler built into ROM for using SysRPL. It is possible to cause a serious crash while coding in SysRPL, so caution is required when using it. The high-level User RPL (or UserRPL) version of the language is available on said graphing calculators for developing textual as well as graphical application programs. All UserRPL programs are internally represented as SysRPL programs, but use only a safe subset of the available SysRPL commands. The error checking that is a part of UserRPL commands, however, makes UserRPL programs noticeably slower than equivalent SysRPL programs. The UserRPL command SYSEVAL tells the calculator to process designated parts of a UserRPL program as SysRPL code. Control blocks RPL control blocks are not strictly postfix. Although there are some notable exceptions, the control block structures appear as they would in a standard infix language. The calculator manages this by allowing the implementation of these blocks to
https://en.wikipedia.org/wiki/Windsor%20Star
The Windsor Star is a daily newspaper based in Windsor, Ontario, Canada. Owned by Postmedia Network, it is published Tuesdays through Saturdays. History The paper began as the weekly Windsor Record in 1888, changing its name to the Border Cities Star in 1918, when it was bought by W. F. Herman. The Border Cities Star was a daily newspaper published from September 3, 1918, until June 28, 1935. The founders W. F. Herman and Hugh Graybiel purchased the existing daily newspaper, the Windsor Record (known as the Evening Record from 1890 to November 1917), from John A. McKay on August 6, 1918. There was some conflict before the men purchased the newspaper. The Windsor Record had only partial wire service, and some felt that the national and international news was not sufficiently covered. Originally, the Border Cities Star was intended to be a rival daily newspaper to the Windsor Record. However, Herman's application to Canadian Press Limited for full wire service was denied because of opposition by McKay, who had held a variety of committee executive positions at the organization over the years. McKay eventually agreed to subscribe to the full wire service and sold the Windsor Record to W. F. Herman for an inflated price. Many viewed that as a flaw of the Canadian Press Limited. The wire service, which was subsidized by government funds, was run mainly by a group of publishers who could use it as a way of limiting competition and increasing the value of their own newspapers (Border Cities Era: October 18, 1918, page 7). Herman had previous experience in the newspaper industry, since he had owned the Prince Albert Daily Herald, the Saskatoon Capital, and the Regina Leader-Post. Herman became the paper's president, and Graybiel assumed the role of business manager. They changed the name of the Windsor Record to the Border Cities Star to reflect not only Windsor but also all the surrounding communities. 
On page 4 of its inaugural issue, the new owners stated in their "Aims and Endeavors" that they intended to make it "a worth-while newspaper for worth-while people." They proposed two main goals: one was to work with and build up local institutions and organizations. The newspaper "must endeavor to become one with its community, to enter closely into its daily life and being, and to voice for the community the otherwise largely inarticulate striving for the attainment of the largest self-development." The other goal was "to be worthy of Canada." They appealed to Canadian pride and nationalism, particularly with regard to Canadians' contributions to the ongoing war, and stated their intention "to be broad, to be faithful, to be progressive and forward-looking, to be free and independent and unprejudiced. The Canadian who is not proud of our mighty country has no right or title to its citizenship." They identified two other goals: the revision of tariffs and to "uphold the English language as the only proper language and method of instruction in the
https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes%20theory
In mathematics and computer science, the Krohn–Rhodes theory (or algebraic automata theory) is an approach to the study of finite semigroups and automata that seeks to decompose them in terms of elementary components. These components correspond to finite aperiodic semigroups and finite simple groups that are combined in a feedback-free manner (called a "wreath product" or "cascade"). Krohn and Rhodes found a general decomposition for finite automata. The authors discovered and proved an unexpected major result in finite semigroup theory, revealing a deep connection between finite automata and semigroups. Definitions and description of the Krohn–Rhodes theorem Let T be a semigroup. A semigroup S that is a homomorphic image of a subsemigroup of T is said to be a divisor of T. The Krohn–Rhodes theorem for finite semigroups states that every finite semigroup S is a divisor of a finite alternating wreath product of finite simple groups, each a divisor of S, and finite aperiodic semigroups (which contain no nontrivial subgroups). In the automata formulation, the Krohn–Rhodes theorem for finite automata states that given a finite automaton A with states Q, input set I, and output alphabet U, one can expand the states to Q' such that the new automaton A' embeds into a cascade of "simple", irreducible automata: in particular, A is emulated by a feed-forward cascade of (1) automata whose transformation semigroups are finite simple groups and (2) automata that are banks of flip-flops running in parallel. The new automaton A' has the same input and output symbols as A. Here, both the states and inputs of the cascaded automata have a very special hierarchical coordinate form. 
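The semigroup form of the theorem can be written compactly. In the sketch below, the divisor relation and wreath-product symbol follow standard usage in the literature rather than notation appearing in the text above:

```latex
% Krohn--Rhodes decomposition (a sketch in standard notation): every finite
% semigroup S divides an alternating wreath product of finite aperiodic
% semigroups A_i and finite simple groups G_i, with each G_i dividing S.
S \;\prec\; A_n \wr G_n \wr A_{n-1} \wr \cdots \wr A_1 \wr G_1 \wr A_0
```

Here $T \prec T'$ means that $T$ is a homomorphic image of a subsemigroup of $T'$, matching the definition of a divisor given above, and $\wr$ denotes the wreath product.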
Moreover, each simple group (prime) or non-group irreducible semigroup (subsemigroup of the flip-flop monoid) that divides the transformation semigroup of A must divide the transformation semigroup of some component of the cascade, and only the primes that must occur as divisors of the components are those that divide A's transformation semigroup. Group complexity The Krohn–Rhodes complexity (also called group complexity or just complexity) of a finite semigroup S is the least number of groups in a wreath product of finite groups and finite aperiodic semigroups of which S is a divisor. All finite aperiodic semigroups have complexity 0, while non-trivial finite groups have complexity 1. In fact, there are semigroups of every non-negative integer complexity. For example, for any n greater than 1, the multiplicative semigroup of all (n+1) × (n+1) upper-triangular matrices over any fixed finite field has complexity n (Kambites, 2007). A major open problem in finite semigroup theory is the decidability of complexity: is there an algorithm that will compute the Krohn–Rhodes complexity of a finite semigroup, given its multiplication table? Upper bounds and ever more precise lower bounds on complexity have been obtained (see, e.g. Rhodes & Steinberg, 2009). Rhodes has co