source | text |
|---|---|
https://en.wikipedia.org/wiki/Super | Super may refer to:
Computing
SUPER (computer program), or Simplified Universal Player Encoder & Renderer, a video converter / player
Super (computer science), a keyword in object-oriented programming languages
Super key (keyboard button)
Film and television
Super (2005 film), a Telugu film starring Nagarjuna, Anushka Shetty and Ayesha Takia
Super (2010 Indian film), a Kannada language film starring Upendra and Nayantara
Super (2010 American film), a film written and directed by James Gunn, and starring Rainn Wilson and Elliot Page
"Super" (Person of Interest), an episode of the TV series Person of Interest
Music
"Super" (Cordae song), a 2021 song by American rapper Cordae
"Super" (Neu! song), a 1972 song by German band Neu!
"Super (1, 2, 3)", a 2000 song by Italian DJ Gigi D'Agostino
Super (album), a 2016 album by Pet Shop Boys
"Super" (Seventeen song), a 2023 song by South Korean band Seventeen
Other uses
Hillary Super, American business executive
Super!, an Italian television network
Super (company), film distributor
Super (gamer) (born 2000), professional Overwatch player
Building superintendent, a manager, maintenance or repair person, custodian, or janitor
Pension (abbreviation of superannuation)
Supernumerary actor, the stage equivalent of an extra in film
Suomen lähi- ja perushoitajaliitto or SuPer, the Finnish Union of Practical Nurses
Zab Judah, nicknamed "Super", American boxer
.38 Super, a pistol cartridge
The "Super", Teller's H-bomb idea, a thermonuclear fusion bomb ignited by a smaller fission bomb
See also
Honey super, the part of a commercial beehive that is used to collect honey
Super unleaded, a grade of gasoline
Extraordinary (disambiguation)
Supra (disambiguation)
Hyper (disambiguation)
Meta (disambiguation) |
https://en.wikipedia.org/wiki/SGF | SGF may mean:
Smart Game Format, a computer file format
Société générale de financement, Québec, Canada
South Glens Falls, a village in upstate New York
Sovereign Grace Fellowship of Canada, an association of Baptist churches
Springfield–Branson National Airport, Springfield, Missouri, US, IATA code
The Spaceguard Foundation, an organization working to protect Earth from collisions with astronomical objects |
https://en.wikipedia.org/wiki/John%20F.%20Sowa | John Florian Sowa (born 1940) is an American computer scientist, an expert in artificial intelligence and computer design, and the inventor of conceptual graphs. (Kecheng Liu (2000), Semiotics in Information Systems Engineering, p. 54, states: "Conceptual graphs are devised as a language of knowledge representation by Sowa (1984), based on philosophy, psychology and linguistics. Knowledge in conceptual graph form is highly structured by modelling specialised facts that can be subjected to generalised reasoning.")
Biography
Sowa received a BS in mathematics from Massachusetts Institute of Technology in 1962, an MA in applied mathematics from Harvard University in 1966, and a PhD in computer science from the Vrije Universiteit Brussel in 1999 with a dissertation titled "Knowledge Representation: Logical, Philosophical, and Computational Foundations".
Sowa spent most of his professional career at IBM, starting in 1962 at IBM's applied mathematics group. Over the decades he has researched and developed emerging fields of computer science from compilers, programming languages, and system architecture to artificial intelligence and knowledge representation. In the 1990s Sowa was associated with the IBM Educational Center in New York. Over the years he taught courses at the IBM Systems Research Institute, Binghamton University, Stanford University, the Linguistic Society of America and the Université du Québec à Montréal. He is a fellow of the Association for the Advancement of Artificial Intelligence.
After taking early retirement from IBM, Sowa in 2001 cofounded VivoMind Intelligence, Inc. with Arun K. Majumdar. At this company he developed data-mining and database technology, more specifically high-level "ontologies" for artificial intelligence and automated natural language understanding. Sowa currently works with Kyndi, Inc., also founded by Majumdar.
John Sowa is married to the philologist Cora Angier Sowa, and they live in Croton-on-Hudson, New York.
Work
Sowa's research interests since the 1970s have been in the fields of artificial intelligence, expert systems, and database queries linked to natural language. In his work he combines ideas from numerous disciplines and eras, ancient and modern: he applies ideas ranging from Aristotle and the medieval scholastics to Alfred North Whitehead, draws on database schema theory, and incorporates the model of analogy of the Islamic scholar Ibn Taymiyyah.
Conceptual graph
Sowa invented conceptual graphs, a graphic notation for logic and natural language, based on the structures in semantic networks and on the existential graphs of Charles S. Peirce. He introduced the concept in the 1976 article "Conceptual graphs for a data base interface" in the IBM Journal of Research and Development. He elaborated upon it in the 1983 book Conceptual structures: information processing in mind and machine.
In the 1980s, this theory had "been adopted by a number of research and development groups t |
https://en.wikipedia.org/wiki/G.723.1 | G.723.1 is an audio codec for voice that compresses voice audio in 30 ms frames. An algorithmic look-ahead of 7.5 ms duration means that total algorithmic delay is 37.5 ms. Its official name is Dual rate speech coder for multimedia communications transmitting at 5.3 and 6.3 kbit/s. It is sometimes associated with a Truespeech trademark in coprocessors produced by DSP Group.
This is a completely different codec from G.723.
There are two bit rates at which G.723.1 can operate:
6.3 kbit/s (using 24-byte frames) using an MP-MLQ algorithm (MOS 3.9)
5.3 kbit/s (using 20-byte frames) using an ACELP algorithm (MOS 3.62)
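As a rough cross-check of these rates (my arithmetic, not from the standard), each frame carries 30 ms of speech, so the gross bit rate is simply the frame size in bits divided by the frame duration:

```python
# Sanity-check the two G.723.1 rates from the frame sizes above.
# Assumes the standard 30 ms frame; the 24-byte frame includes padding,
# since the nominal 6.3 kbit/s payload is 189 bits per frame (6300 * 0.030).
FRAME_SECONDS = 0.030

for frame_bytes in (24, 20):
    rate_kbps = frame_bytes * 8 / FRAME_SECONDS / 1000
    print(f"{frame_bytes}-byte frame -> {rate_kbps:.2f} kbit/s gross")

# 24-byte frame -> 6.40 kbit/s gross (nominal 6.3 kbit/s)
# 20-byte frame -> 5.33 kbit/s gross (nominal 5.3 kbit/s)
```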
Use
G.723.1 is mostly used in Voice over IP (VoIP) applications due to its low bandwidth requirement. Music or tones such as DTMF or fax tones cannot be transported reliably with this codec, and thus some other method such as G.711 or out-of-band methods should be used to transport these signals. The complexity of the algorithm is below 16 MIPS. 2.2 kilobytes of RAM is needed for codebooks.
G.723.1 is a required audio codec in the H.324 ITU-T recommendation for H.324 terminals offering audio communication. In the 3GPP 3G-324M specification, support for G.723.1 is not mandatory, but recommended.
Features
Sampling frequency 8 kHz/16-bit (240 samples for 30 ms frames)
Fixed bit rate (5.3 kbit/s with 20-byte frames, 6.3 kbit/s with 24-byte frames)
Fixed frame size for each rate (20 bytes at 5.3 kbit/s, 24 bytes at 6.3 kbit/s)
Algorithmic delay is 30 ms per frame, with 7.5 ms look-ahead delay
G.723.1 is a hybrid speech coder, with high bit rate using multi-pulse maximum likelihood quantization (MP-MLQ) and low bit rate using algebraic code-excited linear prediction (ACELP)
The complexity of the algorithm is rated at 25, using a relative scale where G.711 is 1 and G.729a is 15.
G.723.1 Annex A defines a 4-byte silence insertion descriptor (SID) frame for Comfort Noise Generation
PSQM testing under ideal conditions yields mean opinion scores of 4.08 for G.723.1 (6.3 kbit/s), compared to 4.45 for G.711 (μ-law)
PSQM testing under network stress yields mean opinion scores of 3.57 for G.723.1 (6.3 kbit/s), compared to 4.13 for G.711 (μ-law)
Licensing
As of January 1, 2017, the patent terms of most patents applying to G.723.1 have expired. With regard to the unexpired licensed patents of their G.723.1 patent license agreement, the licensors of G.723.1 patents, namely AudioCodes, Orange SA, and Université de Sherbrooke have agreed to license their patents under the existing terms on a royalty-free basis starting January 1, 2017.
The authorized intellectual property rights licensing administrator for G.723.1 technology is Sipro Lab Telecom.
Members of the G.723.1 patent pool are AudioCodes, France Telecom, Université de Sherbrooke, Nippon Telegraph and Telephone Corporation and Nokia.
See also
List of codecs
Comparison of audio coding formats
RTP audio video profile
References
External links
ITU-T Recommendation G.723.1 - technical specification
Intellectual Property Rights page on ITU website (with link to patent declaration database) |
https://en.wikipedia.org/wiki/Vladimir%20Levin | Vladimir Leonidovitch Levin (Владимир Леонидович Левин) is a Russian individual famed for his involvement in a hacking attempt to fraudulently transfer USD 10.7 million via Citibank's computers.
The commonly known story
At the time, the mass media claimed he was a mathematician and had a degree in biochemistry from Saint Petersburg State Institute of Technology.
According to the coverage, in 1994 Levin accessed the accounts of several large corporate customers of Citibank via their dial-up wire transfer service (Financial Institutions Citibank Cash Manager) and transferred funds to accounts set up by accomplices in Finland, the United States, the Netherlands, Germany and Israel.
Three of his accomplices were arrested attempting to withdraw funds in Tel Aviv, Rotterdam and San Francisco. Interrogation of his accomplices directed investigations to Levin, then working as a computer programmer for the St. Petersburg-based computer company AO Saturn. However, Russia's Constitution prohibits extradition of its citizens to foreign countries.
In March 1995 Levin was lured to London and apprehended at London's Stansted Airport by Scotland Yard officers when making an interconnecting flight from Moscow. Levin's lawyers fought against extradition to the U.S., but their appeal was rejected by the House of Lords in June 1997.
Levin was delivered into U.S. custody in September 1997, and was tried in the United States District Court for the Southern District of New York. In his plea agreement he admitted to only one count of conspiracy to defraud, and to stealing US$3.7 million. In February 1998 he was convicted and sentenced to three years in jail, and ordered to make restitution of US$240,015. Citibank claimed that all but US$400,000 of the stolen US$10.7 million had been recovered.
After the compromise of their system, Citibank updated their systems to use Dynamic Encryption Card, a physical authentication token. However, it was not revealed how Levin had gained access to the relevant account access details. Following his arrest in 1995, anonymous members of hacking groups based in St. Petersburg claimed that Levin did not have the technical abilities to break into Citibank's systems, that they had cultivated access to systems deep within the bank's network, and that these access details had been sold to Levin for $100.
The revelation a decade later
In 2005 an alleged member of the former St. Petersburg hacker group, claiming to be one of the original Citibank penetrators, published, under the name ArkanoiD, a memorandum on the popular Provider.net.ru website, which is dedicated to the telecom market. According to him, Levin was not actually a scientist (mathematician, biologist or the like) but a kind of ordinary system administrator who managed to get his hands on ready-made data about how to penetrate Citibank machines and then exploit them.
ArkanoiD emphasized all the communications were carried over X.25 network and the Internet was not involved. ArkanoiD's group in 1994 |
https://en.wikipedia.org/wiki/Dell | Dell Inc. is an American technology company. It develops, sells, repairs, and supports computers and related products and services. Dell is owned by its parent company, Dell Technologies.
Dell sells personal computers (PCs), servers, data storage devices, network switches, software, computer peripherals, HDTVs, cameras, printers, and electronics built by other manufacturers. The company is known for its supply chain management and electronic commerce, notably its direct sales to customers and its build-to-order delivery of PCs configured to customer specifications. Dell was a pure hardware vendor until 2009, when it acquired Perot Systems and entered the market for IT services. The company has since expanded into storage and networking systems, and is now moving from offering computers only to delivering a full range of technology for enterprise customers.
Dell is a subsidiary of Dell Technologies, Inc., a publicly traded company, as well as a component of the NASDAQ-100 and S&P 500. It is the third-largest personal computer vendor as of January 2021. Dell is ranked 31st on the Fortune 500 list in 2022, up from 76th in 2021. It is also the sixth-largest company in Texas by total revenue, according to Fortune magazine. It is the second-largest non-oil company in Texas.
In 2015, Dell acquired the enterprise technology firm EMC Corporation. Dell and EMC became divisions of Dell Technologies. Dell EMC sells data storage, information security, virtualization, analytics, and cloud computing.
History
Founding and start-up
Michael Dell founded Dell Computer Corporation, doing business as PC's Limited, in 1984 while a student at the University of Texas at Austin, operating from his off-campus dormitory room at Dobie Center. The start-up aimed to sell IBM PC compatible computers built from stock components. Dell started trading in the belief that, by selling personal computer systems directly to customers, PC's Limited could better understand customers' needs and provide the most effective computing solutions to meet those needs. He dropped out of college upon completion of his freshman year at the University of Texas in order to focus full-time on his fledgling business, after getting about $1,000 in expansion capital from his family. As of April 2021, Dell's net worth was estimated to be over $50 billion.
In 1985, the company produced the first computer of its own design, the "Turbo PC", selling for US$795 and containing an Intel 8088-compatible processor capable of running at a maximum speed of 8 MHz. PC's Limited advertised the systems in national computer magazines for sale directly to consumers, and custom assembled each ordered unit according to a selection of options. This offered buyers prices lower than those of retail brands, but with greater convenience than assembling the components themselves. PC's Limited was not the first company to use this business model, but they became one of the first to succeed with it. The company gro |
https://en.wikipedia.org/wiki/Protein%20Data%20Bank | The Protein Data Bank (PDB) is a database for the three-dimensional structural data of large biological molecules, such as proteins and nucleic acids. The data, typically obtained by X-ray crystallography, NMR spectroscopy, or, increasingly, cryo-electron microscopy, and submitted by biologists and biochemists from around the world, are freely accessible on the Internet via the websites of its member organisations (PDBe, PDBj, RCSB, and BMRB). The PDB is overseen by an organization called the Worldwide Protein Data Bank, wwPDB.
The PDB is a key resource in areas of structural biology, such as structural genomics. Most major scientific journals and some funding agencies now require scientists to submit their structure data to the PDB. Many other databases use protein structures deposited in the PDB. For example, SCOP and CATH classify protein structures, while PDBsum provides a graphic overview of PDB entries using information from other sources, such as Gene Ontology.
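For illustration, a minimal sketch of fetching one entry over the web in Python; the files.rcsb.org download URL pattern and the example entry ID 4HHB (hemoglobin) are assumptions for the sketch, not details taken from this article:

```python
# Fetch a PDB entry from the RCSB member site and count its ATOM records,
# each of which carries the 3-D coordinates of one atom.
import urllib.request

pdb_id = "4HHB"  # example entry ID (assumption)
url = f"https://files.rcsb.org/download/{pdb_id}.pdb"  # assumed URL pattern

with urllib.request.urlopen(url) as response:
    structure = response.read().decode()

atom_lines = [line for line in structure.splitlines() if line.startswith("ATOM")]
print(f"{pdb_id}: {len(atom_lines)} ATOM records")
```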
History
Two forces converged to initiate the PDB: a small but growing collection of sets of protein structure data determined by X-ray diffraction; and the newly available (1968) molecular graphics display, the Brookhaven RAster Display (BRAD), to visualize these protein structures in 3-D. In 1969, with the sponsorship of Walter Hamilton at the Brookhaven National Laboratory, Edgar Meyer (Texas A&M University) began to write software to store atomic coordinate files in a common format to make them available for geometric and graphical evaluation. By 1971, one of Meyer's programs, SEARCH, enabled researchers to remotely access information from the database to study protein structures offline. SEARCH was instrumental in enabling networking, thus marking the functional beginning of the PDB.
The Protein Data Bank was announced in October 1971 in Nature New Biology as a joint venture between Cambridge Crystallographic Data Centre, UK and Brookhaven National Laboratory, US.
Upon Hamilton's death in 1973, Tom Koetzle took over direction of the PDB for the subsequent 20 years. In January 1994, Joel Sussman of Israel's Weizmann Institute of Science was appointed head of the PDB. In October 1998, the PDB was transferred to the Research Collaboratory for Structural Bioinformatics (RCSB); the transfer was completed in June 1999. The new director was Helen M. Berman of Rutgers University (one of the managing institutions of the RCSB, the other being the San Diego Supercomputer Center at UC San Diego). In 2003, with the formation of the wwPDB, the PDB became an international organization. The founding members are PDBe (Europe), RCSB (US), and PDBj (Japan). The BMRB joined in 2006. Each of the four members of wwPDB can act as deposition, data processing and distribution centers for PDB data. The data processing refers to the fact that wwPDB staff review and annotate each submitted entry. The data are then automatically checked for plausibility (the source code for this validation software has been mad |
https://en.wikipedia.org/wiki/Meiko%20Scientific | Meiko Scientific Ltd. was a British supercomputer company based in Bristol, founded by members of the design team working on the Inmos transputer microprocessor.
History
In 1985, when Inmos management suggested the release of the transputer be delayed, Miles Chesney, David Alden, Eric Barton, Roy Bottomley, James Cownie, and Gerry Talbot resigned and formed Meiko (Japanese for "well-engineered") to start work on massively parallel machines based on the processor. Nine weeks later in July 1985, they demonstrated a transputer system based on experimental 16-bit transputers at SIGGRAPH in San Francisco.
In 1986, a system based on 32-bit T414 transputers was launched as the Meiko Computing Surface. By 1990, Meiko had sold more than 300 systems and grown to 125 employees. In 1993, Meiko launched the second-generation Meiko CS-2 system, but the company ran into financial difficulties in the mid-1990s. The technical team and technology was transferred to a joint venture company named Quadrics Supercomputers World Ltd. (QSW), formed by Alenia Spazio of Italy in mid-1996. At Quadrics, the CS-2 interconnect technology was developed into QsNet.
A vestigial Meiko website still exists.
Computing Surface
The Meiko Computing Surface (sometimes retrospectively referred to as the CS-1) was a massively parallel supercomputer. The system was based on the Inmos transputer microprocessor, later also using SPARC and Intel i860 processors.
The Computing Surface architecture comprised multiple boards containing transputers connected together by their communications links via Meiko-designed link switch chips. A variety of different boards were produced with different transputer variants, random-access memory (RAM) capacities and peripherals.
The initial software environment provided for the Computing Surface was the Occam Programming System (OPS), Meiko's version of Inmos's D700 Transputer Development System. This was soon superseded by a multi-user version, MultiOPS. Later, Meiko introduced Meiko Multiple Virtual Computing Surfaces (M²VCS), a multi-user resource management system that let the processors of a Computing Surface be partitioned into several domains of different sizes. These domains were allocated by M²VCS to individual users, thus allowing several simultaneous users access to their own virtual Computing Surfaces. M²VCS was used in conjunction with either OPS or MeikOS, a Unix-like single-processor operating system.
In 1988, Meiko launched the In-Sun Computing Surface, which repackaged the Computing Surface into VMEbus boards (designated the MK200 series) suitable for installation in larger Sun-3 or Sun-4 systems. The Sun acted as front-end host system for managing the transputers, running development tools and providing mass storage. A version of M²VCS running as a SunOS daemon named Sun Virtual Computing Surfaces (SVCS) provided access between the transputer network and the Sun host.
As the performance of the transputer became less competitive toward |
https://en.wikipedia.org/wiki/Backyard%20Sports | Backyard Sports (originally branded as Junior Sports) is a video game series released for consoles, computers and mobile devices. The series is best known for starring kid-sized versions of popular professional sports stars, such as Albert Pujols, Paul Pierce, Barry Bonds, Tim Duncan, Clint Mathis, Kevin Garnett, Tom Brady, David Ortiz, Joe Thornton and Andy Macdonald. The Backyard Sports series is licensed by the major professional U.S. sports leagues: Major League Baseball (MLB), the National Basketball Association (NBA), the National Football League (NFL), the National Hockey League (NHL), and Major League Soccer (MLS).
The series includes Backyard Baseball, Backyard Basketball, Backyard Football (American football), Backyard Soccer (association football), Backyard Hockey (ice hockey), and Backyard Skateboarding. In the games, players form a team consisting of Backyard Kids and pro players, which they take through a "Backyard League" season, attempting to become the champions. Gamers can "Create-A-Player", starting in Backyard Football (1999). Another aspect of the games is the use of Power-Ups, allowing players to gain "Super-abilities". For instance, "Super Dunk" allows a basketball player to make a dunk from nearly anywhere on the court, "Leap Frog" allows a football player to jump over the entire defensive line, and "Ice Cream Truck" causes the other team to be distracted for a brief period.
Some of these games are playable with ScummVM.
History
The series began in late 1997 when Humongous Entertainment, owned by GT Interactive, created the first game in the franchise: Backyard Baseball. Later, GT Interactive was purchased by Infogrames. Infogrames allowed Humongous Entertainment to expand the series, and Humongous developed more titles such as Backyard Soccer, Backyard Football, Backyard Basketball, Backyard Hockey, and Backyard Skateboarding. Following the buyout by Infogrames, these titles from the Backyard series were released for game consoles, including the GameCube, Game Boy Advance, PlayStation 2, Xbox 360, and Wii. Infogrames in North America eventually changed its name to Atari Interactive.
In July 2013, private equity firm The Evergreen Group bought the Backyard Sports franchise during the Atari bankruptcy proceedings for its portfolio company Epic Gear LLC. It was later sold by Epic Gear to Day6 Sports Group.
In December 2014, Day6 Sports Group began to relaunch the Backyard Sports series with Backyard Sports NBA Basketball for smartphones and tablets, with Golden State Warriors point guard Stephen Curry as the cover athlete.
In 2016, Day6 Sports Group was acquired by a European investment group.
In April 2019, Humongous Entertainment tweeted an image of the original Junior Sports logo, hinting at a possible re-release of the original games and/or the developer having re-secured the rights to the series proper. However, a week prior, Humongous had replied to a Twitter post saying they did not own the rights to the franchise.
|
https://en.wikipedia.org/wiki/MAME | MAME (formerly an acronym of Multiple Arcade Machine Emulator) is a free and open-source emulator designed to recreate the hardware of arcade game systems in software on modern personal computers and other platforms. Its intention is to preserve gaming history by preventing vintage games from being lost or forgotten. It does this by emulating the inner workings of the emulated arcade machines; the ability to actually play the games is considered "a nice side effect". Joystiq has listed MAME as an application that every Windows and Mac gamer should have.
The first public MAME release was by Nicola Salmoria on 5 February 1997. It now supports over 7,000 unique games and 10,000 actual ROM image sets, though not all of the games are playable. MESS, an emulator for many video game consoles and computer systems, based on the MAME core, was integrated into MAME in 2015.
With OTVDM (WineVDM), a version of MAME is available that emulates 16-bit DOS and Windows applications on x64 and AArch64 versions of Windows; Microsoft's NTVDM is only supported on the 32-bit versions of Windows.
History and overview
The MAME project was started by Italian programmer Nicola Salmoria. It began as a project called Multi-Pac, intended to preserve games in the Pac-Man family, but the name was changed as more games were added to its framework. The first MAME version was released in February 1997. In April 1997, Salmoria stepped down for his national service commitments, handing stewardship of the project to fellow Italian Mirko Buffoni for half a year. In May 2003, David Haywood took over as project coordinator; and from April 2005 to April 2011, the project was coordinated by Aaron Giles; then Angelo Salese stepped in as the coordinator; and in 2012, Miodrag Milanovic took over. The project is supported by hundreds of developers around the world and thousands of outside contributors.
At first, MAME was developed exclusively for MS-DOS, but it was soon ported to Unix-like systems (X/MAME), Macintosh (MacMAME and later MAME OS X) and Windows (MAME32). Since 24 May 2001, with version 0.37b15, MAME's main development has occurred on the Windows platform, and most other platforms are supported through the SDLMAME project, which was integrated into the main development source tree in 2006. MAME has also been ported to other computers, game consoles, mobile phones and PDAs and, at one point, even to digital cameras. In 2012, Google ported MAME to Native Client, which allows MAME to run inside Chrome.
Major releases of MAME occur approximately once a month. Windows executables in both 32-bit and 64-bit fashion are released on the development team's official website, along with the complete source code. Smaller, incremental "u" (for update) releases were released weekly (until version 0.149u1) as source diffs against the most recent major version, to keep code in synchronization among developers. MAME's source code is developed on a public GitHub repository, allowing those with the |
https://en.wikipedia.org/wiki/Filmation | Filmation Associates was an American production company that produced animation and live-action programming for television from 1963 until 1989. Located in Reseda, California, the animation studio was founded in 1962. Filmation's founders and principal producers were Lou Scheimer, Hal Sutherland and Norm Prescott.
Background
Lou Scheimer and Filmation's main director Hal Sutherland met in 1957 while working at Larry Harmon Pictures on the made-for-TV Bozo and Popeye cartoons. Larry Harmon eventually closed the studio by 1961. Scheimer and Sutherland went to work at a small company called True Line, one of whose owners was Marcus Lipsky, who then owned Reddi-wip whipped cream. SIB Productions, a Japanese firm with U.S. offices in Chicago, approached them about producing a cartoon called Rod Rocket. The two agreed to take on the work and also took on a project for Family Films, owned by the Lutheran Church–Missouri Synod, for ten short animated films based on the life of Christ. Paramount Pictures soon purchased SIB Productions, and True Line's staff increased, including the arrival of former radio disc jockey Norm Prescott, who became a partner in the firm. He had already been working on the animated feature Pinocchio in Outer Space, which was primarily produced by Belvision Studios.
History
They eventually left True Line, and Scheimer began working on commercials, including for Gillette and others, which began what became Filmation. He met lawyer Ira Epstein, who had worked for Harmon but had left the firm, and now put together the new corporation with Scheimer and Sutherland. It officially became Filmation Associates as of September 1962, so named because "We were working on film, but doing animation"; putting them together yielded the portmanteau "Filmation".
Both Rod Rocket and the Life of Christ series credited "Filmation Associates" with "Production Design" in addition to Scheimer and Sutherland as directors. (SIB Productions, whose logo bore a resemblance to the original Filmation logo designed by Ted Littlefield, would soon go on to become "Sib-Tower 12 Productions" and produce the first few of Chuck Jones' Tom and Jerry films for MGM, until becoming MGM Animation/Visual Arts for the remainder of the films).
Norm Prescott brought in Filmation's first major project, Journey Back to Oz, an animated sequel to the MGM film The Wizard of Oz (1939). Begun in 1962, storyboarding, voice recording, and most of the music scoring and animation had been completed when financial challenges caused the project to be put on hold for nearly eight years.
In the meantime, the new Filmation studio turned their attention to a more successful medium, network television. For the next few years they made television commercials and some other projects for other companies and made an unsuccessful pilot film for a Marx Brothers cartoon series. They also tried to develop an original series named The Adventures of Stanley Stoutheart (later renamed Yank and D |
https://en.wikipedia.org/wiki/System%20call | In computing, a system call (commonly abbreviated to syscall) is the programmatic way in which a computer program requests a service from the operating system on which it is executed. This may include hardware-related services (for example, accessing a hard disk drive or accessing the device's camera), creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system.
In most systems, system calls can only be made from userspace processes, while in some systems, OS/360 and successors for example, privileged system code also issues system calls.
Privileges
The architecture of most modern processors, with the exception of some embedded systems, involves a security model. For example, the rings model specifies multiple privilege levels under which software may be executed: a program is usually limited to its own address space so that it cannot access or modify other running programs or the operating system itself, and is usually prevented from directly manipulating hardware devices (e.g. the frame buffer or network devices).
However, many applications need access to these components, so system calls are made available by the operating system to provide well-defined, safe implementations for such operations. The operating system executes at the highest level of privilege, and allows applications to request services via system calls, which are often initiated via interrupts. An interrupt automatically puts the CPU into some elevated privilege level and then passes control to the kernel, which determines whether the calling program should be granted the requested service. If the service is granted, the kernel executes a specific set of instructions over which the calling program has no direct control, returns the privilege level to that of the calling program, and then returns control to the calling program.
The library as an intermediary
Generally, systems provide a library or API that sits between normal programs and the operating system. On Unix-like systems, that API is usually part of an implementation of the C library (libc), such as glibc, that provides wrapper functions for the system calls, often named the same as the system calls they invoke. On Windows NT, that API is part of the Native API, in the ntdll.dll library; this is an undocumented API used by implementations of the regular Windows API and directly used by some system programs on Windows. The library's wrapper functions expose an ordinary function calling convention (a subroutine call on the assembly level) for using the system call, as well as making the system call more modular. Here, the primary function of the wrapper is to place all the arguments to be passed to the system call in the appropriate processor registers (and maybe on the call stack as well), and to set a unique system call number for the kernel to call. In this way the libr |
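As a minimal illustration of the wrapper-versus-raw distinction, here is a Python sketch using ctypes; the write syscall number and calling details assume x86-64 Linux and are not taken from this article:

```python
# Call write(2) twice: once through the ordinary libc wrapper, and once
# through libc's generic syscall() entry point, bypassing the named wrapper.
import ctypes

libc = ctypes.CDLL(None, use_errno=True)  # the process's own libc (Linux)

msg = b"hello via system call\n"
SYS_write = 1       # write's syscall number on x86-64 Linux (assumption)
STDOUT_FILENO = 1

# The wrapper marshals the arguments and traps into the kernel for us.
libc.write(STDOUT_FILENO, msg, len(msg))

# syscall() places the call number and arguments where the kernel expects
# them, exactly the job the wrapper function normally performs.
libc.syscall(SYS_write, STDOUT_FILENO, msg, len(msg))
```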
https://en.wikipedia.org/wiki/Plexus | In neuroanatomy, a plexus (from the Latin term for "braid") is a branching network of vessels or nerves. The vessels may be blood vessels (veins, capillaries) or lymphatic vessels. The nerves are typically axons outside the central nervous system.
The standard plural form in English is plexuses. Alternatively, the Latin plural plexūs may be used.
Types
Nerve plexuses
The four primary nerve plexuses are the cervical plexus, brachial plexus, lumbar plexus, and the sacral plexus.
Cardiac plexus
Celiac plexus
Renal plexus
Venous plexus
Choroid plexus
The choroid plexus is a part of the central nervous system in the brain and consists of capillaries, brain ventricles, and ependymal cells.
Invertebrates
The plexus is the characteristic form of nervous system in the coelenterates and persists with modifications in the flatworms. The nerves of the radially symmetric echinoderms also take this form, where a plexus underlies the ectoderm of these animals and deeper in the body other nerve cells form plexuses of limited extent.
See also
Cranial nerve
Spinal nerve
Nerve plexus
Brachial nerve
List of anatomy mnemonics
References
Nervous system |
https://en.wikipedia.org/wiki/Brute-force%20search | In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically checking all possible candidates for whether or not each candidate satisfies the problem's statement.
A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other.
While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions, which in many practical problems tends to grow very quickly as the size of the problem increases (combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than processing speed.
This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute-force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table (namely, check all entries of the latter, sequentially) is called linear search.
Implementing the brute-force search
Basic algorithm
In order to apply brute-force search to a specific class of problems, one must implement four procedures: first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem to be solved, and should do the following:
first (P): generate a first candidate solution for P.
next (P, c): generate the next candidate for P after the current one c.
valid (P, c): check whether candidate c is a solution for P.
output (P, c): use the solution c of P as appropriate to the application.
The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm
c ← first(P)
while c ≠ Λ do
if valid(P,c) then
output(P, c)
c ← next(P, c)
end while
For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n,c) should return c + 1 if c < n, and Λ otherwise; and valid(n,c) should return true if and only if c is a divisor of n. |
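A direct Python transcription of this scheme for the divisor instance follows; the function and variable names are mine, with None standing in for the null candidate Λ:

```python
# Brute-force enumeration of the divisors of n via first/next/valid/output.
def first(n):
    return 1 if n >= 1 else None          # the null candidate is None

def next_candidate(n, c):
    return c + 1 if c < n else None       # no more candidates after n

def valid(n, c):
    return n % c == 0                     # c is a solution iff it divides n

def output(n, c):
    print(c)

n = 12
c = first(n)
while c is not None:
    if valid(n, c):
        output(n, c)
    c = next_candidate(n, c)
# prints 1, 2, 3, 4, 6, 12
```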
https://en.wikipedia.org/wiki/Voice%20of%20America | Voice of America (VOA or VoA) is the state-owned news network and international radio broadcaster of the United States of America; it has been characterized as a propaganda outlet. It is the largest and oldest of the U.S.-funded international broadcasters. VOA produces digital, TV, and radio content in 48 languages, which it distributes to affiliate stations around the world. Its primary target audience is non-American.
VOA was established in 1942, and the VOA charter (Public Laws 94-350 and 103–415) was signed into law in 1976 by President Gerald Ford.
VOA is headquartered in Washington, D.C., and overseen by the U.S. Agency for Global Media (USAGM), an independent agency of the U.S. government. Funds are appropriated annually under the budget for embassies and consulates. As of 2022, VOA has a weekly worldwide audience of approximately 326 million (up from 236.6 million in 2016) and employs 961 staff, with an annual budget of $252 million.
Voice of America is seen by some listeners as having a positive impact while others see it as American propaganda; it also serves US diplomacy.
Current languages
The Voice of America website had five English-language broadcasts as of 2014 (worldwide, Learning English, Cambodia, Zimbabwe and Tibet). Additionally, the VOA website has versions in 47 foreign languages.
Afan Oromo
Albanian
Amharic
Armenian
Azerbaijani
Bambara
Bangla
Bosnian
Burmese
Cantonese
Mandarin
Dari Persian
French
Georgian
Haitian Creole
Hausa
Indonesian
Khmer
Kinyarwanda
Korean
Kurdish
Lao
Lingala
Macedonian
Pashto
Persian
Portuguese
Russian
Sango
Serbian
Shona
Somali
Spanish
Swahili
Thai
Tibetan
Tigrinya
Turkish
Ukrainian
Urdu
Uzbek
Vietnamese
Wolof
English
The number of languages varies according to the priorities of the United States government and the world situation.
History
American private shortwave broadcasting before World War II
Before World War II, all American shortwave stations were in private hands. Privately controlled shortwave networks included the National Broadcasting Company's International Network (or White Network), which broadcast in six languages, the Columbia Broadcasting System's Latin American international network, which consisted of 64 stations located in 18 countries, the Crosley Broadcasting Corporation in Cincinnati, Ohio, and General Electric which owned and operated WGEO and WGEA, both based in Schenectady, New York, and KGEI in San Francisco, all of which had shortwave transmitters. Experimental programming began in the 1930s, but there were fewer than 12 transmitters in operation. In 1939, the Federal Communications Commission set the following policy:
A licensee of an international broadcast station shall render only an international broadcast service which will reflect the culture of this country and which will promote international goodwill, understanding and cooperation. |
https://en.wikipedia.org/wiki/Hull%20River%20National%20Park | Hull River is a national park in Queensland (Australia), 1275 km northwest of Brisbane. GIS mapping data from Queensland Department of Natural Resources (2002) showed an area of 3,240 hectares, of which about 2,100 hectares are estuarine mangroves, with the remainder being swamp forests dominated by Melaleuca and specialist Eucalypt species.
Rainfall averages 3,600 mm per year. The park is part of the Coastal Wet Tropics Important Bird Area, identified as such by BirdLife International because of its importance for the conservation of lowland tropical rainforest birds.
The former Hull River Aboriginal Settlement was located in this park.
Hull River is a habitat for 267 species of animals and 522 species of plants. The average elevation of the terrain is 32 meters.
See also
Protected areas of Queensland
References
External links
Dave Kimble's Rainforest Photo Catalog 2400+ images taken at Hull River wetland locations. Mostly taxonomy of rainforest plants, also fungi, cassowaries, other birds, insects, spiders, ecosystems. Links to photo essays.
National parks of Far North Queensland
Protected areas established in 1968
1968 establishments in Australia
Important Bird Areas of Queensland |
https://en.wikipedia.org/wiki/Automata%20theory | Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science with close connections to mathematical logic. The word automata comes from the Greek word αὐτόματος, which means "self-acting, self-willed, self-moving". An automaton (plural: automata) is an abstract self-propelled computing device which follows a predetermined sequence of operations automatically. An automaton with a finite number of states is called a Finite Automaton (FA) or Finite-State Machine (FSM). A finite-state machine, a well-known type of automaton, consists of states (commonly drawn as circles) and transitions (drawn as arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function, which takes the previous state and current input symbol as its arguments.
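As a small concrete sketch of such a transition function in Python (the state names and the even-parity language are illustrative, not from this article):

```python
# A finite-state machine that accepts binary strings containing an even
# number of 1s. The transition function maps (state, symbol) -> next state.
transitions = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run(word, state="even"):
    for symbol in word:
        state = transitions[(state, symbol)]  # one jump per input symbol
    return state

print(run("1011"))  # "odd"  (three 1s seen)
print(run("1001"))  # "even" (two 1s seen)
```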
Automata theory is closely related to formal language theory. In this context, automata are used as finite representations of formal languages that may be infinite. Automata are often classified by the class of formal languages they can recognize, as in the Chomsky hierarchy, which describes a nesting relationship between major classes of automata. Automata play a major role in the theory of computation, compiler construction, artificial intelligence, parsing and formal verification.
History
The theory of abstract automata was developed in the mid-20th century in connection with finite automata. Automata theory was initially considered a branch of mathematical systems theory, studying the behavior of discrete-parameter systems. Early work in automata theory differed from previous work on systems by using abstract algebra to describe information systems rather than differential calculus to describe material systems. The theory of the finite-state transducer was developed under different names by different research communities. The earlier concept of Turing machine was also included in the discipline along with new forms of infinite-state automata, such as pushdown automata.
1956 saw the publication of Automata Studies, which collected work by scientists including Claude Shannon, W. Ross Ashby, John von Neumann, Marvin Minsky, Edward F. Moore, and Stephen Cole Kleene. With the publication of this volume, "automata theory emerged as a relatively autonomous discipline". The book included Kleene's description of the set of regular events, or regular languages, and a relatively stable measure of complexity in Turing machine programs by Shannon.
In the same year, Noam Chomsky described the Chomsky hierarchy, a correspondence between automata and formal grammars, and Ross Ashby published An Introduction to Cybernetics, an accessible textbook explaining automata and information using basic set theory.
The study of linear bounded automata led to the Myhill–Nerode theorem. |
https://en.wikipedia.org/wiki/Tail%20%28disambiguation%29 | A tail is the section at the rear end of an animal's body, a distinct, flexible appendage to the torso.
Tail or tails may also refer to:
Science and technology
Tails (operating system) or The Amnesic Incognito Live System, a Linux distribution designed for anonymity and privacy
Tail (Unix), a Unix program used to display the last few lines of a file
Tail, one of the extreme ends of a probability density function
Terminal amine isotopic labeling of substrates
Poly-A tail, part of a mature mRNA
Entertainment
Tails (album), an album by Lisa Loeb
Tails (Sonic the Hedgehog), a character in the Sonic the Hedgehog video games and comics
Tail, a character in the Kaiketsu Zorori movie
Other uses
Tails, the reverse side of a coin
Tailcoat or its rear section, a type of coat/suit used for evening dress
Tail, the final batsmen in the batting order for cricket
Fee tail or tail, an obsolescent term in common law
Jabot (window), a kind of soft window treatment
See also
Aircraft tail, the empennage of an aircraft
Comet tail, a visible part of a comet
Tail recursion, a type of recursion in computer programming
Tail rotor, a small vertical propeller mounted at the rear of a helicopter
Tailing (disambiguation) |
https://en.wikipedia.org/wiki/Radio%20programming | Radio programming is the process of organising a schedule of radio content for commercial broadcasting and public broadcasting by radio stations.
History
The original inventors of radio, from Guglielmo Marconi's time on, expected it to be used for one-on-one wireless communication tasks where telephones and telegraphs could not be used because of the problems involved in stringing copper wires from one point to another, such as in ship-to-shore communications. Those inventors had no expectation whatever that radio would become a major mass-media entertainment and information medium earning many millions of dollars in revenues annually through radio advertising commercials or sponsorship. These latter uses were brought about after 1920 by business entrepreneurs such as David Sarnoff, who created the National Broadcasting Company (NBC), and William S. Paley, who built the Columbia Broadcasting System (CBS). These broadcasting (as opposed to narrowcasting) business organizations began to be called networks, because they consisted of loose chains of individual affiliate stations located in various cities, all transmitting the standard overall-system supplied fare, often at synchronized agreed-upon times. Some of these radio network stations were owned and operated by the networks, while others were independent radio stations owned by entrepreneurs allied with the respective networks. By selling blocks of time to advertisers, the medium was able to quickly become profitable and offer its products to listeners for free, provided they invested in a radio receiver set.
The new medium had grown rapidly through the 1930s, vastly increasing both the size of its audience and its profits. In those early days, it was customary for a corporation to sponsor an entire half-hour radio program, placing its commercials at the beginning and the end. This is in contrast to the pattern which developed late in the 20th century in both television and radio, where small slices of time were sold to many sponsors and no corporation claimed or wanted sponsorship of the entire show, except in rare cases. These later commercials also filled a much larger portion of the total program time than they had in the earlier days.
In the early radio age, content typically included a balance of comedy, drama, news, music and sports reporting. Variety radio programs included the most famous Hollywood talent of the day. During the 1920s, radio focused on musical entertainment; the Grand Ole Opry, for example, has been broadcasting country music since it began in 1925. Radio soap operas began in the U.S. in 1930 with Painted Dreams. Lørdagsbarnetimen, a Norwegian children's show that premiered in 1924 and was interrupted only by the Second World War, was the longest-running radio show in the world until it ceased production in 2010.
In the early 1950s, television programming eroded the popularity of radio comedy, drama and variety shows. By the late 1950s, radio broadcasting took on much the form it ha |
https://en.wikipedia.org/wiki/Power%20Mac%20G4%20Cube | The Power Mac G4 Cube is a Mac personal computer sold by Apple Computer, Inc. between July 2000 and 2001. The Cube was conceived by Apple chief executive officer (CEO) Steve Jobs (who held an interest in a powerful, miniaturized desktop computer) and designed by Jonathan Ive. Apple's designers developed new technologies and manufacturing methods for the product—a cubic computer housed in clear acrylic glass. Apple positioned the Cube in the middle of its product range, between the consumer iMac G3 and the professional Power Mac G4. The Cube was announced to the general public at the Macworld Expo on July 19, 2000.
The Cube won awards and plaudits for its design upon release, but reviews noted the high cost of the machine compared to its power, its limited expandability, and cosmetic defects. The product was an immediate commercial failure, selling only 150,000 units before production was suspended within a year of its announcement. The Cube was one of the rare failures for the company under Jobs, after a successful period that brought the company back from the brink of bankruptcy. However, it ultimately proved influential to future Apple products, from the iPod to the Mac Mini. The Museum of Modern Art, located in New York City, holds a G4 Cube as part of its collection.
Overview
The Power Mac G4 Cube is a small cubic computer, suspended in an acrylic glass enclosure. The designers intended the transparent plastic to give the impression that the computer is floating. The enclosure houses the computer's vital functions, including a slot-loading optical disc drive. The Cube requires a separate monitor with either an Apple Display Connector (ADC) or a Video Graphics Array (VGA) connection. The machine has no fan to move air and heat through the case. Instead, it is passively cooled, with heat dissipated via a grille at the top of the case. The base model shipped with a 450 MHz PowerPC G4 processor, 64 MB of random-access memory (RAM), a 20 GB hard drive, and an ATI Rage 128 Pro video card. A higher-end model with a 500 MHz processor, double the RAM, and a 30 GB hard drive was available only through Apple's online store.
To fit the components of a personal computer in the case's confined space, the Cube does not feature expansion slots; it does have a video card in a standard Accelerated Graphics Port (AGP) slot, but cannot fit a full-length card. The power supply is located externally to save space, and the Cube features no input or outputs for audio on the machine itself. Instead, the Cube shipped with round Harman Kardon speakers and digital amplifier, attached to the computer via Universal Serial Bus (USB). Despite its size, the Cube fits three RAM slots, two FireWire 400 ports, and two USB 1.1 ports for connecting peripherals in its frame. These ports and the power cable are located on the underside of the machine. Access to the machine's internal components is accomplished by inverting the unit and using a pop-out handle to slide the entire |
https://en.wikipedia.org/wiki/Sprinter%20%28computer%29 | The Sprinter (also called Peters Plus Sprinter or PPS) is a microcomputer made by the Russian firm Peters Plus, Ltd. It was the last ZX Spectrum clone produced in a factory.
It was built using what the company called a "Flex architecture", with an Altera PLD as part of the core logic. This allows the machine's hardware to be reconfigured on the fly, either for compatibility with different ZX Spectrum models or for its own enhanced native mode (set by default on boot and running the Estex operating system). This design is comparable to that of Jeri Ellsworth's C-One reprogrammable computer.
Specifications
The computer is built on a standard computer tower configuration, using standard floppy discs, CD-ROM and hard disk drives.
CPU: Z84C15 at 21 MHz or 3.5 MHz, Altera PLD
Video output: SECAM TV or CGA monitor
Graphic modes: 320 x 256 with 256 colors, 640 x 256 with 16 colors, text mode 80 x 32 with 16 colors, 16 million color palette, 256/512 Kb video RAM
Sound: Beeper, AY-3-8910, 16-bit DAC
IDE & FDD onboard controllers
Two ISA-8 slots
References
External links
Ivan Mak's website
Sprinter unofficial site
Home computer remakes
ZX Spectrum clones
Microcomputers |
https://en.wikipedia.org/wiki/Heinz%20von%20Foerster | Heinz von Foerster (November 13, 1911 – October 2, 2002) was an Austrian-American scientist combining physics and philosophy, widely credited as the originator of second-order cybernetics. He was twice a Guggenheim fellow (1956–57 and 1963–64) and was also a fellow of the American Association for the Advancement of Science, 1980. He is well known for his 1960 Doomsday equation formula, published in Science, predicting future population growth.
As a polymath, he wrote nearly two hundred professional papers, gaining renown in fields from computer science and artificial intelligence to epistemology; as a physicist he researched high-speed electronics and electro-optic switching devices, and in biophysics, the study of memory and knowledge. He worked on cognition based on neurophysiology, mathematics, and philosophy and was called "one of the most consequential thinkers in the history of cybernetics". He came to the United States and stayed after meeting Warren Sturgis McCulloch; he received funding from the Pentagon to establish the Biological Computer Laboratory, which built the first parallel computer, the Numa-Rete. Working with William Ross Ashby, one of the original Ratio Club members, and together with Warren McCulloch, Norbert Wiener, John von Neumann and Lawrence J. Fogel, Heinz von Foerster was an architect of cybernetics and one of the members of the Macy conferences, eventually becoming editor of its early proceedings alongside Hans-Lukas Teuber and Margaret Mead.
Biography
Von Foerster was born in 1911 in Vienna, Austria-Hungary, as Heinz von Förster. His paternal grandfather was the Austrian architect Emil von Förster. His maternal grandmother was Marie Lang, an Austrian feminist, theosophist and publisher. He studied physics at the Technical University of Vienna and at the University of Breslau, where in 1944 he received a PhD in physics. His relatives included Ludwig Wittgenstein, Erwin Lang and Hugo von Hofmannsthal. Ludwig Förster was his great-grandfather. His Jewish roots did not cause him much trouble while he worked in radar laboratories during the Nazi era, as "he hid his ancestry with the help of an employer who chose not to press him for documents on his family."
He moved to the US in 1949, and worked at the University of Illinois at Urbana–Champaign, where he was a professor of electrical engineering from 1951 to 1975. He also was professor of biophysics (1962–1975) and Director of the Biological Computer Laboratory (1958–1975). Additionally, in 1956–57 and 1963–64 he was a Guggenheim Fellow and also President of the Wenner-Gren-Foundation for anthropological research from 1963 to 1965.
He knew well and was in conversation with John von Neumann, Norbert Wiener, Humberto Maturana, Francisco Varela, Gordon Pask, Gregory Bateson, Lawrence J. Fogel and Margaret Mead, among many others. He influenced generations of students as a teacher and inclusive, enthusiastic collaborator.
He died on October 2, 2002, in Pescadero, California. |
https://en.wikipedia.org/wiki/Deadlock | In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization.
In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process remains indefinitely unable to change its state because resources requested by it are being used by another process that itself is waiting, then the system is said to be in a deadlock.
In a communications system, deadlocks occur mainly due to loss or corruption of signals rather than contention for resources.
Individually necessary and jointly sufficient conditions for deadlock
A deadlock situation on a resource can arise only if all of the following conditions occur simultaneously in a system:
Mutual exclusion: at least one resource must be held in a non-shareable mode; that is, only one process can use the resource at any given instant. Otherwise, processes would not be prevented from using the resource when necessary.
Hold and wait or resource holding: a process is currently holding at least one resource and requesting additional resources which are being held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it.
Circular wait: each process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting for a resource held by P1.
These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr.
While these conditions are sufficient to produce a deadlock on single-instance resource systems, they only indicate the possibility of deadlock on systems having multiple instances of resources.
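The circular-wait condition is straightforward to reproduce. The following is a minimal sketch in Python (thread and lock names are illustrative only): each thread holds one lock and then requests the lock held by the other, so all four conditions hold and neither thread can ever proceed.
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, name):
    with first:              # mutual exclusion + hold and wait: keep one lock...
        time.sleep(0.1)      # ...long enough for the other thread to take its lock
        print(name, "waiting for its second lock")
        with second:         # circular wait: each thread blocks on the other's lock
            print(name, "acquired both locks")   # never reached once deadlocked

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "T1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "T2"))
t1.start()
t2.start()
Imposing a global lock ordering (for example, requiring both threads to acquire lock_a before lock_b) breaks the circular-wait condition and, with it, the possibility of deadlock.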
Deadlock handling
Most current operating systems cannot prevent deadlocks. When a deadlock occurs, different operating systems respond to it in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one. Major approaches are as follows.
Ignoring deadlock
In this approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm. This approach was initial |
https://en.wikipedia.org/wiki/Old%20Salt%20Route | The Old Salt Route was a medieval trade route in Northern Germany, one of the ancient network of salt roads which were used primarily for the transport of salt and other staples. In Germany it was referred to as Alte Salzstraße.
Salt was very valuable and essential at that time; it was sometimes referred to as "white gold." The vast majority of the salt transported on the road was produced from brine near Lüneburg, a city in the northern central part of the country and then transported to Lübeck, a major seaport on Germany's Baltic Sea coast.
History
Historians generally recognize the Old Salt Route as part of a much longer path, which functioned as an important connection between the northern and southern reaches of the country. One of the oldest documents that confirms Lüneburg and its role in refining and transporting salt dates from 956 A.D. According to that document, King Otto I the Great granted the St. Michaelis Monastery in Lüneburg the customs revenue from the saltworks. Even at those early times, the city's wealth was based in large part on the salt found in the area. The Old Salt Route attained its peak of success between the 12th and the 16th century.
The trade route led from Lüneburg northward to Lübeck. From that port city, most of the salt was shipped to numerous destinations that also lie on the Baltic Sea, including Falsterbo, which boasted a Scania Market. There it was used for the preservation of herring, an immensely important food in the Middle Ages, as well as for other foods. The salt trade was a major reason for the power of Lübeck and the Hanseatic League.
Transport of salt
Horse-drawn carts brought the salt from Lüneburg to a crossing of the Elbe river at Artlenburg (near Lauenburg) and from there, via Mölln, to Lübeck. For the most part, however, the historic trade route was composed of unsurfaced, sandy and often muddy roads through heathland, woods and small villages, making the transport of salt an arduous task. In addition, the route was somewhat dangerous, since the valuable cargo attracted thieves, bandits and marauders. The dangers faced by those who made the long trek, and the fact that only relatively small quantities of the precious crystalline substance could be carried on any single journey, made moving salt via overland routes very expensive.
In 1398, though, the Stecknitz Canal, one of the first manmade waterways in Europe, was completed, making it possible to transport much more salt in a single shipment and to do so with much greater ease and safety. That change helped merchants satisfy the salt requirements of an ever-growing demand. In the 16th century, for example, about 19,000 tons of the product were carried from Lüneburg to Lübeck each year either by land or water. However, it still took about twenty days to complete each trip.
Tourism
In modern times, a trip along the Salt Road promises a rich blend of nature and culture. The trip can be made on foot or on bicycle and part of the distance |
https://en.wikipedia.org/wiki/Generic%20programming | Generic programming is a style of computer programming in which algorithms are written in terms of data types to-be-specified-later that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by the ML programming language in 1973, permits writing common functions or types that differ only in the set of types on which they operate when used, thus reducing duplicate code.
Generics were introduced to mainstream programming with Ada in 1977; with templates in C++, they later became part of the repertoire of professional library design. The techniques were further improved, and parameterized types were introduced, in the influential 1994 book Design Patterns.
New techniques were introduced by Andrei Alexandrescu in his 2001 book, Modern C++ Design: Generic Programming and Design Patterns Applied. Subsequently, D implemented the same ideas.
Such software entities are known as generics in Ada, C#, Delphi, Eiffel, F#, Java, Nim, Python, Go, Rust, Swift, TypeScript, and Visual Basic .NET. They are known as parametric polymorphism in ML, Scala, Julia, and Haskell (Haskell terminology also uses the term "generic" for a related but somewhat different concept).
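As a minimal illustration (in Python, whose typing module provides parametric type variables; the function name here is invented for the example), a single definition serves any element type T:
from typing import Sequence, TypeVar

T = TypeVar("T")    # a type to be specified later, at each call site

def first_or_default(items: Sequence[T], default: T) -> T:
    """Return the first element of a sequence, or the default if it is empty."""
    return items[0] if items else default

print(first_or_default([1, 2, 3], 0))       # instantiated with T = int: prints 1
print(first_or_default([], "fallback"))     # instantiated with T = str: prints fallback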
The term "generic programming" was originally coined by David Musser and Alexander Stepanov in a more specific sense than the above, to describe a programming paradigm whereby fundamental requirements on data types are abstracted from across concrete examples of algorithms and data structures and formalized as concepts, with generic functions implemented in terms of these concepts, typically using language genericity mechanisms as described above.
Stepanov–Musser and other generic programming paradigms
Generic programming is defined in as follows,
The "generic programming" paradigm is an approach to software decomposition whereby fundamental requirements on types are abstracted from across concrete examples of algorithms and data structures and formalized as concepts, analogously to the abstraction of algebraic theories in abstract algebra. Early examples of this programming approach were implemented in Scheme and Ada, although the best known example is the Standard Template Library (STL), which developed a theory of iterators that is used to decouple sequence data structures and the algorithms operating on them.
For example, given N sequence data structures, e.g. singly linked list, vector etc., and M algorithms to operate on them, e.g. find, sort etc., a direct approach would implement each algorithm specifically for each data structure, giving N × M combinations to implement. However, in the generic programming approach, each data structure returns a model of an iterator concept (a simple value type that can be dereferenced to retrieve the current value, or changed to point to another value in the sequence) and each algorithm is instead written generically with arguments of such iterators, e.g. a pair of iterators pointing to the beginning and end
https://en.wikipedia.org/wiki/International%20mobile%20subscriber%20identity | The international mobile subscriber identity (IMSI) is a number that uniquely identifies every user of a cellular network. It is stored as a field and is sent by the mobile device to the network. It is also used for acquiring other details of the mobile in the home location register (HLR) or as locally copied in the visitor location register. To prevent eavesdroppers from identifying and tracking the subscriber on the radio interface, the IMSI is sent as rarely as possible and a randomly-generated TMSI is sent instead.
The IMSI is used in any mobile network that interconnects with other networks. For GSM, UMTS and LTE networks, this number was provisioned in the SIM card and for cdmaOne and CDMA2000 networks, in the phone directly or in the R-UIM card (the CDMA equivalent of the SIM card). Both cards have been superseded by the UICC.
An IMSI is usually presented as a 15-digit number but can be shorter. For example, MTN South Africa's old IMSIs that are still in use in the market are 14 digits long. The first 3 digits represent the mobile country code (MCC), which is followed by the mobile network code (MNC), either 2-digit (European standard) or 3-digit (North American standard). The length of the MNC depends on the value of the MCC, and it is recommended that the length is uniform within a MCC area. The remaining digits are the mobile subscription identification number (MSIN) within the network's customer base, usually 9 to 10 digits long, depending on the length of the MNC.
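A minimal sketch of that field layout in Python; the sample IMSI and the caller-supplied MNC length are assumptions, since a real decoder must look the MNC length up in the E.212 allocation tables:
def split_imsi(imsi: str, mnc_length: int) -> dict:
    """Split an IMSI string into its MCC, MNC and MSIN fields."""
    assert imsi.isdigit() and len(imsi) <= 15
    return {
        "mcc": imsi[:3],                  # mobile country code: always 3 digits
        "mnc": imsi[3:3 + mnc_length],    # 2 digits (European) or 3 (North American)
        "msin": imsi[3 + mnc_length:],    # subscriber number within the operator
    }

print(split_imsi("310150123456789", mnc_length=3))
# {'mcc': '310', 'mnc': '150', 'msin': '123456789'}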
The IMSI conforms to the ITU E.212 numbering standard.
IMSIs can sometimes be mistaken for the ICCID (E.118), which is the identifier for the physical SIM card itself (or now the virtual SIM card if it is an eSIM). The IMSI lives as part of the profile (or one of several profiles if the SIM and operator support multi-IMSI SIMs) on the SIM/ICCID.
Examples of IMSI numeric presentation
IMSI analysis
IMSI analysis is the process of examining a subscriber's IMSI to identify the network the IMSI belongs to, and whether subscribers from that network may use a given network (if they are not local subscribers, this requires a roaming agreement).
If the subscriber is not from the provider's network, the IMSI must be converted to a Global Title, which can then be used for accessing the subscriber's data in the remote HLR. This is mainly important for international mobile roaming. Outside North America, the IMSI is converted to the Mobile Global Title (MGT) format, standard E.214, which is similar to an E.164 number. E.214 provides a method to convert the IMSI into a number that can be used for routing to international SS7 switches. E.214 can be interpreted as implying that there are two separate stages of conversion; first determine the MCC and convert to E.164 country calling code then determine MNC and convert to national network code for the carrier's network. But this process is not used in practice and the GSM numbering authority has clearly stated that a one-sta |
https://en.wikipedia.org/wiki/XOR%20%28disambiguation%29 | XOR may mean:
Exclusive or (logic)
XOR cipher, an encryption algorithm
XOR gate
bitwise XOR, an operator used in computer programming
XOR (video game)
XOR, an x86 instruction
Xor DDoS
See also
Exor (disambiguation) |
https://en.wikipedia.org/wiki/Lex%20%28software%29 | Lex is a computer program that generates lexical analyzers ("scanners" or "lexers").
Lex is commonly used with the yacc parser generator. Lex, originally written by Mike Lesk and Eric Schmidt and described in 1975, is the standard lexical analyzer generator on many Unix systems, and an equivalent tool is specified as part of the POSIX standard.
Lex reads an input stream specifying the lexical analyzer and writes source code which implements the lexical analyzer in the C programming language.
In addition to C, some old versions of Lex could generate a lexer in Ratfor.
Open source
Although originally distributed as proprietary software, some versions of Lex are now open-source. Open-source versions of Lex, based on the original proprietary code, are now distributed with open-source operating systems such as OpenSolaris and Plan 9 from Bell Labs. One popular open-source version of Lex, called flex, or the "fast lexical analyzer", is not derived from proprietary coding.
Structure of a Lex file
The structure of a Lex file is intentionally similar to that of a yacc file: files are divided into three sections, separated by lines that contain only two percent signs, as follows:
The definitions section defines macros and imports header files written in C. It is also possible to write any C code here, which will be copied verbatim into the generated source file.
The rules section associates regular expression patterns with C statements. When the lexer sees text in the input matching a given pattern, it will execute the associated C code.
The C code section contains C statements and functions that are copied verbatim to the generated source file. These statements presumably contain code called by the rules in the rules section. In large programs it is more convenient to place this code in a separate file linked in at compile time.
Example of a Lex file
The following is an example Lex file for the flex version of Lex. It recognizes strings of numbers (positive integers) in the input, and simply prints them out.
/*** Definition section ***/
%{
/* C code to be copied verbatim */
#include <stdio.h>
%}
%%
/*** Rules section ***/
/* [0-9]+ matches a string of one or more digits */
[0-9]+ {
/* yytext is a string containing the matched text. */
printf("Saw an integer: %s\n", yytext);
}
.|\n { /* Ignore all other characters. */ }
%%
/*** C Code section ***/
int main(void)
{
/* Call the lexer, then quit. */
yylex();
return 0;
}
If this input is given to flex, it will be converted into a C file, lex.yy.c. This can be compiled into an executable which matches and outputs strings of integers. For example, given the input:
abc123z.!&*2gj6
the program will print:
Saw an integer: 123
Saw an integer: 2
Saw an integer: 6
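For reference, the scanner above is typically built and run as follows (the file name count.l and the executable name are assumptions; the fl library supplies a default yywrap):
flex count.l
cc lex.yy.c -o count -lfl
./count < input.txt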
Using Lex with other programming tools
Using Lex with parser generators
Lex and parser generators, such as Yacc or Bison, are commonly used together. Parser generators use a formal gramm |
https://en.wikipedia.org/wiki/Joint%20Tactical%20Information%20Distribution%20System | The Joint Tactical Information Distribution System (JTIDS) is an L band Distributed Time Division Multiple Access (DTDMA) network radio system used by the United States armed forces and their allies to support data communications needs, principally in the air and missile defense community. It produces a spread spectrum signal using Frequency-shift keying (FSK) and Phase-shift keying (PSK) to spread the radiated power over a wider spectrum (range of frequencies) than normal radio transmissions. This reduces susceptibility to noise, jamming, and interception. In JTIDS Time Division Multiple Access (TDMA) (similar to cell phone technology), each time interval (e.g., 1 second) is divided into time slots (e.g. 128 per second). Together, all 1536 time slots in a 12-second interval are called a "frame". Each time slot is "bursted" (transmitted) at several different carrier frequencies sequentially. Within each slot, the phase angle of the transmission burst is varied to provide PSK. Each type of data to be transmitted is assigned a slot or block of slots (channel) to manage information exchanges among user participation groups. In traditional TDMA, the slot frequencies remain fixed from second to second (frame to frame). In JTIDS TDMA, the slot frequencies and/or slot assignments for each channel do not remain fixed from frame to frame but are varied in a pseudo-random manner. The slot assignments, frequencies, and information are all encrypted to provide computer-to-computer connectivity in support of every type of military platform to include Air Force fighters and Navy submarines.
The full development of JTIDS commenced in 1981 when a contract was placed with Singer-Kearfott (later GEC-Marconi Electronic Systems, now BAE Systems E&IS). Fielding proceeded slowly throughout the late 1980s and early 1990s with rapid expansion (following 9/11) in preparation for Operation Enduring Freedom (Afghanistan) and Operation Iraqi Freedom. Development is now carried out by Data Link Solutions, a joint BAE/Rockwell Collins company, ViaSat, and the MIDS International consortium.
About
JTIDS is one of the family of radio equipment implementing what is called Link 16. Link 16, a highly survivable radio communications design meant to meet the most stringent requirements of modern combat, provides reliable Situational Awareness (SA) for fast-moving forces. Link 16 equipment has proven, in detailed field demonstrations as well as in the AWACS and JSTARS deployment in Desert Storm, the capability of basic Link 16 to exchange user data at 115 kbit/s, error-correction-coded. (Compare this to typical tactical systems at 16 kbit/s, which also have to accommodate overheads in excess of 50% to supply the same transmission reliability.)
While principally a data network, Link 16 radios can provide high quality voice channels and navigation services as accurate as any in the inventory. Every Link 16 user can identify itself to other similarly equipped platforms at ranges well beyon |
https://en.wikipedia.org/wiki/KLIA%20Ekspres |
The ERL KLIA Ekspres is an express airport rail link servicing the Kuala Lumpur International Airport (KLIA) in Malaysia. It runs from KL Sentral, the main railway station of Kuala Lumpur, to KLIA as well as its low-cost terminal, klia2. The line is one of the two services on the Express Rail Link (ERL) system, sharing the same tracks as the KLIA Transit. The KLIA Transit stops at all stations along the line, whereas the KLIA Ekspres runs as a direct non-stop express service between KL Sentral and KLIA/klia2. The line is operated by Express Rail Link Sdn. Bhd. (ERL).
The line is one of the components of the Klang Valley Integrated Transit System. It is numbered 6 and coloured purple on official transit maps.
Line information
KLIA Ekspres serves three stations. The service runs non-stop from KL Sentral to KLIA and klia2, skipping the three KLIA Transit stops in between.
At KL Sentral, the two platforms of the ERL are accessed from different parts of the station building. The KLIA Ekspres side platforms are accessed from the KL City Air Terminal (KL CAT) while the KLIA Transit island platform is accessed from the main Transit Concourse at Level 1. At KLIA T1 and T2, both KLIA Ekspres and KLIA Transit share the same island platform for both north-bound and south-bound trains.
At KLIA Terminal 1 station, KLIA Ekspres uses the same platform for Terminal 2- or city-bound trains. Displays are installed at the platform to indicate the travelling direction of the approaching train.
Extension
An extension to the new terminal was completed in 2013. Commercial service began on 1 May 2014, when klia2 opened. Inter-terminal travel time from the KLIA Main Terminal to the new terminal is 3 minutes, with a fare of RM2.
Rolling stock
History
Accidents
On 24 August 2010, Express Rail Link suffered its first reported accident, in which 3 passengers were injured. Two ERL trains collided at Kuala Lumpur Sentral: one was about to depart at 9:45 pm for Kuala Lumpur International Airport when the other train, which was empty, rammed into its rear.
Suspension in 2020 to 2022
On 4 April 2020, due to the Malaysian movement control order, which resulted in a significant reduction in ridership, all ERL rail services were temporarily suspended. Limited ERL services recommenced on 4 May 2020 with KLIA Transit service patterns.
Return to service
Full service resumed on 3 January 2023. For August of that year, an increase in frequency to one train every 20 minutes, every day, was announced.
Operations
Timetable
The KLIA Ekspres service officially began operations on 14 April 2002 connecting Kuala Lumpur with the Kuala Lumpur International Airport. The non-stop 57-kilometer journey takes around 28 minutes with trains departing at 15-minute intervals (until 2020) during peak hours and 20- |
https://en.wikipedia.org/wiki/Computational%20physics | Computational physics is the study and implementation of numerical analysis to solve problems in physics. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline (or offshoot) of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics — an area of study which supplements both theory and experiment.
Overview
In physics, different theories based on mathematical models provide very precise predictions on how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression, or is too complicated. In such cases, numerical approximations are required. Computational physics is the subject that deals with these numerical approximations: the approximation of the solution is written as a finite (and typically large) number of simple mathematical operations (algorithm), and a computer is used to perform these operations and compute an approximated solution and respective error.
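As a minimal sketch in Python (the decay example is chosen purely for illustration), Euler's method replaces the differential equation dy/dt = f(t, y) with a finite number of simple algebraic steps, exactly the kind of approximation described above:
def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for dy/dt = f(t, y) with y(t0) = y0, using Euler steps."""
    h = (t1 - t0) / steps        # step size; a smaller h gives a smaller error
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)         # follow the local slope for one small step
        t += h
    return y

# Exponential decay dy/dt = -y has exact solution e**(-t), about 0.36788 at t = 1.
print(euler(lambda t, y: -y, y0=1.0, t0=0.0, t1=1.0, steps=1000))   # about 0.36770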
Status in physics
There is a debate about the status of computation within the scientific method. Sometimes it is regarded as more akin to theoretical physics; some regard computer simulation as "computer experiments"; still others consider it an intermediate or different branch between theoretical and experimental physics, a third way that supplements theory and experiment. While computers can be used in experiments for the measurement and recording (and storage) of data, this clearly does not constitute a computational approach.
Challenges in computational physics
Computational physics problems are in general very difficult to solve exactly. This is due to several (mathematical) reasons: lack of algebraic and/or analytic solvability, complexity, and chaos. For example, even apparently simple problems, such as calculating the wavefunction of an electron orbiting an atom in a strong electric field (Stark effect), may require great effort to formulate a practical algorithm (if one can be found); other cruder or brute-force techniques, such as graphical methods or root finding, may be required. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity for many-body problems (and their classical counterparts) tend to grow quickly. A macroscopic system typically has a size of the order of 10^23 constituent particles, so it is somewhat of a problem. Solving quantum mechanical problems is generally of exponential order in the size of the system, and for the classical N-body problem it is of order N-squared. Finally, many physical systems are inherently nonlinear at best, and at
https://en.wikipedia.org/wiki/Library%20%28computing%29 | In computer science, a library is a collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values or type specifications. In IBM's OS/360 and its successors they are referred to as partitioned data sets.
A library is also a collection of implementations of behavior, written in terms of a language, that has a well-defined interface by which the behavior is invoked. For instance, people who want to write a higher-level program can use a library to make system calls instead of implementing those system calls over and over again. In addition, the behavior is provided for reuse by multiple independent programs. A program invokes the library-provided behavior via a mechanism of the language. For example, in a simple imperative language such as C, the behavior in a library is invoked by using C's normal function-call. What distinguishes the call as being to a library function, versus being to another function in the same program, is the way that the code is organized in the system.
Library code is organized in such a way that it can be used by multiple programs that have no connection to each other, while code that is part of a program is organized to be used only within that one program. This distinction can gain a hierarchical notion when a program grows large, such as a multi-million-line program. In that case, there may be internal libraries that are reused by independent sub-portions of the large program. The distinguishing feature is that a library is organized for the purposes of being reused by independent programs or sub-programs, and the user only needs to know the interface and not the internal details of the library.
The value of a library lies in the reuse of standardized program elements. When a program invokes a library, it gains the behavior implemented inside that library without having to implement that behavior itself. Libraries encourage the sharing of code in a modular fashion and ease the distribution of the code.
The behavior implemented by a library can be connected to the invoking program at different program lifecycle phases. If the code of the library is accessed during the build of the invoking program, then the library is called a static library. An alternative is to build the executable of the invoking program and distribute that, independently of the library implementation. The library behavior is connected after the executable has been invoked to be executed, either as part of the process of starting the execution, or in the middle of execution. In this case the library is called a dynamic library (loaded at runtime). A dynamic library can be loaded and linked when preparing a program for execution, by the linker. Alternatively, in the middle of execution, an application may explicitly request that a module be loaded.
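A minimal sketch (in Python, on an assumed Unix-like system with a C math library installed) of the second case: explicitly loading a dynamic library in the middle of execution and invoking behavior through its interface.
import ctypes
import ctypes.util

# Locate and load the C math library at runtime (a dynamic library).
path = ctypes.util.find_library("m")      # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(path)

# Declare the signature of the library-provided function, then invoke it.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(2.0))                     # 1.4142135623730951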
Most compiled languages have a stan |
https://en.wikipedia.org/wiki/Disjoint | Disjoint may refer to:
Disjoint sets, sets with no common elements
Mutual exclusivity, the impossibility of a pair of propositions both being true
See also
Disjoint union
Disjoint-set data structure |
https://en.wikipedia.org/wiki/Chart%20parser | In computer science, a chart parser is a type of parser suitable for ambiguous grammars (including grammars of natural languages). It uses the dynamic programming approach—partial hypothesized results are stored in a structure called a chart and can be re-used. This eliminates backtracking and prevents a combinatorial explosion.
Chart parsing is generally credited to Martin Kay.
Types of chart parsers
A common approach is to use a variant of the Viterbi algorithm. The Earley parser is a type of chart parser mainly used for parsing in computational linguistics, named for its inventor. Another chart parsing algorithm is the Cocke-Younger-Kasami (CYK) algorithm.
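A minimal sketch in Python of the chart idea behind CYK recognition, with a toy grammar invented for the example (assumed to be in Chomsky normal form): partial results, i.e. which nonterminals derive which spans, are stored in the chart and re-used instead of being recomputed.
def cyk(words, lexical, binary, start="S"):
    """CYK recognizer. `lexical` maps a word to the nonterminals deriving it;
    `binary` maps a pair of nonterminals to the nonterminals deriving them.
    The chart maps each span (i, j) to the set of nonterminals covering it."""
    n = len(words)
    chart = {(i, i + 1): set(lexical.get(w, ())) for i, w in enumerate(words)}
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            cell = set()
            for k in range(i + 1, j):     # each split point re-uses smaller spans
                for b in chart[(i, k)]:
                    for c in chart[(k, j)]:
                        cell |= binary.get((b, c), set())
            chart[(i, j)] = cell
    return start in chart[(0, n)]

lexical = {"she": {"NP"}, "eats": {"V"}, "fish": {"V", "NP"}}
binary = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cyk("she eats fish".split(), lexical, binary))    # True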
Chart parsers can also be used for parsing computer languages. Earley parsers in particular have been used in compiler-compilers, where their ability to parse using arbitrary context-free grammars eases the task of writing the grammar for a particular language. However, their lower efficiency has led people to avoid them for most compiler work.
In bidirectional chart parsing, edges of the chart are marked with a direction, either forwards or backwards, and rules are enforced on the direction in which edges must point in order to be combined into further edges.
In incremental chart parsing, the chart is constructed incrementally as the text is edited by the user, with each change to the text resulting in the minimal possible corresponding change to the chart.
Chart parsers are distinguished between top-down and bottom-up, as well as active and passive.
See also
Brute-force search
References
External links
Bottom-up Chart parsing Web Implementation
Natural language parsing
Parsing algorithms |
https://en.wikipedia.org/wiki/Cyberjaya | Cyberjaya (a portmanteau of cyber and Putrajaya) is a city with a science park as its core that forms a key part of the Multimedia Super Corridor in Malaysia. It is located in Sepang District, Selangor. Cyberjaya is adjacent to, and developed along with Putrajaya, Malaysia's government seat. This city aspires to be known as the Silicon Valley of Malaysia.
The official opening ceremony for Cyberjaya was held on 17 May 1997 by the Prime Minister, Mahathir bin Mohamad.
Many multinational companies and data centres are located in the city.
History
Until 1975, what is today Cyberjaya, Putrajaya and Dengkil were under the administration of Hulu Langat (Kajang) district. On the site of today's Cyberjaya once stood an estate, Prang Besar (Great War).
The idea of an IT-themed city, Cyberjaya, arose out of a study by management consultancy McKinsey for the Multimedia Super Corridor commissioned by the Federal Government of Malaysia in 1995. The implementation agency was the Town & Country Planning Department of the Ministry of Housing and Local Government. The catalyst was the agreement by NTT in 1996 to site an R&D centre to the west of the new Malaysian administration centre, Putrajaya.
Multimedia Development Corporation (then known as MDC), the agency overseeing the implementation of the MSC, was located in Cyberjaya to oversee the city's creation. The real estate implementation was privatised to Cyberview Sdn Bhd (Cyberview) in early 1997. At the time, Cyberview was set up as a joint venture comprising entities such as Setia Haruman Sdn Bhd (SHSB), Nippon Telephone and Telegraph (NTT), Golden Hope, MDeC, Permodalan Nasional Berhad (PNB) and Kumpulan Darul Ehsan Berhad (KDEB), representative of the Selangor Government. SHSB, a consortium comprising Renong, Landmarks, MKLand and Country Heights, was asked to take the lead regarding the development. Federal government-linked companies Telekom Malaysia and Tenaga Nasional were conscripted to provide the telecommunication and power supply infrastructure. The ambitious plan was to develop the first phase, comprising 1,430 hectares, by 2006, with the remaining 1,460 hectares to be developed after the year 2011. The engineering management consultant, Pengurusan Lebuhraya Bhd (now acquired by Opus International Malaysia), was appointed to manage the construction of utilities and infrastructure, overseeing the major construction firms of Peremba and United Engineers Malaysia (UEM).
However, due to the late 1997 Asian Financial Crisis, the undertaking was deemed no longer viable and necessitated the Government taking over of the 55% and 15% stake in Cyberview shares held by SHSB and NTT respectively via the Ministry of Finance Inc. (MOF Inc.). The transaction gave MOF Inc a 70% stake, and Cyberview has remained a government-owned company ever since. Cyberview then entered into an agreement with SHSB with shareholders comprising Country Heights Holdings Berhad (CHHB), Landmarks, Menara Embun (an MKLand Controlled C |
https://en.wikipedia.org/wiki/Forest%20City%2C%20Florida | Forest City is a census-designated place and an area in Seminole County, Florida, United States. Its historic center is now in the City of Altamonte Springs. Data in this article deals only with the unincorporated section. The population was 12,612 at the 2000 census. It is part of the Orlando–Kissimmee Metropolitan Statistical Area.
Geography
According to the United States Census Bureau, the CDP has a total area of , of which is land and (13.21%) is water.
Demographics
As of the census of 2000, there were 12,612 people, 4,777 households, and 3,363 families residing in the CDP. The population density was . There were 4,976 housing units at an average density of . The racial makeup of the CDP was 85.32% White, 4.86% African American, 0.25% Native American, 3.40% Asian, 0.04% Pacific Islander, 3.64% from other races, and 2.49% from two or more races. Hispanic or Latino of any race were 15.57% of the population.
There were 4,777 households, out of which 35.4% had children under the age of 18 living with them, 56.2% were married couples living together, 10.5% had a female householder with no husband present, and 29.6% were non-families. 23.7% of all households were made up of individuals, and 5.6% had someone living alone who was 65 years of age or older. The average household size was 2.62 and the average family size was 3.12.
In the CDP, the population was spread out, with 25.5% under the age of 18, 7.4% from 18 to 24, 33.3% from 25 to 44, 22.8% from 45 to 64, and 11.0% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 96.5 males. For every 100 females age 18 and over, there were 93.6 males.
The median income for a household in the CDP was $50,191, and the median income for a family was $55,109. Males had a median income of $40,669 versus $30,259 for females. The per capita income for the CDP was $24,464. About 4.2% of families and 6.0% of the population were below the poverty line, including 8.3% of those under age 18 and 9.2% of those age 65 or over.
References
External links
Seminole County Convention and Visitors Bureau
Census-designated places in Seminole County, Florida
Greater Orlando
Census-designated places in Florida |
https://en.wikipedia.org/wiki/Odell%2C%20Illinois | Odell is a village in Livingston County, Illinois, United States. The population was 1,046 at the 2010 census.
Media
In October 2006, Odell was featured on the USA Food Network's "Riding Old Route 66", which visited the Standard Oil station.
Geography
Odell is in northern Livingston County, in the northern part of Odell Township. Interstate 55 passes north and west of the village, with access from Exit 209. I-55 leads northeast to Dwight and to downtown Chicago, while to the southwest it leads to Pontiac, the Livingston county seat, and to Bloomington. Historic US 66 passes through the northwest side of the village, on an older bypass than I-55.
According to the 2010 census, Odell has a total area of , of which (or 98.58%) is land and (or 1.42%) is water.
History
Founding
Odell was laid out by Sydney S. Morgan (1823 – 1884) and Henry A. Gardner (1816 – 1875) on August 10, 1856. Both men were railroad engineers who had worked on the survey and construction of what soon became the Chicago and Alton Railroad. Morgan at first divided his time between Joliet and Odell, but soon settled permanently in Odell, where he became the town's chief promoter. Gardner was born in Berkshire County, Massachusetts, and had begun his railroad career working as a rodman on an extension of the Great Western Railroad in Massachusetts. He rose quickly through the ranks until he became chief engineer of the Mohawk and Hudson Railroad. Gardner came west in 1853 to work as assistant engineer to Oliver H. Lee on the Chicago and Mississippi Railroad. He purchased land near Dwight and later went on to become Chief Engineer on the Michigan Central Railroad. Gardner was never a resident of Odell.
The town was platted when it became clear that the railroad would pass through Morgan's and Gardner's land. The railroad was originally known as the Chicago and Mississippi, but quickly became the Chicago, Alton and St. Louis, and then the Chicago and Alton. An excursion train ran through the town on July 4, 1854, and regular service began in August 1854. Before the coming of the railroad, the land which became Odell Township was completely unsettled. Between 1852 and 1855 almost all of the land in the township was entered, and farms were rapidly developed. The land on which the town would soon be erected had been first purchased from the government by James C. Spencer and Henry A. Gardner on May 4, 1853. Through a series of quick transactions, Spencer sold his land to William H. Odell who then transferred it to Sydney S. Morgan.
Original design
The town was surveyed by Thomas F. Norton, deputy surveyor of Livingston County. The railroad had been granted a swath of land extending diagonally through the town. This presented a problem in town design, which was solved at Odell by aligning the entire original town with the tracks. A similar problem was presented by several towns along this railroad. Unlike the Toledo, Peoria and Western Railroad, built |
https://en.wikipedia.org/wiki/Macintosh%20Plus | The Macintosh Plus computer is the third model in the Macintosh line, introduced on January 16, 1986, two years after the original Macintosh and a little more than a year after the Macintosh 512K, with a price tag of US$2,599. As an evolutionary improvement over the 512K, it shipped with 1 MB of RAM standard, expandable to 4 MB, and an external SCSI peripheral bus, among smaller improvements. Originally, the computer's case was the same beige color as the original Macintosh, Pantone 453; however, in 1987, the case color was changed to the long-lived, warm gray "Platinum" color. It is the earliest Macintosh model able to run System Software 5, System 6, and System 7, up to System 7.5.5, but not System 7.5.2.
Overview
Bruce Webster of BYTE reported a rumor in December 1985: "Supposedly, Apple will be releasing a Big Mac by the time this column sees print: said Mac will reportedly come with 1 megabyte of RAM ... the new 128K-byte ROM ... and a double-sided (800K bytes) disk drive, all in the standard Mac box". Introduced as the Macintosh Plus, it was the first Macintosh model to include a SCSI port, which launched the popularity of external SCSI devices for Macs, including hard disks, tape drives, CD-ROM drives, printers, Zip drives, and even monitors. The SCSI implementation of the Plus was engineered shortly before the initial SCSI spec was finalized and, as such, is not 100% SCSI-compliant. SCSI ports remained standard equipment for all Macs until the introduction of the iMac in 1998.
The Macintosh Plus was the last classic Mac to have an RJ11 port on the front of the unit for the keyboard, as well as the DE-9 connector for the mouse; models released after the Macintosh Plus would use ADB ports.
The Mac Plus was the first Apple computer to utilize user-upgradable SIMM memory modules instead of single DIP DRAM chips. Four SIMM slots were provided and the computer shipped with four 256 KB SIMMs, for 1 MB total RAM. By replacing them with 1 MB SIMMs, it was possible to have 4 MB of RAM. (Although 30-pin SIMMs could support up to 16 MB total RAM, the Mac Plus motherboard had only 22 address lines connected, for a 4 MB maximum.)
It has what was then a new 3½-inch double-sided 800 KB floppy drive, offering double the capacity of floppy disks from previous Macs, along with backward compatibility. The drive is controlled by the same IWM chip as in previous models, implementing variable-speed GCR. The drive was still completely incompatible with PC drives. The 800 KB drive has two read/write heads, enabling it to simultaneously use both sides of the floppy disk and thereby double storage capacity. Like the 400 KB drive before it, a companion Macintosh 800K External Drive was an available option. However, with the increased disk storage capacity combined with 2-4x the available RAM, the external drive was less of a necessity than it had been with the 128K and 512K.
The Mac Plus has 128 KB of ROM on the motherboard, which is double the amount of ROM in |
https://en.wikipedia.org/wiki/Technical%20analysis | In finance, technical analysis is an analysis methodology for analysing and forecasting the direction of prices through the study of past market data, primarily price and volume. As a type of active management, it stands in contradiction to much of modern portfolio theory. The efficacy of technical analysis is disputed by the efficient-market hypothesis, which states that stock market prices are essentially unpredictable, and research on whether technical analysis offers any benefit has produced mixed results. It is distinguished from fundamental analysis, which considers a company's financial statements, health, and the overall state of the market and economy.
History
The principles of technical analysis are derived from hundreds of years of financial market data. Some aspects of technical analysis began to appear in Amsterdam-based merchant Joseph de la Vega's accounts of the Dutch financial markets in the 17th century. In Asia, technical analysis is said to be a method developed by Homma Munehisa during the early 18th century which evolved into the use of candlestick techniques, and is today a technical analysis charting tool.
Journalist Charles Dow (1851-1902) compiled and closely analyzed American stock market data, and published some of his conclusions in editorials for The Wall Street Journal. He believed patterns and business cycles could possibly be found in this data, a concept later known as "Dow theory". However, Dow himself never advocated using his ideas as a stock trading strategy.
In the 1920s and 1930s, Richard W. Schabacker published several books which continued the work of Charles Dow and William Peter Hamilton in their books Stock Market Theory and Practice and Technical Market Analysis. In 1948, Robert D. Edwards and John Magee published Technical Analysis of Stock Trends which is widely considered to be one of the seminal works of the discipline. It is exclusively concerned with trend analysis and chart patterns and remains in use to the present. Early technical analysis was almost exclusively the analysis of charts because the processing power of computers was not available for the modern degree of statistical analysis. Charles Dow reportedly originated a form of point and figure chart analysis. With the emergence of behavioral finance as a separate discipline in economics, Paul V. Azzopardi combined technical analysis with behavioral finance and coined the term "Behavioral Technical Analysis".
Other pioneers of analysis techniques include Ralph Nelson Elliott, William Delbert Gann, and Richard Wyckoff who developed their respective techniques in the early 20th century.
General description
Fundamental analysts examine earnings, dividends, assets, quality, ratios, new products, research and the like. Technicians employ many methods, tools and techniques as well, one of which is the use of charts. Using charts, technical analysts seek to identify price patterns and market trends in financial markets and attempt to ex |
https://en.wikipedia.org/wiki/Maxtor | Maxtor was an American computer hard disk drive manufacturer. Founded in 1982, it was the third largest hard disk drive manufacturer in the world before being purchased by Seagate in 2006.
History
Overview
In 1981, three former IBM employees began searching for funding, and Maxtor was founded the following year. In 1983, Maxtor shipped its first product, the Maxtor XT-1140. In 1985, Maxtor filed its initial public offering and started trading on the New York Stock Exchange as "MXO." Maxtor bought hard drive manufacturer MiniScribe in 1990. Maxtor was getting close to bankruptcy in 1992 and closed its engineering operations in San Jose, California, in 1993. In 1996, Maxtor introduced its DiamondMax line of hard drives with DSP-based architecture. In 2000, Maxtor acquired Quantum's hard drive division, which gave Maxtor the ATA/133 hard drive interface and helped Maxtor revive its server hard drive market. In 2006, Maxtor was acquired by Seagate.
Early financing
The Maxtor founders, James McCoy, Jack Swartz, and Raymond Niedzwiecki—graduates of the San Jose State University School of Engineering and former employees of IBM—began the search for funding in 1981. In early 1982, B.J. Cassin and Chuck Hazel (Bay Partners) provided the initial $3 million funding and the company officially began operations on July 1, 1982. In February 1983, it shipped its first product to Convergent Technology and immediately received an additional $5.5 million in its second round of funding. The company also began negotiations with the EDB (Economic Development Board) of Singapore for favorable terms before committing to Singapore as its offshore manufacturing location. The DBS (Development Bank of Singapore) agreed to provide financing to help grow the company in Singapore. In 1983, the company established a liaison and procurement office in Tokyo, headed by Tatsuya Yamamoto.
Maxtor's product architecture used eight disks; 15 surfaces recorded data and the final surface was where the servo track information was located. The company developed its own spindle motor, which was fitted within the casting containing the disks. This was a major departure as the spindle motor was usually mounted external to the disks. The first product was designed to provide 190 MB of storage, but delays in getting magnetic heads to the Maxtor design resulted in the company taking what was available, and the first drive—the XT-1140—was shipped with a capacity of only 140 MB. The company received an additional round of financing of approximately $37 million in 1984 before going public in 1985, with Goldman Sachs as the prime underwriter.
MiniScribe acquisition
In 1990, Maxtor entered the mass market with its purchase of the assets (but not the liabilities) of bankrupt MiniScribe in Longmont, Colorado. The transition was a tough one as the early products of this union (notably the 7120AT 3.5-inch 120 MB drive) had many quality and design problems. Later products managed to sell well des |
https://en.wikipedia.org/wiki/Autodesk%20Maya | Autodesk Maya, commonly shortened to just Maya ( ), is a 3D computer graphics application that runs on Windows, macOS and Linux, originally developed by Alias and currently owned and developed by Autodesk. It is used to create assets for interactive 3D applications (including video games), animated films, TV series, and visual effects.
History
Maya was originally an animation product based on code from The Advanced Visualizer by Wavefront Technologies, Thomson Digital Image (TDI) Explore, PowerAnimator by Alias, and Alias Sketch!. The IRIX-based projects were combined and animation features were added; the project codename was Maya. Walt Disney Feature Animation collaborated closely with Maya's development during its production of Dinosaur. Disney requested that the user interface of the application be customizable to allow for a personalized workflow. This was a particular influence in the open architecture of Maya, and partly responsible for its popularity in the animation industry.
After Silicon Graphics Inc. acquired both Alias and Wavefront Technologies, Inc., Wavefront's technology (then under development) was merged into Maya. SGI's acquisition was a response to Microsoft Corporation acquiring Softimage 3D. The new wholly owned subsidiary was named "AliasWavefront".
In the early days of development Maya started with Tcl as the scripting language, in order to leverage its similarity to a Unix shell language, but after the merger with Wavefront it was replaced with Maya Embedded Language (MEL). Sophia, the scripting language in Wavefront's Dynamation, was chosen as the basis of MEL.
Maya 1.0 was released in February 1998. Following a series of acquisitions, Maya was bought by Autodesk in 2005. Under the name of the new parent company, Maya was renamed Autodesk Maya. However, the name "Maya" continues to be the dominant name used for the product.
Overview
Maya is an application used to generate 3D assets for use in film, television, games, and commercials. The software was initially released for the IRIX operating system. However, this support was discontinued in August 2006 after the release of version 6.5. Maya was available in both "Complete" and "Unlimited" editions until August 2008, when it was turned into a single suite.
Users define a virtual workspace (scene) to implement and edit media of a particular project. Scenes can be saved in a variety of formats, the default being .mb (Maya Binary). Maya exposes a node graph architecture. Scene elements are node-based, each node having its own attributes and customization. As a result, the visual representation of a scene is based entirely on a network of interconnecting nodes, depending on each other's information. For the convenience of viewing these networks, there is a dependency graph and a directed acyclic graph.
Nowadays, the 3D models can be imported to game engines such as Unreal Engine and Unity.
Industry usage
The widespread use of Maya in the film industry is usually associated with |
https://en.wikipedia.org/wiki/Intrusion%20detection%20system | An intrusion detection system (IDS; also intrusion prevention system or IPS) is a device or software application that monitors a network or systems for malicious activity or policy violations. Any intrusion activity or violation is typically reported either to an administrator or collected centrally using a security information and event management (SIEM) system. A SIEM system combines outputs from multiple sources and uses alarm filtering techniques to distinguish malicious activity from false alarms.
IDS types range in scope from single computers to large networks. The most common classifications are network intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS). A system that monitors important operating system files is an example of an HIDS, while a system that analyzes incoming network traffic is an example of an NIDS. It is also possible to classify IDS by detection approach. The most well-known variants are signature-based detection (recognizing bad patterns, such as malware) and anomaly-based detection (detecting deviations from a model of "good" traffic, which often relies on machine learning). Another common variant is reputation-based detection (recognizing the potential threat according to the reputation scores). Some IDS products have the ability to respond to detected intrusions. Systems with response capabilities are typically referred to as an intrusion prevention system. Intrusion detection systems can also serve specific purposes by augmenting them with custom tools, such as using a honeypot to attract and characterize malicious traffic.
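A minimal sketch in Python of the signature-based idea (the signatures and events below are invented for the example; real rule sets, such as those used by Snort, are far richer): known bad patterns are matched against observed events, and any match raises an alert.
import re

SIGNATURES = {
    "SQL injection attempt": re.compile(r"union\s+select", re.IGNORECASE),
    "Path traversal attempt": re.compile(r"\.\./"),
}

def inspect(event: str):
    """Return the names of all signatures matching one observed event."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(event)]

for event in ["GET /index.html", "GET /item?id=1 UNION SELECT password FROM users"]:
    alerts = inspect(event)
    if alerts:
        print("ALERT", alerts, "in:", event)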
Comparison with firewalls
Although they both relate to network security, an IDS differs from a firewall in that a traditional network firewall (distinct from a Next-Generation Firewall) uses a static set of rules to permit or deny network connections. It implicitly prevents intrusions, assuming an appropriate set of rules have been defined. Essentially, firewalls limit access between networks to prevent intrusion and do not signal an attack from inside the network. An IDS describes a suspected intrusion once it has taken place and signals an alarm. An IDS also watches for attacks that originate from within a system. This is traditionally achieved by examining network communications, identifying heuristics and patterns (often known as signatures) of common computer attacks, and taking action to alert operators. A system that terminates connections is called an intrusion prevention system, and performs access control like an application layer firewall.
Intrusion detection category
IDS can be classified by where detection takes place (network or host) or the detection method that is employed (signature or anomaly-based).
Analyzed activity
Network intrusion detection systems
Network intrusion detection systems (NIDS) are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. It performs an analysis of |
https://en.wikipedia.org/wiki/Scrollbar | A scrollbar is an interaction technique or widget in which continuous text, pictures, or any other content can be scrolled in a predetermined direction (up, down, left, or right) on a computer display, window, or viewport so that all of the content can be viewed, even if only a fraction of the content can be seen on a device's screen at one time. It offers a solution to the problem of navigation to a known or unknown location within a two-dimensional information space. It was also known as a handle in the very first GUIs. They are present in a wide range of electronic devices including computers, graphing calculators, mobile phones, and portable media players. The user interacts with the scrollbar elements using some method of direct action, the scrollbar translates that action into scrolling commands, and the user receives feedback through a visual updating of both the scrollbar elements and the scrolled content.
Although scrollbar designs differ throughout their history, they usually appear on one or two sides of the viewing area as long rectangular areas containing a bar (or thumb) that can be dragged along a trough (or track) to move the body of the document. This can be placed vertically, horizontally, or both in the window depending on which direction the content extends past its boundaries. Two arrows are often included on either end of the thumb or trough for more precise adjustments. The "thumb" has different names in different environments: in Mac OS X 10.4 it is called a "scroller"; on the Java platform it is called "thumb" or "knob"; Microsoft's .NET documentation refers to it as "scroll box" or "scroll thumb"; in other environments it is called "elevator", "quint", "puck", "wiper" or "grip"; in certain environments where browsers use language agnostic to scrollbar terminology, the thumb is referred to as the "pea" for vertical movement of the bar, while "puck" is still used for horizontal movement.
Additional functions may be found, such as zooming in/out or various application-specific tools. Depending on the GUI, the size of the thumb can be fixed or variable; in the latter case of proportional thumbs, its length indicates the size of the window in relation to the size of the whole document, indicated by the full track. While proportional thumbs were available in several GUIs, including GEM, AmigaOS and PC/GEOS, even in the mid 1980s, Microsoft did not implement them until Windows 95. A proportional thumb that completely fills the trough indicates that the entire document is being viewed, at which point the scrollbar may temporarily become hidden. The proportional thumb can also sometimes be adjusted by dragging its ends, such as in Sony Vegas, a non-linear video editing software package. In this case it adjusts both the position and the zooming of the document, where the size of the thumb represents the degree of zooming applied.
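The proportional-thumb geometry reduces to a single ratio; a minimal sketch in Python (measuring the track in pixels and the document in lines is an assumption made for the example):
def thumb_metrics(track_px, viewport_lines, document_lines, first_visible_line):
    """Length and offset of a proportional scrollbar thumb, in track pixels."""
    visible_fraction = min(1.0, viewport_lines / document_lines)
    length = track_px * visible_fraction   # thumb fills the track when all is visible
    max_first = max(1, document_lines - viewport_lines)
    offset = (track_px - length) * (first_visible_line / max_first)
    return round(length), round(offset)

# A 400-line document seen through a 40-line window, scrolled halfway down:
print(thumb_metrics(track_px=400, viewport_lines=40, document_lines=400,
                    first_visible_line=180))   # (40, 180)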
A scrollbar should be distinguished from a slider which is another visually sim |
https://en.wikipedia.org/wiki/Sahlgrenska%20University%20Hospital | The Sahlgrenska University Hospital (Swedish: Sahlgrenska Universitetssjukhuset) is a hospital network associated with the Sahlgrenska Academy at the University of Gothenburg in Gothenburg, Sweden. With 17,000 employees the hospital is the largest hospital in Sweden by a considerable margin, and the second largest hospital in Europe. It has 2,000 beds distributed across three campuses in Sahlgrenska, Östra, and Mölndal. It provides emergency and basic care for the 700,000 inhabitants of the Göteborg region and offers highly specialised care for the 1.7 million inhabitants of West Sweden. It is named after philanthropist Niclas Sahlgren.
History
Sahlgrenska University Hospital was formed in 1997 by the merger of three hospitals: Sahlgrenska Hospital, Östra Hospital, and Mölndal Hospital. The Sahlgrenska University Hospital has been operated by the Västra Götaland Regional Council since its formation in 1999.
The Sahlgrenska Academy
Sahlgrenska Academy is the University of Gothenburg's faculty of education and research in health sciences. It operates in close conjunction with the university hospital. The academy was formed on 1 July 2001 by combining the three previous faculties for medicine, odontology and health sciences. Within the academy is the Sahlgrenska Cancer Center, focusing on translational oncology research. The center is a joint effort between the Sahlgrenska Academy at the University of Gothenburg and the Sahlgrenska University Hospital. The long-term goal of the center is to improve the care of cancer patients by facilitating new scientific discoveries and translating these into clinical practice.
Educational programs are available in biomedicine, dietetics, medicine, nursing, medical specialist training, dentistry, and medical physics. With Sahlgrenska Academy's focus, the University of Gothenburg is ranked 33rd and 40th worldwide for clinical medicine and biomedical sciences respectively in the subject ranking by the Academic Ranking of World Universities (ARWU Shanghai, 2018).
In the 2017 Webometrics hospital-specific ranking, the Sahlgrenska University Hospital placed 1st in Sweden, 10th in Europe, and 41st worldwide.
Hospitals
Sahlgrenska Hospital
Sahlgrenska Hospital is the oldest and largest hospital in the network. It was founded in 1782 on Sillgatan (now Postgatan) in Gothenburg with a donation by Niclas Sahlgren. In 1823, it moved to the Oterdahl House, today a museum of medical history. In 1855, it moved again, to a building (now named Sociala Huset) at Carolus Dux on Västra Hamngatan, and was named the Allmänna and Sahlgrenska Hospital. In 1900, it moved to its present premises in Änggården, and in 1936 it was renamed the Sahlgrenska Hospital.
On 24 June 2009, a new facility with 312 beds was officially opened. The new facility was intended to enable rebuilding and renovation of the older facilities at Sahlgrenska. The facility also features a nephrology centre, dialysis unit, transplantation centre, stroke unit, haematology unit, and wards for medi |
https://en.wikipedia.org/wiki/List%20of%20Special%20Areas%20of%20Conservation%20in%20Northern%20Ireland | Special Areas of Conservation in Northern Ireland are part of the European Union's Natura 2000 network of sites with special flora or fauna.
Northern Ireland has 54 SACs:
See also
Special Area of Conservation
Special Protection Area
References
Northern Ireland coast and countryside
Special Areas of Conservation in Northern Ireland |
https://en.wikipedia.org/wiki/Constant%20folding | Constant folding and constant propagation are related compiler optimizations used by many modern compilers. An advanced form of constant propagation known as sparse conditional constant propagation can more accurately propagate constants and simultaneously remove dead code.
Constant folding
Constant folding is the process of recognizing and evaluating constant expressions at compile time rather than computing them at runtime. Terms in constant expressions are typically simple literals, such as the integer literal 2, but they may also be variables whose values are known at compile time. Consider the statement:
i = 320 * 200 * 32;
Most compilers would not actually generate two multiply instructions and a store for this statement. Instead, they identify constructs such as these and substitute the computed values at compile time (in this case, 2,048,000).
Constant folding can make use of arithmetic identities. If x is numeric, the value of 0 * x is zero even if the compiler does not know the value of x (note that this is not valid for IEEE floating point, since x could be Infinity or NaN; still, some environments that favor performance, such as GLSL shader compilers, allow this folding for such expressions, which can occasionally cause bugs).
Constant folding may apply to more than just numbers. Concatenation of string literals and constant strings can be constant folded. Code such as "abc" + "def" may be replaced with "abcdef".
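As an illustrative sketch only (not any particular compiler's implementation), a bottom-up folding pass over a tiny expression tree could look like this in C:

#include <stdlib.h>

/* Illustrative AST: an integer literal or a binary operation. */
typedef struct Expr {
    enum { LIT, ADD, MUL } kind;
    int value;               /* meaningful when kind == LIT */
    struct Expr *lhs, *rhs;  /* meaningful for ADD and MUL  */
} Expr;

/* Fold constant subexpressions bottom-up and return the simplified tree. */
Expr *fold(Expr *e) {
    if (e->kind == LIT)
        return e;
    e->lhs = fold(e->lhs);   /* fold children first */
    e->rhs = fold(e->rhs);
    if (e->lhs->kind == LIT && e->rhs->kind == LIT) {
        int result = e->kind == ADD ? e->lhs->value + e->rhs->value
                                    : e->lhs->value * e->rhs->value;
        free(e->lhs);
        free(e->rhs);
        e->kind = LIT;       /* replace the node with a single literal */
        e->value = result;
        e->lhs = e->rhs = NULL;
    }
    return e;
}

For i = 320 * 200 * 32, the tree MUL(MUL(320, 200), 32) folds to the single literal 2048000 before any code is generated.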
Constant folding and cross compilation
In implementing a cross compiler, care must be taken to ensure that the behaviour of the arithmetic operations on the host architecture matches that on the target architecture, as otherwise enabling constant folding will change the behaviour of the program. This is of particular importance in the case of floating point operations, whose precise implementation may vary widely.
Constant propagation
Constant propagation is the process of substituting the values of known constants in expressions at compile time. Such constants include those defined above, as well as intrinsic functions applied to constant values. Consider the following pseudocode:
int x = 14;
int y = 7 - x / 2;
return y * (28 / x + 2);
Propagating x yields:
int x = 14;
int y = 7 - 14 / 2;
return y * (28 / 14 + 2);
Continuing to propagate and fold yields the following (which would likely be further optimized by dead-code elimination of both x and y):
int x = 14;
int y = 0;
return 0;
Constant propagation is implemented in compilers using reaching definition analysis results. If all of a variable's reaching definitions are the same assignment, and that assignment assigns the same constant to the variable, then the variable has a constant value and can be replaced with that constant.
Constant propagation can also cause conditional branches to simplify to one or more unconditional statements, when the conditional expression can be evaluated as true or false at compile time, so that only one outcome is possible.
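A small before-and-after illustration in C (log_stats and run are hypothetical helpers invented for this example): once the condition's value is known at compile time, the branch and its dead arm disappear.

#include <stdio.h>

static void log_stats(void) { printf("stats\n"); } /* hypothetical helper */
static void run(void)       { printf("run\n");   } /* hypothetical helper */

int main(void) {
    const int debug = 0;   /* known constant, so it propagates into the test */
    if (debug) {           /* folds to if (0): the branch is eliminated      */
        log_stats();       /* dead code, removable by dead-code elimination  */
    }
    run();                 /* the only call left after optimization          */
    return 0;
}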
The optimizations in action
Constant f |
https://en.wikipedia.org/wiki/Lists%20of%20programming%20languages | There are thousands of programming languages. These are listed in various ways:
Lists of language lists |
https://en.wikipedia.org/wiki/Graphics%20card | A graphics card (also called a video card, display card, graphics adapter, VGA card/VGA, video adapter, display adapter, or colloquially GPU) is a computer expansion card that generates a feed of graphics output to a display device such as a monitor. Graphics cards are sometimes called discrete or dedicated graphics cards to emphasize their distinction from integrated graphics processors on the motherboard or the CPU. A graphics processing unit (GPU) that performs the necessary computations is the main component in a graphics card, but the acronym "GPU" is sometimes also used, erroneously, to refer to the graphics card as a whole.
Most graphics cards are not limited to simple display output. The graphics processing unit can be used for additional processing, which reduces the load on the central processing unit. Additionally, computing platforms such as OpenCL and CUDA allow graphics cards to be used for general-purpose computing. Applications of general-purpose computing on graphics cards include AI training, cryptocurrency mining, and molecular simulation.
Usually, a graphics card comes in the form of a printed circuit board (expansion board) which is to be inserted into an expansion slot. Others may have dedicated enclosures, and they are connected to the computer via a docking station or a cable. These are known as external GPUs (eGPUs).
Graphics cards are often preferred over integrated graphics for increased performance.
History
Graphics cards historically supported different computer display standards as they evolved. For the IBM PC compatibles, common early standards were MDA, CGA, Hercules, EGA and VGA.
In the late 1980s, companies such as Radius produced graphics cards for the Apple Macintosh II with discrete 2D QuickDraw capabilities.
3dfx Interactive was one of the first companies to develop a consumer-facing GPU with 3D acceleration (with the Voodoo series) and the first to develop a graphical chipset dedicated to 3D, but without 2D support (which therefore required the presence of a 2D card to work).
The Nvidia RIVA 128 was one of the first consumer-facing GPUs to integrate a 3D processing unit and a 2D processing unit on a single chip.
The majority of modern graphics cards are built with either AMD-sourced or Nvidia-sourced graphics chips. Most graphics cards offer various functions such as 3D rendering, 2D graphics, video decoding, TV output, and the ability to connect multiple monitors (multi-monitor). Graphics cards also have sound card capabilities, allowing them to output sound along with video to connected TVs or monitors with integrated speakers.
Within the industry, graphics cards are sometimes called graphics add-in-boards, abbreviated as AIBs, with the word "graphics" usually omitted.
Discrete vs integrated graphics
As an alternative to the use of a graphics card, video hardware can be integrated into the motherboard, CPU, or a system-on-chip as integrated graphics. Motherboard-based implementations are sometimes called "on-board video". Some moth |
https://en.wikipedia.org/wiki/Node-to-node%20data%20transfer | In telecommunications, node-to-node data transfer is the movement of data from one node of a network to the next. In the OSI model it is handled by the lowest two layers, the data link layer and the physical layer.
In most communication systems, the transmitting point applies source coding, followed by channel coding, and lastly, line coding. This produces the baseband signal. Filters, if present, may perform pulse shaping. Some systems then use modulation to multiplex many baseband signals into a broadband signal. The receiver undoes these transformations in reverse order: demodulation, trellis decoding, error detection and correction, and decompression.
Some communication systems omit one or more of these steps, or use techniques that combine several of these steps together. For example, a Morse code transmitter combines source coding, channel coding, and line coding into one step, typically followed by an amplitude modulation step. Barcodes, on the other hand, add a checksum digit during channel coding, then translate each digit into a barcode symbol during line coding, omitting modulation.
Source coding
See main article Data compression
Source coding is the elimination of redundancy to make efficient use of storage space and/or transmission channels.
Examples of source coding are:
Huffman coding
Morse code
Binary coding
Channel coding
See main article Error correction and detection.
In digital telecommunications, channel coding is a pre-transmission mapping applied to a digital signal or data file, usually designed to make error-correction (or at least error detection) possible.
Error correction is implemented by using more digits (bits, in the case of a binary channel) than the number strictly necessary for the samples, and having the receiver compute the most likely valid message that could have resulted in the received one.
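As a minimal illustration of adding redundant digits, the C sketch below computes an even-parity bit for a 7-bit message; the receiver recomputes the parity over all 8 bits, and a nonzero result signals that a single bit was corrupted (detection only, not correction):

#include <stdint.h>
#include <stdio.h>

/* Even parity: the extra bit makes the total number of 1s even. */
uint8_t parity_bit(uint8_t msg) {
    uint8_t ones = 0;
    for (int i = 0; i < 7; i++)
        ones ^= (msg >> i) & 1;  /* XOR accumulates the 1-bit count modulo 2 */
    return ones;
}

int main(void) {
    uint8_t msg = 0x2A;          /* binary 0101010: three 1 bits */
    printf("parity bit: %u\n", parity_bit(msg));  /* prints 1    */
    return 0;
}

Codes such as Hamming codes extend this idea with several overlapping parity bits, enough for the receiver to locate, and therefore correct, the erroneous position.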
Types of channel coding include:
Parity checks
Hamming code
Reed–Muller code
Reed–Solomon code
Turbo coding
Line coding
See main article Line code
Line coding consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). The waveform pattern of voltage or current used to represent the 1s and 0s of a digital signal on a transmission link is called line encoding.
After line coding, the signal can directly be put on a transmission line, in the form of variations of the current. The common types of line encoding are unipolar, polar, bipolar and Manchester encoding.
Line coding should make it possible for the receiver to synchronise itself to the phase of the received signal. It is also preferred for the line code to have a structure that will enable error detection.
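A minimal C sketch of one such code, Manchester encoding (using the IEEE 802.3 convention, in which a 1 is sent as a low-to-high mid-bit transition and a 0 as high-to-low; the opposite convention also exists):

#include <stdio.h>

/* Expand each data bit into two half-bit signal levels.
   out must have room for 2*n entries. */
void manchester_encode(const unsigned char *bits, int n, unsigned char *out) {
    for (int i = 0; i < n; i++) {
        out[2*i]     = bits[i] ? 0 : 1;  /* first half-bit  */
        out[2*i + 1] = bits[i] ? 1 : 0;  /* second half-bit */
    }
}

int main(void) {
    unsigned char bits[] = {1, 0, 1};
    unsigned char line[6];
    manchester_encode(bits, 3, line);
    for (int i = 0; i < 6; i++)
        printf("%u", line[i]);           /* prints 011001 */
    printf("\n");
    return 0;
}

The guaranteed mid-bit transition is what lets the receiver recover the clock phase, at the cost of doubling the signalling rate.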
Examples of line coding include:
(see main article line code)
B8ZS
HDB3
2B1Q
AMI
Gray coding
Modulation
Modulation is the process of varying a carrier signal, typically a sine wave to use that signa |
https://en.wikipedia.org/wiki/World%20Network%20of%20Biosphere%20Reserves | The UNESCO World Network of Biosphere Reserves (WNBR) covers internationally designated protected areas, known as biosphere reserves, which are meant to demonstrate a balanced relationship between people and nature (e.g. encourage sustainable development). They are created under the Man and the Biosphere Programme (MAB).
Mission
The World Network of Biosphere Reserves (WNBR) of the MAB Programme consists of a dynamic and interactive network of sites. It works to foster the harmonious integration of people and nature for sustainable development through participatory dialogue, knowledge sharing, poverty reduction, human well-being improvements, respect for cultural values and by improving society's ability to cope with climate change. It promotes north–south and South-South collaboration and represents a unique tool for international cooperation through the exchange of experiences and know-how, capacity-building and the promotion of best practices.
The network
Total membership has reached 738 biosphere reserves in 134 countries (including 22 transboundary sites), occurring in all regions of the world. Myanmar had its first biosphere reserve inscribed in 2015. This figure already takes into account some biosphere reserves that have been withdrawn or revised through the years, as the programme's focus has shifted from simple protection of nature to areas displaying close interaction between man and environment.
Criteria and periodic review process
Article 4 of the Statutory Framework defines general criteria for an area to qualify for designation as a biosphere reserve, as follows:
It should encompass a mosaic of ecological systems representative of major biogeographic regions, including a gradation of human interventions.
It should be of significance for biological diversity conservation.
It should provide an opportunity to explore and demonstrate approaches to sustainable development on a regional scale.
It should have an appropriate size to serve the three functions of biosphere reserves: conservation, development, and logistic support.
It should include these functions through appropriate zonation, recognizing core, buffer, and outer transition areas.
Organizational arrangements should be provided for the involvement and participation of a suitable range of, inter alia, public authorities, local communities and private interests in the design and carrying out of the functions of a biosphere reserve.
In addition, provisions should be made for:
mechanisms to manage human use and activities in the buffer zone or zones;
a management policy or plan for the area as a biosphere reserve;
a designated authority or mechanism to implement this policy or plan;
programmes for research, monitoring, education and training.
Article 9 of the Statutory Framework states that "the status of each biosphere reserve should be subject to a periodic review every ten years, based on a report prepared by the concerned authority, on the basis of |
https://en.wikipedia.org/wiki/ABSET | ABSET was an early declarative programming language from the University of Aberdeen.
See also
ABSYS
References
"ABSET: A Programming Language Based on Sets", E.W. Elcock et al., Mach Intell 4, Edinburgh U Press, 1969, pp. 467–492
Declarative programming languages |
https://en.wikipedia.org/wiki/Absys | Absys was an early declarative programming language from the University of Aberdeen. It anticipated a number of features of Prolog such as negation as failure, aggregation operators, the
central role of backtracking and constraint solving. Absys was the first implementation of a logic programming language.
The name Absys was chosen as an abbreviation for Aberdeen System.
See also
ABSET
References
"ABSYS: An Incremental Compiler for Assertions", J.M. Foster et al., Mach Intell 4, Edinburgh U Press, 1969, pp. 423–429
Declarative programming languages
Prolog programming language family
Academic programming languages
Logic programming languages
Programming languages created in 1967 |
https://en.wikipedia.org/wiki/Amiga%20E | Amiga E is a programming language created by Wouter van Oortmerssen on the Amiga computer. Work on the language started in 1991 and it was first released in 1993. The original incarnation of Amiga E was developed until 1997, when the popularity of the Amiga platform dropped significantly after the bankruptcy of Amiga intellectual property owner Escom AG.
According to Wouter van Oortmerssen: "It is a general-purpose programming language, and the Amiga implementation is specifically targeted at programming system applications. [...]" In his own words: "Amiga E was a tremendous success, it became one of the most popular programming languages on the Amiga."
Overview
Amiga E combines features from several languages but follows the original C programming language most closely in terms of basic concepts. Amiga E's main benefits are fast compilation (allowing it to be used in place of a scripting language), very readable source code, a flexible type system, a powerful module system, exception handling (not the C++ variant), and object-oriented programming.
Amiga E was used to create the core of the popular Amiga graphics software Photogenics.
"Hello, world" example
A "hello world" program in Amiga E looks like this:
History
1993: The first public release of Amiga E; the first release on Aminet was in September, although the language's source code had been published on the Amiga E mailing list since at least May.
1997: The last version of Amiga E is released (3.3a).
1999: Unlimited compiler executable of Amiga E is released.
1999: Source code of the Amiga E compiler in m68k assembler is released under the GPL.
Implementations and derivatives
Discontinued
Amiga E
The first compiler. It was written by Wouter van Oortmerssen in m68k assembly and is accompanied by tools written in E. The compiler generates 68000 machine code directly.
Platforms: AmigaOS and compatibles.
Targets: Originally AmigaOS with 68000 CPU, but has modules that can handle 68060 architecture.
Status: Stable, mature, discontinued, source available, freeware.
CreativE
It was created by Tomasz Wiszkowski. It is based on the GPL sources of Amiga E and adds many extensions to the compiler.
Platforms: AmigaOS and compatibles.
Targets: Like Amiga E, plus some limited support for the last generations of m68k CPUs.
Status: Stable, mature, discontinued in 2001, source available, freeware.
PowerD
It was created by Martin Kuchinka, who cooperated with Tomasz Wiszkowski in the Amiga development group "The Blue Suns." It is derived from the Amiga E and CreativE languages but is incompatible with the former due to syntax changes.
Platforms: AmigaOS and compatibles.
Targets: AmigaOS 3.0 or newer; at least 68020 CPU+FPU or PowerPC (PPC); and 4MB of RAM.
Status: Stable, mature, closed source, freeware. The project has been dormant since 2010.
YAEC
Written from scratch in Amiga E by Leif Salomonsson and published in 2001. It uses an external assembler and linker. The project wa |
https://en.wikipedia.org/wiki/Vivian%2C%20Louisiana | Vivian is a town in Caddo Parish, Louisiana, United States and is home to the Redbud Festival. The population was 3,671 at the 2010 census, down from 4,031 in 2000. According to 2020 census data, Vivian is now the fourth-largest municipality in Caddo Parish by population (after Blanchard, Greenwood, and Shreveport).
History
Vivian developed as a trading center for a retail area that included the smaller towns around it. During its heyday, people from the region would visit Vivian for shopping and movies, especially on the weekends.
Geography
Vivian is in northwestern Caddo Parish. Louisiana Highway 1 passes through the center of the town, leading north to the Texas border at the northwest corner of Louisiana, and south to Shreveport. LA 2 leads east to U.S. Route 71 in Hosston.
According to the United States Census Bureau, Vivian has an area of , all land.
Demographics
As of the 2020 United States census, there were 3,073 people, 1,395 households, and 898 families residing in the town. As of the census of 2000, there were 4,031 people, 1,569 households, and 1,019 families residing in the town. The population density was . There were 1,812 housing units at an average density of .
In 2000, the racial makeup of the town was 63.90% White, 34.19% African American, 0.52% Native American, 0.35% Asian, 0.02% from other races, and 1.02% from two or more races. Hispanic or Latino of any race were 0.72% of the population. By 2020, the racial makeup was 50.15% non-Hispanic white, 41.91% African American, 0.91% Asian, 4.62% multiracial, and 1.95% Hispanic or Latino of any race.
There were 1,569 households, out of which 32.3% had children under the age of 18 living with them, 42.2% were married couples living together, 19.4% had a female householder with no husband present, and 35.0% were non-families. 31.7% of all households were made up of individuals, and 17.0% had someone living alone who was 65 years of age or older. The average household size was 2.51 and the average family size was 3.16.
In the town, the population was spread out, with 29.9% under the age of 18, 8.4% from 18 to 24, 23.6% from 25 to 44, 19.9% from 45 to 64, and 18.2% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 81.7 males. For every 100 females age 18 and over, there were 74.9 males.
At the 2000 census, the median income for a household in the town was $23,800, and the median income for a family was $29,867. Males had a median income of $26,844 versus $17,500 for females. The per capita income for the town was $13,267. About 21.4% of families and 26.2% of the population were below the poverty line, including 42.0% of those under age 18 and 14.6% of those age 65 or over.
Government
The current mayor is Mike VanSchoick.
Education
The town's single government-sponsored cultural organization is the North Caddo Branch of the Shreve Memorial Library. The library is housed in the once-abandoned, now-restored North Cad |
https://en.wikipedia.org/wiki/Li-Chen%20Wang | Li-Chen Wang (born 1935) is an American computer engineer, best known for his Palo Alto Tiny BASIC for Intel 8080-based microcomputers. He was a member of the Homebrew Computer Club and made significant contributions to the software for early microcomputer systems from Tandy Corporation and Cromemco. He made early use of the word copyleft, in Palo Alto Tiny BASIC's distribution notice "@COPYLEFT ALL WRONGS RESERVED" in June 1976.
Homebrew Computer Club
The Homebrew Computer Club was a hotbed of BASIC development, with members excited by Altair BASIC. Fellow members Steve Wozniak and Tom Pittman would develop their own BASICs (Integer BASIC and 6800 Tiny BASIC respectively). Wang analyzed the Altair BASIC code and contributed edits to Tiny BASIC Extended. Wang published in the newsletter a loader for the 8080, commenting on the Open Letter to Hobbyists:
Palo Alto Tiny BASIC
Palo Alto Tiny BASIC was the fourth version of a Tiny BASIC interpreter to appear in Dr. Dobb's Journal of Computer Calisthenics & Orthodontia, but probably the most influential. It appeared in the May 1976 issue (Vol. 1, No. 5) and distinguished itself from other versions of Tiny BASIC through a novel means of abbreviating commands to save memory and the inclusion of an array variable ("@"). The interpreter occupied 1.77 kilobytes of memory and assumed the use of a Teletype machine (TTY) for user input/output. An erratum to the original article appeared in the June/July issue of Dr. Dobb's (Vol. 1, No. 6). That article also included information on adding additional I/O devices, using code for the VDM video display by Processor Technology as an example.
Wang was one of the first to use the word copyleft, in June 1976: in Palo Alto Tiny BASIC's distribution notice he had written "@COPYLEFT ALL WRONGS RESERVED". Tiny BASIC was not distributed under any formal copyleft distribution terms, but it was presented in a context where source code was being shared and modified. In fact, Wang had earlier contributed edits to Tiny BASIC Extended before writing his own interpreter. He encouraged others to adapt his source code and publish their adaptations, as Roger Rauskolb did with his version published in Interface Age.
Wang also wrote a STARTREK program in his Tiny BASIC that appeared in the July 1976 issue of the People's Computer Company Newsletter.
Tandy Corporation
The original prototype TRS-80 Model I, demonstrated to Charles Tandy to sell him on the idea, ran Li-Chen Wang's BASIC.
Wang's mark also shows up in the Exatron Stringy Floppy ROM for the TRS-80 Model I. Embedded-systems columnist Jack Crenshaw calls Wang's Manchester-encoding code, which achieved 14K read/write speeds, a "work of art."
Cromemco
The first color graphics interface for microcomputers, developed by Cromemco and called the Dazzler, was introduced in 1976 with a demonstration program called "Kaleidoscope" written by Wang. According to BYTE Magazine the program, written in 8080 assembly code, was only 127 by |
https://en.wikipedia.org/wiki/Category%203 | Category 3 or Category III can refer to:
Category 3 cable, a specification for data cabling
British firework classification
Category 3 tropical cyclone, on any of the tropical cyclone scales
Category 3 pandemic, on the Pandemic Severity Index, an American influenza pandemic with a case-fatality ratio between 0.5% and 1%
Category 3 winter storm, on the Northeast Snowfall Impact Scale and the Regional Snowfall Index
Any of several winter storms listed at list of Northeast Snowfall Impact Scale winter storms
Category 03 non-silicate mineral - Halides
Category III, a rating in the Hong Kong motion picture rating system
Category III, a capability level of aircraft instrument landing systems
Category III New Testament manuscripts - Eclectic
Category III measurement - performed in the building installation
Category III protected area (IUCN) - natural monument
See also
Class 3 (disambiguation) - class/category equivalence (for labeling)
Type III (disambiguation) - type/category equivalence (for labeling)
Group 3 (disambiguation) - group/category equivalence (for labeling) |
https://en.wikipedia.org/wiki/CSV | CSV may refer to:
Computing
Certified Server Validation, a spam fighting technique
Cluster Shared Volumes, a Microsoft Windows Server 2008 technology
Comma-separated values, a file format and extension
Computerized system validation, a documentation process
Organizations
CSV Apeldoorn, a Netherlands football club
Christian Social People's Party, a political party in Luxembourg
Clerics of Saint Viator, a Roman Catholic institute
Community Service Volunteers, a British charity
Confederación Sudamericana de Voleibol (South American Volleyball Confederation)
Conseil scolaire Viamonde, a public school board in Ontario, Canada
Transportation
Chevrolet Special Vehicles, Holden racing car
Corsa Specialised Vehicles, an Australian car-maker
GM U platform, General Motors cross-over sport vans
Other
C. S. Venkataraman, a mathematician from Kerala, India
ČSV, a Sámi initialism
Character Strengths and Virtues, a 2004 book
Creating shared value, a business concept |
https://en.wikipedia.org/wiki/Alphen-Chaam | Alphen-Chaam () is a municipality in the southern Netherlands.
Population centres
Towns:
Alphen (4,000)
Chaam (3,810)
Galder (1,190)
Hamlets (the population data of these hamlets is included in the population data of the towns near which they are located):
Topography
Topographic map of the municipality of Alphen-Chaam, Sept. 2014.
Notable people
Ruud de Moor (1928–2001), born in Chaam, a Dutch professor of sociology
Piet A. Verheyen (born 1931 in Alphen) a Dutch economist and academic
Natasha den Ouden (born 1973 in Galder) a Dutch cyclist
Jelle Klaasen (born 1984 in Alphen) a Dutch professional darts player, the youngest winner of the World Darts Championship at age 21
Gallery
References
External links
Municipalities of North Brabant
Municipalities of the Netherlands established in 1997 |
https://en.wikipedia.org/wiki/Grave%20%28disambiguation%29 | A grave is a location where a dead body is buried.
Grave may also refer to:
Phonetics, diacritics and music
Grave accent, a diacritical mark
Backtick or backquote, character on computer keyboards
Grave (phonetic), a term used to classify sounds
Grave, a term for a slow and solemn music tempo or a solemn mood in general
Grave (band), a Swedish death metal band
Places
Grave, Netherlands, a municipality in the Dutch province North Brabant
La Grave, a commune in southeastern France
Grave (crater), on the Moon
People
Dmitry Grave (1863–1939), Russian mathematician
Franz Grave (1932–2022), German Roman Catholic bishop
Ivan Grave (1874–1960), Russian scientist
Other uses
Cognate of German Graf, a historical title of the German nobility, as in margrave
Grave (unit), an old unprefixed name for the kilogram
Grave, the main character in the third-person shooter video games Gungrave and Beyond the Grave
"Grave" (Buffy the Vampire Slayer), the final episode of the sixth season of Buffy the Vampire Slayer
Grave (film), a 2016 film also titled as Raw
See also
Graves (disambiguation)
The Grave (disambiguation)
The Graves (disambiguation) |
https://en.wikipedia.org/wiki/Transdifferentiation | Transdifferentiation, also known as lineage reprogramming, is the process in which one mature somatic cell is transformed into another mature somatic cell without undergoing an intermediate pluripotent state or progenitor cell type. It is a type of metaplasia, which includes all cell fate switches, including the interconversion of stem cells. Current uses of transdifferentiation include disease modeling and drug discovery and in the future may include gene therapy and regenerative medicine. The term 'transdifferentiation' was originally coined by Selman and Kafatos in 1974 to describe a change in cell properties as cuticle producing cells became salt-secreting cells in silk moths undergoing metamorphosis.
Discovery
Davis et al. 1987 reported the first instance of transdifferentiation, in which a cell changed from one adult cell type to another. Forcing mouse embryonic fibroblasts to express MyoD was found to be sufficient to turn those cells into myoblasts.
Natural examples
The only known instances in which adult cells change directly from one lineage to another occur in the species Turritopsis dohrnii (also known as the immortal jellyfish) and Turritopsis nutricula.
In newts, when the eye lens is removed, pigmented epithelial cells de-differentiate and then redifferentiate into the lens cells. Vincenzo Colucci described this phenomenon in 1891 and Gustav Wolff described the same thing in 1894; the priority issue is examined in Holland (2021).
In humans and mice, it has been demonstrated that alpha cells in the pancreas can spontaneously switch fate and transdifferentiate into beta cells. This has been demonstrated for both healthy and diabetic human and mouse pancreatic islets. While it was previously believed that oesophageal cells were developed from the transdifferentiation of smooth muscle cells, that has been shown to be false.
Induced and therapeutic examples
The first example of functional transdifferentiation has been provided by Ferber et al. by inducing a shift in the developmental fate of cells in the liver and converting them into 'pancreatic beta-cell-like' cells. The cells induced a wide, functional and long-lasting transdifferentiation process that reduced the effects of hyperglycemia in diabetic mice. Moreover, the trans-differentiated beta-like cells were found to be resistant to the autoimmune attack that characterizes type 1 diabetes.
The second step was to undergo transdifferentiation in human specimens. By transducing liver cells with a single gene, Sapir et al. were able to induce human liver cells to transdifferentiate into human beta cells.
This approach has been demonstrated in mouse, rat, Xenopus and human tissues.
Schematic model of the hepatocyte-to-beta cell transdifferentiation process. Hepatocytes are obtained by liver biopsy from diabetic patient, cultured and expanded ex vivo, transduced with a PDX1 virus, transdifferentiated into functional insulin-producing beta cells, and transplanted back into the |
https://en.wikipedia.org/wiki/Guile | Guile may refer to:
Astuteness or deception
GNU Guile, an implementation of the Scheme programming language
Guile (Street Fighter), a video game character from the Street Fighter series
Guile (Chrono Cross), a video game character from Chrono Cross
Guile Island, Antarctica
Guilé Foundation, a Swiss organisation for business ethics
People with the surname
Melanie Guile (born 1949), Australian writer
See also
Guille (disambiguation) |
https://en.wikipedia.org/wiki/Dynamic%20programming | Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.
In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure.
If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this relationship is called the Bellman equation.
Overview
Mathematical optimization
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of value functions V1, V2, ..., Vn taking y as an argument representing the state of the system at times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n. The values Vi at earlier times i = n −1, n − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi−1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from a decision at time i − 1 and the function Vi at the new state of the system if this decision is made. Since Vi has already been calculated for the needed states, the above operation yields Vi−1 for those states. Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
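A hedged C sketch of this backward recursion on a deliberately tiny problem (the gain values and the two-decision state space are invented for illustration): V[i][y] is filled from V[i+1], and V[0] at the initial state gives the optimal value.

#include <stdio.h>

#define N 4   /* decision times 0..N-1 */
#define S 3   /* states 0..S-1         */

int main(void) {
    /* g[i][y][d]: made-up gain for decision d (0 = stay in state y,
       1 = advance to y+1) taken in state y at time i.              */
    int g[N][S][2] = {
        {{1,4},{2,0},{3,3}}, {{2,1},{1,5},{0,2}},
        {{3,2},{4,1},{1,1}}, {{0,3},{2,2},{5,0}},
    };
    int V[N + 1][S] = {0};            /* V[N][y] = 0 after the horizon */

    /* Bellman backward sweep: V[i][y] = max over d of gain + V[i+1][y'] */
    for (int i = N - 1; i >= 0; i--)
        for (int y = 0; y < S; y++) {
            int stay = g[i][y][0] + V[i + 1][y];
            int next = (y + 1 < S) ? g[i][y][1] + V[i + 1][y + 1] : stay;
            V[i][y] = stay > next ? stay : next;
        }

    printf("optimal value from state 0: %d\n", V[0][0]);
    return 0;
}

The optimal decisions themselves can be recovered by re-running the comparisons forward from the initial state, as the text describes.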
Control theory
In control theory, a typical problem is to find an admissible control u*(t) which causes the system ẋ(t) = g(x(t), u(t), t) to follow an admissible trajectory x*(t) on a continuous time interval t0 ≤ t ≤ t1 that minimizes a cost function
J = b(x(t1), t1) + ∫ from t0 to t1 of f(x(t), u(t), t) dt.
The solution to this problem is an optimal control law or policy u* = h(x(t), t), which produces an optimal trajectory x*(t) and a cost-to-go function J*(x(t), t). The latter obeys the fundamental equation of dynamic programming:
−J*t = min over u of { f(x, u, t) + J*x · g(x, u, t) },
a partial differential equation known as the Hamilton–Jacobi–Bellman equation, in which J*x = ∂J*/∂x and J*t = ∂J*/∂t. One minimizes the right-hand side over u in terms of x, t, and the unknown function J*x, and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with the boundary condition J*(x(t1), t1) = b(x(t1), t1). In practice, this generally requires numerical techniques for some discrete approximation to the e |
https://en.wikipedia.org/wiki/NuBus | NuBus is a 32-bit parallel computer bus, originally developed at MIT as a part of the NuMachine workstation project and standardized in 1987. The first complete implementation of the NuBus was done by Western Digital for their NuMachine and for the Lisp Machines Inc. LMI Lambda. The NuBus was later incorporated in Lisp products by Texas Instruments (Explorer) and used as the main expansion bus by Apple Computer; a variant called NeXTBus was developed by NeXT. It is no longer widely used outside the embedded market.
Architecture
Early microcomputer buses like S-100 were often just connections to the pins of the microprocessor and to the power rails. This meant that a change in the computer's architecture generally led to a new bus as well. Looking to avoid such problems in the future, NuBus was designed to be independent of the processor, its general architecture and any details of its I/O handling.
Among its many advanced features for the era, NuBus used a 32-bit backplane when 8- or 16-bit buses were common. This was seen as making the bus "future-proof", as it was generally believed that 32-bit systems would arrive in the near future while 64-bit buses and beyond would remain impractical and excessive.
In addition, NuBus was agnostic about the processor itself. Most buses up to this point conformed to the signalling and data standards of the machine they were plugged into (being big or little endian for instance). NuBus made no such assumptions, which meant that any NuBus card could be plugged into any NuBus machine, as long as there was an appropriate device driver.
In order to select the proper device driver, NuBus included an ID scheme that allowed the cards to identify themselves to the host computer during startup. This meant that the user didn't have to configure the system, the bane of bus systems up to that point. For instance, with ISA the driver had to be configured not only for the card, but for any memory it required, the interrupts it used, and so on. NuBus required no such configuration, making it one of the first examples of plug-and-play architecture.
On the downside, while this flexibility made NuBus much simpler for the user and device driver authors, it made things more difficult for the designers of the cards themselves. Whereas most "simple" bus systems were easily supported with a handful of input/output chips designed to be used with that CPU in mind, with NuBus every card and computer had to convert everything to a platform-agnostic "NuBus world". Typically this meant adding a NuBus controller chip between the bus and any I/O chips on the card, increasing costs. While this is a trivial exercise today, one that all newer buses require, in the 1980s NuBus was considered needlessly complex and expensive.
Implementations
The NuBus became an IEEE standard in 1987 as IEEE 1196. This version used a standard DIN 41612 96-pin three-row connector, running the system on a 10 MHz clock for a maximum burst throughpu |
https://en.wikipedia.org/wiki/Versioning | Versioning may refer to:
Version control, the management of changes to documents, computer programs, large web sites, and other collections of information
Versioning file system, which allows a computer file to exist in several versions at the same time
Software versioning, the process of assigning either unique version names or numbers to unique states of computer software
See also
Version (disambiguation) |
https://en.wikipedia.org/wiki/IDS | IDS may refer to:
Computing
IBM Informix Dynamic Server, a relational database management system
Ideographic Description Sequence, describing a Unihan character as a combination of other characters
Integrated Data Store, one of the first database management systems from the 1960s
Internet distribution system, a travel industry sales and marketing channel
Intrusion detection system, detecting unwanted network access
Intelligent Decision System, a software package for multiple criteria decision analysis
Iterative deepening search, a graph search algorithm performing depth-first search repeatedly with increasing depth limits
Organizations
Incomes Data Services, a British employment research organisation
Institute of Development Studies, a British international development organisation
International Distributions Services, a legal name of Royal Mail since 3 October 2022
Boeing Integrated Defense Systems, former name of Boeing Defense, Space & Security
Integrated Defence Staff, an Indian military organisation
Investors Diversified Services, former name of Ameriprise Financial
Istrian Democratic Assembly, a Croatian political party
Indiana Daily Student, a newspaper
Raytheon Integrated Defense Systems
International Dermoscopy Society, an international medical academic society
International Design School, an Indonesian educational institution
Initiative for Democratic Socialism
Institute for the German Language
Science, technology and engineering
Iduronate-2-sulfatase, a sulfatase enzyme associated with Hunter syndrome
Index Catalogue of Visual Double Stars
Integrated Deepwater System Program, a program to upgrade equipment of the US Coast Guard
Tornado IDS (Interdictor/strike), a version of the Panavia Tornado combat aircraft
IDS experiment (ISOLDE Decay Station), at the ISOLDE facility, CERN
Other uses
Iain Duncan Smith (born 1954), British politician, widely referred to by his initials
IDS Center, building in Minneapolis, tallest in Minnesota, US
Infant-directed speech or baby talk
Information disclosure statement, to the US Patent and Trademark Office
Integrated delivery system, a generic term for a health care network that provides a variety of care
Income declaration scheme
Intercontinental Dictionary Series, an online linguistic database
International District/Chinatown station, a light rail station in Seattle, Washington, US
See also
International Docking System Standard (IDSS), a proposed international standard for spacecraft docking |
https://en.wikipedia.org/wiki/Pattern%20recognition | Pattern recognition is the automated recognition of patterns and regularities in data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM), which may possess PR capabilities but whose primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power.
Pattern recognition systems are commonly trained from labeled "training" data. When no labeled data are available, other algorithms can be used to discover previously unknown patterns. Knowledge discovery in databases (KDD) and data mining have a larger focus on unsupervised methods and a stronger connection to business use. Pattern recognition focuses more on the signal and also takes acquisition and signal processing into consideration. It originated in engineering, and the term is popular in the context of computer vision: a leading computer vision conference is named the Conference on Computer Vision and Pattern Recognition.
In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this same purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes (for example, determine whether a given email is "spam"). Pattern recognition is a more general problem that encompasses other types of output as well. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an input sentence); and parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence.
Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform "most likely" matching of the inputs, taking into account their statistical variation. This is opposed to pattern matching algorithms, which look for exact matches in the input with pre-existing patterns. A common example of a pattern-matching algorithm is regular expression matching, which looks for patterns of a given sort in textual data and is included in the search capabilities of many text editors and word processors.
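For contrast with statistical classification, the following C program performs exact pattern matching with the POSIX regex API (the pattern and input string are invented for this example); the call either matches or does not, with no notion of a "most likely" label:

#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* Compile an extended regex matching an ISO-style date such as 2024-05-01. */
    if (regcomp(&re, "[0-9]{4}-[0-9]{2}-[0-9]{2}", REG_EXTENDED) != 0)
        return 1;
    const char *text = "released on 2024-05-01";
    printf("%s\n", regexec(&re, text, 0, NULL, 0) == 0 ? "match" : "no match");
    regfree(&re);
    return 0;
}

A classifier faced with noisy or ambiguous input would instead return the label with the highest estimated probability.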
Overview
A modern definition of pattern recognition is:
Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value. Supervised learning assumes that a set of training data (the training set) has been provided, consisting of a set of instances tha |
https://en.wikipedia.org/wiki/Cargolux | Cargolux, legally Cargolux Airlines International S.A., is a Luxembourgish flag carrier cargo airline with its headquarters and hub at Luxembourg Airport. With a global network, it is among the largest scheduled all-cargo airlines in the world. Charter flights and third party maintenance are also operated. It has 85 offices in over 50 countries as of 2018, and operates a global trucking network to more than 250 destinations.
History
The airline was established in March 1970 by Luxair, the Salen Shipping Group, Loftleiðir, and various private interests in Luxembourg. Einar Olafsson was the airline's first employee and CEO. It started operations in May 1970 with one Canadair CL-44 freighter with services from Luxembourg to Hong Kong. Over the next two years, the airline grew, as did its public visibility.
By 1973, Cargolux had five CL-44s and made the leap into the jet age by acquiring a Douglas DC-8. This enabled the company to speed up its cargo deliveries. In 1974, Loftleiðir and Cargolux amalgamated their maintenance and engineering departments, and by 1975, Cargolux enjoyed new facilities consisting of central offices and two hangars.
In 1978, the airline began to take shape into the company it is today. The CL-44s began to be retired and the airline ordered its first Boeing 747s. In that same year it also began flying to other places in Asia, as well as to the United States. In 1979, as the company concluded its first decade, its first Boeing 747s were delivered.
In 1982, China Airlines became the first airline company to sign a strategic alliance with Cargolux.
1983 saw the introduction of the CHAMP (Cargo Handling and Management Planning) computer system and the start of some charter passenger flights for the Hajj pilgrimage.
1984 saw the departure of the last Douglas DC-8 in the fleet and the addition of a third Boeing 747. Lufthansa bought a 24.5% share of the airline in 1987 and Luxair increased its share to 24.53%.
1988 saw the birth of Lion Air, a passenger charter airline established by both Cargolux and Luxair. The airline had two Boeing 747s but Cargolux's venture into the charter airline world proved unsuccessful and soon Lion Air folded.
Despite that setback, Cargolux made it into the 1990s in proper financial shape. It added two more Boeing 747s in 1990, as a way of celebrating its 20th anniversary, and in 1993, three Boeing 747-400Fs arrived at Luxembourg. In 1995 Cargolux had a year-long celebration of its 25th anniversary and Heiner Wilkens was named CEO and President.
In 1997, Luxair was able to increase its share to 34%, while in September that year Lufthansa sold its 24.5% stake to Sair Logistics; and Swissair Cargo made a cooperation agreement with the Luxembourg company. The following year Sair Logistics increased its share to 33%.
By 1999, Cargolux's fleet had reached double figures, with 10 Boeing 747s. In 2000 a route was opened to Seoul, South Korea, and in 2001 Wilkens decided to step down as president an |
https://en.wikipedia.org/wiki/Internationalization%20and%20localization | In computing, internationalization and localization (American) or internationalisation and localisation (British), often abbreviated i18n and l10n respectively, are means of adapting computer software to different languages, regional peculiarities and technical requirements of a target locale.
Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization is the process of adapting internationalized software for a specific region or language by translating text and adding locale-specific components.
Localization (which is potentially performed multiple times, for different locales) uses the infrastructure or flexibility provided by internationalization (which is ideally performed only once before localization, or as an integral part of ongoing development).
Naming
The terms are frequently abbreviated to the numeronyms i18n (where 18 stands for the number of letters between the first i and the last n in the word internationalization, a usage coined at Digital Equipment Corporation in the 1970s or 1980s) and l10n for localization, due to the length of the words. Some writers have the latter term capitalized (L10n) to help distinguish the two.
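The abbreviation rule is mechanical enough to express as code; the following C sketch (illustrative only) builds a numeronym from a word by counting the letters between its first and last characters:

#include <stdio.h>
#include <string.h>

/* Writes the numeronym of word into out: first letter, the count of
   letters between the first and last, then the last letter.        */
void numeronym(const char *word, char *out) {
    size_t n = strlen(word);
    if (n < 4) {                 /* too short to usefully abbreviate */
        strcpy(out, word);
        return;
    }
    sprintf(out, "%c%zu%c", word[0], n - 2, word[n - 1]);
}

int main(void) {
    char buf[32];
    numeronym("internationalization", buf);
    printf("%s\n", buf);         /* prints i18n */
    numeronym("localization", buf);
    printf("%s\n", buf);         /* prints l10n */
    return 0;
}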
Some companies, like IBM and Oracle, use the term globalization, g11n, for the combination of internationalization and localization.
Microsoft defines internationalization as a combination of world-readiness and localization. World-readiness is a developer task, which enables a product to be used with multiple scripts and cultures (globalization) and separates user interface resources in a localizable format (localizability, abbreviated to L12y).
Hewlett-Packard created a system for HP-UX called "National Language Support" or "Native Language Support" (NLS) to produce localizable software.
Scope
According to Software without frontiers, the design aspects to consider when internationalizing a product are "data encoding, data and documentation, software construction, hardware device support, and user interaction"; while the key design areas to consider when making a fully internationalized product from scratch are "user interaction, algorithm design and data formats, software services, and documentation".
Translation is typically the most time-consuming component of language localization. This may involve:
For film, video, and audio, translation of spoken words or music lyrics, often using either dubbing or subtitles
Text translation for printed materials, and digital media (possibly including error messages and documentation)
Potentially altering images and logos containing text to contain translations or generic icons
Different translation lengths and differences in character sizes (e.g. between Latin alphabet letters and Chinese characters) can cause layouts that work well in one language to work poorly in others
Consideration of differences in dialect, register or variety
Writing con |
https://en.wikipedia.org/wiki/Microsequencer | In computer architecture and engineering, a sequencer or microsequencer generates the addresses used to step through the microprogram of a control store. It is used as a part of the control unit of a CPU or as a stand-alone generator for address ranges.
Usually the addresses are generated by some combination of a counter, a field from a microinstruction, and some subset of the instruction register. A counter is used for the typical case, that the next microinstruction is the one to execute. A field from the microinstruction is used for jumps, or other logic.
Since CPUs implement an instruction set, it is very useful to be able to decode the instruction's bits directly in the sequencer, to select the set of microinstructions that carries out a given CPU instruction.
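A hedged C sketch of this next-address selection (the field names and the decode mapping are invented for illustration; real sequencers differ widely):

#include <stdint.h>

/* Sources for the next microinstruction address, per the text above. */
typedef enum { NEXT_SEQ, NEXT_JUMP, NEXT_DECODE } NextSel;

typedef struct {
    NextSel  sel;     /* which source drives the next address   */
    uint16_t target;  /* branch field from the microinstruction */
} MicroInstr;

/* Choose the next control-store address from the counter, the
   microinstruction's branch field, or bits of the instruction register. */
uint16_t next_address(uint16_t counter, MicroInstr mi, uint8_t opcode) {
    switch (mi.sel) {
    case NEXT_SEQ:    return counter + 1;              /* sequential case  */
    case NEXT_JUMP:   return mi.target;                /* explicit branch  */
    case NEXT_DECODE: return (uint16_t)(opcode << 2);  /* opcode dispatch  */
    }
    return counter + 1;  /* defensive default */
}

Each opcode here is given a four-microinstruction slot (opcode << 2); a real design would use whatever dispatch layout its control store requires.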
Most modern CISC processors use a combination of pipelined logic, to process lower-complexity opcodes that can be completed in one clock cycle, and microcode, to implement those that take multiple clock cycles to complete.
One of the first integrated microcoded processors was the IBM PALM processor, which emulated all of the processor's instructions in microcode and was used in the IBM 5100, one of the first personal computers.
Recent examples of similar open-source microsequencer-based processors are the MicroCore Labs MCL86, MCL51, and MCL65 cores, which emulate the Intel 8086/8088, Intel 8051, and MOS 6502 instruction sets entirely in microcode.
Simple example
The Digital Scientific Corp. Meta 4 Series 16 computer system was a user-microprogrammable system first available in 1970. Branches in the microcode sequence occur in one of three ways.
A branch microinstruction specifies the address of the next instruction, either conditionally or unconditionally. The logical index (IX) option causes the 16-bit Link register to be logically ORed into the branch address, thus providing a simple indexed-branch capability.
All the arithmetic/logical instructions allow the jump (J) modifier, which redirects execution to the microinstruction addressed by the Link register.
All the arithmetic/logical instructions allow both the decrement counter (D) and jump (J) modifiers. In this case, the 8-bit loop counter register is decremented. If it is then not zero, a branch is taken to the contents of the Link register. If it is zero, execution continues with the next instruction.
One more sequencing option allowed on a branch instruction is the execute (XQ) option. When specified, the single instruction at the branch address is executed, but then execution continues after the original branch instruction. The IX option can be used with the XQ option.
Complex example
The IBM System/360 was a series of compatible computers introduced in 1964, many of which were microprogrammed. The System/360 Model 40 is a good example of a microprogrammed machine with complex microsequencing.
The microstore consists of 4,096 56-bit microinstructions that operate in a horizontal microprogramming style. The store is addresse |
https://en.wikipedia.org/wiki/Macintosh%20Classic | The Macintosh Classic is a personal computer designed, manufactured and sold by Apple Computer from October 1990 to September 1992. It was the first Macintosh to sell for less than US$1,000.
Production of the Classic was prompted by the success of the original Macintosh 128K, then the Macintosh Plus, and finally the Macintosh SE. The system specifications of the Classic are very similar to those of its predecessors, with the same monochrome CRT display, 512 × 342 pixel resolution, and 4 megabyte (MB) memory limit of the older Macintosh computers. Apple's decision not to update the Classic with newer technology such as a newer CPU, higher RAM capacity or color display resulted in criticism from reviewers, with Macworld describing it as having "nothing to gloat about beyond its low price" and "unexceptional". However, it ensured compatibility with the Mac's by-then healthy software base, as well as enabled it to sell for the lower price, as planned. The Classic also featured several improvements over the aging Macintosh Plus, which it replaced as Apple's low-end Mac computer. It is up to 25 percent faster than the Plus and included an Apple SuperDrive floppy disk drive as standard. Unlike the Macintosh SE/30 and other compact Macs before it, the Classic did not have an internal Processor Direct Slot, making it the first non-expandable desktop Macintosh since the Macintosh Plus. Instead, it had a memory expansion/FPU slot.
The Classic is an adaptation of Jerry Manock's and Terry Oyama's 1984 Macintosh 128K industrial design, as had been the earlier Macintosh SE. Apple released two versions. The price and the availability of education software led to the Classic's popularity in education. It was sold alongside the more powerful Macintosh Classic II in 1991 until its discontinuation the next year.
History
Development
After Apple co-founder Steve Jobs left Apple in 1985, product development was handed to Jean-Louis Gassée, formerly the manager of Apple France. Gassée consistently pushed the Apple product line in two directions, towards more "openness" in terms of expandability and interoperability, and towards higher price. Gassée long argued that Apple should not aim for the low end of the computer market, where profits were thin, but instead concentrate on the high end and higher profit margins. He illustrated the concept using a graph showing the price-performance ratio of computers with low-power, low-cost machines in the lower left and high-power high-cost machines in the upper right. The "high-right" goal became a mantra among the upper management, who said "fifty-five or die", referring to Gassée's goal of a 55 percent profit margin.
The high-right policy led to a series of machines with ever-increasing prices. The original Macintosh plans called for a system around $1,000, but by the time it had morphed from Jef Raskin's original vision of an easy-to-use machine for composing text documents to Jobs's concept incorporating ideas gleaned duri |
https://en.wikipedia.org/wiki/Aurora%2C%20Oregon | Aurora is a city in Marion County, Oregon, United States and is home to the nation's largest not-for-profit air ambulance company, Life Flight Network. Before being incorporated as a city, it was the location of the Aurora Colony, a religious commune founded in 1856 by William Keil and John E. Schmit. William named the settlement after his daughter. The population was 1,133 at the 2020 Census. It is part of the Salem Metropolitan Statistical Area.
Geography
According to the United States Census Bureau, the city has a total area of , all of it land.
The Pudding River flows northward, just east of Aurora.
Climate
This region experiences warm (but not hot) and dry summers, with no average monthly temperatures above . According to the Köppen Climate Classification system, Aurora has a warm-summer Mediterranean climate, abbreviated "Csb" on climate maps.
Demographics
2020 census
As of the census of 2020, there were 1,133 people and 336 households in the city. There were 431 housing units. The racial makeup of the city was 79.3% White, 1.3% African American, 1.5% Native American, 0.9% Asian, 7.1% from other races, and 9.5% from two or more races. Hispanic or Latino of any race were 16.4% of the population.
There were 336 households, of which 28.5% had children under the age of 18 living with them, 66.1% were married couples living together, 17.6% had a female householder with no husband present, 14% had a male householder with no wife present, and 2.3% were cohabiting couple households. 31.5% of households had someone living alone who was 65 years of age or older. The average household size was 2.74 and the average family size was 3.00.
The median age in the city was 42.4 years. 6.7% of the residents were under the age of 5; 20.7% of residents were under the age of 18; and 17.1% were 65 years of age or older.
The median income for a household in the city was $90,357, which far exceeds the median income for a household in the State of Oregon of $65,667.
The poverty rate of all people in Aurora was 2.4%, which was lower than the poverty rate of all people in Oregon at 12.4%.
2010 census
As of the census of 2010, there were 918 people, 336 households, and 256 families living in the city. The population density was . There were 349 housing units at an average density of . The racial makeup of the city was 89.7% White, 0.5% African American, 0.9% Native American, 0.3% Asian, 6.3% from other races, and 2.3% from two or more races. Hispanic or Latino of any race were 10.9% of the population.
There were 336 households, of which 37.5% had children under the age of 18 living with them, 64.0% were married couples living together, 7.4% had a female householder with no husband present, 4.8% had a male householder with no wife present, and 23.8% were non-families. 18.5% of all households were made up of individuals, and 6.6% had someone living alone who was 65 years of age or older. The average household size was 2.73 and the average family size was 3.12.
The m |
https://en.wikipedia.org/wiki/Brains%20in%20Bahrain | Brains in Bahrain was an eight-game chess match between World Chess Champion Vladimir Kramnik and the computer program Deep Fritz 7, held in October 2002. The match ended in a 4–4 tie, with two wins for each participant and four draws.
Outcome of games
The first game was drawn. Kramnik won games 2 and 3 using "conventional" anti-computer tactics: playing conservatively for a long-term advantage that the computer could not see in its game-tree search. After a draw in game 4, Kramnik lost game 5 due to a blunder. Game 6 was described by commentators as "spectacular". Kramnik, in a better position in the early middlegame, sacrificed a piece to launch an attack—a strategy known to be highly risky against computers, which are at their strongest when defending such attacks. True to form, Fritz found a watertight defense and Kramnik was left in a bad position. Kramnik resigned the game, believing his position to be lost. However, post-game analysis has shown that Fritz was unlikely to have been able to force a win and that Kramnik gave up a drawn position. The final two games were draws.
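The depth-limited game-tree search that such tactics exploit can be sketched in a few lines. The following is a hedged illustration only: a toy subtraction game, not chess, with nothing taken from Deep Fritz itself. A fixed-depth negamax search scores any position beyond its depth horizon with a static evaluation, so an advantage that only pays off beyond that horizon is invisible to it, which is exactly what conservative, long-term anti-computer play relies on.

// A minimal sketch of fixed-depth game-tree search (negamax) over a toy
// subtraction game: players alternately remove 1-3 tokens, and whoever
// takes the last token wins. Positions deeper than `depth` plies are
// scored by a static evaluation at the horizon.
#include <algorithm>
#include <iostream>

int evaluate(int tokens) {
    // Horizon evaluation; this toy game is solved (multiples of 4 lose),
    // so the "evaluation" can afford to be exact. A chess engine's cannot.
    return tokens % 4 == 0 ? -1 : +1;
}

int negamax(int tokens, int depth) {
    if (tokens == 0) return -1;               // opponent took the last token and won
    if (depth == 0) return evaluate(tokens);  // horizon reached: stop searching
    int best = -2;                            // below the worst possible score
    for (int take = 1; take <= std::min(3, tokens); ++take)
        best = std::max(best, -negamax(tokens - take, depth - 1));
    return best;
}

int main() {
    std::cout << negamax(10, 6) << "\n";      // prints 1: the side to move can force a win
}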
Selection of Fritz and creation of Deep Fritz
Fritz had been chosen to play Kramnik by winning a qualifying event in Cadaqués, Spain, in 2001. The other competing program was Junior; the reigning world computer chess champion, Shredder, declined an invitation to compete. The 24-game match started very poorly for Fritz, which lost five games in a row before coming back strongly in the last ten games to tie the series and finally win the play-off. Fritz became Deep Fritz when its hardware was extended to an eight-processor machine for the competition.
Advantages
Kramnik was given several advantages in his match against Fritz when compared to most other Man vs. Machine matches, such as the one Garry Kasparov lost against Deep Blue in 1997. The code of Fritz was frozen some time before the first match and Kramnik was given a copy of Fritz to practice with for several months. Another difference was that in games lasting more than 56 moves, Kramnik was allowed to adjourn until the following day, during which time he could use his copy of Fritz to aid him in his overnight analysis of the position.
See also
Deep Blue versus Garry Kasparov
References
External links
Deep Fritz 7 - Product details and price at Chessbase, 28 August 2002
Brains in Bahrain page on chessgames.com
Chess competitions
Computer chess
2002 in chess
Sport in Bahrain
Human versus computer matches |
https://en.wikipedia.org/wiki/Extranet | An extranet is a controlled private network that allows access to partners, vendors and suppliers or an authorized set of customers – normally to a subset of the information accessible from an organization's intranet. An extranet is similar to a DMZ in that it provides access to needed services for authorized parties, without granting access to an organization's entire network.
Historically, the term was occasionally also used in the sense of two organizations sharing their internal networks over a virtual private network (VPN).
Enterprise applications
During the late 1990s and early 2000s, several industries started to use the term 'extranet' to describe centralized repositories of shared data (and supporting applications) made accessible via the web only to authorized members of particular work groups - for example, geographically dispersed, multi-company project teams. Some applications are offered on a software as a service (SaaS) basis.
For example, in the construction industry, project teams may access a project extranet to share drawings, photographs and documents, and use online applications to mark up drawings, make comments, and manage and report on project-related communications. In 2003 in the United Kingdom, several of the leading vendors formed the Network for Construction Collaboration Technology Providers (NCCTP) to promote the technologies and to establish data exchange standards between the different data systems. The same type of construction-focused technology has also been developed in the United States, Australia and mainland Europe.
Advantages
Exchange large volumes of data using Electronic Data Interchange (EDI)
Share product catalogs exclusively with trade partners
Collaborate with other companies on joint development efforts
Jointly develop and use training programs with other companies
Provide or access services provided by one company to a group of other companies, such as an online banking application managed by one company on behalf of affiliated banks
Improve efficiency: customers who can retrieve accurate information for themselves are served faster and at lower cost, and their satisfaction can bring the organization additional business.
Disadvantages
Extranets can be expensive to implement and maintain within an organization (e.g., hardware, software, employee training costs), if hosted internally rather than by an application service provider.
Security of extranets can be a concern when hosting valuable or proprietary information.
Decisions about which partners and customers are granted access, and to what information, can prove contentious or controversial.
See also
LAN
List of collaborative software
Wide area network
References
Further reading
Callaghan, J. (2002), Inside Intranets & Extranets: Knowledge Management and the Struggle for Power, Palgrave Macmillan.
Stambro, Robert and Svartbo, Erik (2002), Extranet Use in Supply Chain Management, University of Technology
Computer network security
Network architecture |
https://en.wikipedia.org/wiki/Power%20Macintosh%208500 | The Power Macintosh 8500 (sold as the Power Macintosh 8515 in Europe and Japan) is a personal computer designed, manufactured and sold by Apple Computer from August 1995 to February 1997. Billed as a high-end graphics computer, the Power Macintosh 8500 was initially released with a 120 MHz PowerPC 604, and unlike earlier Power Macintosh machines, the CPU was mounted on an upgradeable daughtercard. Though slower than the 132 MHz Power Macintosh 9500, the first-generation 8500 featured several audio and video (S-Video and composite video) in/out ports not found in the 9500. In fact, the 8500 incorporated near-broadcast quality (640×480) A/V input and output and was the first personal computer to do so, but no hard drive manufactured in 1997 could sustain the 18 MB/s data rate required to capture video at that resolution. Later, special "AV" hard drives were made available that could delay thermal recalibration until after a write operation had completed. With special care to minimize fragmentation, these drives were able to keep up with the 8500's video circuitry.
The 8500 was introduced alongside the Power Macintosh 7200 and 7500 at the 1995 MacWorld Expo in Boston. Apple referred to these machines collectively as the "Power Surge" line, communicating that they offered a significant speed improvement over their predecessors. InfoWorld magazine's review of the 8500 showed a performance improvement in its "business applications suite" from 10 minutes with the 8100/100 to 7:37 for the 8500/120. It also noted that the 8500 ran an average of 24 to 44 percent faster than a similarly clocked Intel Pentium chip, with performance nearly doubling on graphics and publishing tasks.
The 8500's CPU was updated twice during its production run. It originally shipped with a 120 MHz PowerPC 604, later with the same chip running at 150 MHz, and finally with a PowerPC 604e running at 180 MHz. It was succeeded by the Power Macintosh 8600 in February 1997.
Models
Introduced August 8, 1995:
Power Macintosh 8500/120
Introduced January 11, 1996:
Power Macintosh 8515/120
Introduced February 26, 1996:
Workgroup Server 8550/132
Introduced April 22, 1996:
Power Macintosh 8500/132
Power Macintosh 8500/150
Introduced August 5, 1996:
Power Macintosh 8500/180
Introduced September 9, 1996:
Workgroup Server 8550/200 200 MHz PowerPC 604e CPU, 32 MB RAM. US$5,799. Sold with one of three software bundles, titled "Application Server Solution", "Apple Internet Server Solution 2.1", and "AppleShare Server Solution".
Timeline
References
External links
Power Macintosh 8500/120 at everymac.com.
8500
8500
Macintosh towers
Computer-related introductions in 1995 |
https://en.wikipedia.org/wiki/Great%20Bend%2C%20Pennsylvania | Great Bend is a borough in Susquehanna County, Pennsylvania, United States, north of Scranton. According to 2020 Census data, Great Bend's population was 634, down 13.6% from 2010. Great Bend sits along the Susquehanna River, less than two miles (about 3 km) from the New York State border, and is located directly off Interstate 81. Several small manufacturers also call Great Bend home. Great Bend is considered a bedroom community of the Binghamton, NY metropolitan area. Downtown Binghamton is roughly from Great Bend. The borough has three public parks. Billy Greenwood Memorial Park on Kilrow Avenue and Veterans' Memorial Park on Spring St. overlook the Susquehanna River. Great Bend is within the Blue Ridge School District.
History
Great Bend Borough was incorporated on November 19, 1861 from parts of Great Bend Township. The borough was named for a bend in the Susquehanna River.
Geography
Great Bend is located at (41.973226, -75.744376).
According to the United States Census Bureau, the borough has a total area of , all land.
Demographics
As of the census of 2010, there were 734 people, 341 households, and 194 families residing in the borough. The population density was . There were 369 housing units at an average density of . The racial makeup of the borough was 97.7% White, 0.4% Asian, 0.3% some other race, and 1.6% two or more races. Hispanic or Latino of any race composed 1.4% of the population.
There were 341 households, out of which 22.6% had children under the age of 18 living with them, 41.1% were married couples living together, 10% had a female householder with no husband present, and 43.1% were non-families. 36.1% of all households were made up of individuals, and 18.5% had someone living alone who was 65 years of age or older. The average household size was 2.15 and the average family size was 2.74.
In the borough the population was spread out, with 19.8% under the age of 18, 59.1% from 18 to 64, and 21.1% who were 65 years of age or older. The median age was 46 years.
The median income for a household in the borough was $41,776, and the median income for a family was $52,381. Males had a median income of $33,750 versus $29,138 for females. The per capita income for the borough was $21,634. About 1.2% of families and 6.3% of the population were below the poverty line, including 3.5% of those under age 18 and 8.9% of those age 65 or over.
Notable people
Charles L. Catlin, Wisconsin state legislator and lawyer, was born in Great Bend.
Fanny DuBois Chase (1828–1902), social reformer and author
Sylvia Dubois (1778/89 - 1888), African-American woman born into slavery, became free after striking her mistress while living in Great Bend
References
External links
Boroughs in Susquehanna County, Pennsylvania
Populated places established in 1861
Pennsylvania populated places on the Susquehanna River
1861 establishments in Pennsylvania |
https://en.wikipedia.org/wiki/BETA%20%28programming%20language%29 | BETA is a pure object-oriented language originating within the "Scandinavian School" of object-orientation, where the first object-oriented language, Simula, was developed. Among its notable features, it introduced nested classes and unified classes with procedures into so-called patterns.
The project is inactive as of October 2020.
Features
Technical overview
From a technical perspective, BETA provides several unique features. Classes and procedures are unified into one concept, a pattern. Also, classes are defined as properties/attributes of objects. This means that a class cannot be instantiated without an explicit object context. A consequence of this is that BETA supports nested classes. Classes can also be defined as virtual, much like virtual methods in most object-oriented programming languages. Virtual entities (such as methods and classes) are never overridden; instead, they are redefined or specialized.
BETA supports the object-oriented perspective on programming and has comprehensive facilities for procedural and functional programming. It has powerful abstraction mechanisms to support identification of objects, classification and composition. BETA is a statically typed language like Simula, Eiffel and C++, with most type checking done at compile-time. BETA aims to achieve an optimal balance between compile-time type checking and run-time type checking.
Patterns
A major and peculiar feature of the language is the concept of patterns. In other programming languages, such as C++, one would have separate classes and procedures; BETA expresses both concepts using patterns.
For example, a simple class in C++ would have the form
class point {
int x, y;
};
In BETA, the same class could be represented by the pattern
point: (#
x, y: @integer
#)
That is, a class called point will have two fields, x and y, of type integer. The symbols (# and #) delimit patterns. The colon is used to declare patterns and variables. The @ sign before the integer type in the field definitions specifies that these are integer items and not, by contrast, references, arrays or other patterns.
As another comparison, a procedure in C++ could have the form
int max(int x, int y)
{
if (x >= y)
{
return x;
}
else
{
return y;
}
}
In BETA, such a function could be written using a pattern
max: (#
x, y, z: @integer
enter (x, y)
do
(if x >= y // True then
x -> z
else
y -> z
if)
exit z
#)
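As a hedged usage sketch (the invocation is illustrative of BETA's evaluation syntax rather than taken from this article's sources), the pattern above would be executed with a statement such as (3, 7) -> max -> m, which feeds 3 and 7 into the enter list and delivers the exit value z into the variable m; the enter, do and exit parts are explained below.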
The x, y and z are local variables. The enter keyword specifies the input parameters to the pattern, while the exit keyword specifies the result of the function. Between the two, the do keyword prefixes the sequence of operations to be made. The conditional block is delimited by (if and if), that is the if keyword becomes part of the opening and closing parenthesis. Truth is checked through // True within an if block. Finally, the assignment operator -> assigns the value on its left hand side to t |
https://en.wikipedia.org/wiki/Fort%20Hancock%2C%20Texas | Fort Hancock is an unincorporated community and census-designated place (CDP) in Hudspeth County, Texas, United States. Its population was 1,213 at the 2020 census.
Fort Hancock is situated on the Mexico–United States border, across from El Porvenir, Chihuahua. The Fort Hancock–El Porvenir International Bridge connects the two communities, and the Fort Hancock Port of Entry is located on the Texas side.
Texas State Highway 20 and the Union Pacific Railroad run through the town.
History
Camp Rice and Fort Hancock
Fort Hancock began as a military establishment named Camp Rice in 1882, along the San Antonio-El Paso Road. Camp Rice had formerly been located at Fort Quitman, and had been established by troops of the 10th U.S. Cavalry "buffalo soldiers". Camp Rice did not grow after moving to this community, and rarely hosted more than 60 men. It was renamed Fort Hancock in 1886 after the death of General Winfield Scott Hancock, a hero of the Battle of Gettysburg. The fort was damaged in a flood that year, but rebuilt. It was damaged again by fires in 1889, then abandoned in 1895. The remains of the old fort are located in a cotton field about west of present-day Fort Hancock.
Town of Fort Hancock
A post office was established in 1886, with Albert Warren as postmaster. In 1887, a new railroad depot was built at Fort Hancock, and by 1890, a town had grown up around it and had a population of 200, a general store, a hotel, and a meat market.
By 1914, the population of the town had dropped to 50, though by 1940, it had increased to 500.
Federal troops were sent to Fort Hancock in 1918 to contain Mexican "bandits and outlaws" operating along the border. The bandits were suspected of being directed by German agents.
In 1995, 13-year-old Ricardo Soto, "trying to get toys for Christmas", fired three rifle shots at a semitrailer traveling along nearby Interstate 10, hoping to blow out a tire so the truck would spill its load. He instead hit Alberto Tarango, the driver of a pickup truck, who died of his wounds two days later.
Officials in Fort Hancock raised the speed limit to in 2006 along their portion of Interstate 10, making it the highest speed limit in the country.
In 2006, CNN did a feature story about Fort Hancock, highlighting the close relationship between families living on the US and Mexican sides of the border. In the introduction, it described how "illegal immigrants risk their lives to cross the border, but not in Fort Hancock, Texas. A casual stroll across the foot bridge gets you in there." In an interview with Hudspeth County Deputy Sheriff Mike Doyal, he described the border as "just an open footway traffic for people coming across", and showed one of the four unguarded foot bridges that connect Fort Hancock to Mexico. Doyal spoke fondly of his Mexican neighbors, saying "those are not the people that we have a problem with, because I'm going to make it real clear that some of t |
https://en.wikipedia.org/wiki/Invariant | Invariant and invariance may refer to:
Computer science
Invariant (computer science), an expression whose value doesn't change during program execution
Loop invariant, a property of a program loop that is true before (and after) each iteration
A data type in method overriding that is neither covariant nor contravariant
Class invariant, an invariant used to constrain objects of a class
Physics, mathematics, and statistics
Invariant (mathematics), a property of a mathematical object that is not changed by a specific operation or transformation
Rotational invariance, the property of a function whose value does not change when arbitrary rotations are applied to its argument
Scale invariance, a property of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor
Topological invariant
Invariant (physics), something that does not change under a transformation, such as from one reference frame to another
Invariant estimator in statistics
Measurement invariance, a statistical property of measurement
Oxford University Invariant Society, an Oxford student mathematics club
Other uses
Invariant (linguistics), a word that does not undergo inflection
Invariant (music)
Writer invariant, property of a text which is similar in all texts of a given author, and different in texts of different authors
Invariance (magazine), a French Communist journal
Invariances, a 2001 book by philosopher Robert Nozick
See also |
https://en.wikipedia.org/wiki/IBM%20Blue%20Gene | Blue Gene was an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption.
The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500 and Green500 rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list. The project was awarded the 2009 National Medal of Technology and Innovation.
As of 2015, IBM appears to have ended development of the Blue Gene family, though no formal announcement has been made. IBM has since focused its supercomputer efforts on the OpenPower platform, using accelerators such as FPGAs and GPUs to address the diminishing returns of Moore's law.
History
In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T. J. Watson Research Center and led by William R. Pulleyblank.
At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: The 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other; and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.
In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS. It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL BlueGene/L installation held the first position in the TOP500 list for 3.5 years, until in June 2008 it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, which was the first system to surpass the 1 |
https://en.wikipedia.org/wiki/Memory%20hierarchy | In computer organisation, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower level programming constructs involving locality of reference.
Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories in which each member is typically smaller and faster than the member below it. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling to activate the transfer.
There are four major storage levels.
Internal – Processor registers and cache.
Main – the system RAM and controller cards.
On-line mass storage – Secondary storage.
Off-line bulk storage – Tertiary and Off-line storage.
This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture, and one can include a level of nearline storage between online and offline storage.
Properties of the technologies in the memory hierarchy
Adding complexity slows down the memory hierarchy.
CMOx memory technology stretches the Flash space in the memory hierarchy
One of the main ways to increase system performance is minimising how far down the memory hierarchy one has to go to manipulate data; the sketch after this list illustrates the cost difference.
Latency and bandwidth are two metrics associated with caches. Neither of them is uniform, but is specific to a particular component of the memory hierarchy.
Predicting where in the memory hierarchy the data resides is difficult.
The location in the memory hierarchy dictates the time required for a prefetch to occur.
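A hedged sketch of that point (illustrative only; the actual sizes and timings depend entirely on the machine): the two loops below read the same 64 MB array, but the first walks it in memory order and is served mostly from the fast, high levels of the hierarchy, while the second strides across it and pays a trip down the hierarchy for almost every element.

// Row-major versus column-major traversal of the same array. Both loops
// compute the same sum; only the order of memory accesses differs.
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 4096;                // 4096 x 4096 ints = 64 MB
    std::vector<int> a(n * n, 1);
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)        // sequential: cache-friendly
        for (std::size_t j = 0; j < n; ++j)
            sum += a[i * n + j];
    auto t1 = std::chrono::steady_clock::now();

    for (std::size_t j = 0; j < n; ++j)        // strided: a cache miss on almost every access
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i * n + j];
    auto t2 = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> fast = t1 - t0, slow = t2 - t1;
    std::cout << "row-major: " << fast.count() << " ms, "
              << "column-major: " << slow.count() << " ms "
              << "(checksum " << sum << ")\n";
}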
Examples
The number of levels in the memory hierarchy and the performance at each level have increased over time. The types of memory and storage components have also changed over time. For example, the memory hierarchy of an Intel Haswell Mobile processor circa 2013 is:
Processor registers – the fastest possible access (usually 1 CPU cycle). A few thousand bytes in size
Cache
Level 0 (L0) Micro operations cache – 6,144 bytes (6 KiB) in size
Level 1 (L1) Instruction cache – 128 KiB in size
Level 1 (L1) Data cache – 128 KiB in size. Best access speed is around 700 GB/s
Level 2 (L2) Instruction and data (shared) – 1 MiB in size. Best access speed is around 200 GB/s
Level 3 (L3) Shared cache – 6 MiB in size. Best access speed is around 100 GB/s
Level 4 (L4) Shared cache – 128 MiB in size. Best access speed is around 40 GB/s
Main memory (Primary storage) – GiB in size. Best access speed is around 10 GB/s. In the case of a NUMA machine, acc |
https://en.wikipedia.org/wiki/ACCESS.bus | ACCESS.bus, or A.b for short, is a peripheral-interconnect computer bus developed by Philips and DEC in the early 1990s, based on Philips' I²C system. It is similar in purpose to USB, in that it allows low-speed devices to be added to or removed from a computer on the fly. Although it was made available earlier than USB, it never became widespread and faded once USB gained in popularity.
History
Apple Computer's Apple Desktop Bus (ADB), introduced in the mid-1980s, allowed all sorts of low-speed devices like mice and keyboards to be daisy-chained into a single port on the computer, greatly reducing the number of ports needed, as well as the resulting cable clutter. ADB was universal on the Macintosh line by the late 1980s, and offered a clear advantage over the profusion of standards being used on PCs.
A.b was an attempt to reproduce these qualities in a new standard for the PC and workstation market. It had two additional advantages over ADB; hot plugging (plug-n-play) and the ability for the devices to have their own host controllers so devices could be plugged together without the need for a host computer to control the communications. Philips also suggested that the ability to plug any A.b device into any computer meant that people with special devices, like mice designed for people with disabilities, might carry their device from machine to machine.
An industry group, the ACCESS.bus Industry Group (ABIG), was created in 1993 to control the development of the standard. There were 29 voting members of the group, including Microsoft. By this point DEC had introduced A.b on some of its workstations, and a number of peripherals had been introduced by a variety of companies.
Development of USB began the next year, in 1994, and the consortium included a number of the members of the A.b group, notably DEC and Microsoft. Interest in A.b waned, leaving Philips as the primary supporter. A.b had a number of technical advantages over USB, which would not re-appear on that system until years later, and it was also easier and less expensive to implement. However, it was also ten to one hundred times slower than USB. USB fit neatly into the performance niche between A.b and FireWire, which made it practical to design a system with USB alone. Intel's backing was another deciding factor: the company began including USB controllers in its standard motherboard control chips, making the cost of implementation roughly that of the connector.
The only widespread use of the A.b system was the DDC2AB interface from the VESA group. VESA needed a standardized bus for communicating device capabilities and status between monitors and computers, and selected I²C because it required only two pins; by re-using existing reserved pins in the standard VGA cable, a complete A.b bus (including power) could be implemented. The bus could then be offered as an external expansion port simply by adding a socket on the monitor case. A number of monitors with A.b connectors started appearing in the mid-1 |
https://en.wikipedia.org/wiki/Thorp%2C%20Washington | Thorp ( ) is an unincorporated community and census-designated place (CDP) in Kittitas County, Washington, United States. In 2015, the population was 317 according to statistics compiled by Data USA.
The town of Thorp is east of Seattle, northwest of Ellensburg, and southeast of Cle Elum. It is located at the narrow west end of the Kittitas Valley, where high elevation forests of the Cascade Range give way to cattle ranches surrounded by farmlands noted for timothy hay, alfalfa, vegetables, and fruit production.
Thorp is named for Fielden Mortimer Thorp, recognized as the first permanent white settler in the Kittitas Valley. He established a homestead at the approach to Taneum Canyon (, ) near the present-day town in 1868. Klála, an ancient Native American village and the largest indigenous settlement in the Kittitas Valley at the arrival of the first white settlers, was located about one mile above the current town site.
Geography
Thorp is located in central Kittitas County at (47.068006, -120.672687). According to the United States Census Bureau, the CDP has a total area of , all of it land.
The town site of Thorp is above the flood plain of the upper Yakima River at an elevation of . It is situated near the river's west bank directly opposite the Hayward Hill slide area and Clark Flats, near the southeastern approach to the Yakima River canyon at the foot of Thorp Prairie. To the west of the town is Taneum Canyon, and to the northwest are Elk Heights, Morrison Canyon and the Sunlight Waters private residential subdivision. Ellensburg, the county seat, is southeast of Thorp.
Northwest of Thorp at the junction of SR 10 and Thorp Highway, the Yakima River emerges from a canyon parallel to a basalt flow, the uppermost layers of which have been dated to 10.5 million years. The Thorp Prairie sits atop the basalt flows and ends at a deep canyon of Miocene columnar basalt structures carved by Swauk Creek whose headwaters are at Blewett Pass along US 97 to the north. The Thorp Prairie deposits were also delivered by the Thorp Glacial episode.
Topography
North and northeast of the town of Thorp along the Yakima River channel is the gradual upward lift of the Thorp Drift, marked by an elevation change due to the incline onto the terminal moraine that marks the furthest advance of the Thorp Glacial stage. Here the Thorp Gravels, which are named for the town of Thorp and the Thorp Glacial episode, are exposed along the ancient river channel in what is known as the "Slide Area". The gravels were formed at the terminus of the Thorp Glacial advance approximately 600,000 years ago.
The Thorp Gravels themselves are believed to be between 3 and 4 million years old. The whole structure is composed of individually layered belts of gravel and sand which are not well consolidated, continually weather, and are prone to continuing erosion and landslides averaging 30 degrees. The area is rich with wildlife, including bald eagles and osprey who hunt for prey |
https://en.wikipedia.org/wiki/Hydroxyproline | (2S,4R)-4-Hydroxyproline, or L-hydroxyproline (C5H9O3N), is an amino acid, abbreviated as Hyp or O, e.g., in Protein Data Bank.
Structure and discovery
In 1902, Hermann Emil Fischer isolated hydroxyproline from hydrolyzed gelatin. In 1905, Hermann Leuchs synthesized a racemic mixture of 4-hydroxyproline.
Hydroxyproline differs from proline by the presence of a hydroxyl (OH) group attached to the gamma carbon atom.
Production and function
Hydroxyproline is produced by hydroxylation of the amino acid proline by the enzyme prolyl hydroxylase following protein synthesis (as a post-translational modification). The enzyme-catalyzed reaction takes place in the lumen of the endoplasmic reticulum. Although it is not directly incorporated into proteins, hydroxyproline comprises roughly 4% of all amino acids found in animal tissue, a proportion greater than that of seven of the translationally incorporated amino acids.
Animals
Collagen
Hydroxyproline is a major component of the protein collagen, comprising roughly 13.5% of mammalian collagen. Hydroxyproline and proline play key roles for collagen stability. They permit the sharp twisting of the collagen helix. In the canonical collagen Xaa-Yaa-Gly triad (where Xaa and Yaa are any amino acid), a proline occupying the Yaa position is hydroxylated to give a Xaa-Hyp-Gly sequence. This modification of the proline residue increases the stability of the collagen triple helix. It was initially proposed that the stabilization was due to water molecules forming a hydrogen bonding network linking the prolyl hydroxyl groups and the main-chain carbonyl groups. It was subsequently shown that the increase in stability is primarily through stereoelectronic effects and that hydration of the hydroxyproline residues provides little or no additional stability.
Non-collagen
Hydroxyproline is found in few proteins other than collagen. For this reason, hydroxyproline content has been used as an indicator of collagen and/or gelatin content. However, the mammalian proteins elastin and argonaute 2 have collagen-like domains in which hydroxyproline is formed. Some snail venoms, the conotoxins, contain hydroxyproline but lack collagen-like sequences.
Hydroxylation of proline has been shown to be involved in targeting the hypoxia-inducible factor (HIF) alpha subunit (HIF-1 alpha) for degradation by proteolysis. Under normoxia (normal oxygen conditions), the EGLN1 protein hydroxylates the proline at position 564 of HIF-1 alpha, which allows ubiquitylation by the von Hippel-Lindau tumor suppressor (pVHL) and subsequent targeting for proteasome degradation.
Plants
Hydroxyproline rich glycoproteins (HRGPs) are also found in plant cell walls. These hydroxyprolines serve as the attachment points for glycan chains which are added as post-translational modifications.
Clinical significance
Proline hydroxylation requires ascorbic acid (vitamin C). The most obvious, first effects (gingival and hair problems) of absence of ascorbic ac |
https://en.wikipedia.org/wiki/Apple%20Desktop%20Bus | Apple Desktop Bus (ADB) is a proprietary bit-serial peripheral bus connecting low-speed devices to computers. It was introduced on the Apple IIGS in 1986 as a way to support low-cost devices like keyboards and mice, allowing them to be connected together in a daisy chain without the need for hubs or other devices. Apple Desktop Bus was quickly introduced on later Macintosh models, on later models of NeXT computers, and saw some other third-party use as well. Like the similar PS/2 connector used in many PC-compatibles at the time, Apple Desktop Bus was rapidly replaced by USB as that system became popular in the late 1990s; the last external Apple Desktop Bus port on an Apple product was in 1999, though it remained as an internal-only bus on some Mac models into the 2000s.
History
AppleBus
Early during the creation of the Macintosh computer, the engineering team had selected the fairly sophisticated Zilog 8530 to supply serial communications. This was initially done to allow multiple devices to be plugged into a single port, using simple communication protocols implemented inside the 8530 to allow them to send and receive data with the host computer.
During development of this AppleBus system, computer networking became a vitally important feature of any computer system. With no card slots, the Macintosh was unable to easily add support for Ethernet or similar local area networking standards. Work on AppleBus was re-directed to networking purposes, and was released in 1985 as the AppleTalk system. This left the Mac with the original single-purpose mouse and keyboard ports, and no general-purpose system for low-speed devices to use.
Apple Desktop Bus
The first system to use Apple Desktop Bus was the Apple IIGS of 1986. It was used on all Apple Macintosh machines starting with the Macintosh II and Macintosh SE. Apple Desktop Bus was also used on later models of NeXT computers. The vast majority of Apple Desktop Bus devices are for input, including trackballs, joysticks, graphics tablets and similar devices. Special-purpose uses included software protection dongles and even the TelePort modem.
Move to USB
The first Macintosh to move on from Apple Desktop Bus was the iMac in 1998, which uses USB in its place. The last Apple computer to have an Apple Desktop Bus port is the Power Macintosh G3 (Blue and White) in 1999. PowerPC-based PowerBooks and iBooks still used the Apple Desktop Bus protocol in the internal interface with the built-in keyboard and touchpad. Subsequent models use a USB-based trackpad.
Design
Physical
In keeping with Apple's general philosophy of industrial design, Apple Desktop Bus was intended to be as simple to use as possible, while still being inexpensive to implement. A suitable connector was found in the form of the 4-pin mini-DIN connector, which is also used for (but incompatible with) S-Video. The connectors are small, widely available, and can only be inserted the "correct way". They do not lock into position, but |
https://en.wikipedia.org/wiki/Bot | Bot or BOT may refer to:
Sciences
Computing and technology
Chatbot, a computer program that converses in natural language
Internet bot, a software application that runs automated tasks (scripts) over the Internet
a Spambot, an internet bot designed to assist in the sending of spam
Internet Relay Chat bot, a set of scripts or an independent program that connects to Internet Relay Chat as a client
Robot, or "bot", a mechanical device that can perform physical tasks
Social bot, a type of chatbot that is employed in social media networks to automatically generate messages
Twitter bot, a program used to produce automated posts on the Twitter microblogging service
Trading bot, a program in an automated trading system, linked to an exchange or broker, that automates trading using algorithms; providers include HaasOnline, Cryptohopper, and MetaTrader 4
Video game bot, a computer-controlled player or opponent
Wikipedia bot, an internet bot which performs tasks in Wikipedia
Zombie (computer science), a zombie computer is part of a botnet
Biology and medicine
BOT, base of tongue, in medicine
Bot, the lesion caused by a botfly larva
Borderline ovarian tumor, a tumor of the ovaries
Places
Bni Ounjel Tafraout, a commune in Taounate Province, Taza-Al Hoceima-Taounate, Morocco
Bot, Tarragona, a town in Spain
Bot River, South Africa
Botswana, IOC and FIFA trigram BOT
British Overseas Territories, territories under the jurisdiction and sovereignty of the United Kingdom
Bucharest Old Town
People
Ajaw B'ot, 8th century Maya king of the city of Seibal
Ben Bot, Dutch politician
G. W. Bot, Australian printmaker, sculptor, painter and graphic artist
Jeanne Bot, French supercentenarian
Theo Bot, Dutch politician
Brands and enterprises
Bank of Taiwan, a bank headquartered in Taipei, Taiwan
Bank of Tanzania, the central bank of the United Republic of Tanzania
Bank of Thailand, the central bank of Thailand
Blue Orange Theatre, an independent theatre in Birmingham, England
Bolt On Technology, an American software development company
The Bank of Tokyo, a defunct Japanese bank now part of The Bank of Tokyo-Mitsubishi UFJ
Bot, a line of budget desktop PCs manufactured by Alienware
Business
Balance of trade, difference between the monetary value of exports and imports
Build–operate–transfer, a form of project financing
Sports
Bobby Orr Trophy, the championship trophy of the Eastern Conference of the Ontario Hockey League
Brava Opening Tournament, a football tournament in Brava, Cape Verde
Transportation
Air Botswana (ICAO: BOT)
Bosset Airport (IATA: BOT), in Bosset, Papua New Guinea
Bryn Oer Tramway, a narrow gauge railway built in South Wales in 1814
Other uses
"B.O.T.", a 1986 episode of The Transformers
Bot caste, a Hindu caste of Nepali origin found in the Indian state of Uttar Pradesh
Bot people, or Boto people, a community in Jammu and Kashmir
Bot, short for ubosot, the ordination hall of a Buddh |
https://en.wikipedia.org/wiki/Lagard%C3%A8re%20News | Lagardère News, formerly known as Lagardère Active, is the media activities arm of the French Lagardère Group.
Its subsidiaries include Lagardère's radio operations, television networks, and book and magazine publishers.
In 2018, Arnaud Lagardère announced that Lagardère would be disposing of its media assets, which it carried out over the course of the year. This included its stake in Marie Claire, its radio businesses in Eastern Europe and Africa, and its press titles in France, including Elle.
See also
References
External links
Ketupa.net: Ketupa - Hachette-Filipacchi — extensive profile.
French-language television networks
Television networks in France |
https://en.wikipedia.org/wiki/Micro%20Channel%20architecture | Micro Channel architecture, or the Micro Channel bus, is a proprietary 16- or 32-bit parallel computer bus introduced by IBM in 1987 which was used on PS/2 and other computers until the mid-1990s. Its name is commonly abbreviated as "MCA", although not by IBM. In IBM products, it superseded the ISA bus and was itself subsequently superseded by the PCI bus architecture.
Background
The development of Micro Channel was driven by both technical and business pressures.
Technology
The IBM AT bus, which later became known as the Industry Standard Architecture (ISA) bus, had a number of technical design limitations, including:
A slow bus speed.
A limited number of interrupts, fixed in hardware.
A limited number of I/O device addresses, also fixed in hardware.
Hardwired and complex configuration with no conflict resolution.
Deep links to the architecture of the 80x86 chip family.
In addition, it suffered from other problems:
Poor grounding and power distribution.
Undocumented bus interface standards that varied between systems and manufacturers.
These limitations became more serious as the range of tasks and peripherals, and the number of manufacturers for IBM PC-compatibles, grew. IBM was already investigating the use of RISC processors in desktop machines, and could, in theory, save considerable money if a single well-documented bus could be used across their entire computer lineup.
Market share
It was thought that by creating a new standard, IBM would regain control of standards via the required licensing. As patents can take three years or more to be granted, however, only those relating to ISA could be licensed when Micro Channel was announced. Patents on important Micro Channel features, such as Plug and Play automatic configuration, were not granted to IBM until after PCI had replaced Micro Channel in the marketplace. The overall reception was tepid and the impact of Micro Channel in the worldwide PC market was minor.
Design
The Micro Channel architecture was designed by engineer Chet Heath. Many of the Micro Channel cards that were developed used the CHIPS P82C612 MCA interface controller, which made MCA implementations considerably easier.
Overview
The Micro Channel was primarily a 32-bit bus, but the system also supported a 16-bit mode designed to lower the cost of connectors and logic in Intel-based machines like the IBM PS/2.
The situation was never that simple, however, as both the 32-bit and 16-bit versions initially had a number of additional optional connectors for memory cards, which resulted in a huge number of physically incompatible cards for bus-attached memory. In time, memory moved to the CPU's local bus, thereby eliminating the problem. On the upside, signal quality was greatly improved, as Micro Channel added ground and power pins and arranged the pins to minimize interference; a ground or a supply was thereby located within 3 pins of every signal.
Another connector extension was included for graphics cards. Thi |
https://en.wikipedia.org/wiki/Linux%20framebuffer | The Linux framebuffer (fbdev) is a Linux subsystem used to show graphics on a computer monitor, typically on the system console.
It was designed as a hardware-independent API to give user space software access to the framebuffer (the part of a computer's video memory containing a current video frame) using only the Linux kernel's own basic facilities and its device file system interface, avoiding the need for libraries like SVGAlib which effectively implemented video drivers in user space.
In most applications, fbdev has been superseded by the Linux Direct Rendering Manager (DRM) subsystem, but as of 2022, several drivers provide both DRM and fbdev APIs for backwards compatibility with software that has not been updated to use the DRM system, and there are still fbdev drivers for older (mostly embedded) hardware that does not have a DRM driver.
Applications
There are three applications of the Linux framebuffer:
An implementation of the Linux text console that does not use hardware text mode (useful when that mode is unavailable, or to overcome its restrictions on glyph size, number of code points, etc.). One popular aspect of this is the ability to have the console show the Tux logo at boot.
A possible graphic output method for a display server, independent of video adapter hardware and its drivers.
Graphic programs avoiding the overhead of the X Window System.
Examples of the third application include Linux programs such as MPlayer, links2, Netsurf, w3m, fbff, fbida, and fim and libraries such as GLUT, SDL (version 1.2), GTK, and Qt, which can all use the framebuffer directly. This use case is particularly popular in embedded systems.
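A minimal sketch of such direct framebuffer use follows. It is illustrative only and assumes a 32 bits-per-pixel mode, permission to open /dev/fb0, and a console not currently owned by a display server; error handling is reduced to the bare minimum.

// Query the current mode of /dev/fb0 via the fbdev ioctl interface,
// map the framebuffer into memory, and fill the screen with a colour.
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo var{};                   // resolution, bits per pixel
    fb_fix_screeninfo fix{};                   // line length in bytes
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) { perror("ioctl"); return 1; }

    std::size_t size = fix.line_length * var.yres;
    auto* fb = static_cast<std::uint8_t*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    for (std::uint32_t y = 0; y < var.yres; ++y)   // assumes 32 bpp, 4 bytes per pixel
        for (std::uint32_t x = 0; x < var.xres; ++x)
            *reinterpret_cast<std::uint32_t*>(
                fb + y * fix.line_length + x * 4) = 0x00336699;

    munmap(fb, size);
    close(fd);
}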
The now-defunct DirectFB is another project aimed at providing a framework for hardware acceleration of the Linux framebuffer.
There was also a windowing system called FramebufferUI (fbui) implemented in kernel-space that provided a basic 2D windowing experience with very little memory use.
History
Linux has had generic framebuffer support since kernel 2.1.109.
It was originally implemented to allow the kernel to emulate a text console on systems, such as the Apple Macintosh, that do not have a text-mode display, and was later expanded to cover the IBM PC compatible platform that Linux originally supported.
See also
Direct Rendering Infrastructure
KMS driver
SVGAlib
References
External links
XFree86 doc
Free software programmed in C
Free system software
Interfaces of the Linux kernel
Linux APIs |
https://en.wikipedia.org/wiki/Flyweight%20pattern | In computer programming, the flyweight software design pattern refers to an object that minimizes memory usage by sharing some of its data with other similar objects. The flyweight pattern is one of twenty-three well-known GoF design patterns. These patterns promote flexible object-oriented software design, which is easier to implement, change, test, and reuse.
In other contexts, the idea of sharing data structures is called hash consing.
The term was first coined, and the idea extensively explored, by Paul Calder and Mark Linton in 1990 to efficiently handle glyph information in a WYSIWYG document editor. Similar techniques were already used in other systems, however, as early as 1988.
Overview
The flyweight pattern is useful when dealing with large numbers of objects with simple repeated elements that would use a large amount of memory if individually stored. It is common to hold shared data in external data structures and pass it to the objects temporarily when they are used.
A classic example is the data structures used to represent characters in a word processor. Naively, each character in a document might have a glyph object containing its font outline, font metrics, and other formatting data. However, this would use hundreds or thousands of bytes of memory for each character. Instead, each character can hold a reference to a glyph object shared by every instance of the same character in the document; only the position of each character then needs to be stored internally.
As a result, flyweight objects can:
store intrinsic state that is invariant, context-independent and shareable (for example, the code of character 'A' in a given character set)
provide an interface for passing in extrinsic state that is variant, context-dependent and can't be shared (for example, the position of character 'A' in a text document)
Clients can reuse Flyweight objects and pass in extrinsic state as necessary, reducing the number of physically created objects.
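A minimal sketch of the pattern using the glyph example above (the names are illustrative, not from any particular toolkit): intrinsic state lives in shared Glyph objects handed out by a factory, while the extrinsic position is supplied by the client on each call.

// Flyweight glyphs: one shared object per distinct character.
#include <iostream>
#include <memory>
#include <string>
#include <unordered_map>

class Glyph {                                  // flyweight: intrinsic, shareable state
public:
    explicit Glyph(char c) : c_(c) {}          // notionally also outline and metrics
    void draw(int x, int y) const {            // extrinsic state passed in by the client
        std::cout << "draw '" << c_ << "' at (" << x << ',' << y << ")\n";
    }
private:
    char c_;
};

class GlyphFactory {                           // creates and shares flyweights
public:
    const Glyph& get(char c) {
        auto it = pool_.find(c);
        if (it == pool_.end())
            it = pool_.emplace(c, std::make_unique<Glyph>(c)).first;
        return *it->second;
    }
private:
    std::unordered_map<char, std::unique_ptr<Glyph>> pool_;
};

int main() {
    GlyphFactory factory;
    std::string text = "pattern";              // the two 't's share one Glyph object
    int x = 0;
    for (char c : text)
        factory.get(c).draw(x++, 0);           // position per occurrence, glyph shared
}

The document then stores only one Glyph per distinct character plus a position per occurrence, rather than a full glyph record per occurrence.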
Structure
The above UML class diagram shows:
the Client class, which uses the flyweight pattern
the FlyweightFactory class, which creates and shares Flyweight objects
the Flyweight interface, which takes in extrinsic state and performs an operation
the Flyweight1 class, which implements Flyweight and stores intrinsic state
The sequence diagram shows the following run-time interactions:
The Client object calls getFlyweight(key) on the FlyweightFactory, which returns a Flyweight1 object.
After calling operation(extrinsicState) on the returned Flyweight1 object, the Client again calls getFlyweight(key) on the FlyweightFactory.
The FlyweightFactory returns the already-existing Flyweight1 object.
Implementation details
There are multiple ways to implement the flyweight pattern. One example is mutability: whether the objects storing extrinsic flyweight state can change.
Immutable objects are easily shared, but require creating new extrinsic objects whenever a change in state occur |
https://en.wikipedia.org/wiki/ATOLL%20%28programming%20language%29 | Acceptance, Test Or Launch Language (ATOLL) was the programming language used for automating the checking and launch of Saturn rockets.
References
Saturn (rocket family)
Avionics programming languages |
https://en.wikipedia.org/wiki/Access%20query%20language | Access, the successor to ENGLISH, is an English-like query language used in the Pick operating system.
The original name ENGLISH is something of a misnomer, as PICK's flexible dictionary structure meant that file and attribute names could be given aliases in any natural language. For instance, the command SORT could be given the alias TRIEZ, the file CUSTOMER the alias CLIENT, the attribute BALANCE the alias BILAN, and the particle BY the alias PAR. This would allow the database to be interrogated using the French-language command string "TRIEZ CLIENT PAR BILAN", producing a list of customers sorted by balance.
Etymology
The Access query (or enquiry) language is known by different names on different implementations of Pick: with English, Info/Access, Inform and Recall all being used.
References
Footnotes
Sources
Further reading
Query languages |
https://en.wikipedia.org/wiki/BS2000 | BS2000 is an operating system for IBM 390-compatible mainframe computers, developed in the 1970s by Siemens (Data Processing Department EDV) and from the early 2000s onward by Fujitsu Technology Solutions.
Unlike other mainframe systems, BS2000 provides exactly the same user and programming interface in all operating modes (batch, interactive and online transaction processing) and regardless of whether it is running natively or as a guest system in a virtual machine. This uniformity of the user interface and the entire BS2000 software configuration makes administration and automation particularly easy.
Currently it is mainly used in Germany, which accounts for up to 83% of its total user base, as well as in the United Kingdom (8%), Belgium (4.8%) and other European countries (4.2%).
History
BS2000 has its roots in the Time Sharing Operating System (TSOS) first developed by RCA for the /46 model of the Spectra/70 series, a computer family of the late 1960s related in its architecture to IBM's /360 series. It was an early operating system which used virtual addressing and a segregated address space for the programs of different users.
From the outset TSOS also allowed data peripherals to be accessed only via record- or block-oriented file interfaces, thereby preventing the necessity to implement device dependencies in user programs. The same operating system was also sold to Sperry Univac when it bought most of RCA's computer division. Univac's "fork" of TSOS would become VS/9, which used many of the same concepts.
1970s
In 1973, BS2000 V1.0 was released as a port of the TSOS operating system to models of the Siemens system 7.700.
In June 1975, Siemens shipped the enhanced BS2000 V2.0 version of the TSOS operating system for the models of the Siemens 7.700 mainframe series for the first time under the name BS2000. This first version supported disk paging and three different operating modes in the same system: interactive dialog, batch, and transaction mode, a precursor of online transaction processing.
In 1977, the TRANSDATA communication system brought computer networking support to BS2000.
In 1978, multiprocessor technology was introduced. The operating system had the ability to cope with a processor failure. At the same time the new technology considerably extended the performance range of the system.
In 1979, a transaction processing monitor, the Universal Transaction Monitor (UTM), was introduced, providing support for online transaction processing as an additional operating mode.
1980s
In 1980, Siemens introduced the system 7.500 hardware family, ranging from desk size models for use in office environments to large models with water cooling.
In 1987, BS2000 V9.0 was ported to the /370 architecture and supported 2 GB address spaces, 512 processes and the XS channel system (Dynamic Channel Subsystem).
BS2000 was subdivided into subsystems decoupled from one another.
1990s
With the advent of the VM2000 virtual machine in 1990, multiple BS2000 systems, of the same or different versions, can |
https://en.wikipedia.org/wiki/Siemens%20Nixdorf | Siemens Nixdorf Informationssysteme, AG (SNI) was formed in 1990 by the merger of Nixdorf Computer and the Data Information Services (DIS) division of Siemens.
It functioned as a separate company within Siemens.
It was the largest information technology company in Europe until 1999, when it was split into two companies: Fujitsu Siemens Computers and Wincor Nixdorf. Wincor Nixdorf took over all banking- and retail-related business.
Products
SNI sold:
BS2000 and SINIX operating systems
BS2000 mainframe computers
a number of databases
SNI RISC-based RM-x00 servers
a variety of other hardware and software products (from Personal Computers to SAP R/3).
ComfoDesk – a GUI shell for enterprise users
See also
Heinz Nixdorf MuseumsForum
References
Defunct computer companies of Germany
Diebold Nixdorf
Siemens
Defunct technology companies of Germany
Computer hardware companies of Germany
Computer companies established in 1990
Computer companies disestablished in 1999
1990 establishments in Germany
1999 disestablishments in Germany |
https://en.wikipedia.org/wiki/Vienna%20Development%20Method | The Vienna Development Method (VDM) is one of the longest-established formal methods for the development of computer-based systems. Originating in work done at the IBM Laboratory Vienna in the 1970s, it has grown to include a group of techniques and tools based on a formal specification language—the VDM Specification Language (VDM-SL). It has an extended form, VDM++, which supports the modeling of object-oriented and concurrent systems. Support for VDM includes commercial and academic tools for analyzing models, including support for testing and proving properties of models and generating program code from validated VDM models. There is a history of industrial usage of VDM and its tools, and a growing body of research in the formalism has led to notable contributions to the engineering of critical systems, compilers, concurrent systems and logic for computer science.
Philosophy
Computing systems may be modeled in VDM-SL at a higher level of abstraction than is achievable using programming languages, allowing the analysis of designs and identification of key features, including defects, at an early stage of system development. Models that have been validated can be transformed into detailed system designs through a refinement process. The language has a formal semantics, enabling proof of the properties of models to a high level of assurance. It also has an executable subset, so that models may be analyzed by testing and executed through graphical user interfaces, allowing models to be evaluated by experts who are not necessarily familiar with the modeling language itself.
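To make the specification-versus-refinement idea concrete, the following is a minimal sketch in Python rather than VDM-SL, with invented function names: an operation is specified implicitly by pre- and post-conditions, and an explicit implementation (one possible refinement) is validated against that specification by testing.

import math  # not required here, kept only if you extend the checks

def isqrt_spec(x: int, r: int) -> bool:
    # Post-condition of the implicit specification:
    # r is the integer square root of x.
    return r * r <= x < (r + 1) * (r + 1)

def isqrt(x: int) -> int:
    # One possible refinement of the specification into an algorithm.
    assert x >= 0, "pre-condition: x must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

# Validate the refinement against the specification by testing.
for x in range(100):
    assert isqrt_spec(x, isqrt(x))

In VDM proper the same relationship between specification and refinement can be established by proof rather than by testing; the sketch only mirrors the split between the two.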
History
The origins of VDM-SL lie in the IBM Laboratory in Vienna, where the first version of the language was called the Vienna Definition Language (VDL). VDL was essentially used for giving operational semantics descriptions, in contrast to the VDM Meta-IV, which provided denotational semantics.
«Towards the end of 1972 the Vienna group again turned their attention to the problem of systematically developing a compiler from a language definition. The overall approach adopted has been termed the "Vienna Development Method"... The meta-language actually adopted ("Meta-IV") is used to define major portions of PL/1 (as given in ECMA 74 – interestingly a "formal standards document written as an abstract interpreter") in BEKIČ 74.»
There is no connection between Meta-IV and Schorre's META II language, or its successor Tree Meta; these were compiler-compiler systems rather than languages suitable for formal problem descriptions.
So Meta-IV was "used to define major portions of" the PL/I programming language. Other programming languages retrospectively described, or partially described, using Meta-IV and VDM-SL include BASIC, FORTRAN, APL, ALGOL 60, Ada and Pascal. Meta-IV evolved into several variants, generally described as the Danish, English and Irish Schools.
The "Engl |
https://en.wikipedia.org/wiki/Specification%20language | A specification language is a formal language in computer science used during systems analysis, requirements analysis, and systems design to describe a system at a much higher level than a programming language, which is used to produce the executable code for a system.
Overview
Specification languages are generally not directly executed. They are meant to describe the what, not the how. It is considered an error if a requirement specification is cluttered with unnecessary implementation detail.
A common fundamental assumption of many specification approaches is that programs are modelled as algebraic or model-theoretic structures that include a collection of sets of data values together with functions over those sets. This level of abstraction coincides with the view that the correctness of the input/output behaviour of a program takes precedence over all its other properties.
In the property-oriented approach to specification (taken e.g. by CASL), specifications of programs consist mainly of logical axioms, usually in a logical system in which equality has a prominent role, describing the properties that the functions are required to satisfy—often just by their interrelationship.
This is in contrast to so-called model-oriented specification in frameworks like VDM and Z, which consist of a simple realization of the required behaviour.
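As an illustration of the contrast, here is a minimal sketch in Python, using a hypothetical stack example not taken from CASL, VDM or Z: the model-oriented style realizes a stack directly on a concrete model (a sequence), while the property-oriented style states axioms such as pop(push(s, x)) = s that any realization must satisfy, here checked by testing rather than written as logical axioms.

# Model-oriented: realize the behaviour directly on a concrete model.
Stack = tuple[int, ...]  # a stack is modelled as an immutable sequence

def empty() -> Stack:
    return ()

def push(s: Stack, x: int) -> Stack:
    return s + (x,)

def pop(s: Stack) -> Stack:
    assert s, "pre-condition: stack must be non-empty"
    return s[:-1]

def top(s: Stack) -> int:
    assert s, "pre-condition: stack must be non-empty"
    return s[-1]

# Property-oriented: axioms the operations must satisfy, checked by testing.
def check_axioms(s: Stack, x: int) -> None:
    assert pop(push(s, x)) == s   # pop undoes push
    assert top(push(s, x)) == x   # top yields the most recently pushed value

check_axioms(empty(), 42)
check_axioms(push(empty(), 1), 2)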
Specifications must be subject to a process of refinement (the filling-in of implementation detail) before they can actually be implemented. The result of such a refinement process is an executable algorithm, which is either formulated in a programming language, or in an executable subset of the specification language at hand. For example, Hartmann pipelines, when properly applied, may be considered a dataflow specification which is directly executable. Another example is the actor model which has no specific application content and must be specialized to be executable.
An important use of specification languages is enabling the creation of proofs of program correctness (see theorem prover).
Languages
See also
Formal specification
Language-independent specification
Specification and Description Language
Unified Modeling Language
References
External links
Computer languages
Scientific modelling
Formal specification |
https://en.wikipedia.org/wiki/BOS/360 | Basic Operating System/360 (BOS/360) was an early IBM System/360 operating system.
Origin
BOS was one of four System/360 Operating System versions developed by the IBM General Products Division (GPD) in Endicott, New York to fill a gap at the low end of the System/360 line when it became apparent that OS/360 was not able to run on the smallest systems. BPS (Basic Programming Support) was designed to run on systems with a minimum of 8 KB of main storage and no disk. BOS was intended for disk systems with at least 8 KB and one 2311 disk drive. DOS and TOS were developed from BOS for systems with at least 16 KB and either disks (DOS) or tape drives only (TOS).
BOS was released in October 1965, nearly two years before OS/360; thus BOS was the only disk-based operating system available at launch for a machine that was marketed as disk-based.
Components
BOS consisted of the following components:
Control programs:
The supervisor.
Job control capable of running jobs sequentially from the card reader.
The IPL loader.
System Service Programs:
The Linkage Editor.
The Librarian, supporting a core-image library, and optionally a macro library and a relocatable library.
The "Load System Program," a sysgen program to build a disk-resident BOS system from cards.
IBM-supplied processing programs which could be installed with BOS:
Language translators: an assembler and an RPG compiler. Compilers for FORTRAN IV and COBOL were added later.
Autotest, a debugging aid.
Sort/Merge.
Utility programs for file-to-file copy between devices and formats.
Remote Job Entry allowing the BOS system to submit jobs to a remote System/360 and receive output.
Data Management, consisting of supervisor support for Physical IOCS, and macros for Logical IOCS which could be incorporated into the user's processing programs.
IBM 1070 Process Communication Supervisor
The IBM 1070 Process Communication Supervisor was a dedicated process control system that ran as an extension under BOS: "Relying on the BOS supervisor to handle ordinary physical and logical I/O operations (i.e., for cards, disk, etc.), the PC supervisor is specialized to the process control aspects of the user's program."
References
Further reading
Pugh, Emerson W.; Johnson, Lyle R.; Palmer, John H. (1991). IBM's 360 and Early 370 Systems. Cambridge: MIT Press. pp. 321–345.
IBM mainframe operating systems
Discontinued operating systems
Assembly language software
1965 software |
https://en.wikipedia.org/wiki/Cornell%20University%20Center%20for%20Advanced%20Computing | The Cornell University Center for Advanced Computing (CAC), housed at Frank H. T. Rhodes Hall on the campus of Cornell University, is one of five original centers in the National Science Foundation's Supercomputer Centers Program. It was formerly called the Cornell Theory Center.
Establishment
The Cornell Theory Center (CTC) was established in 1985 under the direction of Cornell Physics Professor and Nobel Laureate Kenneth G. Wilson. In 1984, the National Science Foundation began work on establishing five new supercomputer centers, including the CTC, to provide high-speed computing resources for research within the United States. In 1985, a team from the National Center for Supercomputing Applications began the development of NSFNet, a TCP/IP-based computer network that could connect to the ARPANET at Cornell University and the University of Illinois at Urbana–Champaign. This high-speed network, unrestricted to academic users, became a backbone to which regional networks would be connected. The network initially ran at 56 kbit/s; traffic grew exponentially, and the links were upgraded to 1.5 Mbit/s T1 lines in 1988 and to 45 Mbit/s in 1991. The NSFNet was a major milestone in the development of the Internet, and its rapid growth coincided with the development of the World Wide Web. In the early 1990s, in addition to support from the National Science Foundation, the CTC received funding from the Advanced Research Projects Agency, the National Institutes of Health, New York State, IBM Corporation, SGI, and members of the center's Corporate Research Institute. The center's focus was on developing scalable parallel computing resources for its user community and applying its expertise in parallel algorithm development and optimization to a wide range of scientific and engineering problems.
History
The Cornell University Center for Advanced Computing, and its predecessor the Cornell Theory Center, deployed the first IBM Scalable POWERparallel System SP2 supercomputer and first Dell supercomputer, and established a financial solutions center for supercomputing.
Today, CAC is a partner on the National Science Foundation XSEDE project, a collection of integrated digital resources and services enabling open science research. CAC is also developing training for TACC's Frontera supercomputer, serving as the technical lead for the Scalable Cyberinfrastructure Institute for Multi-Messenger Astrophysics (SCiMMA) project, developing software for the Institute for Research and Innovation in Software for High Energy Physics (IRIS-HEP), and designing cyberinfrastructure for the NANOGrav Physics Frontiers Center.
The International Data Corporation's What the Exascale Era Can Provide report notes a 175-times-faster computation of a CDC hepatitis C model on a CAC MATLAB cloud. CAC was an early implementer of cloud computing with the deployment of Red Cloud. CAC also designed and deployed a federated cloud called Aristotle and builds cloud images and containeri |
https://en.wikipedia.org/wiki/CORC | CORC (the Cornell computing language) was a simple computer language developed at Cornell University in 1962 to serve lay users, specifically students solving math problems. Its developers, industrial engineering professors Richard W. Conway and William L. Maxwell, sought to create a language that could both expose mathematics and engineering students to computing and remove the burden of mechanical problem-solving from their professors.
CORC was designed with ease of use in mind. It contained strains of both FORTRAN and ALGOL but was much simpler. Since programs were tediously input on punched cards, the compiler had a high tolerance for error, attempting to bypass or even correct problem sections of code. Students could submit a program by 5 PM to be compiled and run overnight, with results available the next morning.
It was initially run on the Burroughs 220 and later extended to the CDC 1604. In 1966 it was superseded by CUPL, a batch compiler for teaching which ran on the IBM System/360.
An extension of CORC, the Cornell List Processor (CLP), was a list processing language used for simulation.
References
David N. Freeman. 1964. "Error correction in CORC: the Cornell Computing Language". In Proceedings of the October 27–29, 1964, Fall Joint Computer Conference, Part I (AFIPS '64 (Fall, Part I)). Association for Computing Machinery, 15–34. https://doi.org/10.1145/1464052.1464055
Richard C. Lesser's Recollections: The Cornell Computing Center - the early years, 1953 to 1964.
External links
Resource page for cupl 1.6, providing binary and source code and background information about CUPL and CORC.
Procedural programming languages
Cornell University
Programming languages created in 1962 |
https://en.wikipedia.org/wiki/JOSS | JOSS (acronym for JOHNNIAC Open Shop System) was one of the first interactive, time-sharing programming languages. It pioneered many features that would become common in languages from the 1960s into the 1980s, including use of line numbers as both editing instructions and targets for branches, statements predicated by boolean decisions, and a built-in source-code editor that can perform instructions in direct or immediate mode, what they termed a conversational user interface.
JOSS was initially implemented on the JOHNNIAC machine at RAND Corporation and put online in 1963. It proved very popular, and the users quickly bogged the machine down. By 1964, a higher-performance replacement was being sought. JOHNNIAC was retired in 1966 and replaced by a PDP-6, which ultimately grew to support hundreds of computer terminals based on the IBM Selectric. The terminals used green ink for user input and black for the computer's response. Any command that was not understood elicited the response "Eh?".
The system was highly influential, spawning a variety of ports and offshoots. Some remained similar to the original, like TELCOMP, STRINGCOMP, CAL, CITRAN, ISIS, PIL/I, JEAN (on the ICT 1900 series) and Algebraic Interpretive Dialogue (AID, on the PDP-10). Others, such as FOCAL and MUMPS, developed in distinctive directions. JOSS also bears a strong resemblance to the BASIC interpreters found on microcomputers in the 1980s, differing mainly in syntax details.
History
Initial idea
In 1959, Willis Ware wrote a RAND memo on the topic of computing in which he stated future computers would have "a multiplicity of personal input-output stations, so that many people can interact with the machine at the same time." The memo gained the interest of the US Air Force, RAND's primary sponsor, and in 1960 they formed the Information Processor Project to explore this concept, what would soon be known as time-sharing. The project was not specifically about time-sharing, but aimed to improve human-computer interaction overall. The idea at the time was that constant back-and-forth interaction between the user and the computer would make such work more natural. As JOSS director Keith Uncapher later put it:
A formal proposal to develop what became JOSS on the JOHNNIAC computer was accepted in March 1961.
JOSS-1
JOSS was implemented almost entirely by J. Clifford Shaw, a mathematician who worked in RAND's growing computing division. It was written in a symbolic assembly language called EasyFox (E and F in the US military's then-current phonetic alphabet), also developed by Shaw.
The JOSS system was brought up formally for the first time in May 1963, supporting five consoles, one in the machine room and another four in offices around the building. The early consoles were based on the IBM Model 868 Transmitting Typewriter, as the Selectric had not yet been introduced to market when development began. The first schedule was published on 17 June, with JOSS running for three hours |
https://en.wikipedia.org/wiki/Maximum%20likelihood%20estimation | In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.
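As a worked illustration of that remark, using notation assumed here rather than taken from the text: for the linear model $y_i = x_i^{\mathsf T}\beta + \varepsilon_i$ with i.i.d. errors $\varepsilon_i \sim N(0, \sigma^2)$, the log-likelihood is
$$\ell(\beta) = -\frac{n}{2}\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - x_i^{\mathsf T}\beta\right)^2,$$
so, for any fixed $\sigma^2$, maximizing $\ell$ over $\beta$ is the same as minimizing the residual sum of squares $\sum_{i=1}^{n}(y_i - x_i^{\mathsf T}\beta)^2$, which is exactly the ordinary least squares criterion.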
From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with uniform prior distributions (or a normal prior distribution with a standard deviation of infinity). In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
Principles
We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector $\theta = [\theta_1, \theta_2, \ldots, \theta_k]^{\mathsf T}$ so that this distribution falls within a parametric family $\{ f(\cdot\,; \theta) \mid \theta \in \Theta \}$, where $\Theta$ is called the parameter space, a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ gives a real-valued function,
$$\mathcal{L}_n(\theta) = \mathcal{L}_n(\theta; \mathbf{y}) = f_n(\mathbf{y}; \theta),$$
which is called the likelihood function. For independent and identically distributed random variables, $f_n(\mathbf{y}; \theta)$ will be the product of univariate density functions:
$$f_n(\mathbf{y}; \theta) = \prod_{k=1}^{n} f_k(y_k; \theta).$$
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space, that is,
$$\hat{\theta} = \underset{\theta \in \Theta}{\operatorname{arg\,max}}\; \mathcal{L}_n(\theta; \mathbf{y}).$$
Intuitively, this selects the parameter values that make the observed data most probable. The specific value $\hat{\theta} = \hat{\theta}_n(\mathbf{y}) \in \Theta$ that maximizes the likelihood function is called the maximum likelihood estimate. Further, if the function $\hat{\theta}_n \colon \mathbb{R}^n \to \Theta$ so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument. A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space $\Theta$ that is compact. For an open $\Theta$ the likelihood function may increase without ever reaching a supremum value.
In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood:
$$\ell(\theta; \mathbf{y}) = \ln \mathcal{L}_n(\theta; \mathbf{y}).$$
Since the logarithm is a |
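To make the procedure concrete, here is a minimal numerical sketch in Python, with invented sample data: for observations assumed i.i.d. $N(\mu, \sigma^2)$, the closed-form maximum likelihood estimates are the sample mean and the (biased) sample variance, and the check at the end confirms that other tried parameter values do not achieve a higher log-likelihood.

import math

def log_likelihood(y, mu, sigma):
    # Log-likelihood of i.i.d. observations y under a N(mu, sigma^2) model.
    n = len(y)
    return (-n / 2 * math.log(2 * math.pi * sigma ** 2)
            - sum((yi - mu) ** 2 for yi in y) / (2 * sigma ** 2))

y = [2.1, 1.9, 2.4, 2.2, 1.8]                            # invented sample data
mu_hat = sum(y) / len(y)                                  # MLE of mu: sample mean
var_hat = sum((yi - mu_hat) ** 2 for yi in y) / len(y)    # MLE of sigma^2 (biased)

# The log-likelihood at the MLE should dominate other tried parameter values.
best = log_likelihood(y, mu_hat, math.sqrt(var_hat))
assert best >= log_likelihood(y, 2.0, 0.5)
assert best >= log_likelihood(y, mu_hat, 1.0)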