https://en.wikipedia.org/wiki/Rock%20%27n%27%20Roll%20%28video%20game%29
Rock 'n' Roll is a video game for the Commodore 64, Atari ST, Amiga, MS-DOS, ZX Spectrum, and Amstrad CPC, published by Rainbow Arts in 1989. The game concept and programming are by Frank Prasse. The soundtrack for the Amiga version was composed by Chris Huelsbeck. The title comes from both the rolling of the ball through the levels and the various rock and roll-inspired tunes that play during gameplay. Gameplay Rock 'n' Roll is an action-oriented puzzle game with 32 levels (plus a secret bonus level). The player controls a ball (steered with the mouse on the Amiga or the joystick on the Commodore 64), and the player's job is to reach the exit of each level. Numerous objects help or hinder the player's path to the exit. These include locked doors for which a key must be found, ice that hinders steering, fans that push the ball away, magnets that pull it towards them, crumbling floor tiles, and others. A map of the level is accessible at any time, but it starts out completely blank. The player has to collect eyes along the level to make various objects visible on the map. The player may also buy supplies such as a fastball, a spikeball, and bombs, and even hints for game secrets such as extra money, extra lives and short cuts. Reception "With this title, Rainbow Arts has given proof again, that also simple game ideas can result in complex, entertaining programs." Torsten Oppermann in ASM issue 11/1989 See also Enigma Oxyd Rollin References External links Game Map 1989 video games Amiga games Amstrad CPC games Atari ST games Commodore 64 games DOS games Marble video games Video games scored by Barry Leitch Video games scored by Chris Huelsbeck Video games developed in Germany ZX Spectrum games Rainbow Arts games Multiplayer and single-player video games
https://en.wikipedia.org/wiki/Apache%20Wicket
Apache Wicket, commonly referred to as Wicket, is a component-based web application framework for the Java programming language conceptually similar to JavaServer Faces and Tapestry. It was originally written by Jonathan Locke in April 2004. Version 1.0 was released in June 2005. It graduated into an Apache top-level project in June 2007. Rationale Traditional model-view-controller (MVC) frameworks work in terms of whole requests and whole pages. In each request cycle, the incoming request is mapped to a method on a controller object, which then generates the outgoing response in its entirety, usually by pulling data out of a model to populate a view written in specialized template markup. This keeps the application's flow of control simple and clear, but can make code reuse in the controller difficult. In contrast, Wicket is closely patterned after stateful GUI frameworks such as Swing. Wicket applications are trees of components, which use listener delegates to react to HTTP requests against links and forms in the same way that Swing components react to mouse and keystroke events. Wicket is categorized as a component-based framework. Design Wicket uses plain XHTML for templating (which enforces a clear separation of presentation and business logic and allows templates to be edited with conventional WYSIWYG design tools). Each component is bound to a named element in the XHTML and becomes responsible for rendering that element in the final output. The page is simply the top-level containing component and is paired with exactly one XHTML template. Using a special tag, a group of individual components may be abstracted into a single component called a panel, which can then be reused whole in that page, other pages, or even other panels. Each component is backed by its own model, which represents the state of the component. The framework does not have knowledge of how components interact with their models, which are treated as opaque objects automatically serialized and persisted between requests. More complex models, however, may be made detachable and provide hooks to arrange their own storage and restoration at the beginning and end of each request cycle. Wicket does not mandate any particular object-persistence or ORM layer, so applications often use some combination of Hibernate objects, EJBs or POJOs as models. In Wicket, all server-side state is automatically managed. You should never directly use an HttpSession object or similar wrapper to store state. Instead, state is associated with components. Each server-side page component holds a nested hierarchy of stateful components, where each component's model is, in the end, a POJO (Plain Old Java Object). Wicket aims for simplicity. There are no configuration files to learn in Wicket. Wicket is a simple class library with a consistent approach to component structure. Example A Hello World Wicket application, with four files: HelloWorld.html The XHTML template. <!DOCTYPE html P
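The Hello World listing above is cut off by the extract. As orientation only, here is a minimal sketch of what the four files conventionally contain in a current Wicket release; the markup and class names (HelloWorld, HelloWorldApplication) follow the framework's documented Hello World example, but treat the exact contents as illustrative rather than as the article's original listing. The fourth file is the web.xml (or equivalent) deployment descriptor, which registers Wicket's servlet filter with HelloWorldApplication as the application class.

HelloWorld.html, the XHTML template; the wicket:id attribute names the element a component will render:

<html xmlns:wicket="http://wicket.apache.org">
  <body>
    <span wicket:id="message">Message goes here</span>
  </body>
</html>

HelloWorld.java, the page component bound to that template:

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

public class HelloWorld extends WebPage {
    public HelloWorld() {
        // The component id "message" must match the wicket:id in the template.
        add(new Label("message", "Hello World!"));
    }
}

HelloWorldApplication.java, the application object naming the home page:

import org.apache.wicket.Page;
import org.apache.wicket.protocol.http.WebApplication;

public class HelloWorldApplication extends WebApplication {
    @Override
    public Class<? extends Page> getHomePage() {
        return HelloWorld.class;  // entry point of the application
    }
}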
https://en.wikipedia.org/wiki/FanimeCon
FanimeCon is an annual four-day anime convention held at the San Jose McEnery Convention Center in San Jose, California, over Memorial Day weekend in May. Programming The convention typically offers an AMV contest, artist's alley, contests, cosplay chess, dances, dealer's room, formal ball, game room (arcade, console, PC, and tabletop), karaoke, maid cafe, masquerade, panels, screenings, a swap meet, tournaments, and workshops. The convention offers 24-hour programming, including gaming and video. FanimeCon held an art auction for the charity Habitat for Humanity in 2004. Charities that FanimeCon supported in 2011 included the American Red Cross of Silicon Valley, APA Family Support Services of San Francisco, Cancer Support Community, and Japanese Red Cross Society. History FanimeCon was first held in 1994 at California State University, Hayward, run by several anime clubs. Foothill College later hosted the convention, which then moved to the Wyndham Hotel in San Jose for 1999. From 2000 to 2003, the Santa Clara Convention Center hosted FanimeCon. In 2004, FanimeCon moved to the San Jose McEnery Convention Center. That year, the convention brought to the local economy, growing to an estimated in 2013, and in 2014. Problems with the convention in 2009 included Christian protests and the over-purchasing of artist alley tables, with the protesters also returning in 2010. In 2011, Saturday saw three-hour registration waits, problems with the convention not using a printed schedule, outside religious protesters, and the Marriott fire alarm being pulled on Monday morning. Registration was affected in 2012 by a power outage. FanimeCon's 20th anniversary in 2014 was marked by San Jose declaring Fanime Day on May 23, 2014. The masquerade in 2015 suffered from technical issues. FanimeCon's masquerade for 2016 was scheduled to run for five hours. FanimeCon 2020 was canceled due to the COVID-19 pandemic. FanimeCon was changed to a virtual event for 2021. Event history References External links Anime conventions in the United States Annual events in Silicon Valley Culture of San Jose, California San Francisco Bay Area conventions Recurring events established in 1994 1994 establishments in California Tourist attractions in Santa Clara County, California Tourist attractions in San Jose, California
https://en.wikipedia.org/wiki/Interlochen%20Public%20Radio
Interlochen Public Radio (IPR), established in 1963, is the National Public Radio member network for Northern Michigan. It broadcasts classical music and news on five stations in the northwestern Lower Peninsula. It is operated by the Interlochen Center for the Arts, with studios on the center's campus in Interlochen, Michigan, just outside Traverse City. It carries programming from NPR and Public Radio International. At one point early in the 2000s, IPR led the nation in annual listener support. This was all the more remarkable because it is the second-smallest NPR member in Michigan, and one of the smallest in the entire NPR system. History Joseph E. Maddy, founder of the National Music Camp (now the Interlochen Center for the Arts), had long wanted to bring a fine arts radio station to Northern Michigan. In 1963, WIAA signed on for the first time. Originally broadcasting eight hours per day, it grew enough within a decade to become a charter member of NPR. Interlochen Public Radio became a network in 1989 with the addition of WICV. Interlochen bought contemporary Christian station WDQV in 2005 and converted it into a third satellite for the eastern portion of the market, WIAB. In 2000, Interlochen signed on WICA at 91.5, and by 2001 all NPR news and talk programming moved there from WIAA/WICV. However, WICA does not have nearly as large a footprint as WIAA. It must conform its signal to protect WMHW-FM in Mount Pleasant, also at 91.5. As a result, Cadillac, the second-largest city in IPR's service area, does not have a clear signal for NPR talk programming; WICA's signal in Cadillac is marginal at best. This is true even after the addition of two repeaters for WICA since the turn of the millennium. In 2018, Interlochen sold WICV to Northern Christian Radio for $150,000, and the station adopted a contemporary Christian format as an affiliate of The Promise FM. Stations Since 2000, IPR has operated a two-service network. "Classical IPR" (formerly known as "IPR Music Radio") provides classical music and hourly NPR news updates on three stations. News and talk programming from NPR and other sources is heard on three stations, branded as "IPR News Radio". References Sources Michiguide.com - WIAA History Michiguide.com - WIAB History Michiguide.com - WICA History Michiguide.com - WICV History External links Classical music radio stations in the United States NPR member networks
https://en.wikipedia.org/wiki/Commodore%208050
The Commodore 8050, Commodore 8250, and Commodore SFD-1001 are 5¼-inch floppy disk drives manufactured by Commodore International, primarily for its 8-bit CBM and PET series of computers. The drives offered improved storage capacities over previous Commodore drive models. They are notable in that the disk drive has twice the processing power of the connected computer: two 1 MHz 6502 processors share communication and disk-operation duties, although the drive has only 4 KB of main memory. The disk operating system is contained within the disk drive unit itself; commands are sent via the 8-bit GPIB interface, and the drive decodes each message and carries out the requested operation, such as formatting a disk, without further involvement from the connected computer. Specifications All three models utilize 5¼-inch double-density floppy disks with a track spacing of 100 tracks per inch, for a total of 77 logical tracks per side. Data is encoded using Commodore's proprietary group coded recording scheme. Soft sectoring is used for track alignment. Like most other Commodore disk drives, these drives utilize zone bit recording to maintain a constant average bit density across the entire disk. Formatted capacity is approximately 0.5 megabyte per side, or 1 megabyte (1,066,496 bytes) in 4166 blocks total. The 8050 is a single-sided drive, whereas the 8250 and SFD-1001 are double-sided drives. Double-sided drives can fully read and write to disks formatted by single-sided drives, but single-sided drives can only read and write to the bottom side of disks formatted by double-sided drives. Both the 8050 and 8250 are housed in a dual-drive case similar to the Commodore 4040 drive. The SFD-1001 is housed in a single-drive case similar to the Commodore 1541. The 8250LP, a low-profile revision of the 8250, is housed in a shorter dual-drive case that is stylistically similar to the SFD-1001. All models include an internal power supply and an IEEE-488 data connector on the rear of the case. The 8050 and 8250 include latch sensors that can detect when a disk is inserted or removed. These drives are not dual-mode, so they cannot read or write 5¼-inch disks formatted by lower-capacity 48-tpi models, such as the Commodore 1541 or 4040. They also cannot read or write 5¼-inch disks formatted by 96-tpi drives, such as the 640 kilobyte IBM PC disk or 880 kilobyte Commodore Amiga disk, due to the minor difference in track spacing. Lastly, they cannot read or write high-density 5¼-inch disks due to both the difference in track spacing and the difference in write head coercivity (300-oersted for double-density, 600-oersted for high-density). Disk Layout Total Sectors: 2083 (4166 for the 8250) The disk header is on 39/0 (track 39, sector 0), with the directory residing on the remaining 28 sectors of track 39. Header Layout 39/0 $00–01 T/S reference to the first BAM (block availability map) sector 02 DOS version ('C') 06-16 Disk label, $A0
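As a rough illustration of the header layout just described, here is a minimal Python sketch that decodes the documented fields of the 39/0 header sector from a raw sector image. The offsets come from the table above; the 256-byte sector size and the ASCII handling of the PETSCII label are assumptions made for illustration.

# Sketch: decode the documented fields of the 8050/8250 header sector
# (track 39, sector 0). Offsets follow the layout above; the 256-byte
# sector size and the naive label decoding are assumptions.

def decode_header(sector: bytes) -> dict:
    assert len(sector) == 256, "sectors assumed to be 256 bytes here"
    return {
        # $00-$01: track/sector link to the first BAM sector
        "bam_track": sector[0x00],
        "bam_sector": sector[0x01],
        # $02: DOS version byte, 'C' on these drives
        "dos_version": chr(sector[0x02]),
        # $06-$16: disk label, padded with $A0 (shifted-space) bytes
        "label": bytes(b for b in sector[0x06:0x17] if b != 0xA0)
                 .decode("ascii", "replace"),
    }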
https://en.wikipedia.org/wiki/Alef%20%28programming%20language%29
Alef is a discontinued concurrent programming language, designed as part of the Plan 9 operating system by Phil Winterbottom of Bell Labs. It implemented the channel-based concurrency model of Newsqueak in a compiled, C-like language. History Alef appeared in the first and second editions of Plan 9, but was abandoned during development of the third edition. Rob Pike later explained Alef's demise by pointing to its lack of automatic memory management, despite Pike and others urging Winterbottom to add garbage collection to the language; also, in a February 2000 slideshow, Pike noted: "…although Alef was a fruitful language, it proved too difficult to maintain a variant language across multiple architectures, so we took what we learned from it and built the thread library for C." Alef was superseded by two programming environments. The Limbo programming language can be considered a direct successor of Alef and is the most commonly used language in the Inferno operating system. The Alef concurrency model was replicated in the third edition of Plan 9 in the form of the libthread library, which makes some of Alef's functionality available to C programs and allowed existing Alef programs (such as Acme) to be translated. Example This example was taken from the Alef reference manual. The piece illustrates the use of the tuple data type. (int, byte*, byte) func() { return (10, "hello", 'c'); } void main() { int a; byte* str; byte c; (a, str, c) = func(); } See also Communicating sequential processes Plan 9 from Bell Labs Go (programming language) References C programming language family Concurrent programming languages Plan 9 from Bell Labs Programming languages created in 1992
https://en.wikipedia.org/wiki/KTXL
KTXL (channel 40) is a television station in Sacramento, California, United States, affiliated with the Fox network. The station is owned by Nexstar Media Group, and maintains studios on Fruitridge Road near the Oak Park district on the southern side of Sacramento; its transmitter is located in Walnut Grove, California. History Early history of channel 40 in Sacramento (1953–1960) The UHF channel 40 frequency in Sacramento was first occupied by KCCC-TV, which signed on in September 1953. It was affiliated with all four television networks of the time: ABC, CBS, NBC and the DuMont Television Network. KCCC's first broadcast was the 1953 World Series between the New York Yankees and the Brooklyn Dodgers. The station became a primary ABC affiliate by 1955, after KCRA-TV (channel 3) and KBET-TV (channel 10, now KXTV) signed on, respectively taking over NBC and CBS full-time; and dropped DuMont after that network folded in 1956. It was the Sacramento–Stockton–Modesto area's first television station. However, as a UHF station, it suffered in the ratings because television sets were not required to incorporate UHF tuning until the All-Channel Receiver Act went into effect in 1964. Although its fate was sealed when the first VHF stations signed on in the area, it managed to hang on until 1957. The ABC affiliation moved to KOVR (channel 13) after KCCC-TV and KOVR reached an agreement to merge operations and turn over the KCCC license to the Federal Communications Commission (FCC). The former KCCC-TV studios and transmitting facilities were then sold to a group of broadcasters who applied for a new license, returning channel 40 to the air in late 1959 as KVUE, broadcasting from studios near the old California state fairgrounds off Stockton Boulevard. The station operated for just under five months before also falling silent. The KVUE call letters now reside on the ABC affiliate in Austin, Texas. As an independent station (1963–1986) In 1963, KVUE attempted to file for a license renewal even though the station had been off the air for more than three years; Camellia City Telecasters, a group headed by Jack Matranga, former owner and co-founder of radio station KGMS (now KTKZ), filed an application with the FCC to build a station on channel 40, as a challenge to the KVUE renewal, and was granted the license in early 1965. KTXL first signed on the air on October 26, 1968, operating as an independent station for nearly the first two decades of its existence. It was then branded as "TV 40". The station gained a huge advantage early on when its original owner won the local syndication rights to a massive number of movies, including classic and contemporary films. At one point, it had one of the largest film libraries in the Sacramento area. In addition, KTXL ventured into in-house productions, such as the children's program "Captain Mitch", horror movie host Bob Wilkins and Big Time Wrestling. The latter show aired until 1979, and was syndicated to several s
https://en.wikipedia.org/wiki/Two-line%20element%20set
A two-line element set (TLE) is a data format encoding a list of orbital elements of an Earth-orbiting object for a given point in time, the epoch. Using a suitable prediction formula, the state (position and velocity) at any point in the past or future can be estimated to some accuracy. The TLE data representation is specific to the simplified perturbations models (SGP, SGP4, SDP4, SGP8 and SDP8), so any algorithm using a TLE as a data source must implement one of the SGP models to correctly compute the state at a time of interest. TLEs can describe the trajectories only of Earth-orbiting objects. TLEs are widely used as input for projecting the future orbital tracks of space debris for purposes of characterizing "future debris events to support risk analysis, close approach analysis, collision avoidance maneuvering" and forensic analysis. The format was originally intended for punched cards, encoding a set of elements on two standard 80-column cards. This format was eventually replaced by text files as punch card systems became obsolete, with each set of elements written to two 69-column ASCII lines preceded by a title line. The United States Space Force tracks all detectable objects in Earth orbit, creating a corresponding TLE for each object, and makes publicly available TLEs for many of the space objects on the websites Space Track and Celestrak, holding back or obfuscating data on many military or classified objects. The TLE format is a de facto standard for distribution of an Earth-orbiting object's orbital elements. A TLE set may include a title line preceding the element data, so each listing may take up three lines in the file. The title is not required, as each data line includes a unique object identifier code. History In the early 1960s, Max Lane developed mathematical models for predicting the locations of satellites based on a minimal set of data elements. His first paper on the topic, published in 1965, introduced the Analytical Drag Theory, which concerned itself primarily with the effects of drag caused by a spherically symmetric non-rotating atmosphere. Joined by K. Cranford, the two published an improved model in 1969 that added various harmonic effects due to Earth-Moon-Sun interactions and various other inputs. Lane's models were widely used by the military and NASA starting in the late 1960s. The improved version became the standard model for NORAD in the early 1970s, which ultimately led to the creation of the TLE format. At the time there were two formats designed for punch cards, an "internal format" that used three cards encoding complete details for the satellite (including name and other data), and the two card "transmission format" that listed only those elements that were subject to change. The latter saved on cards and produced smaller decks when updating the databases. Cranford continued to work on the modelling, eventually leading Lane to publish Spacetrack Report #2 detailing the Air Force General Perturba
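As a small illustration of the listing format described above (an optional title line followed by two 69-column data lines, each carrying the same unique object identifier), here is a Python sketch that splits a listing and performs the basic consistency checks. The column positions used (line number in column 1, catalog number in columns 3–7) are the de facto conventions of the format and are assumptions beyond what this article states.

# Sketch: split a TLE listing into title + two element lines and do
# the consistency checks implied by the format description above.
# Column positions (catalog number in columns 3-7) are assumed.

def parse_tle(text: str):
    lines = text.strip().splitlines()
    title = lines[0] if len(lines) == 3 else None  # title line is optional
    l1, l2 = lines[-2], lines[-1]
    for ln in (l1, l2):
        if len(ln) != 69:
            raise ValueError("each element line is 69 ASCII columns")
    if l1[0] != "1" or l2[0] != "2":
        raise ValueError("element lines must be numbered 1 and 2")
    # Both lines carry the same unique object identifier (catalog number).
    if l1[2:7] != l2[2:7]:
        raise ValueError("catalog numbers disagree between the two lines")
    return title, l1[2:7].strip(), (l1, l2)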
https://en.wikipedia.org/wiki/Decisive%20Battles%20of%20WWII%3A%20Korsun%20Pocket
Decisive Battles of WWII Vol 2: Korsun Pocket is a computer wargame developed by the Strategic Studies Group (SSG). It is the second game in the Decisive Battles of WWII series, following Decisive Battles of WWII: The Ardennes Offensive (1997). Korsun Pocket received critical acclaim and a number of awards. Upon release, critics from GameSpot, Computer Gaming World and PC Gamer US dubbed it the best traditional computer wargame ever made. In 2004, Korsun Pocket was followed by the third Decisive Battles title, Battles in Normandy. Setting The game depicts the historical World War II battle of the Korsun Pocket. Development In October 2001, SSG announced a deal with publisher Matrix Games to release Korsun Pocket. It was revealed under the name Decisive Battles of WWII: Korsun Pocket in May 2002. It was shown off by Matrix Games in April 2003. It is the sequel to Decisive Battles of WWII: The Ardennes Offensive. The game was released on August 25, 2003. In December 2003, SSG released an expansion pack for Korsun Pocket called Across the Dnepr, referring to the Dnieper River in Ukraine. The expansion pack is only available via a download from the Matrix Games website. Reception In PC Gamer US, William R. Trotter declared Korsun Pocket "the best wargame ever made for the PC." He praised its combination of extreme detail and accessibility, and argued that "it can be mastered by any novice willing to invest the time and patience", despite single turns requiring an hour of play. Trotter concluded, "It is the Mount Everest, the Beethoven's Ninth, and Moby Dick of PC wargames, and it absolutely qualifies as a genuine work of art." Similarly, Bruce Geryk of Computer Gaming World called Korsun Pocket "the best hex-based computer wargame ever made", and echoed Trotter's praise of its high level of detail. He wrote, "Korsun Pocket manages to capture everything that's compelling about historical wargaming, exclude the tedium, and present it as a tremendous game." Writing for GameSpot, Jeff Lackey echoed Geryk's and Trotter's praise: Korsun Pocket is "easily the best 2D wargame for the PC to date", he argued. The editors of Computer Gaming World named Korsun Pocket the 2003 "Wargame of the Year". They wrote, "The core design is as outstanding as the interface and presentation, and the A.I. demolishes the myth that war games can't put up a decent fight against a human opponent." GameSpy also declared Korsun Pocket the best computer wargame of 2003, and its editors called it "arguably the best traditional hex-based wargame of all time." It was a nominee for PC Gamer US's 2003 "Best Turn-Based Strategy Game" award, although it lost to Combat Mission: Afrika Korps. The editors called it "the new 'big dog' of the hex-based wargame genre". References External links Official site (archived) Game profile on IGN.com 2003 video games Computer wargames Strategic Studies Group games Video games developed in Australia Video games with expansion packs Windows gam
https://en.wikipedia.org/wiki/Opal%20%28programming%20language%29
OPAL (OPtimized Applicative Language) is a functional programming language first developed at the Technical University of Berlin. Example program This is an example OPAL program, which calculates the GCD recursively. Signature file (declaration) SIGNATURE GCD FUN GCD: nat ** nat -> nat Implementation file (definition) IMPLEMENTATION GCD IMPORT Nat COMPLETELY DEF GCD(a,b) == IF a % b = 0 THEN b ELSE IF a-b < b THEN GCD(b,a-b) ELSE GCD(a-b,b) FI FI External links The OPAL Home Page OPAL Installation Guide Functional languages
https://en.wikipedia.org/wiki/Temporary%20folder
In computing, a temporary folder or temporary directory is a directory used to hold temporary files. Many operating systems and some software automatically delete the contents of this directory at bootup or at regular intervals, leaving the directory itself intact. For security reasons, it is best for each user to have their own temporary directory, since there has been a history of security vulnerabilities with temporary files due to programs' incorrect file permissions or race conditions. A standard procedure for system administration is to reduce the amount of storage space used (typically, on a disk drive) by removing temporary files. In multi-user systems, this can potentially remove active files, disrupting users' activities. To avoid this, some space-reclaiming procedures remove only files which are inactive or "old" - those which have not been read or modified in several days. Practical issues In Unix, the /tmp directory will often be a separate disk partition. In systems with magnetic hard disk drives, performance (overall system IOPS) will increase if disk-head movements from regular disk I/O are separated from the access to the temporary directory. Increasingly, memory-based solutions for the temporary directory or folder are being used, such as "RAM disks" set up in random-access memory or the shared-memory device in Linux. A Flash-based solid-state drive is less suitable as a temporary-storage device for reading and writing due to the asymmetric read/write duration and due to wear. (See wear leveling.) Traditional locations In MS-DOS and Microsoft Windows, the temporary directory is set by the environment variable TMP or TEMP. Using the Windows API, one can find the path to the temporary directory using the GetTempPath function, or one can obtain a path to a uniquely-named temporary file using the GetTempFileName function. Originally, the default was C:\Temp, then %WINDIR%\TEMP. In the Windows XP era, the temporary directory was set per-user as %USERPROFILE%\Local Settings\Temp, although still user-relocatable. For Windows Vista, 7, 8, and 10 the temp location has moved again to within the AppData section of the User Profile, typically C:\Users\User Name\AppData\Local\Temp (%USERPROFILE%\AppData\Local\Temp). In all versions of Windows, the Temp location can be accessed, for example, in Explorer, "Run..." boxes and in an application's internal code by using %TMP% or %TEMP%. As with other environment variables, %TMP% or %TEMP% is synonymous with the full path. In Unix and Linux, the global temporary directories are /tmp and /var/tmp. Web browsers periodically write data to the tmp directory during page views and downloads. Typically, /var/tmp is for persistent files (as it may be preserved over reboots), and /tmp is for more temporary files. See Filesystem Hierarchy Standard. In addition, a user can set their TMPDIR environment variable to point to a preferred directory (where the creation and modification of files is allowed). In macOS, a sandboxed application cannot use the standard Unix locations, but may use a user-specific directory whose path is provided by the NSTemporaryDirectory function. In OpenVMS, SYS$SCRATCH, and in AmigaDOS, T: are
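A short Python sketch of the lookup behavior described above; Python's standard tempfile module consults the TMPDIR, TEMP and TMP environment variables in turn before falling back to platform defaults, so it is a convenient way to see the mechanism in action (the ~/scratch path is an example and is assumed to exist).

import os
import tempfile

# tempfile.gettempdir() consults TMPDIR, TEMP and TMP, then falls back
# to platform defaults such as /tmp on Unix or the per-user
# AppData\Local\Temp directory on Windows.
print("temporary directory:", tempfile.gettempdir())

# Honoring a user-set TMPDIR, as described above:
os.environ["TMPDIR"] = os.path.expanduser("~/scratch")  # assumed to exist
tempfile.tempdir = None  # reset the cached value so the change is seen
print("after TMPDIR override:", tempfile.gettempdir())

# NamedTemporaryFile places its file in that directory and removes it
# automatically when closed.
with tempfile.NamedTemporaryFile() as f:
    print("temp file lives at:", f.name)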
https://en.wikipedia.org/wiki/Robustness%20principle
In computing, the robustness principle is a design guideline for software that states: "be conservative in what you do, be liberal in what you accept from others". It is often reworded as: "be conservative in what you send, be liberal in what you accept". The principle is also known as Postel's law, after Jon Postel, who used the wording in an early specification of TCP. In other words, programs that send messages to other machines (or to other programs on the same machine) should conform completely to the specifications, but programs that receive messages should accept non-conformant input as long as the meaning is clear. Among programmers, to produce compatible functions, the principle is also known in the form be contravariant in the input type and covariant in the output type. Interpretation RFC 1122 (1989) expanded on Postel's principle by recommending that programmers "assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect". Protocols should allow for the addition of new codes for existing fields in future versions of protocols by accepting messages with unknown codes (possibly logging them). Programmers should avoid sending messages with "legal but obscure protocol features" that might expose deficiencies in receivers, and design their code "not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility". Criticism In 2001, Marshall Rose characterized several deployment problems when applying Postel's principle in the design of a new application protocol. For example, a defective implementation that sends non-conforming messages might be used only with implementations that tolerate those deviations from the specification until, possibly several years later, it is connected with a less tolerant application that rejects its messages. In such a situation, identifying the problem is often difficult, and deploying a solution can be costly. Rose therefore recommended "explicit consistency checks in a protocol ... even if they impose implementation overhead". From 2015 to 2018, in a series of Internet Drafts, Martin Thomson argued that Postel's robustness principle actually leads to a lack of robustness, including a lack of security. In 2018, a paper on privacy-enhancing technologies by Florentin Rochet and Olivier Pereira showed how to exploit Postel's robustness principle inside the Tor routing protocol to compromise the anonymity of onion services and Tor clients. See also Unix philosophy Static discipline Open–closed principle References External links History of the (Internet) Robustness Principle (Nick Gall, May 2005) Internet Protocol, page 22; J. Postel, IEN 111, August 1979. Computer architecture statements Internet Standards
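A small sketch of the RFC 1122 recommendation quoted above: a receiver that tolerates unknown codes in an existing field, logging them rather than rejecting the message. The one-byte message layout and the code values here are invented purely for illustration.

import logging

# Hypothetical message layout for illustration: a one-byte type code
# followed by a payload. KNOWN_TYPES models the codes this version of
# the receiver understands; later protocol versions may add more.
KNOWN_TYPES = {0x01: "data", 0x02: "ack"}

def handle(message: bytes) -> None:
    code, payload = message[0], message[1:]
    if code in KNOWN_TYPES:
        process(KNOWN_TYPES[code], payload)
    else:
        # Liberal in what we accept: an unknown code is logged and
        # skipped rather than treated as a fatal protocol error.
        logging.warning("ignoring message with unknown type code %#x", code)

def process(kind: str, payload: bytes) -> None:
    print(kind, payload)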
https://en.wikipedia.org/wiki/Computational%20Diffie%E2%80%93Hellman%20assumption
The computational Diffie–Hellman (CDH) assumption is a computational hardness assumption about the Diffie–Hellman problem. The CDH assumption involves the problem of computing the discrete logarithm in cyclic groups. The CDH problem illustrates the attack of an eavesdropper in the Diffie–Hellman key exchange protocol to obtain the exchanged secret key. Definition Consider a cyclic group G of order q. The CDH assumption states that, given (g, g^a, g^b) for a randomly chosen generator g and random a, b in {0, ..., q−1}, it is computationally intractable to compute the value g^(ab). Relation to Discrete Logarithms The CDH assumption is strongly related to the discrete logarithm assumption. If computing the discrete logarithm (base g) in G were easy, then the CDH problem could be solved easily: Given (g, g^a, g^b), one could efficiently compute g^(ab) in the following way: compute a by taking the discrete log of g^a to base g; compute g^(ab) by exponentiation: g^(ab) = (g^b)^a. Computing the discrete logarithm is the only known method for solving the CDH problem. But there is no proof that it is, in fact, the only method. It is an open problem to determine whether the discrete log assumption is equivalent to the CDH assumption, though in certain special cases this can be shown to be the case. Relation to Decisional Diffie–Hellman Assumption The CDH assumption is a weaker assumption than the Decisional Diffie–Hellman assumption (DDH assumption). If computing g^(ab) from (g, g^a, g^b) were easy (the CDH problem), then one could solve the DDH problem trivially. Many cryptographic schemes that are constructed from the CDH problem rely in fact on the hardness of the DDH problem. The semantic security of the Diffie–Hellman key exchange as well as the security of ElGamal encryption rely on the hardness of the DDH problem. There are concrete constructions of groups where the stronger DDH assumption does not hold but the weaker CDH assumption still seems to be a reasonable hypothesis. Variations of the Computational Diffie–Hellman assumption The following variations of the CDH problem have been studied and proven to be equivalent to the CDH problem: Square computational Diffie-Hellman problem (SCDH): On input (g, g^a), compute g^(a^2); Inverse computational Diffie-Hellman problem (InvCDH): On input (g, g^a), compute g^(a^(-1)); Divisible computation Diffie-Hellman problem (DCDH): On input (g, g^a, g^b), compute g^(b/a); Variations of the Computational Diffie–Hellman assumption in product groups Let G_1 and G_2 be two cyclic groups. Co-Computational Diffie–Hellman (co-CDH) problem: Given (g, g^a) in G_1 and h in G_2, compute h^a in G_2; References Computational hardness assumptions
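The reduction described above (a discrete-log oracle solves CDH) is easy to check numerically in a small multiplicative group. A Python sketch, using a toy prime modulus where brute-force discrete log is feasible; real groups are chosen precisely so that this step is intractable.

# Toy demonstration of the reduction above: with a discrete-log
# oracle, CDH is solved by recovering a from g^a and then computing
# (g^b)^a. The order-18 group here is for illustration only; real
# parameters make brute_force_dlog infeasible.

p = 19  # group: nonzero residues mod 19, order q = 18
g = 2   # a generator of that group

def brute_force_dlog(h: int) -> int:
    # stands in for a discrete-log oracle
    return next(x for x in range(p - 1) if pow(g, x, p) == h)

a, b = 5, 7                           # secret exponents
ga, gb = pow(g, a, p), pow(g, b, p)   # the public values g^a, g^b

recovered_a = brute_force_dlog(ga)    # step 1: discrete log of g^a
cdh_answer = pow(gb, recovered_a, p)  # step 2: (g^b)^a = g^(ab)

assert cdh_answer == pow(g, a * b, p)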
https://en.wikipedia.org/wiki/Ethernet%20hub
An Ethernet hub, active hub, network hub, repeater hub, multiport repeater, or simply hub is a network hardware device for connecting multiple Ethernet devices together and making them act as a single network segment. It has multiple input/output (I/O) ports, in which a signal introduced at the input of any port appears at the output of every port except the original incoming one. A hub works at the physical layer (layer 1) of the OSI model. A repeater hub also participates in collision detection, forwarding a jam signal to all ports if it detects a collision. In addition to standard 8P8C ("RJ45") ports, some hubs may also come with a BNC or an Attachment Unit Interface (AUI) connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. Hubs are now largely obsolete, having been replaced by network switches except in very old installations or specialized applications. As of 2011, connecting network segments by repeaters or hubs is deprecated by IEEE 802.3. Physical layer function A layer 1 network device such as a hub transfers data but does not manage any of the traffic coming through it. Any packet entering a port is repeated to the output of every other port except for the port of entry. Specifically, each bit or symbol is repeated as it flows in. A repeater hub can therefore only receive and forward at a single speed. Dual-speed hubs internally consist of two hubs with a bridge between them. Since every packet is repeated on every other port, packet collisions affect the entire network, limiting its overall capacity. A network hub is an unsophisticated device in comparison with a switch. As a multiport repeater, it works by repeating transmissions received from one of its ports to all other ports. It is aware of physical-layer packets; that is, it can detect their start (the preamble), an idle line (the interpacket gap), and a collision, which it also propagates by sending a jam signal. A hub cannot further examine or manage any of the traffic that comes through it. A hub has no memory to store data and can handle only one transmission at a time. Therefore, hubs can only run in half-duplex mode. Due to a larger collision domain, packet collisions are more likely in networks connected using hubs than in networks connected using more sophisticated devices. Connecting multiple hubs The need for hosts to be able to detect collisions limits the number of hubs and the total size of a network built using hubs (a network built using switches does not have these limitations). For 10 Mbit/s networks built using repeater hubs, the 5-4-3 rule must be followed: up to five segments (four hubs) are allowed between any two end stations. For 10BASE-T networks, up to five segments with four repeaters are allowed between any two hosts. For 100 Mbit/s networks, the limit is reduced to three segments between any two end stations, and even that is only allowed if the hubs are of Class II. Some hubs have manufacturer-specific stack ports allowing them
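A toy software model of the repeat behavior described above, assuming nothing beyond the article: whatever enters one port is repeated to every other port, with no inspection or storage of the traffic. The class and its queues are invented for illustration only.

# Toy model of a repeater hub: frames entering one port are repeated
# to every other port; the hub never examines the traffic.

class Hub:
    def __init__(self, num_ports: int):
        self.ports = [[] for _ in range(num_ports)]  # per-port output queues

    def receive(self, in_port: int, frame: bytes) -> None:
        for n, out in enumerate(self.ports):
            if n != in_port:  # every port except the port of entry
                out.append(frame)

hub = Hub(4)
hub.receive(0, b"hello")
assert all(q == [b"hello"] for n, q in enumerate(hub.ports) if n != 0)
assert hub.ports[0] == []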
https://en.wikipedia.org/wiki/Radio%20Free%20Vietnam
Radio Free Vietnam is the broadcasting network of the Government of Free Vietnam, a Vietnamese anti-communist group; it broadcasts throughout the world from headquarters in Southern California. It is a non-profit organization able to broadcast directly into Vietnam and across Asia. It calls for the right to freedom of opinion and expression, including the freedom to seek, receive and impart information and ideas through any medium regardless of frontiers. External links Radio Free Vietnam Government of Free Vietnam International broadcasters Vietnamese-American culture in California Radio stations broadcasting on subcarriers
https://en.wikipedia.org/wiki/Extension%20conflict
Extension conflicts were a common nuisance on Apple Macintosh computers running the classic Mac OS, especially System 7. Extensions were bundles of code that extended the operating system's capabilities by directly patching OS calls, thus receiving control instead of the operating system when applications (including the Finder) made system calls. Generally, once an extension completed its task, it was supposed to pass on the (possibly modified) system call to the operating system's routine. If multiple extensions wanted to patch the same system call, they ended up receiving the call in a chain, the first extension in line passing it on to the next, and so on in the order they were loaded, until the last extension passed control to the operating system. If an extension did not hand the next extension in line what it was expecting, problems occurred, ranging from unexpected behavior to full system crashes. These could be triggered by several factors, such as carelessly programmed or malicious extensions that changed or disrupted the way part of the system software worked. In addition, extensions sometimes competed for system resources with applications, utilities and other extensions, leading to crashes and general instability. Some users happily loaded every extension they could find on their computer, with little or no impact. Others fastidiously avoided any non-essential extensions as a way of avoiding the problem. Many were judicious in the addition of extensions. This problem increased during the mid-1990s as resource-hungry multimedia technologies such as QuickTime were installed as extensions. In addition, a number of applications, especially Microsoft Office, required a large number of extensions. Many Macintosh users had hundreds of extensions running on their systems, all of varying age and quality. Buggy, damaged and outdated extensions were the most common cause of problems. Some users had to remember to turn off problematic extensions when running certain programs. Later versions of System 7 included the Extensions Manager, which allowed users to disable specific extensions or groups of extensions at startup when troubleshooting a conflict by pressing the spacebar while the computer boots. This tool was also accessible by opening the Extensions Manager CDEV (control panel) in the Control Panels folder found in the Apple menu. Conflict Catcher and Now Startup Manager were third-party utilities that automatically detected conflicts and problematic extensions and other software executing at boot, otherwise a time-consuming task that required users to turn off extensions in sets until they found the conflict, as well as allowing load order to be altered without renaming items. Extensions were only loaded at startup time, meaning that any attempted change required a reboot. The most common time for extension conflicts to start was the release of a new version of the operating system, followed closely by the installation of a complex new application that either conflicted with exist
https://en.wikipedia.org/wiki/Ravenscar%20profile
The Ravenscar profile is a subset of the Ada tasking features designed for safety-critical hard real-time computing. It was defined by a separate technical report in Ada 95; it is now part of the Ada 2012 Standard. It is named after the English village of Ravenscar, the location of the 8th International Real-Time Ada Workshop (IRTAW 8). Restrictions of the profile A Ravenscar Ada application uses the following compiler directive: pragma Profile (Ravenscar); This is the same as writing the following set of configuration pragmas: pragma Task_Dispatching_Policy (FIFO_Within_Priorities); pragma Locking_Policy (Ceiling_Locking); pragma Detect_Blocking; pragma Restrictions ( No_Abort_Statements, No_Calendar, No_Dynamic_Attachment, No_Dynamic_Priorities, No_Implicit_Heap_Allocations, No_Local_Protected_Objects, No_Local_Timing_Events, No_Protected_Type_Allocators, No_Relative_Delay, No_Requeue_Statements, No_Select_Statements, No_Specific_Termination_Handlers, No_Task_Allocators, No_Task_Hierarchy, No_Task_Termination, Simple_Barriers, Max_Entry_Queue_Length => 1, Max_Protected_Entries => 1, Max_Task_Entries => 0, No_Dependence => Ada.Asynchronous_Task_Control, No_Dependence => Ada.Calendar, No_Dependence => Ada.Execution_Time.Group_Budgets, No_Dependence => Ada.Execution_Time.Timers, No_Dependence => Ada.Task_Attributes); See also Ada (programming language) High integrity software SPARK (programming language) From the Ada Reference Manual (Ada 202x Draft 19): D.13, "The Ravenscar and Jorvik Profiles" References External links A Ravenscar runtime for ARM processors Discussion about implementation Ravenscar Profile for ARM's Ada (programming language) Ada programming language family
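A sketch of what code under the profile tends to look like: a single library-level cyclic task using absolute delays, since No_Relative_Delay rules out relative delays and No_Task_Hierarchy rules out nested tasks. The unit names and the 10 ms period are invented for illustration, and the two compilation units are shown as one listing.

--  Sketch of a Ravenscar-compatible cyclic task (names and period
--  are illustrative). Tasks must be declared at library level, and
--  only absolute delays ("delay until") are permitted.
pragma Profile (Ravenscar);

package Cyclic_Worker is
   task Worker;   --  no entries: Max_Task_Entries => 0
end Cyclic_Worker;

with Ada.Real_Time; use Ada.Real_Time;

package body Cyclic_Worker is
   task body Worker is
      Period : constant Time_Span := Milliseconds (10);
      Next   : Time := Clock;
   begin
      loop
         delay until Next;   --  No_Relative_Delay bans "delay 0.01;"
         --  periodic activity goes here
         Next := Next + Period;
      end loop;
   end Worker;
end Cyclic_Worker;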
https://en.wikipedia.org/wiki/Ravenscar
Ravenscar may refer to: Ravenscar, North Yorkshire Ravenscar railway station, in Ravenscar, North Yorkshire Ravenscar profile, a subset of the Ada programming language designed for safety-critical real-time computing Roger Comstock, Marquis of Ravenscar, a character in Neal Stephenson's The Baroque Cycle
https://en.wikipedia.org/wiki/The%20Extraordinary
The Extraordinary is an Australian television documentary series that featured stories of the paranormal and supernatural. It ran on the Seven Network from 1993 to 1996. The following year it moved to the Nine Network. History The show was hosted by Warwick Moss, who would narrate to the audience "true life" tales of the paranormal. Stories on the show revolved around a wide variety of subjects including alien abduction, ghosts, tales of clairvoyance and cryptozoological creatures such as the yowie. The show had a distinct local slant, with stories on the 1987 Nullarbor UFO incident or the appearance of a headless ghost in a music video for the Australian band 1927. The Extraordinary was one of very few Australian programs to crack the US market. In the United States it ran as a syndicated program from 1994 until 1996, hosted by Corbin Bernsen. Episodes Telly Savalas' ghost story, death pool aboriginal legend, soldier ghost photograph, killer whale conspiracy, blind psychic. See also List of Australian television series External links 1990s Australian documentary television series Paranormal television 1993 Australian television series debuts 1996 Australian television series endings Seven Network original programming
https://en.wikipedia.org/wiki/Robert%20Tomasulo
Robert Marco Tomasulo (October 31, 1934 – April 3, 2008) was a computer scientist, and the inventor of the Tomasulo algorithm. Tomasulo was the recipient of the 1997 Eckert–Mauchly Award "[f]or the ingenious Tomasulo algorithm, which enabled out-of-order execution processors to be implemented." Robert Tomasulo attended Regis High School in New York City. He graduated from Manhattan College and then earned an engineering degree from Syracuse University. In 1956, he joined IBM Research. After nearly a decade gaining broad experience in a variety of technical and leadership roles, he transitioned to mainframe development, including the IBM System/360 Model 91 and its successors. Following his 25-year career with IBM, Tomasulo worked on an incubator project at Storage Technology Corporation to develop the first CMOS-based mainframe system; co-founded NetFrame, a mid-1980s startup that developed one of the earliest microprocessor-based server systems; and worked as a consultant on processor architecture and microarchitecture for Amdahl Consulting. On January 30, 2008, Tomasulo spoke at the University of Michigan College of Engineering about his career and the history and development of out-of-order execution. Notes External links Lecture, 2008 Personal Profile on computer.org 1934 births 2008 deaths American computer scientists Regis High School (New York City) alumni Manhattan College alumni Syracuse University alumni
https://en.wikipedia.org/wiki/Relocation%20%28computing%29
Relocation is the process of assigning load addresses for position-dependent code and data of a program and adjusting the code and data to reflect the assigned addresses. Prior to the advent of multiprocess systems, and still in many embedded systems, the addresses for objects were absolute, starting at a known location, often zero. Since multiprocessing systems dynamically link and switch between programs, it became necessary to be able to relocate objects using position-independent code. A linker usually performs relocation in conjunction with symbol resolution, the process of searching files and libraries to replace symbolic references or names of libraries with actual usable addresses in memory before running a program. Relocation is typically done by the linker at link time, but it can also be done at load time by a relocating loader, or at run time by the running program itself. Some architectures avoid relocation entirely by deferring address assignment to run time; as, for example, in stack machines with zero-address arithmetic or in some segmented architectures where every compilation unit is loaded into a separate segment. Segmentation Object files are segmented into various memory segment types. Example segments include the code segment (.text), the initialized data segment (.data), the uninitialized data segment (.bss), and others. Relocation table The relocation table is a list of pointers created by the translator (a compiler or assembler) and stored in the object or executable file. Each entry in the table, or "fixup", is a pointer to an absolute address in the object code that must be changed when the loader relocates the program so that it will refer to the correct location. Fixups are designed to support relocation of the program as a complete unit. In some cases, each fixup in the table is itself relative to a base address of zero, so the fixups themselves must be changed as the loader moves through the table. In some architectures a fixup that crosses certain boundaries (such as a segment boundary) or that is not aligned on a word boundary is illegal and flagged as an error by the linker. DOS and 16-bit Windows Far pointers (32-bit pointers with segment:offset, used to address the 20-bit, 640 KB memory space available to DOS programs), which point to code or data within a DOS executable (EXE), do not have absolute segments, because the actual address of code/data depends on where the program is loaded in memory and this is not known until the program is loaded. Instead, segments are relative values in the DOS EXE file. These segments need to be corrected when the executable has been loaded into memory. The EXE loader uses a relocation table to find the segments which need to be adjusted. 32-bit Windows With 32-bit Windows operating systems, it is not mandatory to provide relocation tables for EXE files, since they are the first image loaded into the virtual address space and thus will be loaded at their preferred base address. For b
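A minimal sketch of what a relocating loader does with such a table, under simplifying assumptions (a flat 32-bit little-endian image, and the relocation table given as plain offsets of the fixup sites): each fixup site holds an absolute address, which is adjusted by the difference between the image's linked base and its actual load address.

import struct

# Sketch of a relocating loader's fixup pass. Assumptions for
# illustration: a flat 32-bit little-endian image, and a relocation
# table given as offsets of the fixup sites within the image.

def apply_fixups(image: bytearray, fixups: list[int],
                 linked_base: int, load_base: int) -> None:
    delta = load_base - linked_base
    for off in fixups:
        (addr,) = struct.unpack_from("<I", image, off)
        struct.pack_into("<I", image, off, (addr + delta) & 0xFFFFFFFF)

# Example: an image linked at 0x1000 holds one absolute pointer at
# offset 4 referring to 0x1234; loading the image at 0x5000 rewrites it.
img = bytearray(8)
struct.pack_into("<I", img, 4, 0x1234)
apply_fixups(img, [4], linked_base=0x1000, load_base=0x5000)
assert struct.unpack_from("<I", img, 4)[0] == 0x5234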
https://en.wikipedia.org/wiki/Data%20Access%20Manager
The Data Access Manager (DAM) was a database access API for the classic Mac OS, introduced in 1991 as an extension to System 7. Similar in concept to ODBC, DAM saw little use and was eventually dropped in the late 1990s. Only a handful of products ever used it, although it was used for some extremely impressive demoware in the early 1990s. More modern versions of the classic Mac OS, and macOS, use ODBC for this role instead. Concepts DAM and ODBC are similar in many ways. The primary purpose of both systems was to send "query strings" to a data provider, which would respond (potentially) with a "result set" consisting of rows of data. Both systems were expected to convert data to and from the respective systems' formats, integers and strings for instance. Additionally, both provided a communications subsystem that hid the details of sending queries and data between the client and server. Like most Apple software, DAM attempted to make the query process as simple as possible for the users, both application users and programmers writing those applications. One particularly notable feature was the concept of "query documents". Query documents contained any number of pre-defined queries (or other server commands), along with optional code to modify them before being sent to the server. For instance, a typical query document might contain a query string that would log into the database server, and if that was successful, look up the current date from the local client machine using a Mac OS call, and then use that date in a query that returns inventory in a warehouse for a given date. Query documents could also include computer code and resources needed to support this process, for instance, a dialog box asking for the username and password. Applications could use query documents without having any idea of the internals of the query. They simply opened the document, which consisted of a series of resources, and ran each query resource inside in turn. The DAM would ensure that any needed code in the document would be run without the application even being aware of it, and eventually, results would be passed back to the application for display. The entire operation was opaque, allowing applications to add DAM support with ease. DAM also included two more direct APIs, the High Level interface and the Low Level interface. The High Level interface was fairly similar to using query documents, although it was expected that the application would construct the queries in code rather than resources. The High Level interface is broadly similar to ODBC's public interface. The Low Level interface allowed the programmer to intercede at any point in the query process, retrieving data line-by-line for instance. One major difference between DAM and ODBC came about largely by accident. Prior to the development of DAM, Apple had purchased a database middleware product they sold as the Data Access Language, or DAL. DAL was essentially a standardized SQL with translators for various databases th
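The query-document flow described above can be modelled in a few lines of Python. Every name here is hypothetical: this mirrors the described behavior (open a document of query resources, optionally rewrite each one, run them in turn, hand rows back opaquely), not the real DAM calls, which are not documented in this article.

# Conceptual model only, NOT the real DAM API; every class and method
# name is hypothetical. A query document bundles pre-written query
# resources, each optionally rewritten (e.g. splicing in today's date)
# before being sent; results come back without the application seeing
# the internals.

class QueryResource:
    def __init__(self, template, rewrite=None):
        self.template = template
        self.rewrite = rewrite  # optional modification code

    def prepare(self, context):
        if self.rewrite:
            return self.rewrite(self.template, context)
        return self.template

class QueryDocument:
    def __init__(self, resources):
        self.resources = resources  # pre-defined queries/commands

    def run(self, send, context):
        # run each query resource in turn; rows are returned opaquely
        return [send(r.prepare(context)) for r in self.resources]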
https://en.wikipedia.org/wiki/V6%20%28disambiguation%29
A V6 is an engine with six cylinders in two banks of three. V6 or V-6 may also refer to: Science and technology ITU-T V.6, a withdrawn recommendation for data signalling Version 6 Unix, a reference to the sixth edition of Research Unix from 1975 Dorsomedial area, (Area V6) of the visual cortex V6, one of six precordial leads in electrocardiography V6 television set-top-box, used by Virgin Media – refer to V+#History Places V6 Grafton Street, a road in Milton Keynes Federated States of Micronesia, by ITU prefix Other SSSR-V6 OSOAVIAKhIM, a Soviet airship V6 (quickstep), a dance figure in quickstep V6 (band), a Japanese musical group ATC code V06 General nutrients, a subgroup of the Anatomical Therapeutic Chemical Classification System See also Sony MDR-V6, a large diaphragm foldable headphone 6V (disambiguation) VVVVVV
https://en.wikipedia.org/wiki/Thermal%20conductivities%20of%20the%20elements%20%28data%20page%29
Thermal conductivity Notes Ref. CRC: Values refer to 27 °C unless noted. Ref. CR2: Values refer to 300 K and a pressure of "100 kPa (1 bar)", or to the saturation vapor pressure if that is less than 100 kPa. The notation (P=0) denotes low pressure limiting values. Ref. LNG: Values refer to 300 K. Ref. WEL: Values refer to 25 °C. References CRC As quoted from various sources in an online version of: David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 12, Properties of Solids; Thermal and Physical Properties of Pure Metals / Thermal Conductivity of Crystalline Dielectrics / Thermal Conductivity of Metals and Semiconductors as a Function of Temperature CR2 As quoted from various sources in an online version of: David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 6, Fluid Properties; Thermal Conductivity of Gases LNG As quoted from this source in an online version of: J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 4; Table 4.1, Electronic Configuration and Properties of the Elements Ho, C. Y., Powell, R. W., and Liley, P. E., J. Phys. Chem. Ref. Data 3:Suppl. 1 (1974) WEL As quoted at http://www.webelements.com/ from these sources: G.W.C. Kaye and T.H. Laby in Tables of physical and chemical constants, Longman, London, UK, 15th edition, 1993. D.R. Lide, (Ed.) in Chemical Rubber Company handbook of chemistry and physics, CRC Press, Boca Raton, Florida, USA, 79th edition, 1998. J.A. Dean (ed) in Lange's Handbook of Chemistry, McGraw-Hill, New York, USA, 14th edition, 1992. A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992. See also List of thermal conductivities Properties of chemical elements Chemical element data pages
https://en.wikipedia.org/wiki/Randall%20Hyde
Randall Hyde (born 1956) is best known as the author of The Art of Assembly Language, a popular book on assembly language programming. He created the Lisa assembler in the late 1970s and developed the High Level Assembly (HLA) language. Biography Hyde was educated, and later became a lecturer, at the University of California, Riverside. He earned a bachelor's degree in Computer Science in 1982 and a master's degree in Computer Science in 1987, both from UC Riverside. His area of specialization is compilers and other system software, and he has written compilers, assemblers, operating systems and control software. He was a lecturer at California State Polytechnic University, Pomona from 1988 to 1993 and a lecturer at UC Riverside from 1989 to 2000. While teaching at UC Riverside and Cal Poly, Pomona, Hyde frequently taught classes pertaining to assembly programming (beginning and advanced), software design, compilers, and programming language theory. He was founder and president of Lazer Microsystems, which wrote the SmartBASIC interpreter and ADAM Calc for the Coleco Adam. According to Rich Drushel, the company also wrote the ADAM implementation of CP/M 2.2. He also wrote the 1983 Atari 2600 game Porky's while at Lazer, published by Fox Video Games. Hyde has made many posts to the alt.lang.asm newsgroup in the past. Hyde operates, and is president of, Plantation Productions, Inc., a Riverside, California corporation providing sound, lighting, staging, and event support services for small to medium-sized venues, for audiences of 10 to 5,000 people. Books Modern books Early Apple programming books How to Program the Apple II Using 6502 Assembly Language (1981) p-Source (A Guide to the Apple Pascal System) (1983) References External links Webster: The Place on the Net to Learn Assembly Language Randall Hyde's homepage The Rebirth of Assembly Language Programming by Dan Romanchik, Application Development Trends, October 13, 2003, an interview with Randy Hyde about assembly language The Fallacy of Premature Optimization, ACM Ubiquity, 2006, Volume 7, Issue 24. University of California, Riverside alumni Living people Computer programmers American technology writers 1956 births
https://en.wikipedia.org/wiki/Cdrdao
cdrdao (“CD recorder disc-at-once”) is a free utility software product for authoring and ripping CD-ROMs. The program is released under the GPL. cdrdao records audio or data CD-Rs in disc-at-once mode based on a textual description of the CD contents, known as a TOC file, which can be created and customized in a text editor. cdrdao runs from the command line and has no graphical user interface, except for third-party ones such as K3b (Linux), Gcdmaster (Linux) or XDuplicator (Windows). References External links cdrdao man page Gcdmaster page XDuplicator website K3b website Free optical disc authoring software Free software programmed in C++ Console CD ripping software Linux CD ripping software
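A minimal example of the workflow: a hand-written TOC file describing a single audio track, then a disc-at-once burn. The device path and file names are examples; consult the man page for the options available on a given system.

// Minimal TOC file (mydisc.toc) for a single-track audio disc.
// "//" starts a comment in the TOC format; names below are examples.
CD_DA

TRACK AUDIO
FILE "track01.wav" 0

The disc would then be burned with a command along the lines of:

cdrdao write --device /dev/sr0 mydisc.toc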
https://en.wikipedia.org/wiki/Information%20logistics
Information Logistics (IL) deals with the flow of information between human and/or machine actors within or between any number of organizations that in turn form a value-creating network. IL is closely related to information management, information operations and information technology. Definition The term Information Logistics (IL) may be used in either of two ways: Firstly, it can be defined as "managing and controlling information handling processes optimally with respect to time (flow time and capacity), storage, distribution and presentation in such a way that it contributes to company results in concurrence with the costs of capturing (creation, searching, maintenance etc)." (Petri, 2017) Thus IL utilizes logistic principles to optimize information handling. Secondly, IL can be seen as a concept using information technology to optimize logistics. A term which is closely related to the first meaning of Information Logistics is Data Logistics, a concept used in computer networking: the study of solutions to problems in computer systems that flexibly span resources and services relating to data movement, data storage and data processing. Systems that support general Data Logistics solutions thus must span the traditionally separate fields of networking, file/database systems and process management. Data Logistics is a more general form of the term Logistical Networking, used as the name of a particular network storage architecture and software stack. Goal The goal of Information Logistics is to deliver the right product, consisting of the right information element, in the right format, at the right place at the right time for the right people at the right price, all of it driven by customer demand. When this goal is achieved, knowledge workers are optimally equipped with information for the task at hand and for improved interaction with their customers, and machines are enabled to respond automatically to meaningful information. Methods for achieving the goal are: the analysis of information demand; intelligent information storage; the optimization of the flow of information; maintaining both security and organizational flexibility; and integrated information and billing solutions. The expression was coined by the Indian mathematician and librarian S. R. Ranganathan. The supply of a product is part of the discipline of logistics. The purpose of this discipline is described as follows: logistics is the study of the planning and the effective and efficient execution of supply. Contemporary logistics focuses on the organization, planning, control and implementation of the flow of goods, money, information and people. Information Logistics focuses on information. Information (from Latin informare: "to shape, to instruct") means in a general sense everything that adds knowledge and thus reduces ignorance or lack of precision. In a stricter sense, raw data only becomes information to those who can interpret it. Interpr
https://en.wikipedia.org/wiki/Meg%20Goetz
Meg Goetz was the first woman to be appointed as a Reading clerk of the United States House of Representatives, a face familiar to viewers of C-SPAN, the network which covers House proceedings. A graduate of Chestnut Hill College in Philadelphia, she has degrees in political science and economics. She was appointed Democratic reading clerk by Speaker Tip O'Neill in 1982 and served until 1998 when she retired from the House. The reading clerks prepare the official version of all House-passed legislation and maintain all official papers on behalf of the House of Representatives relative to legislation. They report to the House membership all bills, motions, amendments, and all official communications. They also serve as House liaisons to the U.S. Senate, formally transmit all official actions taken by the House, and prepare the official records of all changes to legislation made on the floor. During the vote for Speaker at the beginning of each Congress, or when the electronic voting system fails, the reading clerk calls the roll of House membership. Ms. Goetz is currently the Vice President for Advocacy for the American Indian Higher Education Consortium, an advocacy group in Washington, D.C. She was replaced as House reading clerk for the Democrats by Mary Kevin Niland who was appointed by then Minority Leader Richard Gephardt. External links References Living people Year of birth missing (living people) Reading Clerks of the United States House of Representatives Chestnut Hill College alumni
https://en.wikipedia.org/wiki/Paul%20Hays
Paul Hays is a former Reading Clerk of the United States House of Representatives, a face familiar to viewers of C-SPAN, the network which covers House proceedings. The reading clerk reads bills, motions, and other papers before the House and keeps track of changes to legislation made on the floor. During the vote for Speaker at the beginning of each Congress, or when the electronic voting system fails, the clerk calls the roll of members for voting viva voce. Hays joined the House in 1966 and became Republican reading clerk in 1988 at the nomination of Minority Leader Robert H. Michel of Illinois. His parents met while at George Washington University, his father a native of Mississippi, and his mother a transplant from Kansas. Paul Hays was born in Washington D.C. Hays started his career in Washington as a Supreme Court Page. He attended the Capitol Page School while he was a Page. Hays's aunt taught at the Capitol Page School for many years. Hays's Democratic counterpart was Mary Kevin Niland, who remained a reading clerk until 2008. Paul Hays retired as Reading Clerk on April 30, 2007. As the House met only in a pro forma session that day, the last day Hays actually assisted in legislative business was April 26. References External links Short profile in the Washington Post Living people Year of birth missing (living people) Reading Clerks of the United States House of Representatives
https://en.wikipedia.org/wiki/Mary%20Kevin%20Niland
Mary Kevin Niland is a former Reading Clerk of the United States House of Representatives, a face familiar to viewers of C-SPAN, the network which covers House proceedings. The reading clerk reads bills, motions, and other papers before the House and keeps track of changes to legislation made on the floor. During the vote for Speaker of the United States House of Representatives at the beginning of each Congress (or when the electronic voting system fails), the Reading Clerk calls the roll of members for voting viva voce. Niland became Democratic clerk in 1998 at the recommendation of then-House Minority Leader Dick Gephardt of Missouri. She replaced Meg Goetz. Niland served until 2008, when she became Deputy Chief of Legislative Operations in the Office of the Clerk. Her successor as Reading Clerk was Jaime Zapata. Niland's Republican counterpart was Susan Cole. References External links C-SPAN on the reading clerk, with photo of Niland Living people Year of birth missing (living people) Reading Clerks of the United States House of Representatives
https://en.wikipedia.org/wiki/AutoRun
AutoRun and the companion feature AutoPlay are components of the Microsoft Windows operating system that dictate what actions the system takes when a drive is mounted. AutoRun was introduced in Windows 95 to ease application installation for non-technical users and reduce the cost of software support calls. When an appropriately configured CD-ROM is inserted into a CD-ROM drive, Windows detects the arrival and checks the contents for a special file containing a set of instructions. For a CD containing software, these instructions normally initiate installation of the software from the CD-ROM onto the hard drive. To maximise the likelihood of installation success, AutoRun also acts when the drive is accessed ("double-clicked") in Windows Explorer (or "My Computer"). Until the introduction of Windows XP, the terms AutoRun and AutoPlay were used interchangeably, developers often using the former term and end users the latter. This tendency is reflected in Windows Policy settings named AutoPlay that change Windows Registry entries named AutoRun, and in the autorun.inf file, which causes "AutoPlay" to be added to drives’ context menus. The terminology was of little importance until the arrival of Windows XP and its addition of a new feature to assist users in selecting appropriate actions when new media and devices were detected. This new feature was called AutoPlay and a differentiation between the two terms was created. AutoRun, a feature of Windows Explorer (actually of the shell32 dll) introduced in Windows 95, enables media and devices to launch programs by use of commands listed in a file called autorun.inf, stored in the root directory of the medium. Primarily used on installation CD-ROMs, the application launched is usually an installer. The autorun.inf file can also specify an icon which will represent the device visually in Explorer along with other advanced features. The terms AutoRun and AutoPlay tend to be used interchangeably when referring to the initiating action, the action that detects and starts reading from discovered volumes. The flowchart illustration in the AutoPlay article shows how AutoRun is positioned as a layer between AutoPlay and the Shell Hardware Detection service and may help in understanding the terminology. However, to avoid confusion, this article uses the term AutoRun when referring to the initiating action. AutoPlay AutoPlay is a feature introduced in Windows XP which examines removable media and devices and, based on content such as pictures, music or video files, launches an appropriate application to play or display the content. If available, settings in an autorun.inf file can add to the options presented to the user. AutoPlay is based on a set of handler applications registered with the AutoPlay system. Each media type (Pictures, Music, Video) can have a set of registered handlers which can deal with playing or displaying that type of media. Each hardware device can have a default action occurring on discovery of a
https://en.wikipedia.org/wiki/Otsu%27s%20method
In computer vision and image processing, Otsu's method, named after Nobuyuki Otsu, is used to perform automatic image thresholding. In the simplest form, the algorithm returns a single intensity threshold that separates pixels into two classes, foreground and background. This threshold is determined by minimizing intra-class intensity variance, or equivalently, by maximizing inter-class variance. Otsu's method is a one-dimensional discrete analogue of Fisher's Discriminant Analysis, is related to the Jenks optimization method, and is equivalent to a globally optimal k-means performed on the intensity histogram. The extension to multi-level thresholding was described in the original paper, and computationally efficient implementations have since been proposed. Otsu's method The algorithm exhaustively searches for the threshold that minimizes the intra-class variance, defined as a weighted sum of the variances of the two classes: $\sigma_w^2(t) = \omega_0(t)\sigma_0^2(t) + \omega_1(t)\sigma_1^2(t)$ Weights $\omega_0$ and $\omega_1$ are the probabilities of the two classes separated by a threshold $t$, and $\sigma_0^2$ and $\sigma_1^2$ are the variances of these two classes. The class probabilities are computed from the $L$ bins of the histogram: $\omega_0(t) = \sum_{i=0}^{t-1} p(i), \qquad \omega_1(t) = \sum_{i=t}^{L-1} p(i)$ For 2 classes, minimizing the intra-class variance is equivalent to maximizing the inter-class variance: $\sigma_b^2(t) = \sigma^2 - \sigma_w^2(t) = \omega_0(t)\,\omega_1(t)\,[\mu_0(t) - \mu_1(t)]^2$ which is expressed in terms of class probabilities $\omega$ and class means $\mu$, where the class means $\mu_0(t)$, $\mu_1(t)$ and $\mu_T$ are: $\mu_0(t) = \frac{\sum_{i=0}^{t-1} i\,p(i)}{\omega_0(t)}, \qquad \mu_1(t) = \frac{\sum_{i=t}^{L-1} i\,p(i)}{\omega_1(t)}, \qquad \mu_T = \sum_{i=0}^{L-1} i\,p(i)$ The following relations can be easily verified: $\omega_0\mu_0 + \omega_1\mu_1 = \mu_T, \qquad \omega_0 + \omega_1 = 1$ The class probabilities and class means can be computed iteratively. This idea yields an effective algorithm. Algorithm Compute histogram and probabilities $p(i)$ of each intensity level Set up initial $\omega_i(0)$ and $\mu_i(0)$ Step through all possible thresholds $t = 1, \ldots$ up to the maximum intensity Update $\omega_i$ and $\mu_i$ Compute $\sigma_b^2(t)$ The desired threshold corresponds to the maximum of $\sigma_b^2(t)$ MATLAB implementation histogramCounts is a 256-element histogram of a grayscale image with 256 gray levels (typical for 8-bit images). level is the threshold for the image (double).

function level = otsu(histogramCounts)
total = sum(histogramCounts); % total number of pixels in the image
%% OTSU automatic thresholding
top = 256;
sumB = 0;
wB = 0;
maximum = 0.0;
sum1 = dot(0:top-1, histogramCounts);
for ii = 1:top
    wF = total - wB; % weight of the foreground (pixels at or above the threshold)
    if wB > 0 && wF > 0
        mF = (sum1 - sumB) / wF; % mean of the foreground
        val = wB * wF * ((sumB / wB) - mF) * ((sumB / wB) - mF); % inter-class variance
        if ( val >= maximum )
            level = ii;
            maximum = val;
        end
    end
    wB = wB + histogramCounts(ii); % weight of the background
    sumB = sumB + (ii-1) * histogramCounts(ii);
end
end

MATLAB has the built-in functions graythresh() and multithresh() in the Image Processing Toolbox, which are implemented with Otsu's method and multi-level Otsu's method, respectively. Python implementation This implementation requires the NumPy library.

import numpy as np

def compute_otsu_criteria(im, th):
    """Otsu's method to compute criteria."""
    # create the thresholded image
    thresholded_im = np.zeros(im.shape)
    thresholded_im[im >= th] = 1

    # compute weights
    nb_pixels = im.size
    nb_pixels1 = np.count_nonzero(thresholded_im)
    weight1 = nb_pixels1 / nb_pixels
    weight0 = 1 - weight1

    # if one of the classes is empty, the threshold is not considered
    if weight1 == 0 or weight0 == 0:
        return np.inf

    # find all pixels belonging to each class
    val_pixels1 = im[thresholded_im == 1]
    val_pixels0 = im[thresholded_im == 0]

    # compute the variance of each class and return the weighted sum
    var1 = np.var(val_pixels1)
    var0 = np.var(val_pixels0)
    return weight0 * var0 + weight1 * var1
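A brief usage sketch for the function above; the random test image is only a stand-in for a real grayscale array:

# synthetic 8-bit test image; replace with a real grayscale array
im = np.random.randint(0, 255, (50, 50))

# exhaustive search: evaluate every threshold and keep the minimizer
threshold_range = range(int(np.max(im)) + 1)
criterias = [compute_otsu_criteria(im, th) for th in threshold_range]
best_threshold = threshold_range[np.argmin(criterias)]

Since minimizing the intra-class variance is equivalent to maximizing the inter-class variance, this loop finds the same threshold as the iterative algorithm, only less efficiently.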
https://en.wikipedia.org/wiki/Autorun.inf
An autorun.inf file is a text file that can be used by the AutoRun and AutoPlay components of Microsoft Windows operating systems. For the file to be discovered and used by these components, it must be located in the root directory of a volume. As Windows has a case-insensitive view of filenames, the autorun.inf file can be stored as AutoRun.inf or Autorun.INF or any other case combination. The AutoRun component was introduced in Windows 95 as a way of reducing support costs. AutoRun enabled application CD-ROMs to automatically launch a program which could then guide the user through the installation process. By placing settings in an autorun.inf file, manufacturers could decide what actions were taken when their CD-ROM was inserted. The simplest autorun.inf files have just two settings: one specifying an icon to represent the CD in Windows Explorer (or "My Computer") and one specifying which application to run. Extra settings have been added in successive versions of Windows to support AutoPlay and other new features. The autorun.inf file autorun.inf is an ASCII text file located in the root folder of a CD-ROM or other volume (see AutoPlay device types). The structure is that of a classic Windows .ini file, containing information and commands as "key=value" pairs, grouped into sections. These keys specify: The name and the location of a program to call when the medium is inserted (the "AutoRun task"). The name of a file that contains an icon that represents the medium in Explorer (instead of the standard drive icon). Commands for the menu that appears when the user right-clicks the drive icon. The default command that runs when the user double-clicks the drive icon. Settings that alter AutoPlay detection routines or search parameters. Settings that indicate the presence of drivers. Abuse Autorun.inf has been used to execute a malicious program automatically, without the user knowing. This functionality was removed in Windows 7, and a patch for Windows XP and Vista was released on August 25, 2009, and included in Microsoft Automatic Updates on February 8, 2011. Inf handling The mere existence of an autorun.inf file on a medium does not mean that Windows will automatically read it or use its settings. How an inf file is handled depends on the version of Windows in use, the volume drive type and certain Registry settings. Assuming Registry settings allow AutoRun, the following autorun.inf handling takes place: Windows versions prior to Windows XP On any drive type, the autorun.inf is read, parsed and its instructions followed immediately and silently. The "AutoRun task" is the application specified by the open or shellexecute keys. If an AutoRun task is specified it is executed immediately without user interaction. Windows XP, prior to Service Pack 2 Introduction of AutoPlay. Drives of type DRIVE_CDROM invoke AutoPlay if no autorun.inf file is found. Drives of type DRIVE_REMOVABLE do not use the autorun.inf file. An
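As an illustration of the structure described above, a minimal autorun.inf using only the basic keys might read as follows (the file and label names are invented):

[autorun]
open=setup.exe
icon=setup.exe,0
label=Example Install CD

Here open names the AutoRun task, icon selects the first icon resource inside setup.exe, and label sets the text shown for the drive in Explorer.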
https://en.wikipedia.org/wiki/Lighthill%20report
Artificial Intelligence: A General Survey, commonly known as the Lighthill report, is a scholarly article by James Lighthill, published in Artificial Intelligence: a paper symposium in 1973. It was compiled by Lighthill for the British Science Research Council as an evaluation of academic research in the field of artificial intelligence (AI). The report gave a very pessimistic prognosis for many core aspects of research in this field, stating that "In no part of the field have the discoveries made so far produced the major impact that was then promised". It "formed the basis for the decision by the British government to end support for AI research in most British universities". While the report was supportive of research into the simulation of neurophysiological and psychological processes, it was "highly critical of basic research in foundational areas such as robotics and language processing". The report stated that AI researchers had failed to address the issue of combinatorial explosion when solving problems within real-world domains. That is, the report stated that AI techniques may work within the scope of small problem domains, but the techniques would not scale up well to solve more realistic problems. The report represents a pessimistic view of AI that began after early excitement in the field. The Science Research Council's decision to commission the report was partly a reaction to high levels of discord within the University of Edinburgh's Department of Artificial Intelligence, one of the earliest and biggest centres for AI research in the UK. See also AI winter ALPAC report References External links "Artificial Intelligence: A General Survey" James Lighthill: in Artificial Intelligence: a paper symposium, Science Research Council Other Freddy II Robot Resources Includes a link to the 90-minute 1973 "Controversy" debate from the Royal Academy, Lighthill vs. Michie, McCarthy and Gregory, held in response to Lighthill's report to the British government. The Lighthill Debate (1973) at YouTube: Part 1·2·3·4·5·6 1973 documents 1973 in science 1973 in the United Kingdom Computer science papers History of artificial intelligence History of computing in the United Kingdom Documents of the United Kingdom
https://en.wikipedia.org/wiki/RoboCop%203
RoboCop 3 is a 1993 American cyberpunk action film directed by Fred Dekker and written by Dekker and Frank Miller. It is the sequel to the 1990 film RoboCop 2 and the third and final entry in the original RoboCop franchise. It stars Robert Burke, Nancy Allen and Rip Torn. Set in the near future in a dystopian metropolitan Detroit, the plot centers around RoboCop (Burke) as he vows to avenge the death of his partner Anne Lewis (Allen) and save Detroit from falling into chaos, while evil conglomerate OCP, run by its CEO (Torn), advances its program to demolish the city and build a new "Delta City" over the former homes of the residents. It was filmed in Atlanta, Georgia. Most of the buildings seen in the film were slated for demolition to make way for facilities used in the 1996 Summer Olympics, which were held in the city. RoboCop 3 is the first film to use digital morphing in more than one scene. The film was a critical and commercial failure in the US, grossing $47 million worldwide against its $22 million budget, making it the least profitable film of the RoboCop franchise. Two television series, RoboCop and RoboCop: Prime Directives, were released in 1994 and 2001 respectively, while the film series was rebooted with the 2014 remake RoboCop. Plot In a dystopian future, the conglomerate Omni Consumer Products (OCP) have succeeded in their plan from prior films and have acquired the city of Detroit via bankruptcy, but are now struggling with their plans to create the new Delta City. The Delta City dream of the now-deceased OCP CEO lives on with the help of the Japanese Kanemitsu Corporation, which has bought a controlling stake in OCP and is trying to finance the plan. Kanemitsu, CEO of the Kanemitsu Corporation, proceeds with the plans to remove the current citizens in order to create Delta City, but is doubtful about the competence of his new "partners". Due to passive resistance by the DPD toward mass eviction, OCP creates a heavily armed private security force called the Urban Rehabilitators, nicknamed "Rehabs", under the command of Paul McDaggett, to forcibly relocate the evicted citizens such as the residents of the now condemned Cadillac Heights. Nikko Halloran, a young resident of Cadillac Heights skilled with computers, loses her parents in the relocation process. RoboCop and his partner Anne Lewis try to defend civilians from the Rehabs one night, but McDaggett mortally wounds Lewis, who eventually dies. Unable to fight back because of his "Fourth Directive" programming, RoboCop is saved by members of a resistance movement composed of Nikko and residents from Cadillac Heights and he eventually joins them. Because he was severely damaged during the shoot-out, RoboCop's systems efficiency plummets, and he asks the resistance to summon Dr. Lazarus, one of the scientists who created him. Upon arrival, she begins to treat him, deleting the Fourth Directive in the process. During an earlier raid on an armory, the resistance picked up
https://en.wikipedia.org/wiki/Michael%20Hirsh%20%28producer%29
Michael Hirsh (born April 7, 1948) is a Belgian-born Canadian producer. He has been a significant figure in the Canadian television industry, or more specifically children's programming, since the 1980s, with his most well-known role being the co-founder of animation studio Nelvana. Personal life Born in Belgium in 1948, Michael's family emigrated to North America when he was a child; he was raised primarily in Toronto, Ontario, Canada and New York City. After high school, Michael attended York University in Toronto where he would meet his future business partner, Patrick Loubert. Hirsh abandoned his post-secondary education after three years to pursue his filmmaking ambitions. Career Nelvana In 1971, Hirsh co-founded Nelvana with Patrick Loubert and British-born animation artist Clive A. Smith. Under co-CEO Hirsh's leadership, the studio was responsible for many of its animated phenomena. During this period, he co-directed the satirical live-action/animated 1972 feature Voulez-vous coucher avec God? In the 1980s, Hirsh saved Nelvana from more than one brush with bankruptcy. After the failure of their initial feature film, Rock & Rule (in which he also worked as a storyboard artist for the film), the original distributor of their live-action show, T. and T., went out of business. Defying advice to fold the company, Michael found a replacement distributor within six weeks. In late 1996, amid Golden Books negotiations to buy Nelvana, Hirsh went against his co-founders' advice and declined the offer. This led to a now infamous argument with the then COO of the company, Eleanor Olmstead, in which the normally mild-mannered Hirsh and Olmstead were reportedly heard "swearing up and down the hallway at one another". After remaining unaware for some time, Golden Books eventually walked out of the C$140 million deal in light of the internal discord. In 1997, Hirsh and Nelvana helped found Teletoon along with fellow Canadian children's television production company Cinar. In September 2000, Hirsh sold the Nelvana holdings to Corus Entertainment for C$540 million. Two years later, he was the last of the original founders to leave the studio, though the Corus press release stated that he had decided to take on an advisory role in the company. Cookie Jar Group In 2004, Hirsh reestablished himself in the children's television market when he led a consortium which acquired the remains of Cinar after a financial scandal had brought that company to ruin. Cinar was bought for C$190 million and Hirsh became CEO of the new company rebranded "Cookie Jar Entertainment". Since then, the Cookie Jar Group has been expanding in both Canada and the United States. In 2008, Cookie Jar merged with DIC Entertainment in an estimated US$87.6 million buyout, forming one of the world's largest privately held children's entertainment companies. In 2012, Cookie Jar Entertainment was acquired by DHX Media and Hirsh became Executive Chairman of DHX Media. The combined
https://en.wikipedia.org/wiki/Macclesfield%20Canal
The Macclesfield Canal is a canal in east Cheshire, England. There were various proposals for a canal to connect the town of Macclesfield to the national network from 1765 onwards, but it was not until 1824 that a scheme came to fruition. There were already suggestions by that date that a railway would be better, but the committee that had been formed elected for a canal and the engineer Thomas Telford endorsed the decision. The canal as built was a typical Telford canal, constructed using cut and fill, with numerous cuttings and embankments to enable it to follow as straight a course as possible, although Telford had little to do with its construction, which was managed by William Crosley. The canal opened in 1831 and is long. All of its twelve locks are concentrated in a single flight at Bosley, which alters the level by . The canal runs from a junction with the Peak Forest Canal at Marple in the north, in a generally southerly direction, through the towns of Macclesfield and Congleton, to an end-on junction with the Hall Green Branch of the Trent and Mersey Canal. There is a stop lock at the junction, which drops the level by , and the branch runs for another to Hardings Wood Junction, where it joins the Trent and Mersey main line. This short branch is usually considered to be part of the Macclesfield Canal in modern literature. Faced with growing threats from railways and the fact that the Trent and Mersey was proposing to merge with a railway company, the management did all they could to cut costs. In 1846, they reached an agreement to sell the canal to a railway company, which became the Manchester, Sheffield and Lincolnshire Railway soon afterwards. Under railway ownership, the canal fared better than many and commercial carrying continued until 1954. There had been some leisure use of the canal since the end of the First World War and the North Cheshire Cruising Club, formed in 1943 and based at the High Lane arm, became the first such cruising club on the British inland waterways. There were dangers that the northern end would be isolated under plans to close the Ashton Canal and the lower Peak Forest Canal in the early 1960s, but vigorous campaigning and a growing restoration movement resulted in the Transport Act 1968, which secured the future of those canals. The designation of the canal as part of the Cheshire Ring in 1965 was part of the strategy by the Inland Waterways Association to promote the leisure potential of canals. The whole canal was designated as a Conservation Area by Macclesfield Borough Council in 1975 and a large number of its structures have been Grade II listed in recognition of their historic importance. This includes a number of elegant roving bridges, which are known locally as snake bridges. Much of the canal is rural, passing through open countryside, and there are a number of impressive embankments and aqueducts, where the canal crosses river valleys. In the centres of population, there are several large
https://en.wikipedia.org/wiki/The%20Blunder%20Years
"The Blunder Years" is the fifth episode of the thirteenth season of the American animated television series The Simpsons. It originally aired on the Fox network in the United States on December 9, 2001. The episode sees Homer, after being hypnotized by the hypnotist Mesmerino while having dinner at the restaurant Pimento Grove, reminded by a repressed traumatic experience from his childhood, including the moment he discovered the dead body of Waylon Smithers' father while having a fun at an abandoned mine. The Simpsons set out to find the corpse that triggered Homer's psychological trauma, which evolves into a murder mystery later in the episode. The episode was written by Ian Maxtone-Graham while Steven Dean Moore served as the director. The original idea for the episode came from current show runner Al Jean, which involved the murder mystery in the episode. The writers then incorporated Homer's flashbacks, at which point the episode was titled "The Blunder Years", a parody on the television show The Wonder Years. Following the release of The Simpsons thirteenth season on DVD and Blu-ray, the episode received mixed reviews from critics. Plot After tricking Marge into believing that the model for the Burly paper towel corporation Chad Sexington would have dinner with the Simpsons, Homer takes the family to the Pimento Grove to watch live performers as compensation. One of the acts is a hypnotist called Mesmerino. Homer volunteers, and Mesmerino hypnotizes him into thinking he is twelve years old again. As Homer starts to reminisce, he starts screaming incessantly all through the night. The next day, Homer's co-workers Lenny and Carl bring him home early from work, still screaming, and Lisa and Marge finally manage to calm him down with some Yaqui tea. Homer starts to recall the events leading up to the scream-inducing incident: beginning when he, Lenny, and Carl were hiking in the woods and were confronted by a young Fat Tony, only to be saved by a young Moe. Upon noticing that his bar was empty, the present-day Moe arrives at the Simpsons' home and recalls that while they sat by a fire that night, they saw a near-meltdown at the Springfield Nuclear Power Plant. The next day, they went to the old quarry for a swim, and Homer jumped in, only to find that there was no water but only mud. However, Homer recalls that there was no water in the quarry because something was blocking the inlet pipe. When Homer unblocked it, he found a rotting corpse in his lap, causing him to scream so much his voice changed from puberty. Back in the present, the Simpsons decide to investigate. They go to the old quarry, where they meet Chief Wiggum, who comes with them. Marge uses Burly paper towels to drain the water from the quarry. Finding nothing left of the corpse but a skeleton, they take its skull with them and travel through the pipe to emerge through a hatch in Mr. Burns's office. They confront him about the body, but he insists he did not murder anyone.
https://en.wikipedia.org/wiki/Lisa%27s%20Substitute
"Lisa's Substitute" is the nineteenth episode of the second season of the American animated television series The Simpsons. It originally aired on the Fox network in the United States on April 25, 1991. In the episode, Lisa's teacher Miss Hoover takes medical leave due to what she thinks is Lyme disease, so substitute teacher Mr. Bergstrom takes over the class. Lisa finds Mr. Bergstrom's teaching methods inspiring and discovers an entirely new love for learning. When Miss Hoover returns to class, Lisa is devastated to lose her most positive adult role model. Eventually, she realizes that while Mr. Bergstrom was one of a kind, she can find role models in other people, including her father Homer. Meanwhile, Bart runs for class president against Martin. The episode was written by Jon Vitti and directed by Rich Moore. It is the first episode of the show to have the opening sequence start at the driveway scene. Dustin Hoffman, using the pseudonym Sam Etic, guest-starred as Mr. Bergstrom, who was modeled on the physical appearance of Mike Reiss, a longtime writer and producer on the show. The episode features cultural references to Mike Nichols's film The Graduate, which starred Hoffman, and the novel Charlotte's Web by E. B. White. Since airing, the episode has received positive reviews from television critics, and it is one of the most celebrated episodes in the show's history, with many fans regarding it as the best episode of the series. It acquired a Nielsen rating of 11.1, and was the highest-rated show on Fox the week it aired. Plot Lisa's teacher, Miss Hoover, announces to her class that she has Lyme disease, and is taking a leave of absence. She is soon replaced by a substitute teacher, Mr. Bergstrom. When Mr. Bergstrom shows up for his first day teaching Lisa's class, he is dressed as a cowboy and pretends he is in Texas in 1830, and asks the students to name three historical inaccuracies on his costume. Lisa is able to name four historical innacuracies, and Mr. Bergstrom is impressed by Lisa's knowledge of his deliberate anachronisms. He rewards her with his cowboy hat. Due to Mr. Bergstrom's unorthodox teaching methods and friendly nature, Lisa begins to look up to him. When Lisa and Homer are visiting a museum, they run into Mr. Bergstrom, and Lisa becomes embarrassed when Homer displays his ignorance. Sensing a void in Lisa and Homer's relationship, Mr. Bergstrom takes Homer aside to suggest he be a more positive role model. At Marge's suggestion, Lisa goes to invite Mr. Bergstrom to dinner at their home, but is devastated to find Miss Hoover back (it turns out her Lyme disease was psychosomatic) and Mr. Bergstrom gone. Lisa rushes to Mr. Bergstrom's apartment and learns that he has accepted a new job in Capital City. She rushes to the train station right when Mr. Bergstrom is about to board the train and tearfully tells him that she will be lost without him. Mr. Bergstrom replies that the life of a substitute teacher is transient, a
https://en.wikipedia.org/wiki/Packet%20data%20serving%20node
The Packet Data Serving Node, or PDSN, is a component of a CDMA2000 mobile network. It acts as the connection point between the radio access and IP networks. This component is responsible for managing PPP sessions between the mobile provider's core IP network and the mobile station (mobile phone). It is similar in function to the GGSN (Gateway GPRS Support Node) that is found in GSM and UMTS networks. Although the PDSN is often thought of as similar to the GGSN in a conceptual sense, logically it is a combination of the SGSN and GGSN in the CDMA world. It provides: Mobility management functions (provided by the SGSN in GPRS/UMTS networks) Packet routing functionality (provided by the GGSN in GPRS/UMTS networks) References See also CDMA 2000 Radio Network Controller Mobile telecommunications standards 3rd Generation Partnership Project 2 standards Telecommunications infrastructure
https://en.wikipedia.org/wiki/Rajiin
"Rajiin" is the 56th episode of the American science fiction television series Star Trek: Enterprise, the fourth episode of season three. It first aired on October 1, 2003, on the UPN network in the United States. It was written by Brent V. Friedman and Chris Black from a story idea from Friedman and Paul Brown, and directed by Mike Vejar. Set in the 22nd century, the series follows the adventures of the first Starfleet starship Enterprise, registration NX-01. Season three of Enterprise features an ongoing story following an attack on Earth by previously unknown aliens called the Xindi. In this episode, Captain Jonathan Archer (Scott Bakula) and the crew visit an alien bazaar seeking a formula to help protect the ship against the anomalies in the Delphic Expanse. They bring back on board a former slave called Rajiin (Nikita Ager), whose motivations are not what the crew initially believe. Several sets were built for the episode, including the alien bazaar. Filming took longer than the normal seven days, with secondary shoots taking an additional day and a half. It was the highest-rated episode of the season so far, with 4.52 million viewers watching the first broadcast. The critical reception was mixed with criticism levelled at its "gratuitous" female sexuality, but the reviewers were pleased that it showed a sense of continuity in the overall Xindi arc with it described as a "space opera". Plot The Xindi Council meet to discuss the progress of Enterprise. Although the Reptilians and Insectoids want to attack the humans, Degra advises them to continue with the plan to build the superweapon. On Enterprise, Sub-Commander T'Pol continues to help Commander Tucker with Vulcan neuropressure sessions. The crew, seeking the formula for a compound to reinforce the ship's hull against spatial anomalies, approach an ocean planet with a vast floating bazaar. Captain Archer leads an away team to meet with B'Rat Ud, who they have come into contact with. After bartering, he sells them the formula for liquid trellium-D, and also informs them that the Xindi recently visited a merchant nearby. Archer meets the merchant, Zjod, a slaver who tries to sell him a female called Rajiin. Archer refuses and leaves, but Rajiin chases after him. Following a fight between Archer and Zjod, the away team leave with Rajiin, and Archer promises to return her to her home planet. Later she approaches Archer in his quarters, and as she nears him she puts him in a trance, and he no longer remembers her visit. Meanwhile, T'Pol and Tucker attempt to replicate the chemical from B'Rat's formula but the first attempt fails. Afterwards, T'Pol returns to her cabin and is surprised to find Rajiin inside. She tries to resist, but is soon overcome. Rajiin attempts to flee using the transporter, but is quickly captured and placed in the brig. As Archer attempts to question her, Lieutenant Reed informs him that two Reptilian ships are on an intercept course. Rajiin admits she was gathering
https://en.wikipedia.org/wiki/Script%20Debugger
Script Debugger is a Macintosh computer source code editor and debugging environment for the AppleScript programming language, and other languages based on Apple Inc.'s Open Scripting Architecture. It is a product of Late Night Software. History Script Debugger version 1.0 was released in 1995 by Mark Alldritt as a third-party alternative to Apple's freeware application, Script Editor. Its competitors at that time included ScriptWizard and Main Event Software's popular Scripter. Both of those products are now defunct, leaving only Satimage's Smile and integrated development environments such as FaceSpan (also from Late Night) and AppleScript Studio as Script Debugger's current competitors in the field. From version 1 on, the program contained several notable features: it was "scriptable" (it could be used to create scripts to control itself), "recordable" (it could create scripts based on user actions), and "attachable" (scripts could be written to respond to events). More importantly, Script Debugger allowed inspection of running applications to see what events they were emitting. True to its name, the utility also contained a full debugger, with support for breakpoints. Script Debugger has since won many awards in the Macintosh scripting community. Version 1 received "5 mice" from MacUser and 4 stars from MacWEEK. Version 2 received the 2000 Macworld Eddy for "Best Development Software", and received "4.5 mice" from both MacUser and Macworld. On February 9, 2006, version 4 of Script Debugger was released. This version was completely rewritten to take full advantage of the new Cocoa and Tiger APIs. The new release also included an improved version of the JavaScript OSA scripting component. Version 5 of Script Debugger was released in June 2012. Version 6 of Script Debugger was released in June 2016, with support for new features such as code folding and AppleScriptObjC. Version 7 of Script Debugger was released in March 2018, introducing the free "Lite" mode and new features such as version browsing, enhanced applets, and better bundle editing. References External links Script Debugger 7 AppleScript Editors a comparison of several editors, out of date AppleScript Editors, a page of links to AppleScript utilities MacUser UK a review of version 3.0.1 Macworld another review Macintosh operating systems development MacOS programming tools
https://en.wikipedia.org/wiki/Rede%20Nacional%20de%20Ensino%20e%20Pesquisa
Rede Nacional de Ensino e Pesquisa (RNP) (National Education and Research Network) is Brazil's academic Internet backbone. It was created in 1989 and the network started being built in 1991. RNP has 27 points of presence, one in each of the 26 Brazilian states and one in the Federal District. It connects 15 state networks and over 600 institutions. See also National Research and Education Network External links Rede Nacional de Ensino e Pesquisa List of connected institutions (in Portuguese) Internet in Brazil National research and education networks
https://en.wikipedia.org/wiki/6bone
The 6bone was a testbed for Internet Protocol version 6; it was an outgrowth of the IETF IPng project that created the IPv6 protocols intended to eventually replace the current Internet network layer protocols known as IPv4. The 6bone was started outside the official IETF process at the March 1996 IETF meetings, and became a worldwide informal collaborative project, with eventual oversight from the "NGtrans" (IPv6 Transition) Working Group of the IETF. The original mission of the 6bone was to establish a network to foster the development, testing, and deployment of IPv6, using a model based upon the experiences from the Mbone, hence the name "6bone". The 6bone started as a virtual network (using IPv6 over IPv4 tunneling/encapsulation) operating over the IPv4-based Internet to support IPv6 transport, and slowly added native links specifically for IPv6 transport. Although the initial 6bone focus was on testing of standards and implementations, the eventual focus became more on testing of transition and operational procedures, as well as actual IPv6 network usage. The 6bone operated under the IPv6 Testing Address Allocation (RFC 2471), which specified the 3FFE::/16 IPv6 prefix for 6bone testing purposes. At its peak in mid-2003, over 150 6bone top level 3FFE::/16 network prefixes were routed, interconnecting over 1000 sites in more than 50 countries. When it became obvious that the availability of IPv6 top level production prefixes was assured, and that commercial and private IPv6 networks were being operated outside the 6bone using these prefixes, a plan was developed to phase out the 6bone (RFC 3701). The phaseout plan called for a halt to new 6bone prefix allocations on 1 January 2004 and the complete cessation of 6bone operation and routing over the 6bone testing prefixes on 6 June 2006. Addresses within the 6bone testing prefix have now reverted to the IANA. Related RFCs RFC 2471, IPv6 Testing Address Allocation RFC 3701, 6bone (IPv6 Testing Address Allocation) Phaseout External links Experimental computer networks IPv6 Projects established in 1996 2006 disestablishments Internet architecture
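Whether a given address fell inside the 6bone testing prefix is easy to check programmatically; a small Python sketch using the standard ipaddress module (the sample address is arbitrary):

import ipaddress

six_bone = ipaddress.ip_network("3ffe::/16")  # the 6bone testing prefix from RFC 2471
addr = ipaddress.ip_address("3ffe:b00::1")    # an arbitrary sample address
print(addr in six_bone)  # True: this address would have been 6bone space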
https://en.wikipedia.org/wiki/Internet%20begging
Internet begging, cyber-begging, e-begging or Internet panhandling is the online version of traditional begging, asking strangers for money to meet basic needs such as food and shelter. Internet begging among strangers differs from street begging in that it can be practiced with relative anonymity, thereby eliminating or reducing the shame and disgrace of begging in public. Internet begging is also commonly done among acquaintances on social media platforms, such as requests for donations from friends and family members to pay for normal educational expenses. A cause website is a cyber-begging site that presents a personal appeal for funds or help. History During the early days of the Internet, cyber-begging was evident in the form of personal advertisements for help on local bulletin board systems (BBS). As personal websites became more popular, individuals began advertising their needs using the features available through website authoring. Many Internet service providers (ISPs) offered a free homepage along with the basic dial-up connection service to the Internet. For many people, this was an opportunity to create an inexpensive website to host and share their personal experience and need. As non-profit organizations began moving their fundraising efforts from snail mail (postal mail) to the World Wide Web, individuals began to create more elaborate forms of personal 'fundraising' by utilizing many of the same Internet techniques. During the late 1990s, as the Internet became more sophisticated, resources became available allowing any individual to create an attractive website without requiring knowledge of HTML or other web authoring systems. These free-to-inexpensive web hosting services remain a constant on the Internet, making it easy for the public to access, create and advertise an Internet begging website. Internet begging gained notoriety and momentum after June 2002, when Karyn Bosnak started SaveKaryn.com as an attempt to have the Internet public help pay her credit card debt, which was in part due to her predilection for designer clothing and Starbucks coffee. For Bosnak, the results led to traditional media attention, appearances on popular television programs and a book. Her website was probably the first Internet begging site to gain wide exposure and it became the example for many to follow. In October 2009, the Boston Globe carried a story on so-called cyberbegging, or Internet begging, which was reported to be a new trend worldwide. Internet begging sites With hundreds of Internet begging sites on-line, it has become common practice for web beggars to register and own the domain name of their websites. Using free or inexpensive hosting services and specialized websites such as GoFundMe, Internet begging websites ask the public for help with many needs including breast augmentation surgery, cancer treatments, new cars, preventing personal homelessness, and medical bills, to name a few. Websites with name
https://en.wikipedia.org/wiki/Albert%20Corbett
Albert Corbett may refer to: Albert H. C. Corbett (1887–1983), politician in Manitoba, Canada Albert T. Corbett, associate research professor of human-computer interaction at Carnegie Mellon University
https://en.wikipedia.org/wiki/David%20Faber%20%28journalist%29
David H. Faber (born March 10, 1964) is an American financial journalist and market news analyst for the television cable network CNBC. He is currently one of the co-hosts of CNBC's morning show Squawk on the Street. Career Faber joined CNBC in 1993 after seven years at Institutional Investor. He has been dubbed "The Brain" by CNBC co-workers, and has hosted several documentaries on corporations, such as Wal-Mart and eBay. The Age of Walmart earned Faber a 2005 Peabody Award and an Alfred I. duPont-Columbia University Award for Broadcast Journalism. In 2010, he shared the Gerald Loeb Award for Television Enterprise business journalism for "House of Cards." On September 17, 2023, Faber celebrated working 30 years at CNBC. In addition to Squawk on the Street, Faber hosts the network's monthly program, Business Nation, which debuted on January 24, 2007. Faber is the author of three books: The Faber Report (2002), And Then the Roof Caved In (2009), and House of Cards: The Origins of the Collapse (2010). Faber served as a guest host on Jeopardy! from August 2–6, 2021. Faber was the champion of Celebrity Jeopardy! in 2012. Personal life Faber is Jewish and was raised in Queens, New York. He is a 1985 cum laude graduate of Tufts University, where he earned a Bachelor of Arts degree in English. In 2000, Faber married Jenny Harris, who is a business journalist and television producer. She is the daughter of lawyer Jay Harris (Hall Dickler Kent Goldstein & Wood) and As the World Turns actress Marie Masters and fraternal twin sister of musician Jesse Harris. See also New Yorkers in journalism References External links David Faber Biography. – CNBC TV Profiles. – CNBC.com. 1964 births Living people Tufts University School of Arts and Sciences alumni Gerald Loeb Award winners for Television Place of birth missing (living people) CNBC people American business and financial journalists American male journalists Jewish American journalists Journalists from Queens, New York 21st-century American Jews
https://en.wikipedia.org/wiki/Distributed%20object
In distributed computing, distributed objects are objects (in the sense of object-oriented programming) that are distributed across different address spaces, either in different processes on the same computer, or even in multiple computers connected via a network, but which work together by sharing data and invoking methods. This often involves location transparency, where remote objects appear the same as local objects. The main method of distributed object communication is remote method invocation, generally by message-passing: one object sends a message to another object in a remote machine or process to perform some task. The results are sent back to the calling object. Distributed objects were popular in the late 1990s and early 2000s, but have since fallen out of favor. The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects. Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior. Live distributed objects (or simply live objects) generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have a distinct identity, and that can encapsulate distributed state and behavior. See also Internet protocol suite. Local vs. distributed objects Local and distributed objects differ in many respects. Here are some of them: Life cycle: Creation, migration and deletion of distributed objects differs from that of local objects Reference: Remote references to distributed objects are more complex than simple pointers to memory addresses Request latency: A distributed object request is orders of magnitude slower than local method invocation Object activation: Distributed objects may not always be available to serve an object request at any point in time Parallelism: Distributed objects may be executed in parallel. Communication: There are different communication primitives available for distributed object requests Failure: Distributed objects have far more points of failure than typical local objects. Security: Distribution makes them vulnerable to attack. Examples The RPC facilities of the cross-platform serialization protocol Cap'n Proto amount to a distributed object protocol. Distributed object method calls can be executed (chained, in a single network request, if need be) t
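The request/response pattern behind remote method invocation can be sketched with Python's standard xmlrpc modules. This is only an illustration of the general idea, not the CORBA- or RMI-style machinery discussed above, and the class, port and method names are invented:

# server process: exposes an object whose methods can be invoked remotely
from xmlrpc.server import SimpleXMLRPCServer

class Counter:
    def __init__(self):
        self.value = 0
    def increment(self, amount):
        self.value += amount
        return self.value  # the result is marshalled back to the caller

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_instance(Counter())
server.serve_forever()

# client process: the proxy is a remote reference, not a pointer into memory
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000")
print(proxy.increment(5))  # one network round trip, far slower than a local call

The two halves run in separate processes; the comments mark where the differences listed above (reference, latency, failure) enter the picture.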
https://en.wikipedia.org/wiki/Ndiyo
Ndiyo was a non-profit organisation based in Cambridge, United Kingdom, which aimed to promote networked computing that is "simple, affordable, open." Ndiyo, pronounced nn-dee-yo, is the Swahili word for "yes". The company developed an ultra-thin client called the nivo (network in, video out) based on Ubuntu Linux and other open-source software, for use especially in developing countries. The data sent to the clients over the network was pixel data, using a similar approach to Virtual Network Computing (VNC). The project worked on the basis of multiple workstations running from a single PC. Quentin Stafford-Fraser, founder of the organisation, told The Economist "We can make computing more affordable by sharing it". The system allowed a basic PC running Linux to be shared by many users. The Ndiyo Nivo was similar in concept to Sun Microsystems' Sun Ray virtual display thin client, but at sub-$100 and using only 2W, it was lower-cost and used much less power, making it more suitable for such settings. In addition to its use by organisations within the United Kingdom, Ndiyo-based systems were deployed in internet cafes in Bangladesh and South Africa, and in Tanzanian schools. The Nivo technology went on to become the basis of DisplayLink, a company founded by members of the team. References External links Ndiyo! The Ndiyo system and the nivo Video of a 2006 Ndiyo deployment in Bangladesh Video of a 2006 Ndiyo deployment in South Africa Organisations based in Cambridge Swahili words and phrases Thin clients
https://en.wikipedia.org/wiki/Feldmann
Feldmann is a German surname. Notable people with the surname include: Anja Feldmann (born 1966), German computer scientist Else Feldmann (1884–1942), Austrian writer and journalist Gyula Feldmann (1880–1955), Jewish Hungarian football player and coach Jean Feldmann (1905–1978), French algologist, given the standard abbreviation "Feldmann" John Feldmann (born 1967), American musician and producer, member of band Goldfinger Markus Feldmann (1897–1958), Swiss politician Rötger Feldmann (born 1950), German comic book artist also known as Brösel See also Killing of Susanna Feldmann, a 2018 crime which occurred in Germany Feldmann's method, a method of titration of tannin, especially in wine Feldman German-language surnames
https://en.wikipedia.org/wiki/Local%20Management%20Interface
Local Management Interface (LMI) is a term for some signaling standards used in networks, namely Frame Relay and Carrier Ethernet. Frame Relay LMI is a set of signalling standards between routers and Frame Relay switches. Communication takes place between a router and the first Frame Relay switch to which it is connected. Information about keepalives, global addressing, IP multicast and the status of virtual circuits is commonly exchanged using LMI. There are three standards for LMI: Using DLCI 0: ANSI's T1.617 Annex D standard ITU-T's Q.933 Annex A standard Using DLCI 1023: The "Gang of Four" standard, developed by Cisco, DEC, StrataCom and Nortel Carrier Ethernet Ethernet Local Management Interface (E-LMI) is an Ethernet layer operation, administration, and management (OAM) protocol defined by the Metro Ethernet Forum (MEF) for Carrier Ethernet networks. It provides information that enables auto configuration of customer edge (CE) devices. References External links Additional information on Frame Relay LMI Computer networking Frame Relay
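On Cisco routers, the Frame Relay LMI type described above is a per-interface setting. A hedged IOS configuration sketch follows; the interface name is illustrative, and later IOS releases can also autosense the LMI type:

interface Serial0/0
 encapsulation frame-relay
 ! select the LMI standard: ansi = T1.617 Annex D, q933a = Q.933 Annex A,
 ! cisco = the "Gang of Four" LMI on DLCI 1023
 frame-relay lmi-type ansi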
https://en.wikipedia.org/wiki/Sports%20radio
Sports radio (or sports talk radio) is a radio format devoted entirely to discussion and broadcasting of sporting events. A widespread programming genre that has a narrow audience appeal, sports radio is characterized by an often-boisterous on-air style and extensive debate and analysis by both hosts and callers. Many sports talk stations also carry play-by-play (live commentary) of local sports teams as part of their regular programming. History Hosted by Bill Mazer, the first sports talk radio show in history launched in March 1964 on New York's WNBC (AM). Soon after WNBC launched its program, in 1965 Seton Hall University's radio station, WSOU, started Hall Line, a call-in sports radio talk show focusing on the team's basketball program. Having celebrated its 50th anniversary on air during the 2015–2016 season, Hall Line, which broadcasts to central and northern New Jersey as well as all five boroughs of New York, is the oldest and longest-running sports talk call-in show in the NY-NJ Metropolitan area, and is believed to be the oldest in the nation. Enterprise Radio Network became the first national all-sports network, operating out of Avon, Connecticut, from New Year's Day 1981 through late September of that year before going out of business. ER had two channels, one for talk and a second for updates and play-by-play. ER's talk lineup included current New York Yankees voice John Sterling, New York Mets radio host Ed Coleman and former big-league pitcher Bill Denehy. Emmis Broadcasting's WFAN in New York in 1987 was the first all-sports radio station. The success of the station and its programs, such as Mike and the Mad Dog, caused many to appear around the United States; while only one other radio show besides Mike and the Mad Dog attended the 1990 Super Bowl, about 100 attended the 2004 Super Bowl's radio row. Programming Sports talk is available in local, network and syndicated forms, is available in multiple languages, and is carried in multiple forms on both major North American satellite radio networks. In the United States, most sports talk-formatted radio stations air syndicated programming from ESPN Radio, SportsMap, Sports Byline USA, Fox Sports Radio, CBS Sports Radio, or NBC Sports Radio, while in the Spanish language, ESPN Deportes Radio is the largest current network. In contrast, Canadian sports talk stations may carry a national brand (such as TSN Radio or Sportsnet Radio) but carry mostly local programming, with American-based shows filling in gaps. Compared to other formats, interactive "talkback" sports radio poses difficulties for Internet radio, since as a live format it is difficult to automate; most prominent sports leagues also place their radio broadcasts behind a paywall or provide their broadcasts directly to the consumer, depriving standalone Internet stations of potential programming. Pre-recorded sports talk programs (usually interview-centered) can be syndicated as podcasts with relative ease, and sports te
https://en.wikipedia.org/wiki/Dan%20Farmer
Dan Farmer (born April 5, 1962) is an American computer security researcher and programmer who was a pioneer in the development of vulnerability scanners for Unix operating systems and computer networks. Life and career Farmer developed his first software suite while he was a computer science student at Purdue University in 1989. Gene Spafford, one of his professors, helped him to start the project. The software, called the Computer Oracle and Password System (COPS), comprises several small, specialized vulnerability scanners designed to identify security weaknesses in one part of a Unix operating system. In 1995, Farmer and Wietse Venema (a Dutch programmer and physicist) developed a second vulnerability scanner called the Security Administrator Tool for Analyzing Networks (SATAN). Due to a misunderstanding of SATAN's capabilities, when it was first published, some network administrators and law enforcement personnel believed that hackers would use it to identify and break into vulnerable computers. Consequently, SGI terminated Farmer's employment. However, contrary to popular opinion, SATAN did not function as an automatic hacking program that undermined network security. Rather, it operated as an audit on network security that identified vulnerabilities and made suggestions to help prevent them. No information about how security vulnerabilities could be exploited was provided by the tool. Within a few years, the use of vulnerability scanners such as SATAN became an accepted method for auditing computer and network security. He co-developed the Titan vulnerability scanner with Brad Powell and Matt Archibald, which they presented at the Large Installation System Administration Conference (LISA) in 1998. Farmer and Venema collaborated again to develop a computer forensics suite called The Coroner's Toolkit, and later coauthored Forensic Discovery (2005), a book about computer forensics. Farmer co-founded Elemental Security with Dayne Myers, and served as the corporation's chief technical officer. References Bibliography External links Home page Blog Hackers, episode of NetCafe containing an interview with Dan Farmer 1962 births Living people Purdue University alumni Chief technology officers of computer security companies American chief technology officers Computer science writers Silicon Graphics people
https://en.wikipedia.org/wiki/In-memory%20database
An in-memory database (IMDB; also main memory database system, MMDB, or memory-resident database) is a database management system that primarily relies on main memory for computer data storage. It is contrasted with database management systems that employ a disk storage mechanism. In-memory databases are faster than disk-optimized databases because disk access is slower than memory access and the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory eliminates seek time when querying the data, which provides faster and more predictable performance than disk. Applications where response time is critical, such as those running telecommunications network equipment and mobile advertising networks, often use main-memory databases. IMDBs have gained much traction, especially in the data analytics space, starting in the mid-2000s – mainly due to multi-core processors that can address large memory and due to less expensive RAM. A potential technical hurdle with in-memory data storage is the volatility of RAM. Specifically in the event of a power loss, intentional or otherwise, data stored in volatile RAM is lost. With the introduction of non-volatile random-access memory technology, in-memory databases will be able to run at full speed and maintain data in the event of power failure. ACID support In its simplest form, main memory databases store data on volatile memory devices. These devices lose all stored information when the device loses power or is reset. In this case, IMDBs can be said to lack support for the "durability" portion of the ACID (atomicity, consistency, isolation, durability) properties. Volatile memory-based IMDBs can, and often do, support the other three ACID properties of atomicity, consistency and isolation. Many IMDBs have added durability via the following mechanisms: Snapshot files, or checkpoint images, which record the state of the database at a given moment in time. The system typically generates these periodically, or at least when the IMDB does a controlled shut-down. While they give a measure of persistence to the data (in that the database does not lose everything in the case of a system crash), they offer only partial durability (as "recent" changes will be lost). For full durability, they need supplementing with one of the following: Transaction logging, which records changes to the database in a journal file and facilitates automatic recovery of an in-memory database. Non-Volatile DIMM (NVDIMM), a memory module that has a DRAM interface, often combined with NAND flash for non-volatile data retention. The first NVDIMM solutions were designed with supercapacitors instead of batteries for the backup power source. With this storage, an IMDB can resume securely from its state upon reboot. Non-volatile random-access memory (NVRAM), usually in the form of static RAM backed up with battery power (battery RAM), or an electrically erasable programmable ROM (EEPROM). Wi
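To make the snapshot-plus-logging durability scheme concrete, here is a minimal illustrative sketch (not any particular product's implementation) of an in-memory key-value store in C++: every committed write is appended and flushed to a transaction log before being applied in memory, and recovery replays the log at startup. The log file name "kv.log" and the one-line log format are arbitrary choices for the example.

// Minimal sketch of write-ahead logging for an in-memory key-value store.
// Illustrative only; a real IMDB adds checkpoints, fsync policies, and a
// crash-safe log format. Keys and values here must not contain spaces.
#include <fstream>
#include <iostream>
#include <string>
#include <unordered_map>

class InMemoryKV {
public:
    explicit InMemoryKV(const std::string& log_path) : log_path_(log_path) {
        replay();                            // recover state from the journal
        log_.open(log_path_, std::ios::app); // then append new entries
    }
    void put(const std::string& key, const std::string& value) {
        log_ << "PUT " << key << ' ' << value << '\n';
        log_.flush();                        // durability point: log first...
        data_[key] = value;                  // ...then apply in memory
    }
    const std::string* get(const std::string& key) const {
        auto it = data_.find(key);
        return it == data_.end() ? nullptr : &it->second;
    }
private:
    void replay() {
        std::ifstream in(log_path_);
        std::string op, key, value;
        while (in >> op >> key >> value)
            if (op == "PUT") data_[key] = value;  // reapply committed writes
    }
    std::string log_path_;
    std::ofstream log_;
    std::unordered_map<std::string, std::string> data_;
};

int main() {
    InMemoryKV db("kv.log");
    db.put("answer", "42");
    if (const std::string* v = db.get("answer")) std::cout << *v << '\n';
}

A checkpoint, in this scheme, would periodically serialize the in-memory map to a snapshot file and truncate the log, which bounds recovery time at the cost of losing only the changes made after the last flushed log record.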
https://en.wikipedia.org/wiki/Architecture%20of%20Windows%20NT
The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode. It is a preemptive, reentrant multitasking operating system, which has been designed to work with uniprocessor and symmetrical multiprocessor (SMP)-based computers. To process input/output (I/O) requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting with Windows XP, Microsoft began making 64-bit versions of Windows available; before this, there were only 32-bit versions of these operating systems. Programs and subsystems in user mode are limited in terms of what system resources they have access to, while the kernel mode has unrestricted access to the system memory and external devices. Kernel mode in Windows NT has full access to the hardware and system resources of the computer. The Windows NT kernel is a hybrid kernel; the architecture comprises a simple kernel, hardware abstraction layer (HAL), drivers, and a range of services (collectively named Executive), which all exist in kernel mode. User mode in Windows NT is made of subsystems capable of passing I/O requests to the appropriate kernel mode device drivers by using the I/O manager. The user mode layer of Windows NT is made up of the "Environment subsystems", which run applications written for many different types of operating systems, and the "Integral subsystem", which operates system-specific functions on behalf of environment subsystems. The kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to. The Executive interfaces with all the user mode subsystems and deals with I/O, object management, security and process management. The kernel sits between the hardware abstraction layer and the Executive to provide multiprocessor synchronization, thread and interrupt scheduling and dispatching, and trap handling and exception dispatching. The kernel is also responsible for initializing device drivers at bootup. Kernel mode drivers exist in three levels: highest level drivers, intermediate drivers and low-level drivers. Windows Driver Model (WDM) exists in the intermediate layer and was mainly designed to be binary and source compatible between Windows 98 and Windows 2000. The lowest level drivers are either legacy Windows NT device drivers that control a device directly or can be a plug and play (PnP) hardware bus. User mode User mode is made up of various system-defined processes and DLLs. The interface between user mode applications and operating system kernel functions is called an "environment subsystem." Windows NT can have more than one of these, each implementing a different API set. This mechanism was designed to support applications written for many different types of operating systems. None of the environment subsystems can directly access hardware; access to ha
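The path an I/O request takes from user mode down to the kernel can be illustrated from the application side. In the hypothetical fragment below, a Win32 program calls the real CreateFileW and ReadFile APIs; the environment subsystem DLLs (kernel32.dll, ntdll.dll) translate these calls into system calls that the I/O manager turns into IRPs for the driver stack. That translation is invisible to the program; the file path is an arbitrary example and the snippet is Windows-only.

// A user-mode I/O request as seen from an application (Windows only).
// Each ReadFile below ultimately becomes an IRP_MJ_READ request that the
// I/O manager delivers to the file system driver stack in kernel mode.
#include <windows.h>
#include <iostream>

int main() {
    HANDLE h = CreateFileW(L"C:\\Windows\\win.ini", GENERIC_READ,
                           FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        std::cerr << "CreateFileW failed: " << GetLastError() << '\n';
        return 1;
    }
    char buffer[64];
    DWORD read = 0;
    if (ReadFile(h, buffer, sizeof(buffer), &read, nullptr))
        std::cout << "Read " << read << " bytes via the I/O manager\n";
    CloseHandle(h);
    return 0;
}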
https://en.wikipedia.org/wiki/WeatherBug
WeatherBug is a brand based in New York City that provides location-based advertising to businesses. WeatherBug consists of a mobile app reporting live conditions and forecasts for hyperlocal weather to consumers. History Originally owned by Automated Weather Source, the WeatherBug brand was founded by Bob Marshall and other partners in 1993. It started in the education market by selling weather tracking stations and educational software to public and private schools and then used the data from the stations on their website. Later, the company began partnering with TV stations so that broadcasters could use WeatherBug's local data and camera shots in their weather reports. In 2000, the WeatherBug desktop application and website were launched. Later, the company launched WeatherBug and WeatherBug Elite as smartphone apps for iOS and Android, which won an APPY app design award in 2013. The company also sells a lightning tracking safety system that is used by schools and parks in southern Florida and elsewhere. The company used lightning detection sensors throughout Guinea in Africa to track storms as they develop and has more than 50 lightning detection sensors in Brazil. Earth Networks received The Award for Outstanding Services to Meteorology by a Corporation in 2014 from the American Meteorological Society for "developing innovative lightning detection data products that improve severe-storm monitoring and warnings." WeatherBug announced in 2004 it had been certified to display the TRUSTe privacy seal on its website. In 2005, Microsoft AntiSpyware flagged the application as a low-risk spyware threat. According to the company, the desktop application is not spyware because it is incapable of tracking users' overall Web use or deciphering anything on their hard drive. In early 2011, AWS Convergence Technologies, Inc. (formerly Automated Weather Source) changed its name to Earth Networks, Inc. In April 2013, WeatherBug was the second most popular weather information service on the Internet, behind only The Weather Channel's Web site, and ahead of the sites run by Weather Underground and AccuWeather. In November 2016, it was announced that xAd acquired WeatherBug from Earth Networks. Mobile application The company developed WeatherBug, a mobile application version of its service for the Android, iOS and Windows Phone platforms. An iPhone version became available in October 2007, and Android versions were released in November 2008 and 2009. Spark is a component of the WeatherBug app that reports how far the nearest lightning strike is from the user, based on data from the Total Lightning Network (run by WeatherBug's former owner, Earth Networks) and the phone's GPS location. References Meteorological data and networks Meteorological companies Companies based in New York City Consulting firms established in 1993 Business services companies established in 1993 Internet p
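A nearest-strike feature like Spark can be approximated with a great-circle distance computation. The sketch below is not WeatherBug's code; it only shows the kind of calculation involved: given a phone's GPS coordinates and a list of recent strike locations (both invented here), find the closest strike with the haversine formula.

// Toy nearest-lightning-strike lookup (illustrative; not WeatherBug's code).
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

struct Point { double lat, lon; };  // coordinates in degrees

// Great-circle distance in kilometres using the haversine formula.
double haversine_km(Point a, Point b) {
    constexpr double kEarthRadiusKm = 6371.0;
    constexpr double kDegToRad = 3.14159265358979323846 / 180.0;
    double dlat = (b.lat - a.lat) * kDegToRad;
    double dlon = (b.lon - a.lon) * kDegToRad;
    double h = std::sin(dlat / 2) * std::sin(dlat / 2) +
               std::cos(a.lat * kDegToRad) * std::cos(b.lat * kDegToRad) *
               std::sin(dlon / 2) * std::sin(dlon / 2);
    return 2.0 * kEarthRadiusKm * std::asin(std::sqrt(h));
}

int main() {
    Point phone{40.71, -74.01};  // hypothetical GPS fix
    std::vector<Point> strikes{{40.9, -73.9}, {41.2, -74.5}, {40.6, -74.2}};
    double best = 1e12;
    for (const Point& s : strikes)
        best = std::min(best, haversine_km(phone, s));
    std::cout << "Nearest strike: " << best << " km away\n";
}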
https://en.wikipedia.org/wiki/NFL%20on%20NBC
The NFL on NBC is the branding used for broadcasts of National Football League (NFL) games that are produced by NBC Sports, and televised on the NBC television network and the Peacock streaming service in the United States. NBC had sporadically carried NFL games as early as 1939, including the championship and Pro Bowl through the 1950s and early 1960s. Beginning in 1965, NBC signed an agreement to carry the American Football League (AFL)'s telecasts, which carried over with the American Football Conference (AFC) when the AFL merged with the NFL. NBC would continuously carry the AFL/AFC's Sunday afternoon games from 1965 through the 1997 season, after which NBC lost the AFC contract to CBS. NFL coverage returned to NBC on August 6, 2006, under the title NBC Sunday Night Football, beginning with its coverage of the preseason Pro Football Hall of Fame Game. From 2016 to 2017, NBC added a five-game Thursday Night Football package to its offerings, supplementing two Thursday games that were already part of the Sunday Night Football package. Game coverage is usually preceded by the pregame show Football Night in America. History Beginnings through the 1950s NBC's coverage of the National Football League (which has aired under numerous program titles and formats) goes back to the beginnings of the network's relationship with the league in 1939, when its New York City flagship station, then known as W2XBS (now WNBC), aired the first televised professional football game between the Philadelphia Eagles and the now-defunct Brooklyn Dodgers football team. Even before this, in 1934, NBC Radio's Blue Network had carried the Detroit Lions' inaugural Thanksgiving game nationwide. By 1955, NBC became the television home to the NFL Championship Game, the precursor to the Super Bowl, paying US$100,000 to the league for the rights. The network had taken over the broadcast rights from the DuMont Television Network, which had struggled to give the league a national audience (NBC's coverage of proto-Canadian Football League games from the year prior was more widely available at the time) and was on the brink of failure; the NFL's associations with NBC (as well as with CBS) proved to be a boost to the league's popularity. For the 1957 NFL Championship Game, Van Patrick and Ken Coleman split the play-by-play duties, each calling one half, and Red Grange, normally on play-by-play for Chicago Bears games on CBS, assumed the color commentator role for this game. The 1958 NFL Championship Game, played at Yankee Stadium between the Baltimore Colts and the New York Giants, went into sudden death overtime. This game, since known as the "Greatest Game Ever Played", was seen by many throughout the country and is credited with increasing the popularity of professional football in the late 1950s and early 1960s. Chris Schenkel called the first half while Chuck Thompson called the second half and overtime. NBC televised the NFL Championship Game until 1963. The contract for the titl
https://en.wikipedia.org/wiki/TOra
TOra (Toolkit for Oracle) is a free software database development and administration GUI, available under the GNU General Public License. It features a PL/SQL debugger, an SQL worksheet with syntax highlighting, a database browser and a comprehensive set of database administration tools. In addition to Oracle Database support, support for MySQL, PostgreSQL and Teradata databases has been added since the initial launch. It uses the Qt toolkit and can use the QScintilla2 library. The Oracle connector uses the Oracle Template Library. TOra was originally written by Henrik Johnson and copyrighted by GlobeCom AB, which was acquired by Quest Software. Conversion to a community-maintained open source project began on 2005-02-17 with version 1.3.15. The port to Qt 4 took place in 2009 with version 1.4. See also Comparison of database tools References External links SourceForge project page The Cuddletech SAs Guide to Oracle: TOra, Ben Rockwood (2005-02-10) TOra, orafaq wiki Database administration tools SQL clients PL/SQL editors Quest Software Software that uses Qt Oracle database tools
https://en.wikipedia.org/wiki/Propagation%20constraint
In database systems, a propagation constraint "details what should happen to a related table when we update a row or rows of a target table" (Paul Beynon-Davies, 2004, p.108). Tables are linked using primary key to foreign key relationships. It is possible for users to update one table in a relationship in such a way that the relationship is no longer consistent; this is known as breaking referential integrity. An example of breaking referential integrity: if a table of employees includes a department number for 'Housewares', which is a foreign key to a table of departments, and a user deletes that department from the department table, then Housewares employees' records would refer to a non-existent department number. Propagation constraints are methods used by relational database management systems (RDBMS) to solve this problem by ensuring that relationships between tables are preserved without error. In his database textbook, Beynon-Davies explains the three ways that an RDBMS can handle deletions of target and related tuples: Restricted Delete - the user cannot delete the target row until all rows that point to it (via foreign keys) have been deleted. This means that all Housewares employees would need to be deleted, or their departments changed, before the department could be removed from the department table. Cascades Delete - the user can delete the target row, and all rows that point to it (via foreign keys) are deleted as well. The effect is the same as a restricted delete, except that the RDBMS deletes the Housewares employees automatically before removing the department. Nullifies Delete - the user can delete the target row, and all foreign keys pointing to it are set to null. In this case, after removing the Housewares department, employees who worked in this department would have a NULL (unknown) value for their department. Bibliography Beynon-Davies, P. (2004) Database Systems Third Edition, Palgrave Macmillan. Relational database management systems
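In SQL, these three behaviours are declared on the foreign key itself as ON DELETE RESTRICT, ON DELETE CASCADE and ON DELETE SET NULL. The sketch below demonstrates the cascade case using SQLite purely as an example RDBMS (any engine with foreign key support would do); the table and column names mirror the departments/employees example above, and the program needs linking against the real sqlite3 library (-lsqlite3).

// Demonstrates a propagation constraint (ON DELETE CASCADE) with SQLite as
// an example RDBMS. Substituting RESTRICT or SET NULL in the foreign key
// clause gives the other two behaviours described above.
#include <sqlite3.h>
#include <iostream>

static void run(sqlite3* db, const char* sql) {
    char* err = nullptr;
    if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
        std::cerr << "SQL error: " << err << '\n';
        sqlite3_free(err);
    }
}

int main() {
    sqlite3* db = nullptr;
    sqlite3_open(":memory:", &db);
    run(db, "PRAGMA foreign_keys = ON;");  // SQLite needs this pragma
    run(db, "CREATE TABLE departments (deptno INTEGER PRIMARY KEY, name TEXT);");
    run(db, "CREATE TABLE employees ("
            "  empno INTEGER PRIMARY KEY, name TEXT,"
            "  deptno INTEGER REFERENCES departments(deptno) ON DELETE CASCADE);");
    run(db, "INSERT INTO departments VALUES (10, 'Housewares');");
    run(db, "INSERT INTO employees VALUES (1, 'Smith', 10);");
    // Deleting the department cascades to the employees that reference it,
    // so referential integrity is preserved automatically.
    run(db, "DELETE FROM departments WHERE deptno = 10;");
    sqlite3_exec(db, "SELECT COUNT(*) FROM employees;",
                 [](void*, int, char** vals, char**) {
                     std::cout << "employees left: " << vals[0] << '\n';
                     return 0;
                 }, nullptr, nullptr);
    sqlite3_close(db);
}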
https://en.wikipedia.org/wiki/Liang%E2%80%93Barsky%20algorithm
In computer graphics, the Liang–Barsky algorithm (named after You-Dong Liang and Brian A. Barsky) is a line clipping algorithm. The Liang–Barsky algorithm uses the parametric equation of a line and inequalities describing the range of the clipping window to determine the intersections between the line and the clip window. With these intersections it knows which portion of the line should be drawn. So this algorithm is significantly more efficient than Cohen–Sutherland. The idea of the Liang–Barsky clipping algorithm is to do as much testing as possible before computing line intersections. The algorithm uses the parametric form of a straight line: x = x1 + t(x2 − x1), y = y1 + t(y2 − y1), with 0 ≤ t ≤ 1. A point is in the clip window if xmin ≤ x ≤ xmax and ymin ≤ y ≤ ymax, which can be expressed as the 4 inequalities t·pk ≤ qk for k = 1, 2, 3, 4, where p1 = −(x2 − x1), q1 = x1 − xmin (left edge); p2 = x2 − x1, q2 = xmax − x1 (right edge); p3 = −(y2 − y1), q3 = y1 − ymin (bottom edge); p4 = y2 − y1, q4 = ymax − y1 (top edge). To compute the final line segment: A line parallel to a clipping window edge has pk = 0 for that boundary. If, for that k, qk < 0, then the line is completely outside and can be eliminated. When pk < 0, the line proceeds outside to inside the clip window, and when pk > 0, the line proceeds inside to outside. For nonzero pk, u = qk/pk gives the parameter of the intersection point of the line and the window edge (possibly projected). The two actual intersections of the line with the window edges, if they exist, are described by parameters u1 and u2, calculated as follows. For u1, look at boundaries for which pk < 0 (i.e. outside to inside); take u1 to be the largest among {0, qk/pk}. For u2, look at boundaries for which pk > 0 (i.e. inside to outside); take u2 to be the minimum of {1, qk/pk}. If u1 > u2, the line is entirely outside the clip window and is rejected; otherwise the clipped segment runs from parameter u1 to u2.

// Liang–Barsky line-clipping algorithm
// (assumes a BGI-compatible graphics library such as WinBGIm for
// rectangle(), line(), setlinestyle(), outtextxy() and initgraph())
#include <iostream>
#include <graphics.h>
#include <math.h>
using namespace std;

// this function gives the maximum
float maxi(float arr[], int n) {
  float m = 0;
  for (int i = 0; i < n; ++i)
    if (m < arr[i])
      m = arr[i];
  return m;
}

// this function gives the minimum
float mini(float arr[], int n) {
  float m = 1;
  for (int i = 0; i < n; ++i)
    if (m > arr[i])
      m = arr[i];
  return m;
}

void liang_barsky_clipper(float xmin, float ymin, float xmax, float ymax,
                          float x1, float y1, float x2, float y2) {
  // defining variables
  float p1 = -(x2 - x1);
  float p2 = -p1;
  float p3 = -(y2 - y1);
  float p4 = -p3;

  float q1 = x1 - xmin;
  float q2 = xmax - x1;
  float q3 = y1 - ymin;
  float q4 = ymax - y1;

  float posarr[5], negarr[5];
  int posind = 1, negind = 1;
  posarr[0] = 1;
  negarr[0] = 0;

  rectangle(xmin, ymin, xmax, ymax); // drawing the clipping window

  if ((p1 == 0 && q1 < 0) || (p2 == 0 && q2 < 0) ||
      (p3 == 0 && q3 < 0) || (p4 == 0 && q4 < 0)) {
      outtextxy(80, 80, "Line is parallel to clipping window!");
      return;
  }
  if (p1 != 0) {
    float r1 = q1 / p1;
    float r2 = q2 / p2;
    if (p1 < 0) {
      negarr[negind++] = r1; // for negative p1, add it to negative array
      posarr[posind++] = r2; // and add p2 to positive array
    } else {
      negarr[negind++] = r2;
      posarr[posind++] = r1;
    }
  }
  if (p3 != 0) {
    float r3 = q3 / p3;
    float r4 = q4 / p4;
    if (p3 < 0) {
      negarr[negind++] = r3; // same rule for the vertical boundaries
      posarr[posind++] = r4;
    } else {
      negarr[negind++] = r4;
      posarr[posind++] = r3;
    }
  }

  float rn1 = maxi(negarr, negind); // u1: maximum of the negative array
  float rn2 = mini(posarr, posind); // u2: minimum of the positive array

  if (rn1 > rn2) { // reject: line is entirely outside the clipping window
    outtextxy(80, 80, "Line is outside the clipping window!");
    return;
  }

  float xn1 = x1 + p2 * rn1;
  float yn1 = y1 + p4 * rn1; // computing the new endpoints
  float xn2 = x1 + p2 * rn2;
  float yn2 = y1 + p4 * rn2;

  line(xn1, yn1, xn2, yn2); // drawing the clipped line
  setlinestyle(1, 1, 0);    // draw the clipped-away portions dotted
  line(x1, y1, xn1, yn1);
  line(x2, y2, xn2, yn2);
}

int main() {
  // example driver with illustrative coordinates
  int gd = DETECT, gm;
  initgraph(&gd, &gm, (char*)"");
  liang_barsky_clipper(100, 100, 300, 300, 50, 150, 350, 250);
  cin.get();
  closegraph();
  return 0;
}
https://en.wikipedia.org/wiki/Rail%20transportation%20in%20the%20United%20States
Rail transportation in the United States consists primarily of freight shipments along a well integrated network of standard gauge private freight railroads that also extend into Canada and Mexico. The United States has the largest rail transport network size of any country in the world, at a total of approximately . Passenger service serves as a mass transit option for Americans with commuter rail in most major American cities, especially on the U.S. East Coast. Intercity passenger service was once a large and vital part of the nation's passenger transportation network, but it began playing an increasingly diminished role for passengers in the 20th century as commercial air traffic and the Interstate Highway System made commercial air and vehicle transport a practical option throughout the United States. The nation's earliest railroads were built in the 1820s and 1830s, primarily in New England and the Mid-Atlantic region. The Baltimore and Ohio Railroad, chartered in 1827, was the nation's first common carrier railroad. By 1850, an extensive railroad network had begun to take shape in the rapidly industrializing Northeastern United States and the Midwest, while relatively fewer railroads were constructed in the primarily agricultural Southern United States. During and after the American Civil War, the first transcontinental railroad was built to connect California with the rest of the national network in Iowa. Railroads continued to expand throughout the rest of the 1800s, eventually reaching nearly every corner of the nation. The nation's railroads were temporarily nationalized between 1917 and 1920 by the United States Railroad Administration, as a result of U.S. entry into World War I. Railroad mileage in the nation peaked at this time. Railroads were affected deeply by the Great Depression in the United States, with some lines being abandoned during this time. A major increase in traffic during World War II brought a temporary reprieve, but after the war railroads faced intense competition from automobiles and aircraft and began a long decline. Passenger service was especially hard hit, with the federal government creating Amtrak in 1971 to take over responsibility for intercity passenger travel. Numerous railroad companies went bankrupt starting in the 1960s, most notably Penn Central Transportation Company in 1971, in the largest bankruptcy in the nation's history at the time. Once again, the federal government intervened, forming Conrail in 1976 to assume control of bankrupt railroads in the northeast. Railroads' fortunes began to change following the passage of the Staggers Rail Act in 1980, which deregulated railroad companies, who had previously faced much stronger regulation than competing modes of transportation. With innovations such as trailer-on-flatcar and intermodal freight transport, railroad traffic began to increase. Following the Staggers Act, many railroads merged, forming major systems such as CSX and Norfolk Southern
https://en.wikipedia.org/wiki/Anthornis
Anthornis is a bird genus in the honeyeater family (Meliphagidae). Its members are called bellbirds. According to genetic data, it is a sister genus to Prosthemadera. It contains the following species: New Zealand bellbird, Anthornis melanura Chatham bellbird, Anthornis melanocephala (extinct) They are named bellbirds because their call sounds like a bell. Young male bellbirds copy the calls of neighbouring older males. Sometimes two males can sing in almost perfect unison because one has been copying the other. References Bird genera Taxa named by George Robert Gray Bird genera with one living species
https://en.wikipedia.org/wiki/Filter%20driver
A filter driver is a Microsoft Windows driver that extends or modifies the function of peripheral devices or supports a specialized device in the personal computer. It is a driver, program, or module that is inserted into the existing driver stack to perform some specific function. A filter driver should not affect the normal working of the existing driver stack in any major way. Written either by Microsoft or the vendor of the hardware, any number of filter drivers can be added to Windows. Upper level filter drivers sit above the primary driver for the device (the function driver), while lower level filter drivers sit below the function driver and above the bus driver. Filters may work on a certain brand of device such as a mouse or keyboard, or they may perform some operation on a class of devices, such as any mouse or any keyboard. The Windows Dev Center - Hardware pages explain upper and lower filter drivers in detail. For example, the generic USB camera (UVC) driver usbvideo.sys is a function driver, while the bus driver handles USB data from the host controller devices. A lower level filter modifies the behavior of the camera hardware (e.g. watching for interrupt packets from a camera switch) and fits between the function and bus drivers. An upper level filter typically provides added-value features for a camera, such as additional processing of the video stream (e.g. colour changes, identification of objects, applying overlays), and fits between the function driver and the user application that has connected to the camera. Another type of filter driver is the bus (e.g. USB, PCI, PCIe) filter driver, which may be added on top of the bus driver. For example, an ACPI bus filter is added to support power management for each device. See also Windows Driver Model Device driver Advanced Configuration and Power Interface References Device drivers Microsoft application programming interfaces
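The core idea, attaching a filter device object onto an existing stack and forwarding requests to the driver below, can be sketched with standard Windows Driver Model routines. The fragment below is a much-reduced illustration, not a complete driver: it omits PnP, power and unload handling and all error paths, and it builds only with the Windows Driver Kit. Names such as FILTER_EXT and PassThrough are invented for the example.

// Skeleton of a WDM pass-through filter driver (illustrative sketch only;
// requires the Windows Driver Kit and omits PnP, power, and unload logic).
#include <ntddk.h>

typedef struct _FILTER_EXT {
    PDEVICE_OBJECT LowerDevice;   // next driver down the stack
} FILTER_EXT, *PFILTER_EXT;

// Forward every IRP unchanged to the device below; a real filter would
// inspect or modify selected requests here before passing them on.
NTSTATUS PassThrough(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PFILTER_EXT ext = (PFILTER_EXT)DeviceObject->DeviceExtension;
    IoSkipCurrentIrpStackLocation(Irp);
    return IoCallDriver(ext->LowerDevice, Irp);
}

NTSTATUS FilterAddDevice(PDRIVER_OBJECT DriverObject, PDEVICE_OBJECT Pdo)
{
    PDEVICE_OBJECT filter;
    NTSTATUS status = IoCreateDevice(DriverObject, sizeof(FILTER_EXT), NULL,
                                     FILE_DEVICE_UNKNOWN, 0, FALSE, &filter);
    if (!NT_SUCCESS(status)) return status;
    PFILTER_EXT ext = (PFILTER_EXT)filter->DeviceExtension;
    // Attach onto the stack; IRPs aimed at the device now flow through us.
    ext->LowerDevice = IoAttachDeviceToDeviceStack(filter, Pdo);
    filter->Flags &= ~DO_DEVICE_INITIALIZING;
    return STATUS_SUCCESS;
}

extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject,
                                PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    for (int i = 0; i <= IRP_MJ_MAXIMUM_FUNCTION; ++i)
        DriverObject->MajorFunction[i] = PassThrough;
    DriverObject->DriverExtension->AddDevice = FilterAddDevice;
    return STATUS_SUCCESS;
}

Because the filter registers the same pass-through dispatch for every major function, it satisfies the requirement above that a filter should not affect the normal working of the stack; value-added behaviour is introduced by special-casing individual IRP major codes.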
https://en.wikipedia.org/wiki/Todd%20Grisham
Todd Grisham (born January 9, 1976) is an American sports reporter for DAZN and Glory kickboxing. Prior to his departure from ESPN at the end of 2016, his duties for the network included being the in-studio host for Friday Night Fights as well as a SportsCenter anchor. In January 2017 he joined the UFC as a broadcaster, working there as a sports reporter until 2020. Prior to joining ESPN in 2011, Grisham worked as a professional wrestling commentator for WWE. Early life Born in Hattiesburg, Mississippi, Grisham was raised in Bay Minette, Alabama, and attended Baldwin County High School for his freshman year, where he played on the school's soccer team as a forward. His family relocated several times after his freshman year, and Grisham graduated from Orange Park High School in Orange Park, Florida. Grisham attended Wingate University for one year on a soccer scholarship and later transferred to the University of West Georgia, where he received his degree in communications. His first professional job in the television industry was with KTVO of Ottumwa, Iowa, where he worked for a year and a half. He was then a sportscaster for KOLD-TV Tucson for approximately five years before signing a two-year contract with WWE as an announcer, officially joining them on January 14, 2004. Professional wrestling career World Wrestling Entertainment/WWE Voice of Heat, Byte This and Bottom Line (2004–2008) Grisham debuted in WWE in 2004 as the voice of Heat, doing play-by-play alongside Jonathan Coachman, Josh Mathews, and others for just over four years. He had co-hosted Experience with Ivory, but after her release from WWE, he hosted it alone until mid-2006, when Josh Mathews took over. Grisham also hosted WWE's premier web show, Byte This!, which was canceled in 2006. In addition, he did backstage interviews for the Raw brand. In 2005 Grisham also began hosting Raw's catch-up program Bottom Line after Marc Loyd was released from the company, until September 2007. ECW (2008–2009) Grisham became the new play-by-play commentator for the ECW brand on the July 29, 2008 episode of ECW on Syfy, replacing Mike Adamle, and was paired with Tazz, who was the color commentator until Matt Striker took over. Grisham and Striker won the 2008 Slammy Award for Announce Team of the Year. SmackDown, NXT and departure (2009–2011) On the April 7, 2009 episode of ECW, Josh Mathews became the new play-by-play commentator for ECW. Grisham was promoted to SmackDown play-by-play commentator, and debuted on April 10, 2009. On October 30, 2009, he was reunited with his ECW broadcasting partner Matt Striker. On December 10, 2010, Grisham was replaced as SmackDown's play-by-play commentator by Josh Mathews; Grisham replaced Michael Cole on NXT as play-by-play commentator and joined with Mathews. After becoming the lead play-by-play announcer for NXT for five months, he left; Grisham's last WWE appearance was on the August 26, 2011, edition of SmackDo
https://en.wikipedia.org/wiki/Educational%20technology
Educational technology (commonly abbreviated as edutech, or edtech) is the combined use of computer hardware, software, and educational theory and practice to facilitate learning. When referred to with its abbreviation, "EdTech," it often refers to the industry of companies that create educational technology. In EdTech Inc.: Selling, Automating and Globalizing Higher Education in the Digital Age, Tanner Mirrlees and Shahid Alvi (2019) argue "EdTech is no exception to industry ownership and market rules" and "define the EdTech industries as all the privately owned companies currently involved in the financing, production and distribution of commercial hardware, software, cultural goods, services and platforms for the educational market with the goal of turning a profit. Many of these companies are US-based and rapidly expanding into educational markets across North America, and increasingly growing all over the world." In addition to the practical educational experience, educational technology is based on theoretical knowledge from various disciplines such as communication, education, psychology, sociology, artificial intelligence, and computer science. It encompasses several domains including learning theory, computer-based training, online learning, and m-learning where mobile technologies are used. Definition The Association for Educational Communications and Technology (AECT) has defined educational technology as "the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources". It denotes instructional technology as "the theory and practice of design, development, utilization, management, and evaluation of processes and resources for learning". As such, educational technology refers to all valid and reliable applied education sciences, such as equipment, as well as processes and procedures that are derived from scientific research, and in a given context may refer to theoretical, algorithmic or heuristic processes: it does not necessarily imply physical technology. Educational technology is the process of integrating technology into education in a positive manner that promotes a more diverse learning environment and a way for students to learn how to use technology as well as their common assignments. Accordingly, there are several discrete aspects to describing the intellectual and technical development of educational technology: Educational technology as the theory and practice of educational approaches to learning. Educational technology as technological tools and media, for instance massive online courses, that assist in the communication of knowledge, and its development and exchange. This is usually what people are referring to when they use the term "edtech". Educational technology for learning management systems (LMS), such as tools for student and curriculum management, and education management information systems (EMIS). Educa
https://en.wikipedia.org/wiki/Lists%20of%20microcomputers
For an overview of microcomputers of different kinds, see the following lists of microcomputers: List of early microcomputers List of home computers List of home computers by video hardware Lists of computer hardware
https://en.wikipedia.org/wiki/Ettercap%20%28software%29
Ettercap is a free and open source network security tool for man-in-the-middle attacks on a LAN. It can be used for computer network protocol analysis and security auditing. It runs on various Unix-like operating systems including Linux, Mac OS X, BSD and Solaris, and on Microsoft Windows. It is capable of intercepting traffic on a network segment, capturing passwords, and conducting active eavesdropping against a number of common protocols. Its original developers later founded Hacking Team. Functionality Ettercap works by putting the network interface into promiscuous mode and by ARP poisoning the target machines. Thereby it can act as a 'man in the middle' and unleash various attacks on the victims. Ettercap has plugin support so that the features can be extended by adding new plugins. Features Ettercap supports active and passive dissection of many protocols (including ciphered ones) and provides many features for network and host analysis. Ettercap offers four modes of operation: IP-based: packets are filtered based on IP source and destination. MAC-based: packets are filtered based on MAC address, useful for sniffing connections through a gateway. ARP-based: uses ARP poisoning to sniff on a switched LAN between two hosts (full-duplex). PublicARP-based: uses ARP poisoning to sniff on a switched LAN from a victim host to all other hosts (half-duplex). In addition, the software also offers the following features: Character injection into an established connection: characters can be injected into a server (emulating commands) or to a client (emulating replies) while maintaining a live connection. SSH1 support: the sniffing of a username and password, and even the data of an SSH1 connection. Ettercap is the first software capable of sniffing an SSH connection in full duplex. HTTPS support: the sniffing of HTTP SSL secured data—even when the connection is made through a proxy. Remote traffic through a GRE tunnel: the sniffing of remote traffic through a GRE tunnel from a remote Cisco router, and perform a man-in-the-middle attack on it. Plug-in support: creation of custom plugins using Ettercap's API. Password collectors for: TELNET, FTP, POP, IMAP, rlogin, SSH1, ICQ, SMB, MySQL, HTTP, NNTP, X11, Napster, IRC, RIP, BGP, SOCKS 5, IMAP 4, VNC, LDAP, NFS, SNMP, MSN, YMSG Packet filtering/dropping: setting up a filter that searches for a particular string (or hexadecimal sequence) in the TCP or UDP payload and replaces it with a custom string/sequence of choice, or drops the entire packet. TCP/IP stack fingerprinting: determine the OS of the victim host and its network adapter. Kill a connection: killing connections of choice from the connections-list. Passive scanning of the LAN: retrieval of information about hosts on the LAN, their open ports, the version numbers of available services, the type of the host (gateway, router or simple PC) and estimated distances in number of hops. Hijacking of DNS requests. Ettercap also has the
https://en.wikipedia.org/wiki/Noble%20Network%20of%20Charter%20Schools
Noble Schools (formerly known as the Noble Network of Charter Schools and as Noble Street Charter School) is an open-enrollment public charter network of high schools and middle schools serving students throughout Chicago. Noble was co-founded in 1999 by Michael Milkie and Tonya Hernandez, in partnership with Ron Manderschied, President of the Northwestern University Settlement House. Noble's first expansions, Rauner College Prep and Pritzker College Prep, opened in 2006. There are currently 18 schools in the charter school network: 1 middle school and 17 high schools. Noble schools are public and open to all students in Chicago, and there is no testing required for admission. The student population for Noble Network schools is 98% minority and 89% low-income. It currently serves 12,543 students from more than 70 Chicago communities. The Noble Network has an overall college acceptance rate of 90%. In 2014 black and Hispanic students in Noble schools ranked in the top 30 percent in reading, math and science. It was named the top public charter network in 2015 by the Eli and Edythe Broad Foundation, and Chicago Magazine ranked Noble schools as the top five charter high schools in Chicago. According to Princeton University and the Brookings Institution in 2018, attending a Noble high school increased college enrollment by 13 percentage points, with most of the increase coming at four-year, relatively selective institutions. Persistence in college also increased, with a 12 percentage point increase in attending four or more semesters of higher education. In the 2018-2019 School Quality Rating Policy results published by the Chicago Public Schools, Noble's high schools earned 10 of the 15 top ranking school slots in the district. The School Quality Rating Policy (SQRP) is the Board of Education's policy for evaluating school performance. It establishes the indicators of school performance and growth and the benchmarks against which a school's success will be evaluated on an annual basis. Through this policy, each school receives a School Quality Rating and an Accountability Status. Programs Teaching Residency Program The Noble-Relay Teaching Residency, run in partnership with Relay Graduate School of Education and launched in the summer of 2014, provides a one-year pathway into a career as a teacher in an urban setting for Noble alumni and interested community members. College Counseling & Alumni Support Noble's college counseling and alumni support program has led to students graduating from college at 3-5x the national average. Noble uses college counseling tools and software to match students with the highest graduation rate schools. This system has been shared with other schools in Chicago and around the country. Each senior has a college counselor and applies to 8-10 colleges to find the right "match" school. Students can attend a College Seminar course their senior year to help them complete college, financial, and scholarship applications.
https://en.wikipedia.org/wiki/The%20Way%20We%20Was
"The Way We Was" is the twelfth episode of the second season of the American animated television series The Simpsons. It originally aired on the Fox network in the United States on January 31, 1991. In the episode, Marge tells the story of how she and Homer first met and fell in love. Flashing back to 1974, it is shown how Homer falls in love with Marge in high school and tries to get close to her by enlisting her as his French tutor. After several hours of verb conjugation, Marge falls for Homer too, only to become enraged when he admits he is not a French student. Marge rejects Homer's invitation to the prom and goes with Artie Ziff. Artie turns out to be a terrible date and Marge realizes that it is Homer she really wants. The episode was written by Al Jean, Mike Reiss, and Sam Simon, and directed by David Silverman. It was the first flashback episode of The Simpsons. Jon Lovitz guest-starred in it as Artie Ziff. The episode features cultural references to songs such as "The Joker" and "(They Long to Be) Close to You", and the television series Siskel & Ebert & the Movies. The title itself is a reference to the 1973 film The Way We Were. Since airing, the episode has received mostly positive reviews from television critics. It acquired a Nielsen rating of 15.6, and was the highest rated show on Fox the week it aired. Plot When the Simpsons' television set breaks, Marge tells her children how she and Homer met in a flashback. Marge and Homer were both high school seniors in 1974. Homer and his close friend Barney earned detention for smoking in the boys' restroom. Unlike Homer, Marge was studious, but she was also sent to detention for burning a bra at a feminist rally. Homer instantly fell in love with Marge the first time he saw her in the detention room. Despite his father Abe's warning that he was aiming too high, Homer was determined to win Marge's heart. To impress Marge, Homer joined her debate team, where he learned she was romantically interested in the more articulate Artie Ziff. Homer asked Marge to tutor him in French, and she accepted his invitation to the senior prom. When Homer confessed that he was not enrolled in French class and was only using the ruse to spend time with her, Marge told him off for making her needlessly stay awake late the night before an important debate tournament. She lost the debate to Artie, who asked her to be his prom date. Homer was unaware of this, so he unexpectedly arrived at her house on prom night. When Artie arrived moments later, Homer despondently left and attended the prom alone. Artie and Marge were crowned prom king and queen and shared the first dance. Marge found Homer crying in the hallway. He confessed his feelings for her and although she was sympathetic, she urged him to accept her love for Artie. At Inspiration Point after the prom, Artie tried to make out with Marge in the back seat of his car; when he tore her dress in a fit of passion, Marge slapped him and demanded to be take
https://en.wikipedia.org/wiki/Global%20Ecolabelling%20Network
The Global Ecolabelling Network (GEN) is a non-profit network composed of some 29 ecolabel organisations throughout the world representing nearly 60 countries and territories, with two associate members and a growing number of affiliate members, one of which is Google. GEN members have certified over 252,000 products and services for environmental leadership. GEN was established in 1994. The stated goal of the Network is to further the exchange of information between national ecolabel organisations that operate "Type I" ecolabels, the strongest category, as defined by ISO 14024. "Blauer Engel" (Blue Angel), the German ecolabel, established in 1978, was the first of this kind. Ecolabels are "licensed" for use only after a product or service is proven to meet transparent, published standards for environmental preferability, verified by a qualified, independent third party, and assessed over multiple environmental parameters (not just one single issue). The ecolabels are an assurance to consumers and procurement professionals that a product or service is proven "green" and has high environmental values and integrity. The Global Ecolabelling Network, its members, their licensees, and the public celebrates World Ecolabel Day every year in October. Members Australia – Good Environmental Choice Australia (Environmental Choice Australia) Brazil – Associação Brasileira de Normas Técnicas (ABNT-Environmental Quality) [Brazilian National Standards Organization] China – China Environmental United Certification Center (China Environmental Labelling) China – China Quality Certification Centre (China Environmentally Friendly Certification) Chinese Taipei – Environment and Development Foundation (Green Mark) EU – European Commission (EU Ecolabel) Germany – German Federal Environmental Agency (Blue Angel) Germany – TÜV Rheinland (Green Product Mark) Hong Kong – Green Council (Green Label) India – Confederation of Indian Industry (GreenPro) Indonesia – Ministry of Environment (Indonesian Ecolabel) Israel – Standards Institution of Israel (Israeli Green Label) Japan – Japan Environment Association (Eco Mark Program) Kazakhstan – International Academy of Ecology of the Republic of Kazakhstan (ECO-Labelling Program) Korea – Korea Environmental Industry & Technology Institute (Korea Eco-Label) Malaysia – SIRIM QAS International Sdn Bhd (SIRIM Eco-Labelling Scheme) New Zealand – New Zealand Ecolabelling Trust (Environmental Choice New Zealand) Nordic Countries – Nordic Ecolabelling Board (Nordic Swan) North America (U.S.A and Canada) – UL (Ecologo) North America (U.S.A.) – (Green Seal) Philippines – Philippine Center for Environmental Protection and Sustainable Development (Green Choice Philippines) Russia – Ecological Union (Vitality Leaf) Singapore – Singapore Environment Council (Green Label Singapore ) Sweden – Swedish Society for Nature Conservation (Good Environmental Choice) Sweden – TCO Development (TCO Certified) Thailand – Thailand Environment Institute
https://en.wikipedia.org/wiki/Regnum
Regnum may refer to: Latin for kingdom or dominion, see realm Regnum, Latin word for Kingdom (biology) REGNUM News Agency, a Russian news agency Champions of Regnum, a computer game An online database for PhyloCode
https://en.wikipedia.org/wiki/Frank%20Rosenblatt
Frank Rosenblatt (July 11, 1928 – July 11, 1971) was an American psychologist notable in the field of artificial intelligence. He is sometimes called the father of deep learning for his pioneering work on neural networks. Life and career Rosenblatt was born into a Jewish family in New Rochelle, New York as the son of Dr. Frank and Katherine Rosenblatt. After graduating from The Bronx High School of Science in 1946, he attended Cornell University, where he obtained his A.B. in 1950 and his Ph.D. in 1956. He then went to Cornell Aeronautical Laboratory in Buffalo, New York, where he was successively a research psychologist, senior psychologist, and head of the cognitive systems section. This is also where he conducted the early work on perceptrons, which culminated in the development and hardware construction of the Mark I Perceptron in 1960. This was essentially the first computer that could learn new skills by trial and error, using a type of neural network that simulates human thought processes. Rosenblatt's research interests were exceptionally broad. In 1959 he went to Cornell's Ithaca campus as director of the Cognitive Systems Research Program and also as a lecturer in the Psychology Department. In 1966 he joined the Section of Neurobiology and Behavior within the newly formed Division of Biological Sciences, as associate professor. Also in 1966, he became fascinated with the transfer of learned behavior from trained to naive rats by the injection of brain extracts, a subject on which he would publish extensively in later years. In 1970 he became field representative for the Graduate Field of Neurobiology and Behavior, and in 1971 he shared the acting chairmanship of the Section of Neurobiology and Behavior. Frank Rosenblatt died in July 1971 on his 43rd birthday, in a boating accident in Chesapeake Bay. Academic interests Perceptron Rosenblatt was best known for the Perceptron, an electronic device which was constructed in accordance with biological principles and showed an ability to learn. Rosenblatt's perceptrons were initially simulated on an IBM 704 computer at Cornell Aeronautical Laboratory in 1957. When a triangle was held before the perceptron's eye, it would pick up the image and convey it along a random succession of lines to the response units, where the image was registered. He developed and extended this approach in numerous papers and a book called Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, published by Spartan Books in 1962. He received international recognition for the Perceptron. The New York Times billed it as a revolution, with the headline “New Navy Device Learns By Doing”, and The New Yorker similarly admired the technological advancement. Research on comparable devices was also being done in other places such as SRI, and many researchers had high expectations of what such devices could do. The initial excitement became somewhat reduced, though, when in 1969 Marvin Minsky and Seymour P
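Rosenblatt's perceptron learning rule is simple enough to state in a few lines of code. The sketch below is a modern software illustration, not the Mark I hardware: a single perceptron with two inputs and a bias is trained on the linearly separable AND function, and its weights are nudged whenever the prediction and the label disagree. The learning rate and epoch count are arbitrary choices for the example.

// Minimal perceptron learning rule (illustrative; trains on logical AND).
#include <array>
#include <iostream>

int main() {
    // Four training examples: inputs (x1, x2) and target output.
    const std::array<std::array<double, 3>, 4> data{{
        {0, 0, 0}, {0, 1, 0}, {1, 0, 0}, {1, 1, 1}}};
    double w1 = 0, w2 = 0, bias = 0;
    const double rate = 0.1;  // learning rate (assumed value)

    for (int epoch = 0; epoch < 20; ++epoch) {
        for (const auto& row : data) {
            double x1 = row[0], x2 = row[1], target = row[2];
            double out = (w1 * x1 + w2 * x2 + bias > 0) ? 1 : 0;  // threshold unit
            double err = target - out;
            // Rosenblatt's rule: move weights toward the correct classification.
            w1 += rate * err * x1;
            w2 += rate * err * x2;
            bias += rate * err;
        }
    }
    for (const auto& row : data)
        std::cout << row[0] << " AND " << row[1] << " -> "
                  << ((w1 * row[0] + w2 * row[1] + bias > 0) ? 1 : 0) << '\n';
}

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a separating set of weights; the limitation Minsky and Papert emphasized is that no such weights exist for non-separable functions such as XOR.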
https://en.wikipedia.org/wiki/Sigaction
In computing, sigaction is a function API defined by POSIX that lets the programmer specify how a program should behave when it receives specific OS signals. General In Unix-like operating systems, one means of inter-process communication is through signals. When an executing unit (process or thread) receives a signal from the OS, it should react in some way defined by the application and by the conventional meaning of this signal (i.e. by dumping its data, stopping execution, synchronizing something...). The sigaction() system call is used to declare the behavior of the program should it receive one particular non-system-reserved signal. This is done by passing along with the system call a structure that contains, among other things, a function pointer to the signal-handling routine. Some predefined signals (such as SIGKILL) have locked behavior that is handled by the system and cannot be overridden by such system calls. sigaction structure The POSIX standard requires that the sigaction structure be defined as below in the <signal.h> header file and it should contain at least the following fields: struct sigaction { void (*sa_handler)(int); /* address of signal handler */ sigset_t sa_mask; /* additional signals to block */ int sa_flags; /* signal options */ /* alternate signal handler */ void (*sa_sigaction)(int, siginfo_t *, void*); }; Implementations are free to define additional, possibly non-portable fields. The sa_handler member specifies the address of a function to be called when the process receives the signal. The signal number is passed as an integer argument to this function. The sa_mask member specifies additional signals to be blocked during the execution of the signal handler. sa_mask must be initialized with sigemptyset(3). The sa_flags member specifies some additional flags. sa_sigaction is an alternate signal handler with a different set of parameters. Exactly one of the two handlers, sa_handler or sa_sigaction, must be specified. If sa_sigaction is to be used instead of sa_handler, the SA_SIGINFO flag must be set. Replacement for signal() The sigaction() function provides an interface for reliable signals in replacement of the unreliable signal() function. Signal handlers installed by the signal() interface will be uninstalled immediately prior to execution of the handler. Permanent handlers must therefore be reinstalled by a call to signal() during the handler's execution, causing unreliability in the event a signal of the same type is received during the handler's execution but before the reinstall. Handlers installed by the sigaction() interface can be installed permanently and a custom set of signals can be blocked during the execution of the handler. These signals will be unblocked immediately following the normal termination of the handler (but not in the event of an abnormal termination such as a C++ exception throw.) Use in C++ In C++, the try {/* ... *
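A minimal usage example follows. It installs a handler for SIGINT through sigaction() and uses sa_mask to additionally block SIGTERM while the handler runs; both the choice of signals and the handler body are illustrative. POSIX systems only; the handler is kept trivial because only async-signal-safe operations are permitted inside a handler.

// Installing a reliable SIGINT handler with sigaction() (POSIX systems).
#include <csignal>
#include <cstdio>
#include <unistd.h>

volatile sig_atomic_t got_sigint = 0;

extern "C" void on_sigint(int /*signum*/) {
    got_sigint = 1;  // only async-signal-safe work belongs in a handler
}

int main() {
    struct sigaction sa = {};
    sa.sa_handler = on_sigint;        // use sa_handler (SA_SIGINFO not set)
    sigemptyset(&sa.sa_mask);         // start from an empty blocked set
    sigaddset(&sa.sa_mask, SIGTERM);  // additionally block SIGTERM in handler
    sa.sa_flags = 0;
    if (sigaction(SIGINT, &sa, nullptr) == -1) {
        perror("sigaction");
        return 1;
    }
    while (!got_sigint)
        pause();  // sleep until a signal arrives
    std::puts("caught SIGINT; exiting cleanly");
    return 0;
}

Unlike a handler installed with signal(), this handler stays installed across deliveries, so a second Ctrl-C arriving immediately after the first cannot hit a window where the default disposition has been restored.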
https://en.wikipedia.org/wiki/Practice-based%20research%20network
A practice-based research network (PBRN) is a group of practices devoted principally to the care of patients and affiliated for the purpose of examining the health care processes that occur in practices. PBRNs are characterized by an organizational framework that transcends a single practice or study. They provide a "laboratory" for studying broad populations of patients and care providers in community-based settings. History of primary care research Before there were research institutes or networks of practices, individual practitioners studied their patients' problems with scientific rigor. Among these were five general practitioners who have been recognized for their seminal work during the past 125 years. They are James Mackenzie, Will Pickles, John Fry, F.J.A. Huygen and Curtis G. Hames. Each of these pioneers demonstrated that important new knowledge could be discovered by practicing family physicians. More recently, practicing primary care pediatricians such as Burtis Breese and William Carey contributed a body of knowledge on child health. These doctors all wondered about their patients' problems and they developed a means of gathering and recording data on their patients. Each of these research pioneers provides inspiration for the development of practice-based, primary care research networks because each demonstrated that important new knowledge could be discovered by the practicing primary care physician. They each wondered about their patients, developed means of gathering and recording data, and found collaborators and support from their staff and local communities. Unfortunately, they practiced in an era that was over-committed to specialism. Research focused on molecular mechanisms of disease. The rush to specialization by the medical community and the linking of research to specialists resulted in decades of neglect of primary care and virtually no recognition of the need to investigate care in the primary care setting. Instead, the common wisdom viewed primary care practices as relatively boring places that could be potential sites of application of the fruits of research done elsewhere in research laboratories, hospitals and institutes. Among the early regional networks started in the 1970s were the Dartmouth CO-OP PBRN in New Hampshire, Family Medicine Information System in Colorado (FMIS) and the Cooperative Information Project. These regional networks learned from each other and succeeded in conducting studies focused on what was happening in primary care. They attracted funding from medical schools, national philanthropic foundations and federal programs such as Health for Underserved Rural Areas. As the 1970s closed, these early networks enjoyed sufficient success to stimulate debate about the next steps in the context of the microcomputer's development. Among them was a small group convened by Gene Farley in Denver in 1978 to consider establishing a national sentinel practice system. It was this idea that led to the Amb
https://en.wikipedia.org/wiki/Nooj
Nooj may refer to: NooJ, a computer program for natural language processing Nooj, one of the characters of Final Fantasy X and X-2
https://en.wikipedia.org/wiki/Hammerhead%20Networks
Hammerhead Networks was a computer networking company based in Billerica, Massachusetts. It produced software solutions for the delivery of Internet Protocol service features. History It was founded in April 2000 by Eddie Sullivan, who also served as its CEO. It was acquired by Cisco Systems on May 1, 2002, in a stock transaction worth up to US$173M. Cisco had previously owned a minority interest. It had 85 employees at the time of the acquisition. References Defunct software companies of the United States Networking companies of the United States Cisco Systems acquisitions Software companies based in Massachusetts Companies based in Billerica, Massachusetts Software companies established in 2000 Software companies disestablished in 2002 Defunct companies based in Massachusetts
https://en.wikipedia.org/wiki/Televisi%C3%B3%20de%20Catalunya
Televisió de Catalunya (known by the acronym TVC) is the public broadcasting network of Catalonia, one of the seventeen autonomous communities of Spain. It is part of the Corporació Catalana de Mitjans Audiovisuals, a public corporation created by the Generalitat de Catalunya by a Founding Act in 1983. Slightly more than half of its revenue (52%) comes from public funding through the Generalitat de Catalunya, while the remaining 48% is raised through advertising, sponsorship and merchandise and original productions' sales. It is officially composed of six channels: TV3, TV3 HD, 33/Super3, 3/24, Esport3 and TV3CAT. While the main language of all these channels is Catalan, Spanish is usually neither sub-titled nor dubbed, as it is generally accepted that all Catalan speakers are by default also Spanish speakers. Some programmes such as Polònia and APM use Spanish extensively, largely for effect. In the Aran Valley, there are programs in Aranese. TVC headquarters are located in Sant Joan Despí, near Barcelona. History TV3 started its trial broadcasts on 11 September 1983 on the National Day of Catalonia, but its regular broadcasts started a few months later, on 16 January 1984. TV3 was the first television channel to broadcast only in Catalan. In 1985, TV3 expanded its coverage to Andorra, Northern Catalonia and the Valencian Community. One year later, TV3 inaugurated its new headquarters, a 4.5-hectare facility in Sant Joan Despí, near Barcelona. Since 1987, TV3 has broadcast a second audio channel on almost all foreign-language series and movies with the original programme audio, first using the Zweikanalton system and later using NICAM. During the analogue broadcasting era, series and movies were usually broadcast in NICAM stereo. However, sometimes an audio narration track for blind and visually impaired viewers is also provided. In 1988, TV3 started a decentralization process, first broadcasting programmes in the Aranese language for the Aran Valley and, one year later, opening branch offices in Tarragona, Girona and Lleida and creating the Telenoticies Comarques, a regional news programme broadcast simultaneously in four different editions, one for each of the four Catalan provinces. During the 1992 Summer Olympics, TV3 and TVE created the Olympic Channel, a joint network to provide coverage for the Olympic Games using Canal 33's frequency. In 1999, Televisió de Catalunya started broadcasting in the Digital terrestrial television system, and regular DTT broadcasts started in 2002. Coverage Televisió de Catalunya's terrestrial channels are available in Catalonia, its home region. Thanks to agreements with the neighboring territories, they can be received in the Balearic Islands (TV3CAT, SX3/33 and 3/24 only), Andorra (all channels) and Northern Catalonia (all channels). The agreement with the Balearic Islands is reciprocal, as the Balearic public channel IB3 Global is available in Catalonia as well. Since 1985 and until 2011, TVC's c
https://en.wikipedia.org/wiki/CosmicOS
CosmicOS is a self-contained message designed to be understood primarily by treating it as a computer program and executing it. It is inspired by Hans Freudenthal's Lincos and resembles the programming language Scheme in many ways. The message is written with only four basic symbols: the binary digits one and zero, and open and close brackets. Numbers are represented as a string of binary digits between a pair of brackets, and expressions are represented as a string of numbers between brackets. Identifiers for operations are arbitrarily assigned numbers, and their functions can be defined within the message itself. Self-contained messages are of interest for CETI research, but there is much difference of opinion over the most appropriate encoding and broadcast medium to use. CosmicOS is released in modular form, so that the basic message can be adapted to a particular concrete instantiation. The message is released under the GPL licence. See also Hello world References External links CosmicOS project old CosmicOS project page Interstellar messages Engineered languages Knowledge representation languages
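Based only on the description above, a number such as five would be written as its binary digits between brackets, e.g. (101), and an expression as a run of such numbers between a further pair of brackets. The helper below sketches that surface encoding; it is an extrapolation from this article's description for illustration, not output-compatible with the actual CosmicOS tooling.

// Toy encoder for the bracketed-binary syntax described above.
// Extrapolated for illustration; not the real CosmicOS message generator.
#include <initializer_list>
#include <iostream>
#include <string>

std::string encode_number(unsigned n) {
    std::string bits;
    do {
        bits.insert(bits.begin(), char('0' + (n & 1u)));  // binary digits
        n >>= 1;
    } while (n != 0);
    return "(" + bits + ")";  // a number is digits between brackets
}

// An expression is a string of numbers between brackets.
std::string encode_expression(std::initializer_list<unsigned> nums) {
    std::string out = "(";
    for (unsigned n : nums) out += encode_number(n);
    return out + ")";
}

int main() {
    std::cout << encode_number(5) << '\n';              // prints (101)
    std::cout << encode_expression({1, 2, 3}) << '\n';  // prints ((1)(10)(11))
}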
https://en.wikipedia.org/wiki/Motorway%20Incident%20Detection%20and%20Automatic%20Signalling
Motorway Incident Detection and Automatic Signalling, usually abbreviated to MIDAS, is a UK distributed network of traffic sensors, mainly inductive loops (with radar technology from Wavetronix and magneto-resistive wireless sensors from Clearview Intelligence currently on trial), which are designed to alert the local regional control centre (RCC) to traffic flow and average speeds, and to set variable message signs and advisory speed limits (or mandatory speed limits on smart motorways) with little human intervention. Companies such as RAC, TomTom and Google use this traffic flow data via Highways England's reporting systems. Originally installed on the congested western stretch of the M25 motorway, much of the M60 motorway around Manchester and the Birmingham box (M6, M5 and M42), MIDAS has since been installed on all but the most minor stretches of UK motorway. The system has successfully reduced accidents. Additionally, the system is installed on parts of the non-motorway trunk road network, including the A14. Although all stretches with MIDAS have at least small signals in the central reservation to show advisory speed limits for the whole carriageway, major motorways often also have text variable message signs and, on the busiest stretches, lane control signals above each lane. Additionally, many motorways, called smart motorways, have now been equipped with the newest signs and signals for variable mandatory speed limits and lane control. The system replaced the Automatic Incident Detection (AID) system, which was trialled in 1989 on a section of the M1 motorway. MIDAS was first operated on the M25, in the south-west quadrant, before the section went live with a variable speed limit. By March 2006, National Highways aimed to have MIDAS installed across much more of the English motorway network. See also Electronic Monitoring and Advisory System - a similar type of system in Singapore Freeway Traffic Management System External links Highways Agency: Variable Message Signs (VMS) (PDF) References Automotive safety Traffic signals Road transport in England Traffic signs
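The queue-protection idea can be sketched in a few lines. This is an illustrative simplification, not the real MIDAS algorithm or its thresholds: when loop readings at a detector site show slow traffic, the signals at that site and at the site upstream are set to stepped-down advisory speeds.

import java.util.LinkedHashMap;
import java.util.Map;

public class QueueProtection {
    record LoopReading(String site, double avgSpeedMph) {}

    static Map<String, Integer> advisorySignals(LoopReading[] sites) {
        Map<String, Integer> signals = new LinkedHashMap<>();
        for (int i = 0; i < sites.length; i++) {
            if (sites[i].avgSpeedMph() < 30) {           // queue detected here
                signals.put(sites[i].site(), 40);        // protect the back of the queue
                if (i > 0) signals.putIfAbsent(sites[i - 1].site(), 50); // slow approaching traffic
            }
        }
        return signals;
    }

    public static void main(String[] args) {
        LoopReading[] carriageway = {
            new LoopReading("site A", 64.0),
            new LoopReading("site B", 22.0),             // congested
        };
        System.out.println(advisorySignals(carriageway)); // {site B=40, site A=50}
    }
}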
https://en.wikipedia.org/wiki/Commit
Commit may refer to: Computing Databases Commit (data management), a set of permanent changes in a database COMMIT (SQL), an SQL statement used to create such a changeset Version control Changeset, a list of differences between two successive versions in a repository Commit (version control), the operation of committing such a changeset to the repository Microsoft Windows Commit charge, a concept in operating system-level memory management Others Commit (motion), a parliamentary motion Nicotine replacement therapy, sold under the trade name Commit Commit (card game), a 19th-century American variant of the French card game Comet See also Commitment (disambiguation)
https://en.wikipedia.org/wiki/Sara%20Fagen
Sara Taylor Fagen (born September 15, 1974) is a technology and data entrepreneur, and former staff member in the administration of President George W. Bush. Education and early career Fagen was born on September 15, 1974, in Dubuque, Iowa. She graduated from Wahlert High School, a private, co-educational, Roman Catholic school, in 1992. She then attended Drake University in Des Moines, Iowa. While in college, she was the National Co-Chairman of the College Republicans. She also took a year off, in 1995–1996, to work on Senator Phil Gramm's presidential campaign in Iowa. After graduating from Drake in 1997 with a B.S. in Finance, Fagen worked for two years at the Tarrance Group, a northern Virginia polling firm headed by Ed Goeas. White House career In April 1999, Fagen began working for the presidential campaign of George Bush. Her initial position, through January 2000, was coalitions director for Bush's Iowa caucus campaign. She then did field work in the South Carolina, Virginia, Washington, and Illinois primaries, and finally served as executive director of the Michigan campaign. After Bush was elected, Fagen worked for the White House as an associate political director (Midwest) doing political and public affairs outreach. Fagen became the deputy to Matthew Dowd, Chief Strategist for the Bush-Cheney 2004 re-election campaign. Fagen served as a senior aide and White House Political Director for President George W. Bush, playing a part in the 2004 Bush-Cheney re-election campaign. During the campaign, The Wall Street Journal called her a "data whiz," and said: "As a top strategist for the 2004 Bush-Cheney re-election efforts Fagen helped perfect political micro-targeting." She also served as a senior strategist helping to direct the President's message development, paid media strategy, and opinion research. After Bush's 2004 re-election, Fagen returned to work in the White House, where she served as the director of the White House Office of Political Affairs and deputy assistant to President George W. Bush. She left for the private sector in May 2007. Dismissal of U.S. Attorneys controversy On June 13, 2007, the Senate and House judiciary committees issued a subpoena to Fagen to produce documents and testify before the committee. A subpoena was also issued to Harriet E. Miers, former White House counsel and Supreme Court nominee. In response to the subpoenas, the White House said that its longstanding policy was that no past or present White House officials would be permitted to testify under oath before the panels, and that only private, non-legally-binding, non-transcribed interviews would be permitted. The Democratic chairs of the House and Senate Judiciary Committees said that the White House terms were unacceptable. A ranking member of the Senate Judiciary Committee, Arlen Specter (R-PA), said that the White House had not responded to an April 11, 2007, inquiry by the committee, and he supported the issuance of the subpoena
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare
Windows Live OneCare (previously Windows OneCare Live, codenamed A1) was a computer security and performance enhancement service developed by Microsoft for Windows. A core technology of OneCare was the multi-platform RAV (Reliable Anti-virus), which Microsoft purchased from GeCAD Software Srl in 2003, but subsequently discontinued. The software was available as an annual paid subscription, which could be used on up to three computers. On 18 November 2008, Microsoft announced that Windows Live OneCare would be discontinued on 30 June 2009 and that it would instead offer users a new free anti-malware suite, Microsoft Security Essentials, to be available before then. However, virus definitions and support for OneCare would continue until a subscription expired. In the end-of-life announcement, Microsoft noted that Windows Live OneCare would not be upgraded to work with Windows 7 and would also not work in Windows XP Mode. History Windows Live OneCare entered a beta state in the summer of 2005. The managed beta program was launched before the public beta, and was located on BetaPlace, Microsoft's former beta delivery system. On 31 May 2006, Windows Live OneCare made its official debut in retail stores in the United States. The beta version of Windows Live OneCare 1.5 was released in early October 2006 by Microsoft. Version 1.5 was released to manufacturing on 3 January 2007 and was made available to the public on 30 January 2007. On 4 July 2007, beta testing started for version 2.0, and the final version was released on 16 November 2007. Microsoft acquired Komoku on 20 March 2008 and merged its computer security software into Windows Live OneCare. Windows Live OneCare 2.5 (build 2.5.2900.28) final was released on 3 July 2008. On the same day, Microsoft also released Windows Live OneCare for Server 2.5. Features Windows Live OneCare features integrated anti-virus, personal firewall, and backup utilities, and a tune-up utility with the integrated functionality of Windows Defender for malware protection. A future addition of a registry cleaner was considered but not added because "there are not significant customer advantages to this functionality". Version 2 added features such as multi-PC and home network management, printer sharing support, start-time optimizer, proactive fixes and recommendations, monthly reports, centralized backup, and online photo backup. Windows Live OneCare is built for ease of use and is designed for home users. OneCare also presents a very minimal interface to lessen user confusion and resource use. It adds an icon to the notification area that tells the user at a glance the status of the system's health by using three alert colors: green (good), yellow (fair), and red (at risk). Compatibility Version 1.5 of OneCare is only compatible with the 32-bit versions of Windows XP and Windows Vista. Version 2 of OneCare added 64-bit support for Windows Vista. In version 2.5, Microsoft released Windows Live OneCare for Se
https://en.wikipedia.org/wiki/Anne%20Wheeler
Anne Wheeler, OC (born September 23, 1946), is a Canadian film and television writer, producer, and director. Biography After graduating in Mathematics from the University of Alberta, she worked as a computer programmer before traveling abroad. Her years of travels inspired her to become a storyteller, and when she returned she joined a group of old friends to form a film collective. From 1975 to 1985 she worked for the NFB, where she made her first feature film, A War Story (1981), which was about her father, Ben Wheeler, and his time as a doctor in a P.O.W. camp during World War II. The war is a common theme in her work and she revisited it later in her films Bye Bye Blues (1989) and The War Between Us (1995). Her first non-NFB film was Loyalties in 1986. In addition to her films, Wheeler has directed episodes of Anne with an E, Private Eyes, Strange Empire, The Romeo Section, The Guard, This Is Wonderland, Da Vinci's Inquest, and Cold Squad. Awards and honors Wheeler has been nominated four times for the Genie Award for Best Achievement in Direction for her films Loyalties (1986), Cowboys Don't Cry (1988), Bye Bye Blues (1989), and Suddenly Naked (2001). Her 1998 television miniseries, The Sleep Room, won Gemini awards for best television movie and best direction. In 2017 Wheeler won a Leo Award for Best Direction (Television Film) for the Hallmark movie Stop the Wedding. Wheeler was made an Officer of the Order of Canada in 1995. In 2012 she received the Queen Elizabeth II Diamond Jubilee Medal. Wheeler has also been awarded seven honorary doctorates and is the first woman to be given a Lifetime Achievement Award from the Directors Guild of Canada. Filmography See also List of female film and television directors List of LGBT-related films directed by women References External links Canadian Film Encyclopedia |A publication of The Film Reference Library/a division of the Toronto International Film Festival Group Official web site Anne Wheeler at the Canadian Women Film Directors Database 1946 births Canadian women film directors Canadian television directors Canadian women television directors Living people Officers of the Order of Canada Film directors from Edmonton Writers from Edmonton Canadian women screenwriters Victoria School of Performing and Visual Arts alumni Best Original Song Genie and Canadian Screen Award winners Directors of Genie and Canadian Screen Award winners for Best Short Documentary Film 20th-century Canadian screenwriters 21st-century Canadian screenwriters 20th-century Canadian women writers 21st-century Canadian women writers Canadian women documentary filmmakers
https://en.wikipedia.org/wiki/Account%20aggregation
Account aggregation, sometimes also known as financial data aggregation, is a method that involves compiling information from different accounts, which may include bank accounts, credit card accounts, investment accounts, and other consumer or business accounts, into a single place. This may be provided through connecting via an API to the financial institution, or through "screen scraping", where a user provides the requisite account-access information for an automated system to gather and compile the information into a single page. The security of the account access details as well as the financial information is key to users having confidence in the service. The database either resides in a web-based application or in client-side software. While such services are primarily designed to aggregate financial information, they sometimes also display other things such as the contents of e-mail boxes and news headlines. Account Aggregator System An account aggregator system is a data-sharing system that helps lenders conduct an easy and speedy assessment of a borrower's creditworthiness. Components of Account Aggregator system The account aggregator system essentially has three important components – the Financial Information Provider (FIP), the Financial Information User (FIU), and the Account Aggregator. A Financial Information Provider holds the necessary data about the customer, which it provides to Financial Information Users. The Financial Information Provider can be a bank, a Non-Banking Financial Company (NBFC), a mutual fund, an insurance repository, a pension fund repository, or even a wealth manager. The account aggregator acts as the intermediary, collecting data from FIPs that hold the customer’s financial data and sharing it with FIUs such as lending banks/agencies that provide financial services. History The ideas around account aggregation first emerged in the mid-1990s when banks started releasing Internet banking applications. In the late 1990s, services helped users to manage their money on the Internet (typical desktop alternatives included Microsoft Money, Intuit Quicken, etc.) in an easy-to-use manner, offering functionality like a single password, one-click access to current account data, total net worth and expense analysis. Initial setback One of the first major account aggregation services was Citibank's My Accounts service, though this service ended in late 2005 without explanation from Citibank. Much has been said in the financial services and banking industry as to the benefits of account aggregation – principally the customer and web site loyalty it might generate for providers – but the lack of responsibility and commitment by the providers is one reason for skepticism about committing to those same providers. New applications Account aggregation evolved with single sign-on (SSO) at most major banks such as Bank of America. With SSO (usually implemented via SAML) major financial institutions are now
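A minimal sketch of the API-based flavour of aggregation (the institution names and balances below are made up for illustration): pull accounts from several providers and compile a single net-worth view, roughly what an aggregator presents on its single page.

import java.util.List;
import java.util.Map;

public class Aggregator {
    record Account(String institution, String kind, double balance) {}

    // Stand-in for consent-scoped API calls to each institution (FIP).
    static final Map<String, List<Account>> PROVIDERS = Map.of(
        "First Bank",   List.of(new Account("First Bank", "bank", 2400.00)),
        "Acme Cards",   List.of(new Account("Acme Cards", "card", -310.25)),
        "Delta Invest", List.of(new Account("Delta Invest", "investment", 15800.00)));

    static double netWorth(List<String> providers) {
        // Compile every account into one figure, as the aggregated view would.
        return providers.stream()
                .flatMap(p -> PROVIDERS.getOrDefault(p, List.of()).stream())
                .mapToDouble(Account::balance)
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(netWorth(List.of("First Bank", "Acme Cards", "Delta Invest")));
        // 17889.75
    }
}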
https://en.wikipedia.org/wiki/Access%20Database%20Engine
The Access Database Engine (also Office Access Connectivity Engine or ACE and formerly Microsoft Jet Database Engine, Microsoft JET Engine or simply Jet) is a database engine on which several Microsoft products have been built. The first version of Jet was developed in 1992, consisting of three modules which could be used to manipulate a database. JET stands for Joint Engine Technology. Microsoft Access and Visual Basic use or have used Jet as their underlying database engine. However, it has been superseded for general use, first by Microsoft Desktop Engine (MSDE), then later by SQL Server Express. For larger database needs, Jet databases can be upgraded (or, in Microsoft parlance, "up-sized") to Microsoft's flagship SQL Server database product. A five-billion-record MS Jet (Red) database with compression and encryption turned on requires about one terabyte of disk storage space, typically comprising hundreds of *.mdb files. Architecture Jet, being part of a relational database management system (RDBMS), allows the manipulation of relational databases. It offers a single interface that other software can use to access Microsoft databases and provides support for security, referential integrity, transaction processing, indexing, record and page locking, and data replication. In later versions, the engine has been extended to run SQL queries, store character data in Unicode format, create database views and allow bi-directional replication with Microsoft SQL Server. There are three modules to Jet: One is the Native Jet ISAM Driver, a dynamic link library (DLL) that can directly manipulate Microsoft Access database files (MDB) using a (random access) file system API. Another one of the modules contains the ISAM Drivers, DLLs that allow access to a variety of Indexed Sequential Access Method (ISAM) databases, among them xBase, Paradox, Btrieve and FoxPro, depending on the version of Jet. The final module is the Data Access Objects (DAO) DLL. DAO provides an API that allows programmers to access JET databases using any programming language. Locking Jet allows multiple users to access the database concurrently. To prevent data from being corrupted or invalidated when multiple users try to edit the same record or page of the database, Jet employs a locking policy. Any single user can modify only those database records (that is, items in the database) to which the user has applied a lock, which gives exclusive access to the record until the lock is released. In Jet versions before version 4, a page locking model is used, and in Jet 4, a record locking model is employed. Microsoft databases are organized into data "pages", which are fixed-length (2 kB before Jet 4, 4 kB in Jet 4) data structures. Data is stored in "records" of variable length that may take up less or more than one page. The page locking model works by locking the pages, instead of individual records, which, though less resource-intensive, also means that when a user locks one r
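The difference the page-locking model makes can be shown in a few lines. This is a simplified sketch, not Jet's actual on-disk structures: under page locks, locking one record implicitly locks every record that shares its fixed-length page.

import java.util.HashSet;
import java.util.Set;

public class PageLocks {
    static final int PAGE_SIZE = 4096;          // Jet 4 pages; 2048 before Jet 4
    static final Set<Long> lockedPages = new HashSet<>();

    static long pageOf(long recordOffset) {
        return recordOffset / PAGE_SIZE;        // which fixed-length page holds this byte
    }

    static boolean lockRecord(long recordOffset) {
        // Page-model lock: succeeds only if the record's whole page is free.
        return lockedPages.add(pageOf(recordOffset));
    }

    public static void main(String[] args) {
        System.out.println(lockRecord(5000));   // true  -> page 1 now locked
        System.out.println(lockRecord(6000));   // false -> different record, same 4 kB page
    }
}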
https://en.wikipedia.org/wiki/UNICORE
UNICORE (UNiform Interface to COmputing REsources) is a grid computing technology for resources such as supercomputers or cluster systems and information stored in databases. UNICORE was developed in two projects funded by the German Ministry for Education and Research (BMBF). In European-funded projects UNICORE evolved to a middleware system used at several supercomputer centers. UNICORE has served as a basis for other research projects. The UNICORE technology is open source under the BSD licence and available at SourceForge. History The concept of grid computing was first introduced in the book "The Grid: Blueprint for a New Computing Infrastructure" at the end of 1998. Already in 1997, the development of UNICORE had been initiated for German supercomputer centers as an alternative to the Globus Toolkit. The first prototype was developed in the German UNICORE project, while the foundations for the production version were laid in the follow-up project UNICORE Plus, which ended in 2002. Follow-up European projects extended the functionality and worked towards providing implementations of Open Grid Forum standards. These resulted in the release of UNICORE 6 on 28 August 2007. Architecture UNICORE consists of three layers: a user, server, and target system tier. The user tier is represented by various clients. The primary clients are the UNICORE Rich Client, a graphical user interface based on the Eclipse framework, and the UNICORE command-line client (UCC). The clients use SOAP Web services to communicate with the server tier. XML documents are used to transmit platform- and site-independent descriptions of computational and data-related tasks, resource information, and workflow specifications between client and server. The servers are accessible only via the Secure Sockets Layer protocol. As the single secure entry point to a UNICORE site, the Gateway accepts and authenticates all requests, and forwards them to the target service. A further server, UNICORE/X, is used to access a particular set of Grid resources at a site. UNICORE supports many different system architectures and ensures that each organization retains full control over its resources. UNICORE/X servers may be used to access a supercomputer, a Linux cluster or a single PC. The UNICORE/X server creates concrete target-system-specific actions from the XML job description (Abstract Job Objects, AJO) received from the client. Available UNICORE services include job submission and job management, file access, file transfer (both client-server and server-server), storage operations (mkdir, ls, etc.), and workflow submission and management. The target system tier consists of the Target System Interface (TSI), which directly interfaces with the underlying local operating system and resource management system. Security model The security within UNICORE relies on the usage of permanent X.509 certificates issued by a trusted Certificate Authority (CA). These certificates are used to provide a single sign-on in the UNIC
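As a rough illustration of the client-to-server exchange, the sketch below builds a small platform-independent XML job description of the kind a client submits. The element and attribute names here are invented for illustration; they do not follow the real AJO schema.

public class JobDescription {
    static String makeJob(String executable, int cpus, String... args) {
        StringBuilder xml = new StringBuilder("<Job>");
        xml.append("<Executable>").append(executable).append("</Executable>");
        for (String a : args) xml.append("<Argument>").append(a).append("</Argument>");
        xml.append("<Resources cpus=\"").append(cpus).append("\"/>");
        return xml.append("</Job>").toString();
    }

    public static void main(String[] args) {
        // The client would send this document over TLS to the site's Gateway,
        // which authenticates the request and forwards it to UNICORE/X.
        System.out.println(makeJob("/bin/simulate", 16, "--steps", "1000"));
    }
}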
https://en.wikipedia.org/wiki/Neuroplasticity
Neuroplasticity, also known as neural plasticity, or brain plasticity, is the ability of neural networks in the brain to change through growth and reorganization. It occurs when the brain is rewired to function in some way that differs from how it previously functioned. These changes range from individual neuron pathways making new connections to systematic adjustments like cortical remapping or neural oscillation. Other forms of neuroplasticity include homologous area adaptation, cross-modal reassignment, map expansion, and compensatory masquerade. Examples of neuroplasticity include circuit and network changes that result from learning a new ability, information acquisition, environmental influences, practice, and psychological stress. Neuroplasticity was once thought by neuroscientists to manifest only during childhood, but research in the latter half of the 20th century showed that many aspects of the brain can be altered (or are "plastic") even through adulthood. However, the developing brain exhibits a higher degree of plasticity than the adult brain. Activity-dependent plasticity can have significant implications for healthy development, learning, memory, and recovery from brain damage. History Origin The term plasticity was first applied to behavior in 1890 by William James in The Principles of Psychology, where it was used to describe "a structure weak enough to yield to an influence, but strong enough not to yield all at once". The first person to use the term neural plasticity appears to have been the Polish neuroscientist Jerzy Konorski. One of the first experiments providing evidence for the neuroplasticity phenomenon was conducted in 1793 by the Italian anatomist Michele Vicenzo Malacarne, who described experiments in which he paired animals, trained one of the pair extensively for years, and then dissected both. Malacarne discovered that the cerebellums of the trained animals were substantially larger than those of the untrained animals. However, while these findings were significant, they were eventually forgotten. In 1890, the idea that the brain and its function are not fixed throughout adulthood was proposed by William James in The Principles of Psychology, though the idea was largely neglected. Up until the 1970s, neuroscientists believed that the brain's structure and function were essentially fixed throughout adulthood. While the brain was commonly understood as a nonrenewable organ in the early 1900s, Santiago Ramón y Cajal, the father of neuroscience, used the term neuronal plasticity to describe nonpathological changes in the structure of adult brains. Based on his renowned neuron doctrine, Cajal first described the neuron as the fundamental unit of the nervous system, which later served as an essential foundation for developing the concept of neural plasticity. He used the term plasticity in reference to his work on findings of degeneration and regeneration in the central nervous system after a person had reached adul
https://en.wikipedia.org/wiki/PGPCoder
PGPCoder or GPCode is a trojan that encrypts files on the infected computer and then asks for a ransom in order to release these files, a type of behavior dubbed ransomware or cryptovirology. Trojan Once installed on a computer, the trojan creates two registry keys: one to ensure it is run on every system startup, and the second to monitor the progress of the trojan in the infected computer, counting the number of files that have been analyzed by the malicious code. Once it has been run, the trojan embarks on its mission, which is to encrypt, using a digital encryption key, all the files it finds on computer drives with extensions corresponding to those listed in its code. These extensions include .doc, .html, .jpg, .xls, .zip, and .rar. The blackmail is completed by the trojan dropping a text file in each directory, with instructions telling the victim what to do. An email address is supplied through which users are supposed to request the release of their files after paying a ransom of $100–200 to an e-gold or Liberty Reserve account. Efforts to combat the trojan While a few Gpcode variants have been successfully implemented, many variants have flaws that allow users to recover data without paying the ransom fee. The first versions of Gpcode used a custom-written encryption routine that was easily broken. Variant Gpcode.ak writes the encrypted file to a new location and deletes the unencrypted file, which allows an undeletion utility to recover some of the files. Once some encrypted+unencrypted pairs have been found, this sometimes gives enough information to decrypt other files. Variant Gpcode.am uses symmetric encryption, which made key recovery very easy. In late November 2010, a new version called Gpcode.ax was reported. It uses stronger encryption (RSA-1024 and AES-256) and physically overwrites the deleted original file, making recovery nearly impossible. Kaspersky Lab has been able to make contact with the author of the program, and verify that the individual is the real author, but has so far been unable to determine his real-world identity. References External links Kaspersky Lab Kaspersky Lab blog posts Kaspersky Lab forum dedicated to GPCode Kaspersky Lab virus descriptions StopGPCode trojan removal utilities Other virus description databases F-Secure Symantec McAfee: GPCoder GPCoder.e GPCoder.f GPCoder.g GPCoder.h GPCoder.i Trend Micro: TROJ_PGPCODER.A TROJ_PGPCODER.B TROJ_PGPCODER.C TROJ_PGPCODER.D TROJ_PGPCODER.E TROJ_PGPCODER.F TROJ_PGPCODER.G ThreatExpert Windows trojans Ransomware
https://en.wikipedia.org/wiki/JRuby
JRuby is an implementation of the Ruby programming language atop the Java Virtual Machine, written largely in Java. It is free software released under a three-way EPL/GPL/LGPL license. JRuby is tightly integrated with Java to allow the embedding of the interpreter into any Java application with full two-way access between the Java and the Ruby code (similar to Jython for the Python language). JRuby's lead developers are Charles Oliver Nutter and Thomas Enebo, with many current and past contributors including Ola Bini and Nick Sieger. In September 2006, Sun Microsystems hired Enebo and Nutter to work on JRuby full-time. In June 2007, ThoughtWorks hired Ola Bini to work on Ruby and JRuby. In July 2009, the JRuby developers left Sun to continue JRuby development at Engine Yard. In May 2012, Nutter and Enebo left Engine Yard to work on JRuby at Red Hat. History JRuby was originally created by Jan Arne Petersen in 2001. At that time and for several years following, the code was a direct port of the Ruby 1.6 C code. With the release of Ruby 1.8.6, an effort began to update JRuby to 1.8.6 features and semantics. Since 2001, several contributors have assisted the project, leading to the current core team of around six members. JRuby 1.1 added just-in-time compilation and ahead-of-time compilation modes to JRuby and was already faster in most cases than the then-current Ruby 1.8.7 reference implementation. JRuby packages are available for most platforms; Fedora 9 was among the first to include it as a standard package, at JRuby 1.1.1. In July 2009, the core JRuby developers at Sun Microsystems, Charles Oliver Nutter, Thomas Enebo and Nick Sieger, joined Engine Yard to continue JRuby development. In May 2012, Nutter and Enebo left Engine Yard to work on JRuby at Red Hat. JRuby has supported compatibility with Ruby MRI versions 1.6 through 1.9.3. JRuby 1.0 supported Ruby 1.8.6, with JRuby 1.4.0 updating that compatibility to Ruby 1.8.7. JRuby 1.6.0 added simultaneous support for Ruby 1.9.2, with JRuby 1.7.0 making Ruby 1.9.3 the default execution mode (Ruby 1.8.7 compatibility is available via a command-line flag). JRuby 9.0.0.0 added support for Ruby 2.2. The current version of JRuby (9.4.3.0) targets Ruby 3.1, though some 3.1 features are still in progress. Ruby on Rails JRuby has been able to run the Ruby on Rails web framework since version 0.9 (May 2006), with the ability to execute RubyGems and WEBrick. Since the hiring of the two lead developers by Sun, Rails compatibility and speed have improved greatly. JRuby version 1.0 successfully passed nearly all of Rails's own test cases. Since then, developers have begun to use JRuby for Rails applications in production environments. Multiple virtual machine collaboration On February 27, 2008, Sun Microsystems and the University of Tokyo announced a joint-research project to implement a virtual machine capable of executing more than one Ruby or JRuby application on one interpreter. Dynamic invocat
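The embedding described above can be shown with the JRuby Embed (Red Bridge) API. A minimal sketch, assuming the jruby jar is on the classpath:

import org.jruby.embed.ScriptingContainer;

public class EmbedRuby {
    public static void main(String[] args) {
        ScriptingContainer container = new ScriptingContainer();
        // Two-way access: pass a Java value into the Ruby side...
        container.put("greeting", "hello from Java");
        // ...then evaluate Ruby code that uses it and return the result to Java.
        Object result = container.runScriptlet("greeting.upcase + ', says Ruby'");
        System.out.println(result);  // HELLO FROM JAVA, says Ruby
    }
}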
https://en.wikipedia.org/wiki/IBM%201360
The IBM 1360 Photo-Digital Storage System, or PDSS, was an online archival storage system for large data centers. It was the first storage device designed from the start to hold a terabit of data (128 GB). The 1360 stored data on index card sized pieces of stiff photographic film that were individually retrieved and read, and could be updated by copying data, with changes, to a new card. Only six PDSSs were constructed, including the prototype, and IBM abandoned the film-card system and moved on to other storage systems soon after. Only one similar commercial system seems to have been developed, the Foto-Mem FM 390, from the late 1960s. History Walnut In the mid-1950s IBM's San Jose lab was contracted by the CIA to provide a system to retrieve vast numbers of printed documents. The lab was interested in using a new type of photographic film known as Kalvar. Kalvar was developed to make copies of existing microfilm stock, simply by placing the Kalvar and original together, exposing them to ultraviolet light, and then heating the Kalvar to develop it. This could be carried out in a continuous roll-to-roll process. IBM's proposal, code named "Walnut", was a mechanical system that would automate the process of copying materials in the store using the Kalvar film. To further develop the system, in January 1958 IBM hired Jack Kuehler to head up a team exploring Kalvar-based films. He quickly concluded that Kalvar was not stable enough to store data with the sort of reliability IBM demanded, breaking down over a period of a few years and giving off corrosive gas while it did so. Kalvar is based on a diazo film and Kuehler was able to identify a similar film that would provide the reliability required, although at the price of needing to be developed in a wet lab process. He proposed a new version of Walnut that replaced the Kalvar developer with an automated diazo film developer system that developed the film in a few minutes. He was able to convince the CIA to accept this change, and the new version was announced in 1961 and delivered the next year. The primary element in a Walnut system was a large cylindrical carousel called the document store. Each store contained 200 small boxes IBM referred to as cells, in keeping with earlier magnetic tape-based systems. Each cell contained 50 strips of film, each of these containing 99 photographs arranged in a 3 by 33 grid. In total, each document store contained images of 990,000 documents, and up to 100 document stores could be used in a single Walnut system, for a total storage of 99,000,000 pages. A separate system was used to access pages from the Walnut system. Users would look up keywords stored on an IBM 1405 hard disk system, identifying individual documents to be retrieved. The machine produced punched cards that were inserted into the Walnut. The Walnut system retrieved the documents, copied them onto a film strip and developed it, and then inserted four such images into an aperture card. The c
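The Walnut capacity figures quoted above multiply out exactly, which a couple of lines can check:

public class WalnutCapacity {
    public static void main(String[] args) {
        int imagesPerStrip = 3 * 33;                  // photographs in a 3-by-33 grid
        int imagesPerCell = 50 * imagesPerStrip;      // 50 film strips per cell
        int imagesPerStore = 200 * imagesPerCell;     // 200 cells per document store
        long imagesPerSystem = 100L * imagesPerStore; // up to 100 stores per system
        System.out.println(imagesPerStore);           // 990000
        System.out.println(imagesPerSystem);          // 99000000
    }
}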
https://en.wikipedia.org/wiki/Apocalypse%20%281990%20video%20game%29
Apocalypse is a futuristic 3D space shoot 'em up game released in 1990 for the Acorn Archimedes, written by Gordon J. Key and published by The Fourth Dimension. Plot Sometime in the future, computers have evolved into sentient, mobile life-forms known as 'Rakonans'. They then proceed to conquer numerous planets, depleting the natural resources until nothing is left, and then swarming in a locust-like fashion to the next planet. As a consequence, humans enter into conflict with the Rakonans in order to survive. The game sees the player acting as the pilot of a Llanerk (a type of assault aircraft in the form of a flying saucer) for the 'Royal Guild of Spacing'. During the course of the game, nine planets must be 'sterilised' by removing a set number of Rakonan units. Apocalypse is notable for the extremely high review scores awarded by The Micro User, and was only the second game on the Archimedes to feature fast, real-time true 3D polygon graphics (the first being David Braben's Zarch (1988), published by Superior Software). 1990 video games Acorn Archimedes games The Fourth Dimension (company) games Shoot 'em ups Single-player video games Video games developed in the United Kingdom
https://en.wikipedia.org/wiki/DMI
DMI may refer to: Organizations Danish Meteorological Institute Data Management Inc., a time-and-attendance software company Dead Man Incorporated, a predominantly white prison gang formed in Maryland Development Media International, an organization that runs media campaigns to promote healthy behavior Dhulikhel Medical Institute, in Nepal Digital Management, Inc., a provider of mobile enterprise, intelligence, and cybersecurity solutions and services Digital Manga, Inc. Drum Major Institute, a non-profit American progressive think tank and community action group Dubai Media Incorporated, owned by the government of the Emirate of Dubai Dunder Mifflin, a fictional paper company on the American television show The Office Science and technology Deferred Maintenance Item, an aviation concept Desktop Management Interface, a computer-software framework for managing components Digital Media Initiative, a cancelled technology project run by the BBC from 2008 to 2013 Direct manipulation interface, a style of human-computer interaction Direct Media Interface, an interconnection between the CPU and the southbridge on Intel motherboards Dry matter intake, an animal's feed intake excluding its water content Dzyaloshinskii-Moriya interaction, an interaction between neighboring magnetic spins 1,3-Dimethyl-2-imidazolidinone, in chemistry, an aprotic solvent Other uses Des Moines, Iowa Directorate of Military Intelligence (United Kingdom), a department of the British War Office until 1964 Dominica, UNDP country code
https://en.wikipedia.org/wiki/Variable%20data%20printing
Variable data printing (VDP) (also known as variable information printing (VIP) or variable imaging (VI)) is a form of digital printing, including on-demand printing, in which elements such as text, graphics and images may be changed from one printed piece to the next, without stopping or slowing down the printing process, using information from a database or external file. For example, a set of personalized letters, each with the same basic layout, can be printed with a different name and address on each letter. Variable data printing is mainly used for direct marketing, customer relationship management, advertising, invoicing and applying addresses to self-mailers, brochures or postcard campaigns. Variable data printing: Customization and Operational Methodologies VDP is a direct outgrowth of digital printing, which harnesses computer databases, digital print devices and highly effective software to create high-quality, full-color documents with a look and feel comparable to conventional offset printing. Variable data printing enables the mass customization of documents via digital print technology, as opposed to the 'mass production' of a single document using offset lithography. Instead of producing 10,000 copies of a single document, delivering a single message to 10,000 customers, variable data printing could print 10,000 unique documents with customized messages for each customer. There are several levels of variable printing. The most basic level involves changing the salutation or name on each copy, much like mail merge. More complicated variable data printing uses 'versioning', where there may be differing amounts of customization for different markets, with text and images changing for groups of addresses based upon which segment of the market is being addressed. Finally, there is full variability printing, where the text and images can be altered for each individual address. All variable data printing begins with a basic design that defines static elements and variable fields for the pieces to be printed. While the static elements appear exactly the same on each piece, the variable fields are filled in with text or images as dictated by a set of application and style rules and the information contained in the database. There are three main operational methodologies for variable data printing. In one methodology, a static document is loaded into printer memory. The printer is instructed, through the print driver or raster image processor (RIP), to always print the static document when sending any page out to the printer driver or RIP. Variable data can then be printed on top of the static document. This methodology is the simplest way to execute VDP; however, its capability is less than that of a typical mail merge. A second methodology is to combine the static and variable elements into print files, prior to printing, using standard software. This produces a conventional (and potentially huge) print file with every ima
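The most basic level is essentially a mail merge, which a few lines make concrete (the template syntax and field names here are invented for illustration): a static layout with variable fields filled from database records, one printed piece per record.

import java.util.List;
import java.util.Map;

public class MailMerge {
    static String merge(String template, Map<String, String> record) {
        String out = template;
        // Replace each {field} placeholder with the record's value.
        for (var field : record.entrySet())
            out = out.replace("{" + field.getKey() + "}", field.getValue());
        return out;
    }

    public static void main(String[] args) {
        String template = "Dear {name}, your nearest branch is {branch}.";
        List<Map<String, String>> database = List.of(
            Map.of("name", "A. Jones", "branch", "Leeds"),
            Map.of("name", "B. Patel", "branch", "Cardiff"));
        // Static elements stay identical; only the variable fields change.
        database.forEach(row -> System.out.println(merge(template, row)));
    }
}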
https://en.wikipedia.org/wiki/Dependent%20type
In computer science and logic, a dependent type is a type whose definition depends on a value. It is an overlapping feature of type theory and type systems. In intuitionistic type theory, dependent types are used to encode logic's quantifiers like "for all" and "there exists". In functional programming languages like Agda, ATS, Coq, F*, Epigram, and Idris, dependent types help reduce bugs by enabling the programmer to assign types that further restrain the set of possible implementations. Two common examples of dependent types are dependent functions and dependent pairs. The return type of a dependent function may depend on the value (not just type) of one of its arguments. For instance, a function that takes a positive integer n may return an array of length n, where the array length is part of the type of the array. (Note that this is different from polymorphism and generic programming, both of which include the type as an argument.) A dependent pair may have a second value the type of which depends on the first value. Sticking with the array example, a dependent pair may be used to pair an array with its length in a type-safe way. Dependent types add complexity to a type system. Deciding the equality of dependent types in a program may require computations. If arbitrary values are allowed in dependent types, then deciding type equality may involve deciding whether two arbitrary programs produce the same result; hence the decidability of type checking may depend on the given type theory's semantics of equality, that is, whether the type theory is intensional or extensional. History In 1934, Haskell Curry noticed that the types used in typed lambda calculus, and in its combinatory logic counterpart, followed the same pattern as axioms in propositional logic. Going further, for every proof in the logic, there was a matching function (term) in the programming language. One of Curry's examples was the correspondence between simply typed lambda calculus and intuitionistic logic. Predicate logic is an extension of propositional logic, adding quantifiers. Howard and de Bruijn extended lambda calculus to match this more powerful logic by creating types for dependent functions, which correspond to "for all", and dependent pairs, which correspond to "there exists". (Because of this and other work by Howard, propositions-as-types is known as the Curry–Howard correspondence.) Formal definition Π type Loosely speaking, dependent types are similar to the type of an indexed family of sets. More formally, given a type A in a universe of types U, one may have a family of types B : A → U, which assigns to each term a : A a type B(a). We say that the type B(a) varies with a. A function whose type of return value varies with its argument (i.e. there is no fixed codomain) is a dependent function and the type of this function is called a dependent product type, pi-type (Π type) or dependent function type. From a family of types B : A → U we may construct the type of dependent functions Π(x:A) B(x), whose t
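A compact restatement of the two constructions in standard notation, consistent with the definitions above (Vec is an assumed name for the family of length-indexed arrays):

% Dependent function and dependent pair types over a family B : A -> U.
\[
  \prod_{x : A} B(x)
  \qquad \text{dependent function type: the codomain } B(x) \text{ varies with the argument } x,
\]
\[
  \sum_{x : A} B(x)
  \qquad \text{dependent pair type: the second component's type varies with the first component.}
\]
% Array example: taking Vec(n) to be "arrays of length n", a function
%   zeros : \prod_{n : \mathbb{N}} Vec(n)
% returns an array whose length is fixed by its argument, while
%   \sum_{n : \mathbb{N}} Vec(n)
% pairs an array with its length in a type-safe way.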
https://en.wikipedia.org/wiki/Nodezilla
Nodezilla is peer-to-peer network software written in C++ (the core, also known as the Network Agent) and Java (the GUI); the GUI part is released under the GNU General Public License. It attempts to provide anonymity. Features Technically, Nodezilla is a secured, distributed and fault-tolerant routing system (also known as a grid network). Its main purpose is to serve as a link for distributed services built on top of it (like chat, efficient video multicast streaming, file sharing, secured file store ...). Nodezilla provides cache features; any server may create a local replica of any data object. These local replicas provide faster access and robustness to network partitions. They also reduce network congestion by localizing access traffic. It is assumed that any server in the infrastructure may crash, leak information, or become compromised; therefore, in order to ensure data protection, redundancy and TLS encryption are used. As the developers have not published the source code of the Network Agent yet, no independent validation of the provided level of anonymity has been performed. It currently offers three services: anonymous file sharing, hierarchical multimedia streaming, and digital photo sharing with selected friends. The project also offers a plugin for Vuze, the popular BitTorrent client, enabling users to publish and distribute .torrent files without index or tracker web pages. See also Anonymous P2P References Anonymous file sharing networks File sharing software
https://en.wikipedia.org/wiki/CCVS
CCVS, or Credit Card Verification System, was a credit card processing system designed for POSIX-based operating systems, including Unix and Linux, with a version for Palm OS shown at trade shows. It was originally sold by Hell's Kitchen Systems, Inc. from 1997 onward and was acquired along with the company by Red Hat in January 2000. In 2002, Red Hat decided to exit the eCommerce market as an ISV, discontinued support for CCVS, and recommended that customers transition to MCVE. The CCVS API supported use under PHP, Java, Perl, Tcl, and C to allow merchants to communicate directly with the credit card clearing house instead of using Internet-based intermediaries. Along with Red Hat's decision to discontinue support for this extension, it was also removed from PHP and has not been available since version 4.3.0. An alternative to CCVS is MCVE. External links PHP Manual Entry on CCVS PHP Manual Entry on MCVE Red Hat Manual on CCVS Main Street Softworks Credit card terminology
https://en.wikipedia.org/wiki/List%20of%20radio%20stations%20in%20Rhode%20Island
The following is a list of the FCC-licensed radio stations in the U.S. state of Rhode Island, which can be sorted by their call signs, frequencies, cities of license, licensees, and programming formats. List of radio stations Defunct WALE WFCI WJAR-FM WKFD WRJI References Rhode Island Radio
https://en.wikipedia.org/wiki/IEC%2062056
IEC 62056 is a set of standards for electricity metering data exchange published by the International Electrotechnical Commission. The IEC 62056 standards are the international standard versions of the DLMS/COSEM specification. DLMS, or Device Language Message Specification (originally Distribution Line Message Specification), is the suite of standards developed and maintained by the DLMS User Association (DLMS UA); it has been adopted by IEC TC13 WG14 into the IEC 62056 series of standards. The DLMS User Association maintains a D-type liaison with IEC TC13 WG14, which is responsible for international standards for meter data exchange and for establishing the IEC 62056 series. In this role, the DLMS UA provides maintenance, registration and compliance certification services for IEC 62056 DLMS/COSEM. COSEM, or Companion Specification for Energy Metering, includes a set of specifications that defines the transport and application layers of the DLMS protocol. The DLMS User Association organizes the protocols into a set of four specification documents, namely the Green Book, Yellow Book, Blue Book and White Book. The Blue Book describes the COSEM meter object model and the OBIS object identification system; the Green Book describes the architecture and protocols; the Yellow Book treats all the questions concerning conformance testing; the White Book contains the glossary of terms. If a product passes the conformance test specified in the Yellow Book, a certification of DLMS/COSEM compliance is issued by the DLMS UA. IEC TC13 WG14 groups the DLMS specifications under the common heading "Electricity metering data exchange - The DLMS/COSEM suite". The DLMS/COSEM protocol is not specific to electricity metering; it is also used for gas, water and heat metering. Standards IEC 62056-1-0:2014 Smart metering standardisation framework IEC 62056-3-1:2013 Use of local area networks on twisted pair with carrier signalling IEC 62056-4-7:2014 DLMS/COSEM transport layer for IP networks IEC 62056-5-3:2017 DLMS/COSEM application layer IEC 62056-6-1:2017 Object Identification System (OBIS) IEC 62056-6-2:2017 COSEM interface classes IEC 62056-6-9:2016 Mapping between the Common Information Model message profiles (IEC 61968-9) and DLMS/COSEM (IEC 62056) data models and protocols IEC 62056-7-3:2017 Wired and wireless M-Bus communication profiles for local and neighbourhood networks IEC 62056-7-5:2016 Local data transmission profiles for Local Networks (LN) IEC 62056-7-6:2013 The three-layer, connection-oriented HDLC based communication profile IEC 62056-8-3:2013 Communication profile for PLC S-FSK neighbourhood networks IEC 62056-8-5:2017 Narrow-band OFDM G3-PLC communication profile for neighbourhood networks IEC 62056-8-6:2017 High speed PLC ISO/IEC 12139-1 profile for neighbourhood networks IEC TS 62056-8-20:2016 Mesh communication profile for neighbourhood networks IEC TS 62056-9-1:2016 Communication profile using web-services to access a DLMS/COSEM server via a COSEM Access Service (CA
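The OBIS system mentioned above identifies each data item by six value groups, A through F, conventionally written "A-B:C.D.E*F"; for instance, 1-0:1.8.0*255 is a widely used electricity code (group A = 1 selects electricity) for total imported active energy. A small parser sketch:

public class Obis {
    record ObisCode(int a, int b, int c, int d, int e, int f) {}

    static ObisCode parse(String s) {
        // Split the "A-B:C.D.E*F" rendering into its six value groups.
        String[] p = s.split("[-:.*]");
        return new ObisCode(Integer.parseInt(p[0]), Integer.parseInt(p[1]),
                Integer.parseInt(p[2]), Integer.parseInt(p[3]),
                Integer.parseInt(p[4]), Integer.parseInt(p[5]));
    }

    public static void main(String[] args) {
        System.out.println(parse("1-0:1.8.0*255"));
        // ObisCode[a=1, b=0, c=1, d=8, e=0, f=255]
    }
}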
https://en.wikipedia.org/wiki/OpenNTPD
OpenNTPD (also known as OpenBSD NTP Daemon) is a Unix daemon implementing the Network Time Protocol to synchronize the local clock of a computer system with remote NTP servers. It is also able to act as an NTP server for NTP-compatible clients. The OpenBSD NTP Daemon was initially developed by Alexander Guy and Henning Brauer as part of the OpenBSD project, with further help from many authors. Its design goals include being secure (non-exploitable), easy to configure, and accurate enough for most purposes. Its portable version, like that of OpenSSH, is developed as a child project which adds the portability code to the OpenBSD version and releases it separately. The portable version is developed by Brent Cook. The project developers receive some funding from the OpenBSD Foundation. History The development of OpenNTPD was motivated by a combination of issues with existing NTP daemons: difficult configuration, complicated and difficult-to-audit code, and unsuitable licensing. OpenNTPD was designed to solve these problems and make time synchronization accessible to a wider userbase. After a period of development, OpenNTPD first appeared in OpenBSD 3.6. Its first release was announced on 2 November 2004. Goals OpenNTPD is an attempt by the OpenBSD team to produce an NTP daemon implementation that is secure, simple to audit, trivial to set up and administer, reasonably accurate, and light on system resources. As such, the design goals for OpenNTPD are security, ease of use, and performance. Security in OpenNTPD is achieved by robust validity checks in the network input path, use of bounded buffer operations via strlcpy, and privilege separation to mitigate the effects of possible security bugs exploiting the daemon through privilege escalation. In order to simplify the use of NTP, OpenNTPD implements a smaller set of functionalities than those available in other NTP daemons, such as that provided by the Network Time Protocol Project. The objective is to provide enough features to satisfy typical usage, at the risk of unsuitability for esoteric or niche requirements. OpenNTPD is configured through the configuration file ntpd.conf. A minimal number of options are offered: the IP address or hostname on which OpenNTPD should listen, a timedelta sensor device to be used, and the set of servers from which the time will be synchronized. The accuracy of OpenNTPD is best-effort; the daemon attempts to be as accurate as possible but no specific accuracy is guaranteed. Example OpenNTPD gradually adjusts the system clock, as seen here in the output of OpenNTPD running on a Linux system: $ grep ntpd /var/log/daemon.log | grep adjusting Aug 4 03:32:20 nikolai ntpd[4784]: adjusting local clock by -1.162333s Aug 4 03:36:08 nikolai ntpd[4784]: adjusting local clock by -1.023899s Aug 4 03:40:02 nikolai ntpd[4784]: adjusting local clock by -0.902637s Aug 4 03:43:43 nikolai ntpd[4784]: adjusting local clock by -0.789431s Aug 4 03:47:35 nikolai ntpd[4784]: adjusting local
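For reference, a minimal ntpd.conf covering the three option families just described might look like the following (the server pool name is a placeholder; ntpd.conf(5) is the authoritative reference for the syntax):

listen on *            # addresses on which to answer NTP client queries
sensor *               # use any timedelta sensor device the system provides
servers pool.ntp.org   # remote hosts to synchronize the local clock from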
https://en.wikipedia.org/wiki/Elan%20Graphics
Elan Graphics is a computer graphics architecture for Silicon Graphics computer workstations. Elan Graphics was developed in 1991 and was available as a high-end graphics option on workstations released during the mid-1990s as part of the Express Graphics architecture family. Elan Graphics gives the workstation real-time 2D and 3D graphics rendering capability similar to that of even high-end PCs made over ten years after Elan's introduction, with the exception of texture mapping, which had to be performed in software. The Elan graphics option for the Silicon Graphics Indigo consists of four GE7 Geometry Engines, capable of a combined 128 MFLOPS, and one RE3 Raster Engine. Together, they are capable of rendering 180K Z-buffered, lit, Gouraud-shaded triangles per second. The framebuffer stores 56 bits per pixel, of which 12 bits per pixel (dithered RGB 4/4/4) are used for a double-buffered, depth-buffered RGB layout. When double-buffering is not required, it is possible to run in full 24-bit color. Similarly, when Z-buffering is not required, a double-buffered 24-bit RGB framebuffer configuration is possible. The Elan Graphics system also implemented hardware stencil buffering by allocating 4 bits from the Z-buffer to produce a combined 20-bit Z, 4-bit stencil buffer. Elan Graphics consists of five graphics subsystems: the HQ2 Command Engine, the GE7 Geometry Subsystem, the RE3 Raster Engine, the VM2 framebuffer and the VC1 Display Subsystem. Elan Graphics can produce resolutions up to 1280 x 1024 pixels with 24-bit color and can also process unencoded NTSC and PAL analog television signals. The Elan Graphics system is made up of five daughterboards that plug into the main workstation motherboard. The Elan Graphics architecture was superseded by SGI's Extreme Graphics architecture on Indigo2 models and eventually by the IMPACT graphics architecture in 1995. Features Subpixel positioning Advanced lighting models: Multiple colored light sources (up to 8) Ambient, diffuse, and specular lighting models Phong lighting Spotlights Local and infinite light source positioning Two-sided lighting Anti-aliased lines and points Full scene anti-aliasing Atmospheric effects Sphere rendering Pixel-blending capabilities for transparency effects Soft shadows and depth-of-field Texture-mapping Multimode windowing environment X11 drawing primitives and pixel move operations Non-Uniform Rational B-Spline (NURBS) surfaces References External links ElanTR - Elan Graphics Technical Report Graphics chips SGI graphics
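A quick arithmetic check using the figures above: the storage needed to keep 56 bits for every pixel at the maximum 1280 x 1024 resolution.

public class ElanFramebuffer {
    public static void main(String[] args) {
        long bits = 1280L * 1024 * 56;          // pixels times bits per pixel
        double mib = bits / 8.0 / (1 << 20);    // bits -> bytes -> MiB
        System.out.printf("%.2f MiB%n", mib);   // 8.75 MiB
    }
}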