https://en.wikipedia.org/wiki/Audio%20compression
Audio compression may refer to:
Audio compression (data), a type of lossy or lossless compression in which the amount of data in a recorded waveform is reduced for transmission, with or without some loss of quality; used in CD and MP3 encoding, Internet radio, and the like
Dynamic range compression, also called audio level compression, in which the dynamic range (the difference between loud and quiet parts) of an audio waveform is reduced
https://en.wikipedia.org/wiki/SkyOS
SkyOS (Sky Operating System) is a discontinued prototype commercial, proprietary, graphical desktop operating system written for the x86 computer architecture. As of January 30, 2009, development was halted with no plans to resume it. In August 2013, developer Robert Szeleney announced the release of a public beta on the SkyOS website, allowing users to download a Live CD of SkyOS for testing and, optionally, to install the system. History Development started in 1996, with the first version released in December 1997. Up until version 4.x the OS was freely available. Starting with the beta development of SkyOS 5 in 2003, users were required to pay US$30 for access to beta releases. SkyOS adopted a new filesystem, SkyFS, based on OpenBFS in 2004, and its graphics subsystem was improved in 2006 with support for desktop compositing, including double buffering and transparency. The OS also moved to ELF binaries at that time. The last beta, build 6947, was released in August 2008, after which there was no status update for several months. As the OS was mainly the work of one man, Robert Szeleney, it became increasingly difficult to add new device drivers. Given this lack of progress, the tech press viewed going open source as the best option for SkyOS. Although Szeleney tried to work around the lack of drivers by using a new kernel based on Linux or NetBSD, and reported some progress in this regard, development has not resumed. The SkyOS website disappeared in 2013, and the final public build from August 2008 was released for free shortly thereafter. Features Kernel SkyOS is a Unix-like operating system with a monolithic kernel. The OS supports multiple users and symmetric multiprocessing. Graphics and GUI SkyOS has an integrated graphics subsystem with support for desktop compositing, including double buffering and transparency. The SkyOS GUI also allows system-wide mouse gestures. SkyFS SkyFS is a fork of the OpenBFS filesystem. 
SkyOS can also be run from the following filesystems: FAT32/FAT16/FAT12 and ISO 9660. Fast searching SkyOS offers real-time file content query searches with multiple keywords (comparable to Beagle on Linux or Spotlight in macOS), including indexing of files and programs. Porting applications Most command-line applications written to be compiled with the GNU Toolchain can be ported to SkyOS with little or no modification. SkyOS contains several frameworks for creating applications (including a Mono port). Ported applications include Mozilla Firefox, Mozilla Thunderbird, Nvu, GIMP and AbiWord. There was also a monetary incentive for porting applications, as the SkyOS community voted for desired programs and then supported developers with donations. Reception Although SkyOS includes many interesting features, limited application and hardware support are among its weaknesses (e.g. only a few graphics cards allow 2D acceleration). Kernel and driver updates were solely worked on by
https://en.wikipedia.org/wiki/Open%20file%20format
An open file format is a file format for storing digital data, defined by an openly published specification usually maintained by a standards organization, which can be used and implemented by anyone. An open file format is licensed with an open license. For example, an open format can be implemented by both proprietary and free and open-source software, using the typical software licenses used by each. In contrast to open file formats, closed file formats are considered trade secrets. Depending on the definition, the specification of an open format may require a fee to access or, very rarely, contain other restrictions. The range of meanings is similar to that of the term open standard. Specific definitions Sun Microsystems Sun Microsystems defined the criteria for open formats as follows:
The format is based on an underlying open standard
The format is developed through a publicly visible, community-driven process
The format is affirmed and maintained by a vendor-independent standards organization
The format is fully documented and publicly available
The format does not contain proprietary extensions
UK government In 2012 the UK government created the Open Standards Principles policy, stating that the principles apply to every aspect of government IT and that government technology must remain open to everyone. It sets out seven principles for selecting open standards for use in government; following these principles, many open formats were adopted, notably the Open Document Format (ODF). 
The seven principles for selecting open standards for use in the UK government are:
Open standards must meet user needs
Open standards must give suppliers equal access to government contracts
Open standards must support flexibility and change
Open standards must support sustainable cost
Select open standards using well-informed decisions
Select open standards using fair and transparent processes
Specify and implement open standards using fair and transparent processes
US government Within the framework of the Open Government Initiative, the federal government of the United States adopted the Open Government Directive, according to which: "An open format is one that is platform independent, machine readable, and made available to the public without restrictions that would impede the re-use of that information". State of Minnesota The State of Minnesota defines the criteria for open, XML-based file formats as follows:
The format is interoperable among diverse internal and external platforms and applications
The format is fully published and available royalty-free
The format is implemented by multiple vendors
The format is controlled by an open industry organization with a well-defined, inclusive process for evolution of the standard
Commonwealth of Massachusetts The Commonwealth of Massachusetts "defines open formats as specifications for data file formats that are based on an underlying open standard, developed by an open communit
https://en.wikipedia.org/wiki/NetBIOS%20Frames
NetBIOS Frames (NBF) is a non-routable network- and transport-level data protocol most commonly used as one of the layers of Microsoft Windows networking in the 1990s. NBF, or NetBIOS over IEEE 802.2 LLC, is used by a number of network operating systems released in the 1990s, such as LAN Manager, LAN Server, Windows for Workgroups, Windows 95 and Windows NT. Other protocols, such as NBT (NetBIOS over TCP/IP) and NBX (NetBIOS over IPX/SPX), also implement the NetBIOS/NetBEUI services over other protocol suites. The NBF protocol is broadly, but incorrectly, referred to as NetBEUI. This originates from confusion with the NetBIOS Extended User Interface, an extension to the NetBIOS API that was originally developed in conjunction with the NBF protocol; both the protocol and the NetBEUI emulator were originally developed to allow NetBIOS programs to run over IBM's new Token Ring network. Microsoft caused this confusion by labelling its NBF protocol implementation NetBEUI. NBF is a protocol, whereas the original NetBEUI was a NetBIOS application programming interface extension. Overview The NBF protocol uses 802.2 type 1 mode to provide the NetBIOS/NetBEUI name service and datagram service, and 802.2 type 2 mode to provide the NetBIOS/NetBEUI session service (virtual circuit). The NBF protocol makes wide use of broadcast messages, which accounts for its reputation as a chatty protocol. While the protocol consumes few network resources in a very small network, broadcasts begin to adversely impact performance and speed as the number of hosts in a network grows. Sytek developed NetBIOS for IBM's PC Network program, and Microsoft used it for MS-NET in 1985. In 1987, Microsoft and Novell used it for their network operating systems LAN Manager and NetWare. 
Because the NBF protocol is non-routable, it can only be used to communicate with devices in the same broadcast domain; being bridgeable, however, it can also be used to communicate with network segments connected to each other via bridges. The lack of support for routable networks means that NBF is only well suited for small to medium-sized networks, where it has an advantage over TCP/IP in that it requires little configuration. The NetBIOS/NetBEUI services must be implemented atop other protocols, such as IPX and TCP/IP (see above), in order to be of use in an internetwork. Services NetBIOS/NetBEUI provides three distinct services:
Name service for name registration and resolution
Datagram distribution service for connectionless communication
Session service for connection-oriented communication
The NBF protocol implements all of these services. Name service In order to start sessions or distribute datagrams, an application must register its NetBIOS/NetBEUI name using the name service. To do so, an "Add Name Query" or "Add Group Name Query" packet is broadcast on the network. If the NetBIOS/NetBEUI name is already in use, the name service, running on the host that owns the name, broadcasts a "Node Confli
https://en.wikipedia.org/wiki/Data%20Link%20Control
In the OSI networking model, Data Link Control (DLC) is the service provided by the data link layer. Network interface cards have a DLC address that identifies each card; for instance, Ethernet and other types of cards have a 48-bit MAC address built into the cards' firmware when they are manufactured. There is also a network transport protocol with the name Data Link Control, comparable to better-known protocols like TCP/IP and AppleTalk. DLC is a transport protocol used by IBM SNA mainframe computers and peripherals and compatible equipment. In computer networking, it is typically used for communications between network-attached printers, workstations and servers, for example by HP in their JetDirect print servers. While it was widely used up until the time of Windows 2000, versions from Windows XP onward do not include support for DLC.
https://en.wikipedia.org/wiki/The%20Whistler%20%28radio%20series%29
The Whistler is an American radio mystery drama which ran from May 16, 1942, until September 22, 1955, on the west-coast regional CBS radio network. The show was also broadcast in Chicago and over Armed Forces Radio. On the west coast, it was sponsored by the Signal Oil Company: "That whistle is your signal for the Signal Oil program, The Whistler." There were also two short-lived attempts to form east-coast broadcast spurs: July 3 to September 25, 1946, sponsored by the Campbell Soup Company; and March 26, 1947, to September 29, 1948, sponsored by Household Finance. The program was also adapted into a film noir series by Columbia Pictures in 1944. Characters and story Each episode of The Whistler began with the sound of footsteps and a person whistling. (The Saint radio series with Vincent Price used a similar opening.) The haunting signature theme tune was composed by Wilbur Hatch and featured Dorothy Roberts whistling with an orchestra. A character known only as the Whistler was the host and narrator of the tales, which focused on crime and fate. He often commented directly upon the action in the manner of a Greek chorus, taunting the characters, guilty or innocent, from an omniscient perspective. The stories followed a formula in which a person's criminal acts were typically revealed either by an overlooked but important detail or by the criminal's own stupidity. An ironic ending, often grim, was a key feature of each episode. But on rare occasions, such as "Christmas Bonus" broadcast on Christmas Day 1944, the plot's twist of fate caused the story to end happily for the protagonist. Bill Forman, a veteran radio announcer, had the title role of the Whistler for the longest period of time. Others who portrayed the Whistler at various times were Gale Gordon, Joseph Kearns, Marvin Miller (announcer for the show, who occasionally filled in for Forman and played supporting roles), and Bill Johnstone (who had the title role on radio's The Shadow from 1938 to 1943). 
Cast members included Betty Lou Gerson, Hans Conried, Joseph Kearns, Cathy Lewis, Elliott Lewis, Gerald Mohr, Lurene Tuttle and Jack Webb. Writer-producer J. Donald Wilson established the tone of the show during its first two years, and he was followed in 1944 by producer-director George Allen. Other directors included Sterling Tracy and Sherman Marks, with final scripts by Joel Malone and Harold Swanton. Of the 692 episodes, over 200 no longer exist. In 1946, a local Chicago version of The Whistler with local actors (including Everett Clarke as the Whistler) aired Sundays on WBBM, sponsored by Meister Brau beer. Films and television Films The Whistler was adapted into a film noir series of eight films by Columbia Pictures. The "Voice of the Whistler" was provided by an uncredited Otto Forrest. In the first seven films, veteran actor Richard Dix played the main character in the story, a different character in each film, ranging from mild-mannered sympathetic heroes to flawed and forc
https://en.wikipedia.org/wiki/Curve%20sketching
In geometry, curve sketching (or curve tracing) comprises techniques for producing a rough idea of the overall shape of a plane curve given its equation, without computing the large number of points required for a detailed plot. It is an application of the theory of curves to find their main features. Basic techniques The following are usually easy to carry out and give important clues as to the shape of a curve:
Determine the x and y intercepts of the curve. The x intercepts are found by setting y equal to 0 in the equation of the curve and solving for x. Similarly, the y intercepts are found by setting x equal to 0 and solving for y.
Determine the symmetry of the curve. If the exponent of x is always even in the equation of the curve, then the y-axis is an axis of symmetry for the curve. Similarly, if the exponent of y is always even, then the x-axis is an axis of symmetry. If the sum of the degrees of x and y in each term is always even or always odd, then the curve is symmetric about the origin, and the origin is called a center of the curve.
Determine any bounds on the values of x and y.
If the curve passes through the origin, determine the tangent lines there. For algebraic curves, this can be done by removing all but the terms of lowest order from the equation and solving. Similarly, removing all but the terms of highest order from the equation and solving gives the points where the curve meets the line at infinity.
Determine the asymptotes of the curve. Also determine from which side the curve approaches the asymptotes and where the asymptotes intersect the curve.
Equate the first and second derivatives to 0 to find the stationary points and inflection points respectively. If the equation of the curve cannot be solved explicitly for x or y, finding these derivatives requires implicit differentiation. 
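As a small illustration of these steps (not part of the article), the sketch below applies the intercept, symmetry, and stationary-point techniques numerically to an example curve y = x^3 − 3x; the curve itself and the bisection helper are assumptions chosen for this demonstration.

```python
# Curve-sketching checks for the example curve y = x^3 - 3x
# (the curve is an assumption; any explicit f(x) would do).

def f(x):
    return x**3 - 3*x

def df(x):                      # first derivative, for stationary points
    return 3*x**2 - 3

def bisect(g, lo, hi, tol=1e-12):
    """Root of g in [lo, hi] by bisection; g(lo) and g(hi) must differ in sign."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (g(lo) > 0) == (g(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y_intercept = f(0)                        # set x = 0: the curve passes through the origin
x_intercept = bisect(f, 1.0, 2.0)         # positive root of f: sqrt(3)
stationary  = bisect(df, 0.5, 2.0)        # f'(x) = 0 at x = 1
# every term of f has odd degree, so the curve should be symmetric about the origin:
odd_symmetry = all(abs(f(-x) + f(x)) < 1e-9 for x in (0.5, 1.3, 2.7))

print(y_intercept, x_intercept, stationary, odd_symmetry)
```

The symmetry check mirrors the rule above: the degrees of the terms x^3 and −3x are both odd, so f(−x) = −f(x) and the origin is a center of the curve.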
Newton's diagram Newton's diagram (also known as Newton's parallelogram, after Isaac Newton) is a technique for determining the shape of an algebraic curve close to and far away from the origin. It consists of plotting (α, β) for each term Ax^α y^β in the equation of the curve. The resulting diagram is then analyzed to produce information about the curve. Specifically, draw a diagonal line connecting two points on the diagram so that every other point is either on it or to the right and above it. There is at least one such line if the curve passes through the origin. Let the equation of the line be qα + pβ = r. Suppose the curve is approximated by y = Cx^(p/q) near the origin. Then the term Ax^α y^β is approximately Dx^(α + βp/q). The exponent is r/q when (α, β) is on the line and higher when it is above and to the right. Therefore, the significant terms near the origin under this assumption are only those lying on the line, and the others may be ignored; this produces a simple approximate equation for the curve. There may be several such diagonal lines, each corresponding to one or more branches of the c
https://en.wikipedia.org/wiki/SciPy
SciPy (pronounced "sigh pie") is a free and open-source Python library used for scientific computing and technical computing. SciPy contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers and other tasks common in science and engineering. SciPy is also a family of conferences for users and developers of these tools: SciPy (in the United States), EuroSciPy (in Europe) and SciPy.in (in India). Enthought originated the SciPy conference in the United States and continues to sponsor many of the international conferences as well as host the SciPy website. The SciPy library is currently distributed under the BSD license, and its development is sponsored and supported by an open community of developers. It is also supported by NumFOCUS, a community foundation for supporting reproducible and accessible science. Components The SciPy package is at the core of Python's scientific computing capabilities. Available sub-packages include:
cluster: hierarchical clustering, vector quantization, K-means
constants: physical constants and conversion factors
fft: discrete Fourier transform algorithms
fftpack: legacy interface for discrete Fourier transforms
integrate: numerical integration routines
interpolate: interpolation tools
io: data input and output
linalg: linear algebra routines
misc: miscellaneous utilities (e.g. example images)
ndimage: various functions for multi-dimensional image processing
odr: orthogonal distance regression classes and algorithms
optimize: optimization algorithms including linear programming
signal: signal processing tools
sparse: sparse matrices and related algorithms
spatial: algorithms for spatial structures such as k-d trees, nearest neighbors, convex hulls, etc.
special: special functions
stats: statistical functions
weave: tool for writing C/C++ code as Python multiline strings (now deprecated in favor of Cython)
Data structures The basic data structure used by SciPy is a multidimensional array provided by the NumPy module. NumPy provides some functions for linear algebra, Fourier transforms, and random number generation, but not with the generality of the equivalent functions in SciPy. NumPy can also be used as an efficient multidimensional container of data with arbitrary datatypes, which allows it to integrate seamlessly and speedily with a wide variety of databases. Older versions of SciPy used Numeric as an array type; it is now deprecated in favor of the newer NumPy array code. History In the 1990s, Python was extended to include an array type for numerical computing called Numeric (this package was eventually replaced by NumPy, which Travis Oliphant wrote in 2006 as a blending of Numeric and Numarray, the latter started in 2001). As of 2000, there was a growing number of extension modules and increasing interest in creating a complete environment for scientific and technical computing. In 2001, Travis Oliphant
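As a brief taste of two of the sub-packages listed above, the sketch below uses scipy.integrate.quad and scipy.optimize.minimize_scalar; the particular integrand and objective function are assumptions chosen purely for illustration.

```python
# Minimal SciPy usage sketch (the specific functions integrated and
# minimized are illustrative assumptions, not from the article).
import math
from scipy import integrate, optimize

# integrate: numerically integrate sin(x) over [0, pi]; the exact value is 2.
area, abs_err = integrate.quad(math.sin, 0, math.pi)

# optimize: minimize the scalar function (x - 2)^2; the minimum is at x = 2.
result = optimize.minimize_scalar(lambda x: (x - 2)**2)

print(area, result.x)
```

Both routines return rich result objects (quad also reports an error estimate; minimize_scalar returns convergence details), which is typical of the library's API design.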
https://en.wikipedia.org/wiki/MorphOS
MorphOS is an AmigaOS-like computer operating system (OS). It is a mixed proprietary and open-source OS produced for PowerPC (PPC) based Pegasos computers, PowerUP accelerator equipped Amiga computers, and a series of Freescale development boards that use the Genesi firmware, including the Efika and mobileGT. Since MorphOS 2.4, Apple's Mac mini G4 is supported as well, and with the releases of MorphOS 2.5 and MorphOS 2.6 the eMac and Power Mac G4 models, respectively, are supported. The release of MorphOS 3.2 added limited support for the Power Mac G5. The core, based on the Quark microkernel, is proprietary, although several libraries and other parts are open source, such as the Ambient desktop. Characteristics and versions Developed for PowerPC CPUs from Freescale and IBM, it also supports the original AmigaOS Motorola 68000 series (68k, MC680x0) applications via proprietary task-based emulation, and most AmigaOS PPC applications via API wrappers. It is API-compatible with AmigaOS 3.1 and has a GUI based on the Magic User Interface (MUI). Besides the Pegasos version of MorphOS, there is a version for Amiga computers equipped with PowerUP accelerator cards produced by Phase5. This version is free, as is registration; if unregistered, it slows down after each two-hour session. PowerUP MorphOS was most recently updated on 23 February 2006; however, it does not reach the feature set or advancement of the Pegasos release. A version of MorphOS for the Efika, a very small mainboard based on the ultra-low-power MPC5200B processor from Freescale, has been shown at exhibitions and user gatherings in Germany. Releases since MorphOS 2.0 support the Efika. Components ABox ABox is an emulation sandbox featuring a PPC-native AmigaOS API clone that is binary compatible with 68k Amiga applications and with both the PowerUP and WarpOS formats of Amiga PPC executables. ABox is based in part on the AROS Research Operating System. 
ABox includes the Trance JIT code translator for 68k-native Amiga applications. Other
AHI – audio interface: 6.7
Ambient – the default MorphOS desktop, inspired by Workbench and Directory Opus 5
CyberGraphX – graphics interface originally developed for Amiga computers: 5.1
Magic User Interface – primary graphical user interface (GUI) toolkit: 4.2
Poseidon – the Amiga USB stack developed by Chris Hodges
TurboPrint – the printing system
TinyGL – OpenGL implementation; Warp3D compatibility is featured via the Rendering Acceleration Virtual Engine (RAVE) low-level API: V 51
Quark – manages the low-level systems and currently hosts the ABox
MorphOS software MorphOS can run any system-friendly Amiga software written for 68k processors. It is also possible to use 68k libraries or datatypes in PPC applications and vice versa. It also provides a compatibility layer for PowerUP and WarpUP software written for PowerUP accelerator cards. The largest repository is Aminet, with over 75,000 packages online, with packages from all Amig
https://en.wikipedia.org/wiki/IBM%206400
The IBM 6400 family of line matrix printers were modern high-speed business computer printers introduced by IBM in 1995. These printers were designed for use on a variety of IBM systems including mainframes, servers, and PCs. Configuration The 6400 was available in a choice of open pedestal (to minimize floor space requirements) or an enclosed cabinet (for quiet operation). Three models existed, with print speeds of 500, 1000 or 1500 lines/minute. When configured with the appropriate graphics option, it could print mailing bar codes certified by the U.S. Postal Service. Twelve configurations were commonly sold by IBM. Rebadged These printers were manufactured by Printronix Corp and rebranded for IBM; all internal parts had the Printronix logo and/or artwork. Although it once did, IBM no longer manufactures printers. One of its old printer divisions became Lexmark; the other became the IBM Printing Systems Division, which was subsequently sold to Ricoh in 2007.
https://en.wikipedia.org/wiki/Connectionism
Connectionism (a term coined by Edward Thorndike in the 1930s) is the name of an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks. Connectionism has had many 'waves' since its beginnings. The first wave appeared in the 1950s with Warren Sturgis McCulloch and Walter Pitts, both focusing on comprehending neural circuitry through a formal and mathematical approach, and Frank Rosenblatt, who published the 1958 paper "The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain" in Psychological Review while working at the Cornell Aeronautical Laboratory. The first wave ended with the 1969 book about the limitations of the original perceptron idea, written by Marvin Minsky and Seymour Papert, which contributed to discouraging major funding agencies in the US from investing in connectionist research. With a few noteworthy deviations, the majority of connectionist research entered a period of inactivity until the mid-1980s. The second wave began in the late 1980s, following the 1987 book on parallel distributed processing by James L. McClelland, David E. Rumelhart et al., which introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and the use of a sigmoid activation function instead of the old 'all-or-nothing' function. Their work built, in turn, upon that of John Hopfield, who was a key figure investigating the mathematical characteristics of sigmoid activation functions. From the late 1980s to the mid-1990s, connectionism took on an almost revolutionary tone when Schneider, Terence Horgan and Tienson posed the question of whether connectionism represented a fundamental shift in psychology and GOFAI. 
Some advantages of the second-wave connectionist approach included its applicability to a broad array of functions, structural approximation to biological neurons, low requirements for innate structure, and capacity for graceful degradation. Some disadvantages included the difficulty of deciphering how ANNs process information or account for the compositionality of mental representations, and a resultant difficulty explaining phenomena at a higher level. The current (third) wave has been marked by advances in deep learning, which have allowed for large language models. The success of deep learning networks in the past decade has greatly increased the popularity of this approach, but the complexity and scale of such networks have brought with them increased interpretability problems. Basic principle The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could repres
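The second-wave ideas above (hidden layers plus sigmoid units) can be sketched in a few lines. The toy network below computes XOR, the function a single-layer perceptron famously cannot represent; the hand-set weights are an assumption for illustration, whereas a real connectionist model would learn them from data.

```python
# A toy second-wave-style network: one hidden layer of sigmoid units
# computing XOR. Weights are hand-chosen for illustration, not trained.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    h1 = sigmoid(20*x1 + 20*x2 - 10)    # hidden unit behaves like OR
    h2 = sigmoid(-20*x1 - 20*x2 + 30)   # hidden unit behaves like NAND
    return sigmoid(20*h1 + 20*h2 - 30)  # output: AND(h1, h2) = XOR(x1, x2)

outputs = {(a, b): round(xor_net(a, b)) for a in (0, 1) for b in (0, 1)}
print(outputs)
```

The hidden layer is what makes XOR expressible: its two units re-represent the inputs so that the output unit only needs a linearly separable decision.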
https://en.wikipedia.org/wiki/A20%20line
The A20, or address line 20, is one of the electrical lines that make up the system bus of an x86-based computer system. The A20 line in particular is used to transmit the 21st bit on the address bus. A microprocessor typically has a number of address lines equal to the base-two logarithm of the number of words in its physical address space. For example, a processor with 4 GB of byte-addressable physical space requires 32 lines (log2(4 GB) = log2(2^32 B) = 32), which are named A0 through A31. The lines are named after the zero-based number of the bit in the address that they are transmitting. The least significant bit is first and is therefore numbered bit 0 and signaled on line A0. A20 transmits bit 20 (the 21st bit) and becomes active once addresses reach 1 MB, or 2^20 bytes. Overview The Intel 8086, Intel 8088, and Intel 80186 processors had 20 address lines, numbered A0 to A19; with these, the processor can access 2^20 bytes, or 1 MB. Internal address registers of such processors only had 16 bits. To access a 20-bit address space, an external memory reference was made up of a 16-bit offset address added to a 16-bit segment number, shifted 4 bits to the left so as to produce a 20-bit physical address. The resulting address is equal to segment × 16 + offset. There are many combinations of segment and offset that produce the same 20-bit physical address. Therefore, there were various ways to address the same byte in memory. For example, here are four of the 4096 different segment:offset combinations, all referencing the byte whose physical address is 0x000FFFFF (the last byte in the 1 MB memory space):
F000:FFFF
FFFF:000F
F555:AAAF
F800:7FFF
Referenced the last way, an increase of one in the offset yields F800:8000, which is a proper address for the processor, but since it translates to the physical address 0x00100000 (the first byte over 1 MB), the processor would need another address line for actual access to that byte. 
Since there is no such line on the 8086 line of processors, the 21st bit, when set, is dropped, causing the address F800:8000 to "wrap around" and actually point to the physical address 0x00000000. When IBM designed the IBM PC AT (1984), it decided to use the new higher-performance Intel 80286 microprocessor. The 80286 could address up to 16 MB of system memory in protected mode. However, the CPU was supposed to emulate an 8086's behavior in real mode, its startup mode, so that it could run operating systems and programs that were not written for protected mode. The 80286 did not force the A20 line to zero in real mode, however. Therefore, the combination F800:8000 would no longer point to the physical address 0x00000000, but to the address 0x00100000. As a result, programs relying on the address wrap-around would no longer work. To remain compatible with such programs, IBM decided to correct the problem on the motherboard. That was accomplished by inserting a logic gate on the A20 line between the processor
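The real-mode address arithmetic and the wrap-around described above can be modeled directly; the sketch below is an illustration (the function name and its interface are assumptions, not an 8086 specification).

```python
# Real-mode segment:offset address formation on the 8086/80286, modeling
# the effect of the A20 line (function name is an assumption for this sketch).

def physical_address(segment, offset, a20_enabled=True):
    """Return segment*16 + offset; with A20 disabled, bit 20 is dropped."""
    addr = ((segment << 4) + offset) & 0x1FFFFF   # up to 21 bits can result
    return addr if a20_enabled else addr & 0x0FFFFF

last_byte = physical_address(0xF000, 0xFFFF)         # last byte of the 1 MB space
over_1mb  = physical_address(0xF800, 0x8000)         # first byte over 1 MB (A20 on)
wrapped   = physical_address(0xF800, 0x8000, False)  # 8086-style wrap-around

print(hex(last_byte), hex(over_1mb), hex(wrapped))
```

With A20 masked off, F800:8000 lands on physical address 0, exactly the behavior the wrap-around-dependent programs relied on.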
https://en.wikipedia.org/wiki/Edward%20Tufte
Edward Rolf Tufte (born March 14, 1942), sometimes known as "ET", is an American statistician and professor emeritus of political science, statistics, and computer science at Yale University. He is noted for his writings on information design and as a pioneer in the field of data visualization. Biography Edward Rolf Tufte was born in 1942 in Kansas City, Missouri, to Virginia Tufte (1918–2020) and Edward E. Tufte (1912–1999). He grew up in Beverly Hills, California, where his father was a longtime city official, and he graduated from Beverly Hills High School. He received a BS and MS in statistics from Stanford University and a PhD in political science from Yale. His dissertation, completed in 1968, was titled The Civil Rights Movement and Its Opposition. He was then hired by Princeton University's Woodrow Wilson School, where he taught courses in political economy and data analysis while publishing three quantitatively inclined political science books. In 1975, while at Princeton, Tufte was asked to teach a statistics course to a group of journalists who were visiting the school to study economics. He developed a set of readings and lectures on statistical graphics, which he further developed in joint seminars he taught with renowned statistician John Tukey, a pioneer in the field of information design. These course materials became the foundation for his first book on information design, The Visual Display of Quantitative Information. After negotiations with major publishers failed, Tufte decided to self-publish Visual Display in 1982, working closely with graphic designer Howard Gralla. He financed the work by taking out a second mortgage on his home. The book quickly became a commercial success and secured his transition from political scientist to information expert. 
On March 5, 2010, President Barack Obama appointed Tufte to the American Recovery and Reinvestment Act's Recovery Independent Advisory Panel "to provide transparency in the use of Recovery-related funds". Work Tufte is an expert in the presentation of informational graphics such as charts and diagrams, and is a fellow of the American Statistical Association. He has held fellowships from the Guggenheim Foundation and the Center for Advanced Study in the Behavioral Sciences. Information design Tufte's writing is important in such fields as information design and visual literacy, which deal with the visual communication of information. He coined the word chartjunk to refer to useless, non-informative, or information-obscuring elements of quantitative information displays. Tufte's other key concepts include what he calls the lie factor, the data-ink ratio, and the data density of a graphic. He uses the term "data-ink ratio" to argue against using excessive decoration in visual displays of quantitative information. In Visual Display, Tufte explains, "Sometimes decoration can help editorialize about the substance of the graphic. But it is wrong to distort the data measu
https://en.wikipedia.org/wiki/History%20of%20computing%20hardware%20%281960s%E2%80%93present%29
The history of computing hardware starting in 1960 is marked by the conversion from vacuum tubes to solid-state devices such as transistors and then integrated circuit (IC) chips. Around 1953 to 1959, discrete transistors started being considered sufficiently reliable and economical that they made further vacuum tube computers uncompetitive. Metal–oxide–semiconductor (MOS) large-scale integration (LSI) technology subsequently led to the development of semiconductor memory in the mid-to-late 1960s and then the microprocessor in the early 1970s. This led to primary computer memory moving away from magnetic-core memory devices to solid-state static and dynamic semiconductor memory, which greatly reduced the cost, size, and power consumption of computers. These advances led to the miniaturized personal computer (PC) in the 1970s, starting with home computers and desktop computers, followed by laptops and then mobile computers over the next several decades. Second generation For the purposes of this article, the term "second generation" refers to computers using discrete transistors, even when the vendors referred to them as "third-generation". By 1960, transistorized computers were replacing vacuum tube computers, offering lower cost, higher speeds, and reduced power consumption. The marketplace was dominated by IBM and the "seven dwarfs": Burroughs, UNIVAC, NCR, Control Data Corporation (CDC), and Honeywell (later known collectively as the BUNCH), together with General Electric and RCA. Some examples of 1960s second-generation computers from those vendors are: the IBM 1401, the IBM 7090/7094, and the IBM System/360; the Burroughs 5000 series; the UNIVAC 1107; the NCR 315; the CDC 1604 and the CDC 3000 series; the Honeywell 200, Honeywell 400, and Honeywell 800; the GE-400 series and the GE-600 series; the RCA 301, 3301 and the Spectra 70 series. However, some smaller companies made significant contributions. 
Also, towards the end of the second generation, Digital Equipment Corporation (DEC) was a serious contender in the small and medium machine marketplace. Meanwhile, second-generation computers were also being developed in the USSR, e.g., the Razdan family of general-purpose digital computers created at the Yerevan Computer Research and Development Institute. The second-generation computer architectures initially varied; they included character-based decimal computers, sign-magnitude decimal computers with a 10-digit word, sign-magnitude binary computers, and ones' complement binary computers. In addition, Philco, RCA, and Honeywell, for example, had some computers that were character-based binary computers, while Digital Equipment Corporation (DEC) and Philco had two's complement computers. With the advent of the IBM System/360, two's complement became the norm for new product lines. The most common word sizes for binary mainframes were 36 and 48 bits, although entry-level and midrange machines used smaller words, e.g., 12 bits, 18 bits, 24 bits, 30 bits. All but the
https://en.wikipedia.org/wiki/Superuser
In computing, the superuser is a special user account used for system administration. Depending on the operating system (OS), the actual name of this account might be root, administrator, admin or supervisor. In some cases, the actual name of the account is not the determining factor; on Unix-like systems, for example, the user with a user identifier (UID) of zero is the superuser, regardless of the name of that account; and in systems which implement a role-based security model, any user with the role of superuser (or its synonyms) can carry out all actions of the superuser account. The principle of least privilege recommends that most users and applications run under an ordinary account to perform their work, as a superuser account is capable of making unrestricted, potentially adverse, system-wide changes. Unix and Unix-like In Unix-like computer OSes (such as Linux), root is the conventional name of the user who has all rights or permissions (to all files and programs) in all modes (single- or multi-user). Alternative names include baron in BeOS and avatar on some Unix variants. BSD often provides a toor ("root" written backward) account in addition to a root account. Regardless of the name, the superuser always has a user ID of 0. The root user can do many things an ordinary user cannot, such as changing the ownership of files and binding to network ports numbered below 1024. The name root may have originated because root is the only user account with permission to modify the root directory of a Unix system. This directory was originally considered to be root's home directory, but the UNIX Filesystem Hierarchy Standard now recommends that root's home be at /root. The first process bootstrapped in a Unix-like system, usually called init, runs with root privileges. It spawns all other processes directly or indirectly, which inherit their parents' privileges. 
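The UID-0 rule, and the privilege-dropping pattern discussed below, can be sketched in Python on a Unix-like system. This is only an illustration; the helper names are made up here, and actually lowering privileges succeeds only when the process starts as root:

```python
import os

def is_superuser() -> bool:
    # On Unix-like systems the superuser is whoever has an effective
    # user ID of 0, regardless of the account's name (root, toor, ...).
    return os.geteuid() == 0

def drop_privileges(uid: int, gid: int) -> None:
    # Order matters: give up the group ID first, because after
    # setuid() the process may no longer be permitted to call setgid().
    os.setgid(gid)
    os.setuid(uid)
    # Once both calls succeed, there is no way back to UID 0.

print("superuser" if is_superuser() else "ordinary user")
```

A daemon typically binds its privileged port (below 1024) as root, then calls something like `drop_privileges()` before handling untrusted input.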
Only a process running as root is allowed to change its user ID to that of another user; once it has done so, there is no way back. Doing so is sometimes called dropping root privileges and is often done as a security measure to limit the damage from possible contamination of the process. Another case is that of programs that ask users for credentials and, in case of successful authentication, allow them to run programs with the privileges of their accounts. It is often recommended that root is never used as a normal user account, since simple typographical errors in entering commands can cause major damage to the system. Instead, a normal user account should be used, and then either the su (substitute user) or sudo (substitute user do) command is used. The su approach requires the user to know the root password, while the sudo method requires that the user be set up with the power to run "as root" within the sudoers file, typically indirectly by being made a member of the wheel, adm, admin, or sudo group. For a number of reasons, the sudo approach is now generally preferred – for example, it leaves an audit trail of w
https://en.wikipedia.org/wiki/Hacker%20Manifesto
The Conscience of a Hacker (also known as The Hacker Manifesto) is a short essay written on January 8, 1986 by Loyd Blankenship, a computer security hacker who went by the handle The Mentor, and belonged to the second generation of hacker group Legion of Doom. It was written after the author's arrest, and first published in the underground hacker ezine Phrack and can be found on many websites, as well as on T-shirts and in films. Considered a cornerstone of hacker culture, the Manifesto asserts that there is a point to hacking that supersedes selfish desires to exploit or harm other people, and that technology should be used to expand our horizons and try to keep the world free. When asked about his motivation for writing the article, Blankenship said, I was going through hacking withdrawal, and Craig/Knight Lightning needed something for an upcoming issue of Phrack. I was reading The Moon Is a Harsh Mistress and was very taken with the idea of revolution. At a more prominent public event, when asked about his arrest and motivation for writing the article, Blankenship said, I was just in a computer I shouldn’t have been. And [had] a great deal of empathy for my friends around the nation that were also in the same situation. This was post-WarGames, the movie, so pretty much the only public perception of hackers at that time was ‘hey, we’re going to start a nuclear war, or play tic-tac-toe, one of the two,’ and so I decided I would try to write what I really felt was the essence of what we were doing and why we were doing it. In popular culture The article is quoted several times in the 1995 movie Hackers, although in the movie it is being read from an issue of the hacker magazine 2600, not the historically accurate Phrack. The Mentor gave a reading of The Hacker Manifesto and offered additional insight at H2K2. It is also an item in the game Culpa Innata. 
A poster of the Hacker Manifesto appears in the 2010 film The Social Network in the Harvard room of Mark Zuckerberg. The Hacker Manifesto is mentioned in Edward Snowden's autobiography Permanent Record. Amplitude Problem's 2019 album Crime of Curiosity, featuring The Mentor himself, YTCracker, Inverse Phase and Linux kernel maintainer King Fisher of TRIAD is dedicated to The Hacker Manifesto. Each song title is a phrase from the essay. See also Hacker ethic Timeline of computer security hacker history References External links Hacker's Manifesto at Phrack Magazine 1986 documents Hacker culture Hacking (computer security) Manifestos Texts about the Internet Texts related to the history of the Internet Works about computer hacking
https://en.wikipedia.org/wiki/David%20Jay
David Jay (born April 24, 1982) is an American asexual activist. Jay is the founder and webmaster of the Asexual Visibility and Education Network (AVEN), the most prolific and well-known of the various asexual communities established since the advent of the World Wide Web and social media. Activism Frustrated with the lack of resources available regarding asexuality, Jay launched AVEN's website in 2001. Since then, he has taken a leading role in the asexuality movement, appearing on multiple television shows, and being featured heavily in Arts Engine's 2011 documentary (A)sexual. AVEN, which Salon.com referred to as the "unofficial online headquarters" of the asexuality movement, is widely recognised as the largest online asexual community. Its two main goals are to create public acceptance and discussion about asexuality and to facilitate the growth of a large online asexual community. As of June 17, 2013, AVEN has nearly 70,000 registered members. In New York City, working both with the Department of Education and private organizations, he has been providing training on Ace (asexual) inclusion to health educators. Personal life Jay is from St. Louis, Missouri, and he graduated from Crossroads College Preparatory School in 2000. At the age of 15, Jay began considering himself asexual, and he came out as asexual while a student at Wesleyan University in Connecticut. Jay is part of a nonromantic, three-parent family, which he views as influenced by his asexual identity. References External links AVENguy – David Jay's online profile Love from the Asexual Underground - David Jay's blog and podcast about asexuality AVEN – Asexual Visibility and Education Network. Interview with Jay 1982 births American founders Asexual men Crossroads College Preparatory School alumni Living people Activists from St. Louis Wesleyan University alumni American LGBT rights activists
https://en.wikipedia.org/wiki/Bitwise%20operation
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands. On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources. Bitwise operators In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first (i.e., the rightmost) one. NOT The bitwise NOT, or bitwise complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example: NOT 0111 (decimal 7) = 1000 (decimal 8) NOT 10101011 (decimal 171) = 01010100 (decimal 84) The result is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = -x − 1. For unsigned integers, the bitwise complement of a number is the "mirror reflection" of the number across the half-way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 - x, which can be visualized on a graph as a downward line that effectively "flips" an increasing range from 0 to 255, to a decreasing range from 255 to 0. 
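The identities above (the fixed-width "mirror reflection" and the two's-complement relation NOT x = −x − 1) can be checked directly in a few lines of Python. One caveat: Python integers are unbounded, so a mask is needed to emulate an n-bit unsigned NOT:

```python
def bitwise_not(x: int, bits: int = 8) -> int:
    # ~x alone yields Python's signed (two's complement) result;
    # masking to `bits` bits gives the unsigned ones' complement
    # of a fixed-width value.
    return ~x & ((1 << bits) - 1)

assert bitwise_not(0b10101011) == 0b01010100       # NOT 171 = 84
assert bitwise_not(0b0111, bits=4) == 0b1000       # NOT 7 = 8
# "Mirror reflection": for 8-bit unsigned values, NOT x = 255 - x.
assert all(bitwise_not(x) == 255 - x for x in range(256))
# In two's complement arithmetic, NOT x = -x - 1.
assert ~5 == -5 - 1
```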
A simple but illustrative example use is to invert a grayscale image where each pixel is stored as an unsigned integer. AND A bitwise AND is a binary operation that takes two equal-length binary representations and performs the logical AND operation on each pair of the corresponding bits. Thus, if both bits in the compared position are 1, the bit in the resulting binary representation is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0). For example: 0101 (decimal 5) AND 0011 (decimal 3) = 0001 (decimal 1) The operation may be used to determine whether a particular bit is set (1) or cleared (0). For example, given a bit pattern 0011 (decimal 3), to determine whether the second bit is set we use a bitwise AND with a bit pattern containing 1 only in the second bit: 0011 (decimal 3) AND 0010 (decimal 2) = 0010 (decimal 2) Because the result 0010 is non-zero, we know the second bit in the original pattern was set. This is often called bit masking. (By analogy, the u
https://en.wikipedia.org/wiki/LunarStorm
LunarStorm, in Swedish often shortened to Lunar, was a Swedish commercial advertisement-financed social networking website for teenagers, which was also available in the United Kingdom before 2007. "LunarStorm" was operated by a company called LunarWorks. According to the company's official statistics, the website had 1.2 million members in 2007, of whom some 70% were 12–17 years old. The website's popularity dropped drastically after that, and in June 2010, Wyatt Media Group (the owner of LunarStorm at that time) announced that LunarStorm would be shut down on 18 August 2010 due to lack of activity. History The predecessor to Lunarstorm was called "StajlPlejs" (i.e. "Style Place" transcribed to Swedish spelling), which started around 1996. It was created by Rickard Eriksson as Europe's first digital online community. Lunarstorm officially opened on 1 January 2000, after Lunarworks had taken over StajlPlejs and decided to rename it after the username used at the website by Rickard Eriksson's girlfriend. In 2001, LunarStorm had grown to over 600,000 members but still experienced economic hardships. Initially, LunarStorm was financed by banners and other advertising on the website, but this soon evolved to include more pay-by-SMS services. An early example was LunarStorm's own pre-paid card "Vrål" ("Roar"). In 2002, "Kolla" ("Look" or "Check this out") was introduced, which allowed users to visit LunarStorm from their mobile phones. In the same year, members were able to upgrade their membership to "pro" status and get unlimited access to a range of services for a fee. LunarStorm Pro was extremely popular among the member base and it improved the website's economic situation greatly. Another great source of revenue was LunarStorm's cooperation with other companies. An early business partner was OLW, which sponsored "RajRaj", a party-oriented photo album ("RajRaj" being Swedish slang for party). 
Through the years, LunarStorm also cooperated with numerous other companies, including Arla, EA Games, Loka, McDonald's, Coca-Cola, Aftonbladet, Sveriges Radio and Logitech. In September 2006, the website style was changed. The change had already been introduced at the British sister site a few months earlier (May–June). In February 2007, LunarWorks bought 40 percent of the shares in Bilddagboken, an increasingly strong competitor to LunarStorm. At the same time, LunarStorm replaced the "Kollage" service, which allowed users to upload images for a fee, with the free "Gallery", to compete with Bilddagboken's free picture upload functions. Eight months later, on 4 October 2007, the owners of LunarStorm announced that they had bought the remaining shares of Bilddagboken, as well as 57 percent of the online Swedish-English dictionary Tyda.se. LunarStorm, Bilddagboken and Tyda.se are now part of Wyatt Media Group, owned by Sten Mörtstedt. Both the Danish and British branches of the website were closed down on 13 July 2007. On 13 December 2007, L
https://en.wikipedia.org/wiki/Luxor%20AB
Luxor was a Swedish home electronics and computer manufacturer located in Motala, established in 1923 and acquired by Nokia in 1985. The brand name is now owned by Turkish company Vestel and is used for televisions sold in the Swedish market. Originally a manufacturer of tape recorders, radios, television sets, stereo systems, and other home electronics, it launched its first home computer, the ABC 80, in 1978. The succeeding ABC 800 series was introduced in 1981, with new releases in 1983, and was produced until the ABC line was terminated in 1986. History The company was established in 1923 by Axel Holstensson (full name Axel Harald Holstensson; 1889–1979). He was the son of an ore loader and had no training other than a few years of elementary school. He worked as an electrician and eventually as a travel fitter and foreman at ASEA from 1907 to 1918. In his spare time, he studied electrical engineering. In 1918, he moved to Motala and started an electronics store and installation company, Motala Electric Agency (). The post-war depression, however, almost led to bankruptcy; the situation was critical, and the solution was to start building radios. Components were procured from Germany, and it proved possible to sell a number of devices at a profit, so Axel Holstensson registered the Luxor Radio Factory Company () in 1923. The name Luxor was apposite and then very topical, as the Tomb of Tutankhamun had just been discovered near the city of Luxor in Egypt. Finnish company Nokia became a principal owner in the company in 1984. Computer production was discontinued in 1986, followed by television production in Motala in 1992, when production moved to Finland. Instead, the factory produced receivers for cable and satellite television. But Nokia wanted to streamline its business to mobile telephony and therefore sold the satellite receiver business in 1998 to American company Space Craft Inc. 
The company continued to manufacture satellite receivers in Motala until 2002, when production was transferred to low-cost countries. In October 1997, Nokia sold its car speaker and audio amplifier business to Harman International Industries. A collaboration with the then Electrolux-owned Autoliv AB led to that company first becoming a partner of Luxor Electronics and then, in 1998, buying the entire business. Manufacturing of various types of electronic products for the automotive industry still remains in Motala, but not under the Luxor brand. The brand name was later sold to the Norwegian retailer Elkjøp ASA; when British Dixons Retail bought Elkjøp in November 1999, it also acquired the Luxor brand. In 2006, Turkish company Vestel acquired the Luxor brand. See also List of Swedish companies References Defunct companies of Sweden Electronics companies of Sweden Electronics companies established in 1923 Electronics companies disestablished in 2002 Defunct computer hardware companies Swedish companies established in 1923 Swedish companies disestablished in 2002
https://en.wikipedia.org/wiki/Syntax%20error
In computer science, a syntax error is an error in the syntax of a sequence of characters or tokens that is intended to be written in a particular programming language. For compiled languages, syntax errors are detected at compile-time. A program will not compile until all syntax errors are corrected. For interpreted languages, however, a syntax error may be detected during program execution, and an interpreter's error messages might not differentiate syntax errors from errors of other kinds. There is some disagreement as to just what errors are "syntax errors". For example, some would say that the use of an uninitialized variable's value in Java code is a syntax error, but many others would disagree and would classify this as a (static) semantic error. In 8-bit home computers that used a BASIC interpreter as their primary user interface, the resulting error message became somewhat notorious, as this was the response to any command or user input the interpreter could not parse. A syntax error can also occur when an invalid equation is typed on a calculator. This can be caused, for instance, by opening brackets without closing them, or less commonly, entering several decimal points in one number. In Java, the following is a syntactically correct statement: System.out.println("Hello World"); while the following is not: System.out.println(Hello World); The second example would theoretically print the variable Hello World instead of the words "Hello World". However, a variable in Java cannot have a space in its name, so the syntactically correct line would be System.out.println(Hello_World). A compiler will flag a syntax error when given source code that does not meet the requirements of the language's grammar. Type errors (such as an attempt to apply the ++ increment operator to a boolean variable in Java) and undeclared variable errors are sometimes considered to be syntax errors when they are detected at compile-time. 
However, it is common to classify such errors as (static) semantic errors instead. Syntax errors on calculators A syntax error is one of several types of errors on calculators (most commonly found on scientific calculators and graphing calculators), indicating that the equation that has been entered has incorrect syntax of numbers, operations and so on. It can arise in various ways, including but not limited to: An opening bracket without a closing parenthesis (unless the missing closing parenthesis is at the very end of the equation) Using the minus sign instead of the negative symbol (or vice versa), which are distinct on most scientific calculators. Note that while some scientific calculators allow a minus sign to stand in for a negative symbol, the reverse is less common. See also Tag soup References Error Computer errors Parsing Programming language theory
https://en.wikipedia.org/wiki/List%20of%20home%20computers
Home computers were a class of microcomputer that existed from 1977 to about 1995. During this time, it made economic sense for manufacturers to make microcomputers aimed at the home user. By simplifying the machines, and making use of household items such as television sets and cassette recorders instead of dedicated computer peripherals, the home computer allowed the consumer to own a computer at a fraction of the price of computers oriented to small business. Today, the price of microcomputers has dropped to the point where there is no advantage to building a separate, incompatible series just for home users. While many office-type personal computers were used in homes, in this list a "home computer" is a factory-assembled mass-marketed consumer product, usually at significantly lower cost than contemporary business computers. It would have an alphabetic keyboard and a multi-line alphanumeric display, the ability to run games software as well as application software and user-written programs, and some removable mass storage device (such as cassette tape or floppy disk). This list excludes smartphones, personal digital assistants, pocket computers, laptop computers, programmable calculators and pure video game consoles. Single-board development or evaluation boards, intended to demonstrate a microprocessor, are excluded since these were not marketed to general consumers. Pioneering kit and assembled hobby microcomputers which generally required electronics skills to build or operate are listed separately, as are computers intended primarily for use in schools. A hobby-type computer often would have required significant expansion of memory and peripherals to make it useful for the usual role of a factory-made home computer. School computers usually had facilities to share expensive peripherals such as disk drives and printers, and often had provision for central administration. Attributes Attributes are as typically advertised by the original manufacturer. 
Popular machines inspired third-party sources for adapters, add-on processors, mass storage, and other peripherals. "Processor" indicates the microprocessor chip that ran the system. A few home computers had multiple processors, generally used for input/output devices. Processor speeds were not a competitive point among home computer manufacturers, and typically the processor ran either at its maximum rated speed ( between 1 and 4 MHz for most processor types here), or at some fraction of the television color subcarrier signal, for economy of design. Since a crystal oscillator was necessary for stable color, it was often also used as the microprocessor clock source. Many processors were second-sourced, with different manufacturers making the same device under different part numbers. Variations of a basic part number might have been used to indicate minor variations in speed or transistor type, or might indicate fairly significant alterations to the prototype's capabilities. In the Eas
https://en.wikipedia.org/wiki/Server%20farm
A server farm or server cluster is a collection of computer servers, usually maintained by an organization to supply server functionality far beyond the capability of a single machine. They often consist of thousands of computers which require a large amount of power to run and to keep cool. At the optimum performance level, a server farm has enormous financial and environmental costs. They often include backup servers that can take over the functions of primary servers that may fail. Server farms are typically collocated with the network switches and/or routers that enable communication between different parts of the cluster and the cluster's users. Server "farmers" typically mount computers, routers, power supplies and related electronics on 19-inch racks in a server room or data center. Applications Server farms are commonly used for cluster computing. Many modern supercomputers comprise giant server farms of high-speed processors connected by either Gigabit Ethernet or custom interconnects such as Infiniband or Myrinet. Web hosting is a common use of a server farm; such a system is sometimes collectively referred to as a web farm. Other uses of server farms include scientific simulations (such as computational fluid dynamics) and the rendering of 3D computer generated imagery (see render farm). Server farms are increasingly being used instead of or in addition to mainframe computers by large enterprises. In large server farms, the failure of an individual machine is a commonplace event: large server farms provide redundancy, automatic failover, and rapid reconfiguration of the server cluster. Performance The performance of the largest server farms (thousands of processors and up) is typically limited by the performance of the data center's cooling systems and the total electricity cost rather than by the processors' performance. Computers in server farms run 24/7 and consume large amounts of electricity. 
For this reason, the critical design parameter for both large and continuous systems tends to be performance per watt rather than cost of peak performance or (peak performance / (unit * initial cost)). Also, for high availability systems that must run 24/7 (unlike supercomputers, which can be power-cycled on demand and also tend to run at much higher utilizations), there is more attention to power-saving features such as variable clock-speed and the ability to turn off computer parts, processor parts, and entire computers (Wake-on-LAN and virtualization) according to demand without bringing down services. The network connecting the servers in a server farm is also an essential factor in overall performance, especially when running applications that process massive volumes of data. Performance per watt The EEMBC EnergyBench, SPECpower, and the Transaction Processing Performance Council TPC-Energy are benchmarks designed to predict performance per watt in a server farm. The power used by each rack of equipment can be measured at the power dist
https://en.wikipedia.org/wiki/Computer%20science%20%28disambiguation%29
Computer Science and Computer Sciences can refer to: The general field of computer science Computer Sciences Corporation, the predecessor of DXC Technology Computer Science (journal), a peer-reviewed scientific journal Computer Science (UIL), a University Interscholastic League academic event
https://en.wikipedia.org/wiki/Syntax%20highlighting
Syntax highlighting is a feature of text editors that is used for programming, scripting, or markup languages, such as HTML. The feature displays text, especially source code, in different colours and fonts according to the category of terms. This feature facilitates writing in a structured language such as a programming language or a markup language as both structures and syntax errors are visually distinct. This feature is also employed in many programming related contexts (such as programming manuals), either in the form of colorful books or online websites to make understanding code snippets easier for readers. Highlighting does not affect the meaning of the text itself; it is intended only for human readers. Syntax highlighting is a form of secondary notation, since the highlights are not part of the text meaning, but serve to reinforce it. Some editors also integrate syntax highlighting with other features, such as spell checking or code folding, as aids to editing which are external to the language. Practical benefits Syntax highlighting is one strategy to improve the readability and context of the text; especially for code that spans several pages. The reader can easily ignore large sections of comments or code, depending on what they are looking for. Syntax highlighting also helps programmers find errors in their program. For example, most editors highlight string literals in a different color. Consequently, spotting a missing delimiter is much easier because of the contrasting color of the text. Brace matching is another important feature with many popular editors. This makes it simple to see if a brace has been left out or locate the match of the brace the cursor is on by highlighting the pair in a different color. 
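As a toy illustration of the idea (not how production highlighters such as editors' built-in engines work, since they tokenize the source properly), a single regex pass in Python can wrap keywords and string literals in coloured HTML spans; the span class names here are arbitrary:

```python
import re

# One alternation, strings first: a keyword inside a string literal is
# consumed as part of the string token and is not highlighted as a keyword.
TOKEN = re.compile(r"('[^']*'|\"[^\"]*\")"
                   r"|\b(def|return|if|else|for|while|import)\b")

def highlight(source: str) -> str:
    def wrap(m: re.Match) -> str:
        if m.group(1) is not None:                      # string literal
            return f'<span class="str">{m.group(1)}</span>'
        return f'<span class="kw">{m.group(2)}</span>'  # keyword
    return TOKEN.sub(wrap, source)

print(highlight('if done: return "ok"'))
```

Pairing the emitted classes with a stylesheet (e.g. `.kw { color: blue }`, `.str { color: green }`) produces the contrasting colours described above, which is exactly why a missing string delimiter becomes visually obvious.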
A study published in the conference PPIG evaluated the effects of syntax highlighting on the comprehension of short programs, finding that the presence of syntax highlighting significantly reduces the time taken for a programmer to internalise the semantics of a program. Additionally, data gathered from an eye-tracker during the study suggested that syntax highlighting enables programmers to pay less attention to standard syntactic components such as keywords. Support in text editors Some text editors can also export the coloured markup in a format that is suitable for printing or for importing into word-processing and other kinds of text-formatting software; for instance as an HTML, colorized LaTeX, PostScript, or RTF version of its syntax highlighting. There are several syntax highlighting libraries or "engines" that can be used in other applications, but are not complete programs in themselves, for example the Generic Syntax Highlighter (GeSHi) extension for PHP. For editors that support more than one language, the user can usually specify the language of the text, such as C, LaTeX, or HTML, or the text editor can automatically recognize it based on the file extension or by scanning the contents of the file. This automatic
https://en.wikipedia.org/wiki/JAR%20%28file%20format%29
A JAR ("Java archive") file is a package file format typically used to aggregate many Java class files and associated metadata and resources (text, images, etc.) into one file for distribution. JAR files are archive files that include a Java-specific manifest file. They are built on the ZIP format and typically have a .jar file extension.

Design

A JAR file allows Java runtimes to efficiently deploy an entire application, including its classes and their associated resources, in a single request. JAR file elements may be compressed, shortening download times. A JAR file may contain a manifest file, located at META-INF/MANIFEST.MF. The entries in the manifest file describe how to use the JAR file. For instance, a Classpath entry can be used to specify other JAR files to load with the JAR.

Extraction

The contents of a file may be extracted using any archive extraction software that supports the ZIP format, or the jar command-line utility provided by the Java Development Kit.

Security

Developers can digitally sign JAR files. In that case, the signature information becomes part of the embedded manifest file. The JAR itself is not signed; instead, every file inside the archive is listed along with its checksum, and it is these checksums that are signed. Multiple entities may sign the JAR file, changing the JAR file itself with each signing, although the signed files themselves remain valid. When the Java runtime loads signed JAR files, it can validate the signatures and refuse to load classes that do not match the signature. It can also support 'sealed' packages, in which the Classloader will only permit Java classes to be loaded into the same package if they are all signed by the same entities. This prevents malicious code from being inserted into an existing package and so gaining access to package-scoped classes and data. The content of JAR files may be obfuscated to make reverse engineering more difficult.
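For instance, a manifest using the Classpath mechanism might look like the following (the library file names are hypothetical; Class-Path entries are space-separated and resolved relative to the JAR's own location):

```
Manifest-Version: 1.0
Class-Path: lib/helper.jar lib/extra.jar
```

When the JAR is loaded, the runtime adds the listed JARs to the class path as well.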
Executable JAR files

An executable Java program can be packaged in a JAR file, along with any libraries the program uses. Executable JAR files have a manifest specifying the entry-point class with Main-Class: myPrograms.MyClass and an explicit Class-Path (the -cp argument is ignored). Some operating systems can run these directly when clicked. The typical invocation is java -jar foo.jar from a command line. Native launchers can be created on most platforms. For instance, Microsoft Windows users who prefer having Windows EXE files can use tools such as JSmooth, Launch4J, WinRun4J or Nullsoft Scriptable Install System to wrap single JAR files into executables.

Manifest

A manifest file is a metadata file contained within a JAR. It defines extension- and package-related data. It contains name–value pairs organized in sections. If a JAR file is intended to be used as an executable file, the manifest file specifies the main class of the application. The manifest file is named MANIFEST.MF. The manifest directory has to be the first entry of the compressed archive.
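A minimal manifest for an executable JAR might read as follows (the class name follows the article's example):

```
Manifest-Version: 1.0
Main-Class: myPrograms.MyClass
```

Packaging with `jar cfm app.jar MANIFEST.MF -C classes .` and then running `java -jar app.jar` causes the runtime to invoke `myPrograms.MyClass.main`.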
https://en.wikipedia.org/wiki/Global%20variable
In computer programming, a global variable is a variable with global scope, meaning that it is visible (hence accessible) throughout the program, unless shadowed. The set of all global variables is known as the global environment or global state. In compiled languages, global variables are generally static variables, whose extent (lifetime) is the entire runtime of the program, though in interpreted languages (including command-line interpreters), global variables are generally dynamically allocated when declared, since they are not known ahead of time. In some languages, all variables are global, or global by default, while in most modern languages variables have limited scope, generally lexical scope, though global variables are often available by declaring a variable at the top level of the program. In other languages, however, global variables do not exist; these are generally modular programming languages that enforce a module structure, or class-based object-oriented programming languages that enforce a class structure.

Use

Interaction mechanisms with global variables are called global environment (see also global state) mechanisms. The global environment paradigm is contrasted with the local environment paradigm, where all variables are local with no shared memory (and therefore all interactions can be reduced to message passing). Global variables are used extensively to pass information between sections of code that do not share a caller/callee relation, such as concurrent threads and signal handlers. Languages (including C) where each file defines an implicit namespace eliminate most of the problems seen in languages with a global namespace, though some problems may persist without proper encapsulation. Without proper locking (such as with a mutex), code using global variables will not be thread-safe, except for read-only values in protected memory.

Environment variables

Environment variables are a facility provided by some operating systems.
Within the OS's shell (ksh in Unix, bash in Linux, COMMAND.COM in DOS and CMD.EXE in Windows) they are a kind of variable: for instance, in Unix and related systems an ordinary variable becomes an environment variable when the export keyword is used. Program code other than shells has to access them by API calls, such as getenv() and setenv(). They are local to the process in which they were set: if two terminal windows (two separate shell processes) are open and the value of an environment variable is changed in one window, that change is not seen by the other. When a child process is created, it inherits all the environment variables and their values from the parent process. Usually, when a program calls another program, it first creates a child process by forking, then the child adjusts the environment as needed, and lastly the child replaces itself with the program to be called. Child processes therefore cannot use environment variables to communicate with their peers, av
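The inheritance behaviour described above can be demonstrated directly. The following is a small sketch in Python (the variable name DEMO_VAR is invented for the example):

```python
import os
import subprocess
import sys

# Set a variable in this (the parent) process's environment.
os.environ["DEMO_VAR"] = "hello"

# A child process inherits a *copy* of the parent's environment...
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # prints "hello"

# ...but changes made in the child do not propagate back: removing the
# variable in a second child leaves the parent's copy untouched.
subprocess.run([sys.executable, "-c", "import os; os.environ.pop('DEMO_VAR')"])
print(os.environ["DEMO_VAR"])  # still "hello"
```

This mirrors the two-terminal example in the text: each process works on its own copy, so no change is ever visible to a sibling or parent process.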
https://en.wikipedia.org/wiki/Bram%20Cohen
Bram Cohen is an American computer programmer, best known as the author of the peer-to-peer (P2P) BitTorrent protocol in 2001, as well as of the first file-sharing program to use the protocol, also known as BitTorrent. He is also the co-founder of CodeCon, the organizer of the San Francisco Bay Area P2P-hackers meeting, the co-author of Codeville, and the creator of the Chia cryptocurrency, which implements the proof-of-space-time consensus algorithm.

Early life and career

Cohen grew up on the Upper West Side of Manhattan, New York City, the son of a teacher and a computer scientist. He claims he learned the BASIC programming language at the age of 5 on his family's Timex Sinclair computer. Cohen passed the American Invitational Mathematics Examination to qualify for the United States of America Mathematical Olympiad while he attended Stuyvesant High School in New York City. He graduated from Stuyvesant in 1993 and attended SUNY Buffalo. He later dropped out of college to work for several dot-com companies throughout the mid-to-late 1990s, the last being MojoNation, an ambitious but ill-fated project he worked on with Jim McCoy. MojoNation allowed people to break up confidential files into encrypted chunks and distribute those pieces on computers also running the software. If someone wanted to download a copy of this encrypted file, they would have to download it simultaneously from many computers. This concept, Cohen thought, was perfect for a file-sharing program, since programs like KaZaA take a long time to download a large file because the file (usually) comes from one source (or "peer"). Cohen designed BitTorrent to be able to download files from many sources, thus speeding up the download time, especially for users with faster download than upload speeds. Thus, the more popular a file is, the faster a user will be able to download it, since many people will be downloading it at the same time, and these people will also be uploading the data to other users.
Cohen has said that he has Asperger syndrome, based on a self-diagnosis.

BitTorrent

In April 2001, Cohen quit MojoNation and began work on BitTorrent. Cohen unveiled his ideas at the first CodeCon conference, which he and his roommate Len Sassaman created as a showcase event for novel technology projects after becoming disillusioned with the state of technology conferences. Cohen wrote the first BitTorrent client implementation in Python, and many other programs have since implemented the protocol. In the summer of 2002, Cohen collected free pornography to lure beta testers to use the program. BitTorrent gained its fame for its ability to quickly share large music and movie files online. Cohen has claimed he has never violated copyright law using his software. Regardless, he is outspoken in his belief that the current media business is doomed to be outmoded, despite the RIAA's and MPAA's legal and technical tactics, such as digital rights management. In May 2005, Cohen released a tracke
https://en.wikipedia.org/wiki/Epidemiological%20method
The science of epidemiology has matured significantly from the times of Hippocrates, Semmelweis and John Snow. The techniques for gathering and analyzing epidemiological data vary depending on the type of disease being monitored, but each study will have overarching similarities.

Outline of the process of an epidemiological study

Establish that a problem exists

Full epidemiological studies are expensive and laborious undertakings. Before any study is started, a case must be made for the importance of the research.

Confirm the homogeneity of the events

Any conclusions drawn from inhomogeneous cases will be suspect. All events or occurrences of the disease must be true cases of the disease.

Collect all the events

It is important to collect as much information as possible about each event in order to inspect a large number of possible risk factors. The events may be collected from varied methods of epidemiological study or from censuses or hospital records. The events can be characterized by incidence rates and prevalence rates. Often, occurrence of a single disease entity is set as an event. Given the inherently heterogeneous nature of any given disease (i.e., the unique disease principle), a single disease entity may be subdivided into disease subtypes. This framework is well conceptualized in the interdisciplinary field of molecular pathological epidemiology (MPE).

Characterize the events as to epidemiological factors

Predisposing factors: non-environmental factors that increase the likelihood of getting a disease. Genetic history, age, and gender are examples.

Enabling/disabling factors: factors relating to the environment that either increase or decrease the likelihood of disease. Exercise and a good diet are examples of disabling factors; a weakened immune system and poor nutrition are examples of enabling factors.

Precipitating factors: the most important category, in that it identifies the source of exposure. It may be a germ, toxin or gene.
Reinforcing factors: factors that compound the likelihood of getting a disease. They may include repeated exposure or excessive environmental stresses.

Look for patterns and trends

Here one looks for similarities in the cases which may identify major risk factors for contracting the disease. Epidemic curves may be used to identify such risk factors.

Formulate a hypothesis

If a trend has been observed in the cases, the researcher may postulate as to the nature of the relationship between the potential disease-causing agent and the disease.

Test the hypothesis

Because epidemiological studies can rarely be conducted in a laboratory, the results are often polluted by uncontrollable variations in the cases. This often makes the results difficult to interpret. Two methods have evolved to assess the strength of the relationship between the disease-causing agent and the disease. Koch's postulates were the first criteria developed for epidemiological relationships. Because they only work
https://en.wikipedia.org/wiki/Has-a
In database design, object-oriented programming and design, has-a (has_a or has a) is a composition relationship where one object (often called the constituted object, or part/constituent/member object) "belongs to" (is part or member of) another object (called the composite type), and behaves according to the rules of ownership. In simple words, a has-a relationship in an object is called a member field of that object. Multiple has-a relationships combine to form a possessive hierarchy.

Related concepts

"Has-a" is to be contrasted with an is-a (is_a or is a) relationship, which constitutes a taxonomic hierarchy (subtyping). Whether the most logical relationship between an object and its subordinate is has-a or is-a is not always clear; confusion over such decisions has necessitated the creation of these metalinguistic terms. A good example of the has-a relationship is the containers in the C++ STL.

To summarize, the following relations hold:

hypernym-hyponym (supertype-subtype) relations between types (classes), defining a taxonomic hierarchy, where for an inheritance relation a hyponym (subtype, subclass) has a type-of (is-a) relationship with its hypernym (supertype, superclass);

holonym-meronym (whole/entity/container-part/constituent/member) relations between types (classes), defining a possessive hierarchy, where: for an aggregation (i.e. without ownership) relation, a holonym (whole) has a has-a relationship with its meronym (part); for a composition (i.e. with ownership) relation, a meronym (constituent) has a part-of relationship with its holonym (entity); and for a containment relation, a meronym (member) has a member-of relationship with its holonym (container);

concept-object (type-token) relations between types (classes) and objects (instances), where a token (object) has an instance-of relationship with its type (class).

Examples

Entity–relationship model

In databases, has-a relationships are usually represented in an entity–relationship model.
As the diagram on the right shows, an account can have multiple characters: the account has a "has-a" relationship with the character.

UML class diagram

In object-oriented programming this relationship can be represented with a Unified Modeling Language class diagram. This has-a relationship is also known as composition. As the class diagram on the right shows, a car "has-a" carburetor, or a car is "composed of" a carburetor. When the diamond is coloured black it signifies composition, i.e. the object on the side closest to the diamond is made up of or contains the other object, while a white diamond signifies aggregation, meaning that the object closest to the diamond can have or possess the other object.

C++

Another way to distinguish between composition and aggregation in modeling the real world is to consider the relative lifetime of the contained object. For example, if a Car object contains a Chassis object, a Chassis will most likely
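The lifetime-based distinction between composition and aggregation can be sketched as follows (shown here in Python for brevity rather than C++; the class names follow the article's car example, and the code is an illustrative sketch, not a definitive implementation):

```python
class Carburetor:
    """A part whose lifetime is tied to the car that owns it."""

class Engine:
    """A part that may exist independently of any one car."""

class Car:
    def __init__(self, engine):
        # Composition: the Car creates and owns its Carburetor, so the
        # part cannot outlive the whole.
        self.carburetor = Carburetor()
        # Aggregation: the Engine is created elsewhere and merely
        # referenced, so it can outlive the Car or be shared.
        self.engine = engine

shared_engine = Engine()
car = Car(shared_engine)
print(car.engine is shared_engine)  # True: the aggregated part is shared
```

In C++ the same distinction is usually expressed by holding the composed part by value (or as an owning smart pointer) and the aggregated part by a non-owning pointer or reference.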
https://en.wikipedia.org/wiki/FileMaker
FileMaker is a cross-platform relational database application from Claris International, a subsidiary of Apple Inc. It integrates a database engine with a graphical user interface (GUI) and security features, allowing users to modify a database by dragging new elements into layouts, screens, or forms. It is available in desktop, server, iOS and web-delivery configurations. FileMaker Pro, the desktop app, evolved from a DOS application originally called simply FileMaker, but was then developed primarily for the Apple Macintosh and released in April 1985. It was rebranded as FileMaker Pro in 1990. Since 1992 it has been available for Microsoft Windows as well as for the classic Mac OS and macOS, and can be used in a cross-platform environment. FileMaker Go, the mobile app, was released for iOS devices in July 2010. FileMaker Server allows centralized hosting of apps which can be used by clients running the desktop or mobile apps. It is also available hosted by Claris, as FileMaker Cloud.

History

FileMaker began as an MS-DOS-based computer program named Nutshell, developed by Nashoba Systems of Concord, Massachusetts, in the early 1980s. Nutshell was distributed by Leading Edge, an electronics marketing company that had recently started selling IBM PC-compatible computers. With the introduction of the Macintosh, Nashoba combined the basic data engine with a new forms-based graphical user interface (GUI). Leading Edge was not interested in newer versions, preferring to remain a DOS-only vendor, and kept the Nutshell name. Nashoba found another distributor, Forethought Inc., and introduced the program on the Macintosh platform as FileMaker in April 1985. When Apple introduced the Macintosh Plus in 1986, the next version of FileMaker was named FileMaker Plus to reflect the new model's name. Forethought was purchased by Microsoft, which was then introducing their PowerPoint product that became part of Microsoft Office.
Microsoft had introduced its own database application, Microsoft File, shortly before FileMaker, but it was outsold by FileMaker and was discontinued. Microsoft negotiated with Nashoba for the right to publish FileMaker, but Nashoba decided to self-publish the next version, FileMaker 4.

Purchase by Claris

Shortly thereafter, Apple Computer formed Claris, a wholly owned subsidiary, to market software. Claris purchased Nashoba to round out its software suite. By then, Leading Edge and Nutshell had faded from the marketplace because of competition from other DOS- and later Windows-based database products. FileMaker, however, continued to succeed on the Macintosh platform. Claris changed the product's name to FileMaker II to conform to its naming scheme for other products, such as MacWrite II, but the product changed little from the last Nashoba version. Several minor versions followed. In 1990, the product was released as FileMaker Pro 1.0, and in September 1992, Claris released a cross-platform version for bot
https://en.wikipedia.org/wiki/Macintosh%20LC%20family
The Macintosh LC is a family of personal computers designed, manufactured and sold by Apple Computer, Inc. from 1990 to 1997. Introduced alongside the Macintosh IIsi and Macintosh Classic as part of a new wave of lower-priced Macintosh computers, the LC offered the same overall performance as the Macintosh II for half the price. Part of Apple's goal was to produce a machine that could be sold to school boards for the same price as an Apple IIGS, a machine that was very successful in the education market. Not long after the Apple IIe Card was introduced for the LC, Apple officially announced the retirement of the IIGS, as the company wanted to focus its sales and marketing efforts on the LC. The original Macintosh LC was introduced in October 1990, with updates in the form of the LC II and LC III in 1992 and early 1993. These early models all shared the same pizza-box form factor, and were joined by the Macintosh LC 500 series of all-in-one desktop machines in mid-1993. A total of twelve different LC models were produced by the company, the last of which, the Power Macintosh 5300 LC, was on sale until early 1997.

Overview

After Apple co-founder Steve Jobs left Apple in 1985, product development was handed to Jean-Louis Gassée, formerly manager of Apple France. Gassée consistently pushed the Apple product line in two directions: towards more "openness" in terms of expandability and interoperability, and towards higher prices. Gassée long argued that Apple should not market its computers towards the low end of the market, where profits were thin, but instead concentrate on the high end and higher profit margins. He illustrated the concept using a graph showing the price/performance ratio of computers, with low-power, low-cost machines in the lower left and high-power, high-cost machines in the upper right. The "high-right" goal became a mantra among upper management, who said "fifty-five or die", referring to Gassée's goal of a 55 percent profit margin.
This policy led to a series of ever more expensive computers. This was in spite of strenuous objections within the company, and when a group at Claris started a low-end Mac project called "Drama", Gassée actively killed it. Elsewhere at the company, two engineers, H.L. Cheung and Paul Baker, had been working in secret on a pet project, a color Macintosh prototype they called "Spin". The idea was to produce a low-cost system in the vein of the Apple II, a product that Cheung had previously worked on at Apple as the head of design. The machine would, in effect, be a significantly smaller Macintosh II with built-in video, no NuBus expansion, and a matching RGB monitor similar to the one introduced with the Apple IIGS the year prior. The project changed direction during development, with executives dictating that the machine should have video capabilities and processing power similar to the Macintosh IIci, which was also under development at the time. In early 1989, the prototype was sho
https://en.wikipedia.org/wiki/MTR
The Mass Transit Railway (MTR) is a major public transport network serving Hong Kong. Operated by the MTR Corporation Limited (MTRCL), it consists of heavy rail, light rail, and feeder bus services centred on a 10-line rapid transit network serving the urbanised areas of Hong Kong Island, Kowloon, and the New Territories. The system included of rail as of 2022, with 167 stations, including 98 heavy rail stations, 68 light rail stops and one high-speed rail terminus. Under the government's rail-led transport policy, the MTR system is a common mode of public transport in Hong Kong, with over five million trips made on an average weekday. It consistently achieves a 99.9 per cent on-time rate on its train journeys. As of 2018, the MTR had a 49.3 per cent share of the franchised public-transport market, making it the most popular transport option in Hong Kong. The integration of the Octopus smart-card fare-payment technology into the MTR system in September 1997 has further enhanced the ease of commuting.

History

Initial proposals

During the 1960s, the government of Hong Kong saw a need to accommodate increasing road traffic as Hong Kong's economy grew rapidly. In 1966, British transport consultants Freeman, Fox, Wilbur Smith & Associates were appointed to study the transport system of Hong Kong. The study was based on the projection of the population of Hong Kong for 1986, estimated at 6,868,000. On 1 September 1967, the consultants submitted the Hong Kong Mass Transport Study to the government, which recommended the construction of a rapid transit rail system in Hong Kong. The study suggested that four rail lines be developed in six stages, with a completion date set between December 1973 and December 1984. Detailed locations of lines and stations were presented in the study.
These four lines were the Kwun Tong line (from Mong Kok to Ma Yau Tong), Tsuen Wan line (from Admiralty to Tsuen Wan), Island line (from Kennedy to Chai Wan Central), and Shatin line (from Tsim Sha Tsui to Wo Liu Hang). The study was submitted to the Legislative Council on 14 February 1968. The consultants received new data from the 1966 by-census on 6 March 1968. A short supplementary report was submitted on 22 March 1968 and amended in June 1968. The by-census indicated that the projected 1986 population was reduced by more than one million from the previous estimate to 5,647,000. The dramatic reduction affected town planning. The population distribution was largely different from the original study. The projected 1986 populations of Castle Peak New Town, Sha Tin New Town, and, to a lesser extent, Tsuen Wan New Town, were revised downwards, and the plan for a new town in Tseung Kwan O was shelved. In this updated scenario, the consultants reduced the scale of the recommended system. The supplementary report stated that the originally suggested four tracks between Admiralty station and Mong Kok station should be reduced to two, and only parts of the Island line, Tsuen Wan
https://en.wikipedia.org/wiki/Track%20gauge
In rail transport, track gauge (in American English, alternatively track gage) is the distance between the two rails of a railway track. All vehicles on a rail network must have wheelsets that are compatible with the track gauge. Since many different track gauges exist worldwide, gauge differences often present a barrier to wider operation on railway networks. The term derives from the metal bar, or gauge, that is used to ensure the distance between the rails is correct. Railways also deploy two other gauges to ensure compliance with a required standard. A loading gauge is a two-dimensional profile that encompasses a cross-section of the track, a rail vehicle and a maximum-sized load: all rail vehicles and their loads must be contained in the corresponding envelope. A structure gauge specifies the outline into which structures (bridges, platforms, lineside equipment etc.) must not encroach.

Uses of the term

The most common use of the term "track gauge" refers to the transverse distance between the inside surfaces of the two load-bearing rails of a railway track, usually measured at to below the top of the rail head in order to clear worn corners and allow for rail heads having sloping sides. The term derives from the "gauge", a metal bar with a precisely positioned lug at each end that track crews use to ensure the actual distance between the rails lies within tolerances of a prescribed standard: on curves, for example, the spacing is wider than normal. Deriving from the name of the bar, the distance between these rails is also referred to as the track gauge.

Choice of gauge

Early track gauges

The earliest form of railway was a wooden wagonway, along which single wagons were manhandled, almost always in or from a mine or quarry. Initially the wagons were guided by human muscle power; subsequently by various mechanical methods. Timber rails wore rapidly: later, flat cast-iron plates were provided to limit the wear.
In some localities, the plates were made L-shaped, with the vertical part of the L guiding the wheels; this is generally referred to as a "plateway". Flanged wheels eventually became universal, and the spacing between the rails had to be compatible with that of the wagon wheels. As the guidance of the wagons was improved, short strings of wagons could be connected and pulled by teams of horses, and the track could be extended from the immediate vicinity of the mine or quarry, typically to a navigable waterway. The wagons were built to a consistent pattern and the track would be made to suit the needs of the horses and wagons: the gauge was more critical. The Penydarren Tramroad of 1802 in South Wales, a plateway, spaced these at over the outside of the upstands. The Penydarren Tramroad probably carried the first journey by a locomotive, in 1804, and it was successful for the locomotive, but unsuccessful for the track: the plates were not strong enough to carry its weight. A considerable progressive step was made when cast ir
https://en.wikipedia.org/wiki/ZNetwork
ZNetwork, formerly known as Z Communications, is a left-wing activist-oriented media group founded in 1986 by Michael Albert and Lydia Sargent. It is, in broad terms, ideologically libertarian socialist, anti-capitalist, and heavily influenced by participatory economics, although much of its content is focused on critical commentary on foreign affairs. Its publications include Z Magazine, ZNet, and Z Video. Since early November 2022, they have all been regrouped under the name ZNetwork.

History

Zeta Magazine was founded by Michael Albert and Lydia Sargent in 1987, both of whom had previously co-founded South End Press. It was renamed Z Magazine in 1989. Founded in 1994, Z Media Institute provides classes and other sessions in how to start and produce alternative media, how to better understand media, and how to develop organising skills. The institute has hosted Stephen Shalom presentations on parpolity a number of times. Founded in 1995, ZNet (also known as ZNet, ZNetwork and Z Communications) is a website with contributors that include Noam Chomsky, Eduardo Galeano, Boris Kagarlitsky, Edward Said, Chris Spannos and Kevin Zeese. John Pilger has described it as one of the best news sources online. Rene Milan of the Institute for Ethics and Emerging Technologies called the site a rich source of information about participism.

Publications and authors

Z Magazine is published in print and online monthly. Contributors to the magazine have included Patrick Bond, Noam Chomsky, Ward Churchill, Alexander Cockburn, Edward S. Herman, bell hooks, Mike Kuhlenbeck, Staughton Lynd, John Ross, Juliet Schor, Holly Sklar, Cornel West, Kevin Zeese and Howard Zinn. Articles written by Chomsky have been republished in the New Statesman.

Criticism

In a 2005 interview with Joshua Frank, Ward Churchill discussed issues he had with Z Magazine. Churchill claimed an article he worked on was not published for two years and was misattributed.
He also felt Albert and Sargent had greater influence than others involved with the publication. In 2012, George Monbiot criticized the site's defence of the book The Politics of Genocide by Edward S. Herman and David Peterson, which he said had been called a work of genocide denial by scholars he had consulted, such as Martin Shaw.

See also

Alternative media (U.S. political left)
CorpWatch
Independent Media Centre
Political cinema
TeleSUR
https://en.wikipedia.org/wiki/List%20of%20Romanians
This is a list of some of the most prominent Romanians, rather than of all famous Romanians. It contains historical and important contemporary figures (athletes, actors, directors etc.). Most of the people listed here are of Romanian ethnicity, whose native tongue is Romanian. There are also a few mentioned who were born in Romania and can speak Romanian, though not of Romanian ethnicity.

Historical and political figures

Medieval

Alexander I the Good (1375–1432), Domn of Moldavia (1400–1432)
Basarab I the Founder (1270–1352), first independent Domn of Wallachia (1310–1352)
Michael the Brave (1558–1601), Domn of Wallachia (1593–1601), Domn of Moldavia (1600) and de facto ruler of Transylvania (1599–1600)
Mircea I the Elder (1355–1418), Domn of Wallachia (1386–1394, 1397–1418)
Neagoe Basarab V (1459–1521), Domn of Wallachia (1512–1521)
Nicolaus Olahus (1493–1568), Roman Catholic Archbishop of the Kingdom of Hungary
Stephen III the Great (1433–1504), Domn of Moldavia (1457–1504)
Vlad II Dracul (before 1395–1447), Domn of Wallachia (1436–1442, 1443–1447)
Vlad III the Impaler (1431–1477), Domn of Wallachia (1448, 1456–1462, 1476–1477)

Renaissance Age

Dimitrie Cantemir, ruler of Moldavia, historian, writer, and music composer
Antioch Kantemir, poet and Russian ambassador
Constantin Brancoveanu, Prince of Wallachia (1688–1714)

Modern era

Tudor Vladimirescu
Prince Gheorghe Bibescu
Prince Barbu Dimitrie Știrbei
Prince Alexandru Ioan Cuza
King Carol I of Romania
King Ferdinand I of Romania
King Carol II of Romania
King Michael I of Romania
Queen Elizabeth of Romania
Queen Marie of Romania

Politicians and military men

Ion Antonescu, Prime Minister and Conducător (Leader) during World War II
Corneliu Codreanu, founder and charismatic leader of the Iron Guard
Alexandru Averescu, general and politician
Nicolae Bălcescu, historian, revolutionary
Antoine Bibesco, diplomat, writer
Gheorghe Grigore Cantacuzino, Prime Minister during the 1907 Romanian Peasants' revolt
Lascăr Catargiu
Constantin Cristescu, general, Chief of Staff
Barbu Ștefănescu Delavrancea, former mayor of Bucharest
Ion Dragalina, general
Petre Dumitrescu, general
Octavian Goga, writer, former Prime Minister
Avram Iancu, revolutionary in the 1848 Revolution
Take Ionescu, Prime Minister in interbellum Romania
Mugur Isărescu, economist and member of the Club of Rome, former Prime Minister
Mihail Lascăr, World War II general, Minister of Defense
Mihail Manoilescu, economist and Foreign Minister
Alexandru Marghiloman, Prime Minister during World War I
Gheorghe G. Mironescu, Prime Minister in interbellum Romania
David Praporgescu, general
Constantin Prezan, general in World War
https://en.wikipedia.org/wiki/HAL/S
HAL/S (High-order Assembly Language/Shuttle) is a real-time aerospace programming language and compiler/cross-compiler system for avionics applications used by NASA and associated agencies (JPL, etc.). It has been used in many U.S. space projects since 1973, and its most significant use was in the Space Shuttle program (approximately 85% of the Shuttle software was coded in HAL/S). It was designed by Intermetrics in 1972 for NASA and delivered in 1973. The HAL/S compiler is written in XPL, a dialect of PL/I. Although HAL/S is designed primarily for programming on-board computers, it is general enough to meet nearly all the needs in the production, verification, and support of aerospace and other real-time applications. According to documentation from 2005, it was being maintained by the HAL/S project of United Space Alliance. Goals and principles The three key principles in designing the language were reliability, efficiency, and machine-independence. The language is designed to allow aerospace-related tasks (such as vector/matrix arithmetic) to be accomplished in a way that is easily understandable by people who have spaceflight knowledge, but may not necessarily have proficiency with computer programming. HAL/S was deliberately designed to exclude some constructs that were thought to cause errors; for instance, there is no support for dynamic memory allocation. The language provides special support for real-time execution environments. Some features, such as "GOTO", were provided chiefly to ease mechanical translations from other languages. (page 82) "HAL" was suggested as the name of the new language by Ed Copps, a founding director of Intermetrics, to honor Hal Laning, a colleague at MIT. The Preface page of the HAL/S Language Specification says that "fundamental contributions to the concept and implementation of MAC were made by Dr. J. Halcombe Laning of the Draper Laboratory."
A NASA-standard ground-based version of HAL, named HAL/G for "ground", was proposed, but the coming emergence of the soon-to-be-named Ada programming language contributed to Intermetrics' lack of interest in continuing this work. Instead, Intermetrics placed its emphasis on what would become the "Red" finalist, which was not selected. Host compiler systems have been implemented on the IBM 360/370, Data General Eclipse, and Modcomp IV/Classic computers. Target computer systems have included the IBM 360/370, IBM AP-101 (the Space Shuttle avionics computer), Sperry 1819A/1819B, Data General Nova and Eclipse, CII Mitra 125, Modcomp II and IV, NASA Std. Spacecraft Computer-1 and Computer-2, ITEK ATAC 16M (Galileo Project), and, since 1978, the RCA CDP1802 COSMAC microprocessor (Galileo Project and others). Syntax HAL/S is a mostly free-form language: statements may begin anywhere on a line and may spill over onto the next lines, and multiple statements may be fitted onto the same line if required. However, non-space characters in the first column of a program line may have special
https://en.wikipedia.org/wiki/Ian%20Clarke%20%28computer%20scientist%29
Ian Clarke (born 16 February 1977) is the original designer and lead developer of Freenet. Early life Clarke grew up in Navan, County Meath, Ireland. He was educated at Dundalk Grammar School and while there, he came first twice in the Senior Chemical, Physical, and Mathematical section of the Young Scientist Exhibition. The first time, in 1993, was with a project entitled "The C Neural Network Construction Kit". The second time, the following year, was with a project entitled "Mapping Internal Variations in Translucency within a Translucent Object using Beams of Light". Freenet In 1995, Clarke left Dundalk to study Computer Science and Artificial Intelligence at the University of Edinburgh, Scotland. While at Edinburgh, he became president of the then dormant Artificial Intelligence Society, resulting in its revival. In Clarke's final year at Edinburgh, he completed his final year project, entitled "A Distributed, Decentralised Information Storage and Retrieval System". In July 1999, after receiving a 'B' grade for his paper, he decided to release it to the Internet and invited volunteers to help implement his design. The resulting free software project became known as Freenet, and attracted significant attention from the mainstream and technology media. In August 1999, Clarke began his first full-time job as a software developer in the Space Division of Logica plc, a London-based software consulting company. In February 2000, he left Logica to join a small software start-up called Instil Ltd. In August 2000, he left London for Santa Monica, California, where he co-founded Uprizer, Inc. with the intent of commercializing some of his Freenet-related ideas. In January 2001, Uprizer Inc. successfully raised $4 million in Series A round venture funding from investors including Intel Capital. In March 2001, Clarke published an article describing FairShare, developed in collaboration with Uprizer's co-founders, Steven Starr and Rob Kramer. 
Clarke was concerned that copyright would become increasingly difficult to enforce in the Internet age; the goal of FairShare was to provide an alternative to copyright as a way to compensate creators. Professional career In September 2002, after leaving Uprizer, Clarke formed Cematics LLC to explore a variety of new ideas and opportunities. Cematics LLC developed a number of products, including Locutus, a P2P search application for the enterprise; WhittleBit, a search engine that learns from user feedback; and 3D17, a web-based collaborative editing tool. In October 2003, Clarke decided to leave the United States to return to Edinburgh, Scotland. In December 2004, he began work on Dijjer, a distributed P2P web cache, and Indy, a collaborative music discovery system, both in conjunction with ChangeTv, a company founded by his long-time collaborator, Steven Starr, who later brought in Clarke and Oliver Luckett as co-founders. In 2003, he was named to the MIT Technology Review TR100 as one of the top 100 innova
https://en.wikipedia.org/wiki/Project%20Genie
Project Genie was a computer research project started in 1964 at the University of California, Berkeley. It produced an early time-sharing system, the Berkeley Timesharing System, which was then commercialized as the SDS 940. History Project Genie was funded by J. C. R. Licklider, the head of ARPA's Information Processing Techniques Office at that time. The project was a smaller counterpart to MIT's Project MAC. The Scientific Data Systems SDS 940 was created by modifying an SDS 930 24-bit commercial computer so that it could be used for timesharing. The work was funded by ARPA and directed by Melvin W. Pirtle and Wayne Lichtenberger at UC Berkeley. Butler Lampson, Chuck Thacker, and L. Peter Deutsch were among the young technical leaders of that project. When completed and in service, the first 940 ran reliably in spite of its array of tricky mechanical issues such as a huge disk drive driven by hydraulic arms. It served about forty or fifty users at a time and still managed to drive a graphics subsystem that was quite capable for its time. When SDS realized the value of the time sharing system, and that the software was in the public domain (funded by the US federal government), they came back to Berkeley and collected enough information to begin manufacturing. Because SDS manufacturing was overloaded with the 9 series production and the startup of the Sigma Series production, it could not incorporate the 940 modifications into the standard production line. Instead, production of the 940s was turned over to the Systems Engineering Department, which manufactured systems customised to user requirements. To produce a 940, the Systems Engineering Department ordered a 930 from SDS manufacturing, installed the modifications developed by the Berkeley engineers, and shipped the machine to the SDS customer as a 940.
Project Genie pioneered several computer hardware techniques, such as commercial time-sharing which allowed end-user programming in machine language, separate protected user modes, memory paging, and protected memory. Concepts from Project Genie influenced the development of the TENEX operating system for the PDP-10, and Unix, which inherited the concept of process forking from it (Unix co-creator Ken Thompson worked on an SDS 940 while at Berkeley). An SDS 940 mainframe was used by Douglas Engelbart's OnLine System at the Stanford Research Institute and was the first computer used by the Community Memory Project at Berkeley. In 1968, Lampson also helped design a different timesharing system at Berkeley: Cal TSS for the CDC 6400 with Extended Core Storage. Lampson was only involved until 1969, but Cal TSS continued until 1971. Several members of project Genie such as Pirtle, Thacker, Deutsch and Lampson left UCB to form the Berkeley Computer Corporation (BCC), which produced one prototype, the BCC-500. After BCC went bankrupt after its funding from the computer mainframe lessor Data Processing Financial & General (DPF&G) s
https://en.wikipedia.org/wiki/Evolutionary%20computation
In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial-and-error problem solvers with a metaheuristic or stochastic optimization character. In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions, and introducing small random changes as well as, depending on the method, mixing parental information. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection), mutation and possibly recombination. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm. Evolutionary computation techniques can produce highly optimized solutions in a wide range of problem settings, making them popular in computer science. Many variants and extensions exist, suited to more specific families of problems and data structures. Evolutionary computation is also sometimes used in evolutionary biology as an in silico experimental procedure to study common aspects of general evolutionary processes. History The concept of mimicking evolutionary processes to solve problems originates before the advent of computers, such as when Alan Turing proposed a method of genetic search in 1948. Turing's B-type u-machines resemble primitive neural networks, and connections between neurons were learnt via a sort of genetic algorithm. His P-type u-machines resemble a method for reinforcement learning, where pleasure and pain signals direct the machine to learn certain behaviors.
However, Turing's paper went unpublished until 1968, and he died in 1954, so this early work had little to no effect on the field of evolutionary computation that was to develop. Evolutionary computing as a field began in earnest in the 1950s and 1960s. There were several independent attempts to use the process of evolution in computing at this time, which developed separately for roughly 15 years. Three branches emerged in different places to attain this goal: evolution strategies, evolutionary programming, and genetic algorithms. A fourth branch, genetic programming, eventually emerged in the early 1990s. These approaches differ in the method of selection, the permitted mutations, and the representation of genetic data. By the 1990s, the distinctions between the historic branches had begun to blur, and the term 'evolutionary computing' was coined in 1991 to denote a field that exists over all four paradigms. In 1962, Lawrence J. Fogel initiated the research of Evolutionary Programming in the United States, which was considered an artificial intelligence endeavor. In this system, finite state machines are used to solve a predictio
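The generate-evaluate-select-vary loop described above can be sketched in a few lines. This is an illustrative toy, not any of the historic branches: the fitness function, population size, and operators are arbitrary choices made for the example.

```python
import random

def fitness(x):
    # Toy objective with a single peak at x = 3.14.
    return -(x - 3.14) ** 2

def evolve(pop_size=30, generations=60, mutation_scale=0.3):
    # Initial set of candidate solutions, generated at random.
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: drop the less fit half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Variation: recombination (here, averaging two parents)
        # plus a small random mutation.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append((a + b) / 2 + random.gauss(0, mutation_scale))
        population = parents + children
    return max(population, key=fitness)

best = evolve()  # converges toward the fitness peak near 3.14
```

Because the fittest parents are carried over unchanged each generation, the best solution found never gets worse, and the population gradually concentrates around the peak of the fitness function.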
https://en.wikipedia.org/wiki/Y-%CE%94%20transform
In electrical engineering, the Y-Δ transform, also written wye-delta and also known by many other names, is a mathematical technique to simplify the analysis of an electrical network. The name derives from the shapes of the circuit diagrams, which look respectively like the letter Y and the Greek capital letter Δ. This circuit transformation theory was published by Arthur Edwin Kennelly in 1899. It is widely used in analysis of three-phase electric power circuits. The Y-Δ transform can be considered a special case of the star-mesh transform for three resistors. In mathematics, the Y-Δ transform plays an important role in the theory of circular planar graphs. Names The Y-Δ transform is known by a variety of other names, mostly based upon the two shapes involved, listed in either order. The Y, spelled out as wye, can also be called T or star; the Δ, spelled out as delta, can also be called triangle, Π (spelled out as pi), or mesh. Thus, common names for the transformation include wye-delta or delta-wye, star-delta, star-mesh, or T-Π. Basic Y-Δ transformation The transformation is used to establish equivalence for networks with three terminals. Where three elements terminate at a common node and none are sources, the node is eliminated by transforming the impedances. For equivalence, the impedance between any pair of terminals must be the same for both networks. The equations given here are valid for complex as well as real impedances. Complex impedance is a quantity measured in ohms which represents resistance as positive real numbers in the usual manner, and also represents reactance as positive and negative imaginary values. Equations for the transformation from Δ to Y The general idea is to compute an impedance $R_Y$ at a terminal node of the Y circuit with impedances $R'$, $R''$ to adjacent nodes in the Δ circuit by

$R_Y = \frac{R' R''}{\sum R_\Delta}$

where $\sum R_\Delta$ is the sum of all impedances in the Δ circuit. This yields the specific formulae

$R_1 = \frac{R_b R_c}{R_a + R_b + R_c}, \quad R_2 = \frac{R_a R_c}{R_a + R_b + R_c}, \quad R_3 = \frac{R_a R_b}{R_a + R_b + R_c}$

(with $R_a$ the Δ impedance opposite terminal 1, $R_b$ opposite terminal 2, and $R_c$ opposite terminal 3). Equations for the transformation from Y to Δ The general idea is to compute an impedance $R_\Delta$ in the Δ circuit by

$R_\Delta = \frac{R_P}{R_{\mathrm{opposite}}}$

where $R_P = R_1 R_2 + R_2 R_3 + R_3 R_1$ is the sum of the products of all pairs of impedances in the Y circuit and $R_{\mathrm{opposite}}$ is the impedance of the node in the Y circuit which is opposite the edge with $R_\Delta$. The formulae for the individual edges are thus

$R_a = \frac{R_P}{R_1}, \quad R_b = \frac{R_P}{R_2}, \quad R_c = \frac{R_P}{R_3}$

Or, if using admittance instead of resistance:

$Y_a = \frac{Y_2 Y_3}{\sum Y_Y}, \quad Y_b = \frac{Y_1 Y_3}{\sum Y_Y}, \quad Y_c = \frac{Y_1 Y_2}{\sum Y_Y}$

where $\sum Y_Y = Y_1 + Y_2 + Y_3$. Note that the general formula in Y to Δ using admittance is similar to Δ to Y using resistance. A proof of the existence and uniqueness of the transformation The feasibility of the transformation can be shown as a consequence of the superposition theorem for electric circuits. A short proof, rather than one derived as a corollary of the more general star-mesh transform, can be given as follows. The equivalence lies in the statement that for any external voltages ($V_1$, $V_2$ and $V_3$) applied at the three nodes ($N_1$, $N_2$ and $N_3$), the corresponding currents ($I_1$, $I_2$ and $I_3$) are exactly the same for both the Y and Δ circuits, and vice versa. In this proof, we start with given external currents at the nodes. According to the superpositi
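As a quick numerical check, the two transformations can be written out directly. The function and variable names here are just illustrative; the labels follow the usual convention that the Δ impedance Ra sits opposite Y terminal 1, and so on.

```python
def delta_to_y(ra, rb, rc):
    """Return (r1, r2, r3) of the Y network equivalent to a Δ network."""
    s = ra + rb + rc  # sum of all Δ impedances
    return (rb * rc / s, ra * rc / s, ra * rb / s)

def y_to_delta(r1, r2, r3):
    """Return (ra, rb, rc) of the Δ network equivalent to a Y network."""
    rp = r1 * r2 + r2 * r3 + r3 * r1  # sum of pairwise products
    return (rp / r1, rp / r2, rp / r3)

# Round trip: transforming Δ→Y and back recovers the original values.
r1, r2, r3 = delta_to_y(30.0, 60.0, 90.0)   # gives (30.0, 15.0, 10.0)
back = y_to_delta(r1, r2, r3)               # recovers (30.0, 60.0, 90.0)
```

Both functions work unchanged for complex impedances, since Python's arithmetic operators accept `complex` values.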
https://en.wikipedia.org/wiki/Perfect%20hash%20function
In computer science, a perfect hash function for a set S is a hash function that maps distinct elements in S to a set of integers, with no collisions. In mathematical terms, it is an injective function. Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented. In addition, if the keys are not in the data and if it is known that queried keys will be valid, then the keys do not need to be stored in the lookup table, saving space. Disadvantages of perfect hash functions are that S needs to be known for the construction of the perfect hash function. Non-dynamic perfect hash functions need to be re-constructed if S changes. For frequently changing S, dynamic perfect hash functions may be used at the cost of additional space. The space requirement to store the perfect hash function is in O(n), where n is the number of keys in the structure. The important performance parameters for perfect hash functions are the evaluation time, which should be constant, the construction time, and the representation size. Application A perfect hash function with values in a limited range can be used for efficient lookup operations, by placing keys from S (or other associated values) in a lookup table indexed by the output of the function. One can then test whether a key is present in S, or look up a value associated with that key, by looking for it at its cell of the table. Each such lookup takes constant time in the worst case. With perfect hashing, the associated data can be read or written with a single access to the table. Performance of perfect hash functions The important performance parameters for perfect hashing are the representation size, the evaluation time, the construction time, and additionally the range requirement m/n, the ratio of the range size m to the number of keys n. The evaluation time can be as fast as O(1), which is optimal. The construction time needs to be at least O(n), because each element in S needs to be considered, and S contains n elements. This lower bound can be achieved in practice. The lower bound for the representation size depends on m and n. Let m = (1+ε)n and h a perfect hash function. A good approximation for the lower bound is $\log_2 e - \varepsilon \log_2 \frac{1+\varepsilon}{\varepsilon}$ bits per element. For minimal perfect hashing, ε = 0, the lower bound is log₂ e ≈ 1.44 bits per element. Construction A perfect hash function for a specific set S that can be evaluated in constant time, and with values in a small range, can be found by a randomized algorithm in a number of operations that is proportional to the size of S. The original construction of Fredman, Komlós & Szemerédi (1984) uses a two-level scheme to map a set S of n elements to a range of O(n) indices, and then map each index to a range of hash values. The first level of their construction chooses a large prime p (larger than the size of the universe from which S is drawn) and a parameter k, and maps each element x of S to the index g(x) = (kx mod p) mod n. If k is chosen randomly, this step is likely
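The two-level scheme described above can be illustrated in code. This is a simplified, illustrative sketch: the prime, the retry threshold, and the table-sizing rule are assumptions made for the example, not the exact FKS parameters.

```python
import random

def build_fks(S, p):
    """Two-level (FKS-style) perfect hash sketch for a fixed set S of
    distinct non-negative integers; p must be a prime larger than every key."""
    n = len(S)
    # First level: g(x) = (k*x mod p) mod n. Retry k until the sum of
    # squared bucket sizes is small, so second-level tables use O(n) space.
    while True:
        k = random.randrange(1, p)
        buckets = [[] for _ in range(n)]
        for x in S:
            buckets[(k * x) % p % n].append(x)
        if sum(len(b) ** 2 for b in buckets) < 4 * n:
            break
    # Second level: a bucket of size b gets a table of size b**2, where a
    # random multiplier is collision-free with probability > 1/2.
    tables = []
    for b in buckets:
        if not b:
            tables.append((1, [None]))
            continue
        size = len(b) ** 2
        while True:
            k2 = random.randrange(1, p)
            slots = [None] * size
            ok = True
            for x in b:
                i = (k2 * x) % p % size
                if slots[i] is not None:
                    ok = False
                    break
                slots[i] = x
            if ok:
                tables.append((k2, slots))
                break
    return k, tables

def lookup(key, k, tables, p, n):
    """Constant-time membership test: two hash evaluations, one comparison."""
    k2, slots = tables[(k * key) % p % n]
    return slots[(k2 * key) % p % len(slots)] == key

S = [3, 17, 42, 99, 123]
p = 10007  # prime larger than every key in this toy universe
k, tables = build_fks(S, p)
```

Each lookup evaluates the first-level hash to pick a bucket table, then the bucket's own hash to pick a slot, so the worst-case access time is constant regardless of n.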
https://en.wikipedia.org/wiki/Antivirus%20software
Antivirus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware. Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other malware, antivirus software started to protect against other computer threats. Some products also include protection from malicious URLs, spam, and phishing. History 1949–1980 period (pre-antivirus days) Although the roots of the computer virus date back as early as 1949, when the Hungarian scientist John von Neumann published the "Theory of self-reproducing automata", the first known computer virus appeared in 1971 and was dubbed the "Creeper virus". This computer virus infected Digital Equipment Corporation's (DEC) PDP-10 mainframe computers running the TENEX operating system. The Creeper virus was eventually deleted by a program created by Ray Tomlinson and known as "The Reaper". Some people consider "The Reaper" the first antivirus software ever written; that may be so, but the Reaper was itself a virus, specifically designed to remove the Creeper virus. The Creeper virus was followed by several other viruses. The first known virus that appeared "in the wild" was "Elk Cloner", in 1981, which infected Apple II computers. In 1983, the term "computer virus" was coined by Fred Cohen in one of the first ever published academic papers on computer viruses. Cohen used the term "computer virus" to describe programs that "affect other computer programs by modifying them in such a way as to include a (possibly evolved) copy of itself" (note that a more recent definition of computer virus has been given by the Hungarian security researcher Péter Szőr: "a code that recursively replicates a possibly evolved copy of itself"). The first IBM PC compatible "in the wild" computer virus, and one of the first real widespread infections, was "Brain" in 1986.
From then, the number of viruses has grown exponentially. Most of the computer viruses written in the early and mid-1980s were limited to self-reproduction and had no specific damage routine built into the code. That changed when more and more programmers became acquainted with computer virus programming and created viruses that manipulated or even destroyed data on infected computers. Before internet connectivity was widespread, computer viruses were typically spread by infected floppy disks. Antivirus software came into use, but was updated relatively infrequently. During this time, virus checkers essentially had to check executable files and the boot sectors of floppy disks and hard disks. However, as internet usage became common, viruses began to spread online. 1980–1990 period (early days) There are competing claims for the innovator of the first antivirus product. Possibly, the first publicly documented removal of an "in the wild" computer virus (i.e. the "Vienna virus") was pe
https://en.wikipedia.org/wiki/Kowloon%E2%80%93Canton%20Railway
The Kowloon–Canton Railway (KCR) was a railway network in Hong Kong. It was owned and operated by the Kowloon–Canton Railway Corporation (KCRC) until 2007. Rapid transit services, a light rail system, feeder bus routes within Hong Kong, and intercity passenger and freight train services to China on the KCR network have been operated by the MTR Corporation since 2007. While still wholly owned by the Hong Kong Government through the KCRC, the KCR network has been operated by the MTR Corporation Limited under a 50-year, extensible service concession since 2 December 2007. The two companies have merged their local metro lines into one unified fare system. Immediately after the merger, steps were taken to integrate the network into the same fare system as the MTR, and gates between the two networks were removed in several stages in 2008. Although the MTR Corporation is a listed company, the Hong Kong Government is the controlling shareholder with a stake of about 75%. In 2006, the KCR local passenger train network (i.e. excluding intercity services) recorded an annual ridership of 544 million. History 19th century During the 19th century, the western colonial powers competed with each other to establish and maintain commercial and political spheres of influence in China. Hong Kong held a vital position in protecting British trading interests in South China. The idea of connecting Hong Kong and China with a railway was first proposed to prominent Hong Kong businessmen in March 1864 by a British railway engineer, Sir Rowland MacDonald Stephenson, who had considerable experience of developing railways in India.
The minutes of the committee of the Chamber of Commerce meeting in Hong Kong, where a letter setting out his ideas was considered on 7 March 1864, stated that "the opinions of the Chamber in respect to his suggestions should be recorded in the form of a letter, to the effect that the Committee deem it essential for the advancement of the project that short lines of railway should at first only be tried, and that it is not advisable at present to interfere with any water communications which are already established and can generally be worked more cheaply than railway traffic." The main reason for this decision, which effectively killed MacDonald Stephenson's idea, was that many of the businessmen concerned had a personal interest in protecting their investments in the established shipping companies that enjoyed a monopoly on carriage of passengers and goods into and out of China. It took a further 30 years before the idea of building a railway from Hong Kong to China was again given serious consideration. In the last decade of the 19th century increasing diplomatic and trade activity by France in Canton and the granting of a concession to Belgian interests to construct a railway from Peking to Hankow, led to diplomatic and commercial concerns from the British about protecting Hong Kong as the
https://en.wikipedia.org/wiki/Mattel%20Aquarius
Aquarius is a home computer designed by Radofin and released by Mattel Electronics in 1983. Based on the Zilog Z80 microprocessor, the system has a rubber chiclet keyboard, 4K of RAM, and a subset of Microsoft BASIC in ROM. It connects to a television set for audiovisual output, and uses a cassette tape recorder for secondary data storage. A limited number of peripherals, such as a 40-column thermal printer, a 4-color printer/plotter, and a 300 baud modem, were released. The Aquarius was discontinued in October 1983, only a few months after it was launched. Development Looking to compete in the home computer market, Mattel Electronics turned to Radofin, the Hong Kong-based manufacturer of their Intellivision consoles. Radofin had designed two computer systems, known internally as "Checkers" and the more sophisticated "Chess". Mattel contracted for these to become the Aquarius and Aquarius II, respectively. The Aquarius was announced in 1982 and finally released in June 1983, at a price of $160. Production ceased four months later because of poor sales, and Mattel paid Radofin to take back the marketing rights. Four other companies (CEZAR Industries, CRIMAC, New Era Incentives, and Bentley Industries) also marketed the unit and accessories. The Aquarius was often bundled with the Mini-Expander peripheral, which added game pads, an additional cartridge port for memory expansion, and the General Instrument AY-3-8910 sound chip. Other peripherals were the data recorder, 40-column thermal printer, and 4K and 16K RAM carts. Less common first-party peripherals include a 300 baud cartridge modem, a 32K RAM cart, a 4-color plotter, and the Quick Disk drive. Reception Although less expensive than the TI-99/4A and VIC-20, the Aquarius had comparatively weak graphics and limited memory. Internally, Mattel programmers adopted Bob Del Principe's mock slogan, "Aquarius - a system for the seventies".
Of the 32 software titles Mattel announced for the unit, only 21 were released, most of which were ports of Intellivision games. Because of the hardware limitations of the Aquarius, the quality of many games suffered. There was such a lack of programmable graphics that Mattel added a special character set (see Character set section), so the games could at least use semigraphics. As a magazine of the time put it, "The Aquarius suffered one of the shortest lifespans of any computer—it was discontinued by Mattel almost as soon as it hit store shelves, a victim of the 1983 home computer price wars." Just after the release of the Aquarius, Mattel announced plans for the Aquarius II, and there is evidence that the Aquarius II reached the market in small numbers, but was also not a commercial success. Technical specifications CPU: Zilog Z80 @ 3.5 MHz Memory: 4K RAM, expandable to 36K RAM; 8K ROM Keyboard: 48-key rubber chiclet keyboard Display: 80x72 semigraphics in 16 colors (TEA1002 chip, 40x24 text characters - with a 25th "zero" row at top - with a size of 8x8 pixels
https://en.wikipedia.org/wiki/CDP
CDP may refer to: Places Census-designated place, an unincorporated area in the U.S. for which census data is collected Cuddapah Airport (IATA identifier: CDP), Andhra Pradesh, India Technology Cache Discovery Protocol, an extension to the BitTorrent file-distribution system Certificate in Data Processing, a professional certification conferred by the ICCP Charger Downstream Port, a type of battery-charging USB port Cisco Discovery Protocol, a proprietary data link network protocol developed by Cisco Systems Columbia Data Products, formerly a computer manufacturer, now a software company Composers Desktop Project, non-realtime audio digital-signal processing (DSP) software Content delivery platform, a system for managing and deploying web content Continuous Data Protection, whereby computer data is continuously backed up Customer data platform, marketer-based management system for customer profiles Science and medicine Coronary Drug Project, a trial of treatments for coronary heart disease Cytidine diphosphate, a nucleotide Political parties Centrist Democratic Party of the Philippines Congress for Democracy and Progress, a political party of Burkina Faso Constitutional Democratic Party (disambiguation) Christian Democratic Party (Australia) Other uses Carbon Disclosure Project, of greenhouse gas emissions Cassa Depositi e Prestiti, investment bank of Italy Center for Domestic Preparedness, a U.S. government emergency response training facility Châteauneuf-du-Pape AOC, a French wine appellation Coeur de Pirate, a leading Canadian writer and singer of popular music in French and English, real name Béatrice Martin, born Montreal 1989 Collateralized debt position, a type of structured asset-backed security Collett Dickenson Pearce, a British advertising agency See also
https://en.wikipedia.org/wiki/Outward%20Bound
Outward Bound (OB) is an international network of outdoor education organisations that was founded in the United Kingdom by Lawrence Holt and Kurt Hahn in 1941. Today there are organisations, called schools, in over 35 countries which are attended by more than 150,000 people each year. Outward Bound International is a non-profit membership and licensing organisation for the international network of Outward Bound schools. The Outward Bound Trust is an educational charity established in 1946 to operate the schools in the United Kingdom. Separate organisations operate the schools in each of the other countries in which Outward Bound operates. Outward Bound helped to shape the U.S. Peace Corps and numerous other outdoor adventure programs. Its aim is to foster the personal growth and social skills of participants by using challenging expeditions in the outdoors. History The first Outward Bound school was opened in Aberdyfi, Wales in 1941 by Lawrence Holt and Kurt Hahn with financial support from the Blue Funnel Line shipping company. The name Outward Bound was derived from the nautical term for a ship leaving safe harbour for the open sea. Outward Bound grew out of Hahn's work in the development of the Gordonstoun school and what is now known as the Duke of Edinburgh's Award. Outward Bound's founding mission, during the Second World War, was to improve the survival chances of young seamen should their ships be torpedoed in the mid-Atlantic. James Martin Hogan served as warden for the first year of the school. This mission was established and then expanded by Capt. J. F. "Freddy" Fuller who took over the leadership of the Aberdyfi school in 1942 and served the Outward Bound movement as senior warden until 1971. 
Fuller had been seconded from the Blue Funnel Line following wartime experience during the Battle of the Atlantic of surviving two successive torpedo attacks and commanding an open lifeboat in the Atlantic Ocean for thirty-five days without losing a single member of the crew. An educational charity, named The Outward Bound Trust, was established in 1946 to operate the school. A second school followed in England at Eskdale Green in 1950. The first Outward Bound program for women was held in 1951. During the next decade, several other schools opened around the United Kingdom. A school in Lumut, Malaysia opened in 1954, the first outside the United Kingdom. Outward Bound Australia was founded in 1956. The first Outward Bound USA course was run in Puerto Rico in 1961 for the Peace Corps, which it helped to shape. Outward Bound New Zealand was founded in 1962, Outward Bound Singapore established in 1967 and Outward Bound Hong Kong in 1970. Outward Bound Costa Rica was founded in 1991. From the inception of Outward Bound, community service was an integral part of the program, especially in the areas of sea and mountain rescues and this remains an important part of the training for both staff and students. During the period 1941 to 1965 in the Un
https://en.wikipedia.org/wiki/M.U.L.E.
M.U.L.E. is a 1983 multiplayer video game written for the Atari 8-bit family of home computers by Ozark Softscape. Designer Danielle Bunten Berry (credited as Dan Bunten) took advantage of the four joystick ports of the Atari 400 and 800 to allow four-player simultaneous play. M.U.L.E. was one of the first five games published in 1983 by new company Electronic Arts, alongside Axis Assassin, Archon: The Light and the Dark, Worms?, and Hard Hat Mack. Primarily a turn-based strategy game, it incorporates real-time elements where players compete directly as well as aspects that simulate economics. The game was ported to the Commodore 64, Nintendo Entertainment System, and IBM PC (as a self-booting disk). Japanese versions also exist for the PC-88, Sharp X1, and MSX2 computers. Like the subsequent models of the Atari 8-bit family, none of these systems allow four players with separate joysticks. The Commodore 64 version lets four players share joysticks, with two players using the keyboard during action portions. Gameplay Set on the fictional planet Irata (Atari backwards), the game is an exercise in supply and demand economics involving competition among four players, with computer opponents automatically filling in for any missing players. Players choose the race of their colonist, which has advantages and disadvantages that can be paired to their respective strategies. To win, players not only compete against each other to amass the largest amount of wealth, but must also cooperate for the survival of the colony. Central to the game is the acquisition and use of Multiple Use Labor Elements, or M.U.L.E.s, to develop and harvest resources from the player's real estate. Depending on how it is outfitted, a M.U.L.E. can be configured to harvest Energy, Food, Smithore (from which M.U.L.E.s are constructed), and Crystite (a valuable mineral available only at the "Tournament" level). 
Players must balance supply and demand of these elements, buying what they need and selling what they don't. Players may exploit or create shortages by refusing to sell to other players or to the "store", which raises the price of the resource on the following turns. Scheming between players is encouraged by allowing collusion, which initiates a mode allowing a private transaction. Crystite is the one commodity that is not influenced by supply and demand considerations, being deemed to be sold off-world, so the strategy with this resource is somewhat different; a player may attempt to maximize production without fear of having too much supply for the demand. Each resource is required to do certain things on each turn. For instance, if a player is short on Food, there is less time to take one's turn. If a player is short on Energy, some land plots won't produce any output, while a shortage of Smithore raises the price of M.U.L.E.s and prevents the store from manufacturing new M.U.L.E.s. Players must deal with periodic random events such as runaway M.U.L.E.s, sunspot ac
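The shortage mechanic described above can be sketched as a toy model. This is an illustrative price rule of my own, not M.U.L.E.'s actual (unpublished here) pricing formula; the function name and the 25% step size are assumptions.

```python
def adjust_price(price, supply, demand, step=0.25):
    """Toy price rule: scarcity raises next turn's price, surplus lowers it.

    Illustrative sketch only; not M.U.L.E.'s actual pricing formula.
    """
    if supply < demand:
        return round(price * (1 + step), 2)   # shortage: price climbs
    if supply > demand:
        return round(price * (1 - step), 2)   # glut: price drops
    return price

# A player withholding Smithore creates a shortage, raising the store price
# turn after turn:
price = 100.0
for turn in range(3):
    price = adjust_price(price, supply=2, demand=5)
print(price)  # 195.31 after three turns of shortage
```

Under such a rule, a colluding pair of players who corner Smithore can drive up the cost of new M.U.L.E.s for everyone else, which is exactly the kind of scheming the game encourages.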
https://en.wikipedia.org/wiki/Operational%20semantics
Operational semantics is a category of formal programming language semantics in which certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures, rather than by attaching mathematical meanings to its terms (denotational semantics). Operational semantics are classified into two categories: structural operational semantics (or small-step semantics) formally describe how the individual steps of a computation take place in a computer-based system; in contrast, natural semantics (or big-step semantics) describe how the overall results of the executions are obtained. Other approaches to providing a formal semantics of programming languages include axiomatic semantics and denotational semantics. The operational semantics for a programming language describes how a valid program is interpreted as sequences of computational steps. These sequences are then the meaning of the program. In the context of functional programming, the final step in a terminating sequence returns the value of the program. (In general there can be many return values for a single program, because the program could be nondeterministic, and even for a deterministic program there can be many computation sequences since the semantics may not specify exactly what sequence of operations arrives at that value.) Perhaps the first formal incarnation of operational semantics was the use of the lambda calculus to define the semantics of Lisp. Abstract machines in the tradition of the SECD machine are also closely related. History The concept of operational semantics was used for the first time in defining the semantics of Algol 68. The following statement is a quote from the revised ALGOL 68 report: The meaning of a program in the strict language is explained in terms of a hypothetical computer which performs the set of actions that constitute the elaboration of that program.
(Algol68, Section 2) The first use of the term "operational semantics" in its present meaning is attributed to Dana Scott (Plotkin04). What follows is a quote from Scott's seminal paper on formal semantics, in which he mentions the "operational" aspects of semantics. It is all very well to aim for a more ‘abstract’ and a ‘cleaner’ approach to semantics, but if the plan is to be any good, the operational aspects cannot be completely ignored. (Scott70) Approaches Gordon Plotkin introduced the structural operational semantics, Matthias Felleisen and Robert Hieb the reduction semantics, and Gilles Kahn the natural semantics. Small-step semantics Structural operational semantics Structural operational semantics (SOS, also called structured operational semantics or small-step semantics) was introduced by Gordon Plotkin in (Plotkin81) as a logical means to define operational semantics. The basic idea behind SOS is to define the behavior of a program in terms of the behavior of its parts, thus providing
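The small-step idea of defining a program's behavior in terms of the behavior of its parts can be illustrated for a toy language of integer additions. The representation below (tuples for terms, integers for values) is my own choice for the sketch, not taken from any of the cited formalisms.

```python
# Small-step (structural) semantics for a toy expression language.
# A term is either an int (a value) or a tuple ("+", left, right).

def step(expr):
    """Perform one reduction step; integers are values and cannot step."""
    op, left, right = expr
    if isinstance(left, tuple):        # reduce the left subterm first
        return (op, step(left), right)
    if isinstance(right, tuple):       # then the right subterm
        return (op, left, step(right))
    return left + right                # both sides are values: combine them

def evaluate(expr):
    """The big-step result, obtained by iterating small steps to a value."""
    while isinstance(expr, tuple):
        expr = step(expr)
    return expr

# (1 + 2) + (3 + 4) reduces one step at a time:
# -> (3 + (3 + 4)) -> (3 + 7) -> 10
print(evaluate(("+", ("+", 1, 2), ("+", 3, 4))))  # 10
```

The sequence of intermediate terms produced by `step` is exactly the "sequence of computational steps" that small-step semantics takes as the meaning of the program, while `evaluate` collapses it to the overall result in the style of big-step (natural) semantics.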
https://en.wikipedia.org/wiki/GPE%20Palmtop%20Environment
GPE (a recursive acronym for GPE Palmtop Environment) is a graphical user interface environment for handheld computers, such as palmtops and personal digital assistants (PDAs), running some Linux kernel-based operating system. GPE is a complete environment of software components and applications which makes it possible to use a Linux handheld for tasks such as personal information management (PIM), audio playback, email, and web browsing. GPE is free and open-source software, subject to the terms of the GNU General Public License (GPL) or the GNU Lesser General Public License (LGPL). Supported devices GPE is bundled with embedded Linux distributions targeting the following platforms:
- Sharp Zaurus
- Hewlett-Packard iPAQ
- Hewlett-Packard Jornada 72x
- Siemens AG SIMpad SL4
In addition, GPE maintainers and the open source community are developing ports for additional devices:
- GamePark Holdings GP2x
- Nokia 770
- Nokia N800
- Palm TX
- Palm Treo 650
- HTC Universal
- HTC Typhoon
- HTC Tornado
- HTC Wizard
- HTC Apache
On February 5, 2007, the GPE project announced GPE Phone Edition, a new variant of GPE developed for mobile phones. Software components GPE does not use any of the GNOME Core Applications; instead, its software was written from scratch, tailored to the embedded environment. GPE is based on GTK+, and because GTK+ did not gain support for Wayland until version 3.10, GPE uses X11 as its windowing system, e.g. with the combination X.Org Server/Matchbox. The project provides an infrastructure for easy and powerful application development by providing core software such as shared libraries and database schemata, and by building on available technology including SQLite, D-BUS, GStreamer and several of the more common standards defined by freedesktop.org. One of the major goals of the GPE project is to encourage people to work on free software for mobile devices and to experiment with writing a GUI for embedded devices.
Some of the applications already developed for GPE include:
- GPE-Contacts - A contacts manager
- GPE-Calendar - The calendar application
- GPE-Edit - A simple text editor
- GPE-Filemanager - A file manager with MIME type and remote access support
- GPE-Gallery - Small and easy to use image viewer
- GPE-Games - A small collection of tiny games
- GPE-Mini-Browser - A CSS and JavaScript compatible compact web browser
- GPE-Sketchbook - Create notes and sketches
- GPE-Soundbite - A voice memo tool
- GPE-ToDo - A task list manager
- GPE-Timesheet - Track time spent on tasks
- Starling - A GStreamer based audio player
GPE's PIM applications (GPE-Contacts, GPE-Calendar, GPE-ToDo) can be synchronized with their desktop and web counterparts (such as Novell Evolution, Mozilla Sunbird and Google Calendar) through the use of GPE-Syncd and the OpenSync framework. GPE also contains a number of GUI utilities for configuring 802.11 Wireless LAN, Bluetooth, IrDA, Firewall, ALSA, Package Management, among others. A mobile push e-mail client ba
https://en.wikipedia.org/wiki/DDR2%20SDRAM
Double Data Rate 2 Synchronous Dynamic Random-Access Memory (DDR2 SDRAM) is a double data rate (DDR) synchronous dynamic random-access memory (SDRAM) interface. It is a JEDEC standard (JESD79-2), first published in September 2003. DDR2 succeeded the original DDR SDRAM specification, and was itself succeeded by DDR3 SDRAM in 2007. DDR2 DIMMs are neither forward compatible with DDR3 nor backward compatible with DDR. In addition to double pumping the data bus as in DDR SDRAM (transferring data on the rising and falling edges of the bus clock signal), DDR2 allows higher bus speed and requires lower power by running the internal clock at half the speed of the data bus. The two factors combine to produce a total of four data transfers per internal clock cycle. Since the DDR2 internal clock runs at half the DDR external clock rate, DDR2 memory operating at the same external data bus clock rate as DDR provides the same bandwidth but with worse latency. Alternatively, DDR2 memory operating at twice the external data bus clock rate as DDR may provide twice the bandwidth with the same latency. The best-rated DDR2 memory modules are at least twice as fast as the best-rated DDR memory modules. The maximum capacity of commercially available DDR2 DIMMs is 8 GB, but chipset support and availability for such DIMMs are sparse; 2 GB per DIMM is more common. History DDR2 SDRAM was first produced by Samsung in 2001. In 2003, the JEDEC standards organization presented Samsung with its Technical Recognition Award for the company's efforts in developing and standardizing DDR2. DDR2 was officially introduced in the second quarter of 2003 at two initial clock rates: 200 MHz (referred to as PC2-3200) and 266 MHz (PC2-4200). Both performed worse than the original DDR specification due to higher latency, which made total access times longer. However, the original DDR technology tops out at a clock rate around 200 MHz (400 MT/s).
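The relationship between bus clock, transfer rate, and module designation described above can be checked with simple arithmetic. The sketch below assumes a standard 64-bit-wide DIMM; the function names are my own.

```python
def ddr2_transfers_per_sec(bus_clock_mhz):
    """DDR2 moves data on both edges of the bus clock (double pumping),
    so the transfer rate in MT/s is twice the bus clock in MHz."""
    return bus_clock_mhz * 2

def ddr2_peak_bandwidth_mb_s(bus_clock_mhz, bus_width_bits=64):
    """Peak bandwidth of a 64-bit DIMM in MB/s: MT/s times bytes per transfer."""
    return ddr2_transfers_per_sec(bus_clock_mhz) * bus_width_bits // 8

# PC2-3200 runs a 200 MHz data bus: 400 MT/s and 3200 MB/s peak bandwidth,
# while the internal memory clock runs at half the bus rate (100 MHz).
print(ddr2_transfers_per_sec(200))    # 400
print(ddr2_peak_bandwidth_mb_s(200))  # 3200
```

The PC2-xxxx module names encode exactly this peak bandwidth in MB/s, which is why a 200 MHz part is called PC2-3200.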
Higher-performance DDR chips exist, but JEDEC has stated that they will not be standardized. These chips are mostly standard DDR chips that have been tested and rated by the manufacturer as capable of operation at higher clock rates. Such chips draw significantly more power than slower-clocked chips, but usually offer little or no improvement in real-world performance. DDR2 started to become competitive against the older DDR standard by the end of 2004, as modules with lower latencies became available. Specification Overview The key difference between DDR2 and DDR SDRAM is the increase in prefetch length. In DDR SDRAM, the prefetch length was two bits for every bit in a word, whereas it is four bits in DDR2 SDRAM. During an access, four bits are read or written to or from a four-bit-deep prefetch queue. This queue receives or transmits its data over the data bus in two data bus clock cycles (each clock cycle transfers two bits of data). Increasing the prefetch length allowed DDR2 SDRAM to double
https://en.wikipedia.org/wiki/FK
FK or fk may refer to:
In arts and entertainment:
- Flyer Killer, fictional automated robots in the Terminator film franchise
- Fox Kids, a former American children's television programming block
- Funky Kong, a video game character
Place:
- FK postcode area, UK, centred on Falkirk in Scotland
- Falkland Islands, FIPS PUB 10-4 territory code and ISO 3166 digram
- .fk, country code top-level domain (ccTLD) for the Falkland Islands
Other uses:
- First aid kit
- First Corridor rail coach
- Football Club, abbreviated "FK" in Slavic and Balkan countries
- Foreign key, in database design
- Forward kinematics, in robotics and animation, the use of kinematic equations to find the position of an articulated object
- Fuck, an English-language vulgarity
- Africa West Airlines (IATA airline designator FK)
- Finders Keepers
- kinetic friction, in physics, mechanics
https://en.wikipedia.org/wiki/GN
GN may refer to:
Businesses and organizations
- Air Gabon (IATA code: GN), an airline based in Libreville, Gabon
- Gamers Nexus, an online computer journalism organization
- Gendarmerie Nationale (disambiguation), any of several national police forces
- Gente Nueva, a Mexican criminal organization
- GN Store Nord, a Danish manufacturer
- GN (car), a British car company operating from 1910 to 1925
- Great Northern Railway (U.S.), a railway that ran from St. Paul to Seattle
- Guardia Nacional (disambiguation), a national guard or military in some Latin American nations
Music
- G. N. (album), a 1981 album by Gianna Nannini
- GN (album), a 2017 album by Ratboys
Places
- Guinea (ISO country code: GN), a nation in West Africa
- .gn, the Internet top-level domain for Guinea
Science and technology
- Graduate nurse
- Suzuki GN series, a range of motorcycles
- Giganewton, a metric unit
- Glomerulonephritis, a medical condition
- Grain (unit), a unit of mass
- Ground Network, former name of Near Earth Network
- Guide number, for an electronic camera flash
Other uses
- Gastronorm sizes, a set of food storage containers based on EN 631 standard
- Gn (digraph), a two-character combination in various languages
- Guarani language (ISO 639-1 code "gn")
- gn, abbreviation for guinea, a former British coin and currency unit
- "Good night", in the slang of the cryptocurrency community
- "Get naked", in urban slang
- "Green nose"
https://en.wikipedia.org/wiki/Words%20%28Unix%29
words is a standard file on Unix and Unix-like operating systems, and is simply a newline-delimited list of dictionary words. It is used, for instance, by spell-checking programs. The words file is usually stored in or . On Debian and Ubuntu, the file is provided by the package, or its provider packages , , etc. On Fedora and Arch Linux, the file is provided by the package. The package is sourced from data from the Moby Project, a public domain compilation of words. References External links Sample words file from Duke CS department
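Because the words file is just a newline-delimited word list, a minimal spell-checker only needs to load it into a set and test membership. The sketch below uses an in-memory sample in the same format rather than a real system path, since the file's location varies across distributions.

```python
# A tiny spell-checker over a newline-delimited word list, mimicking the
# format of the Unix words file. This small sample stands in for the real
# file, whose path differs between systems.
SAMPLE_WORDS_FILE = "apple\nbanana\ncherry\n"

def load_words(text):
    """Parse newline-delimited words into a set for O(1) lookups."""
    return set(text.split())

def misspelled(sentence, dictionary):
    """Return the words in the sentence that are not in the dictionary."""
    return [w for w in sentence.lower().split() if w not in dictionary]

words = load_words(SAMPLE_WORDS_FILE)
print(misspelled("apple bananna cherry", words))  # ['bananna']
```

A real program would read the system file instead of the sample string, but the lookup logic is the same.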
https://en.wikipedia.org/wiki/Series%20and%20parallel%20circuits
Two-terminal components and electrical networks can be connected in series or parallel. The resulting electrical network will have two terminals, and itself can participate in a series or parallel topology. Whether a two-terminal "object" is an electrical component (e.g. a resistor) or an electrical network (e.g. resistors in series) is a matter of perspective. This article will use "component" to refer to a two-terminal "object" that participates in the series/parallel networks. Components connected in series are connected along a single "electrical path", and each component has the same electric current through it, equal to the current through the network. The voltage across the network is equal to the sum of the voltages across each component. Components connected in parallel are connected along multiple paths, and each component has the same voltage across it, equal to the voltage across the network. The current through the network is equal to the sum of the currents through each component. The two preceding statements are equivalent, except for exchanging the role of voltage and current. A circuit composed solely of components connected in series is known as a series circuit; likewise, one connected completely in parallel is known as a parallel circuit. Many circuits can be analyzed as a combination of series and parallel circuits, along with other configurations. In a series circuit, the current that flows through each of the components is the same, and the voltage across the circuit is the sum of the individual voltage drops across each component. In a parallel circuit, the voltage across each of the components is the same, and the total current is the sum of the currents flowing through each component. Consider a very simple circuit consisting of four light bulbs and a 12-volt automotive battery.
If a wire joins the battery to one bulb, to the next bulb, to the next bulb, to the next bulb, then back to the battery in one continuous loop, the bulbs are said to be in series. If each bulb is wired to the battery in a separate loop, the bulbs are said to be in parallel. If the four light bulbs are connected in series, the same current flows through all of them and the voltage drop is 3 volts across each bulb, which may not be sufficient to make them glow. If the light bulbs are connected in parallel, the currents through the light bulbs combine to form the current in the battery, while the voltage drop is 12 volts across each bulb and they all glow. In a series circuit, every device must function for the circuit to be complete. If one bulb burns out in a series circuit, the entire circuit is broken. In parallel circuits, each light bulb has its own circuit, so all but one light could be burned out, and the last one will still function. Series circuits Series circuits are sometimes referred to as current-coupled or daisy chain-coupled. The current in a series circuit goes through every component in the circuit. Therefore, all of the c
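The four-bulb example can be checked numerically. The sketch below treats each bulb as a fixed, equal resistance (6 ohms is an assumed value; a real bulb's resistance varies with filament temperature).

```python
def series_resistance(resistances):
    """Equivalent series resistance: R_eq = R1 + R2 + ..."""
    return sum(resistances)

def parallel_resistance(resistances):
    """Equivalent parallel resistance: 1/R_eq = 1/R1 + 1/R2 + ..."""
    return 1 / sum(1 / r for r in resistances)

v = 12.0           # automotive battery voltage from the example
bulbs = [6.0] * 4  # four identical bulbs; 6 ohms each is assumed

# Series: one loop, same current everywhere; the 12 V divides equally.
i = v / series_resistance(bulbs)  # 0.5 A through every bulb
print(i * bulbs[0])               # 3.0 V across each bulb, as in the text

# Parallel: each bulb sees the full 12 V; branch currents add in the battery.
print(sum(v / r for r in bulbs))  # 8.0 A drawn from the battery
```

Ohm's law applied to either topology reproduces the article's figures: 3 volts per bulb in series, the full 12 volts per bulb in parallel.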
https://en.wikipedia.org/wiki/AMC
AMC may refer to:
Film and television
- AMC Theatres, an American movie theater chain
- AMC Networks, an American entertainment company
- AMC (TV channel)
- AMC+, streaming service
- AMC Networks International, an entertainment company
- AMC (Asian TV channel), TV channel
- AMC (European TV channel), TV channel
- AMC (African and Middle Eastern TV channel), TV channel
Other
- Australian Multiplex Cinemas
- All My Children, a TV series
Education
- Academia Mexicana de Ciencias, a Mexican education organization
- American Mathematics Competitions
- Andhra Medical College, Andhra Pradesh, India
- Archif Menywod Cymru, Welsh organisation also known as Women's Archive Wales
- Army Medical College, Punjab, Pakistan
- Australian Maritime College, Launceston, Tasmania
- Australian Mathematics Competition
- Ayub Medical College, Khyber-Pakhtunkhwa, Pakistan
- AMC Institutions, Bangalore, India
Finance
- Ameriquest Mortgage Company
- Association management company
- Agricultural Mortgage Corporation
- Annual Management Charge, see Total expense ratio
- Asset management company
- Actively managed certificates, a type of exchange-traded product
- China Asset Management, or ChinaAMC
Medicine
- Advance market commitment, a government guarantee to buy a medicine if developed
- Amoxicillin, an antibiotic
- Amylmetacresol, an antiseptic used in throat lozenges
- Arab Medical Center, Amman, Jordan
- Arthrogryposis multiplex congenita, a congenital disorder
- Atlantic Modal Cluster, a set of haplotypes
- Australian Medical Council
- Academic Medical Center, the University of Amsterdam hospital in the Netherlands
Military
- AMC 34, a French tank used prior to World War II
- AMC 35, a French tank used during World War II
- Air Materiel Command of the U.S. Air Force
- Air Mobility Command, a major command of the U.S. Air Force
- Allied Military Currency, used during World War II
- Armed Merchant Cruiser, a type of UK Royal Navy ship
- Australian Maritime College, Launceston, Tasmania
- United States Army Materiel Command, a command of the U.S. Army
Music
- Adelaide Music Collective, founded by David Day (broadcaster)
- American Music Center, a non-profit organization
- American Music Club, a band
- Asian Music Circle, an organisation promoting Indian and other Asian art and culture
- Asian Music Circuit, a charitable organization
- Asian Music Chart, part of Official Charts Company
People
- A.M.C, British Drum-and-bass DJ and producer
Transport
- Air Malta, ICAO code AMC
- Airport Metro Connector, Los Angeles
- Amalgamated Motor Cycles Ltd, a British motorcycle manufacturer
- AMC Airlines
- American Motors Corporation, an American automobile company
Telecommunications
- Adaptive modulation and coding, in wireless communications
- AMC-3, a communications satellite launched 1997
- AMC-18, a communications satellite launched 2006
- Advanced Mezzanine Card, a telecommunications specification
- Albanian Mobile Communications, a telecommunications company
Other uses
- 7-Amino-4-methylcoumarin, a fluorochrome
- Advanced Mic
https://en.wikipedia.org/wiki/Rich%20Skrenta
Richard J. Skrenta Jr. (born June 6, 1967) is an American computer programmer and Silicon Valley entrepreneur who created the web search engine blekko. Biography Richard J. Skrenta Jr. was born in Pittsburgh on June 6, 1967. In 1982, at age 15, as a high school student at Mt. Lebanon High School, Skrenta wrote the Elk Cloner virus that infected Apple II computers. It is widely believed to have been one of the first large-scale self-spreading personal computer viruses ever created. In 1989, Skrenta graduated with a B.A. in computer science from Northwestern University. Between 1989 and 1991, Skrenta worked at Commodore Business Machines with Amiga Unix. In 1989, Skrenta started working on a multiplayer simulation game. In 1994, it was launched under the name Olympia as a pay-for-play PBEM game by Shadow Island Games. Between 1991 and 1995, Skrenta worked at Unix System Labs, and from 1996 to 1998 with IP-level encryption at Sun Microsystems. He later left Sun and became one of the founders of DMOZ. He stayed on board after the Netscape acquisition, and continued to work on the directory as well as Netscape Search, AOL Music, and AOL Shopping. After his stint at AOL, Skrenta went on to cofound Topix LLC, a Web 2.0 company in the news aggregation and forums market. In 2005, Skrenta and his fellow cofounders sold a 75% share of Topix to a newspaper consortium made up of Tribune, Gannett, and Knight Ridder. In the late 2000s, Skrenta headed the startup company Blekko Inc, which was an Internet search engine. Blekko received early investment support from Marc Andreessen and began public beta testing on November 1, 2010. In 2015, IBM acquired both the Blekko company and search engine for their Watson computer system. Skrenta was involved in the development of VMS Monster, an old MUD for VMS. VMS Monster was part of the inspiration for TinyMUD. He is also known for his role in developing TASS, an ancestor of tin, the popular threaded Usenet newsreader for Unix systems. 
References External links Skrenta.com
https://en.wikipedia.org/wiki/Game%20port
The game port is a device port that was found on IBM PC compatible and other computer systems throughout the 1980s and 1990s. It was the traditional connector for joystick input, and occasionally MIDI devices, until made obsolete by USB in the late 1990s. Originally located on a dedicated Game Control Adapter expansion card, the game port was later integrated with PC sound cards, and still later on the PC's motherboard. During the transition to USB, many input devices used the game port and a USB adapter dongle was included for systems without a game port. History Pre-IBM game ports At the time IBM was developing its game port, there was no industry standard for controller ports, although the Atari joystick port was close. It was introduced in 1977 with the Atari Video Computer System, and was later used on the VIC-20 (1980), Commodore 64 (1982), and Amstrad's PC1512 (1986). In contrast with the IBM design, the Atari port was primarily designed for digital inputs (including a pair of two-axis/four-contact digital joysticks, each with a single pushbutton trigger). Its only analog connections were intended for paddles -- although, as there were two analog inputs per port, each port could theoretically support a two-axis analog joystick, touchpad, trackball, or mouse (some of these being eventually developed for Atari systems). The Apple II, BBC Micro, TRS-80 Color Computer, and other popular 8-bit machines all used different, incompatible, joysticks and ports. In most respects, the IBM design was similar to, or more advanced than, existing designs. Initial IBM PC type game ports The IBM PC game port first appeared during the initial launch of the original IBM PC in 1981, in the form of an optional US$55 expansion card known as the Game Control Adapter. The design allowed for four analog axes and four buttons on one port, allowing two joysticks or four paddles to be connected via a special "Y-splitter" cable. 
Originally available only as add-on that took up an entire slot, game ports remained relatively rare in the early days of the IBM PC, and most games used the keyboard as an input. IBM did not release a joystick of its own for the PC, which did not help. The most common device available was the Kraft joystick, originally developed for the Apple II but easily adapted to the IBM with the addition of another button on the back of the case. When IBM finally did release a joystick, for the IBM PCjr, it was a version of the Kraft stick. However, it connected to the computer using two incompatible 7-pin connectors, which were mechanically connected together as part of a larger multi-pin connector on the back of the machine. This eliminated the need for the Y-adapter. Adapters for Atari-style "digital" sticks were also common during this era. The game port became somewhat more common in the mid-1980s, as improving electronic density began to produce expansion cards with ever-increasing functionality. By 1983, it was common to see cards combinin
https://en.wikipedia.org/wiki/GEDCOM
GEDCOM, complete name FamilySearch GEDCOM, is a de facto open file format specification to store genealogical data, and import or export it between compatible genealogy software. GEDCOM is an acronym standing for Genealogical Data Communication. GEDCOM was developed by the Church of Jesus Christ of Latter-day Saints (LDS Church) as an aid to genealogical research. Most genealogy software supports importing from and exporting to GEDCOM format. As of version 7.0, a GEDCOM file is defined as UTF-8 encoded plain text. This file contains genealogical information about individuals such as names, events, and relationships; metadata links these records together. GEDCOM 7.0 is the first version to use semantic versioning, and is the most recent minor version of the specification. The predecessor to 7.0, GEDCOM 5.5.1, was released as a draft in 1999. It received only minor updates in the subsequent 20 years. The lack of updates to the standard and deficiencies in its capabilities led some genealogy programs to add proprietary extensions to the format, such as the GEDCOM 5.5 EL (Extended Locations) specification, which are not always recognized by other genealogy programs. Other standards, such as GEDCOM X, have been suggested as complete replacements for GEDCOM. GEDCOM 5.5.1 final, released in 2019, remains the industry's format standard for the exchange of genealogical data. With the release of GEDCOM 7.0 in 2021, however, a push is underway to see 7.0 adopted. FamilySearch intends to be GEDCOM 7.0 compatible in Quarter 3 of 2022, and Ancestry.com has 7.0 compatibility on its roadmap but has not yet specified an implementation date. FamilySearch GEDCOM has a GitHub repository. Model GEDCOM uses a lineage-linked data model, with emphasis on the nuclear family and the individuals (children) produced by that family.
These historical goals are described in the 7.0 specification document, "The FAM record was originally structured to represent families where a male HUSB (husband or father) and female WIFE (wife or mother) produce CHIL (children)." The document goes on to say that these record types may be used more flexibly to reflect different family concepts. "The FAM record may also be used for cultural parallels to this, including nuclear families, marriage, cohabitation, fostering, adoption, and so on, regardless of the gender of the partners...The individuals pointed to by the HUSB and WIFE are collectively referred to as 'partners', 'parents' or 'spouses'." File structure A GEDCOM file consists of a header section, records, and a trailer section. Within these sections, records represent people (INDI record), families (FAM records), sources of information (SOUR records), and other miscellaneous records, including notes. Every line of a GEDCOM file begins with a level number where all top-level records (HEAD, TRLR, SUBN, and each INDI, FAM, OBJE, NOTE, REPO, SOUR, and SUBM) begin with a line with level 0, while other level number
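The level-number structure described above can be seen in a minimal fragment and parsed with a few lines of code. The individual shown (`@I1@`, Jane Doe) is invented for illustration; the grouping function is a sketch, not a full GEDCOM parser.

```python
# A minimal GEDCOM-style fragment: every line starts with a level number;
# level 0 opens a top-level record (HEAD, INDI, TRLR) and deeper levels
# nest under the most recent shallower line.
SAMPLE = """0 HEAD
1 GEDC
2 VERS 7.0
0 @I1@ INDI
1 NAME Jane /Doe/
0 TRLR"""

def records(text):
    """Group lines into top-level records using their level numbers."""
    recs = []
    for line in text.splitlines():
        level, _, rest = line.partition(" ")
        if level == "0":
            recs.append([rest])     # a new top-level record begins
        else:
            recs[-1].append(rest)   # nested line belongs to the last record
    return recs

for rec in records(SAMPLE):
    print(rec[0])  # HEAD, @I1@ INDI, TRLR
```

Real files carry many more record types (FAM, SOUR, NOTE, and so on), but all follow this same level-prefixed line discipline between the HEAD and TRLR records.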
https://en.wikipedia.org/wiki/Computational%20neuroscience
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system. Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous. The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field. Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, machine learning, artificial neural networks, artificial intelligence and computational learning theory; although mutual inspiration exists and sometimes there is no strict limit between fields, with model abstraction in computational neuroscience depending on research scope and the granularity at which biological entities are analyzed. Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments. History The term 'computational neuroscience' was introduced by Eric L. 
Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the book Computational Neuroscience. The first of the annual open international meetings focused on Computational Neuroscience was organized by James M. Bower and John Miller in San Francisco, California in 1989. The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at the California Institute of Technology in 1985. The early historical roots of the field can be traced to the work of people including Louis Lapicque, Hodgkin & Huxley, Hubel and Wiesel, and David Marr. Lapicque introduced the integrate and fire model of the neuron in a seminal article published in 1907, a model still popular for artificial neural networks studies because of its simplicity (see a recent review). About 40 years
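Lapicque's integrate-and-fire model mentioned above remains simple enough to sketch in a few lines. The sketch below is a leaky integrate-and-fire variant with Euler integration; all parameter values are arbitrary illustrative choices, not fitted to any real neuron.

```python
def simulate_lif(current, steps, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dV/dt = (-V + I) / tau.

    The membrane potential V integrates the input current I while leaking
    toward rest; on crossing the threshold it emits a spike and resets.
    Parameter values here are arbitrary illustrative choices.
    """
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-v + current) / tau  # Euler step of the leaky integration
        if v >= v_thresh:               # threshold crossing: fire a spike
            spikes.append(t)
            v = v_reset                 # reset after the spike
    return spikes

# A constant suprathreshold current makes the model fire periodically:
print(simulate_lif(current=2.0, steps=50))  # [6, 13, 20, 27, 34, 41, 48]
```

This regular firing under constant drive, with the rate set by the input strength and the membrane time constant, is the behavior Lapicque described in 1907 and the reason the model is still used in artificial neural network studies.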
https://en.wikipedia.org/wiki/Wayland
Wayland may refer to:

Computers
Wayland (protocol), a graphical display system for Unix-like computers

Fiction
Jace Wayland, a character in the Mortal Instruments book series
Wayland (Star Wars), a planet in the Star Wars fictional universe
Turk Wayland, in the Rennie Stride mystery series by Patricia Kennealy-Morrison

Music
Wayland (band), a US rock music band

Mythology and folklore
Wayland the Smith, figure from northern European folklore

Places
United Kingdom
HM Prison Wayland, Norfolk
Wayland, Norfolk
Wayland Wood, near Watton, Norfolk
Wayland Rural District, merged into Breckland District, Norfolk, UK
Wayland's Smithy, a Neolithic site in the UK
United States
Wayland, Iowa
Wayland, Kentucky
Wayland, Massachusetts
Wayland, Michigan
Wayland, Missouri
Wayland, New York
Wayland (village), New York
Wayland, Ohio
Wayland Baptist University (Alaska)
Wayland Baptist University (Texas)
Wayland Seminary, the Washington, D.C. school of the National Theological Institute
Wayland Township, Michigan, which borders the city in Allegan County
Wayland Township, Chariton County, Missouri

People
Given name
Wayland Flowers (1939–1988), American puppeteer
Wayland Young (1923–2009), British writer and SDP and Labour Party politician
Wayland Becker (1910–1984), American football player
Wayland Dean (1902–1930), Major League Baseball pitcher
Wayland Drew (1932–1998), writer born in Oshawa, Ontario
Wayland Hand (1907–1986), American folklorist
Wayland Holyfield (born 1942), prominent American songwriter
Wayland Hoyt (1838–1910), American Baptist minister and author
Wayland Minot (1889–1957), American football player
Wayland Maxfield Parrish (1887–?), writer
Wayland Tunley (1937–2012), British architect
Surname
Francis Wayland (1796–1865), American Baptist educationist and former president of Brown University
John Wayland (1849–1890), President of the Chico Board of Trustees, the governing body of Chico, California from 1889 to 1890
Julius Wayland (1854–1912), US socialist
Susan Wayland (born 1980), German fashion model who features in adult photography
Tom Wayland (born 1974), American voice actor
April Halprin Wayland (born 1954), American children's and young adult author, poet, and teacher
Hank Wayland (1906–1983), American swing jazz double-bassist
Newton Wayland (1940–2013), American orchestral conductor, arranger, composer and keyboardist
William Wayland (1869–1950), English Conservative Party politician

See also
Wayland Academy (disambiguation)
Waylon (disambiguation)
Wieland (disambiguation)
Weiland (disambiguation)
Weyland (disambiguation)
Wyland (disambiguation)
https://en.wikipedia.org/wiki/Soulseek
Soulseek is a peer-to-peer (P2P) file-sharing network and application, used mostly to exchange music. It was created by Nir Arbel, an Israeli programmer from Safed. The current Soulseek network is the second to have been in operation, both run by the same management. The older network, used up to version 156 of the client, was shut down after usage dropped to almost nothing. Version 157 of the client was the last for Microsoft Windows only, and work on it ceased in 2008. Its replacement, SoulseekQt, is available for Windows, macOS, and Linux. SoulseekQt has slightly different functionality compared to the 157 client interface. Key features Content As a peer-to-peer (P2P) file-sharing program, the accessible content is determined by the users of the Soulseek client, and what files they choose to share. The network has historically had a diverse mix of music, including underground and independent artists, unreleased music such as demos and mixtapes, bootlegs, live tracks, and live DJ sets, but releases from major and independent labels can also be found. Central server Soulseek depends on a pair of central servers. One server supports the original client and network (version 156), with the other supporting the newer network (serving clients 157 and Qt). While these central servers are key to coordinating searches and hosting chat rooms, they do not actually play a part in the transfer of files between users, which takes place directly between the users concerned (see Single source downloads below). Searching Users can search for items; the results returned are a list of files whose names match the search term used. Searches may be explicit or may use wildcards/patterns or terms to be excluded. For example, searching for blue suede -shoes will return a list of files whose names contain the strings blue and suede, but files containing the string shoes in their names will be excluded.
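The include/exclude search syntax described above can be illustrated with a short sketch. Soulseek's actual matching algorithm runs on the server and is not published, so the function below is only a plausible reading of the documented behaviour:

```python
# Sketch of Soulseek-style search matching: every plain term must appear in
# the shared file's path, and no term prefixed with '-' may appear.
# This is an illustration of the documented syntax, not the real algorithm.

def matches(query, path):
    """Case-insensitive include/exclude matching of a query against a path."""
    haystack = path.lower()
    for term in query.lower().split():
        if term.startswith("-"):
            if term[1:] in haystack:
                return False     # excluded term present -> reject
        elif term not in haystack:
            return False         # required term missing -> reject
    return True

print(matches("blue suede -shoes", "music/blues/blue_suede_mix.mp3"))  # True
print(matches("blue suede -shoes", "music/blue_suede_shoes.mp3"))      # False
```

Note that matching against the whole path, not just the file name, is what makes the folder-name searches described below possible.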
A feature specific to the Soulseek search engine is the inclusion of folder names and file paths in the search list. This allows users to search by folder name. For example, typing in experimental will return all the files contained in folders having that name, providing quick access to bands and albums in a given musical genre. The list of search results shows details such as the full name and path of the file, its size, the user who is hosting the file, together with that user's average transfer rate, and brief details about the encoded track itself, such as bit rate, length, etc. The resulting search list may then be sorted in a variety of ways and individual files (or folders) chosen for download. The Soulseek protocol search algorithms are not published, as those algorithms run on the server. Single source (one-to-one) downloads Soulseek does not support multi-source downloading or "swarming" like other post-Napster clients, and must fetch a requested file from a single source. (By contrast, swarming allows a requested
https://en.wikipedia.org/wiki/Off-by-one%20error
An off-by-one error or off-by-one bug (known by acronyms OBOE, OBO, OB1 and OBOB) is a logic error that involves a numerical value incorrectly bigger or smaller by one. It often occurs in computer programming when a loop iterates one time too many or too few. Such a problem arises, for instance, when a programmer writes a non-strict inequality (≤) in a terminating condition where a strict inequality (<) should have been used (or vice versa). Off-by-one errors also stem from confusion over zero-based numbering. An off-by-one error can sometimes appear in a mathematical context. Cases Looping over arrays Consider an array of items, where items m through n (inclusive) are to be processed. How many items are there? An intuitive answer may be n − m, but that is off by one, exhibiting a fencepost error; the correct answer is n − m + 1. For this reason, ranges in computing are often represented by half-open intervals; the range from m to n (inclusive) is represented by the range from m (inclusive) to n + 1 (exclusive) to avoid fencepost errors. For example, a loop that iterates five times (from 0 to 4 inclusive) can be written as a half-open interval from 0 to 5: for (index = 0; index < 5; index++) { /* Body of the loop */ } The loop body is executed first of all with index equal to 0; then index becomes 1, 2, 3, and finally 4 on successive iterations. At that point, index becomes 5, so index < 5 is false and the loop ends. However, if the comparison used were <= (less than or equal to), the loop would be carried out six times: index takes the values 0, 1, 2, 3, 4, and 5. Likewise, if index were initialized to 1 rather than 0, there would only be four iterations: index takes the values 1, 2, 3, and 4. Both of these alternatives can cause off-by-one errors. Another such error can occur if a do-while loop is used in place of a while loop (or vice versa). A do-while loop is guaranteed to run at least once. Array-related confusion may also result from differences in programming languages.
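The same half-open convention can be seen in Python, whose built-in range() is half-open by construction, which makes the n − m + 1 rule easy to verify:

```python
# Counting items m..n inclusive with a half-open range: range(m, n + 1).
# The naive answer n - m is off by one; the correct count is n - m + 1.

def count_inclusive(m, n):
    """Number of items from m through n inclusive."""
    return len(range(m, n + 1))   # half-open upper bound avoids the error

print(count_inclusive(0, 4))  # 5 iterations, like `for (index = 0; index < 5; index++)`
print(count_inclusive(3, 7))  # 5 items: 3, 4, 5, 6, 7 (7 - 3 = 4 would be off by one)
```

Because the upper bound is exclusive, the length of range(m, n + 1) is exactly (n + 1) − m, which is the n − m + 1 of the text.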
Numbering from 0 is most common, but some languages start array numbering with 1. Pascal has arrays with user-defined indices. This makes it possible to model the array indices after the problem domain. Fencepost error A fencepost error (occasionally called a telegraph pole, lamp-post, or picket fence error) is a specific type of off-by-one error. An early description of this error appears in the works of Vitruvius. The following problem illustrates the error: if you build a straight fence 30 metres long with posts spaced 3 metres apart, how many posts do you need? The common answer of 10 posts is wrong. This response comes from dividing the length of the fence by the spacing between posts, with the quotient being erroneously classified as the number of posts. In actuality, the fence has 10 sections and 11 posts. In this scenario, a fence with n sections will have n + 1 posts. Conversely, if the fence contains n posts, it will contain n − 1 sections. This relationship is important to consider when dealing with the reverse error. The reverse error occurs when the number of posts is known and the number of secti
https://en.wikipedia.org/wiki/ABAP
ABAP (Advanced Business Application Programming, originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report preparation processor") is a high-level programming language created by the German software company SAP SE. It is currently positioned, alongside Java, as the language for programming the SAP NetWeaver Application Server, which is part of the SAP NetWeaver platform for building business applications. Introduction ABAP is one of the many application-specific fourth-generation languages (4GLs) first developed in the 1980s. It was originally the report language for SAP R/2, a platform that enabled large corporations to build mainframe business applications for materials management and financial and management accounting. ABAP used to be an abbreviation of Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report preparation processor", but was later renamed to the English Advanced Business Application Programming. ABAP was one of the first languages to include the concept of Logical Databases (LDBs), which provide a high level of abstraction from the underlying database level(s). The ABAP language was originally used by developers to develop the SAP R/3 platform. It was also intended to be used by SAP customers to enhance SAP applications – customers can develop custom reports and interfaces with ABAP programming. The language was geared towards more technical customers with programming experience. ABAP remains the language for creating programs for the client–server R/3 system, which SAP first released in 1992. As computer hardware evolved through the 1990s, more and more of SAP's applications and systems were written in ABAP. By 2001, all but the most basic functions were written in ABAP. In 1999, SAP released an object-oriented extension to ABAP called ABAP Objects, along with R/3 release 4.6. SAP's current development platform NetWeaver supports both ABAP and Java.
ABAP has an abstraction between the business applications, the operating system and database. This ensures that applications do not depend directly upon a specific server or database platform and can easily be ported from one platform to another. SAP Netweaver currently runs on UNIX (AIX, HP-UX, Solaris, Linux), Microsoft Windows, i5/OS on IBM System i (formerly iSeries, AS/400), and z/OS on IBM System z (formerly zSeries, S/390). Supported databases are HANA, SAP ASE (formerly Sybase), IBM Db2, Informix, MaxDB, Oracle, and Microsoft SQL Server (support for Informix was discontinued in SAP Basis release 7.00). ABAP Runtime Environment All ABAP programs reside inside the SAP database. They are not stored in separate external files like Java or C++ programs. In the database all ABAP code exists in two forms: source code, which can be viewed and edited with the ABAP Workbench tools; and generated code, a binary representation somewhat comparable with Java bytecode. ABAP programs execute
https://en.wikipedia.org/wiki/Mixin
In object-oriented programming languages, a mixin (or mix-in) is a class that contains methods for use by other classes without having to be the parent class of those other classes. How those other classes gain access to the mixin's methods depends on the language. Mixins are sometimes described as being "included" rather than "inherited". Mixins encourage code reuse and can be used to avoid the inheritance ambiguity that multiple inheritance can cause (the "diamond problem"), or to work around lack of support for multiple inheritance in a language. A mixin can also be viewed as an interface with implemented methods. This pattern is an example of enforcing the dependency inversion principle. History Mixins first appeared in Symbolics's object-oriented Flavors system (developed by Howard Cannon), which was an approach to object-orientation used in Lisp Machine Lisp. The name was inspired by Steve's Ice Cream Parlor in Somerville, Massachusetts: The owner of the ice cream shop offered a basic flavor of ice cream (vanilla, chocolate, etc.) and blended in a combination of extra items (nuts, cookies, fudge, etc.) and called the item a "mix-in", his own trademarked term at the time. Definition Mixins are a language concept that allows a programmer to inject some code into a class. Mixin programming is a style of software development, in which units of functionality are created in a class and then mixed in with other classes. A mixin class acts as the parent class, containing the desired functionality. A subclass can then inherit or simply reuse this functionality, but not as a means of specialization. Typically, the mixin will export the desired functionality to a child class, without creating a rigid, single "is a" relationship. 
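A minimal sketch of the idea in Python (the class names here are illustrative): a mixin supplies ready-made methods to any class that includes it, without acting as that class's conceptual parent:

```python
# ComparableMixin supplies the rich comparisons to any class that defines
# __lt__, without being that class's conceptual parent. Names are illustrative.

class ComparableMixin:
    # Behaviour implemented once, mixed into many unrelated classes.
    def __gt__(self, other):
        return other < self
    def __le__(self, other):
        return not (other < self)
    def __ge__(self, other):
        return not (self < other)

class Version(ComparableMixin):
    def __init__(self, major, minor):
        self.key = (major, minor)
    def __lt__(self, other):      # the one method the mixin builds upon
        return self.key < other.key

print(Version(1, 2) <= Version(1, 3))   # True, via the mixin's __le__
```

A Version "is not a kind of" ComparableMixin in any domain sense; the mixin merely contributes functionality, which is exactly the distinction from ordinary inheritance drawn below. (Python's standard library offers functools.total_ordering for this particular case.)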
Here lies the important difference between the concepts of mixins and inheritance: the child class can still inherit all the features of the parent class, but the semantics of the child "being a kind of" the parent need not necessarily apply. Advantages
It provides a mechanism for multiple inheritance by allowing one class to use common functionality from multiple classes, but without the complex semantics of multiple inheritance.
Code reusability: Mixins are useful when a programmer wants to share functionality between different classes. Instead of repeating the same code over and over again, the common functionality can simply be grouped into a mixin and then included into each class that requires it.
Mixins allow inheritance and use of only the desired features from the parent class, not necessarily all of the features from the parent class.
Implementations In Simula, classes are defined in a block in which attributes, methods and class initialization are all defined together; thus all the methods that can be invoked on a class are defined together, and the definition of the class is complete. In Flavors, a mixin is a class from which another class can inherit slot definitions and methods. The mi
https://en.wikipedia.org/wiki/SCTV
SCTV may refer to:
SCTV (TV network), an Indonesian television network
Second City Television, a Canadian sketch comedy television program
Sichuan Radio and Television, a Chinese television station
Seven Regional, formerly Southern Cross Television, a television station throughout regional Australia
South Coast Television, formerly South Coast Community Television, a deflector and digital TV service in County Cork, Ireland
https://en.wikipedia.org/wiki/Bruun%27s%20FFT%20algorithm
Bruun's algorithm is a fast Fourier transform (FFT) algorithm based on an unusual recursive polynomial-factorization approach, proposed for powers of two by G. Bruun in 1978 and generalized to arbitrary even composite sizes by H. Murakami in 1996. Because its operations involve only real coefficients until the last computation stage, it was initially proposed as a way to efficiently compute the discrete Fourier transform (DFT) of real data. Bruun's algorithm has not seen widespread use, however, as approaches based on the ordinary Cooley–Tukey FFT algorithm have been successfully adapted to real data with at least as much efficiency. Furthermore, there is evidence that Bruun's algorithm may be intrinsically less accurate than Cooley–Tukey in the face of finite numerical precision (Storn, 1993). Nevertheless, Bruun's algorithm illustrates an alternative algorithmic framework that can express both itself and the Cooley–Tukey algorithm, and thus provides an interesting perspective on FFTs that permits mixtures of the two algorithms and other generalizations. A polynomial approach to the DFT Recall that the DFT is defined by the formula: X_k = Σ_{n=0}^{N−1} x_n e^{−2πi nk/N}, for k = 0, ..., N − 1. For convenience, let us denote the N roots of unity by ω_N^n (n = 0, ..., N − 1): ω_N^n = e^{−2πi n/N}, and define the polynomial x(z) whose coefficients are x_n: x(z) = Σ_{n=0}^{N−1} x_n z^n. The DFT can then be understood as a reduction of this polynomial; that is, X_k is given by: X_k = x(z) mod (z − ω_N^k) = x(ω_N^k), where mod denotes the polynomial remainder operation. The key to fast algorithms like Bruun's or Cooley–Tukey comes from the fact that one can perform this set of N remainder operations in recursive stages. Recursive factorizations and FFTs In order to compute the DFT, we need to evaluate the remainder of x(z) modulo N degree-1 polynomials as described above. Evaluating these remainders one by one is equivalent to evaluating the usual DFT formula directly, and requires O(N²) operations.
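The polynomial view can be checked numerically: the remainder of x(z) modulo a degree-1 divisor (z − ω) is the constant x(ω), so taking the N remainders reproduces the DFT. The following sketch demonstrates that identity (it is not Bruun's algorithm itself, only the underlying polynomial picture):

```python
# Check that the DFT equals the N remainders of x(z) modulo (z - w_k),
# i.e. the evaluations x(w_k) at the N roots of unity.

import cmath

def dft_direct(x):
    """The usual O(N^2) DFT summation."""
    n_total = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / n_total)
                for n in range(n_total))
            for k in range(n_total)]

def dft_via_remainders(x):
    """DFT as x(z) mod (z - w_k): the remainder modulo a degree-1
    polynomial is just the polynomial evaluated at its root."""
    n_total = len(x)
    roots = [cmath.exp(-2j * cmath.pi * k / n_total) for k in range(n_total)]
    def poly_eval(coeffs, z):       # Horner evaluation of sum(coeffs[n] * z**n)
        result = 0
        for c in reversed(coeffs):
            result = result * z + c
        return result
    return [poly_eval(x, w) for w in roots]

x = [1.0, 2.0, 3.0, 4.0]
print(all(abs(a - b) < 1e-9
          for a, b in zip(dft_direct(x), dft_via_remainders(x))))  # True
```

The fast algorithms described next differ only in how they organize these N remainder operations into recursive stages instead of performing them one by one.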
However, one can combine these remainders recursively to reduce the cost, using the following trick: if we want to evaluate x(z) modulo two polynomials U(z) and V(z), we can first take the remainder modulo their product U(z)·V(z), which reduces the degree of the polynomial x(z) and makes subsequent modulo operations less computationally expensive. The product of all of the monomials (z − ω_N^k) for k = 0, ..., N − 1 is simply z^N − 1 (whose roots are clearly the N roots of unity). One then wishes to find a recursive factorization of z^N − 1 into polynomials of few terms and smaller and smaller degree. To compute the DFT, one takes x(z) modulo each level of this factorization in turn, recursively, until one arrives at the monomials and the final result. If each level of the factorization splits every polynomial into an O(1) (constant-bounded) number of smaller polynomials, each with an O(1) number of nonzero coefficients, then the modulo operations for that level take O(N) time; since there will be a logarithmic number of levels, the overall complexity is O(N log N). More explicitly, suppose for example that z^N − 1 = F_1(z) F_2(z) F_3(z), and that F_1(z) = F_{1,1}(z) F_{1,2}(z), and so on. The corresponding FFT a
https://en.wikipedia.org/wiki/Bit%20rate
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time. The bit rate is expressed in the unit bit per second (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s). The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second. In most computing and digital communication environments, one byte per second (symbol: B/s) corresponds to 8 bit/s. Prefixes When quantifying large or small bit rates, SI prefixes (also known as metric prefixes or decimal prefixes) are used. Binary prefixes are sometimes used for bit rates. The International Standard (IEC 80000-13) specifies different abbreviations for binary and decimal (SI) prefixes (e.g., 1 KiB/s = 1024 B/s = 8192 bit/s, and 1 MiB/s = 1024 KiB/s). In data communications Gross bit rate In digital communication systems, the physical layer gross bitrate, raw bitrate, data signaling rate, gross data transfer rate or uncoded transmission rate (sometimes written as a variable Rb or fb) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead. In the case of serial communication, the gross bit rate is related to the bit transmission time T_b as: R = 1/T_b. The gross bit rate is related to the symbol rate or modulation rate, which is expressed in bauds or symbols per second. However, the gross bit rate and the baud value are equal only when there are only two levels per symbol, representing 0 and 1, meaning that each symbol of a data transmission system carries exactly one bit of data; for example, this is not the case for modern modulation systems used in modems and LAN equipment.
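The relationship between symbol rate and gross bit rate can be sketched with a small helper (the numbers below are illustrative): a symbol drawn from M distinct states carries log2(M) bits.

```python
# Gross bit rate from symbol rate: a symbol with M distinguishable states
# (voltage levels, phases, constellation points...) carries log2(M) bits,
# so R = f_s * log2(M). Example values are illustrative.

from math import log2

def gross_bit_rate(symbol_rate_baud, num_symbol_states):
    """Bits per second given the baud rate and the symbol alphabet size."""
    return symbol_rate_baud * log2(num_symbol_states)

# Two-level signalling: the bit rate equals the baud rate.
print(gross_bit_rate(9_600, 2))    # 9600.0 bit/s
# 64-QAM carries log2(64) = 6 bits per symbol at the same baud rate.
print(gross_bit_rate(9_600, 64))   # 57600.0 bit/s
```

This makes concrete the caveat above: bit rate and baud rate coincide only in the two-level case.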
For most line codes and modulation methods, the gross bit rate is the symbol rate multiplied by the number of bits carried per symbol. More specifically, a line code (or baseband transmission scheme) representing the data using pulse-amplitude modulation with M different voltage levels can transfer log2(M) bits per pulse. A digital modulation method (or passband transmission scheme) using M different symbols, for example M amplitudes, phases or frequencies, can transfer log2(M) bits per symbol. This results in: gross bit rate = symbol rate × log2(M). An exception from the above is some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in: gross bit rate = pulse rate / 2. A theoretical upper bound for the symbol rate f_s (in baud, symbols/s or pulses/s) for a certain spectral bandwidth B in hertz is given by the Nyquist law: f_s ≤ 2B. In practice this upper bound can only be approached for line coding schemes and for so-called vestigial sideband digital modulation. Most other digital carrier-modulated schemes, for example ASK, PSK, QAM and OFDM, can be characterized as double sideband modulation, resulting in the f
https://en.wikipedia.org/wiki/ATSC%20standards
Advanced Television Systems Committee (ATSC) standards are an American set of standards for digital television transmission over terrestrial, cable and satellite networks. It is largely a replacement for the analog NTSC standard and, like that standard, is used mostly in the United States, Mexico, Canada, South Korea and Trinidad & Tobago. Several former NTSC users, such as Japan, have not used ATSC during their digital television transition, because they adopted other systems such as ISDB developed by Japan, and DVB developed in Europe, for example. The ATSC standards were developed in the early 1990s by the Grand Alliance, a consortium of electronics and telecommunications companies that assembled to develop a specification for what is now known as HDTV. The standard is now administered by the Advanced Television Systems Committee. It includes a number of patented elements, and licensing is required for devices that use these parts of the standard. Key among these is the 8VSB modulation system used for over-the-air broadcasts. ATSC technology was primarily developed with patent contributions from LG Electronics, which holds most of the patents for the ATSC standard. ATSC includes two primary high definition video formats, 1080i and 720p. It also includes standard-definition formats, although initially only HDTV services were launched in the digital format. ATSC can carry multiple channels of information on a single stream, and it is common for there to be a single high-definition signal and several standard-definition signals carried on a single 6 MHz (former NTSC) channel allocation. Background The high-definition television standards defined by the ATSC produce widescreen 16:9 images up to 1920×1080 pixels in size, more than six times the display resolution of the earlier standard. However, many different image sizes are also supported.
The reduced bandwidth requirements of lower-resolution images allow up to six standard-definition "subchannels" to be broadcast on a single 6 MHz TV channel. ATSC standards are marked A/x (x is the standard number) and can be downloaded for free from the ATSC's website at ATSC.org. ATSC Standard A/53, which implemented the system developed by the Grand Alliance, was published in 1995; the standard was adopted by the Federal Communications Commission in the United States in 1996. It was revised in 2009. ATSC Standard A/72 was approved in 2008 and introduces H.264/AVC video coding to the ATSC system. ATSC supports 5.1-channel surround sound using Dolby Digital's AC-3 format. Numerous auxiliary datacasting services can also be provided. Many aspects of ATSC are patented, including elements of the MPEG video coding, the AC-3 audio coding, and the 8VSB modulation. The cost of patent licensing, estimated at up to per digital TV receiver, had prompted complaints by manufacturers. As with other systems, ATSC depends on numerous interwoven standards, e.g., the EIA-708 standard for digital closed captioning, lea
https://en.wikipedia.org/wiki/PC%20bang
A PC bang (PC방) is a type of internet cafe or LAN gaming center in South Korea. Patrons can use computers, often to play video games in person with friends, for an hourly fee. Although the per capita penetration of personal computers and broadband internet access in South Korea is one of the highest in the world, PC bangs remain popular as they provide a social meeting place for gamers (especially school-aged gamers) to play together with their peers. Aside from the social aspect, the PC bangs' ability to offer access to expensive and powerful high-end personal computers (better known as gaming PCs), designed specifically for video gaming, at a comparatively low price has also bolstered their popularity. History The origin of the PC bang starts with the '전자카페' ('Jeonja Kape', which literally translates to 'Electronic Cafe'), opened in South Korea in March 1988 and closed in 1991. The original creators of the '전자카페', Ahn Sang-soo (professor at Hongik University) and Gum Nu-ri (professor at Kookmin University), launched this electronic cafe next to Hongik University. At the time, people were able to use two 16-bit computers, which were connected by a telephone line. However, it was known only to locals and not yet widely known. In April 1994, the first Internet cafe was opened. Jung Min-Ho founded the first public Internet cafe, named BNC, in Seocho District (서초구). It gained immense popularity, a first for this type of cafe. From 1988 to 1993, the press had labeled such cafes as "electronic cafe"; however, after the opening of BNC, labels such as "modem cafe", "network cafe", and "cyber cafe" were introduced by the press. Industry The most played games in PC bangs are massively multiplayer online role-playing games, in which more than 100,000 people around the globe can play at the same time. PC bangs rose to popularity following the release of the PC game StarCraft in 1998.
At the time South Korea had a thriving computer industry, with Internet use reaching over 50% of the population. An estimated 25 million citizens were using the Internet, and 14.4 million Korean homes were equipped with Internet access. Accompanying this high rate of home Internet access, the number of PC bangs is estimated to have grown from 100 to 25,000 between 1997 and 2011. Many popular South Korean multiplayer games provide players with incentives which encourage them to play from a PC bang. For example, the Nexon games Kart Rider and BnB reward players with bonus "Lucci" — the games' virtual currencies — when they log on from a PC bang. Demographics Although PC bangs are used by all ages and genders, they are most popular with male gamers in their teens and twenties. Throughout the day, the demographics of the PC room change. Most PC rooms are open 24 hours. In the mornings, the primary type of user is an adult male between 30 and 50. In the afternoons, young males come in groups between 1 and 3 pm. This is when PC bangs are at their noisiest. Around dinner
https://en.wikipedia.org/wiki/Radio%20National
Radio National, known on-air as RN, is an Australia-wide public service broadcasting radio network run by the Australian Broadcasting Corporation (ABC). From 1947 until 1985, the network was known as ABC Radio 2. History 1937: Predecessors and beginnings From 1928, the National Broadcasting Service, as part of the federal Postmaster-General's Department, gradually took over responsibility for all the existing stations that were sponsored by public licence fees ("A" Class licences). The outsourced Australian Broadcasting Company supplied programs from 1929. In 1932 a commission was established, merging the original ABC company and the National Broadcasting Service. It is from this time that Radio National dates as a distinct network within the ABC, in which a system of program relays was developed during the subsequent decades to link stations spread across the nation. The beginnings of Radio National lie with Sydney radio station 2FC, which aired its first test broadcast on 5 December 1923 and officially went to air on 9 January 1924. 2FC stood for Farmer and Company, the original owner of the station before the ABC bought the station in 1937. The ABC then rolled out a national network across the country, somewhat similar in nature to the BBC National Programme. The origins of the other stations in the network were:
3AR Melbourne – 26 January 1924, "Associated Radio Company of Australia", organised by Esmond Laurence Kiernan and others.
5CL Adelaide – 20 November 1924, "Central Broadcasters Ltd".
7ZL Hobart – 17 December 1924.
4QG Brisbane – 27 July 1925, "Queensland Government" (operated by the Queensland Radio Service, an agency within the Office of the Chief Secretary).
6WN Perth – 5 October 1938, "Wanneroo".
2CY Canberra – 23 December 1938.
2NA Newcastle – 20 December 1943.
The first transmitters for 2FC, 5CL and 4QG were made by AWA with power of 5 kW (note that until about 1931 in Australia, transmitter powers were defined in terms of DC input to final amplifier, typically about 3 times that of the power into the antenna; thus power today would be stated as about 1.7 kW). They used a MT7A valve for the final high power RF stage and a MT7B for the modulator. The power supply was 12,000 volts from three phase power rectified by MR7 valves. 4QG commenced with a 500 Watt transmitter which continued for about 6 months until the 5 kW unit was commissioned. The radio transmitters for 3AR and 2FC were upgraded to 10 kW in a contract let in 1938 to STC. The transmitters were designed by Charles Strong in London, and were notable in using negative feedback to ensure a high quality flat frequency response. From 1947 until the mid-1980s, "Radio 2" (as it came to be known) was broadcast to the major metropolitan centres, with a large broadcast footprint in adjacent areas due to the powerful AM transmitters in use. It contained most of the ABC's national programming. The power level of 2FC and 3AR was upgraded to 50 kW in the early 1950s. The transm
https://en.wikipedia.org/wiki/Neurocognition
Neurocognitive functions are cognitive functions closely linked to the function of particular areas, neural pathways, or cortical networks in the brain, ultimately served by the substrate of the brain's neurological matrix (i.e. at the cellular and molecular level). Therefore, their understanding is closely linked to the practice of neuropsychology and cognitive neuroscience – two disciplines that broadly seek to understand how the structure and function of the brain relate to cognition and behaviour. A neurocognitive deficit is a reduction or impairment of cognitive function in one of these areas, but particularly when physical changes can be seen to have occurred in the brain, such as aging-related physiological changes or after neurological illness, mental illness, drug use, or brain injury. A clinical neuropsychologist may specialise in using neuropsychological tests to detect and understand such deficits, and may be involved in the rehabilitation of an affected person. The discipline that studies neurocognitive deficits to infer normal psychological function is called cognitive neuropsychology. Etymology The term neurocognitive is a recent addition to the nosology of clinical psychiatry and psychology. It was rarely used before the publication of the DSM-5, which updated the psychiatric classification of disorders listed in the "Delirium, Dementia, and Amnestic and Other Cognitive Disorders" chapter of the DSM-IV. Following the 2013 publication of the DSM-5, use of the term "neurocognitive" increased steadily. Adding the prefix "neuro-" to the word "cognitive" is an example of pleonasm because, analogous to expressions like "burning fire" and "black darkness," the prefix "neuro-" adds no further useful information to the term "cognitive". In the field of clinical neurology, clinicians continue using the simpler term "cognitive", due to the absence of evidence for human cognitive processes that do not involve the nervous system.
See also Cognition Cognitive neuropsychology Cognitive neuroscience Cognitive rehabilitation therapy Neurology Neuropsychology Neuropsychological test Neurotoxic Brain fog Hallucinogen persisting perception disorder Depersonalization Dementia Mild cognitive impairment Attention deficit hyperactivity disorder Concussions in sport References Further reading Green, K. J. (1998). Schizophrenia from a Neurocognitive Perspective. Boston, Allyn and Bacon. Cognition Cognitive neuroscience Neuropsychology
https://en.wikipedia.org/wiki/Rich%20client
In computer networking, a rich client (also called heavy, fat or thick client) is a computer (a "client" in client–server network architecture) that typically provides rich functionality independent of the central server. This kind of computer was originally known as just a "client" or "thick client," in contrast with "thin client", which describes a computer heavily dependent on a server's applications. A rich client may be described as having rich user interaction. While a rich client still requires at least a periodic connection to a network or central server, it is often characterised by the ability to perform many functions without a connection. In contrast, a thin client generally does as little processing as possible on the client, relying on access to the server each time input data needs to be processed or validated. Introduction The designer of a client–server application decides which parts of the task should be executed on the client, and which on the server. This decision can crucially affect the cost of clients and servers, the robustness and security of the application as a whole, and the flexibility of the design to later modification or porting. The characteristics of the user interface often force the decision on a designer. For instance, a drawing package could require download of an initial image from a server, and allow all edits to be made locally, returning the revised drawing to the server upon completion. This would require a rich client and might be characterised by a long delay to start and stop (while a whole complex drawing was transferred), but quick editing. Conversely, a thin client could download just the visible parts of the drawing at the beginning and send each change back to the server to update the drawing. This might be characterised by a short start-up time, but a tediously slow editing process. 
History The original server clients were simple text display terminals including Wyse VDUs, and rich clients were generally not used until the increase in PC usage. The original driving force for thin client computing was often cost; at a time when CRT terminals and PCs were relatively expensive, the thin-client–server architecture made it possible to deploy the desktop computing experience to many users. As PC prices decreased, combined with a drop in software licensing costs, rich client–server architectures became more attractive. For users, the rich client device provided a more responsive platform and often an improved graphical user interface (GUI) compared with what could be achieved in a thin client environment. In more recent years, the Internet has tended to drive the thin client model despite the prodigious processing power that a modern PC has available. Centrally hosted rich client applications Probably the thinnest clients, sometimes called "ultra thin," are remote desktop applications, e.g. the Citrix products, and Microsoft's Remote Desktop Services, which effectively allow applications to r
https://en.wikipedia.org/wiki/APM
APM, apm, or Apm may refer to: Technology Computer technology Active policy management, a discipline within enterprise software Advanced Power Management, a legacy technology in personal computers Apple Partition Map, computer disk partition scheme Application performance management, a discipline within systems management Other Accurate Pistonic Motion, a line of stereo speakers using square drivers manufactured by Sony ArduPilotMega (APM), an open source unmanned aerial vehicle (UAV) platform Attached Pressurized Module, the former name of the Columbus module of the International Space Station Automated people mover, a driverless train often used in large airports Social sciences and management Agile project management, a style of project management for agile software development projects Application portfolio management Advanced Progressive Matrices, a subset of Raven's Progressive Matrices which is an intelligence test Police and military Australian Police Medal, awarded for distinguished service by a member of an Australian police force Assistant Provost Marshal, a military rank Anti-personnel mine, a type of explosive used against people Army of the Republic of Macedonia Organizations and companies United States American Peace Mobilization, a communist front group active before the Nazi invasion of the Soviet Union during World War II American Poetry Museum, Washington D.C., USA American Public Media, the production and distribution arm of Minnesota Public Radio (MPR) Applied Micro Circuits Corporation, a fabless semiconductor company in the Silicon Valley Associated Production Music, a large production music company China Mainland Beijing apm, a shopping center and office tower in Beijing, China Hong Kong and Macau Apm (Hong Kong), a shopping centre and office tower in Kwun Tong, New Kowloon, Hong Kong United Kingdom Association for Project Management in the United Kingdom K Sports F.C., a football club in England previously known as APM Other APM Monaco, a 
fashion jewelry company APM Terminals, container terminal operator based in the Netherlands Allied Peoples Movement, a Nigerian political party Australian Paper Mills, former business in Melbourne (APM), France Other Actions per minute, a term used in real-time strategy games Aspartame, an artificial, non-saccharide sweetener
https://en.wikipedia.org/wiki/Power%20management
Power management is a feature of some electrical appliances, especially copiers, computers, computer CPUs, computer GPUs and computer peripherals such as monitors and printers, that turns off the power or switches the system to a low-power state when inactive. In computing this is known as PC power management and is built around a standard called ACPI, which supersedes APM. All recent computers have ACPI support. Motivations PC power management for computer systems is desired for many reasons, particularly: Reduce overall energy consumption Prolong battery life for portable and embedded systems Reduce cooling requirements Reduce noise Reduce operating costs for energy and cooling Lower power consumption also means lower heat dissipation, which increases system stability, and less energy use, which saves money and reduces the impact on the environment. Processor level techniques The power management for microprocessors can be done over the whole processor, or in specific components, such as cache memory and main memory. With dynamic voltage scaling and dynamic frequency scaling, the CPU core voltage, clock rate, or both, can be altered to decrease power consumption at the price of potentially lower performance. This is sometimes done in real time to optimize the power-performance tradeoff. Examples: AMD Cool'n'Quiet AMD PowerNow! IBM EnergyScale Intel SpeedStep Transmeta LongRun and LongRun2 VIA LongHaul (PowerSaver) Additionally, processors can selectively power off internal circuitry (power gating). For example: Newer Intel Core processors support ultra-fine power control over the functional units within the processors. AMD CoolCore technology gets more efficient performance by dynamically activating or turning off parts of the processor. Intel VRT technology splits the chip into a 3.3V I/O section and a 2.9V core section. The lower core voltage reduces power consumption. 
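The rationale behind dynamic voltage and frequency scaling can be sketched with the standard CMOS approximation for dynamic power, P ≈ C·V²·f. The capacitance constant and the voltage/frequency operating points below are illustrative values, not taken from any specific processor:

```python
# Illustrative sketch of dynamic voltage and frequency scaling (DVFS).
# Dynamic CMOS switching power is approximately P = C * V^2 * f, so
# lowering voltage together with frequency cuts power superlinearly.

def dynamic_power(capacitance, voltage, frequency_hz):
    """Approximate dynamic switching power in watts."""
    return capacitance * voltage ** 2 * frequency_hz

# Hypothetical operating points (not from a real datasheet).
C = 1e-9  # effective switched capacitance, farads
high = dynamic_power(C, 1.2, 3.0e9)   # full speed
low = dynamic_power(C, 0.9, 1.5e9)    # scaled-down state

# Frequency halves and voltage drops 25%, so power falls by a factor of
# 2 * (1.2/0.9)^2, roughly 3.6x, while peak throughput only halves.
print(f"high: {high:.2f} W, low: {low:.2f} W, ratio: {high/low:.2f}")
```

Because of the quadratic voltage term, reducing voltage alongside frequency saves more energy per unit of work than frequency scaling alone, which is the tradeoff the listed vendor technologies exploit.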
Heterogeneous computing ARM's big.LITTLE architecture can migrate processes between faster "big" cores and more power-efficient "LITTLE" cores. Operating system level: hibernation When a computer system hibernates it saves the contents of the RAM to disk and powers down the machine. On startup it reloads the data. This allows the system to be completely powered off while in hibernate mode. This requires a file the size of the installed RAM to be placed on the hard disk, potentially using up space even when not in hibernate mode. Hibernate mode is enabled by default in some versions of Windows and can be disabled in order to recover this disk space. In GPUs Graphics processing units (GPUs) are used together with a CPU to accelerate computing in a variety of domains, including scientific, analytics, engineering, consumer and enterprise applications. This capability comes with a drawback: the high computing performance of GPUs comes at the cost of high power dissipation. Much research has been done on the power dissipation issue of GPUs and many technique
https://en.wikipedia.org/wiki/Faces%20%28disambiguation%29
Faces are the front areas of heads. Faces may also refer to: Computing and Internet Faces (video game), a 1990 computer game JavaServer Faces, a Java-based Web application framework for interfaces faces for Unix, the continuation of vismon Film and television Faces (1934 film), a British drama film Faces (1968 film), a film by John Cassavetes "Faces" (Star Trek: Voyager), an episode of Star Trek: Voyager "Faces", an episode of The Good Doctor Music Faces (band), a British rock band active in the early 1970s Faces (festival), a music festival in Raseborg, Finland since 1998 Albums Faces (Clarke-Boland Big Band album) (1969) Faces (Gábor Szabó album) (1977) Faces (Earth, Wind & Fire album) (1980) Faces (John Berry album) (1996) Faces (Chris Caffery album) (2005) Faces (Mt. Helium album) (2008) Faces (EP), by Residual Kid (2012) Faces (mixtape), by Mac Miller (2014) Faces (Irma album) (2014) Faces (David Lyttle album) (2015) Songs "Faces" (Nik Kershaw song) (1984) "Faces", by Night Ranger from 7 Wishes (1985) "Faces" (Run-D.M.C. song) (1991) "Faces" (2 Unlimited song) (1993) "Faces", by Cat Power from Myra Lee (1996) "Faces" (Candyland and Shoffy song) (2016) "Faces", by Gavin James (2019) "Faces", by Scary Kids Scaring Kids from their eponymous album (2007) "Faces", by Young Thug from Punk (2021) See also Face (disambiguation) Faeces Wong-Baker Faces Pain Rating Scale
https://en.wikipedia.org/wiki/Amiga%204000
The Amiga 4000, or A4000, from Commodore is the successor of the Amiga 2000 and Amiga 3000 computers. There are two models: the A4000/040 released in October 1992 with a Motorola 68040 CPU, and the A4000/030 released in April 1993 with a Motorola 68EC030. The Amiga 4000 system design was generally similar to that of the A3000, but introduced the Advanced Graphics Architecture (AGA) chipset with enhanced graphics. The SCSI system from previous Amigas was replaced by the lower-cost Parallel ATA. The original A4000 is housed in a beige horizontal desktop box with a separate keyboard. Later, Commodore released an expanded tower version called the A4000T. The machine is reported to have sold 11,300 units in Germany. Technical information Processor and RAM The stock A4000 shipped with either a Motorola 68EC030 or 68040 CPU, 2 MB of Amiga Chip RAM and up to 16 MB of additional RAM in 32-bit SIMMs. There is a non-functional jumper that was intended to expand the "chip RAM" to 8MB. Later, third-party developers created various CPU expansion boards featuring higher-rated 68040, 68060 and PowerPC CPUs. Such hardware also typically offers faster and higher-capacity RAM (128 MB or greater). A4000-CR version Unlike previous Amiga models, early A4000 machines have the CPU mounted in an expansion board; the motherboard does not have an integrated CPU. Later revisions of the A4000 have the CPU and 2 MB RAM surface-mounted on the motherboard in an effort to reduce costs. These machines are known as the A4000-CR (cost-reduced) and the surface-mounted CPU is a 68EC030. The cost-reduced models also make use of a non-rechargeable lithium battery for real-time clock battery backup rather than a rechargeable NiCad battery. The NiCad backup battery is one of the most common causes of problems in an aging device that uses one because it has a tendency to eventually leak. The released fluids are somewhat corrosive and can eventually damage the circuitry. 
Graphics and sound The A4000 is the first Amiga model to have shipped with Commodore's third-generation Amiga chipset, the 32-bit Advanced Graphics Architecture (AGA). As the name implies, AGA introduces improved graphical abilities, specifically, a palette expanded from 12-bit color depth (4,096 colors) to 24-bit (16.8 million colors) and new 64, 128, 256 and 262,144 (HAM-8) color modes. Unlike earlier Amiga chipsets, all color modes are available at all display resolutions. AGA also improves sprite capacity and graphics performance. The on-board sound hardware remains identical to that of the original Amiga chipset (the Paula sound chip), namely, four DMA-driven 8-bit PCM channels, with two channels for the left speaker and two for the right. Peripherals and expansion The A4000 has a number of Amiga-specific connectors, including two DE-9 ports for joysticks, mice, and light pens, a standard 25-pin RS-232 serial port and a 25-pin Centronics parallel port. As a result, at launch the A4000 was compatible with many
https://en.wikipedia.org/wiki/Stack%20%28abstract%20data%20type%29
In computer science, a stack is an abstract data type that serves as a collection of elements, with two main operations: Push, which adds an element to the collection, and Pop, which removes the most recently added element that was not yet removed. Additionally, a peek operation can, without modifying the stack, return the value of the last element added. Calling this structure a stack is by analogy to a set of physical items stacked one atop another, such as a stack of plates. The order in which elements are added to and removed from a stack is described as last in, first out, referred to by the acronym LIFO. As with a stack of physical objects, this structure makes it easy to take an item off the top of the stack, but accessing a datum deeper in the stack may require taking off multiple other items first. Considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack. This makes it possible to implement a stack as a singly linked list with a pointer to the top element. A stack may be implemented to have a bounded capacity. If the stack is full and does not contain enough space to accept another element, the stack is in a state of stack overflow. A stack is needed to implement depth-first search. History Stacks entered the computer science literature in 1946, when Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines. Subroutines and a 2-level stack had already been implemented in Konrad Zuse's Z4 in 1945. Klaus Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea of a stack called (Engl. "operational cellar") in 1955 and filed a patent in 1957. In March 1988, by which time Samelson was deceased, Bauer received the IEEE Computer Pioneer Award for the invention of the stack principle. 
Similar concepts were developed, independently, by Charles Leonard Hamblin in the first half of 1954 and by with his (Engl. "automatic memory") in 1958. Stacks are often described using the analogy of a spring-loaded stack of plates in a cafeteria. Clean plates are placed on top of the stack, pushing down any already there. When a plate is removed from the stack, the one below it pops up to become the new top plate. Non-essential operations In many implementations, a stack has more operations than the essential "push" and "pop" operations. An example of a non-essential operation is "top of stack", or "peek", which observes the top element without removing it from the stack. This could be done with a "pop" followed by a "push" to return the same data to the stack, so it is not considered an essential operation. If the stack is empty, an underflow condition will occur upon execution of either the "stack top" or "pop" operations. Additionally, many implementations provide a check if the stack is empty and one that returns its size. Software s
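The essential push and pop operations, together with the non-essential peek, emptiness, and size checks described above, can be sketched as a singly linked list whose head is the top of the stack. This is a minimal illustration, not a production implementation:

```python
# Minimal stack sketch: a singly linked list where the head is the top.
class Stack:
    class _Node:
        def __init__(self, value, below):
            self.value = value
            self.below = below  # link to the element underneath

    def __init__(self):
        self._top = None
        self._size = 0

    def push(self, value):
        # The new element becomes the top, pointing at the previous top.
        self._top = Stack._Node(value, self._top)
        self._size += 1

    def pop(self):
        # Removing from an empty stack is the underflow condition.
        if self._top is None:
            raise IndexError("stack underflow")
        value = self._top.value
        self._top = self._top.below
        self._size -= 1
        return value

    def peek(self):
        # Non-essential: observe the top element without removing it.
        if self._top is None:
            raise IndexError("stack underflow")
        return self._top.value

    def is_empty(self):
        return self._top is None

    def __len__(self):
        return self._size

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.peek(), s.pop(), s.pop(), len(s))  # LIFO order: 3 3 2 1
```

Both push and pop touch only the top node, so each runs in constant time regardless of how many elements are stored.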
https://en.wikipedia.org/wiki/IBM%20Portable%20Personal%20Computer
The IBM Portable Personal Computer 5155 model 68 is an early portable computer developed by IBM after the success of the suitcase-size Compaq Portable. It was released in February 1984 and was quickly replaced by the IBM Convertible, only roughly two years after its debut. Design The Portable was basically a PC/XT motherboard, transplanted into a Compaq-style luggable case. The system featured 256 kilobytes of memory (expandable to 640 KB), an added CGA card connected to an internal monochrome amber composite monitor, and one or two half-height -inch 360 KB floppy disk drives, manufactured by Qume. Unlike the Compaq Portable, which used a dual-mode monitor and special display card, IBM used a stock CGA card and a 9-inch amber monochrome composite monitor, which had lower resolution. It could, however, display color if connected to an external monitor or television. A separate 83-key keyboard was provided, connecting by cable to a front-panel-mounted RJ11 phone-style jack; the cable from this connector then ran to the back of the machine, to the original XT keyboard jack. Experts stated that IBM developed the Portable in part because its sales force needed a computer that would compete against the Compaq Portable. Though less sophisticated than the Compaq, the IBM had the advantage of a lower price tag. The motherboard had eight expansion slots. The power supply was rated 114 watts and was suitable for operation on either 115 or 230 VAC. Hard disks were a very common third-party add-on as IBM did not offer them from the factory. Typically in a two-drive context, floppy drive A: ran the operating system, and drive B: would be used for application and data diskettes. 
Its selling point as a "portable" was that it combined the monitor into a base unit approximating a medium-sized suitcase: the machine could simply be set on its flat side, the back panel slid away to reveal the power connector, the power cord plugged in, the keyboard folded down or detached, and the system booted up for use, though printers at the time, if needed, still tended to be less "portable". At thirty pounds, it may have been difficult to carry for some, and was often referred to as “luggable”. Timeline References Notes IBM (1984). Personal Computer Hardware Reference Library: Guide to Operations, Portable Personal Computer. IBM Part Numbers 6936571 and 1502332. External links Obsolete Technology Website: IBM Portable PC 5155 model 68 IBM 5155 information at www.minuszerodegrees.net 5155 Portable Portable computers Computer-related introductions in 1984
https://en.wikipedia.org/wiki/IBM%20PC%20Convertible
The IBM PC Convertible (model 5140) is a laptop computer made by IBM, first sold in April 1986. The Convertible was IBM's first laptop-style computer, following the luggable IBM Portable, and introduced the 3½-inch floppy disk format to the IBM product line. Like modern laptops, it featured power management and the ability to run from batteries. It was replaced in 1991 by the IBM PS/2 L40 SX, and in Japan by the IBM Personal System/55note, the predecessor to the ThinkPad. Predecessors IBM had been working on a laptop for some time before the Convertible. In 1983, work was underway on a laptop similar to the Tandy Model 100, codenamed "Sweetpea", but it was rejected by Don Estridge for not being PC compatible. Another attempt in 1984 produced the "P-14" prototype machine, but it failed to pass IBM's human factors tests, especially after poor public reception of the display in the competing Data General-One. Description The PC Convertible came in three models: PC Convertible, PC Convertible Model 2 and PC Convertible Model 3. The latter two were released in October 1987 and are primarily distinguished by their LCD panels. The original Convertible used a non-backlit panel which was considered difficult to read. The Model 2 lacked a backlight as well but upgraded to an improved supertwist panel, and the Model 3 included a backlight. The other hardware specifications are largely the same for all three models. The CPU is an Intel 80C88, the CMOS version of the Intel 8088 CPU. The base configuration included of RAM, expandable to , dual 3.5-inch floppy drives, and a monochrome, CGA-compatible LCD screen. It weighed just over 12 pounds and featured a built-in carrying handle, with a battery rated for 10 hours (4 hours in the backlit Model 3). The first model was introduced at a price of , the Model 2 at with 256K of RAM and with 640K, and the Model 3 at with 256K of RAM. 
The LCD screen displayed characters, but has a very wide aspect ratio, so text characters and graphics are compressed vertically, appearing half their normal height. The display is capable of text and graphics modes of and pixels. The PC Convertible has expansion capabilities through a proprietary ISA-based port on the rear of the machine. Extension modules, including a small printer and a video output module, were provided as plastic modules that snap into place. The machine can also take an internal modem, but has no room for an internal hard disk. The concept and the design of the body was made by German industrial designer Richard Sapper. Pressing the power button on the computer does not turn it off, but puts the machine into "suspend" mode, which will hold the machine's state as long as battery power lasts, to save on boot time. The CMOS 80C88 CPU has a static core, which holds its state indefinitely by stopping the system clock oscillator, and can resume processing when the clock signal is restarted as long as it is kept powered. The system RAM in the Convertible
https://en.wikipedia.org/wiki/Apache%20SpamAssassin
Apache SpamAssassin is a computer program used for e-mail spam filtering. It uses a variety of spam-detection techniques, including DNS and fuzzy checksum techniques, Bayesian filtering, external programs, blacklists and online databases. It is released under the Apache License 2.0 and has been an Apache Software Foundation project since 2004. The program can be integrated with the mail server to automatically filter all mail for a site. It can also be run by individual users on their own mailbox and integrates with several mail programs. Apache SpamAssassin is highly configurable; if used as a system-wide filter it can still be configured to support per-user preferences. History Apache SpamAssassin was created by Justin Mason, who had maintained a number of patches against an earlier program named filter.plx by Mark Jeftovic, which in turn was begun in August 1997. Mason rewrote all of Jeftovic's code from scratch and uploaded the resulting codebase to SourceForge on April 20, 2001. In summer 2004 the project became an Apache Software Foundation project and was later officially renamed Apache SpamAssassin. The SpamAssassin 3.4.2 release in September 2019 was the first in over three years, but the developers say that "The project has picked up a new set of developers and is moving forward again." In December 2019, version 3.4.3 of SpamAssassin was released. In April 2021, version 3.4.6 of SpamAssassin was released. It was announced that development of version 4.0.0 would become the project's focus. Methods of usage Apache SpamAssassin is a Perl-based application ( in CPAN) which is usually used to filter all incoming mail for one or several users. It can be run as a standalone application or as a subprogram of another application (such as a Milter, SA-Exim, Exiscan, MailScanner, MIMEDefang, Amavis) or as a client () that communicates with a daemon (). 
The client/server or embedded mode of operation has performance benefits, but under certain circumstances may introduce additional security risks. Typically either variant of the application is set up in a generic mail filter program, or it is called directly from a mail user agent that supports this, whenever new mail arrives. Mail filter programs such as procmail can be made to pipe all incoming mail through Apache SpamAssassin with an adjustment to a user's file. Operation Apache SpamAssassin comes with a large set of rules which are applied to determine whether an email is spam or not. Most rules are based on regular expressions that are matched against the body or header fields of the message, but Apache SpamAssassin also employs a number of other spam-fighting techniques. The rules are called "tests" in the SpamAssassin documentation. Each test has a score value that will be assigned to a message if it matches the test's criteria. The scores can be positive or negative, with positive values indicating "spam" and negative "ham" (non-spam messages). A message is matched against all tests and Apache
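The test-and-score mechanism described above can be sketched as follows. The rule names, patterns, scores, and threshold here are invented for illustration and are not SpamAssassin's actual rule set (the stock default threshold, for comparison, is 5.0):

```python
import re

# Hypothetical rules: (name, pattern, score). Positive scores push the
# total toward "spam", negative toward "ham", mirroring how each
# SpamAssassin test contributes its score when it matches.
RULES = [
    ("SUBJ_ALL_CAPS", re.compile(r"^[A-Z !]+$", re.M), 2.0),
    ("MENTIONS_PRIZE", re.compile(r"\bprize\b", re.I), 1.5),
    ("KNOWN_MAILING_LIST", re.compile(r"List-Id:", re.I), -1.0),
]
THRESHOLD = 3.0  # illustrative only

def score_message(subject, body):
    """Sum the scores of every rule that matches the message text."""
    text = subject + "\n" + body
    total = 0.0
    for name, pattern, score in RULES:
        if pattern.search(text):
            total += score
    return total

def is_spam(subject, body):
    return score_message(subject, body) >= THRESHOLD

print(is_spam("YOU WON A PRIZE!", "Claim your prize now"))  # True
```

A message is matched against every rule, and only the summed score against the threshold decides the classification, which is why a single matching test rarely condemns a message on its own.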
https://en.wikipedia.org/wiki/Home%20directory
A home directory is a file system directory on a multi-user operating system containing files for a given user of the system. The specifics of the home directory (such as its name and location) are defined by the operating system involved; for example, Linux / BSD (FHS) systems use /home/ or /usr/home/ and Windows systems between 2000 and Server 2003 keep home directories in a folder named Documents and Settings. Description A user's home directory is intended to contain that user's files; including text documents, music, pictures, videos, etc. It may also include their configuration files of preferred settings for any software they have used there and might have tailored to their liking: web browser bookmarks, favorite desktop wallpaper and themes, stored passwords to any external services accessed via a given software, etc. The user can install executable software in this directory, but it will only be available to users with permission to execute files in this directory. The home directory can be organized further with the use of sub-directories. The content of a user's home directory is protected by file-system permissions, and by default is accessible to all authenticated users and administrators. Any other user that has been granted administrator privileges has authority to access any protected location on the file system including other users' home directories. Benefits Separating user data from system-wide data avoids redundancy and makes backups of important files relatively simple. Furthermore, Trojan horses, viruses, and worms running under the user's name and with their privileges will in most cases only be able to alter the files in the user's home directory, and perhaps some files belonging to workgroups the user is a part of, but not actual system files. Default home directory per operating system Subdirectories The file on many Linux systems defines the subdirectories created for users by default. 
Creation is normally done with the first login by Xdg-user-dirs, a tool to help manage "well known" user directories like desktop, downloads, documents, pictures, videos, or music. The tool is also capable of localization (i.e. translation) of the folders' names. Other features, per operating system Unix In Unix, the working directory is automatically set to a user's home directory when they log in. In many built-in commands, typing the (tilde) character is equivalent to specifying the current user's home directory. The Unix superuser has access to all directories on the file system, and hence can access home directories of all users. The superuser's home directory on older systems was , but on many newer systems it is located at (Linux, BSD), or (Mac OS X). VMS In the OpenVMS operating system, a user's home directory is called the root directory, and the equivalent of a Unix/DOS/Windows/AmigaOS root directory is referred to as the Master File Directory. Single-user operating systems Single-user operating systems simply ha
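The tilde convention mentioned in the Unix section is also exposed by most programming languages' standard libraries. A small Python sketch of expanding ~ to the current user's home directory follows; the "myapp" configuration path is a made-up example, and the resolved paths will differ per system:

```python
import os
from pathlib import Path

# "~" expands to the current user's home directory, and "~name" to the
# home directory of another user, mirroring the shell convention.
config_path = os.path.expanduser("~/.config/myapp/settings.ini")

# pathlib offers the same expansion plus direct access to the home dir.
home = Path.home()
print(home, config_path)
```

On Unix-like systems the expansion consults the HOME environment variable (falling back to the password database), while on Windows it is derived from USERPROFILE, so portable code should always go through these helpers rather than hard-coding a location.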
https://en.wikipedia.org/wiki/Core%20War
Core War is a 1984 programming game created by D. G. Jones and A. K. Dewdney in which two or more battle programs (called "warriors") compete for control of a virtual computer. These battle programs are written in an abstract assembly language called Redcode. The standards for the language and the virtual machine were initially set by the International Core Wars Society (ICWS), but later standards were determined by community consensus. Gameplay At the beginning of a game, each battle program is loaded into memory at a random location, after which each program executes one instruction in turn. The goal of the game is to cause the processes of opposing programs to terminate (which happens if they execute an invalid instruction), leaving the victorious program in sole possession of the machine. The earliest published version of Redcode defined only eight instructions. The ICWS-86 standard increased the number to 10 while the ICWS-88 standard increased it to 11. The currently used 1994 draft standard has 16 instructions. However, Redcode supports a number of different addressing modes and (starting from the 1994 draft standard) instruction modifiers which increase the actual number of operations possible to 7168. The Redcode standard leaves the underlying instruction representation undefined and provides no means for programs to access it. Arithmetic operations may be done on the two address fields contained in each instruction, but the only operations supported on the instruction codes themselves are copying and comparing for equality. Constant instruction length and time Each Redcode instruction occupies exactly one memory slot and takes exactly one cycle to execute. The rate at which a process executes instructions, however, depends on the number of other processes in the queue, as processing time is shared equally. Circular memory The memory is addressed in units of one instruction. 
The memory space (or core) is of finite size, but only relative addressing is used, that is, address 0 always refers to the currently executing instruction, address 1 to the instruction after it, and so on. The maximum address value is set to equal one less than the number of memory locations and will wrap around if necessary. As a result, there is a one-to-one correspondence between addresses and memory locations, but it is impossible for a Redcode program to determine any absolute address. A process that encounters no invalid or jump instructions will continue executing successive instructions endlessly, eventually returning to the instruction where it started. Low-level multiprocessing Instead of a single instruction pointer a Redcode simulator has a process queue for each program containing a variable number of instruction pointers which the simulator cycles through. Each program starts with only one process, but new processes may be added to the queue using the SPL instruction. A process dies when it executes a instruction or performs a division by zero. A
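The circular, purely relative addressing described above can be sketched as follows. This is a toy model: the core size is a common tournament setting, not something fixed by the Redcode standards:

```python
CORE_SIZE = 8000  # a commonly used core size; any size works

def resolve(current, offset):
    """Turn a relative offset into a core index, wrapping around."""
    return (current + offset) % CORE_SIZE

# Address 0 is the executing instruction, 1 the one after it, and
# negative offsets reach backwards; everything wraps modulo core size.
assert resolve(0, 0) == 0
assert resolve(7999, 1) == 0    # wraps past the end
assert resolve(0, -1) == 7999   # wraps past the beginning

# A process that never jumps or dies just walks the core endlessly,
# eventually returning to the instruction where it started:
ip = 42
for _ in range(CORE_SIZE):
    ip = resolve(ip, 1)
print(ip)  # back at 42
```

Because every address is computed this way, two programs loaded at different random locations see an identical memory layout relative to themselves, which is why a warrior cannot learn its own absolute position.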
https://en.wikipedia.org/wiki/X11%20color%20names
In computing, on the X Window System, X11 color names are represented in a simple text file, which maps certain strings to RGB color values. It was traditionally shipped with every X11 installation, hence the name, and is usually located in <X11root>/lib/X11/rgb.txt. The web colors list is descended from it but differs for certain color names. Color names are not standardized by Xlib or the X11 protocol. The list does not show continuity either in selected color values or in color names, and some color triplets have multiple names. Despite this, graphic designers and others got used to them, making it practically impossible to introduce a different list. In earlier releases of X11 (prior to the introduction of Xcms), server implementors were encouraged to modify the RGB values in the reference color database to account for gamma correction. As of X.Org Release 7.4 rgb.txt is no longer included in the roll-up release, and the list is built directly into the server. The optional module xorg/app/rgb contains the stand-alone rgb.txt file. The list first shipped with X10 release 3 (X10R3) on 7 June 1986, having been checked into RCS by Jim Gettys in 1985. The same list was in X11R1 on 18 September 1987. Approximately the full list as is available today shipped with X11R4 on 29 January 1989, with substantial additions by Paul Ravelling (who added colors based on Sinclair Paints samples), John C. Thomas (who added colors based on a set of 72 Crayola crayons he had on hand) and Jim Fulton (who reconciled contributions to produce the X11R4 list). The project was running DEC VT240 terminals at the time, so the colors would have been tuned to that device. In some applications multipart names are written with spaces, in others joined together, often in camel case. 
They are usually matched case-insensitively, and the X Server source code contains spaced aliases for most entries; this article uses spaces and uppercase initials except where variants with spaces are not specified in the actual code. Clashes between web and X11 colors in the CSS color scheme The first versions of Mosaic and Netscape Navigator used the X11 colors as the basis for the web colors list, as both were originally X applications. The W3C specifications SVG and CSS level 3 module Color eventually adopted the X11 list with some changes. The present W3C list is a superset of the 16 "VGA colors" defined in HTML 3.2 and CSS level 1. One notable difference between X11 and W3C is the case of "Gray" and its variants. In HTML, "Gray" is specifically reserved for the 128 triplet (50% gray). However, in X11, "gray" was assigned to the 190 triplet (74.5%), which is close to W3C "Silver" at 192 (75.3%), and had "Light Gray" at 211 (83%) and "Dark Gray" at 169 (66%) as counterparts. As a result, the combined CSS 3.0 color list that prevails on the web today produces "Dark Gray" as a significantly lighter tone than plain "Gray", because "Dark Gray" descended from X11 – for it did not exist in HTML nor
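The numbers behind that clash are easy to verify; here is a small Python sketch using the channel values quoted in the text (all of these are neutral grays, so a single 8-bit value per color suffices):

```python
# Per-channel values of the grays discussed above (R == G == B).
x11 = {"gray": 190, "light gray": 211, "dark gray": 169}
w3c = {"gray": 128, "silver": 192}

def pct(v: int) -> float:
    """8-bit channel value as a percentage of full intensity."""
    return round(v / 255 * 100, 1)

assert pct(w3c["gray"]) == 50.2     # HTML "Gray": ~50% gray
assert pct(x11["gray"]) == 74.5     # X11 "gray": much lighter
assert pct(w3c["silver"]) == 75.3   # close to X11 "gray"
assert pct(x11["dark gray"]) == 66.3
# The CSS oddity: X11-derived "Dark Gray" is lighter than HTML "Gray".
assert x11["dark gray"] > w3c["gray"]
```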
https://en.wikipedia.org/wiki/Consistency%20model
In computer science, a consistency model specifies a contract between the programmer and a system, wherein the system guarantees that if the programmer follows the rules for operations on memory, memory will be consistent and the results of reading, writing, or updating memory will be predictable. Consistency models are used in distributed systems like distributed shared memory systems or distributed data stores (such as filesystems, databases, optimistic replication systems or web caching). Consistency is different from coherence, which occurs in systems that are cached or cache-less, and is consistency of data with respect to all processors. Coherence deals with maintaining a global order in which writes to a single location or single variable are seen by all processors. Consistency deals with the ordering of operations to multiple locations with respect to all processors. High level languages, such as C++ and Java, maintain the consistency contract by translating memory operations into low-level operations in a way that preserves memory semantics, reordering some memory instructions, and encapsulating required synchronization with library calls such as pthread_mutex_lock(). Example Assume that the following case occurs: The row X is replicated on nodes M and N The client A writes row X to node M After a period of time t, client B reads row X from node N The consistency model determines whether client B will definitely see the write performed by client A, will definitely not, or cannot depend on seeing the write. Types Consistency models define rules for the apparent order and visibility of updates, and are on a continuum with tradeoffs. There are two methods to define and categorize consistency models: issue and view. Issue The issue method describes the restrictions that define how a process can issue operations. View The view method defines the order of operations visible to processes. 
For example, a consistency model can define that a process is not allowed to issue an operation until all previously issued operations are completed. Different consistency models enforce different conditions. One consistency model can be considered stronger than another if it requires all conditions of that model and more. In other words, a model with fewer constraints is considered a weaker consistency model. These models define how the hardware needs to be laid out and, at a high level, how the programmer must code. The chosen model also affects how the compiler can re-order instructions. Generally, if control dependencies between instructions are preserved and writes to the same location are ordered, then the compiler can reorder as required. However, with the models described below, some may allow writes before loads to be reordered while some may not. Strong consistency models Strict consistency Strict consistency is the strongest consistency model. Under this model, a write to a variable by any processor needs to be seen instantaneously by all processors
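The write-before-load reordering mentioned above is the classic store-buffering litmus test, and it can be explored by brute-force enumeration of interleavings. The sketch below makes no claims about any particular hardware; "reordering allowed" is modelled simply by dropping the per-thread ordering constraint.

```python
# Store-buffering litmus test:
#   Thread 1: x = 1; r1 = y      Thread 2: y = 1; r2 = x
# Enumerate interleavings to see which (r1, r2) outcomes are possible.
from itertools import permutations

OPS = [(1, "w", "x", None), (1, "r", "y", "r1"),
       (2, "w", "y", None), (2, "r", "x", "r2")]

def run(order):
    """Execute one interleaving; return the (r1, r2) the reads saw."""
    mem = {"x": 0, "y": 0}
    regs = {}
    for _, kind, var, reg in order:
        if kind == "w":
            mem[var] = 1
        else:
            regs[reg] = mem[var]
    return (regs["r1"], regs["r2"])

def outcomes(keep_program_order: bool):
    """All reachable (r1, r2) pairs, with or without each thread's
    write required to precede its read (i.e. no write->read reorder)."""
    results = set()
    for perm in permutations(OPS):
        pos = {op: i for i, op in enumerate(perm)}
        in_order = pos[OPS[0]] < pos[OPS[1]] and pos[OPS[2]] < pos[OPS[3]]
        if keep_program_order and not in_order:
            continue
        results.add(run(perm))
    return results

# With program order kept, both reads returning 0 is impossible:
assert (0, 0) not in outcomes(True)
# Allowing a write to be delayed past the later read makes it reachable:
assert (0, 0) in outcomes(False)
```

This is exactly the distinction between sequential consistency, which forbids the (0, 0) outcome, and weaker models such as total store order, which permit it.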
https://en.wikipedia.org/wiki/MMH
MMH may refer to: Monomethylhydrazine, CH3N2H3, a chemical Mammoth Yosemite Airport. IATA code Mackay Memorial Hospital, Taipei, Taiwan Multilinear Modular Hashing, a computer algorithm Manual material handling Mark McHugh, a Gaelic footballer Martin McHugh (Gaelic footballer), a Gaelic footballer
https://en.wikipedia.org/wiki/M3
M3, M-3 or M03 may refer to: Computing and electronics Apple M3, a central processing unit in the Apple M series Intel m3, a brand of microprocessors M.3 (aka NF1/NGSFF), a specification for internally mounted expansion cards Leica M3, a landmark 35mm rangefinder camera Modula-3 (M3), a programming language M3, a British peak programme meter standard used for measuring the volume of audio broadcasts m3, a macro processor for the AP-3 minicomputer, the predecessor to m4 M3, a surface-mount version of the 1N4003 general-purpose silicon rectifier diode M3 (email client), an unreleased email client for the Vivaldi browser Entertainment M3, a comic book created by Vicente Alcazar M3 adapter, a Game Boy Advance movie player M3 (Canadian TV channel), a music and entertainment television channel M3 (Hungarian TV channel), a Hungarian television channel M3: Malay Mo Ma-develop, a 2010 Philippine TV series M3 Music Card, a 2007 flash-based MP3 player M3 Perfect, M3 Simply and M3 Real, Nintendo DS and 3DS storage devices M3: Sono Kuroki Hagane, a 2014 Japanese anime television series Military Weapons 37 mm Gun M3, a light American anti-tank gun M3/M3E1 Multi-role Anti-armor Anti-tank Weapon System (MAAWS) (AKA: Carl Gustaf 8.4cm recoilless rifle), an 84 mm man-portable reloadable anti-tank recoilless rifle 90 mm Gun M1/M2/M3, an American anti-aircraft and anti-tank gun 105 mm Howitzer M3, an American light artillery piece Benelli M3 Super 90, an Italian semi-automatic shotgun M3 20mm cannon, a United States development of the Hispano-Suiza HS.404 M3 fighting knife, a World War II American issue knife M3 machine gun, a variant of the M2 Browning M3 submachine gun (AKA: Grease Gun), an American submachine gun M3 tripod, a modern tripod for the M2 Browning machine gun M3, a code name for a United States military mission at Roosevelt Roads Naval Station in Puerto Rico Vehicles M3 Amphibious Rig, a German self-propelled amphibious bridging vehicle M3 Bradley, an American infantry 
fighting vehicle M3 Gun Motor Carriage, American tank destroyer M3 half-track, an armored military vehicle M3 Lee, an American medium tank; also known as "M3 Grant" in Commonwealth service Panhard M3 PTT, a French armored personnel carrier (M3), a WWI British Royal Navy monitor M3 Ram, a Canadian cruiser tank M3 Scout Car, an American armored vehicle M3 Stuart, an American light tank , a Swedish Royal Navy mine layer , a Swedish Navy mine sweeper (1940–1955) , a British Royal Navy minelayer submarine of the post-WWI period Music Major third (M3), a type of musical interval Major thirds tuning (M3 tuning), a regular tuning with major-third intervals between successive strings Minor third (m3), a type of musical interval M3 (album), a 1999 album by Mushroomhead M3 (band), an American rock band M3 Classic Whitesnake, a band featuring Bernie Marsden, Micky Moody and Neil Murray M3 Records, a record label Korg M3, a workstation syn
https://en.wikipedia.org/wiki/Table
Table may refer to: Table (database), how the table data arrangement is used within databases Table (furniture), a piece of furniture with a flat surface and one or more legs Table (information), a data arrangement with rows and columns Table (landform), a flat area of land Table (parliamentary procedure) Table (sports), a ranking of the teams in a sports league Tables (board game) Mathematical table Table, surface of the sound board (music) of a string instrument Al-Ma'ida, the fifth surah of the Qur'an, usually translated as “The Table” Calligra Tables, a spreadsheet application Water table See also Spreadsheet, a computer application Table cut, a type of diamond cut The Table (disambiguation) Table Mountain (disambiguation) Table Rock (disambiguation) Tabler (disambiguation) Tablet (disambiguation)
https://en.wikipedia.org/wiki/Chinese%20character%20encoding
In computing, Chinese character encodings can be used to represent text written in the CJK languages—Chinese, Japanese, Korean—and (rarely) obsolete Vietnamese, all of which use Chinese characters. Several general-purpose character encodings accommodate Chinese characters, and some of them were developed specifically for Chinese. In addition to Unicode (with the set of CJK Unified Ideographs), local encoding systems exist. The Chinese Guobiao (or GB, "national standard") system is used in Mainland China and Singapore, and the (mainly) Taiwanese Big5 system is used in Taiwan, Hong Kong and Macau as the two primary "legacy" local encoding systems. Guobiao is usually displayed using simplified characters and Big5 is usually displayed using traditional characters. There is however no mandated connection between the encoding system and the font used to display the characters; font and encoding are usually tied together for practical reasons. The issue of which encoding to use can also have political implications, as GB is the official standard of the People's Republic of China and Big5 is a de facto standard of Taiwan. In contrast to the situation with Japanese, there has been relatively little overt opposition to Unicode, which solves many of the issues involved with GB and Big5. Unicode is widely regarded as politically neutral, has good support for both simplified and traditional characters, and can be easily converted to and from the GB and Big5. Furthermore, Unicode has the advantage of not being limited only to Chinese, since it contains character codes for (nearly) every language. Guobiao The Guobiao (GB) line of character encodings start with the Simplified Chinese charset GB 2312 published in 1980. Two encoding schemes existed for GB 2312: a one-or-two byte 8-bit EUC-CN encoding commonly used, and a 7-bit encoding called HZ for usenet posts. A traditional variant called GB/T 12345 was published in 1990. 
The EUC-CN form was later extended into GBK to include all Unicode 1.1 CJK Ideographs in 1993, abandoning the ISO-2022 model. By doing so, GBK includes Traditional Chinese characters in addition to the simplified ones in GB 2312. GBK gained popularity through the widespread Code page 936 implementation found in Microsoft Windows 95. In 2000, GB 18030 was published as GBK's successor. This new encoding includes a four-byte UTF which encodes all Unicode codepoints not previously encoded. In 2005, an updated version of GB 18030 was published, adding reference glyphs for scripts used by ethnic minorities in China, as well as glyphs from CJK Unified Ideographs Extension B, following the update of Unicode. Adobe-GB1 is the corresponding PostScript charset for GB encodings. Big5 The Big5 family of character encodings start with the initial definition by the consortium of five companies in Taiwan that developed it. It is a double-byte character set (DBCS) somewhat similar to Shift JIS, often combined with a single-byte character set such as ASCII. Quite a few vendors as well as official ex
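Several of the encodings above ship as standard codecs in Python, which makes the relationships between them easy to probe; the byte values shown are what those codecs produce:

```python
# Probing the GB family and Big5 with Python's built-in codecs.
han = "中"  # U+4E2D, a character present in all of these charsets

assert han.encode("gb2312") == b"\xd6\xd0"        # two-byte EUC-CN form
assert han.encode("gbk") == han.encode("gb2312")  # GBK is a superset of GB 2312
assert han.encode("big5") == b"\xa4\xa4"          # Big5 uses a different layout

# GBK added traditional characters that GB 2312 lacks:
trad = "機"  # U+6A5F, traditional form of 机
assert trad.encode("gbk")  # representable in GBK
rejected = False
try:
    trad.encode("gb2312")
except UnicodeEncodeError:
    rejected = True
assert rejected  # but not in simplified-only GB 2312

# GB 18030 covers all of Unicode; supplementary-plane characters
# use its four-byte form:
assert len("😀".encode("gb18030")) == 4
```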
https://en.wikipedia.org/wiki/Distributed%20control%20system
A distributed control system (DCS) is a computerised control system for a process or plant usually with many control loops, in which autonomous controllers are distributed throughout the system, but there is central operator supervisory control. This is in contrast to systems that use centralized controllers: either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localising control functions near the process plant, with remote monitoring and supervision. Distributed control systems first emerged in large, high value, safety critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of Supervisory control and data acquisition (SCADA) and DCS systems are very similar, but DCS tends to be used on large continuous process plants where high reliability and security is important, and the control room is not geographically remote. Many machine control systems exhibit similar properties to plant and process control systems. Structure The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays. The accompanying diagram is a general model which shows functional manufacturing levels using computerised control. 
Referring to the diagram: Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves. Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors. Level 2 contains the supervisory computers, which collect information from processor nodes on the system, and provide the operator control screens. Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets. Level 4 is the production scheduling level. Levels 1 and 2 are the functional levels of a traditional DCS, in which all equipment are part of an integrated system from a single manufacturer. Levels 3 and 4 are not strictly process control in the traditional sense, but are where production control and scheduling take place. Technical points The processor nodes and operator graphical displays are connected over proprietary or industry standard networks, and network reliability is increased by dual redundancy cabling over diverse
https://en.wikipedia.org/wiki/C%2B%2BBuilder
C++Builder is a rapid application development (RAD) environment for developing software in the C++ programming language. Originally developed by Borland, it is owned by Embarcadero Technologies, a subsidiary of Idera. C++Builder can compile apps for Windows (both IA-32 and x64), iOS, macOS, and Android (32-bit only). It includes tools that allow drag-and-drop visual development, making programming easier by incorporating a WYSIWYG graphical user interface builder. C++Builder is the sibling product of Delphi, an IDE that uses the Object Pascal programming language. C++Builder combines the Visual Component Library (VCL) and IDE written in Object Pascal with multiple C++ compilers. C++Builder and Delphi can generate mutually compatible binaries. C++ methods can call Object Pascal methods and vice versa. Since both Delphi and C++ use the same back-end linker, the debugger can step from Delphi code into C++ transparently. In addition, C++Builder projects can include Delphi code. (The reverse is not possible.) Technology C++Builder uses the same IDE as Delphi, and shares many core libraries. Notable shared Delphi (Object Pascal code) and C++Builder routines include the FastMM4 memory manager, which was developed as a community effort within the FastCode project, the entire UI framework known as the VCL, which is written in Object Pascal, as well as base system routines, many of which have been optimised for both Delphi and C++Builder through the FastCode project. C++Builder projects can include Delphi code. The Delphi compiler emits C++ headers, allowing C++ code to link to Delphi classes and methods as though they were written in C++. The reverse (C++ code being used by Delphi) is not as straightforward but possible. C++Builder originally targeted only the Microsoft Windows platform. Later versions incorporated Borland CLX, a cross-platform development visual component library based on Qt, which supported Windows and Linux; however, CLX is now abandoned. 
The current version by Embarcadero supports cross-platform development using the new Firemonkey (FMX) library. Editions C++ Builder is available in four editions with increasing features and price: Community: Available for free for one year but has a limited commercial-use license. Includes local database connectivity and some library source code. Professional: Adds cross-platform compilation for macOS, (until version 10.2.2: iOS and Android requiring the purchase of the additional Mobile Add-On pack), more library source code, code formatting, and a full commercial license. Enterprise: Includes the mobile target platforms and adds client/server database connectivity, Enterprise Mobility Services, and DataSnap multi-tier SDK. Architect: Adds data modeling tools. History Traditionally, the release cycle was such that Delphi got major enhancements first, with C++Builder following, though recent versions have been released at the same time as their Delphi equivalents. 1.0 Borland announce
https://en.wikipedia.org/wiki/Foundation%20Kit
The Foundation Kit, or just Foundation for short, is an Objective-C framework in the OpenStep specification. It provides basic classes such as wrapper classes and data structure classes. This framework uses the prefix NS (for NeXTSTEP). It is also part of Cocoa and of the Swift standard library. Classes NSObject This class is the most common base class for Objective-C hierarchies and provides standard methods for working with objects by managing the memory associated with them and querying them. NSString and NSMutableString A class used for string manipulation, representing a Unicode string (most typically using UTF-16 as its internal format). NSString is immutable, and thus can only be initialized but not modified. NSMutableString is a modifiable version. NSValue and NSNumber NSValue is a wrapper class for C data types, and NSNumber is a wrapper class for C number data types such as int, double, and float. The data structures in Foundation Kit can only hold objects, not primitive types, so wrappers such as NSValue and NSNumber are used in those data structures. NSArray and NSMutableArray A dynamic array of objects, supporting constant-time indexing. NSArray is an immutable version that can only be initialized with objects but not modified. NSMutableArray may be modified by adding and removing objects. NSDictionary and NSMutableDictionary An associative data container of key-value pairs with unique keys. Searching and element addition and removal (in the case of NSMutableDictionary) is faster-than-linear. However, the order of the elements within the container is not guaranteed. NSSet and NSMutableSet An associative container of unique keys, similar to NSDictionary, with the difference that members do not contain a data object. NSData and NSMutableData A wrapper for raw byte data. An object of this type can dynamically allocate and manage its data, or it can refer to data owned by and managed by something else (such as a static numeric array). 
NSDate, NSTimeZone and NSCalendar Classes that store times and dates and represent calendrical information. They offer methods for calculating date and time differences. Together with NSLocale, they provide methods for displaying dates and times in many formats, and for adjusting times and dates based on location in the world. Major implementations macOS and iOS The Foundation Kit is part of the macOS Cocoa API. Beginning as the successor to OPENSTEP/Mach, this framework has deviated from OpenStep compliance, and is in some places incompatible. The Foundation Kit is in the iOS Cocoa Touch API. This framework is based on macOS Cocoa. GNUstep The Foundation Kit is implemented in GNUstep's Base Package (libs-base). This implementation is mostly complete (4 classes are missing) and aims to be compatible with both the OpenStep API and later macOS additions. The missing classes have been dropped by Apple as well. Cocotron The Foundation Kit is implemented in Cocotron, an open-source imp
https://en.wikipedia.org/wiki/CCT
CCT may refer to: Computation Computational complexity theory Computer-Controlled Teletext, an electronic circuit, see Teletext Internet Computer Chess Tournament Economics Compulsory Competitive Tendering, see Best value Conditional cash transfer Currency Carry Trade, see Carry (investment) Education Center for Computation and Technology at Louisiana State University, USA Clarkson College of Technology, the original name of Clarkson University Communication, Culture & Technology, M.A. program at Georgetown University College of Ceramic Technology at Kolkata, India Centre for Converging Technologies, University of Rajasthan at Jaipur, India Cisco Certified Technician, an IT certification from Cisco Systems Government Congo Chine Télécoms, now Orange RDC, a company of the Democratic Republic of the Congo Constitutional Court of Thailand United States Air Force Combat Control Team Medicine and psychology Caring Cancer Trust Central corneal thickness Certificate of Completion of Training, which doctors in the UK receive on completion of their specialist training Client-Centered Therapy, see Person-centered psychotherapy Cognitive complexity theory Controlled Cord Traction, a technique used to manage certain types of Postpartum haemorrhage Cortical collecting tubule in kidney Religion Christian Churches Together, an ecumenical organization Christian Community Theater, a theater program for ages eight to adult Churches Conservation Trust, a charity to conserve redundant churches in England Science Carbon capture technology, various technologies used in carbon capture Coal pollution mitigation ("clean coal") technology Cold cathode tube Colossal carbon tube Continuous cooling transformation Correlated color temperature GCxGC Catch connective tissue CCT, a codon for the amino acid Proline Social science Consumer culture theory Sports Coca-Cola Tigers, former basketball team Transportation California Coastal Trail Capital Crescent Trail, Washington, DC Central California 
Traction Company, railroad in California, reporting marks CCT Cobb Community Transit serving Cobb County Georgia (US), now known as CobbLinc Corridor Cities Transitway, a proposed transit line in Montgomery County, Maryland Cotswold Canals Trust, a canal restoration trust in southern England Covered Carriage Truck, a Mk1 British Rail carriage Cross City Tunnel, a road tunnel in Sydney
https://en.wikipedia.org/wiki/VCL
VCL may refer to: Computing Varnish Configuration Language, a domain-specific language used for configuring the Varnish Proxy / Server Video Coding Layer, a layer in H.264/AVC and HEVC Virus Creation Laboratory, an MS-DOS program designed to create computer viruses Visual Component Library, a programming library for Delphi and C++Builder Visual Class Library, an internal part of OpenOffice.org and LibreOffice Voluntary collective licensing, an alternative approach to solve the problem of software piracy Other uses Vinculin, a mammal protein Vickers-Carden-Loyd tankette, a British tankette Voluntary Committee of Lawyers, a former organization to repeal prohibition of alcohol in the US Vampire Cheerleaders, a manga series Chu Lai Airport (IATA code: VCL)
https://en.wikipedia.org/wiki/Sydney%20Trains%20M%20set
The Sydney Trains M sets, also referred to as the Millennium trains, are a class of electric multiple units that operate on the Sydney Trains network. Built by EDi Rail between 2002 and 2005, the first sets initially entered service under the CityRail brand on 1 July 2002 after short delays due to electrical defects. The M sets were built as "fourth generation" trains for Sydney's suburban rail fleet, replacing the 1960s Tulloch carriages and providing extra capacity on the suburban rail network. The sets currently operate on the T2 Inner West & Leppington, T3 Bankstown, T5 Cumberland, T7 Olympic Park and T8 Airport & South lines. Design The Millennium train, like the entire Sydney Trains fleet and electric NSW TrainLink fleet, is a double decker. It is a four car consist, with the middle two cars being non-control motor cars and the two outer cars being driving control trailer cars fitted with the pantograph. The Millennium train was the first to be equipped with an AC drive system unlike the Tangara, which has a DC drive system. The sets usually operate in eight-car formations with two four-car sets combined. While the Millennium train concept is an evolution of the Tangara concept (manufactured by A Goninan & Co), the Millennium train introduced new features such as internal electronic destination indicators, automated digital voice announcements for upcoming stops, a return to reversible seating, surveillance cameras, wider stairways, a new safety yellow colour scheme, and push-button opened internal doors. The Millennium Train also introduced crumple zones to absorb impact in a collision. Interiors were designed by Transport Design International. The train also features emergency help points, allowing passengers to contact the train crew in an emergency. The help points are located on the sides of the stairwell to the upper deck. 
There are two help points at each location, with a large one at face height with a microphone and speaker, and a lower one with a microphone only. There are also emergency door releases which were retrofitted to the trains. These allow passengers to manually open the doors in an emergency, as recommended in the report for the Waterfall rail accident. The retrofit program was stated as having been completed in November 2014. As with the T, A and B sets, the M sets feature Scharfenberg couplers. M sets are wide, being classed by Transport for NSW as medium width trains, which allows them to operate within the whole Sydney Trains suburban network. Unlike sets M2–M35, set M1 has a slightly different interior design with differently coloured doors and different seat handles for unknown reasons. Delivery The cars were constructed by EDi Rail at Cardiff Workshops. The contract included a 15-year maintenance agreement with EDi Rail to maintain the trains at a specialised maintenance centre at Eveleigh. During testing and initial revenue service, they ran as four car sets, with eight car sets commen
https://en.wikipedia.org/wiki/Sydney%20Trains%20T%20set
The T sets, also referred to as the Tangara trains, are a class of electric multiple units that currently operate on the Sydney Trains network. Built by A Goninan & Co, the sets entered service between 1988 and 1995, initially under the State Rail Authority and later on CityRail. The T sets were built as "third-generation" trains for Sydney's rail fleet, coinciding with the final withdrawals of the "Red Rattler" sets from service in the late 1980s and early 1990s. The Tangaras were initially built as two classes; the long-distance G sets and the suburban T sets, before being merged after successive refurbishments. Design The Tangara is a double-deck four-car set, with the two outer cars being driving control trailers (carrying a D prefix) that are fitted with one pantograph each and the middle two cars being non-control motor cars (carrying an N prefix). All sets are equipped with chopper control. Unlike most other Sydney Trains rolling stock, the seats on the suburban T sets are fixed, meaning that half the seats face backwards. Former G sets, however, do have reversible seats. History Initial delivery In July 1986, the Government of New South Wales awarded A Goninan & Co a contract for 450 carriages. In 1993, it was decided that the last 80 carriages of the order would be built to a modified design to operate peak-hour services to Wyong, Port Kembla and Dapto. In 1996, five spare driving trailers were ordered. The Tangara name is of Aboriginal origin, meaning "to go". Two subclasses of Tangara were built, the suburban sets designated as T sets, and outer-suburban sets originally designated as G sets. The T sets replaced the first generation of Sydney's electric rolling stock. The G sets differed from the T sets in originally having manual door buttons, high-backed reversible seats, toilets, fresh water dispensers and luggage racks. 
Additionally, the G sets were delivered with a revised design at the front and rear of the train, notably an angular cutout in the bottom of their noses. Additionally, the pinstriped grey panels below the cab windows were replaced with light orange panels for improved visibility. All T sets have a number plate below a hundred while all G sets are numbered at or above. The first train (set T20) was unveiled at Sydney Central in December 1987, heavily promoted as the "train of the 21st century", operating a promotional service on 28 January 1988 designated TAN1, and entering regular service on 12 April 1988. The final T set (set T59, formerly T92) was delivered in February 1994 and the final G set (set T100, formerly G32) in October 1995. The cars built were: T set driving trailer cars: D6101-D6284 with additional spare cars D6285-D6289 T set non-driving motor cars: N5101-N5284 with additional spare car N5285 G set driving trailer cars: OD6801-OD6840 with additional spare car OD6841 G set non-driving motor cars: ON5801-ON5820 G set non-driving motor cars with toilet: ONL5851-ONL5870 Set G7 was fitted with an AC d
https://en.wikipedia.org/wiki/Polyglot%20%28computing%29
In computing, a polyglot is a computer program or script (or other file) written in a valid form of multiple programming languages or file formats. The name was coined by analogy to multilingualism. A polyglot file is composed by combining syntax from two or more different formats. When the file formats are to be compiled or interpreted as source code, the file can be said to be a polyglot program, though file formats and source code syntax are both fundamentally streams of bytes, and exploiting this commonality is key to the development of polyglots. Polyglot files have practical applications in compatibility, but can also present a security risk when used to bypass validation or to exploit a vulnerability. History Polyglot programs have been crafted as challenges and curios in hacker culture since at least the early 1990s. A notable early example, named simply polyglot was published on the Usenet group rec.puzzles in 1991, supporting 8 languages, though this was inspired by even earlier programs. In 2000, a polyglot program was named a winner in the International Obfuscated C Code Contest. In the 21st century, polyglot programs and files gained attention as a covert channel mechanism for propagation of malware. Construction A polyglot is composed by combining syntax from two or more different formats, leveraging various syntactic constructs that are either common between the formats, or constructs that are language specific but carrying different meaning in each language. A file is a valid polyglot if it can be successfully interpreted by multiple interpreting programs. For example, a PDF-Zip polyglot might be opened as both a valid PDF document and decompressed as a valid zip archive. To maintain validity across interpreting programs, one must ensure that constructs specific to one interpreter are not interpreted by another, and vice versa. 
This is often accomplished by hiding language-specific constructs in segments interpreted as comments or plain text of the other format. Examples C, PHP, and Bash Two commonly used techniques for constructing a polyglot program are to make use of languages that use different characters for comments, and to redefine various tokens as others in different languages. These are demonstrated in this public domain polyglot written in ANSI C, PHP and bash:

#define a /*
#<?php
echo "\010Hello, world!\n";// 2> /dev/null > /dev/null \ ;
// 2> /dev/null; x=a;
$x=5; // 2> /dev/null \ ;
if (($x))
// 2> /dev/null; then
return 0;
// 2> /dev/null; fi
#define e ?>
#define b */
#include <stdio.h>
#define main() int main(void)
#define printf printf(
#define true )
#define function
function main()
{
printf "Hello, world!\n"true/* 2> /dev/null | grep -v true*/;
return 0;
}
#define c /*
main
#*/
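The PDF-Zip case mentioned earlier can be demonstrated with a small Python sketch. A real polyglot needs a syntactically complete PDF body; here only the PDF magic bytes are prepended, which is already enough to show why naive magic-byte sniffing and a zip reader disagree about the same file.

```python
import io
import zipfile

def make_pdf_zip_stub(payload: bytes) -> bytes:
    """Prepend PDF magic bytes to a zip archive. NOT a valid PDF --
    just enough to fool magic-byte sniffers, since zip readers locate
    the archive from the end of the file rather than the start."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("hello.txt", payload)
    return b"%PDF-1.4\n% polyglot stub\n" + buf.getvalue()

blob = make_pdf_zip_stub(b"hi")
assert blob.startswith(b"%PDF")  # sniffed as a PDF by leading bytes...
with zipfile.ZipFile(io.BytesIO(blob)) as zf:  # ...yet reads as a zip
    assert zf.read("hello.txt") == b"hi"
```

The asymmetry exploited here, one format anchored at the start of the file and the other at the end, is a common ingredient in real polyglot files.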
https://en.wikipedia.org/wiki/MDM
MDM may refer to:

Computers and data
- Master data management, the organization and control of reference or master data shared by disparate IT systems and groups
- Metadata management, storing and organizing information about other information
- Mobile device management, software for the administration of smartphones and other mobile devices
- Multiplexer-demultiplexer
- Meter data management, data storage and management software

Entertainment
- Melodic death metal, a music genre
- M-D-Emm, a British electronic music group
- Mere Dead Men, a British punk band
- Modern Drunkard, a magazine
- Moi dix Mois, a Japanese metal band
- My Dear Melancholy, an album by The Weeknd

Science and medicine
- Mdm2 protein, encoded by the MDM2 gene in humans
- Medical decision-making, part of differential diagnosis in clinical medicine
- Multiple drafts model, a theory of consciousness
- Portal of Medical Data Models, medical research infrastructure
- Music-dependent memory, a subtype of context-dependent memory

Organizations and businesses
- Democratic Movement of Mozambique (Movimento Democrático de Moçambique), a political party
- Mass Democratic Movement, part of the United Democratic Front in South Africa
- Médecins du Monde (MdM), a medical humanitarian organisation
- Movement for a Democratic Military, a GI antiwar and resistance organization during the Vietnam War
- MDM Bank in Russia

Other uses
- Mayogo language (ISO 639-3:mdm), spoken in the DR Congo
- MDM Observatory, in Arizona
- MDM-1 Fox, a glider aircraft
- MDM Motorsports, an American professional stock car racing team
- Mechanically deboned meat, a meat-handling process
https://en.wikipedia.org/wiki/GNU%20Project
The GNU Project is a free software, mass collaboration project announced by Richard Stallman on September 27, 1983. Its goal is to give computer users freedom and control in their use of their computers and computing devices by collaboratively developing and publishing software that gives everyone the rights to freely run the software, copy and distribute it, study it, and modify it. GNU software grants these rights in its license. To ensure that all of the software on a computer grants its users these freedoms (to use, share, study, and modify it), even the most fundamental and important part, the operating system (including all its numerous utility programs), needed to be free software. Stallman decided to call this operating system GNU (a recursive acronym meaning "GNU's not Unix!"), basing its design on that of Unix, a proprietary operating system. According to its manifesto, the founding goal of the project was to build a free operating system, and if possible, "everything useful that normally comes with a Unix system so that one could get along without any software that is not free." Development began in January 1984. In 1991, the Linux kernel appeared, developed outside the GNU Project by Linus Torvalds, and in December 1992 it was made available under version 2 of the GNU General Public License. Combined with the operating system utilities already developed by the GNU Project, it allowed for the first operating system that was free software, commonly known as Linux. The project's current work includes software development, awareness building, political campaigning, and sharing of new material. Origins Richard Stallman announced his intent to start coding the GNU Project in a Usenet message in September 1983. Despite never having used Unix before, Stallman felt that it was the most appropriate system design to use as a basis for the GNU Project, as it was portable and "fairly clean".
When the GNU Project first started, it had an Emacs text editor with Lisp for writing editor commands, a source-level debugger, a yacc-compatible parser generator, and a linker. The GNU system required its own C compiler and tools to be free software, so these also had to be developed. By June 1987, the project had accumulated and developed free software for an assembler, an almost finished portable optimizing C compiler (GCC), an editor (GNU Emacs), and various Unix utilities (such as ls, grep, awk, make and ld). The project also had an initial kernel, which needed further work. Once the kernel and the compiler were finished, GNU could be used for program development. The main goal was to create many other applications so that the system would resemble Unix. GNU was able to run Unix programs but was not identical to it. GNU incorporated longer file names, file version numbers, and a crashproof file system. The GNU Manifesto was written to gain support and participation from others for the project. Programmers were encouraged to take part in any aspect of the project.
https://en.wikipedia.org/wiki/Shared-nothing%20architecture
A shared-nothing architecture (SN) is a distributed computing architecture in which each update request is satisfied by a single node (processor/memory/storage unit) in a computer cluster. The intent is to eliminate contention among nodes. Nodes do not share (independently access) the same memory or storage. One alternative architecture is shared everything, in which requests are satisfied by arbitrary combinations of nodes; this may introduce contention, as multiple nodes may seek to update the same data at the same time. SN eliminates single points of failure, allowing the overall system to continue operating despite failures in individual nodes and allowing individual nodes to upgrade hardware or software without a system-wide shutdown. A SN system can scale simply by adding nodes, since no central resource bottlenecks the system. In databases, a term for the part of a database on a single node is a shard. A SN system typically partitions its data among many nodes. A refinement is to replicate commonly used but infrequently modified data across many nodes, allowing more requests to be resolved on a single node. History Michael Stonebraker at the University of California, Berkeley used the term in a 1986 database paper. Teradata delivered the first SN database system in 1983. Tandem Computers' NonStop systems, a shared-nothing implementation in hardware and software, were released to market in 1976. Tandem Computers later released NonStop SQL, a shared-nothing relational database, in 1984. Applications Shared-nothing is popular for web development. Shared-nothing architectures are prevalent for data warehousing applications, although requests that require data from multiple nodes can dramatically reduce throughput.
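The single-owner-per-key idea behind sharding can be sketched in a few lines; this is an illustrative toy (all class, method, and key names here are hypothetical, not from any particular database) showing deterministic hash partitioning so that each update is handled by exactly one node:

```python
import hashlib


class Node:
    """One node in the cluster; its store is private (its "shard")."""

    def __init__(self, name):
        self.name = name
        self.store = {}  # never shared with or accessed by other nodes

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)


class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def _owner(self, key):
        # Deterministic hash partitioning: every key maps to exactly one
        # node, so an update touches a single node and there is no
        # cross-node contention on that key.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._owner(key).put(key, value)

    def get(self, key):
        return self._owner(key).get(key)


cluster = Cluster([Node("n0"), Node("n1"), Node("n2")])
cluster.put("user:42", "alice")
print(cluster.get("user:42"))
```

Because routing is a pure function of the key, reads and writes for the same key always land on the same node, and the cluster scales by adding nodes and re-partitioning; real systems add replication and rebalancing on top of this basic scheme.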