| source | text |
|---|---|
https://en.wikipedia.org/wiki/Point%20cloud | A point cloud is a discrete set of data points in space. The points may represent a 3D shape or object. Each point position has its set of Cartesian coordinates (X, Y, Z). Point clouds are generally produced by 3D scanners or by photogrammetry software, which measure many points on the external surfaces of objects around them. As the output of 3D scanning processes, point clouds are used for many purposes, including to create 3D computer-aided design (CAD) models for manufactured parts, for metrology and quality inspection, and for a multitude of visualizing, animating, rendering, and mass customization applications.
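A point cloud is simple to work with programmatically: it is just an unordered set of coordinates. The sketch below (illustrative only; the helper names and coordinate values are made up) represents a cloud as a list of (X, Y, Z) tuples and derives two quantities commonly used in downstream processing, the centroid and the axis-aligned bounding box:

```python
# A point cloud as a plain list of (x, y, z) tuples.
# Values are invented for illustration.

def centroid(points):
    """Mean position of all points in the cloud."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def bounding_box(points):
    """Axis-aligned bounding box as (min_corner, max_corner)."""
    mins = tuple(min(p[i] for p in points) for i in range(3))
    maxs = tuple(max(p[i] for p in points) for i in range(3))
    return mins, maxs

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (1.0, 2.0, 4.0)]
print(centroid(cloud))       # (0.5, 1.0, 1.0)
print(bounding_box(cloud))   # ((0.0, 0.0, 0.0), (1.0, 2.0, 4.0))
```

Real scans contain millions of points, so production tools store them in packed arrays or spatial indexes rather than Python lists, but the data model is the same.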
Alignment and registration
Point clouds are often aligned with 3D models or with other point clouds, a process termed point set registration.
For industrial metrology or inspection using industrial computed tomography, the point cloud of a manufactured part can be aligned to an existing model and compared to check for differences. Geometric dimensions and tolerances can also be extracted directly from the point cloud.
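Once a measured cloud is aligned to the reference model, the comparison step reduces to measuring how far each measured point lies from the reference. A toy sketch of that deviation check (brute-force nearest neighbour; function names are illustrative, and real inspection software uses spatial indexes and signed surface distances):

```python
import math

def nearest_distance(p, reference):
    """Distance from point p to its nearest neighbour in the reference set."""
    return min(math.dist(p, q) for q in reference)

def max_deviation(measured, reference):
    """Largest point-to-reference distance: a simple pass/fail metric."""
    return max(nearest_distance(p, reference) for p in measured)

# Invented coordinates: the measured part deviates slightly from the model.
reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
measured  = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.0), (0.0, 1.1, 0.0)]
print(max_deviation(measured, reference))  # ≈ 0.1
```

A part passes inspection if this deviation stays within the specified tolerance band.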
Conversion to 3D surfaces
While point clouds can be directly rendered and inspected, point clouds are often converted to polygon mesh or triangle mesh models, non-uniform rational B-spline (NURBS) surface models, or CAD models through a process commonly referred to as surface reconstruction.
There are many techniques for converting a point cloud to a 3D surface. Some approaches, like Delaunay triangulation, alpha shapes, and ball pivoting, build a network of triangles over the existing vertices of the point cloud, while other approaches convert the point cloud into a volumetric distance field and reconstruct the implicit surface so defined through a marching cubes algorithm.
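The volumetric route mentioned above starts by turning the point cloud into a distance field sampled on a grid; the implicit surface then sits where that field crosses a level set, which is what marching cubes extracts. A rough sketch of the first step (brute force, unsigned distances, invented sample data):

```python
import math

def distance_field(points, grid_coords):
    """Unsigned distance from each grid node to the nearest cloud point.
    Brute force for clarity; real pipelines use KD-trees or voxel hashing."""
    return {g: min(math.dist(g, p) for p in points) for g in grid_coords}

# Hypothetical cloud sampled from a unit circle in the z = 0 plane.
cloud = [(math.cos(t), math.sin(t), 0.0)
         for t in [i * 2 * math.pi / 32 for i in range(32)]]

grid = [(x * 0.5, y * 0.5, 0.0) for x in range(-4, 5) for y in range(-4, 5)]
field = distance_field(cloud, grid)

# Nodes with a small field value lie close to the implicit surface;
# marching cubes would triangulate the zero crossing of a signed
# version of this field.
near_surface = [g for g, d in field.items() if d < 0.25]
print(len(near_surface))
```

The sign (inside vs. outside) is usually recovered from scanner normals, which is the part this unsigned sketch leaves out.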
In geographic information systems, point clouds are one of the sources used to make digital elevation model of the terrain. They are also used to generate 3D models of urban environments. Drones are often used to collect a series of RGB images which can be later processed on a computer vision algorithm platform such as on AgiSoft Photoscan, Pix4D, DroneDeploy or Hammer Missions to create RGB point clouds from where distances and volumetric estimations can be made.
Point clouds can also be used to represent volumetric data, as is sometimes done in medical imaging. Using point clouds, multi-sampling and data compression can be achieved.
MPEG Point Cloud Compression
MPEG began standardizing point cloud compression (PCC) with a Call for Proposal (CfP) in 2017. Three categories of point clouds were identified: category 1 for static point clouds, category 2 for dynamic point clouds, and category 3 for LiDAR sequences (dynamically acquired point clouds). Two technologies were finally defined: G-PCC (Geometry-based PCC, ISO/IEC 23090 part 9) for category 1 and category 3; and V-PCC (Video-based PCC, ISO/IEC 23090 part 5) for category 2. The first test models were developed in October 2017, one for G-PCC (TMC13) and another one for V-PCC (TMC2). |
https://en.wikipedia.org/wiki/Computability%20theory | Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.
Basic questions addressed by computability theory include:
What does it mean for a function on the natural numbers to be computable?
How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?
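The standard answer to the first question is given by a machine model: a function on the natural numbers is computable if some Turing machine computes it. A minimal simulator makes the definition concrete (the rule-table encoding and the unary successor machine below are illustrative choices, not a standard API):

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.
    rules maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left), 0 (stay) or +1 (right)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# The successor function on unary numerals: scan right past the 1s,
# append one more 1, then halt.
successor = {
    ("start", "1"): ("start", "1", +1),   # skip existing 1s
    ("start", "_"): ("halt",  "1",  0),   # write a trailing 1 and halt
}
print(run_tm(successor, "111"))  # → "1111" (3 + 1 = 4 in unary)
```

The Church–Turing thesis asserts that every function computable by any effective procedure can be computed by some machine of this kind; the `max_steps` cutoff is a reminder that, in general, whether a machine halts cannot be decided in advance.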
Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages.
Introduction
Computability theory originated in the 1930s, with work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post.
The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis" and "Turing's thesis". Nowadays these are often considered a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis.
With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church and Turing, inspired by techniques Gödel had used to prove his incompleteness theorems in 1931, independently demonstrated that the Entscheidungsproblem is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false.
Many problems in mathematics have been shown to be undecidable after these initial examples were established. In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in |
https://en.wikipedia.org/wiki/Image%20registration | Image registration is the process of transforming different sets of data into one coordinate system. Data may be multiple photographs, data from different sensors, times, depths, or viewpoints. It is used in computer vision, medical imaging, military automatic target recognition, and compiling and analyzing images and data from satellites. Registration is necessary in order to be able to compare or integrate the data obtained from these different measurements.
Algorithm classification
Intensity-based vs feature-based
Image registration or image alignment algorithms can be classified into intensity-based and feature-based. One of the images is referred to as the moving or source image and the others are referred to as the target, fixed or sensed images. Image registration involves spatially transforming the source/moving image(s) to align with the target image. The reference frame in the target image is stationary, while the other datasets are transformed to match the target. Intensity-based methods compare intensity patterns in images via correlation metrics, while feature-based methods find correspondence between image features such as points, lines, and contours. Intensity-based methods register entire images or sub-images. If sub-images are registered, centers of corresponding sub-images are treated as corresponding feature points. Feature-based methods establish a correspondence between a number of especially distinct points in images. Knowing the correspondence between a number of points in images, a geometrical transformation is then determined to map the target image to the reference image, thereby establishing point-by-point correspondence between the reference and target images. Methods combining intensity-based and feature-based information have also been developed.
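A stripped-down version of the intensity-based idea: exhaustively try candidate translations and keep the one that maximizes a correlation score between the fixed and shifted moving signals. The 1D signals and function names below are invented for illustration; real registration works in 2D/3D with normalized metrics and smarter optimizers.

```python
def correlation_score(fixed, moving, shift):
    """Sum of intensity products over the overlap of fixed and shifted moving."""
    score = 0.0
    for i, value in enumerate(moving):
        j = i + shift
        if 0 <= j < len(fixed):
            score += value * fixed[j]
    return score

def register_translation(fixed, moving, max_shift=5):
    """Return the integer shift that best aligns moving onto fixed."""
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: correlation_score(fixed, moving, s))

fixed  = [0, 0, 0, 1, 5, 1, 0, 0]   # bright feature at index 4
moving = [0, 1, 5, 1, 0, 0, 0, 0]   # same feature at index 2
print(register_translation(fixed, moving))  # → 2
```

The recovered shift of 2 is exactly the displacement between the two bright features, which is the correspondence an intensity-based method finds without ever extracting explicit feature points.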
Transformation models
Image registration algorithms can also be classified according to the transformation models they use to relate the target image space to the reference image space. The first broad category of transformation models includes linear transformations, which include rotation, scaling, translation, and other affine transforms. Linear transformations are global in nature, thus, they cannot model local geometric differences between images.
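A linear (affine) model can be written as a 2×2 matrix plus a translation vector; applying it moves every point by the same global rule, which is precisely why it cannot express local warps. A small sketch (parameter names are illustrative):

```python
import math

def make_affine(angle_deg=0.0, scale=1.0, tx=0.0, ty=0.0):
    """Rotation + uniform scale as a 2x2 matrix, plus a translation."""
    a = math.radians(angle_deg)
    return (scale * math.cos(a), -scale * math.sin(a),
            scale * math.sin(a),  scale * math.cos(a), tx, ty)

def apply_affine(t, point):
    m00, m01, m10, m11, tx, ty = t
    x, y = point
    return (m00 * x + m01 * y + tx, m10 * x + m11 * y + ty)

# Rotate 90 degrees and translate by (1, 0): a single global rule
# applied identically to every point in the image.
t = make_affine(angle_deg=90, scale=1.0, tx=1.0, ty=0.0)
print(apply_affine(t, (1.0, 0.0)))  # ≈ (1.0, 1.0)
```

Nonrigid models, by contrast, let the displacement vary from point to point, for example by summing radial basis functions centered on landmarks.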
The second category of transformations allow 'elastic' or 'nonrigid' transformations. These transformations are capable of locally warping the target image to align with the reference image. Nonrigid transformations include radial basis functions (thin-plate or surface splines, multiquadrics, and compactly-supported transformations), physical continuum models (viscous fluids), and large deformation models (diffeomorphisms).
Transformations are commonly described by a parametrization, where the model dictates the number of parameters. For instance, the translation of a full image can be described by a single parameter, a translation vector. These models are called parametric models. Non-parametric models on the other ha |
https://en.wikipedia.org/wiki/Finder%20%28software%29 | The Finder is the default file manager and graphical user interface shell used on all Macintosh operating systems. Described in its "About" window as "The Macintosh Desktop Experience", it is responsible for the launching of other applications, and for the overall user management of files, disks, and network volumes. It was introduced with the first Macintosh computer, and also exists as part of GS/OS on the Apple IIGS. It was rewritten completely with the release of Mac OS X in 2001.
In a tradition dating back to the Classic Mac OS of the 1980s and 1990s, the Finder icon is the smiling screen of a computer, known as the Happy Mac logo.
Features
The Finder uses a view of the file system that is rendered using a desktop metaphor; that is, the files and folders are represented as appropriate icons. It uses a similar interface to Apple's Safari browser, where the user can click on a folder to move to it and move between locations using "back" and "forward" arrow buttons. Like Safari, the Finder uses tabs to allow the user to view multiple folders; these tabs can be pulled off the window to make them separate windows. There is a "favorites" sidebar of commonly used and important folders on the left of the Finder window.
The classic Mac OS Finder uses a spatial metaphor quite different from the more browser-like approach of the modern macOS Finder. In the classic Finder, opening a new folder opens the location in a new window: Finder windows are 'locked' so that they only ever display the contents of one folder. It also allows extensive customization, with the user being able to give folders custom icons matching their content. This approach emphasizes the different locations of files within the operating system, but navigating to a folder nested inside multiple other folders fills the desktop with a large number of windows that the user may not wish to have open, and these must then be closed individually. Holding down the option key when opening a folder would also close its parent, but this trick was not discoverable and remained the purview of power users.
The modern Finder uses macOS graphics APIs to display previews of a range of files, such as images, applications and PDF files. The Quick Look feature allows users to quickly examine documents and images in more detail from the Finder by pressing the space bar, without opening them in a separate application. The user can choose how to view files, with options such as large icons showing previews of files, a list with details such as date of last creation or modification, a Gallery View (replacing the previous Cover Flow in macOS Mojave), and a "column view" influenced by macOS's direct ancestor NeXTSTEP.
The modern Finder displays some aspects of the file system outside its windows. Mounted external volumes and disk image files can be displayed on the desktop. There is a trash can on the Dock in macOS, to which files can be dragged to mark them for deletion, and to which drives can |
https://en.wikipedia.org/wiki/MSX | MSX is a standardized home computer architecture, announced by ASCII Corporation on June 16, 1983. It was initially conceived by Microsoft as a product for the Eastern sector, and jointly marketed by Kazuhiko Nishi, the director at ASCII Corporation. Microsoft and Nishi conceived the project as an attempt to create unified standards among various home computing system manufacturers of the period, in the same fashion as the VHS standard for home video tape machines. The first MSX computer sold to the public was a Mitsubishi ML-8000, released on October 21, 1983, thus marking its official release date.
MSX systems were popular in Japan and several other countries. There are differing accounts of MSX sales. One source claims 9 million MSX units were sold worldwide, whereas ASCII Corporation founder Kazuhiko Nishi claims that 3 million were sold in Japan and 1 million overseas. Despite Microsoft's involvement, few MSX-based machines were released in the United States.
The meaning of the acronym MSX remains a matter of debate. In 2001, Kazuhiko Nishi recalled that many assumed that it was derived from "Microsoft Extended", referring to the built-in Microsoft Extended BASIC (MSX BASIC). Others believed that it stood for "Matsushita-Sony". Nishi said that the team's original definition was "Machines with Software eXchangeability", although in 1985 he said it was named after the MX missile. According to his 2020 book, he considered that the name of the new standard should consist of three letters, like VHS. He felt "MSX" was fitting because it could mean "the next of Microsoft", and it also contains the first letters of Matsushita (Panasonic) and Sony.
Before the success of Nintendo's Family Computer, the MSX was the platform that major Japanese game studios such as Konami and Hudson Soft developed for. The Metal Gear series, for example, was first written for MSX hardware.
History
In the early 1980s, most home computers manufactured in Japan such as the NEC PC-6001 and PC-8000 series, Fujitsu's FM-7 and FM-8, and Hitachi's Basic Master featured a variant of the Microsoft BASIC interpreter integrated into their on-board ROMs. The hardware design of these computers and the various dialects of their BASICs were incompatible. Other Japanese consumer electronics firms such as Panasonic, Canon, Casio, Yamaha, Pioneer, and Sanyo were searching for ways to enter the new home computer market.
Major Japanese electronics companies entered the computer market in the 1960s, and Panasonic (Matsushita Electric Industrial) was also developing mainframe computers. The Japanese economy was facing a recession after the 1964 Summer Olympics and Panasonic decided to exit the computer business and focus on home appliances. The decision was a huge success, and Panasonic grew to become one of the largest electronics companies. In the late 1970s, the company investigated other business areas outside of home appliances. Panasonic also saw potential in the |
https://en.wikipedia.org/wiki/Jon%20Lech%20Johansen | Jon Lech Johansen (born November 18, 1983, in Harstad, Norway), also known as DVD Jon, is a Norwegian programmer who has worked on reverse engineering data formats. He wrote the DeCSS software, which decodes the Content Scramble System used for DVD licensing enforcement. Johansen is a self-trained software engineer, who quit high school during his first year to spend more time with the DeCSS case. He moved to the United States and worked as a software engineer from October 2005 until November 2006. He then moved to Norway but moved back to the United States in June 2007.
Education
In a post on his blog, he said that in the 1990s he started with a book (Programming the 8086/8088), the web ("Fravia's site was a goldmine") and IRC ("Lurked in a x86 assembly IRC channel and picked up tips from wise wizards.")
DeCSS prosecution
After Johansen released DeCSS, he was taken to court in Norway for computer hacking in 2002. The prosecution was conducted by the Norwegian National Authority for the Investigation and Prosecution of Economic and Environmental Crime (Økokrim in Norwegian), after a complaint by the US DVD Copy Control Association (DVD-CCA) and the Motion Picture Association (MPA). Johansen has denied writing the decryption code in DeCSS, saying that this part of the project originated from someone in Germany. He only developed the GUI component of the software. His defense was assisted by the Electronic Frontier Foundation. The trial opened in the Oslo District Court on December 9, 2002, with Johansen pleading not guilty to charges that had a maximum penalty of two years in prison or large fines. The defense argued that no illegal access was obtained to anyone else's information, since Johansen owned the DVDs himself. They also argued that it is legal under Norwegian law to make copies of such data for personal use. The verdict was announced on January 7, 2003, acquitting Johansen of all charges.
Two further levels of appeals were available to the prosecutors, to the appeals court and then to the Supreme Court. Økokrim filed an appeal on January 20, 2003, and it was reported on February 28 that the Borgarting Court of Appeal had agreed to hear the case. Johansen's second DeCSS trial began in Oslo on December 2, 2003, and resulted in an acquittal on December 22, 2003. Økokrim announced on January 5, 2004, that it would not appeal the case to the Supreme Court.
Other projects
In the first decade of the 21st century, Johansen's career included many other projects.
2001
In 2001, Johansen released OpenJaz, a reverse-engineered set of drivers for Linux, BeOS and Windows 2000 that allow operation of the JazPiper MP3 digital audio player without its proprietary drivers.
2003
In November 2003, Johansen released QTFairUse, an open source program which dumps the raw output of a QuickTime Advanced Audio Coding (AAC) stream to a file, which could bypass the Digital Rights Management (DRM) software used to encrypt content of music from media suc |
https://en.wikipedia.org/wiki/Amstrad%20PCW | The Amstrad PCW series is a range of personal computers produced by British company Amstrad from 1985 to 1998, and also sold under licence in Europe as the "Joyce" by the German electronics company Schneider in the early years of the series' life. The PCW, short for Personal Computer Word-processor, was targeted at the word processing and home office markets. When it was launched the cost of a PCW system was under 25% of the cost of almost all IBM-compatible PC systems in the UK, and as a result the machine was very popular both in the UK and in Europe, persuading many technophobes to venture into using computers. The series is reported to have sold 1.5 million units. However, the last two models, introduced in the mid-1990s, were commercial failures, being squeezed out of the market by the falling prices, greater capabilities and wider range of software for IBM-compatible PCs.
The series consists of the PCW 8256 and PCW 8512 (introduced in 1985), PCW 9512 (introduced in 1987), PCW 9256 (introduced in 1991), and PCW 10 and PcW16 (introduced in 1995). These models are described in detail in the "Models and features" section.
In all models, including the last, the monitor's casing included the CPU, RAM, floppy disk drives and power supply for all of the systems' components. All except the last included a printer in the price. Early models used 3-inch floppy disks, while those sold from 1991 onwards used 3½-inch floppies, which became the industry standard around the time the PCW series was launched. A variety of inexpensive products and services were launched to copy 3-inch floppies to the 3½-inch format so that data could be transferred to other machines.
All machines in the series used a Z80 CPU, initially running at 4 MHz with higher speeds in later models. RAM was 256 KB or 512 KB, depending on the model.
All models except the last included the Locoscript word processing program, the CP/M Plus operating system, Mallard BASIC and the Logo programming language at no extra cost. The last model (PcW16) used a custom GUI operating system.
A wide range of other CP/M office software and several games became available, some commercially produced and some free. Although Amstrad supplied all but the last model as text-based systems, graphical user interface peripherals and the supporting software also became available. The last model had its own unique GUI operating system and set of office applications, which were included in the price. However, none of the software for previous PCW models could run on this system.
Development and launch
In 1984, Tandy Corporation executive Steve Leininger, designer of the TRS-80 Model I, admitted that "as an industry we haven't found any compelling reason to buy a computer for the home" other than for word processing. Amstrad's founder Alan Sugar realised that most computers in the United Kingdom were used for word processing at home, and allegedly sketched an outline design for a low cost replacement for typewriters |
https://en.wikipedia.org/wiki/Communication%20channel | A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
Communicating an information signal across distance requires some form of pathway or medium. These pathways, called communication channels, use two types of media: Transmission line (e.g. twisted-pair, coaxial, and fiber-optic cable) and broadcast (e.g. microwave, satellite, radio, and infrared).
In information theory, a channel refers to a theoretical channel model with certain error characteristics. In this more general view, a storage device is also a communication channel, which can be sent to (written) and received from (reading) and allows communication of an information signal across time.
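The channel capacity mentioned above can be made concrete with the Shannon–Hartley theorem, which bounds the data rate of a band-limited channel with additive white Gaussian noise: C = B · log₂(1 + S/N). A short worked example (the 3 kHz / 30 dB figures are an illustrative telephone-grade channel, not from the source):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A telephone-grade channel: roughly 3 kHz of bandwidth at ~30 dB SNR.
snr = 10 ** (30 / 10)                      # 30 dB -> linear ratio of 1000
print(round(shannon_capacity(3000, snr)))  # ≈ 29902 bits per second
```

This is why dial-up modems topped out near 30 kbit/s over analog lines: the bound depends only on bandwidth and signal-to-noise ratio, not on the modulation scheme.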
Examples
Examples of communications channels include:
A connection between initiating and terminating communication endpoints of a telecommunication circuit.
A single path provided by a transmission medium via either
physical separation, such as by multipair cable or
electrical separation, such as by frequency-division or time-division multiplexing.
A path for conveying electrical or electromagnetic signals, usually distinguished from other parallel paths.
A data storage device which can communicate a message over time.
The portion of a storage medium, such as a track or band, that is accessible to a given reading or writing station or head.
A buffer into which messages can be put and from which they can be retrieved.
In a communications system, the physical or logical link that connects a data source to a data sink.
A specific radio frequency, pair or band of frequencies, usually named with a letter, number, or codeword, and often allocated by international agreement, for example:
Marine VHF radio uses some 88 channels in the VHF band for two-way FM voice communication. Channel 16, for example, is 156.800 MHz. In the US, seven additional channels, WX1 - WX7, are allocated for weather broadcasts.
Television channels such as North American TV Channel 2 at 55.25 MHz, Channel 13 at 211.25 MHz. Each channel is 6 MHz wide. This was based on the bandwidth required by analog television signals. Since 2006, television broadcasting has switched to digital modulation (digital television) which uses image compression to transmit a television signal in a much smaller bandwidth, so each of these physical channels has been divided into multiple virtual channels each carrying a DTV channel.
Original 2.4 GHz Wi-Fi uses channels 1 to 13, spaced in 5 MHz steps from 2412 MHz to 2472 MHz, plus channel 14 at 2484 MHz (permitted only in Japan).
The radio channel between an amateur radio repeater and an amateur radio operator uses two frequencies often 600 kHz (0.6 MHz) apar |
https://en.wikipedia.org/wiki/List%20of%20digital%20library%20projects | This is a list of digital library projects.
See also
Bibliographic database
List of academic databases and search engines
List of online databases
List of online encyclopedias
List of open-access journals
List of search engines
Digital library projects |
https://en.wikipedia.org/wiki/Standard%20Template%20Library | The Standard Template Library (STL) is a software library originally designed by Alexander Stepanov for the C++ programming language that influenced many parts of the C++ Standard Library. It provides four components called algorithms, containers, functions, and iterators.
The STL provides a set of common classes for C++, such as containers and associative arrays, that can be used with any built-in type and with any user-defined type that supports some elementary operations (such as copying and assignment). STL algorithms are independent of containers, which significantly reduces the complexity of the library.
The STL achieves its results through the use of templates. This approach provides compile-time polymorphism that is often more efficient than traditional run-time polymorphism. Modern C++ compilers are tuned to minimize abstraction penalties arising from heavy use of the STL.
The STL was created as the first library of generic algorithms and data structures for C++, with four ideas in mind: generic programming, abstractness without loss of efficiency, the Von Neumann computation model, and value semantics.
The STL and the C++ Standard Library are two distinct entities.
History
In November 1993 Alexander Stepanov presented a library based on generic programming to the ANSI/ISO committee for C++ standardization. The committee's response was overwhelmingly favorable and led to a request from Andrew Koenig for a formal proposal in time for the March 1994 meeting. The committee had several requests for changes and extensions and the committee members met with Stepanov and Meng Lee to help work out the details. The requirements for the most significant extension (associative containers) had to be shown to be consistent by fully implementing them, a task Stepanov delegated to David Musser. A proposal received final approval at the July 1994 ANSI/ISO committee meeting. Subsequently, the Stepanov and Lee document was incorporated into the ANSI/ISO C++ draft standard (parts of clauses 17 through 27).
The prospects for early widespread dissemination of the STL were considerably improved with Hewlett-Packard's decision to make its implementation freely available on the Internet in August 1994. This implementation, developed by Stepanov, Lee, and Musser during the standardization process, became the basis of many implementations offered by compiler and library vendors today.
Composition
Containers
The STL contains sequence containers and associative containers. The containers are objects that store data. The standard sequence containers include vector, deque, and list. The standard associative containers are set, multiset, map, multimap, hash_set, hash_map, hash_multiset, and hash_multimap. There are also container adaptors stack, queue, and priority_queue, which are containers with a specific interface, using other containers as implementation.
{| class="wikitable"
|-
! Container
! Description
|-
! colspan="2" | Simple containers
|-
! pair
| The pair container is a simple associative container consisting of a 2-tuple of data elements or |
https://en.wikipedia.org/wiki/Cytoskeleton | The cytoskeleton is a complex, dynamic network of interlinking protein filaments present in the cytoplasm of all cells, including those of bacteria and archaea. In eukaryotes, it extends from the cell nucleus to the cell membrane and is composed of similar proteins in the various organisms. It is composed of three main components: microfilaments, intermediate filaments, and microtubules, and these are all capable of rapid growth or disassembly depending on the cell's requirements.
A multitude of functions can be performed by the cytoskeleton. Its primary function is to give the cell its shape and mechanical resistance to deformation, and through association with extracellular connective tissue and other cells it stabilizes entire tissues. The cytoskeleton can also contract, thereby deforming the cell and the cell's environment and allowing cells to migrate. Moreover, it is involved in many cell signaling pathways and in the uptake of extracellular material (endocytosis), the segregation of chromosomes during cellular division, the cytokinesis stage of cell division, as scaffolding to organize the contents of the cell in space and in intracellular transport (for example, the movement of vesicles and organelles within the cell) and can be a template for the construction of a cell wall. Furthermore, it can form specialized structures, such as flagella, cilia, lamellipodia and podosomes. The structure, function and dynamic behavior of the cytoskeleton can be very different, depending on organism and cell type. Even within one cell, the cytoskeleton can change through association with other proteins and the previous history of the network.
A large-scale example of an action performed by the cytoskeleton is muscle contraction. This is carried out by groups of highly specialized cells working together. A main cytoskeletal component involved in muscle contraction is the microfilament. Microfilaments are composed of actin, the most abundant cellular protein. During contraction of a muscle, within each muscle cell, myosin molecular motors collectively exert forces on parallel actin filaments. Muscle contraction starts from nerve impulses, which cause increased amounts of calcium to be released from the sarcoplasmic reticulum. The increase of calcium in the cytosol allows muscle contraction to begin with the help of two proteins, tropomyosin and troponin. Tropomyosin inhibits the interaction between actin and myosin, while troponin senses the increase in calcium and releases the inhibition. This action contracts the muscle cell, and through the synchronous process in many muscle cells, the entire muscle.
History
In 1903, Nikolai K. Koltsov proposed that the shape of cells was determined by a network of tubules that he termed the cytoskeleton. The concept of a protein mosaic that dynamically coordinated cytoplasmic biochemistry was proposed by Rudolph Peters in 1929 while the term (cytosquelette, in French) wa |
https://en.wikipedia.org/wiki/Business%20logic | In computer software, business logic or domain logic is the part of the program that encodes the real-world business rules that determine how data can be created, stored, and changed. It is contrasted with the remainder of the software that might be concerned with lower-level details of managing a database or displaying the user interface, system infrastructure, or generally connecting various parts of the program.
Details and example
Business logic:
Prescribes how business objects interact with one another
Enforces the routes and the methods by which business objects are accessed and updated
Business rules:
Model real-life business objects (such as accounts, loans, itineraries, and inventories)
Business logic comprises:
Workflows that are the ordered tasks of passing documents or data from one participant (a person or a software system) to another.
Business logic should be distinguished from business rules. Business logic is the portion of an enterprise system which determines how data is transformed or calculated, and how it is routed to people or software (workflow). Business rules are formal expressions of business policy. Anything that is a process or procedure is business logic, and anything that is neither a process nor a procedure is a business rule. Welcoming a new visitor is a process (workflow) consisting of steps to be taken, whereas saying every new visitor must be welcomed is a business rule. Further, business logic is procedural whereas business rules are declarative.
For example, an e-commerce website might allow visitors to add items to a shopping cart, specify a shipping address, and supply payment information. The business logic of the website might include a workflow such as:
The sequence of events during checkout: for example, a multi-page form that first asks for the shipping address, then the billing address; the next page collects the payment method, and the last page shows a confirmation.
There will also be business rules of the website:
Adding an item more than once from the item description page increments the quantity for that item.
Specific formats that the visitor's address, email address, and credit card information must follow.
A specific communication protocol for talking to the credit card network
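The distinction above can be illustrated with a short sketch in Python. The names and checks here are hypothetical, not taken from any real e-commerce system: the business rules appear as declarative checks, while the business logic is the procedural workflow that applies them.

```python
import re

def rule_valid_email(email):
    # Business rule (declarative): the visitor's email must follow a format.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None

class Cart:
    def __init__(self):
        self.items = {}  # item id -> quantity

    def add(self, item_id):
        # Business rule: adding the same item again increments its quantity.
        self.items[item_id] = self.items.get(item_id, 0) + 1

def checkout(cart, email):
    # Business logic (procedural): the ordered workflow of checkout,
    # invoking the rules at the appropriate steps.
    if not cart.items:
        raise ValueError("cart is empty")
    if not rule_valid_email(email):
        raise ValueError("invalid email address")
    return {"items": dict(cart.items), "email": email}
```

Note how the rules could be changed (a stricter email format, say) without touching the order of steps in the workflow, and vice versa.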
The web site software also contains other code which is not considered part of business logic nor business rules:
Peripheral content not related to the core business data, such as the HTML that defines the colors, appearance, background image, and navigational structure of the site
Generic error-handling code (e.g., which displays the HTTP Error Code 500 page)
Initialization code that runs when the web server starts up the site, which sets up the system
Monitoring infrastructure to make sure all the parts of the site are working properly (e.g., the billing system is available)
Generic code for making network connections, transmitting objects to the database, parsing user input via |
https://en.wikipedia.org/wiki/Correlation | In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related.
Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as it is depicted in the so-called demand curve.
Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example, there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling. However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation).
Formally, random variables are dependent if they do not satisfy a mathematical property of probabilistic independence. In informal parlance, correlation is synonymous with dependence. However, when used in a technical sense, correlation refers to any of several specific types of mathematical operations between the tested variables and their respective expected values. Essentially, correlation is the measure of how two or more variables are related to one another. There are several correlation coefficients, often denoted ρ or r, measuring the degree of correlation. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a nonlinear function of the other). Other correlation coefficients – such as Spearman's rank correlation – have been developed to be more robust than Pearson's, that is, more sensitive to nonlinear relationships. Mutual information can also be applied to measure dependence between two variables.
Pearson's product-moment coefficient
The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient (PPMCC), or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained as the ratio of the covariance of the two variables to the product of their standard deviations (the square roots of their variances). Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton.
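The computation just described, covariance divided by the product of the standard deviations, can be sketched in plain Python:

```python
import math

def pearson(xs, ys):
    # r = cov(X, Y) / (sd(X) * sd(Y)); the 1/n normalization factors cancel,
    # so raw sums of squared deviations suffice.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear data -> 1.0
```

Because the normalization factors cancel, the result is the same whether sample or population variance conventions are used.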
A Pearson product-moment correlation coefficient attempts to establish a line of best fit through a dataset of two variables by essentially laying out the exp |
https://en.wikipedia.org/wiki/VxWorks | VxWorks is a real-time operating system (or RTOS) developed as proprietary software by Wind River Systems, a subsidiary of Aptiv. First released in 1987, VxWorks is designed for use in embedded systems requiring real-time, deterministic performance and, in many cases, safety and security certification for industries such as aerospace, defense, medical devices, industrial equipment, robotics, energy, transportation, network infrastructure, automotive, and consumer electronics.
VxWorks supports AMD/Intel architecture, POWER architecture, ARM architectures and RISC-V. The RTOS can be used in multicore asymmetric multiprocessing (AMP), symmetric multiprocessing (SMP), and mixed modes and multi-OS (via Type 1 hypervisor) designs on 32- and 64-bit processors.
VxWorks comes with the kernel, middleware, board support packages, Wind River Workbench development suite and complementary third-party software and hardware technologies. In its latest release, VxWorks 7, the RTOS has been re-engineered for modularity and upgradeability so the OS kernel is separate from middleware, applications and other packages. Scalability, security, safety, connectivity, and graphics have been improved to address Internet of Things (IoT) needs.
History
VxWorks started in the late 1980s as a set of enhancements to a simple RTOS called VRTX sold by Ready Systems (becoming a Mentor Graphics product in 1995). Wind River acquired rights to distribute VRTX and significantly enhanced it by adding, among other things, a file system and an integrated development environment. In 1987, anticipating the termination of its reseller contract by Ready Systems, Wind River developed its own kernel to replace VRTX within VxWorks.
Published in 2003 with a Wind River copyright, "Real-Time Concepts for Embedded Systems" describes the development environment, runtime setting, and system call families of the RTOS. Written by Wind River employees with a foreword by Jerry Fiddler, chairman and co-founder of Wind River, the textbook is an excellent tutorial on the RTOS. (It does not, however, replace Wind River documentation as might be needed by practicing engineers.)
Some key milestones for VxWorks include:
1980s: VxWorks adds support for 32-bit processors.
1990s: VxWorks 5 becomes the first RTOS with a networking stack.
2000s: VxWorks 6 supports SMP and adds derivative industry-specific platforms.
2010s: VxWorks adds support for 64-bit processing and introduces VxWorks 7 for IoT in 2016.
2020s: VxWorks continues to update and add support, including the ability to power the Mars 2020 lander.
Platform overview
VxWorks supports Intel architecture, Power architecture, and ARM architectures. The RTOS can be used in multi-core asymmetric multiprocessing (AMP), symmetric multiprocessing (SMP), mixed modes and multi-OS (via Type 1 hypervisor) designs on 32- and 64-bit processors.
VxWorks consists of a set of runtime components and development tools. The runtime components are an operatin |
https://en.wikipedia.org/wiki/Cut%2C%20copy%2C%20and%20paste | In human–computer interaction and user interface design, cut, copy, and paste are related commands that offer an interprocess communication technique for transferring data through a computer's user interface. The cut command removes the selected data from its original position, while the copy command creates a duplicate; in both cases the selected data is kept in temporary storage (the clipboard). The data from the clipboard is later inserted wherever a paste command is issued. The data remains available to any application supporting the feature, thus allowing easy data transfer between applications.
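The semantics described above can be modeled in a few lines of Python. This is a toy illustration of the clipboard concept, not any real windowing-system API:

```python
class Clipboard:
    """Temporary storage shared between documents (the 'clipboard')."""
    def __init__(self):
        self.contents = ""

def cut(text, start, end, clip):
    # Cut: the selection moves to the clipboard and is removed from the source.
    clip.contents = text[start:end]
    return text[:start] + text[end:]

def copy(text, start, end, clip):
    # Copy: the selection is duplicated; the source is left intact.
    clip.contents = text[start:end]
    return text

def paste(text, pos, clip):
    # Paste: clipboard contents are inserted at the given position. The
    # clipboard keeps its data, so the same selection can be pasted again,
    # including into another application.
    return text[:pos] + clip.contents + text[pos:]

clip = Clipboard()
doc = cut("hello world", 0, 6, clip)   # doc == "world", clip holds "hello "
other = paste("say: ", 5, clip)        # other == "say: hello "
```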
The command names are an interface metaphor based on the physical procedure used in manuscript editing to create a page layout.
This interaction technique has close associations with related techniques in graphical user interfaces (GUIs) that use pointing devices such as a computer mouse (by drag and drop, for example). Typically, clipboard support is provided by an operating system as part of its GUI and widget toolkit.
The capability to replicate information with ease, changing it between contexts and applications, involves privacy concerns because of the risks of disclosure when handling sensitive information. Terms like cloning, copy forward, carry forward, or re-use refer to the dissemination of such information through documents, and may be subject to regulation by administrative bodies.
History
Origins
The term "cut and paste" comes from the traditional practice in manuscript editing, whereby people would cut paragraphs from a page with scissors and paste them onto another page. This practice remained standard into the 1980s. Stationery stores sold "editing scissors" with blades long enough to cut an 8½"-wide page. The advent of photocopiers made the practice easier and more flexible.
The act of copying/transferring text from one part of a computer-based document ("buffer") to a different location within the same or different computer-based document was a part of the earliest on-line computer editors. As soon as computer data entry moved from punch-cards to online files (in the mid/late 1960s) there were "commands" for accomplishing this operation. This mechanism was often used to transfer frequently-used commands or text snippets from additional buffers into the document, as was the case with the QED text editor.
Early methods
The earliest editors (designed for teleprinter terminals) provided keyboard commands to delineate a contiguous region of text, then delete or move it. Since moving a region of text requires first removing it from its initial location and then inserting it into its new location, various schemes had to be invented to allow for this multi-step process to be specified by the user. Often this was done with a "move" command, but some text editors required that the text be first put into some temporary location for later retrieval/placement. In 1983, the Apple Lisa became the first text editing system to call that tempo |
https://en.wikipedia.org/wiki/Multiple%20instruction%2C%20multiple%20data | In computing, multiple instruction, multiple data (MIMD) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data.
MIMD architectures may be used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, and as communication switches. MIMD machines can be of either shared memory or distributed memory categories. These classifications are based on how MIMD processors access memory. Shared memory machines may be of the bus-based, extended, or hierarchical type. Distributed memory machines may have hypercube or mesh interconnection schemes.
Examples
An example of an MIMD system is the Intel Xeon Phi, descended from the Larrabee microarchitecture. These processors have multiple processing cores (up to 61 as of 2015) that can execute different instructions on different data.
Most parallel computers, as of 2013, are MIMD systems.
Shared memory model
In the shared memory model, the processors are all connected to a "globally available" memory, via either software or hardware means. The operating system usually maintains memory coherence.
From a programmer's point of view, this memory model is better understood than the distributed memory model. Another advantage is that memory coherence is managed by the operating system and not the written program. Two known disadvantages are: scalability beyond thirty-two processors is difficult, and the shared memory model is less flexible than the distributed memory model.
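A minimal shared-memory sketch, using Python threads to stand in for MIMD processors that all update one globally visible variable (the lock makes the concurrent updates coherent; real hardware and the OS handle cache coherence beneath this level):

```python
import threading

counter = 0                 # memory visible to every "processor"
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:          # serialize access to the shared location
            counter += 1

# Four asynchronous workers, each running the same update independently.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4 threads x 1000 increments -> 4000
```

As the text notes, the programmer sees one flat address space and need not manage data placement, which is what makes this model comparatively easy to understand.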
Examples of shared memory multiprocessor architectures include UMA (uniform memory access) and COMA (cache-only memory access).
Bus-based
MIMD machines with shared memory have processors which share a common, central memory. In the simplest form, all processors are attached to a bus which connects them to memory. This means that every machine with shared memory shares a common memory and a common bus system for all the clients.
For example, if we consider a bus with clients A, B, C connected on one side and P, Q, R connected on the opposite side, any one of the clients will communicate with another by means of the bus interface between them.
Hierarchical
MIMD machines with hierarchical shared memory use a hierarchy of buses (as, for example, in a "fat tree") to give processors access to each other's memory. Processors on different boards may communicate through inter-nodal buses. Buses support communication between boards. With this type of architecture, the machine may support over nine thousand processors.
Distributed memory
In distributed memory MIMD (multiple instruction, multiple data) machines, each processor has its own individual memory location. Each processor has no direct knowledge about other processor's memory. For data to be shared, it must be passed from one processor to another as a message. Since t |
https://en.wikipedia.org/wiki/Exim | Exim is a mail transfer agent (MTA) used on Unix-like operating systems. Exim is free software distributed under the terms of the GNU General Public License, and it aims to be a general and flexible mailer with extensive facilities for checking incoming e-mail.
Exim has been ported to most Unix-like systems, as well as to Microsoft Windows using the Cygwin emulation layer. Exim 4 is currently the default MTA on Debian Linux systems.
Many Exim installations exist, especially within Internet service providers and universities in the United Kingdom. Exim is also widely used with the GNU Mailman mailing list manager, and cPanel.
In March 2023, a study performed by E-Soft, Inc., estimated that 59% of the publicly reachable mail servers on the Internet ran Exim.
Origin
The first version of Exim was written in 1995 by Philip Hazel for use in the University of Cambridge Computing Service’s e-mail systems. The name initially stood for EXperimental Internet Mailer. It was originally based on an older MTA, Smail-3, but it has since diverged from Smail-3 in its design and philosophy.
Design model
Exim, like Smail, still follows the Sendmail design model, where a single binary controls all the facilities of the MTA. Exim has well-defined stages during which it gains or loses privileges.
Exim has had a number of serious security problems diagnosed over the years. Since the redesigned version 4 was released, there have been four remote code execution flaws and one conceptual flaw concerning how much trust it is appropriate to place in the run-time user; the latter was fixed in a security lockdown in revision 4.73, one of the very rare occasions when Exim has broken backwards compatibility with working configurations.
Configuration
Exim is highly configurable and therefore has features that are lacking in other MTAs. It has always had substantial facilities for mail policy controls, providing facilities for the administrator to control who may send or relay mail through the system. In version 4.x this has matured to an Access Control List based system allowing very detailed and flexible controls. The integration of a framework for content scanning, which allowed for easier integration of anti-virus and anti-spam measures, happened in the 4.x releases. This made Exim very suitable for enforcing diverse mail policies.
The configuration is done through a (typically single) configuration file, which must include the main section with generic settings and variables, as well as the following optional sections:
the access control list (ACL) section which defines behaviour during the SMTP sessions,
the routers section which includes a number of processing elements which operate on addresses (the delivery logic), each tried in turn,
the transports section which includes processing elements which transmit actual messages to destinations,
the retry section where policy on retrying messages that fail to get delivered at the first attempt is defin |
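Putting those sections together, a minimal configuration skeleton might look like the following. This is an illustrative sketch of the file layout only, not a working or safe configuration; consult the Exim documentation for real deployments.

```
# Main section: generic settings and variables come first.
primary_hostname = mail.example.org

begin acl            # behaviour during SMTP sessions
acl_check_rcpt:
  accept

begin routers        # address-processing elements, tried in turn
dnslookup:
  driver = dnslookup
  transport = remote_smtp

begin transports     # elements that actually transmit messages
remote_smtp:
  driver = smtp

begin retry          # policy for retrying temporarily failed deliveries
*  *  F,2h,15m; G,16h,1h,1.5; F,4d,6h
```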
https://en.wikipedia.org/wiki/SICP%20%28disambiguation%29 | SICP may refer to:
Structure and Interpretation of Computer Programs, an introductory computer programming book
St. Ignatius College Preparatory, Jesuit high school in San Francisco, California, U.S.
St. Ignatius College Preparatory School, Jesuit high school in Chicago, Illinois, U.S. |
https://en.wikipedia.org/wiki/Daniel%20Sleator | Daniel Dominic Kaplan Sleator (born 10 December 1953) is a Professor of Computer Science at Carnegie Mellon University, Pittsburgh, United States. In 1999, he won the ACM Paris Kanellakis Award (jointly with Robert Tarjan) for the splay tree data structure.
He was one of the pioneers in amortized analysis of algorithms, early examples of which were the analyses of the move-to-front heuristic, and splay trees. He invented many data structures with Robert Tarjan, such as splay trees, link/cut trees, and skew heaps.
The Sleator and Tarjan paper on the move-to-front heuristic first suggested the idea of comparing an online algorithm to an optimal offline algorithm, for which the term competitive analysis was later coined in a paper of Karlin, Manasse, Rudolph, and Sleator. Sleator also developed the theory of link grammars, and the Serioso music analyzer for analyzing meter and harmony in written music.
Personal life
Sleator was born to William Warner Sleator, Jr., a professor of physiology and biophysics, and Esther Kaplan Sleator, a pediatrician who did pioneering research on attention deficit disorder (ADD). He is the younger brother of William Sleator, who wrote science fiction for young adults.
Sleator commercialized the volunteer-based Internet Chess Server into the Internet Chess Club despite outcry from fellow volunteers. The Internet Chess Club has since become one of the most successful internet-based commercial chess servers.
From 2003 to 2008, Sleator co-hosted the progressive talk show Left Out on WRCT-FM with Carnegie Mellon University School of Computer Science faculty member Bob Harper.
He is also an active member of the competitive programming platform Codeforces.
External links
The CMU home page of Daniel Sleator
The Internet Chess Club
Paris Kanellakis Theory and Practice Award
Left Out radio show
https://en.wikipedia.org/wiki/SportsCenter | SportsCenter (SC) is an American daily sports news television program that serves as the flagship program and brand of American cable and satellite television network ESPN. The show covers various sports teams and athletes from around the world and often shows highlights of sports from the day. Originally broadcast only once per day, SportsCenter now has up to twelve airings each day, excluding overnight repeats. The show often covers the major sports in the U.S. including basketball, hockey, football, and baseball. SportsCenter is also known for its recaps after sports events and its in-depth analysis.
Since it premiered upon the network's launch on September 7, 1979, the show has broadcast more than 60,000 episodes, more than any other program on American television; SportsCenter is broadcast from ESPN's studio facilities in Bristol, Connecticut, Washington, D.C., and Los Angeles.
Overview and format
As of 2023, SportsCenter normally runs live at the following times:
Weekdays: 7:00–8:00 a.m.(ESPN), 2:00–3:00 p.m., 6:00–7:00 p.m. and 11:00 p.m.–1:00 a.m. ET.
Saturday: 7:00–9:00 a.m., 6:00–7:00 p.m. and 11:00 p.m.–3:00 a.m. ET.
Sunday: 7:00–9:00 a.m., 10:00 a.m.–12:00 p.m. and 11:00 p.m.–12:30 a.m. ET.
The program's runtime and starting time depend on the games' runtime. If a game overlaps the starting time of a SportsCenter edition, the edition is occasionally moved to either ESPN2 or ESPNews (depending on whether one of the networks is carrying an event) until the event concludes. Conversely, SportsCenter may start early and run longer if the preceding event finishes early or breaking sports news requires it.
Most editions of the show originate from a studio at ESPN's headquarters in Bristol, Connecticut. However, the Scott Van Pelt edition of SportsCenter has been produced out of a studio in Washington, D.C., inside the ABC News bureau since 2020, in the former studio of Around the Horn. The 1 a.m. Eastern edition of SportsCenter has been produced out of ESPN's Los Angeles Production Center at L.A. Live since 2009; that edition also is repeated during the overnight hours.
ESPN also produces short 90-second capsules known as SportsCenter Right Now, which air at select points within game telecasts on the network and sister broadcast network ABC to provide updates of other ongoing and recently concluded sporting events.
In addition to providing game highlights and news from the day in sports outside of the scheduled slate of games (including team player and management transactions, injury reports and other news), the program also features live reports from sites of sports events scheduled to be held or already concluded, extensive analysis of completed and upcoming sports events from sport-specific analysts and special contributors, and feature segments providing interviews with players, coaches, and franchise management in the headlines. In addition to airing simulcasts or network-exclusive editions on sister networks ESPN2 and E |
https://en.wikipedia.org/wiki/Plaintext | In cryptography, plaintext usually means unencrypted information pending input into cryptographic algorithms, usually encryption algorithms. This usually refers to data that is transmitted or stored unencrypted.
Overview
With the advent of computing, the term plaintext expanded beyond human-readable documents to mean any data, including binary files, in a form that can be viewed or used without requiring a key or other decryption device. Information—a message, document, file, etc.—that is to be communicated or stored in unencrypted form is referred to as plaintext.
Plaintext is used as input to an encryption algorithm; the output is usually termed ciphertext, particularly when the algorithm is a cipher. Codetext is less often used, and almost always only when the algorithm involved is actually a code. Some systems use multiple layers of encryption, with the output of one encryption algorithm becoming "plaintext" input for the next.
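A toy sketch of these terms in Python, using a repeating-key XOR for illustration only (XOR with a short key is not a secure cipher):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, b"k1")        # plaintext -> ciphertext

# Layered encryption: one algorithm's ciphertext becomes the next
# algorithm's "plaintext" input.
layered = xor_cipher(ciphertext, b"q9")

# Decrypting peels the layers off in reverse order.
recovered = xor_cipher(xor_cipher(layered, b"q9"), b"k1")
assert recovered == plaintext
```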
Secure handling
Insecure handling of plaintext can introduce weaknesses into a cryptosystem by letting an attacker bypass the cryptography altogether. Plaintext is vulnerable in use and in storage, whether in electronic or paper format. Physical security means the securing of information and its storage media from physical attack—for instance, by someone entering a building to access papers, storage media, or computers. Discarded material, if not disposed of securely, may be a security risk. Even shredded documents and erased magnetic media might be reconstructed with sufficient effort.
If plaintext is stored in a computer file, the storage media, the computer and its components, and all backups must be secure. Sensitive data is sometimes processed on computers whose mass storage is removable, in which case physical security of the removed disk is vital. In the case of securing a computer, useful (as opposed to handwaving) security must be physical (e.g., against burglary, brazen removal under cover of supposed repair, installation of covert monitoring devices, etc.), as well as virtual (e.g., operating system modification, illicit network access, Trojan programs). Wide availability of keydrives, which can plug into most modern computers and store large quantities of data, poses another severe security headache. A spy (perhaps posing as a cleaning person) could easily conceal one, and even swallow it if necessary.
Discarded computers, disk drives and media are also a potential source of plaintexts. Most operating systems do not actually erase anything—they simply mark the disk space occupied by a deleted file as 'available for use', and remove its entry from the file system directory. The information in a file deleted in this way remains fully present until overwritten at some later time when the operating system reuses the disk space. With even low-end computers commonly sold with many gigabytes of disk space and rising monthly, this 'later time' may be months later, or never. Even overwriting the portion of a di |
https://en.wikipedia.org/wiki/Data%20General | Data General Corporation was one of the first minicomputer firms of the late 1960s. Three of the four founders were former employees of Digital Equipment Corporation (DEC).
Their first product, 1969's Data General Nova, was a 16-bit minicomputer intended to both outperform and cost less than the equivalent from DEC, the 12-bit PDP-8. A basic Nova system cost two-thirds as much as a similar PDP-8 or less while running faster, offering easy expandability, being significantly smaller, and proving more reliable in the field. Combined with Data General RDOS (DG/RDOS) and programming languages like Data General Business Basic, Novas provided a multi-user platform far ahead of many contemporary systems. A series of updated Nova machines were released through the early 1970s that kept the Nova line at the front of the 16-bit mini world.
The Nova was followed by the Eclipse series which offered much larger memory capacity while still being able to run Nova code without modification. The Eclipse launch was marred by production problems and it was some time before it was a reliable replacement for the tens of thousands of Novas in the market. As the mini world moved from 16-bit to 32, DG introduced the Data General Eclipse MV/8000, whose development was extensively documented in the popular book, The Soul of a New Machine. Although DG's computers were successful, the introduction of the IBM PC in 1981 marked the beginning of the end for minicomputers, and by the end of the decade, the entire market had largely disappeared. The introduction of the Data General/One in 1984 did nothing to stop the erosion.
In a major business pivot, in 1989 DG released the AViiON series of scalable Unix systems which spanned from desktop workstations to departmental servers. This scalability was managed through the use of NUMA, allowing a number of commodity processors to work together in a single system. Following AViiON was the CLARiiON series of network-attached storage systems which became a major product line in the later 1990s. This led to a purchase by EMC, the major vendor in the storage space at that time. EMC shut down all of DG's lines except for CLARiiON, which continued sales until 2012.
History
Origin, founding and early years: Nova and SuperNova
Data General (DG) was founded by several engineers from Digital Equipment Corporation who were frustrated with DEC's management and left to form their own company. The chief founders were Edson de Castro, Henry Burkhardt III, and Richard Sogge of Digital Equipment (DEC), and Herbert Richman of Fairchild Semiconductor. The company was founded in Hudson, Massachusetts, in 1968. Harvey Newquist was hired from Computer Control Corporation to oversee manufacturing.
Edson de Castro was the chief engineer in charge of the PDP-8, DEC's line of inexpensive computers that created the minicomputer market. It was designed specifically to be used in laboratory equipment settings; as the technology improved, it was reduced in size to |
https://en.wikipedia.org/wiki/Recombination | Recombination may refer to:
Carrier generation and recombination, in semiconductors, the cancellation of mobile charge carriers (electrons and holes)
Crossover (genetic algorithm), also called recombination
Genetic recombination, the process by which genetic material is broken and joined to other genetic material
Bacterial recombination
Homologous recombination
Plasma recombination, the formation of neutral atoms from the capture of free electrons by the cations in a plasma
Recombination (chemistry), the opposite of dissociation
Cage effect, a special kind of recombination reaction that appears in condensed phases
Recombination (cosmology), the time at which protons and electrons formed neutral hydrogen in the timeline of the Big Bang |
https://en.wikipedia.org/wiki/Knoppix | KNOPPIX ( ) is an operating system based on Debian designed to be run directly from a CD / DVD (Live CD) or a USB flash drive (Live USB), one of the first live operating system distributions (just after Yggdrasil Linux). Knoppix was developed by, and named after, Linux consultant Klaus Knopper. When starting a program, it is loaded from the removable medium and decompressed into a RAM drive. The decompression is transparent and on-the-fly.
Although KNOPPIX is primarily designed to be used as a Live CD, it can also be installed on a hard disk like a typical operating system. Computers that support booting from USB devices can load KNOPPIX from a live USB flash drive or memory card.
There are two main editions: the traditional compact-disc (700 megabytes) edition and the DVD (4.7 gigabytes) "Maxi" edition. The CD edition was not updated between June 2013 and version 9.1, when CD images began to be released once again. Each main edition has two language-specific editions: English and German.
KNOPPIX mostly consists of free and open source software, but also includes some proprietary software, as long as it fulfills certain conditions.
Knoppix can be used to copy files easily from hard drives with inaccessible operating systems. The Live CD can also be used to try Linux software quickly and safely without installing another OS.
Contents
More than 1000 software packages are included on the CD edition, and more than 2600 packages are included on the DVD edition. Up to nine gigabytes can be stored on the DVD in compressed form. These packages include:
LXDE, a lightweight X11 desktop environment; default since Knoppix 6.0 and later
MPlayer, with MP3 audio, and Ogg Vorbis audio playback support
Internet access software, including the KPPP dialer and ISDN utilities
The Iceweasel web browser (based on Mozilla Firefox)
The Icedove e-mail client (based on Mozilla Thunderbird)
GIMP, an image manipulation program
Tools for data rescue and system repair
Network analysis and administration tools
LibreOffice, a comprehensive office suite
Terminal server
Hardware requirements
Minimum hardware requirements for Knoppix:
Intel/AMD-compatible processor (i486 or later)
Minimum RAM memory requirements:
32 MB for text mode;
Live environment with no swap:
512 MB for graphics mode with just LXDE
1 GB to use the web browser and productivity software
2 GB recommended
Bootable optical drive:
DVD-ROM for current versions;
CD-ROM for version 7.2 and older, or a boot floppy and standard CD-ROM (IDE/ATAPI or SCSI)
Standard SVGA-compatible graphics card
Serial or PS/2 standard mouse, or an IMPS/2-compatible USB mouse.
Saving changes in the environment
Prior to Knoppix 3.8.2, any documents or settings a user created would disappear upon reboot. This lack of persistence then made it necessary to save documents directly to a hard drive partition, over the network, or to some removable media, such as a USB flash drive.
It was also possible |
https://en.wikipedia.org/wiki/Domain%20name%20registry | A domain name registry is a database of all domain names and the associated registrant information in the top level domains of the Domain Name System (DNS) of the Internet that enables third party entities to request administrative control of a domain name. Most registries operate on the top-level and second-level of the DNS.
A registry operator, sometimes called a network information center (NIC), maintains all administrative data of the domain and generates a zone file which contains the addresses of the nameservers for each domain. Each registry is an organization that manages the registration of domain names within the domains for which it is responsible, controls the policies of domain name allocation, and technically operates its domain. It may also fulfill the function of a domain name registrar, or may delegate that function to other entities.
Domain names are managed under a hierarchy headed by the Internet Assigned Numbers Authority (IANA), which manages the top of the DNS tree by administrating the data in the root nameservers. IANA also operates the int registry for intergovernmental organizations, the arpa zone for protocol administration purposes, and other critical zones such as root-servers.net. IANA delegates all other domain name authority to other domain name registries and a full list is available on their web site. Country code top-level domains (ccTLD) are delegated by IANA to national registries such as DENIC in Germany and Nominet in the United Kingdom.
Operation
Some name registries are government departments (e.g., the registry for India, gov.in). Some are co-operatives of Internet service providers (such as DENIC) or not-for-profit companies (such as Nominet UK). Others operate as commercial organizations, such as the US registry (nic.us).
The allocated and assigned domain names are made available by registries by use of the WHOIS system and via their domain name servers.
Some registries sell the names directly, and others rely on separate entities to sell them. For example, names in the .com top-level domains are in some sense sold "wholesale" at a regulated price by VeriSign, and individual domain name registrars sell names "retail" to businesses and consumers.
Policies
Allocation policies
Historically, domain name registries operated on a first-come-first-served system of allocation but may reject the allocation of specific domains on the basis of political, religious, historical, legal or cultural reasons. For example, in the United States, between 1996 and 1998, InterNIC automatically rejected domain name applications based on a list of perceived obscenities.
Registries may also control matters of interest to their local communities; for example, the German, Japanese and Polish registries have introduced internationalized domain names to allow use of local non-ASCII characters.
Dispute policies
Domains which are registered with ICANN registrars, generally have to use the Uniform Domain-Name Dispute-Res |
https://en.wikipedia.org/wiki/Rhonda%20Shear | Rhonda Honey Shear (born November 12, 1954) is an American television personality, comedian, actress, and entrepreneur. She is known for her role as a host in the 1990s USA Network's weekend B movie show, USA Up All Night. In 2001, she started an intimate apparel business that was marketed on Home Shopping Network (HSN), with one of her most successful products being the Ahh Bra in 2010. She is a regular participant in Tampa Bay's annual Fashion Week events.
Early life
Shear was born in New Orleans, Louisiana. She attended Loyola University, earning a Bachelor of Arts degree in communications. After graduating from Loyola in 1977, she moved to Los Angeles, California, to pursue a career in Hollywood.
Career
Modeling, hosting, and acting
Shear earned titles in several beauty contests, holding the title of Miss Louisiana USA 1975 for Miss USA and that of Miss Louisiana for both the Miss World and the Miss International pageants. She appeared as a contestant on The Gong Show in 1979. She also appeared as a grown-up Kimmy Gibbler in the 1987 sitcom Full House.
Shear is best known for her role as a host of the USA Network's 1980s and '90s weekend B-movie show, USA Up All Night. From 1991 to 1998, she hosted in-studio and on-location segments that typically aired on Friday nights, replacing comedian Caroline Schlitt (the Friday night host for the show's first few years). She also occasionally hosted the show with her Saturday counterpart, Gilbert Gottfried, in addition to making cameos on his edition. Her trademark manner of speaking the show's title, by raising her voice an octave when saying the word "Up", became a catch phrase. Shear also briefly hosted a comedy program called Spotlight Cafe on WWOR-TV in Secaucus, New Jersey, hosted previously by comic Judy Tenuta.
Shear made two subsequent nude appearances in Playboy: First, in their "Funny Girls" pictorial in June 1991, then in her own pictorial titled "Rhonda Is Up All Night" in October 1993.
Shear also co-starred in numerous sitcoms from playing the Fonz's girlfriend on Happy Days to the sexy neighbor on Married... with Children, before making her mark as a comedian. She then made her way into stand-up comedy, headlining as a successful comedian in Las Vegas, Los Angeles, and New York, and eventually touring across the country with Comedy PJ Party, an on-stage slumber party featuring a number of comedians.
Entrepreneur career
In 2001, Shear, with her husband Van Fagan, started Shear Enterprises. She began designing from her home office in 2003 with three employees, and launched the Rhonda Shear Intimates line at Home Shopping Network. Her products were picked up on shopping networks around the world, including The Shopping Channel (Canada) and Ideal World Shopping (U.K.). Rhonda Shear Intimates has continued to grow and is now represented in over 40 countries with over 25 employees in her St. Petersburg, Florida offices.
In 2010, Shear designed the Ahh Bra, and the product |
https://en.wikipedia.org/wiki/Usability%20engineering | Usability engineering is a professional discipline that focuses on improving the usability of interactive systems. It draws on theories from computer science and psychology to define problems that occur during the use of such a system. Usability engineering involves the testing of designs at various stages of the development process, with users or with usability experts. The history of usability engineering in this context dates back to the 1980s. In 1988, authors John Whiteside and John Bennett, of Digital Equipment Corporation and IBM respectively, published material on the subject, isolating the early setting of goals, iterative evaluation, and prototyping as key activities. The usability expert Jakob Nielsen is a leader in the field of usability engineering. In his 1993 book Usability Engineering, Nielsen describes methods to use throughout a product development process, so designers can ensure they take into account the most important barriers to learnability, efficiency, memorability, error-free use, and subjective satisfaction before implementing the product. Nielsen's work describes how to perform usability tests and how to use usability heuristics in the usability engineering lifecycle. Ensuring good usability via this process prevents problems in product adoption after release. Rather than focusing on finding solutions for usability problems, which is the focus of a UX or interaction designer, a usability engineer mainly concentrates on the research phase. In this sense, it is not strictly a design role, and many usability engineers have a background in computer science for this reason. Nevertheless, its connection to the design trade is crucial, as it provides the framework by which designers can ensure that their products will connect properly with their target users.
International standards
Usability engineers sometimes work to shape an interface such that it adheres to accepted operational definitions of user requirements documentation. For example, the definitions of usability approved by the International Organization for Standardization (see, e.g., ISO 9241 part 11) are held by some to describe the effectiveness, efficiency, and satisfaction with which specific users should be able to perform tasks. Advocates of this approach engage in task analysis, then prototype interface design, and usability testing on those designs. On the basis of such tests, the technology is potentially redesigned if necessary.
The National Institute of Standards and Technology has collaborated with industry to develop the Common Industry Specification for Usability – Requirements, which serves as a guide for many industry professionals. The specifications for successful usability in biometrics were also developed by the NIST. Usability.gov, a no-longer maintained website formerly operated by the US General Services Administration, provided a tutorial and wide general reference for the design of usable websites.
Usability, especially |
https://en.wikipedia.org/wiki/Ls | In computing, ls is a command to list computer files and directories in Unix and Unix-like operating systems. It is specified by POSIX and the Single UNIX Specification.
It is available in the EFI shell, as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities, or as part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.
The numerical computing environments MATLAB and GNU Octave include an ls function with similar functionality.
In other environments, such as DOS, OS/2, and Microsoft Windows, similar functionality is provided by the dir command.
History
An ls utility appeared in the first version of AT&T UNIX, the name inherited from a similar command in Multics also named 'ls', short for the word "list". ls has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification.
Behavior
Unix and Unix-like operating systems maintain the idea of a working directory. When invoked without arguments, ls lists the files in the working directory. If a directory is specified as an argument, the files in that directory are listed; if a file is specified, that file is listed. Multiple directories and files may be specified.
In many Unix-like systems, names starting with a dot (.) are hidden. Examples are ., which refers to the working directory, and .., which refers to its parent directory. Hidden names are not shown by default. With -a, all names, including all hidden names, are shown. Using -A shows all names, including hidden ones, but omits . and .. themselves. File names specified explicitly (for example ls .secret) are always listed.
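The hidden-name rules can be demonstrated in a shell session (a sketch; the directory and file names are illustrative):

```shell
# Work in a throwaway directory so the listing is predictable.
dir=$(mktemp -d)
cd "$dir"
touch visible .secret

ls            # shows only: visible
ls -a         # shows: .  ..  .secret  visible
ls -A         # shows: .secret  visible  (hidden names, minus . and ..)
ls .secret    # an explicitly named hidden file is always listed
```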
Without options, ls displays names only.
The different implementations have different options, but common options include:
-l Long format, displaying Unix file types, permissions, number of hard links, owner, group, size, last-modified date-time and name. If the modified date is older than 6 months, the time is replaced with the year. Some implementations add additional flags to permissions.
-h Output sizes in human readable format (e.g., 1K (kilobytes), 234M (megabytes), 2G (gigabytes)). This option is not part of the POSIX standard, although implemented in several systems, e.g., GNU coreutils in 1997, FreeBSD 4.5 in 2002, and Solaris 9 in 2002.
Additional options controlling how items are displayed include:
-R Recursively list items in subdirectories.
-t Sort the list by modification time (default sort is alphabetically).
-u Sort the list by last access time.
-c Sort the list by last attribute (status) change time.
-r Reverse the order, for example most recent time last.
--full-time Show times down to the second and millisecond instead of just the minute.
-1 One entry per line.
-m Stream format; list items across the page, separated by commas.
-g Include group but not owner.
-o Include owner but not group (when combined with -g both group and owner are |
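A few of the options above in action (a sketch; the file names and the one-second pause exist only to make the sort orders visible):

```shell
dir=$(mktemp -d)
cd "$dir"
touch a.txt
sleep 1               # ensure the two files get different modification times
touch b.txt

ls -1                 # one entry per line, alphabetical: a.txt, b.txt
ls -t                 # newest modification first: b.txt, a.txt
ls -tr                # -r reverses the order: a.txt, b.txt
ls -l                 # long format: permissions, links, owner, group, size, date, name
```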
https://en.wikipedia.org/wiki/Radical%20Faeries | The Radical Faeries are a loosely affiliated worldwide network and countercultural movement seeking to redefine queer consciousness through secular spirituality. Sometimes deemed a form of modern Paganism, the movement also adopts elements from anarchism and environmentalism.
Rejecting hetero-imitation, the Radical Faerie movement began during the 1970s sexual revolution among gay men in the United States. The movement has expanded in tandem with the larger gay rights movement, challenging commercialization and patriarchal aspects of modern LGBTQ+ life while celebrating eclectic constructs and rituals. Faeries tend to be fiercely independent, anti-establishment, and community-focused.
The Radical Faerie movement was founded in California in 1979 by gay activists Harry Hay, Mitch Walker, and Don Kilhefner, and Hay’s partner, John Burnside. Influenced by the legacy of the counterculture of the 1960s, they held the first Spiritual Conference for Radical Faeries in Arizona in September 1979. From there, various regional Faerie Circles were formed, and other large rural gatherings organized. Kilhefner and Walker broke from Hay and the Faeries in 1980 to form Treeroots, a non-profit organization, to systematically address the unconscious, which they felt was being neglected by Hay to become a toxic flaw in Radical Faerie organizing. Still, the movement continued to grow, having expanded into an international network soon after the second Faerie gathering in 1980.
Today Radical Faeries embody a wide range of genders, sexual orientations, and identities. Sanctuaries and gatherings are generally open to all, though several gatherings still focus on the particular spiritual experience of man-loving men co-creating temporary autonomous zones. Faerie sanctuaries adapt rural living and environmentally sustainable ways of using modern technologies as part of creative expression. Radical Faerie communities are sometimes inspired by indigenous, native or traditional spiritualities, especially those that incorporate genderqueer sensibilities.
Philosophy and ritual
Hay's biographer Stuart Timmons described the Faeries as a "mixture of a political alternative, a counter-culture, and a spirituality movement." Peter Hennan asserted that the Faeries contained elements of "Marxism, feminism, paganism, Native American and New Age spirituality, anarchism, the mythopoetic men's movement, radical individualism, the therapeutic culture of self-fulfillment and self-actualization, earth-based movements in support of sustainable communities, spiritual solemnity coupled with a camp sensibility, gay liberation and drag."
The Radical Faerie movement was a reaction against the social emptiness that many gay men felt was present both in the heterosexual establishment and the assimilationist gay community. As one Faerie commented, in his opinion mainstream gay culture was "an oppressive parody of straight culture", taking place primarily in bars and not encouraging people to " |
https://en.wikipedia.org/wiki/Community%20network | A community network is a computer-based system that is intended to help support (usually geographical) communities by supporting, augmenting, and extending already existing social networks, by using networking technologies by, and for, a community.
Free-nets and civic networks indicate roughly the same range of online projects and services, usually focused on bulletin board systems and online information, but sometimes also providing a means of network access directly to the Internet or other networks; whereas community technology centers (CTCs) and telecentres generally indicate a physical facility to compensate for lack of access to information and communication technologies (ICTs).
Function
Community networks often provide web space, e-mail, and other services for free, without advertising. VillageSoup launched a distinct form of community networking in 1997. This form uses display ads and informational postings from fee-paying business and organization members to generate revenue critical to the support of professional journalists producing news for the community.
Community network organizations often engage in training and other services and sometimes are involved in policy work. The Seattle Community Network is a prominent example.
When one looks at the entries of community network directories or the papers and Web sites whose titles and names include "community network" or "community networking", it is noticeable that a variety of practices exist. This diversity can be seen in the types of information and services offered, who operates the network, and the area covered.
The most extensive array of information services in a community network includes news from professional and amateur reporters, news and information from businesses and organizations; community events listings; weather forecasts; listings of governmental offices, businesses and organizations; and galleries of images of the place. Services include requesting alerts and RSS feeds; making reservations; searching for goods and services; purchasing images and auction items; and posting personal and commercial advertisements. A printed periodic publication is sometimes a service of the community network.
Some community networks limit themselves to functions such as facilitating communication among non-profit organizations.
Internet-based volunteer networks of blogs and groups have been formed in the internet social-networking field as well. The Alabama Charity Network, for example, provided another place for people to connect to fundraisers and charity information using internet-based social networking.
The entities in charge of planning and operating the community networks may be government offices, chambers of commerce, public libraries, non-profit organizations, for-profit entities or volunteer groups.
The primary goals of a community network may include providing a sustainable, trusted platform for an urban neighborhood, suburban village or exurban town or region to |
https://en.wikipedia.org/wiki/Cd%20%28command%29 | The cd command (short for "change directory") is a command-line shell command used to change the current working directory in various operating systems. It can be used in shell scripts and batch files.
Implementations
The cd command has been implemented in operating systems such as Unix, DOS, IBM OS/2, MetaComCo TRIPOS, AmigaOS (where if a bare path is given, cd is implied), Microsoft Windows, ReactOS, and Linux. On MS-DOS, it is available in versions 2 and later. DR DOS 6.0 also includes an implementation of the cd and chdir commands. The command is also available in the open source MS-DOS emulator DOSBox and in the EFI shell. It is named chdir in HP MPE/iX. The command is analogous to the Stratus OpenVOS change_current_dir command.
cd is frequently built directly into a command-line interpreter. This is the case in most of the Unix shells (Bourne shell, tcsh, bash, etc.), cmd.exe on Microsoft Windows NT/2000+, Windows PowerShell on Windows 7+, and COMMAND.COM on DOS/Microsoft Windows 3.x-9x/ME.
The system call that effects the command in most operating systems is chdir, which is defined by POSIX.
Command line shells on Windows usually use the Windows API to change the current working directory, whereas on Unix systems cd calls the chdir() POSIX C function. This means that when the command is executed, no new process is created to migrate to the other directory, as is the case with other commands such as ls. Instead, the shell itself executes this command. This is because, when a new process is created, the child process inherits the directory in which the parent process was created. If cd were run in a child process, the directory change would be lost when that process exited, so the objective of the cd command would never be achieved.
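The process-inheritance point can be demonstrated directly: running cd inside a subshell (a separate process) leaves the parent shell's working directory untouched, which is exactly why cd must be a builtin. A minimal sketch:

```shell
cd /tmp
( cd / )    # the subshell changes ITS OWN directory, then exits
pwd         # still /tmp: the parent shell's directory is unchanged

cd /        # the builtin runs inside the shell's own process
pwd         # now /
```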
Windows PowerShell, Microsoft's object-oriented command line shell and scripting language, executes the cd command (cmdlet) within the shell's process. However, since PowerShell is based on the .NET Framework and has a different architecture than previous shells, all of PowerShell's cmdlets run in the shell's process. This is not true for legacy commands, which still run in a separate process.
Usage
A directory is a logical section of a file system used to hold files. Directories may also contain other directories. The command can be used to change into a subdirectory, move back into the parent directory, move all the way back to the root directory or move to any given directory.
Consider the following subsection of a Unix filesystem, which shows a user's home directory (represented as ~) with a file, text.txt, and three subdirectories.
If the user's current working directory is the home directory (~), then entering the command ls followed by cd games might produce the following transcript:
user@wikipedia:~$ ls
workreports games encyclopedia text.txt
user@wikipedia:~$ cd games
user@wikipedia:~/games$
The user is now in the "games" directory.
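The moves the transcript describes can be sketched as a self-contained session (the directory layout below recreates the example's; the throwaway "home" path stands in for the real home directory):

```shell
# Recreate the transcript's layout under a throwaway "home" directory.
home=$(mktemp -d)
mkdir "$home/workreports" "$home/games" "$home/encyclopedia"
touch "$home/text.txt"

cd "$home/games"   # descend into a subdirectory, as in the transcript
cd ..              # .. names the parent: back to the top of the layout
cd /               # an absolute path moves to the root from anywhere
```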
A similar session in DOS (though the concept of a "home directory" may not apply, depending on the specific version) would look like th |
https://en.wikipedia.org/wiki/Directory | Directory may refer to:
Directory (computing), or folder, a file system structure in which to store computer files
Directory (OpenVMS command)
Directory service, a software application for organizing information about a computer network's users and resources
Directory (political), a system under which a country is ruled by a college of several people who jointly exercise the powers of a head of state or head of government
French Directory, the government in revolutionary France from 1795 to 1799
Business directory, a listing of information about suppliers and manufacturers
City directory, a listing of residents, streets, businesses, organizations or institutions, giving their location in a city
Telephone directory, a book which allows telephone numbers to be found given the subscriber's name
Web directory, an organized collection of links to websites
See also
Director (disambiguation)
Directorate (disambiguation) |
https://en.wikipedia.org/wiki/Dir%20%28command%29 | In computing, dir (directory) is a command in various computer operating systems used for computer file and directory listing. It is one of the basic commands to help navigate the file system. The command is usually implemented as an internal command in the command-line interpreter (shell). On some systems, a more graphical representation of the directory structure can be displayed using the tree command.
Implementations
The command is available in the command-line interface (CLI) of the operating systems Digital Research CP/M, MP/M, Intel ISIS-II, iRMX 86, Cromemco CDOS, MetaComCo TRIPOS, DOS, IBM/Toshiba 4690 OS, IBM OS/2, Microsoft Windows, Singularity, Datalight ROM-DOS, ReactOS, GNU, AROS and in the DCL command-line interface used on DEC VMS, RT-11 and RSX-11. It is also supplied with OS/8 as a CUSP (Commonly-Used System Program).
The dir command is supported by Tim Paterson's SCP 86-DOS. On MS-DOS, the command is available in versions 1 and later. It is also available in the open source MS-DOS emulator DOSBox. MS-DOS prompts "Abort, Retry, Fail?" after being commanded to list a directory with no diskette in the drive.
The numerical computing environments MATLAB and GNU Octave include a dir function with similar functionality.
Examples
DOS, Windows, ReactOS
List all files and directories in the current working directory.
List any text files and batch files (filename extension ".txt" or ".bat").
Recursively list all files and directories in the specified directory and any subdirectories, in wide format, pausing after each screen of output. The directory name is enclosed in double-quotes, to prevent it from being interpreted as two separate command-line options because it contains a whitespace character.
List any NTFS junction points:
Unices
dir is not a Unix command; Unix has the analogous ls command instead. The GNU operating system, however, has a dir command that "is equivalent to ls -C -b; that is, by default files are listed in columns, sorted vertically, and special characters are represented by backslash escape sequences". Actually, for compatibility reasons, ls produces device-dependent output. The dir instruction, unlike ls -Cb, produces device-independent output.
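The escaping behavior that GNU dir enables by default can be seen with ls itself, since dir is equivalent to ls -C -b (a sketch assuming GNU coreutils; the file name is illustrative):

```shell
dir=$(mktemp -d)
cd "$dir"
touch "$(printf 'a\tb')"   # create a file whose name contains a tab

ls -b    # prints: a\tb  (nongraphic characters as backslash escapes)
ls -Cb   # columns plus escapes: the output format GNU dir uses by default
```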
See also
Directory (OpenVMS command)
List of DOS commands
ls (corresponding command for *nix systems)
References
Further reading
External links
dir | Microsoft Docs
Open source DIR implementation that comes with MS-DOS v2.0
Dir command syntax and examples
CP/M commands
Internal DOS commands
Microcomputer software
Microsoft free software
MSX-DOS commands
OS/2 commands
ReactOS commands
Windows commands
Windows administration |
https://en.wikipedia.org/wiki/Plug%20and%20play | In computing, a plug and play (PnP) device or computer bus is one with a specification that facilitates the recognition of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts. The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies.
Expansion devices are controlled and exchange data with the host system through defined memory or I/O space port addresses, direct memory access channels, interrupt request lines and other mechanisms, which must be uniquely associated with a particular device to operate. Some computers provided unique combinations of these resources to each slot of a motherboard or backplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses. As microprocessors made mass-market computers affordable, software configuration of I/O devices was advantageous to allow installation by non-specialist users. Early systems for software configuration of devices included the MSX standard, NuBus, Amiga Autoconfig, and IBM Microchannel. Initially all expansion cards for the IBM PC required physical selection of I/O configuration on the board with jumper straps or DIP switches, but increasingly ISA bus devices were arranged for software configuration. By 1995, Microsoft Windows included a comprehensive method of enumerating hardware at boot time and allocating resources, which was called the "Plug and Play" standard.
Plug and play devices can have resources allocated at boot-time only, or may be hotplug systems such as USB and IEEE 1394 (FireWire).
History of device configuration
Some early microcomputer peripheral devices required the end user physically to cut some wires and solder together others in order to make configuration changes; such changes were intended to be largely permanent for the life of the hardware.
As computers became more accessible to the general public, the need developed for more frequent changes to be made by computer users unskilled with using soldering irons. Rather than cutting and soldering connections, configuration was accomplished by jumpers or DIP switches.
Later on this configuration process was automated: Plug and Play.
MSX
The MSX system, released in 1983, was designed to be plug and play from the ground up, and achieved this through a system of slots and subslots, each with its own virtual address space, eliminating device addressing conflicts at their very source. No jumpers or any manual configuration was required, and the indepe |
https://en.wikipedia.org/wiki/Root%20directory | In a computer file system, and primarily used in the Unix and Unix-like operating systems, the root directory is the first or top-most directory in a hierarchy. It can be likened to the trunk of a tree, as the starting point where all branches originate from. The root file system is the file system contained on the same disk partition on which the root directory is located; it is the filesystem on top of which all other file systems are mounted as the system boots up.
Unix-like systems
Unix abstracts the nature of this tree hierarchy entirely, and in Unix and Unix-like systems the root directory is denoted by the / (slash) sign. Though the root directory is conventionally referred to as /, the directory entry itself has no name: its path is the "empty" part before the initial directory separator character (/). All file system entries, including mounted file systems, are "branches" of this root.
chroot
In UNIX-like operating systems, each process has its own idea of what the root directory is. For most processes this is the same as the system's actual root directory, but it can be changed by calling the chroot system call. This is typically done to create a secluded environment to run software that requires legacy libraries, and sometimes to simplify software installation and debugging. chroot is not meant to be used for enhanced security, as the processes inside can break out.
Super-root
Some Unix systems support a directory below the root directory. Normally, "/.." points back to the same inode as "/"; however, on systems that support a super-root, this can be changed to point to a super-root directory, where remote trees can be mounted. If, for example, two workstations "pcs2a" and "pcs2b" were connected via the "connectnodes" and "uunite" startup scripts, "/../pcs2b" could be used to access the root directory of "pcs2b" from "pcs2a".
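The "normal" case, where /.. resolves back to the root itself, can be checked directly (a sketch assuming GNU coreutils' stat; BSD stat uses different flags):

```shell
# On ordinary systems, / and /.. are the same directory,
# so both paths report the same inode number.
stat -c %i /
stat -c %i /..    # prints the same number as the line above
```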
Related uses
On many Unixes, there is also a directory named /root (pronounced "slash root"). This is the home directory of the 'root' superuser. On many Mac and iOS systems this superuser home directory is /var/root.
See also
Filesystem Hierarchy Standard (FHS)
Parent directory
Working directory
References
File system directories |
https://en.wikipedia.org/wiki/Cult%20Awareness%20Network | The Cult Awareness Network (CAN) was an anti-cult organization founded by deprogrammer Ted Patrick that provided information on groups it considered "cults", as well as support and referrals to deprogrammers. It operated (initially under the name “Citizens’ Freedom Foundation”) from the mid 1970s to the mid 1990s in the United States.
The Cult Awareness Network was the most notable organization to emerge from the anti-cult movement in America. In the 1970s, a growing number of large and small New Religious Movements caused alarm in some sections of the community, based in part on the fear of "brainwashing" or "mind control" allegedly employed by these groups. The Cult Awareness Network presented itself as a source of information about "cults"; by 1991 it was monitoring over 200 groups that it referred to as "mind-control cults". It also promoted a form of coercive intervention by self-styled "deprogrammers" who would, for a significant fee, forcibly detain or even abduct the cult member and subject them to a barrage of attacks on their beliefs, supposedly in order to counter the effects of the brainwashing. The practice, which could involve criminal actions such as kidnapping and false imprisonment, generated controversy, and Ted Patrick and others faced both civil and criminal proceedings.
After CAN lost a lawsuit and filed for bankruptcy in 1996, lawyer and Scientologist Steven L. Hayes acquired the rights to CAN's name, logo, PO box, and hot-line phone number, and licensed the name to the "Foundation for Religious Freedom", which established the New Cult Awareness Network. Hayes made the purchase with funds raised from private donations, not from the Church of Scientology, although a number of Scientologists had been among the most active participants in a coalition of religious freedom advocates from whom he had collected money. The Church of Scientology had previously been one of CAN's main targets.
History
In the United States in the early 1970s there was an increasing number of New Religious Movements. In 1971, Ted Patrick founded FREECOG (Parents Committee to Free Our Sons and Daughters from the Children of God). In 1974, he founded the more wide-ranging "Citizen's Freedom Foundation" (CFF), and began offering 'deprogramming' services to people who wanted to break a family member's connection to an NRM. The deprogramming methods involved abduction, physical restraint, detention over days or weeks, food and sleep deprivation, prolonged verbal and emotional abuse, and desecration of the symbols of the victim's faith. The perpetrators' justification for these actions was that the individual had been "brainwashed", and was not amenable to reason.
Brainwashing theory denied the possibility of authentic spiritual choice for an NRM member, proposing instead that such individuals were subject to systematic mind control programs that overrode their capacity for independent volition. Ted Patrick's theory of brainwashing was that individuals were |
https://en.wikipedia.org/wiki/CiteSeerX |
CiteSeerX (formerly called CiteSeer) is a public search engine and digital library for scientific and academic papers, primarily in the fields of computer and information science.
CiteSeer's goal is to improve the dissemination of and access to academic and scientific literature. As a non-profit service that can be freely used by anyone, it has been considered part of the open access movement that is attempting to change academic and scientific publishing to allow greater access to scientific literature. CiteSeer freely provided Open Archives Initiative metadata of all indexed documents and links indexed documents when possible to other sources of metadata such as DBLP and the ACM Portal. To promote open data, CiteSeerX shares its data for non-commercial purposes under a Creative Commons license.
CiteSeer is considered a predecessor of academic search tools such as Google Scholar and Microsoft Academic Search. CiteSeer-like engines and archives usually only harvest documents from publicly available websites and do not crawl publisher websites. For this reason, authors whose documents are freely available are more likely to be represented in the index.
CiteSeer changed its name to ResearchIndex at one point and then changed it back.
History
CiteSeer and CiteSeer.IST
CiteSeer was created by researchers Lee Giles, Kurt Bollacker and Steve Lawrence in 1997 while they were at the NEC Research Institute (now NEC Labs), Princeton, New Jersey, US. CiteSeer's goal was to actively crawl and harvest academic and scientific documents on the web and use autonomous citation indexing to permit querying by citation or by document, ranking them by citation impact. At one point, it was called ResearchIndex.
CiteSeer became public in 1998 and had many new features unavailable in academic search engines at that time. These included:
Autonomous Citation Indexing automatically created a citation index that can be used for literature search and evaluation.
Citation statistics and related documents were computed for all articles cited in the database, not just the indexed articles.
Reference linking, allowing browsing of the database using citation links.
Citation context showed the context of citations to a given paper, allowing a researcher to quickly and easily see what other researchers have to say about an article of interest.
Related documents were shown using citation- and word-based measures, and an active, continuously updated bibliography was shown for each document.
CiteSeer was granted a United States patent # 6289342, titled "Autonomous citation indexing and literature browsing using citation context", on September 11, 2001. The patent was filed on May 20, 1998, and has priority to January 5, 1998. A continuation patent (US Patent # 6738780) was filed on May 16, 2001, and granted on May 18, 2004.
After NEC, in 2004 it was hosted as CiteSeer.IST on the World Wide Web at the College of Information Sciences and Tec |
https://en.wikipedia.org/wiki/Working%20directory | In computing, the working directory of a process is a directory of a hierarchical file system, if any, dynamically associated with each process. It is sometimes called the current working directory (CWD), e.g. the BSD getcwd function, or just current directory. When the process refers to a file using a simple file name or relative path (as opposed to a file designated by a full path from a root directory), the reference is interpreted relative to the working directory of the process. So for example a process with working directory /rabbit-shoes that asks to create the file foo.txt will end up creating the file /rabbit-shoes/foo.txt.
In operating systems
In most computer file systems, every directory has an entry (usually named ".") which points to the directory itself.
In most DOS and UNIX command shells, as well as in the Microsoft Windows command line interpreters cmd.exe and Windows PowerShell, the working directory can be changed by using the CD or CHDIR commands. In Unix shells, the pwd command outputs a full pathname of the working directory; the equivalent command in DOS and Windows is CD or CHDIR without arguments (whereas in Unix, cd used without arguments takes the user back to his/her home directory).
The environment variable PWD (in Unix/Linux shells), or the pseudo-environment variables CD (in Windows COMMAND.COM and cmd.exe, but not in OS/2 and DOS), or _CWD, _CWDS, _CWP and _CWPS (under 4DOS, 4OS2, 4NT etc.) can be used in scripts, so that one need not start an external program. Microsoft Windows file shortcuts have the ability to store the working directory.
COMMAND.COM in DR-DOS 7.02 and higher provides ECHOS, a variant of the ECHO command omitting the terminating linefeed. This can be used to create a temporary batch job storing the working directory in an environment variable like CD for later use, for example:
ECHOS SET CD=> SETCD.BAT
CHDIR >> SETCD.BAT
CALL SETCD.BAT
DEL SETCD.BAT
Alternatively, under Multiuser DOS and DR-DOS 7.02 and higher, various internal and external commands support a parameter /B (for "Batch"). This modifies the output of commands to become suitable for direct command line input (when redirecting it into a batch file) or usage as a parameter for other commands (using it as input for another command). Where CHDIR would issue a directory path like C:\DOS, a command like CHDIR /B would issue CHDIR C:\DOS instead, so that CHDIR /B > RETDIR.BAT would create a temporary batch job allowing the user to return to this directory later on.
The working directory is also displayed by the $P token of the PROMPT command. To keep the prompt short even inside deep subdirectory structures, the DR-DOS 7.07 COMMAND.COM supports a $W token to display only the deepest subdirectory level. So, where the default PROMPT $P$G would result, for example, in C:\DOS> or C:\DOS\DRDOS>, PROMPT $N:$W$G would instead yield C:DOS> and C:DRDOS>, respectively. A similar facility (using $W and $w) was added to 4DOS as well.
Under DOS, the absolut |
https://en.wikipedia.org/wiki/Distance-vector%20routing%20protocol | A distance-vector routing protocol in data networks determines the best route for data packets based on distance. Distance-vector routing protocols measure the distance by the number of routers a packet has to pass; one router counts as one hop. Some distance-vector protocols also take into account network latency and other factors that influence traffic on a given route. To determine the best route across a network, routers using a distance-vector protocol exchange information with one another, usually routing tables plus hop counts for destination networks and possibly other traffic information. Distance-vector routing protocols also require that a router inform its neighbours of network topology changes periodically.
Distance-vector routing protocols use the Bellman–Ford algorithm to calculate the best route. Another way of calculating the best route across a network is based on link cost, and is implemented through link-state routing protocols.
The term distance vector refers to the fact that the protocol manipulates vectors (arrays) of distances to other nodes in the network. The distance vector algorithm was the original ARPANET routing algorithm and was implemented more widely in local area networks with the Routing Information Protocol (RIP).
Overview
Distance-vector routing protocols use the Bellman–Ford algorithm. In these protocols, a router does not possess information about the full network topology. It advertises its calculated distance values (DV) to other routers and receives similar advertisements from them, both periodically and when changes occur in the local network or at neighbouring routers. Using these routing advertisements, each router populates its routing table. In the next advertisement cycle, a router advertises updated information from its routing table. This process continues until the routing tables of all routers converge to stable values.
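This advertise-and-relax cycle can be sketched in a few lines of Python. The four-router topology, node names, and link costs below are invented purely for illustration; each router's table is repeatedly updated from its neighbours' tables (the Bellman–Ford relaxation step) until nothing changes, i.e. convergence:

```python
INF = float("inf")

# Hypothetical topology: cost of each direct link between routers.
links = {("A", "B"): 1, ("B", "C"): 1, ("C", "D"): 1, ("A", "C"): 5}
nodes = ["A", "B", "C", "D"]

def neighbours(n):
    """Yield (neighbour, link cost) pairs for router n."""
    for (u, v), cost in links.items():
        if u == n:
            yield v, cost
        elif v == n:
            yield u, cost

# Each router starts knowing only the distance to itself ...
dist = {n: {m: (0 if m == n else INF) for m in nodes} for n in nodes}

# ... then repeatedly merges the distance vectors advertised by its
# neighbours until no table changes any more (convergence).
changed = True
while changed:
    changed = False
    for n in nodes:
        for nb, cost in neighbours(n):
            for dest in nodes:
                via_nb = cost + dist[nb][dest]  # advertised distance + link cost
                if via_nb < dist[n][dest]:
                    dist[n][dest] = via_nb
                    changed = True

print(dist["A"]["D"])  # → 3  (route A-B-C-D beats the costly direct link A-C)
```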
Some of these protocols have the disadvantage of slow convergence.
Examples of distance-vector routing protocols:
Routing Information Protocol (RIP)
Routing Information Protocol Version 2 (RIPv2)
Routing Information Protocol Next Generation (RIPng), an extension of RIP version 2 with support for IPv6
Interior Gateway Routing Protocol (IGRP)
Enhanced Interior Gateway Routing Protocol (EIGRP)
Methodology
Routers that use a distance-vector protocol determine the distance between themselves and a destination. The best route for Internet Protocol packets that carry data across a data network is measured in terms of the number of routers (hops) a packet has to pass to reach its destination network. Additionally, some distance-vector protocols take into account other traffic information, such as network latency. To establish the best route, routers regularly exchange information with neighbouring routers, usually their routing table, hop count for a destination network and possibly other traffic-related information. Routers that implement distance-vector protocol rely purely on the i |
https://en.wikipedia.org/wiki/ElcomSoft | ElcomSoft is a privately owned software company headquartered in Moscow, Russia. Since its establishment in 1990, the company has been working on computer security programs, with the main focus on password and system recovery software.
DMCA case
On July 16, 2001, Dmitry Sklyarov, a Russian citizen employed by ElcomSoft who was at the time visiting the United States for DEF CON, was arrested and charged for violating the United States DMCA law by writing ElcomSoft's Advanced eBook Processor software. He was later released on bail and allowed to return to Russia, and the charges against him were dropped. The charges against ElcomSoft were not, and a court case ensued, attracting much public attention and protest. On December 17, 2002, ElcomSoft was found not guilty of all four charges under the DMCA.
Thunder Tables
Thunder Tables is the company's own technology developed to ensure guaranteed recovery of Microsoft Word and Microsoft Excel documents protected with 40-bit encryption. The technology first appeared in 2007 and employs the time–memory tradeoff method to build pre-computed hash tables, which open the corresponding files in a matter of seconds instead of days. These tables take around four gigabytes. So far, the technology is used in two password recovery programs: Advanced Office Password Breaker and Advanced PDF Password Recovery.
Cracking Wi-Fi passwords with GPUs
In 2009 ElcomSoft released a tool that takes WPA/WPA2 Hash Codes and uses brute-force methods to guess the password associated with a wireless network.
The advantages of using such methods over the traditional ones, such as rainbow tables, are numerous.
Vulnerability in Canon authentication software
On November 30, 2010, Elcomsoft announced that the encryption system used by Canon cameras to ensure that pictures and Exif metadata have not been altered was flawed and cannot be fixed.
On that same day, Dmitry Sklyarov gave a presentation at the Confidence 2.0 conference in Prague demonstrating the flaws. Among other examples, he showed an image of an astronaut planting a flag of the Soviet Union on the Moon; all of the images passed Canon's authenticity verification.
Nude celebrity photo leak
In 2014, an attacker used the Elcomsoft Phone Password Breaker to determine celebrity Jennifer Lawrence's password and obtain nude photos. Wired said about Apple's cloud services, "...cloud services might be about as secure as leaving your front door key under the mat."
References
Software companies established in 1990
Computer law
Cryptography law
Software companies of Russia
Computer security software companies
Companies based in Moscow
Russian companies established in 1990
Cryptographic attacks
Password cracking software |
https://en.wikipedia.org/wiki/Tux%20Racer | Tux Racer is a 2000 open-source winter sports racing video game starring the Linux mascot, Tux the penguin. It was originally developed by Jasmin Patry as a computer graphics project at the University of Waterloo. Later on, Patry and the newly founded Sunspire Studios, composed of several former students of the university, expanded it. In the game, the player controls Tux as he slides down a course of snow and ice collecting herrings.
Tux Racer was officially downloaded over one million times as of 2001. It also was well received, often being acclaimed for the graphics, fast-paced gameplay, and replayability, and was a fan favorite among Linux users and the free software community. The game's popularity secured the development of a commercialized release that included enhanced graphics and multiplayer, and it also became the first GPL-licensed game to receive an arcade adaptation. It is the only product that Sunspire Studios developed and released, after which the company liquidated.
Gameplay
Tux Racer is a racing game in which the player must control Tux across a mountainside. Tux can turn left and right, brake, jump, paddle, and flap his flippers. If the player presses the brake and turn buttons together, Tux performs a tight turn. Pressing the paddling buttons on the ground gives Tux additional speed. The paddling stops giving speed, and instead slows Tux down, when the speedometer turns yellow. Tux can slide off slopes or charge his jumps to temporarily launch into midair, during which he can flap his flippers to fly farther and adjust his direction left or right. The player can also reset the penguin should he get stuck in any part of the course.
Courses are composed of various terrain types that affect Tux's performance. Sliding on ice allows speeding at the expense of traction, and snow allows for more maneuverability. However, rocky patches slow him down, as does crashing into trees. The player gains points by collecting herrings scattered along the courses, and the faster the player finishes the course, the higher the score. Players can select cups, where progression is by completing a series of courses in order by satisfying up to three requirements: collecting sufficient herring, finishing the course below a specified time, and scoring enough points. Failing to meet all the criteria or aborting the race costs a life, and should the player lose all four lives, they must reenter the cup and start over. During level selection, the player can choose daytime settings and weather conditions such as wind and fog that affect the gameplay. Maps are composed of three separately saved raster layers that each determine a map's elevation, terrain layout, and object placement.
Commercial version
The commercial version of Tux Racer introduces new content. Besides Tux, players can select one of three other characters to race as: Samuel the seal, Boris the polar bear, and Neva the penguin. Some courses contain jump and speed pads as power-ups, and pl |
https://en.wikipedia.org/wiki/Comb%20sort | Comb sort is a relatively simple sorting algorithm originally designed by Włodzimierz Dobosiewicz and Artur Borowy in 1980, later rediscovered (and given the name "Combsort") by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort in the same way that Shellsort improves on insertion sort.
NIST's "diminishing increment sort" definition mentions the term "comb sort" as visualizing iterative passes of the data "where the teeth of a comb touch"; the former term is linked to Don Knuth.
Algorithm
The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort.
In bubble sort, when any two elements are compared, they always have a gap (distance from each other) of 1. The basic idea of comb sort is that the gap can be much more than 1. The inner loop of bubble sort, which does the actual swap, is modified such that the gap between swapped elements goes down (for each iteration of the outer loop) in steps of a "shrink factor" k: [n/k, n/k², n/k³, ..., 1].
The gap starts out as the length of the list n being sorted divided by the shrink factor k (generally 1.3; see below) and one pass of the aforementioned modified bubble sort is applied with that gap. Then the gap is divided by the shrink factor again, the list is sorted with this new gap, and the process repeats until the gap is 1. At this point, comb sort continues using a gap of 1 until the list is fully sorted. The final stage of the sort is thus equivalent to a bubble sort, but by this time most turtles have been dealt with, so a bubble sort will be efficient.
The shrink factor has a great effect on the efficiency of comb sort. k = 1.3 has been suggested as an ideal shrink factor by the authors of the original article after empirical testing on over 200,000 random lists. A value too small slows the algorithm down by making unnecessarily many comparisons, whereas a value too large fails to effectively deal with turtles, making it require many passes with a gap of 1.
The pattern of repeated sorting passes with decreasing gaps is similar to Shellsort, but in Shellsort the array is sorted completely each pass before going on to the next-smallest gap. Comb sort's passes do not completely sort the elements. This is the reason that Shellsort gap sequences have a larger optimal shrink factor of about 2.2.
Pseudocode
function combsort(array input) is
gap := input.size // Initialize gap size
shrink := 1.3 // Set the gap shrink factor
sorted := false
loop while sorted = false
    // Update the gap value for the next comb
    gap := floor(gap / shrink)
    if gap ≤ 1 then
        gap := 1
        sorted := true // If there are no swaps this pass, we are done
    end if
// A single "comb" over the input list
i := 0
loop while i + gap < input.size // See She |
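The pseudocode above (truncated in this excerpt) corresponds directly to a short runnable implementation. A Python sketch following the same gap/shrink scheme, with the suggested shrink factor of 1.3:

```python
import math

def comb_sort(data, shrink=1.3):
    """Sort a list in place with comb sort and return it."""
    gap = len(data)
    is_sorted = False
    while not is_sorted:
        # Shrink the gap; once it reaches 1 each pass is a bubble-sort pass.
        gap = math.floor(gap / shrink)
        if gap <= 1:
            gap = 1
            is_sorted = True  # stays True only if no swaps happen this pass
        # One "comb" over the list: compare elements `gap` apart.
        for i in range(len(data) - gap):
            if data[i] > data[i + gap]:
                data[i], data[i + gap] = data[i + gap], data[i]
                is_sorted = False
    return data

xs = [5, 1, 4, 2, 8, 0, 2]
print(comb_sort(xs[:]))  # → [0, 1, 2, 2, 4, 5, 8]
```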
https://en.wikipedia.org/wiki/Recall | Recall may refer to:
Recall (bugle call), a signal to stop
Recall (information retrieval), a statistical measure
ReCALL (journal), an academic journal about computer-assisted language learning
Recall (memory)
Recall (Overwatch), a 2016 animated short
The Recall, a 2017 Canadian-American film
Recall election, a procedure by which voters can remove an elected official
Letter of recall, sent to return an ambassador from a country
Product recall, a request by a business to return a product
Recalled (film), a South Korean mystery thriller film
"Recall", a song by Susumu Hirasawa on the 1995 album Sim City
Recall, UK term for hook flash
See also
Perfect recall (disambiguation)
Total recall (disambiguation)
Remember (disambiguation)
Recalled to Life (disambiguation) |
https://en.wikipedia.org/wiki/Alpha%E2%80%93beta%20pruning | Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player combinatorial games (Tic-tac-toe, Chess, Connect 4, etc.). It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
History
John McCarthy during the Dartmouth Workshop met Alex Bernstein of IBM, who was writing a chess program. McCarthy invented alpha-beta search and recommended it to him, but Bernstein was "unconvinced".
Allen Newell and Herbert A. Simon who used what John McCarthy calls an "approximation" in 1958 wrote that alpha–beta "appears to have been reinvented a number of times". Arthur Samuel had an early version for a checkers simulation. Richards, Timothy Hart, Michael Levin and/or Daniel Edwards also invented alpha–beta independently in the United States. McCarthy proposed similar ideas during the Dartmouth workshop in 1956 and suggested it to a group of his students including Alan Kotok at MIT in 1961. Alexander Brudno independently conceived the alpha–beta algorithm, publishing his results in 1963. Donald Knuth and Ronald W. Moore refined the algorithm in 1975. Judea Pearl proved its optimality in terms of the expected running time for trees with randomly assigned leaf values in two papers. The optimality of the randomized version of alpha–beta was shown by Michael Saks and Avi Wigderson in 1986.
Core idea
A game tree can represent many two-player zero-sum games, such as chess, checkers, and reversi. Each node in the tree represents a possible situation in the game. Each terminal node (outcome) of a branch is assigned a numeric score that determines the value of the outcome to the player with the next move.
The algorithm maintains two values, alpha and beta, which respectively represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of. Initially, alpha is negative infinity and beta is positive infinity, i.e. both players start with their worst possible score. Whenever the maximum score that the minimizing player (i.e. the "beta" player) is assured of becomes less than the minimum score that the maximizing player (i.e., the "alpha" player) is assured of (i.e. beta < alpha), the maximizing player need not consider further descendants of this node, as they will never be reached in the actual play.
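The alpha and beta bookkeeping just described fits in a short recursive function. In this sketch the game tree is represented as nested lists with numeric leaf scores, a shape chosen only for the example; the pruned result matches plain minimax:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta cutoffs over a toy nested-list game tree."""
    if depth == 0 or isinstance(node, (int, float)):
        return node  # leaf: numeric outcome score
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:  # cutoff: the minimizer will avoid this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # cutoff: the maximizer will avoid this branch
                break
        return value

# A small depth-3 tree; alpha starts at -infinity, beta at +infinity.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, 3, -math.inf, math.inf, True))  # → 5
```

Note that the pruned branches may return bounds rather than exact minimax values, but the value and choice at the root are unaffected.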
To illustrate this with a real-life example, suppose somebody is playing chess, and it is their turn. Move "A" will improve the player's position. The player continues to look for moves to make sure a better one hasn't been missed. Move "B" is also a goo |
https://en.wikipedia.org/wiki/Evaluation%20function | An evaluation function, also known as a heuristic evaluation function or static evaluation function, is a function used by game-playing computer programs to estimate the value or goodness of a position (usually at a leaf or terminal node) in a game tree. Most of the time, the value is either a real number or a quantized integer, often in nths of the value of a playing piece such as a stone in go or a pawn in chess, where n may be tenths, hundredths or other convenient fraction, but sometimes, the value is an array of three values in the unit interval, representing the win, draw, and loss percentages of the position.
There do not exist analytical or theoretical models for evaluation functions for unsolved games, nor are such functions entirely ad-hoc. The composition of evaluation functions is determined empirically by inserting a candidate function into an automaton and evaluating its subsequent performance. A significant body of evidence now exists for several games like chess, shogi and go as to the general composition of evaluation functions for them.
Games in which game playing computer programs employ evaluation functions include chess, go, shogi (Japanese chess), othello, hex, backgammon, and checkers. In addition, with the advent of programs such as MuZero, computer programs also use evaluation functions to play video games, such as those from the Atari 2600. Some games like tic-tac-toe are strongly solved, and do not require search or evaluation because a discrete solution tree is available.
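As a toy illustration, not drawn from any particular engine, a static evaluation for chess might simply count material in centipawns (hundredths of a pawn), one instance of the "nths of a piece value" convention above. The piece values and position encoding here are assumptions made for this sketch:

```python
# Hypothetical centipawn values; real engines tune such weights empirically.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate(position):
    """Static material evaluation of a position given as a list of
    (piece, colour) pairs; positive scores favour White."""
    score = 0
    for piece, colour in position:
        value = PIECE_VALUES.get(piece.upper(), 0)  # kings are not scored
        score += value if colour == "white" else -value
    return score

# White has an extra knight, so the position scores +320 for White.
position = [("K", "white"), ("N", "white"), ("K", "black")]
print(evaluate(position))  # → 320
```

A real evaluation function would add positional terms (mobility, king safety, pawn structure) on top of material, but the shape, position in and scalar score out, is the same.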
Relation to search
A tree of such evaluations is usually part of a search algorithm, such as Monte Carlo tree search or a minimax algorithm like alpha–beta search. The value is presumed to represent the relative probability of winning if the game tree were expanded from that node to the end of the game. The function looks only at the current position (i.e. what spaces the pieces are on and their relationship to each other) and does not take into account the history of the position or explore possible moves forward of the node (therefore static). This implies that for dynamic positions where tactical threats exist, the evaluation function will not be an accurate assessment of the position. These positions are termed non-quiescent; they require at least a limited kind of search extension called quiescence search to resolve threats before evaluation. Some values returned by evaluation functions are absolute rather than heuristic, if a win, loss or draw occurs at the node.
There is an intricate relationship between search and knowledge in the evaluation function. Deeper search favors less near-term tactical factors and more subtle long-horizon positional motifs in the evaluation. There is also a trade-off between efficacy of encoded knowledge and computational complexity: computing detailed knowledge may take so much time that performance decreases, so approximations to exact knowledge are often better. Because the evaluation function depends |
https://en.wikipedia.org/wiki/Link-state%20routing%20protocol | Link-state routing protocols are one of the two main classes of routing protocols used in packet switching networks for computer communications, the others being distance-vector routing protocols. Examples of link-state routing protocols include Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS).
The link-state protocol is performed by every switching node in the network (i.e., nodes that are prepared to forward packets; in the Internet, these are called routers). The basic concept of link-state routing is that every node constructs a map of the connectivity to the network, in the form of a graph, showing which nodes are connected to which other nodes. Each node then independently calculates the next best logical path from it to every possible destination in the network. Each collection of best paths will then form each node's routing table.
This contrasts with distance-vector routing protocols, which work by having each node share its routing table with its neighbours; in a link-state protocol, the only information passed between nodes is connectivity-related. Link-state algorithms are sometimes characterized informally as each router "telling the world about its neighbors."
Overview
In link-state routing protocols, each router possesses information about the complete network topology. Each router then independently calculates the best next hop from it for every possible destination in the network using local information of the topology. The collection of best-next-hops forms the routing table.
This contrasts with distance-vector routing protocols, which work by having each node share its routing table with its neighbours. In a link-state protocol, the only information passed between the nodes is the information used to construct the connectivity maps.
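Because every node holds the full connectivity map, the per-node computation is a single-source shortest-path run. A sketch using Dijkstra's algorithm (the usual choice in OSPF and IS-IS) over a hypothetical four-router topology with invented link costs:

```python
import heapq

# Hypothetical link-state database: every router holds the same full map.
topology = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

def shortest_paths(source):
    """Dijkstra's algorithm over the shared topology; each router runs
    this independently to derive its own routing table."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already found a shorter path
        for nb, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                heapq.heappush(heap, (nd, nb))
    return dist

print(shortest_paths("A"))  # A reaches D at cost 4, via B and C
```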
Examples of link-state routing protocols:
Open Shortest Path First (OSPF)
Intermediate System to Intermediate System (IS-IS)
History
What is believed to be the first adaptive routing network of computers, using link-state routing at its heart, was designed and implemented during 1976–1977 by a team from Plessey Radar led by Bernard J Harris; the project was for "Wavell", a system of computer command and control for the British Army.
The first link-state routing concept was published in 1979 by John M. McQuillan (then at Bolt, Beranek and Newman) as a mechanism that would calculate routes more quickly when network conditions changed, and thus lead to more stable routing.
Later work at BBN Technologies showed how to use the link-state technique in a hierarchical system (i.e., one in which the network was divided into areas) so that each switching node does not need a map of the entire network, only the area(s) in which it is included.
The technique was later adapted for use in the contemporary link-state routing protocols IS-IS and OSPF. Cisco literature refers to Enhanced Interior Gateway Routing Protocol (EIGRP) as a "hybrid" protocol, despite the fact |
https://en.wikipedia.org/wiki/SDP | SDP may refer to:
Computing
Scenario Design Power, a power level mode of certain generations of Intel's mobile processors
Semidefinite programming, an optimization procedure
Service data point, a node in mobile telecommunication networks
Service delivery platform, a mobile telecommunications component
Service Design Package, the repository of all design information for a service in ITIL
Service discovery protocol, a type of service discovery for network services
Session Description Protocol, a communication protocol for describing multimedia sessions
Single-dealer platform, software used in financial trading
Sockets Direct Protocol, a low-level remote-computing protocol
Software Defined Perimeter, also called "Black Cloud", an approach to computer security
Music
Stephen Dale Petit (born 1969), an American blues musician
Scha Dara Parr, a Japanese hip-hop group
Stuart Price (born 1977), a British music producer who occasionally remixes under the moniker SDP
SDP (band), a German pop/hip-hop duo
Political parties
Social Democratic Party, a list of parties with this name
Socialist Democratic Party (disambiguation)
Europe
Social Democratic Party (Andorra)
Social Democratic Party of Bosnia and Herzegovina
Social Democratic Party of Croatia
Social Democratic Party of Finland
Social Democratic Party of Germany
Social Democratic Party in the GDR
Sudeten German Party (Sudetendeutsche Partei)
Social Democratic Party (Latvia)
Social Democratic Party of Montenegro
Social Democratic Party (Serbia 2001–10)
Social Democratic Party (Serbia 2014-)
Social Democratic Party of Serbia
Social Democratic Party (UK)
Social Democratic Party (UK, 1988)
Social Democratic Party (UK, 1990–present)
Socialist Democratic Party (Turkey)
Elsewhere
Social Democratic Party of America
Socialist Democratic Party (Canada)
Socialist Democratic Party (Chile)
Social Democratic Party (Japan)
Socialist Democratic Party (Japan)
Social Democratic Party (New Zealand)
Socialist Democrat Party (Peru)
Singapore Democratic Party
Communist Party of Kenya, formerly known as the Social Democratic Party of Kenya
Transport
SDP, the IATA code for Sand Point Airport in the Aleutian Islands, Alaska, US
SDP, the National Rail code for Sandplace railway station, Cornwall, UK
Stoomtrein Dendermonde-Puurs, the Dendermonde–Puurs Steam Railway heritage railway in Belgium
Other uses
San Diego Padres, American professional baseball team
School District of Philadelphia
sdp, the ISO 639-3 code for the Sherdukpen language spoken in Arunachal Pradesh, India
Society of Decorative Painters
Stardust Pictures, a film studio, operating as a subsidiary of Stardust Promotion
State Domestic Product, in economics
Substantive due process, legal principle in the United States
See also
SPD (disambiguation)
DSP (disambiguation)
PDS (disambiguation)
Democratic Socialist Party (disambiguation)
Party of Democratic Socialism (disambiguation) |
https://en.wikipedia.org/wiki/Network%20Time%20Protocol | The Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware.
NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). It uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
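The offset and delay an NTP client estimates from a single request/response exchange follow from four timestamps. A minimal sketch (timestamp values are invented; the offset formula assumes a symmetric network path, which is exactly why the asymmetric routes mentioned above introduce error):

```python
# Standard NTP clock-offset and round-trip-delay estimates from the four
# timestamps of one request/response exchange (all values in seconds;
# the numbers below are invented for illustration).
t1 = 100.000   # client: request sent (client clock)
t2 = 100.110   # server: request received (server clock)
t3 = 100.120   # server: response sent (server clock)
t4 = 100.050   # client: response received (client clock)

offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client-clock error vs. server
delay = (t4 - t1) - (t3 - t2)          # round-trip network delay

print(f"offset = {offset:+.3f} s, delay = {delay:.3f} s")
```

If the outbound and return paths have unequal latency, the error in the offset estimate can be up to half the delay asymmetry.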
The protocol is usually described in terms of a client–server model, but can as easily be used in peer-to-peer relationships where both peers consider the other to be a potential time source. Implementations send and receive timestamps using the User Datagram Protocol (UDP) on port number 123. They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment, but no information about local time zones or daylight saving time is transmitted.
The current protocol is version 4 (NTPv4), which is backward compatible with version 3.
History
In 1979, network time synchronization technology was used in what was possibly the first public demonstration of Internet services running over a trans-Atlantic satellite network, at the National Computer Conference in New York. The technology was later described in the 1981 Internet Engineering Note (IEN) 173 and a public protocol was developed from it that was documented in . The technology was first deployed in a local area network as part of the Hello routing protocol and implemented in the Fuzzball router, an experimental operating system used in network prototyping, where it ran for many years.
Other related network tools were available then and remain available now. They include the Daytime and Time protocols for recording the time of events, as well as the ICMP Timestamp messages and IP Timestamp option (). More complete synchronization systems, although lacking NTP's data analysis and clock disciplining algorithms, include the Unix daemon timed, which uses an election algorithm to appoint a server for all the clients; and the Digital Time Synchronization Service (DTSS), which uses a hierarchy of servers similar to the NTP stratum model.
In 1985, NTP version 0 (NTPv0) was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations, which have persisted into NTPv4, were documented in . Despite the relatively slow computers and networks available at the |
https://en.wikipedia.org/wiki/Microsoft%20Flight%20Simulator | Microsoft Flight Simulator is a series of flight simulator programs for MS-DOS, Classic Mac OS and Microsoft Windows operating systems. It was an early product in the Microsoft application portfolio and differed significantly from Microsoft's other software, which was largely business-oriented. As of November 2022, Microsoft Flight Simulator is the longest-running software product line for Microsoft, predating Windows by three years. Microsoft Flight Simulator is one of the longest-running PC video game series of all time.
Bruce Artwick began the development of Flight Simulator in 1977. His company, Sublogic, initially distributed it for various personal computers. In 1981, Artwick was approached by Microsoft's Alan M. Boyd who was interested in creating a "definitive game" that would graphically demonstrate the difference between older 8-bit computers, such as the Apple II, and the new 16-bit computers, such as the IBM PC, still in development. In 1982, Artwick's company licensed a version of Flight Simulator for the IBM PC to Microsoft, which marketed it as Microsoft Flight Simulator 1.00.
In 2009, Microsoft closed down Aces Game Studio, which was the department responsible for creating and maintaining the Flight Simulator series. In 2014, Dovetail Games were granted the rights by Microsoft to port the Gold Edition of Microsoft's Flight Simulator X to Steam and publish Flight Simulator X: Steam Edition.
Microsoft announced a new installment at E3 in 2019, simply titled Microsoft Flight Simulator, to be released initially on PC and ported over to the Xbox Series X at a later date. On July 12, 2020, Microsoft opened up preorders and announced that Microsoft Flight Simulator for PC will be available on August 18, 2020. The company announced three different versions of the title – standard, deluxe, and premium deluxe, each providing an incremental set of gameplay features, including airports, and airplanes to choose from. The Xbox edition was released on July 27, 2021.
History
Microsoft Flight Simulator began as a set of articles written by Bruce Artwick in 1976 about a 3D computer graphics program. When the magazine editor said that subscribers wanted to buy the program, Artwick set to work to create it and incorporated a company called Sublogic Corporation in 1977. The company began selling flight simulators for several computer platforms, including the 8080, Altair 8800, and IMSAI 8080. In 1979 Sublogic released FS1 Flight Simulator for the Apple II. In 1980, Sublogic released a version for the TRS-80, and in 1982 they licensed an IBM PC version with CGA graphics to Microsoft, which was released as Microsoft Flight Simulator 1.00 on a self-booting disk. In the early days of less-than-100% IBM PC compatible systems, Flight Simulator and Lotus 1-2-3 were used as unofficial compatibility test software for new PC clone models.
Sublogic continued to develop for other platforms and ported Flight Simulator II to the Apple II in 1983; the Commodor |
https://en.wikipedia.org/wiki/Linear%20interpolation | In mathematics, linear interpolation is a method of curve fitting using linear polynomials to construct new data points within the range of a discrete set of known data points.
Linear interpolation between two known points
If the two known points are given by the coordinates $(x_0, y_0)$ and $(x_1, y_1)$, the linear interpolant is the straight line between these points. For a value $x$ in the interval $(x_0, x_1)$, the value $y$ along the straight line is given from the equation of slopes

$$\frac{y - y_0}{x - x_0} = \frac{y_1 - y_0}{x_1 - x_0},$$
which can be derived geometrically from the figure on the right. It is a special case of polynomial interpolation with $n = 1$.
Solving this equation for $y$, which is the unknown value at $x$, gives

$$y = y_0 + (y_1 - y_0)\frac{x - x_0}{x_1 - x_0},$$
which is the formula for linear interpolation in the interval $(x_0, x_1)$. Outside this interval, the formula is identical to linear extrapolation.
This formula can also be understood as a weighted average. The weights are inversely related to the distance from the end points to the unknown point; the closer point has more influence than the farther point. Thus, the weights are $\frac{x - x_0}{x_1 - x_0}$ and $\frac{x_1 - x}{x_1 - x_0}$, which are normalized distances between the unknown point and each of the end points. Because these sum to 1,

$$y = y_0\left(1 - \frac{x - x_0}{x_1 - x_0}\right) + y_1\left(\frac{x - x_0}{x_1 - x_0}\right) = y_0\left(\frac{x_1 - x}{x_1 - x_0}\right) + y_1\left(\frac{x - x_0}{x_1 - x_0}\right),$$
yielding the formula for linear interpolation given above.
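In code, the weighted-average form translates directly. A minimal sketch (function name and example values are ours, not from the article):

```python
def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate y at x between the points (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)        # normalized distance: 0 at x0, 1 at x1
    return y0 * (1 - t) + y1 * t    # weighted average of the two endpoints

# Estimate a 1994 table value from known 1990 and 2000 values
# (the population-like numbers are made up):
print(lerp(1990, 249.6, 2000, 281.4, 1994))
```

For $x$ outside $(x_0, x_1)$ the same function performs linear extrapolation.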
Interpolation of a data set
Linear interpolation on a set of data points is defined as the concatenation of linear interpolants between each pair of data points. This results in a continuous curve, with a discontinuous derivative (in general), thus of differentiability class $C^0$.
Linear interpolation as approximation
Linear interpolation is often used to approximate a value of some function $f$ using two known values of that function at other points. The error of this approximation is defined as

$$R_T = f(x) - p(x),$$
where $p$ denotes the linear interpolation polynomial defined above:

$$p(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0).$$
It can be proven using Rolle's theorem that if $f$ has a continuous second derivative, then the error is bounded by

$$|R_T| \le \frac{(x_1 - x_0)^2}{8} \max_{x_0 \le t \le x_1} \left|f''(t)\right|.$$
That is, the approximation between two points on a given function gets worse with the second derivative of the function that is approximated. This is intuitively correct as well: the "curvier" the function is, the worse the approximations made with simple linear interpolation become.
History and applications
Linear interpolation has been used since antiquity for filling the gaps in tables. Suppose that one has a table listing the population of some country in 1970, 1980, 1990 and 2000, and that one wanted to estimate the population in 1994. Linear interpolation is an easy way to do this. It is believed that it was used in the Seleucid Empire (last three centuries BC) and by the Greek astronomer and mathematician Hipparchus (second century BC). A description of linear interpolation can be found in the ancient Chinese mathematical text called The Nine Chapters on the Mathematical Art (九章算術), dated from 200 BC to AD 100 and the Almagest (2nd century AD) by Ptolemy.
The basic operation of linear interpolation between two values is commonly used in computer graphics. In that field's jargon it is sometimes called a lerp.
https://en.wikipedia.org/wiki/Block%20cipher%20mode%20of%20operation | In cryptography, a block cipher mode of operation is an algorithm that uses a block cipher to provide information security such as confidentiality or authenticity. A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one fixed-length group of bits called a block. A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block.
Most modes require a unique binary sequence, often called an initialization vector (IV), for each encryption operation. The IV has to be non-repeating and, for some modes, random as well. The initialization vector is used to ensure distinct ciphertexts are produced even when the same plaintext is encrypted multiple times independently with the same key. Block ciphers may be capable of operating on more than one block size, but during transformation the block size is always fixed. Block cipher modes operate on whole blocks and require that the last part of the data be padded to a full block if it is smaller than the current block size. There are, however, modes that do not require padding because they effectively use a block cipher as a stream cipher.
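The role of the IV and of chaining can be made concrete with a toy CBC implementation. The sketch below deliberately uses a weak one-block "cipher" (XOR with the key) purely so the mode's mechanics are visible; a real implementation would use AES or another genuine block cipher, and the block size, key, and IV here are invented for the demo:

```python
# Toy CBC mode over a deliberately weak one-block "cipher" (XOR with the key).
# XOR is NOT a real block cipher; this only illustrates IV use and chaining.
BLOCK = 8  # block size in bytes (assumed for this sketch)

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pad(data):                      # PKCS#7-style padding to a full block
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def unpad(data):
    return data[:-data[-1]]

def cbc_encrypt(plaintext, key, iv):
    data, prev, out = pad(plaintext), iv, b""
    for i in range(0, len(data), BLOCK):
        mixed = xor_bytes(data[i:i + BLOCK], prev)  # chain with previous ciphertext (or IV)
        prev = xor_bytes(mixed, key)                # "encrypt" the mixed block
        out += prev
    return out

def cbc_decrypt(ciphertext, key, iv):
    prev, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out += xor_bytes(xor_bytes(block, key), prev)  # "decrypt", then un-chain
        prev = block
    return unpad(out)

key, iv = b"k" * BLOCK, b"\x01" * BLOCK
ct = cbc_encrypt(b"attack at dawn", key, iv)
assert cbc_decrypt(ct, key, iv) == b"attack at dawn"
# Same plaintext and key, different IV -> different ciphertext:
assert ct != cbc_encrypt(b"attack at dawn", key, b"\x02" * BLOCK)
```

The final assertion shows why the IV matters: without it, encrypting the same plaintext twice under the same key would yield identical ciphertexts.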
Historically, encryption modes have been studied extensively in regard to their error propagation properties under various scenarios of data modification. Later development regarded integrity protection as an entirely separate cryptographic goal. Some modern modes of operation combine confidentiality and authenticity in an efficient way, and are known as authenticated encryption modes.
History and standardization
The earliest modes of operation, ECB, CBC, OFB, and CFB (see below for all), date back to 1981 and were specified in FIPS 81, DES Modes of Operation. In 2001, the US National Institute of Standards and Technology (NIST) revised its list of approved modes of operation by including AES as a block cipher and adding CTR mode in SP800-38A, Recommendation for Block Cipher Modes of Operation. Finally, in January, 2010, NIST added XTS-AES in SP800-38E, Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices. Other confidentiality modes exist which have not been approved by NIST. For example, CTS is ciphertext stealing mode and available in many popular cryptographic libraries.
The block cipher modes ECB, CBC, OFB, CFB, CTR, and XTS provide confidentiality, but they do not protect against accidental modification or malicious tampering. Modification or tampering can be detected with a separate message authentication code such as CBC-MAC, or a digital signature. The cryptographic community recognized the need for dedicated integrity assurances and NIST responded with HMAC, CMAC, and GMAC. HMAC was approved in 2002 as FIPS 198, The Keyed-Hash Message Authentication Code (HMAC), CMAC was released in 2005 under SP800-38B, Recommendation for Block Cipher Modes of Operation: The CMAC Mod |
https://en.wikipedia.org/wiki/CD%20ripper | A CD ripper, CD grabber, or CD extractor is software that rips raw digital audio in Compact Disc Digital Audio (CD-DA) format tracks on a compact disc to standard computer sound files, such as WAV or MP3.
A more formal term used for the process of ripping audio CDs is digital audio extraction (DAE).
History
In the early days of computer CD-ROM drives and audio compression mechanisms (such as MP2), CD ripping was considered undesirable by copyright holders, with some attempting to retrofit copy protection into the simple ISO 9660 standard. As time progressed, most music publishers became more open to the idea that since individuals had bought the music, they should be able to create a copy for their own personal use on their own computer. This is not yet entirely true; even with some current digital music delivery mechanisms, there are considerable restrictions on what an end user can do with their paid-for (and therefore personally licensed) audio. Windows Media Player's default behavior is to add copy protection measures to ripped music, with a disclaimer that if this is not done, the end user is held entirely accountable for what is done with their music. This suits most users who simply want to store their music on a memory stick, MP3 player or portable hard disk and listen to it on any PC or compatible device.
Etymology
The Jargon File entry for rip notes that the term originated in Amiga slang, where it referred to the extraction of multimedia content from program data.
Design
As an intermediate step, some ripping programs save the extracted audio in a lossless format such as WAV, FLAC, or even raw PCM audio. The extracted audio can then be encoded with a lossy codec like MP3, Vorbis, WMA or AAC. The encoded files are more compact and are suitable for playback on digital audio players. They may also be played back in a media player program on a computer.
Most ripping programs will assist in tagging the encoded files with metadata. The MP3 file format, for example, allows tags with title, artist, album and track number information. Some will try to identify the disc being ripped by looking up network services like AMG's LASSO, FreeDB, Gracenote's CDDB, GD3 or MusicBrainz, or attempt text extraction if CD-Text has been stored.
Some all-in-one ripping programs can simplify the entire process by ripping and burning the audio to disc in one step, possibly re-encoding the audio on-the-fly in the process.
Some CD ripping software is specifically intended to provide an especially accurate or "secure" rip, including Exact Audio Copy, cdda2wav, CDex and cdparanoia.
Compact disc seek jitter
In the context of digital audio extraction from compact discs, seek jitter causes extracted audio samples to be doubled-up or skipped entirely if the Compact Disc drive re-seeks. The problem occurs because the Red Book does not require block-accurate addressing during seeking. As a result, the extraction process may restart a few samples early or late, r |
https://en.wikipedia.org/wiki/Media%20player%20software | Media player software is a type of application software for playing multimedia computer files like audio and video files. Media players commonly display standard media control icons known from physical devices such as tape recorders and CD players, such as play ( ), pause ( ), fastforward (⏩️), rewind (⏪), and stop ( ) buttons. In addition, they generally have progress bars (or "playback bars"), which are sliders to locate the current position in the duration of the media file.
Mainstream operating systems have at least one default media player. For example, Windows comes with Windows Media Player, Microsoft Movies & TV and Groove Music, while macOS comes with QuickTime Player and Music. Linux distributions come with different media players, such as SMPlayer, Amarok, Audacious, Banshee, MPlayer, mpv, Rhythmbox, Totem, VLC media player, and xine. Android comes with YouTube Music for audio and Google Photos for video, and smartphone vendors such as Samsung may bundle custom software.
Functionality focus
The basic feature set of media players includes a seek bar, a timer with the current and total playback time, playback controls (play, pause, previous, next, stop), playlists, a "repeat" mode, and a "shuffle" (or "random") mode for variety and for sampling long collections of files.
Different media players have different goals and feature sets. Video players are a group of media players that have their features geared more towards playing digital video. For example, Windows DVD Player exclusively plays DVD-Video discs and nothing else. Media Player Classic can play individual audio and video files but many of its features such as color correction, picture sharpening, zooming, set of hotkeys, DVB support and subtitle support are only useful for video material such as films and cartoons. Audio players, on the other hand, specialize in digital audio. For example, AIMP exclusively plays audio formats. MediaMonkey can play both audio and video formats, but many of its features including media library, lyric discovery, music visualization, online radio, audiobook indexing, and tag editing are geared toward consumption of audio material; watching video files on it can be a trying feat. General-purpose media players also do exist. For example, Windows Media Player has exclusive features for both audio and video material, although it cannot match the feature set of Media Player Classic and MediaMonkey combined.
By default, videos are played with fully visible field of view while filling at least either width or height of the viewport to appear as large as possible. Options to change the video's scaling and aspect ratio may include filling the viewport through either stretching or cropping, and "100% view" where each pixel of the video covers exactly one pixel on the screen.
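The "fill at least one dimension while keeping the whole frame visible" behavior amounts to scaling by the smaller of the two axis ratios (letterbox/pillarbox fit). A minimal sketch (the function name is ours):

```python
def fit_video(video_w, video_h, view_w, view_h):
    """Scale a video to fill the viewport in at least one dimension while
    keeping the full field of view visible (letterbox/pillarbox fit)."""
    scale = min(view_w / video_w, view_h / video_h)  # the larger ratio would crop
    return round(video_w * scale), round(video_h * scale)

# A 1920x1080 video in a 1280x1024 viewport fills the width and letterboxes:
print(fit_video(1920, 1080, 1280, 1024))
```

Using `max` instead of `min` gives the "fill through cropping" option mentioned above.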
Zooming into the field of view during playback may be implemented through a slider on any screen or with pinch zoom on touch screens, and moving the field of view may be i |
https://en.wikipedia.org/wiki/IDL | IDL may refer to:
Computing
Interface description language, any computer language used to describe a software component's interface
IDL specification language, the original IDL created by Lamb, Wulf and Nestor at Queen's University, Canada
OMG IDL, an IDL standardized by Object Management Group selected by the W3C for exposing the DOM of XML, HTML, CSS, and SVG documents
Microsoft Interface Definition Language, an extension of OMG IDL for supporting Microsoft's DCOM services
Web IDL, a variation of an IDL for describing APIs that are intended to be implemented in Web browsers
Interactive Data Language, a data analysis language popular for science applications
ICAD Design Language, a knowledge-based engineering language used with the software ICAD
Places
John F. Kennedy International Airport, formerly named Idlewild Airport with IATA airport code IDL
Indianola Municipal Airport, by FAA airport code
Inner Dispersal Loop, the common name of Interstate 444, a highway in downtown Tulsa, Oklahoma
Other uses
International Date Line, the time zone date boundary
Intermediate-density lipoprotein
International Drivers License
International Darts League, a defunct major darts tournament
IDL Drug Stores, a now-defunct independent drug store cooperative
Internet Defense League, a website |
https://en.wikipedia.org/wiki/MenuetOS | MenuetOS is an operating system with a monolithic preemptive, real-time kernel written in FASM assembly language. The system also includes video drivers. It runs on 64-bit and 32-bit x86 architecture computers. Its author is Ville M. Turjanmaa. It has a graphical desktop, games, and networking abilities (TCP/IP stack). One distinctive feature is that it fits on one floppy disk. On an Intel Pentium MMX system, one person reported a boot time of "probably ."
History
32-bit
MenuetOS was originally written for 32-bit x86 architectures and released under the GPL-2.0-only license, thus many of its applications are distributed under the GPL.
64-bit
The 64-bit MenuetOS, often referred to as Menuet 64, remains a platform for learning 64-bit assembly language programming. The 64-bit Menuet is distributed without charge for personal and educational use only, but without the source code, and the license includes a clause that prohibits disassembly.
Multi-core support was added on 24 Feb 2010.
Features
MenuetOS development has focused on fast, simple, efficient implementation. MenuetOS has networking abilities, and a working TCP/IP stack. Most of the networking code is written by Mike Hibbett.
The main focus of Menuet has been on making an environment for easy assembly programming, but it is still possible to run software written in high-level programming languages on the assembler core. The biggest single effort towards high-level language support is Jarek Pelczar's work in porting C libraries to Menuet.
The GUI at version 0.99 supports display resolutions up to (16 million colours) with window transparency. The OS has support for several classes of USB 2.0 peripherals. MenuetOS ships with the shareware versions of Quake and Doom.
For disk access, MenuetOS supports the FAT32 file system. Write support is only possible to USB connected devices.
Distributions
32-bit
Menuet32
GridWorks "EZ" distribution (comprehensive 32-bit archive packages) (CD/HD Boots)
64-bit
The 64-bit main distribution is now proprietary. Several distributions of the 32-bit GPL MenuetOS still exist, including translations in Russian, Chinese, Czech, and Serbian.
Menuet64
See also
KolibriOS - A free fork of MenuetOS 32-bit
References
David Chisnall (Jun 22, 2007) A Roundup of Free Operating Systems. MenuetOS, informIT
MenuetOS - 32bit-Betriebssystem auf einer Floppy, Der Standard, 12 May 2003
Eugenia Loli-Queru (5 Sep 2001) Interview With Ville Turjanmaa, the Creator of MenuetOS, OSNews
Ville M. Turjanmaa (December 1, 2001) The Menuet Operating System. Packing a lot of punch into a small package, Dr. Dobb's
External links
MenuetOS homepage (Menuet64 oriented)
MenuetOS C Library
MenuetOS compared to AtheOS and SkyOS (2002)
an interview with Ville Turjanmaa and Madis Kalme, two of the MenuetOS developers (2009)
Floppy disk-based operating systems
X86-64 operating systems
X86 operating systems
Assembly language software
Proprietary operating systems
Hobbyist o |
https://en.wikipedia.org/wiki/FASM | FASM (flat assembler) is an assembler for x86 processors. It supports Intel-style assembly language on the IA-32 and x86-64 computer architectures. It claims high speed, size optimizations, operating system (OS) portability, and macro abilities. It is a low-level assembler and intentionally uses very few command-line options. It is free and open-source software.
All versions of FASM can directly output any of the following: flat "raw" binary (usable also as MS-DOS COM executable or SYS driver), objects: Executable and Linkable Format (ELF) or Common Object File Format (COFF) (classic or MS-specific), or executables in either MZ, ELF, or Portable Executable (PE) format (including WDM drivers, allows custom MZ DOS stub). An unofficial port targeting the ARM architecture (FASMARM) also exists.
History
The project was started in 1999 by Tomasz Grysztar, a.k.a. Privalov, at that time an undergraduate student of mathematics from Poland. It was released publicly in March 2000. FASM is completely written in assembly language and comes with full source. It is self-hosting and has been able to assemble itself since version 0.90 (May 4, 1999).
FASM originally ran in 16-bit flat real mode. 32-bit support was added and then supplemented with optional DPMI support. Designed to be easy to port to any operating system with flat 32-bit addressing, it was ported to Windows, then Linux.
Design
FASM does not support as many high-level statements as MASM or TASM. It provides syntax features and macros, which make it possible to customize or create missing statements. Its memory-addressing syntax is similar to TASM's ideal mode and NASM. Brackets are used to denote memory operands as in both assemblers, but their size is placed outside the brackets, like in NASM.
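As a brief illustration of that addressing syntax (a hypothetical fragment, not taken from the FASM distribution), the size specifier sits outside the brackets:

```asm
; FASM/NASM-style memory operands: brackets mark a memory access,
; and the operand size is written outside the brackets.
mov     eax, [counter]        ; load the dword stored at label 'counter'
mov     byte [flags], 1       ; size specifier precedes the bracket
inc     dword [counter]       ; size is required when no register implies it

counter dd 0
flags   db 0
```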
FASM is a multi-pass assembler. It makes extensive code-size optimization and allows unconstrained forward referencing. An unusual FASM construct is defining procedures only if they are used somewhere in the code, something that in most languages is done per-object by the linker.
FASM is based on the "same source, same output" principle: the contents of the resulting file are not affected by the command line. This approach spares FASM sources the compilation problems often present in assembly projects. On the other hand, it makes it harder to maintain a project that consists of multiple separately compiled source files or mixed-language projects. However, there exists a Win32 wrapper called FA, which mitigates this problem. FASM projects can be built from one source file directly into an executable file without a linking stage.
IDE
Fresh, an internet community supported project started by John Found, is an integrated development environment for FASM. Fresh currently supports Microsoft Windows and Linux.
Use
Operating systems written with FASM:
MenuetOS – 32- and 64-bit GUI operating systems by Ville Turjanmaa
KolibriOS
Compilers that use FASM as a backend:
PureBasic
High Level Ass |
https://en.wikipedia.org/wiki/Point%20estimation | In statistics, point estimation involves the use of sample data to calculate a single value (known as a point estimate since it identifies a point in some parameter space) which is to serve as a "best guess" or "best estimate" of an unknown population parameter (for example, the population mean). More formally, it is the application of a point estimator to the data to obtain a point estimate.
Point estimation can be contrasted with interval estimation: such interval estimates are typically either confidence intervals, in the case of frequentist inference, or credible intervals, in the case of Bayesian inference. More generally, a point estimator can be contrasted with a set estimator. Examples are given by confidence sets or credible sets. A point estimator can also be contrasted with a distribution estimator. Examples are given by confidence distributions, randomized estimators, and Bayesian posteriors.
Properties of point estimates
Biasedness
“Bias” is defined as the difference between the expected value of the estimator and the true value of the population parameter being estimated. Equivalently, the closer the expected value of the estimator is to the true parameter value, the smaller the bias. When the expected value of the estimator equals the true value, the estimator is called unbiased. An unbiased estimator with minimum variance is the best unbiased estimator. However, a biased estimator with a small variance may be more useful than an unbiased estimator with a large variance. Most importantly, we prefer point estimators that have the smallest mean squared error.
If we let T = h(X1, X2, . . . , Xn) be an estimator based on a random sample X1, X2, . . . , Xn, the estimator T is called an unbiased estimator for the parameter θ if E[T] = θ, irrespective of the value of θ. For example, from the same random sample we have E(x̄) = µ (mean) and E(s²) = σ² (variance), so x̄ and s² are unbiased estimators for µ and σ². The difference E[T] − θ is called the bias of T; if this difference is nonzero, then T is called biased.
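The unbiasedness of x̄ and s² can be checked empirically. The sketch below (population parameters and simulation sizes are invented for the demo) draws many small normal samples and compares the unbiased variance estimator (dividing by n − 1) with the naive one (dividing by n), whose average falls short of σ² by a factor of (n − 1)/n:

```python
import random
import statistics

random.seed(42)

mu, sigma = 10.0, 2.0    # true population parameters (chosen for this demo)
n, trials = 5, 20000     # small samples make the naive estimator's bias visible

mean_estimates, unbiased_var, biased_var = [], [], []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    mean_estimates.append(xbar)
    unbiased_var.append(statistics.variance(sample))               # divides by n - 1
    biased_var.append(sum((x - xbar) ** 2 for x in sample) / n)    # divides by n

print(round(statistics.mean(mean_estimates), 2))  # close to mu: x̄ is unbiased
print(round(statistics.mean(unbiased_var), 2))    # close to sigma² = 4.0
print(round(statistics.mean(biased_var), 2))      # close to (n-1)/n · σ² = 3.2
```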
Consistency
Consistency concerns whether the point estimate stays close to the true parameter value as the sample size increases. The larger the sample size, the more accurate the estimate. For a consistent point estimator, the expected value should approach the true value of the parameter and the variance should shrink toward zero as the sample grows. An unbiased estimator T is consistent if the variance of T tends to zero as the sample size tends to infinity.
Efficiency
Let T1 and T2 be two unbiased estimators for the same parameter θ. The estimator T2 is called more efficient than estimator T1 if Var(T2) < Var(T1), irrespective of the value of θ. We can also say that the most efficient estimators are the ones with the least variability of outcomes. Therefore, an estimator that is unbiased and has the smallest variance from sample to sample is both most efficient and unbiased. We extend the notion of efficiency by saying that estimator T2
https://en.wikipedia.org/wiki/Interval%20estimation | In statistics, interval estimation is the use of sample data to estimate an interval of possible values of a parameter of interest. This is in contrast to point estimation, which gives a single value.
The most prevalent forms of interval estimation are confidence intervals (a frequentist method) and credible intervals (a Bayesian method);
less common forms include likelihood intervals and fiducial intervals.
Other forms of statistical intervals include tolerance intervals (covering a proportion of a sampled population) and prediction intervals (an estimate of a future observation, used mainly in regression analysis).
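As a concrete frequentist example, a 95% confidence interval for a mean under the normal approximation is x̄ ± 1.96·s/√n. A minimal sketch (the data values are made up for illustration):

```python
import math
import statistics

# 95% confidence interval for a mean, normal approximation
# (a frequentist sketch; data values are invented for illustration)
data = [4.8, 5.1, 5.0, 4.7, 5.3, 5.2, 4.9, 5.0, 5.1, 4.9]
n = len(data)
xbar = statistics.mean(data)
s = statistics.stdev(data)    # sample standard deviation (n - 1 denominator)
z = 1.96                      # 97.5th percentile of the standard normal
half_width = z * s / math.sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
print(f"95% CI: ({lo:.3f}, {hi:.3f})")
```

For small samples a Student's t quantile would replace the fixed z = 1.96; a Bayesian credible interval would instead be computed from the posterior distribution.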
Non-statistical methods that can lead to interval estimates include fuzzy logic.
Discussion
The scientific problems associated with interval estimation may be summarised as follows:
When interval estimates are reported, they should have a commonly held interpretation in the scientific community and more widely. In this regard, credible intervals are held to be most readily understood by the general public. Interval estimates derived from fuzzy logic have much more application-specific meanings.
For commonly occurring situations there should be sets of standard procedures that can be used, subject to the checking and validity of any required assumptions. This applies for both confidence intervals and credible intervals.
For more novel situations there should be guidance on how interval estimates can be formulated. In this regard confidence intervals and credible intervals have a similar standing but there are differences:
credible intervals can readily deal with prior information, while confidence intervals cannot.
confidence intervals are more flexible and can be used practically in more situations than credible intervals: one area where credible intervals suffer in comparison is in dealing with non-parametric models (see non-parametric statistics).
There should be ways of testing the performance of interval estimation procedures. This arises because many such procedures involve approximations of various kinds and there is a need to check that the actual performance of a procedure is close to what is claimed. The use of stochastic simulations makes this straightforward in the case of confidence intervals, but it is somewhat more problematic for credible intervals, where prior information needs to be taken properly into account. Credible intervals can be checked for situations representing no prior information, but the check involves examining the long-run frequency properties of the procedures.
Severini (1991) discusses conditions under which credible intervals and confidence intervals will produce similar results, and also discusses both the coverage probabilities of credible intervals and the posterior probabilities associated with confidence intervals.
In decision theory, which is a common approach to and justification for Bayesian statistics, interval estimation is not of direct interest. The outcome is a decisi |
https://en.wikipedia.org/wiki/C128 | C128 may refer to:
Commodore 128, a home / personal computer
a production designation for the XC-120 Packplane aircraft |
https://en.wikipedia.org/wiki/Loopback | Loopback (also written loop-back) is the routing of electronic signals or digital data streams back to their source without intentional processing or modification. It is primarily a means of testing the communications infrastructure.
Applications
There are many example applications. It may be a communication channel with only one communication endpoint. Any message transmitted by such a channel is immediately and only received by that same channel. In telecommunications, loopback devices perform transmission tests of access lines from the serving switching center, which usually does not require the assistance of personnel at the served terminal. Loop around is a method of testing between stations that are not necessarily adjacent, wherein two lines are used, with the test being done at one station and the two lines are interconnected at the distant station. A patch cable may also function as loopback, when applied manually or automatically, remotely or locally, facilitating a loop-back test.
Where a system (such as a modem) involves round-trip analog-to-digital processing, a distinction is made between analog loopback, where the analog signal is looped back directly, and digital loopback, where the signal is processed in the digital domain before being re-converted to an analog signal and returned to the source.
Telecommunications
In telecommunications, loopback, or a loop, is a hardware or software method which feeds a received signal or data back to the sender. It is used as an aid in debugging physical connection problems. As a test, many data communication devices can be configured to send specific patterns (such as all ones) on an interface and can detect the reception of this signal on the same port. This is called a loopback test and can be performed within a modem or transceiver by connecting its output to its own input. A circuit between two points in different locations may be tested by applying a test signal on the circuit in one location, and having the network device at the other location send a signal back through the circuit. If this device receives its own signal back, this proves that the circuit is functioning.
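A software loopback test of the kind described above can be sketched over the loopback network interface: send a known pattern (here, all ones) and compare what comes back. The small echo server below is a stand-in for the device under test; all names are illustrative:

```python
import socket
import threading

def recv_all(sock, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf

def loopback_test(pattern=b"\xff" * 64):
    """Send an all-ones test pattern to an echo endpoint on the
    loopback interface and verify the same bytes come back."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))             # any free port on loopback
    server.listen(1)
    port = server.getsockname()[1]

    def echo():
        conn, _ = server.accept()
        with conn:
            conn.sendall(recv_all(conn, len(pattern)))  # loop data back

    t = threading.Thread(target=echo)
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(pattern)
        received = recv_all(client, len(pattern))
    t.join()
    server.close()
    return received == pattern
```

If the sender receives its own pattern intact, the path under test is functioning, mirroring the hardware loopback test described above.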
A hardware loop is a simple device that physically connects the receiver channel to the transmitter channel. In the case of a network termination connector such as X.21, this is typically done by simply connecting the pins together in the connector. Media such as optical fiber or coaxial cable, which have separate transmit and receive connectors, can simply be looped together with a single strand of the appropriate medium.
A modem can be configured to loop incoming signals from either the remote modem or the local terminal. This is referred to as loopback or software loop.
Serial interfaces
A serial communications transceiver can use loopback for testing its functionality. For example, a device's transmit pin connected to its receive pin will result in the device receiving exactly what it transmits. Moving this |
https://en.wikipedia.org/wiki/Hardware%20random%20number%20generator | In computing, a hardware random number generator (HRNG), true random number generator (TRNG), non-deterministic random bit generator (NRBG), or physical random number generator is a device that generates random numbers from a physical process capable of producing entropy (in other words, the device always has access to a physical entropy source), unlike the pseudorandom number generator (PRNG, a.k.a. "deterministic random bit generator", DRBG) that utilizes a deterministic algorithm and non-physical nondeterministic random bit generators that do not include hardware dedicated to generation of entropy.
Nature provides ample phenomena that generate low-level, statistically random "noise" signals, including thermal and shot noise, jitter and metastability of electronic circuits, Brownian motion, and atmospheric noise. Researchers have also used the photoelectric effect (involving a beam splitter), other quantum phenomena, and even nuclear decay (though for practical reasons nuclear decay, like atmospheric noise, is not viable). While "classical" (non-quantum) phenomena are not truly random, an unpredictable physical system is usually acceptable as a source of randomness, so the qualifiers "true" and "physical" are used interchangeably.
A hardware random number generator is expected to output near-perfect random numbers ("full entropy"). A physical process usually does not have this property, so a practical TRNG typically includes a few building blocks:
a noise source that implements the physical process producing the entropy. Usually this process is analog, so a digitizer is used to convert the output of the analog source into a binary representation;
a conditioner that improves the quality of the random bits;
health tests. TRNGs are mostly used in cryptographic algorithms, which are completely broken if the random numbers have low entropy, so testing functionality is usually included.
Hardware random number generators generally produce only a limited number of random bits per second. In order to increase the available output data rate, they are often used to generate the "seed" for a faster PRNG. DRBG also helps with the noise source "anonymization" (whitening out the noise source identifying characteristics) and entropy extraction. With a proper DRBG algorithm selected (cryptographically secure pseudorandom number generator, CSPRNG), the combination can satisfy the requirements of Federal Information Processing Standards and Common Criteria standards.
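The seed-then-expand pattern described above can be sketched as follows. Here os.urandom stands in for the entropy source (the operating system's pool, which may itself be fed by a hardware generator); the expander is a toy counter-mode hash construction for illustration only, not a vetted DRBG such as those in NIST SP 800-90A:

```python
import hashlib
import os

def expand(seed: bytes, nbytes: int) -> bytes:
    """Toy seed-then-expand: stretch a short high-entropy seed into a
    longer output stream by hashing a counter. Illustration only --
    use a real, vetted CSPRNG in practice."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

seed = os.urandom(32)        # slow, entropy-limited source provides the seed
stream = expand(seed, 1024)  # fast deterministic expansion of that seed
```

The slow physical source only has to produce the short seed; the deterministic expander then supplies output at a much higher rate, which is the division of labor described above.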
Uses
Hardware random generators can be used in any application that needs randomness. However, in many scientific applications the additional cost and complexity of a TRNG (compared with pseudorandom number generators) provide no meaningful benefit. TRNGs also have drawbacks for data science and statistical applications: a series of numbers cannot be re-run unless it is stored, and reliance on an analog physical entity can obscure a failure of the source. The TRNGs there |
https://en.wikipedia.org/wiki/SINCGARS | Single Channel Ground and Airborne Radio System (SINCGARS) is a very high frequency combat-net radio (CNR) used by U.S. and allied military forces. In the CNR network, the SINCGARS’ primary role is voice transmission between surface and airborne command and control assets.
The SINCGARS family replaced the Vietnam War-era synthesized single frequency radios (AN/PRC-77 and AN/VRC-12), although it can work with them. The airborne AN/ARC-201 radio is phasing out the older tactical air-to-ground radios (AN/ARC-114 and AN/ARC-131).
The SINCGARS is designed on a modular basis to achieve maximum commonality among various ground, maritime, and airborne configurations. A common receiver transmitter (RT) is used in the ground configurations. The modular design also reduces the burden on the logistics system to provide repair parts.
The SINCGARS can operate in either the single channel or frequency hop (FH) mode, and stores both single channel frequencies and FH load sets. The system is compatible with all current U.S. and allied VHF-FM radios in the single channel, non-secure mode. The SINCGARS operates on any of 2320 channels between 30 and 88 megahertz (MHz) with a channel separation of 25 kilohertz (kHz). It accepts either digital or analog inputs and superimposes the signal onto a radio frequency (RF) carrier wave. In FH mode, the input changes frequency about 100 times per second over portions of the tactical VHF-FM range. These continual changes in frequency hinder threat intercept and jamming units from locating or disrupting friendly communications. The SINCGARS provides data rates up to 16,000 bits per second. Enhanced data modes provide packet and RS-232 data. The enhanced data modes available with the System Improvement Program (SIP) and Advanced System Improvement Program (ASIP) radios also enable forward error correction (FEC), and increased speed, range, and accuracy of data transmissions.
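The channel arithmetic quoted above can be checked directly: 25 kHz spacing across the 30–88 MHz band yields 2320 channels. In this sketch the 0-based channel numbering starting at exactly 30 MHz is an illustrative assumption, not an official channel plan:

```python
# Band limits and spacing from the figures above, in integer kHz
# to avoid floating-point rounding.
LOW_KHZ, HIGH_KHZ, STEP_KHZ = 30_000, 88_000, 25

def channel_freq_khz(n: int) -> int:
    """Frequency of channel n (0-based), assuming channels start at
    30 MHz -- an illustrative numbering convention."""
    return LOW_KHZ + n * STEP_KHZ

num_channels = (HIGH_KHZ - LOW_KHZ) // STEP_KHZ
print(num_channels)            # 2320
print(channel_freq_khz(2319))  # 87975, i.e. 87.975 MHz
```

In FH mode the radio hops among such channels about 100 times per second, so an intercept receiver parked on any single 25 kHz channel hears only brief fragments of the transmission.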
Most ground SINCGARS radios have the ability to control output power; however, most airborne SINCGARS radio sets are fixed power. Those RTs with power settings can vary transmission range from approximately 200 meters (660 feet) to 10 kilometers (km) (6.2 miles). Adding a power amplifier increases the line of sight (LOS) range to approximately 40 km (25 miles). (These ranges are for planning purposes only; terrain, weather, and antenna height have an effect on transmission range.) The variable output power level allows users to operate on the minimum power necessary to maintain reliable communications, thus lessening the electromagnetic signature given off by their radio sets. This ability is of particular importance at major command posts, which operate in multiple networks.
SC CNR users outside the FH network can use a hailing method to request access to the network. When hailing a network, a user outside the network contacts the network control station (NCS) on the cue frequency. In the active FH mode, the SINCGARS radio gives audible and visual sig |
https://en.wikipedia.org/wiki/Enhanced%20Graphics%20Adapter | The Enhanced Graphics Adapter (EGA) is an IBM PC graphics adapter and de facto computer display standard from 1984 that superseded the CGA standard introduced with the original IBM PC, and was itself superseded by the VGA standard in 1987. In addition to the original EGA card manufactured by IBM, many compatible third-party cards were manufactured, and EGA graphics modes continued to be supported by VGA and later standards.
History
EGA was introduced in October 1984 by IBM, shortly after its new PC/AT. The EGA could be installed in previously released IBM PCs, but required a ROM upgrade on the mainboard.
Chips and Technologies' first product, announced in September 1985, was a four chip EGA chipset that handled the functions of 19 of IBM's proprietary chips on the original Enhanced Graphics Adapter. By that November's COMDEX, more than a half dozen companies had introduced EGA-compatible boards based on C&T's chipset.
Between 1984 and 1987, several third-party manufacturers produced compatible cards, such as the Autoswitch EGA or Genoa Systems' Super EGA chipset. Later cards supporting an extended version of the VGA were similarly named Super VGA.
The EGA standard was made obsolete in 1987 by the introduction of MCGA and VGA with the PS/2 computer line.
Adoption
Commercial software began supporting EGA soon after its introduction, with The Ancient Art of War, released in 1984. Microsoft Flight Simulator v2.12, Jet, Silent Service, and Cyrus, all released in 1985, offered EGA support, along with Windows 1.0. Sierra's King's Quest III, released in 1986, was one of the earliest mainstream PC games to use it.
By 1987, EGA support was commonplace. Most software made up to 1991 could run in EGA, although the vast majority of commercial games used 320 × 200 with 16 colors for backwards compatibility with CGA and Tandy, and to support users who did not own an enhanced EGA monitor. 350-line modes were mostly used by freeware/shareware games and application software, although SimCity is a notable example of a commercial game that runs in 640 × 350 with 16 colors mode.
Hardware design
The original IBM EGA was an 8-bit PC ISA card with 64 KB of onboard RAM. An optional daughter-board (the Graphics Memory Expansion Card) provided a minimum of 64 KB additional RAM, and up to 192 KB if fully populated with the Graphics Memory Module Kit. Without these upgrades, the card would be limited to four colors in 640 × 350 mode.
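A quick way to see the four-color limit mentioned above: EGA stores 16-color graphics in four bit planes, one bit per pixel per plane. At 640 × 350 the four planes need more than the stock 64 KB, while two planes (four colors) fit. A sketch of the arithmetic (the planar layout is standard EGA; the memory accounting here is simplified):

```python
# Memory required for the 640 x 350 graphics mode on a planar adapter.
width, height = 640, 350
bytes_per_plane = width * height // 8   # one bit per pixel per plane

four_planes = 4 * bytes_per_plane       # 16 colors
two_planes = 2 * bytes_per_plane        # 4 colors

print(bytes_per_plane)               # 28000
print(four_planes > 64 * 1024)       # True  -- 16 colors don't fit in 64 KB
print(two_planes <= 64 * 1024)       # True  -- 4 colors fit
```

This is why the Graphics Memory Expansion Card mattered: only with the extra RAM could the card hold all four planes for 16-color 640 × 350 output.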
Output was via direct-drive RGB, as with the CGA, but no composite video output was included. MDA and CGA monitors could be driven, as well as newly released enhanced color monitors for use specifically with EGA.
EGA-specific monitors used a dual-sync design which could switch from the 15.7 kHz of 200-line modes to 21.8 kHz for 350-line modes.
Many EGA cards have DIP switches on the back of the card to select the monitor type. If CGA is selected, the card will operate in 200-line mode and use 8x8 characters in text mode. If EGA i |
https://en.wikipedia.org/wiki/Outlier | In statistics, an outlier is a data point that differs significantly from other observations. An outlier may be due to a variability in the measurement, an indication of novel data, or it may be the result of experimental error; the latter are sometimes excluded from the data set. An outlier can be an indication of exciting possibility, but can also cause serious problems in statistical analyses.
Outliers can occur by chance in any distribution, but they can indicate novel behaviour or structures in the data-set, measurement error, or that the population has a heavy-tailed distribution. In the case of measurement error, one wishes to discard them or use statistics that are robust to outliers, while in the case of heavy-tailed distributions, they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by a mixture model.
In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition).
Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.
Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean; naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.
Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not. However, the mean is generally a more precise estimator.
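The robustness contrast above is easy to reproduce. In this sketch the nine "room temperature" readings are made-up illustrative values within the 20–25 °C range from the example; only the outlier's effect matters:

```python
from statistics import mean, median

# Nine objects between 20 and 25 C, plus a 175 C oven.
temps = [20, 21, 22, 22, 23, 23, 24, 24, 25, 175]

print(median(temps))  # 23.0 -- barely affected by the outlier
print(mean(temps))    # 37.9 -- pulled far above every typical object
```

The median stays within the range of the typical observations, while the single extreme value drags the mean well outside it, matching the oven example above.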
Occurrence and causes
In the case of normally distributed data, the three sigma rule m |
https://en.wikipedia.org/wiki/Box%20plot | In descriptive statistics, a box plot or boxplot is a method for graphically demonstrating the locality, spread and skewness groups of numerical data through their quartiles. In addition to the box on a box plot, there can be lines (which are called whiskers) extending from the box indicating variability outside the upper and lower quartiles, thus, the plot is also called the box-and-whisker plot and the box-and-whisker diagram. Outliers that differ significantly from the rest of the dataset may be plotted as individual points beyond the whiskers on the box-plot.
Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions of the underlying statistical distribution (though Tukey's boxplot assumes symmetry for the whiskers and normality for their length). The spacings in each subsection of the box-plot indicate the degree of dispersion (spread) and skewness of the data, which are usually described using the five-number summary. In addition, the box-plot allows one to visually estimate various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean. Box plots can be drawn either horizontally or vertically.
History
The range-bar method was first introduced by Mary Eleanor Spear in her book "Charting Statistics" in 1952 and again in her book "Practical Charting Techniques" in 1969. The box-and-whisker plot was first introduced in 1970 by John Tukey, who later published on the subject in his book "Exploratory Data Analysis" in 1977.
Elements
A boxplot is a standardized way of displaying the dataset based on the five-number summary: the minimum, the maximum, the sample median, and the first and third quartiles.
Minimum (Q0 or 0th percentile): the lowest data point in the data set excluding any outliers
Maximum (Q4 or 100th percentile): the highest data point in the data set excluding any outliers
Median (Q2 or 50th percentile): the middle value in the data set
First quartile (Q1 or 25th percentile): also known as the lower quartile qn(0.25), it is the median of the lower half of the dataset.
Third quartile (Q3 or 75th percentile): also known as the upper quartile qn(0.75), it is the median of the upper half of the dataset.
In addition to the minimum and maximum values used to construct a box-plot, another important element that can also be employed to obtain a box-plot is the interquartile range (IQR), as defined below:
Interquartile range (IQR) : the distance between the upper and lower quartiles
Whiskers
A box-plot usually includes two parts, a box and a set of whiskers as shown in Figure 2. The box is drawn from Q1 to Q3 with a horizontal line drawn in the middle to denote the median. The whiskers must end at an observed data point, but can be defined in various ways.
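One common whisker convention (Tukey's 1.5 × IQR rule) can be sketched as follows; here quartiles are taken as medians of the lower and upper halves, and other quartile definitions give slightly different values. The sample data are made up for illustration:

```python
from statistics import median

def box_stats(data, k=1.5):
    """Quartiles plus whisker ends under the Tukey convention:
    whiskers reach the most extreme observations within k*IQR of
    the quartiles; points beyond the fences are outliers."""
    xs = sorted(data)
    n = len(xs)
    q2 = median(xs)
    lower, upper = xs[: n // 2], xs[(n + 1) // 2 :]
    q1, q3 = median(lower), median(upper)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    inside = [x for x in xs if lo_fence <= x <= hi_fence]
    outliers = [x for x in xs if x < lo_fence or x > hi_fence]
    return q1, q2, q3, min(inside), max(inside), outliers

print(box_stats([1, 2, 3, 4, 5, 6, 7, 8, 9, 30]))
# (3, 5.5, 8, 1, 9, [30])
```

Note that the whisker ends are observed data points (1 and 9 here), not the fence values themselves, which matches the requirement stated above.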
In the most straight-forward method, the boundary of the lower whisker is the minimum value of the data set, and the boundary of the upper whisker is the maximum value of the dat |
https://en.wikipedia.org/wiki/Five-number%20summary | The five-number summary is a set of descriptive statistics that provides information about a dataset. It consists of the five most important sample percentiles:
the sample minimum (smallest observation)
the lower quartile or first quartile
the median (the middle value)
the upper quartile or third quartile
the sample maximum (largest observation)
In addition to the median of a single set of data there are two related statistics called the upper and lower quartiles. If data are placed in order, then the lower quartile is central to the lower half of the data and the upper quartile is central to the upper half of the data. These quartiles are used to calculate the interquartile range, which helps to describe the spread of the data, and determine whether or not any data points are outliers.
In order for these statistics to exist the observations must be from a univariate variable that can be measured on an ordinal, interval or ratio scale.
Use and representation
The five-number summary provides a concise summary of the distribution of the observations. Reporting five numbers avoids the need to decide on the most appropriate summary statistic. The five-number summary gives information about the location (from the median), spread (from the quartiles) and range (from the sample minimum and maximum) of the observations. Since it reports order statistics (rather than, say, the mean) the five-number summary is appropriate for ordinal measurements, as well as interval and ratio measurements.
It is possible to quickly compare several sets of observations by comparing their five-number summaries, which can be represented graphically using a boxplot.
In addition to the points themselves, many L-estimators can be computed from the five-number summary, including interquartile range, midhinge, range, mid-range, and trimean.
The five-number summary is sometimes represented as in the following table:
Example
This example calculates the five-number summary for the following set of observations: 0, 0, 1, 2, 63, 61, 27, 13.
These are the number of moons of each planet in the Solar System.
It helps to put the observations in ascending order: 0, 0, 1, 2, 13, 27, 61, 63. There are eight observations, so the median is the mean of the two middle numbers, (2 + 13)/2 = 7.5. Splitting the observations either side of the median gives two groups of four observations. The median of the first group is the lower or first quartile, and is equal to (0 + 1)/2 = 0.5. The median of the second group is the upper or third quartile, and is equal to (27 + 61)/2 = 44.
The smallest and largest observations are 0 and 63.
So the five-number summary would be 0, 0.5, 7.5, 44, 63.
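The arithmetic above can be checked with a short sketch that implements the half-splitting method described in the example (when the count is odd the median itself is excluded from both halves; note that some library quartile conventions differ):

```python
from statistics import median

def five_number(data):
    """Five-number summary with quartiles taken as medians of the
    lower and upper halves of the sorted data."""
    xs = sorted(data)
    n = len(xs)
    lower, upper = xs[: n // 2], xs[(n + 1) // 2 :]
    return min(xs), median(lower), median(xs), median(upper), max(xs)

moons = [0, 0, 1, 2, 63, 61, 27, 13]
print(five_number(moons))  # (0, 0.5, 7.5, 44.0, 63)
```

This reproduces the summary 0, 0.5, 7.5, 44, 63 computed in the worked example.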
Example in R
It is possible to calculate the five-number summary in the R programming language using the fivenum function. The summary function, when applied to a vector, displays the five-number summary together with the mean (which is not itself a part of the five-number summary). The fivenum uses a dif |
https://en.wikipedia.org/wiki/R.%20T.%20Crowley | Robert T. Crowley (born March 2, 1948) is a pioneer in the development and practice of Electronic Data Interchange (EDI), an early component of electronic commerce.
Crowley participated in the development of the early forms of EDI, working with Edward A. Guilbert, the creator of the technology, from 1977 onwards, and assisted in the development of UN/EDIFACT, the international EDI standard developed through the United Nations. Active in many EDI projects around the world, he served as Chair of ANSI ASC X12, the US national standards body for EDI, from 1993 to 1995.
He is the founder of the EDI standards committee for the ocean transport industry (OCEAN), as well as the US Customs Electronic Systems Advisory Committee (CESAC), advising the US Customs Service (USCS) on matters of electronic commerce. Robert was also a founding member of TOPAS (Terminal Operator and Port Authority Subcommittee) that initiated EDI use between ship lines and terminal operators/ports.
Robert also served as Chair of the X12 Security Task Group for a number of years, and was one of the authors of the X12 technical report on the use of Extensible Markup Language (XML) for conducting EDI. He is now vice chair of the ISO Technical Committee 154 US Technical Advisory Group (ISO TC154 US TAG), and Editor of ISO 8601, Representation of Dates and Times.
https://en.wikipedia.org/wiki/Transport%20in%20East%20Timor | In East Timor, transportation is limited by the nation's poverty, poor transportation infrastructure, and sparse communications networks.
There are no railways in the country. The general condition of the roads is inadequate, and telephone and Internet capabilities are limited outside the cities. The country has six airports, one of which has commercial and international flights.
Railways
East Timor has no railways. However, a master plan for a long electrified double-track railway was proposed in 2012, with a central line from Bobonaro to Lospalos, a western corridor from Dili to Betano and an eastern corridor from Baucau to Uatolari.
Roadways
Overview
East Timor has a road network of , of which about of roads are paved, and about are unpaved.
The road network is made up of national roads linking municipal capitals (~), municipal roads linking municipal capitals to towns and villages (~), urban roads within urban areas (~) and rural roads within rural areas (~). In a 2015 survey reported by the World Bank, 57% of the rural roads were rated either bad or poor.
National roads
East Timor has 20 arterial roads, designated as A-class roads (national roads), as follows:
In October 2016, the East Timorese government symbolically launched a rehabilitation project for the Dili–Manatuto–Baucau national road. Construction was to be undertaken in two sections, Dili–Manatuto, and Manatuto–Baucau, in each case by a Chinese construction company. The project was financed by the General State Budget, and also from a loan fund from the Japanese government, through the Japan International Cooperation Agency (JICA). It was due to be completed in mid-2019, and the completed road was officially inaugurated on 26 August 2022.
According to a road network connectivity quality assessment published in September 2019, the national road network already satisfactorily connected all national activity centres for all types of vehicles in circulation. However, some of the road segments needed to be improved, in terms of road width, drainage, geometric design and traffic facilities.
Bridges
Bridges in Dili
Two road bridges over the Comoro River link central Dili with the west side of the city, including the Presidente Nicolau Lobato International Airport and the Tibar Bay port, which as at early 2022 was due to start operations later that year. The more important of these two bridges is the CPLP Bridge; its alternative, approximately to its south, is the Hinode Bridge.
At the north eastern corner of central Dili, the B. J. Habibie Bridge spans the , and connects central Dili with the eastern waterfront of the Bay of Dili.
Noefefan Bridge
This bridge, also known as the Tono Bridge, was inaugurated in 2017 as part of the ZEESM TL project in Oecusse.
Ports and harbors
Port of Dili – for passenger ships and cruise ships carrying international passengers
Tibar Bay Port – for import and export goods; opened on 30 September 2022
Merchant marine
Total
1
Shi |
https://en.wikipedia.org/wiki/Mkdir | The mkdir (make directory) command in the Unix, DOS, DR FlexOS, IBM OS/2, Microsoft Windows, and ReactOS operating systems is used to make a new directory. It is also available in the EFI shell and in the PHP scripting language. In DOS, OS/2, Windows and ReactOS, the command is often abbreviated to md.
The command is analogous to the Stratus OpenVOS create_dir command. MetaComCo TRIPOS and AmigaDOS provide a similar MakeDir command to create new directories. The numerical computing environments MATLAB and GNU Octave include an mkdir function with similar functionality.
History
In early versions of Unix (4.1BSD and early versions of System V), this command had to be setuid root as the kernel did not have an mkdir syscall. Instead, it made the directory with mknod and linked in the . and .. directory entries manually. The command is available in MS-DOS versions 2 and later. Digital Research DR DOS 6.0 and Datalight ROM-DOS also include an implementation of the mkdir and md commands.
The version of mkdir bundled in GNU coreutils was written by David MacKenzie.
It is also available in the open source MS-DOS emulator DOSBox and in KolibriOS.
Usage
Normal usage is as straightforward as follows:
mkdir name_of_directory
where name_of_directory is the name of the directory one wants to create. When typed as above (i.e. normal usage), the new directory would be created within the current directory. On Unix and Windows (with Command extensions enabled, the default), multiple directories can be specified, and mkdir will try to create all of them.
Options
On Unix-like operating systems, mkdir takes options. The options are:
-p (--parents): parents or path, will also create all directories leading up to the given directory that do not exist already. For example, mkdir -p a/b will create directory a if it doesn't exist, then will create directory b inside directory a. If the given directory already exists, ignore the error.
-m (--mode): mode, specify the octal permissions of directories created by mkdir.
-p is most often used when using mkdir to build up complex directory hierarchies, in case a necessary directory is missing or already there. -m is commonly used to lock down temporary directories used by shell scripts.
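The -p and -m semantics map directly onto Python's standard library. A sketch (the leaf directory's mode is applied by makedirs subject to the process umask, and exist_ok reproduces -p's tolerance of directories that already exist; the path names are illustrative):

```python
import os
import tempfile

# Equivalent of: mkdir -p -m 0700 <base>/a/b/c
base = tempfile.mkdtemp()
target = os.path.join(base, "a", "b", "c")

os.makedirs(target, mode=0o700, exist_ok=True)
os.makedirs(target, exist_ok=True)  # second call: no error, like -p

print(os.path.isdir(target))  # True
```

Without exist_ok=True (the analogue of omitting -p), the second call would raise FileExistsError.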
Examples
An example of -p in action is:
mkdir -p /tmp/a/b/c
If /tmp/a exists but /tmp/a/b does not, mkdir will create /tmp/a/b before creating /tmp/a/b/c.
And an even more powerful command, creating a full tree at once (this however is a Shell extension, nothing mkdir does itself):
mkdir -p tmpdir/{trunk/sources/{includes,docs},branches,tags}
If one is using variables with mkdir in a bash script, the POSIX special built-in command eval can be used to expand them inside the brace expansion:
DOMAIN_NAME=includes,docs
eval "mkdir -p tmpdir/{trunk/sources/{${DOMAIN_NAME}},branches,tags}"
This will create:
tmpdir
|__
| | |
branches tags trunk
|
sources
|_
| |
|
https://en.wikipedia.org/wiki/Bob%20Hope | Leslie Townes "Bob" Hope (May 29, 1903 – July 27, 2003) was an American comedian, actor, entertainer, and producer with a career that spanned nearly 80 years and achievements in vaudeville, network radio, television, and USO Tours. He appeared in more than 70 short and feature films, starring in 54. These included a series of seven Road to ... musical comedy films with long-time friend Bing Crosby as his partner.
Hope hosted the Academy Awards show 19 times, more than any other host. He also appeared in many stage productions and television roles and wrote 14 books. The song "Thanks for the Memory" was his signature tune.
Hope was born in the Eltham district of southeast London. He arrived in the United States with his family at the age of four, and grew up near Cleveland, Ohio. He became a boxer in the 1910s but moved into show business in the early 1920s, initially as a comedian and dancer on the vaudeville circuit before acting on Broadway. He began appearing on radio and in films starting in 1934. He was praised for his comedic timing, specializing in one-liners and rapid-fire delivery of jokes that were often self-deprecating.
Between 1941 and 1991, Hope made 57 tours for the United Service Organizations (USO), entertaining military personnel around the world. In 1997, Congress passed a bill that made him an honorary veteran of the Armed Forces.
Hope retired from public life in 1998 and died in 2003, at 100.
Early years
Leslie Townes Hope was born on May 29, 1903, in Eltham, County of London (now part of the Royal Borough of Greenwich) in a terraced house on Craigton Road in Well Hall, where there is now a blue plaque in his memory. He was the fifth of seven sons of William Henry Hope, a stonemason from Weston-super-Mare, Somerset, and Welsh mother Avis (née Townes), a light opera singer from Barry, Vale of Glamorgan who later worked as a cleaner. William and Avis married in April 1891 and lived at 12 Greenwood Street in Barry before moving to Whitehall, Bristol, and then to St George, Bristol. The family emigrated to the United States aboard the SS Philadelphia, passing through Ellis Island, New York on March 30, 1908, before moving on to Cleveland, Ohio.
From age 12, Hope earned pocket money by singing, dancing, and performing comedy on the street. He entered numerous dancing and amateur talent contests as Lester Hope, and won a prize in 1915 for his impersonation of Charlie Chaplin. For a time, he attended the Boys' Industrial School in Lancaster, Ohio, and as an adult donated sizable sums of money to the institution. He had a brief career as a boxer in 1919, fighting under the name Packy East. He had three wins and one loss, and he participated in a few staged charity bouts later in life. In December 1920, 17-year-old Hope and his brothers became US citizens when their British parents became naturalized Americans.
In 1921, while working as a lineman for a power company, Hope was assisting his brother Jim in clearing trees when |
https://en.wikipedia.org/wiki/Core%20memory%20%28disambiguation%29 | Core memory or magnetic-core memory, is a form of random access computer memory used by computers in the mid-20th century.
Core Memory or core memory may also refer to:
Core rope memory, a form of read only computer memory first used in the 1960s
Core memories, plot-critical items in the 2005 video game Star Fox: Assault
Core Memories, plot-critical items in the 2015 animated film Inside Out |
https://en.wikipedia.org/wiki/Ico | Ico is an action-adventure game developed by Japan Studio and Team Ico and published by Sony Computer Entertainment for the PlayStation 2. It was released in North America and Japan in 2001 and in Europe in 2002. It was designed and directed by Fumito Ueda, who wanted to create a minimalist game around a "boy meets girl" concept. Originally planned for the PlayStation, Ico took approximately four years to develop. The team employed a "subtracting design" approach to reduce elements of gameplay that interfered with the game's setting and story in order to create a high level of immersion.
The protagonist is a young boy named Ico who was born with horns, which his village considers a bad omen. Warriors lock him away in an abandoned fortress. During his explorations of the fortress, Ico encounters Yorda, the daughter of the castle's Queen. The Queen plans to use Yorda's body to extend her own lifespan. Learning this, Ico seeks to escape the castle with Yorda, keeping her safe from the shadowy creatures that attempt to draw her back. Throughout the game, the player controls Ico as he explores the castle, solves puzzles and assists Yorda across obstacles.
Ico introduced several design and technical elements, including a story told with minimal dialogue, bloom lighting, and key frame animation, that have influenced subsequent games. Although not a commercial success, it was critically acclaimed for its art, original gameplay and story elements and received several awards, including "Game of the Year" nominations and three Game Developers Choice Awards. Considered a cult classic, it has been called one of the greatest video games ever made, and is often brought up in discussions about video games as an art form. It has influenced numerous video games since its release. It was rereleased in Europe in 2006 in conjunction with Shadow of the Colossus, the spiritual successor to Ico. A high-definition remaster of the game was released alongside Shadow of the Colossus for the PlayStation 3 in The Ico & Shadow of the Colossus Collection in 2011.
Gameplay
Ico is primarily a three-dimensional platform game. The player controls Ico from a third-person perspective as he explores the castle and attempts to escape it with Yorda. The camera is fixed in each room or area but swivels to follow Ico or Yorda as they move; the player can also pan the view a small degree in other directions to observe more of the surroundings. The game includes many elements of platform games; for example, the player must have Ico jump, climb, push and pull objects, and perform other tasks such as solving puzzles in order to progress within the castle.
These actions are complicated by the fact that only Ico can carry out these actions; Yorda can jump only short distances and cannot climb over tall barriers. The player must use Ico so that he helps Yorda cross obstacles, such as by lifting her to a higher ledge, or by arranging the environment to allow Yorda to cross a la |
https://en.wikipedia.org/wiki/Moria%20%281983%20video%20game%29 | The Dungeons of Moria, usually referred to as simply Moria, is a computer game inspired by J. R. R. Tolkien's novel The Lord of the Rings. The objective of the game is to dive deep into the Mines of Moria and kill the Balrog. Moria, along with Hack (1984) and Larn (1986), is considered to be among the first roguelike games, and Moria was the first to include a town level.
Moria was the basis of the better known Angband roguelike game, and influenced the preliminary design of Blizzard Entertainment's Diablo.
Gameplay
The player's goal is to descend to the depths of Moria to defeat the Balrog, akin to a boss battle. As with Rogue, levels are not persistent: when the player leaves the level and then tries to return, a new level is procedurally generated. Among other improvements to Rogue, there is a persistent town at the highest level where players can buy and sell equipment.
Moria begins with creation of a character. The player first chooses a "race" from the following: Human, Half-Elf, Elf, Halfling, Gnome, Dwarf, Half-Orc, or Half-Troll. Racial selection determines base statistics and class availability. One then selects the character's "class" from the following: Warrior, Mage, Priest, Rogue, Ranger, or Paladin. Class further determines statistics, as well as the abilities acquired during gameplay. Mages, Rangers, and Rogues can learn magic; Priests and Paladins can learn prayers. Warriors possess no additional abilities.
The player begins the game with a limited number of items on a town level consisting of six shops: (1) a General Store, (2) an Armory, (3) a Weaponsmith, (4) a Temple, (5) an Alchemy shop, and (6) a Magic-Users store. A staircase on this level descends into a series of randomly generated underground mazes. Deeper levels contain more powerful monsters and better treasures. Each time the player ascends or descends a staircase, a new level is created and the old one discarded; only the town persists throughout the game.
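The level lifecycle described above (discard the old level on leaving, generate a new one on arrival, with only the town persisting) can be sketched in a few lines of Python. This is a hypothetical illustration of the scheme, not Moria's actual implementation:

```python
import random

class Dungeon:
    """Toy sketch of Moria-style level persistence: only the town survives."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.town = {"depth": 0, "shops": ["General Store", "Armory",
                     "Weaponsmith", "Temple", "Alchemy shop", "Magic-Users store"]}
        self.depth = 0
        self.current = self.town

    def generate_level(self, depth):
        # Deeper levels hold stronger monsters and better treasure.
        return {"depth": depth,
                "monster_power": depth * self.rng.randint(1, 3),
                "treasure_quality": depth * self.rng.randint(1, 3)}

    def take_stairs(self, direction):
        # Each staircase use discards the old level and creates a new one;
        # only the town (depth 0) persists across visits.
        self.depth = max(0, self.depth + direction)
        self.current = self.town if self.depth == 0 else self.generate_level(self.depth)
        return self.current
```

Returning to a previously visited depth yields a freshly generated level, while returning to depth 0 always yields the same town object.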
As in most roguelikes, it is impossible to reload from a save after the character dies, as the game saves its state only upon exit; this prevents the save-scumming that is possible in most computer games that allow saving. However, it is possible to copy the save file generated by the game (MORIA.SAV in the Windows version) to a backup location, then restore it after the character has been killed.
The Balrog (represented by the capital letter B) is encountered in the deepest depths of the dungeon. Once the Balrog has been killed, the game is won, and no further saving of the game is possible.
Player characteristics
The player character has many characteristics in the game. Some characteristics, like sex, weight, and height, cannot be changed once the character has been created, while other characteristics like strength, intelligence, and armor class can be modified by using certain items in a particular way. Mana and hit points are replenished by rest or by magical means. Gold accrues as the player |
https://en.wikipedia.org/wiki/Jail%20%28disambiguation%29 | A jail is a prison.
Jail may also refer to:
Computing
Chroot jail, the result of a chroot
FreeBSD jail, a system-level virtualization mechanism
In operating-system-level virtualization, any virtual user-space instance
Entertainment
Jail (2009 film), a Bollywood prison drama
Jail (2018 film), a Nigerian film
Jail (2021 film), an Indian action film
Jail (Big Mama Thornton album), 1975
Jail (TV series), an American reality show
Jail (Monopoly), a feature of the board game
"Jail" (song), on the 2021 album Donda by Kanye West |
https://en.wikipedia.org/wiki/ANSI%20%28disambiguation%29 | ANSI is the American National Standards Institute, a private nonprofit organization that oversees the development of voluntary consensus standards.
ANSI may also refer to:
Computing
ANSI character set (disambiguation)
ANSI escape code sequences, an in-band signalling mechanism for terminals and terminal emulators
ANSI BASIC, standards for the BASIC programming language
Places
Ansi City, an ancient city of the Goguryeo in modern Anshan city, China
Ansi, Estonia, village in Saaremaa Parish, Saare County, Estonia
People
Al-Ansi, Arab tribe
Aswad Ansi, Arab false prophet
Ansi Agolli (born 1982), Albanian football player
Ansi Molina
Ansi Nika (born 1990), Albanian football player
Nasser bin Ali al-Ansi (1975–2015), Al-Qaeda leader
Other uses
Area of Natural and Scientific Interest, used by the Government of Ontario, Canada to classify land zones |
https://en.wikipedia.org/wiki/Dana%20Scott | Dana Stewart Scott (born October 11, 1932) is an American logician who is the emeritus Hillman University Professor of Computer Science, Philosophy, and Mathematical Logic at Carnegie Mellon University; he is now retired and lives in Berkeley, California. His work on automata theory earned him the Turing Award in 1976, while his collaborative work with Christopher Strachey in the 1970s laid the foundations of modern approaches to the semantics of programming languages. He has worked also on modal logic, topology, and category theory.
Early career
He received his B.A. in Mathematics from the University of California, Berkeley, in 1954. He wrote his Ph.D. thesis on Convergent Sequences of Complete Theories under the supervision of Alonzo Church while at Princeton, and defended his thesis in 1958. Solomon Feferman (2005) writes of this period:
After completing his Ph.D. studies, he moved to the University of Chicago, working as an instructor there until 1960. In 1959, he published a joint paper with Michael O. Rabin, a colleague from Princeton, titled Finite Automata and Their Decision Problems (Scott and Rabin 1959) which introduced the idea of nondeterministic machines to automata theory. This work led to the joint bestowal of the Turing Award on the two, for the introduction of this fundamental concept of computational complexity theory.
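The nondeterminism Rabin and Scott introduced can be illustrated with a short simulation: instead of tracking one current state, track the set of all states the machine could be in, which is the idea behind their subset construction. The Python sketch below, with a made-up example automaton, is illustrative only:

```python
def nfa_accepts(transitions, start, accepting, word):
    """Simulate a nondeterministic finite automaton by tracking the set of
    reachable states (the idea behind the Rabin-Scott subset construction)."""
    states = {start}
    for symbol in word:
        # A state with no transition on this symbol simply drops out.
        states = {t for s in states for t in transitions.get((s, symbol), set())}
    return bool(states & accepting)

# Hypothetical example NFA: accepts binary strings ending in "01".
delta = {
    ("q0", "0"): {"q0", "q1"},
    ("q0", "1"): {"q0"},
    ("q1", "1"): {"q2"},
}
```

From state q0 the machine "guesses" on each 0 whether it is the start of a final "01"; the set-based simulation explores all guesses at once.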
University of California, Berkeley, 1960–1963
Scott took up a post as Assistant Professor of Mathematics back at the University of California, Berkeley, and involved himself with classical issues in mathematical logic, especially set theory and Tarskian model theory. He proved that the axiom of constructibility is incompatible with the existence of a measurable cardinal, a result considered seminal in the evolution of set theory.
During this period he started supervising Ph.D. students, such as James Halpern (Contributions to the Study of the Independence of the Axiom of Choice) and Edgar Lopez-Escobar (Infinitely Long Formulas with Countable Quantifier Degrees).
Modal and tense logic
Scott also began working on modal logic in this period, beginning a collaboration with John Lemmon, who moved to Claremont, California, in 1963. Scott was especially interested in Arthur Prior's approach to tense logic and the connection to the treatment of time in natural-language semantics, and began collaborating with Richard Montague (Copeland 2004), whom he had known from his days as an undergraduate at Berkeley. Later, Scott and Montague independently discovered an important generalisation of Kripke semantics for modal and tense logic, called Scott-Montague semantics (Scott 1970).
John Lemmon and Scott began work on a modal-logic textbook that was interrupted by Lemmon's death in 1966. Scott circulated the incomplete monograph amongst colleagues, introducing a number of important techniques in the semantics of model theory, most importantly presenting a refinement of canonical model that became standard, and introduc |
https://en.wikipedia.org/wiki/Clean%20%28programming%20language%29 | Clean is a general-purpose purely functional computer programming language. It was called the Concurrent Clean System, then the Clean System, later just Clean. Clean has been developed by a group of researchers from the Radboud University in Nijmegen since 1987.
Features
The language Clean first appeared in 1987. Although its development has slowed, some researchers are still working with the language. In 2018, a spin-off company that uses Clean was founded.
Clean shares many properties and syntax with a younger sibling language, Haskell: referential transparency, list comprehension, guards, garbage collection, higher order functions, currying, and lazy evaluation. However, Clean deals with mutable state and input/output (I/O) through a uniqueness type system, in contrast to Haskell's use of monads. The compiler takes advantage of the uniqueness type system to generate more efficient code, because it knows that at any point during the execution of the program, only one reference can exist to a value with a unique type. Therefore, a unique value can be changed in place.
An integrated development environment (IDE) for Microsoft Windows is included in the Clean distribution.
Examples
Hello world:
Start = "Hello, world!"
Factorial:
fac :: Int -> Int
fac 0 = 1
fac n = n * fac (n - 1)
Fibonacci sequence:
fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 2) + fib (n - 1)
Infix operator:
(^) infixr 8 :: Int Int -> Int
(^) x 0 = 1
(^) x n = x * x ^ (n-1)
The type declaration states that the function is a right-associative infix operator with priority 8: thus x * x ^ (n-1) is equivalent to x * (x ^ (n-1)) as opposed to (x * x) ^ (n-1). This operator is pre-defined in StdEnv, the Clean standard library.
How Clean works
Computation is based on graph rewriting and reduction. Constants such as numbers are graphs, and functions are graph rewriting formulas. This, combined with compiling to native code, makes Clean programs that use high abstraction run relatively fast, according to The Computer Language Benchmarks Game.
Compiling
Compilation of Clean to machine code is performed as follows:
Source files (.icl) and definition files (.dcl) are translated into Core Clean, a basic variant of Clean, by the compiler frontend written in Clean.
Core Clean is converted into Clean's platform-independent intermediate language (.abc) by the compiler backend written in Clean and C.
Intermediate ABC code is converted to object code (.o) by the code generator written in C.
Object code is linked with other files in the module and the runtime system and converted into a normal executable using the system linker (when available) or a dedicated linker written in Clean on Windows.
Earlier versions of the Clean compiler were written completely in C, thus avoiding bootstrapping issues.
The ABC machine
The ABC code mentioned above is an intermediate representation for an abstract machine. Because machine code generation for ABC code is relatively straightforward, this makes it easy to support new architectures. The ABC machine is an imperative abstract graph rewriting machin |
https://en.wikipedia.org/wiki/Uniqueness%20type | In computing, a unique type guarantees that an object is used in a single-threaded way, with at most a single reference to it. If a value has a unique type, a function applied to it can be optimized to update the value in-place in the object code. Such in-place updates improve the efficiency of functional languages while maintaining referential transparency. Unique types can also be used to integrate functional and imperative programming.
Introduction
Uniqueness typing is best explained using an example. Consider a function readLine that reads the next line of text from a given file:
function readLine(File f) returns String
return line where
String line = doImperativeReadLineSystemCall(f)
end
end
Now doImperativeReadLineSystemCall reads the next line from the file using an OS-level system call which has the side effect of changing the current position in the file. But this violates referential transparency because calling it multiple times with the same argument will return different results each time as the current position in the file gets moved. This in turn makes readLine violate referential transparency because it calls doImperativeReadLineSystemCall.
However, using uniqueness typing, we can construct a new version of readLine that is referentially transparent even though it's built on top of a function that's not referentially transparent:
function readLine2(unique File f) returns (unique File, String)
return (differentF, line) where
String line = doImperativeReadLineSystemCall(f)
File differentF = newFileFromExistingFile(f)
end
end
The unique declaration specifies that the type of f is unique; that is to say that f may never be referred to again by the caller of readLine2 after readLine2 returns, and this restriction is enforced by the type system. And since readLine2 does not return f itself but rather a new, different file object differentF, this means that it's impossible for readLine2 to be called with f as an argument ever again, thus preserving referential transparency while allowing for side effects to occur.
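The discipline can be imitated at runtime in a language without uniqueness types. The Python sketch below is hypothetical: it invalidates a handle once it has been consumed, so a stale reference raises an error at runtime, whereas real uniqueness typing rejects the second use statically, at compile time.

```python
import io

class Unique:
    """Runtime imitation of a unique value: it can be consumed exactly once.
    Real uniqueness typing enforces this at compile time instead."""
    def __init__(self, value):
        self._value, self._alive = value, True

    def consume(self):
        if not self._alive:
            raise RuntimeError("unique value used more than once")
        self._alive = False
        return self._value

def read_line2(unique_file):
    """Analogue of readLine2 above: consume the unique handle, perform the
    side-effecting read, and hand back a fresh unique handle plus the line."""
    f = unique_file.consume()   # the old handle is now dead
    line = f.readline()         # side effect: advances the file position
    return Unique(f), line

# Threading the handle through each call keeps the side effect single-threaded.
h = Unique(io.StringIO("first\nsecond\n"))
h, line1 = read_line2(h)        # the original handle is now unusable
```

Because each call returns a fresh handle and kills the old one, no caller can observe the file at two different positions through the same reference.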
Programming languages
Uniqueness types are implemented in functional programming languages such as Clean, Mercury, SAC and Idris. They are sometimes used for doing I/O operations in functional languages in lieu of monads.
A compiler extension has been developed for the Scala programming language which uses annotations to handle uniqueness in the context of message passing between actors.
Relationship to linear typing
A unique type is very similar to a linear type, to the point that the terms are often used interchangeably, but there is in fact a distinction: actual linear typing allows a non-linear value to be typecast to a linear form, while still retaining multiple references to it. Uniqueness guarantees that a value has no other references to it, while linearity guarantees that no more references can be made to a value.
Linearity and uniqueness can be seen as particularly distin |
https://en.wikipedia.org/wiki/Formal%20methods | In computer science, formal methods are mathematically rigorous techniques for the specification, development, analysis, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.
Formal methods employ a variety of theoretical computer science fundamentals, including logic calculi, formal languages, automata theory, control theory, program semantics, type systems, and type theory.
Background
Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer the task of completing the semantics to a later stage, which is then done either by human interpretation or by interpretation through software such as code or test case generators.
Taxonomy
Formal methods can be used at a number of levels:
Level 0: Formal specification may be undertaken and then a program developed from this informally. This has been dubbed formal methods lite. This may be the most cost-effective option in many cases.
Level 1: Formal development and formal verification may be used to produce a program in a more formal manner. For example, proofs of properties or refinement from the specification to a program may be undertaken. This may be most appropriate in high-integrity systems involving safety or security.
Level 2: Theorem provers may be used to undertake fully formal machine-checked proofs. Despite improving tools and declining costs, this can be very expensive and is only practically worthwhile if the cost of mistakes is very high (e.g., in critical parts of operating system or microprocessor design).
This is expanded on below.
As with programming language semantics, styles of formal methods may be roughly classified as follows:
Denotational semantics, in which the meaning of a system is expressed in the mathematical theory of domains. Proponents of such methods rely on the well-understood nature of domains to give meaning to the system; critics point out that not every system may be intuitively or naturally viewed as a function.
Operational semantics, in which the meaning of a system is expressed as a sequence of actions of a (presumably) simpler computational model. Proponents of such methods point to the simplicity of their models as a means to expressive clarity; critics counter that the problem of semantics has just been delayed (who defines the semantics of the simpler model?).
Axiomatic semantics, in which the meaning of the system is expressed in terms of preconditions and postconditions that are true before and after the system performs a task, respectively. Proponents note the connection to classical logic; critics note that such semantics never really describe what a system does (merely what is true before and afterwards).
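The pre/postcondition view can be made concrete with runtime checks. The Python sketch below (all names are made up) wraps a function in Hoare-style assertions; axiomatic semantics would prove these conditions once and for all rather than test them on each call:

```python
def contract(pre, post):
    """Hoare-style view {pre} task {post}: check the precondition before the
    call and the postcondition after it. Illustrative runtime checking only;
    axiomatic semantics proves these properties instead."""
    def wrap(f):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) * (r + 1))
def int_sqrt(x):
    # Integer square root by simple search (illustrative, not efficient).
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```

Note that, as the critics quoted above observe, the contract says only what is true before and after the call; it says nothing about how int_sqrt computes its result.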
Lightweight formal methods
Some practitioners |
https://en.wikipedia.org/wiki/Program%20analysis | In computer science, program analysis is the process of automatically analyzing the behavior of computer programs regarding a property such as correctness, robustness, safety and liveness.
Program analysis focuses on two major areas: program optimization and program correctness. The first focuses on improving the program's performance while reducing resource usage; the latter focuses on ensuring that the program does what it is supposed to do.
Program analysis can be performed without executing the program (static program analysis), during runtime (dynamic program analysis) or in a combination of both.
Static program analysis
In the context of program correctness, static analysis can discover vulnerabilities during the development phase of the program. These vulnerabilities are easier to correct than the ones found during the testing phase since static analysis leads to the root of the vulnerability.
Because many forms of static analysis are computationally undecidable, the mechanisms for performing them will not always terminate with the right answer: either they sometimes return a false negative ("no problems found" when the code does in fact have problems) or a false positive, or they never return a wrong answer but may fail to terminate. Despite these limitations, the first kind of mechanism can reduce the number of vulnerabilities, while the second can sometimes give strong assurance of the absence of a certain class of vulnerabilities.
Incorrect optimizations are highly undesirable. So, in the context of program optimization, there are two main strategies to handle computationally undecidable analysis:
An optimizer that is expected to complete in a relatively short amount of time, such as the optimizer in an optimizing compiler, may use a truncated version of an analysis that is guaranteed to complete in a finite amount of time, and guaranteed to only find correct optimizations.
A third-party optimization tool may be implemented in such a way as to never produce an incorrect optimization, but also so that it can, in some situations, continue running indefinitely until it finds one (which may never happen). In this case, the developer using the tool would have to stop the tool and avoid running the tool on that piece of code again (or possibly modify the code to avoid tripping up the tool).
However, there is also a third strategy that is sometimes applicable for languages that are not completely specified, such as C. An optimizing compiler is at liberty to generate code that does anything at runtime, even crash, if it encounters source code whose semantics are unspecified by the language standard in use.
Control-flow
The purpose of control-flow analysis is to obtain information about which functions can be called at various points during the execution of a program. The collected information is represented by a control-flow graph (CFG) where the nodes are instructions of the program and the edges represent |
https://en.wikipedia.org/wiki/Scene%20graph | A scene graph is a general data structure commonly used by vector-based graphics editing applications and modern computer games, which arranges the logical and often spatial representation of a graphical scene. It is a collection of nodes in a graph or tree structure. A tree node may have many children but only a single parent, with the effect of a parent applied to all its child nodes; an operation performed on a group automatically propagates its effect to all of its members. In many programs, associating a geometrical transformation matrix (see also transformation and matrix) at each group level and concatenating such matrices together is an efficient and natural way to process such operations. A common feature, for instance, is the ability to group related shapes and objects into a compound object that can then be manipulated as easily as a single object.
Scene graphs in graphics editing tools
In vector-based graphics editing, each leaf node in a scene graph represents some atomic unit of the document, usually a shape such as an ellipse or Bezier path. Although shapes themselves (particularly paths) can be decomposed further into nodes such as spline nodes, it is practical to think of the scene graph as composed of shapes rather than going to a lower level of representation.
Another useful and user-driven node concept is the layer. A layer acts like a transparent sheet upon which any number of shapes and shape groups can be placed. The document then becomes a set of layers, any of which can be conveniently made invisible, dimmed, or locked (made read-only). Some applications place all layers in a linear list, while others support layers within layers to any desired depth.
Internally, there may be no real structural difference between layers and groups at all, since they are both just nodes of a scene graph. If differences are needed, a common design in C++ would be to make a generic node class, and then derive layers and groups as subclasses. A visibility member, for example, would be a feature of a layer, but not necessarily of a group.
Scene graphs in games and 3D applications
Scene graphs are useful for modern games using 3D graphics and increasingly large worlds or levels. In such applications, nodes in a scene graph (generally) represent entities or objects in the scene.
For instance, a game might define a logical relationship between a knight and a horse so that the knight is considered an extension to the horse. The scene graph would have a 'horse' node with a 'knight' node attached to it.
The scene graph may also describe the spatial, as well as the logical, relationship of the various entities: the knight moves through 3D space as the horse moves.
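The horse-and-knight relationship can be sketched directly: each node stores a transform (here reduced to a 2D offset) relative to its parent, and a node's world position is obtained by concatenating the offsets of its ancestors, so moving the horse moves the knight attached to it. A minimal, hypothetical Python sketch:

```python
class Node:
    """Minimal scene-graph node: a local offset plus a list of children."""
    def __init__(self, name, offset=(0.0, 0.0)):
        self.name = name
        self.offset = offset      # translation relative to the parent
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_positions(self, origin=(0.0, 0.0)):
        # Concatenate this node's offset with its parent's world position,
        # then recurse, so a parent's movement propagates to every child.
        x = origin[0] + self.offset[0]
        y = origin[1] + self.offset[1]
        yield self.name, (x, y)
        for c in self.children:
            yield from c.world_positions((x, y))

root = Node("scene")
horse = root.add(Node("horse", offset=(10.0, 0.0)))
knight = horse.add(Node("knight", offset=(0.0, 2.0)))
```

A full engine would use 3D transformation matrices rather than 2D offsets, but the concatenation down the tree works the same way.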
In these large applications, memory requirements are major considerations when designing a scene graph. For this reason, many large scene graph systems use geometry instancing to reduce memory costs and increase speed. In our example above, each knight is a separate scene n |
https://en.wikipedia.org/wiki/ACL2 | ACL2 ("A Computational Logic for Applicative Common Lisp") is a software system consisting of a programming language, an extensible theory in a first-order logic, and an automated theorem prover. ACL2 is designed to support automated reasoning in inductive logical theories, mostly for software and hardware verification. The input language and implementation of ACL2 are written in Common Lisp. ACL2 is free and open-source software.
Overview
The ACL2 programming language is an applicative (side-effect free) variant of Common Lisp. ACL2 is untyped. All ACL2 functions are total — that is, every function maps each object in the ACL2 universe to another object in its universe.
ACL2's base theory axiomatizes the semantics of its programming language and its built-in functions. User definitions in the programming language that satisfy a definitional principle extend the theory in a way that maintains the theory's logical consistency.
The core of ACL2's theorem prover is based on term rewriting, and this core is extensible in that user-discovered theorems can be used as ad hoc proof techniques for subsequent conjectures.
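A toy version of such a rewriter, applying user-supplied rules bottom-up until a fixed point, can be written in a few lines of Python. This is purely illustrative; ACL2's rewriter is vastly more sophisticated, with conditional rules, heuristics, and induction:

```python
def rewrite(term, rules):
    """Rewrite subterms bottom-up, then apply rules at the root until no
    rule fires. Terms are nested tuples like ("+", "x", 0), or atoms."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(rewrite(t, rules) for t in term[1:])
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(term)
            if new is not None and new != term:
                term = rewrite(new, rules)
                changed = True
    return term

def plus_zero(t):
    # Rule resembling a proved theorem: x + 0 = x.
    if isinstance(t, tuple) and t[0] == "+" and t[2] == 0:
        return t[1]

def fold_add(t):
    # Constant folding: m + n rewrites to their sum for integer literals.
    if isinstance(t, tuple) and t[0] == "+" \
            and isinstance(t[1], int) and isinstance(t[2], int):
        return t[1] + t[2]
```

In the same spirit as ACL2, each rule corresponds to a previously established equality that the rewriter then applies automatically to later goals.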
ACL2 is intended to be an "industrial strength" version of the Boyer–Moore theorem prover, NQTHM. Toward this goal, ACL2 has many features to support clean engineering of interesting mathematical and computational theories. ACL2 also derives efficiency from being built on Common Lisp; for example, the same specification that is the basis for inductive verification can be compiled and run natively.
In 2005, the authors of the Boyer-Moore family of provers, which includes ACL2, received the ACM Software System Award "for pioneering and engineering a most effective theorem prover (...) as a formal methods tool for verifying safety-critical hardware and software."
Proofs
ACL2 has had numerous industrial applications. In 1995, J Strother Moore, Matt Kaufmann and Tom Lynch used ACL2 to prove the correctness of the floating point division operation of the AMD K5 microprocessor in the wake of the Pentium FDIV bug. The interesting applications page of the ACL2 documentation has a summary of some uses of the system.
Industrial users of ACL2 include AMD, Arm, Centaur Technology, IBM, Intel, Oracle, and Collins Aerospace.
See also
List of proof assistants
References
External links
ACL2 website
ACL2s - ACL2 Sedan - An Eclipse-based interface developed by Peter Dillinger and Pete Manolios that includes powerful features to provide users with more automation and support for specifying conjectures and proving theorems with ACL2.
Lisp (programming language)
Common Lisp (programming language) software
Proof assistants
Free theorem provers
Lisp programming language family
Software using the BSD license |
https://en.wikipedia.org/wiki/Interactive%20voice%20response | Interactive voice response (IVR) is a technology that allows telephone users to interact with a computer-operated telephone system through the use of voice and DTMF tones input with a keypad. In telephony, IVR allows customers to interact with a company's host system via a telephone keypad or by speech recognition, after which services can be inquired about through the IVR dialogue. IVR systems can respond with pre-recorded or dynamically generated audio to further direct users on how to proceed. IVR systems deployed in the network are sized to handle large call volumes and also used for outbound calling as IVR systems are more intelligent than many predictive dialer systems.
IVR systems can be used standalone to create self-service solutions for mobile purchases, banking payments, services, retail orders, utilities, travel information and weather conditions. In combination with systems such as an automated attendant and ACD, call routing can be optimized for a better caller experience and workforce efficiency.
IVR systems are often combined with automated attendant functionality. The term voice response unit (VRU) is sometimes used as well.
History
Despite the increase in IVR technology during the 1970s, the technology was considered complex and expensive for automating tasks in call centers. Early voice response systems were based on DSP technology and limited to small vocabularies. In the early 1980s, Leon Ferber's Perception Technology became the first mainstream market competitor, after hard drive technology (read/write random access to digitized voice data) had reached a cost-effective price point. At that time, a system could store digitized speech on disk, play the appropriate spoken message, and process the human's DTMF response.
As call centers began to migrate to multimedia in the late 1990s, companies started to invest in computer telephony integration (CTI) with IVR systems. IVR became vital for call centers deploying universal queuing and routing solutions and acted as an agent which collected customer data to enable intelligent routing decisions. With improvements in technology, systems could use speaker-independent voice recognition of a limited vocabulary instead of requiring the person to use DTMF signaling.
Starting in the 2000s, voice response became more common and cheaper to deploy. This was due to increased CPU power and the migration of speech applications from proprietary code to the VXML standard.
Technology
DTMF decoding and speech recognition are used to interpret the caller's response to voice prompts. DTMF tones are entered via the telephone keypad.
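Each DTMF key is encoded as one frequency from a low group (keypad rows) plus one from a high group (keypad columns); the frequency assignments below are the standard ones. The Python sketch shows only the key-to-pair mapping; a real decoder would first detect the two tones in the audio signal, typically with the Goertzel algorithm:

```python
# Standard DTMF keypad: each key is identified by one low-group (row) and
# one high-group (column) frequency, in Hz.
ROWS = [697, 770, 852, 941]
COLS = [1209, 1336, 1477, 1633]
KEYS = ["123A", "456B", "789C", "*0#D"]

def dtmf_pair(key):
    """Return the (low, high) frequency pair a phone emits for a key."""
    for r, row in enumerate(KEYS):
        c = row.find(key)
        if c != -1:
            return ROWS[r], COLS[c]
    raise ValueError(f"not a DTMF key: {key!r}")

def decode_pair(low, high):
    """Inverse mapping: recover the key from a detected frequency pair."""
    return KEYS[ROWS.index(low)][COLS.index(high)]
```

Pressing "5", for instance, superimposes a 770 Hz and a 1336 Hz tone, and the IVR system recovers the digit by identifying that pair.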
Other technologies include using text-to-speech (TTS) to speak complex and dynamic information, such as e-mails, news reports or weather information. IVR technology is also being introduced into automobile systems for hands-free operation. TTS is computer generated synthesized speech that is no longer the robotic voice traditionally associated with computers. Real v |
https://en.wikipedia.org/wiki/Computer%20telephony%20integration | Computer telephony integration, also called computer–telephone integration or CTI, is a common name for any technology that allows interactions on a telephone and a computer to be coordinated. The term is predominantly used to describe desktop-based interaction for helping users be more efficient, though it can also refer to server-based functionality such as automatic call routing.
Common functions
By application type
CTI applications tend to run on either a user's desktop, or an unattended server.
Common desktop functions provided by CTI applications
Screen popping - Call information display (caller's number (ANI), number dialed (DNIS)) and screen pop on answer, with or without using calling line data. Generally this is used to search a business application for the caller's details.
Dialing - Automatic dialing and computer-controlled dialing (power dial, preview dial, and predictive dial).
Phone control - Includes call control (answer, hang up, hold, conference, etc.) and feature control (DND, call forwarding, etc.).
Transfers - Coordinated phone and data transfers between two parties (i.e., pass on the screen pop with the call).
Call center - Allows users to log in as a call center agent and control their agent state (Ready, Busy, Not ready, Break, etc.).
Common server functions provided by CTI applications
Call routing - The automatic routing of calls to a new destination based on criteria normally involving a database lookup of the caller's number (ANI) or number dialed (DNIS).
Advanced call reporting functions - Using the detailed data that comes from CTI to provide better-than-normal call reporting.
Voice recording integration - Using data from CTI to enrich the data stored against recorded calls.
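As an illustration, the screen-pop and call-routing functions above both reduce to a database lookup keyed on the caller's number (ANI). The following sketch is hypothetical — the function names, the in-memory customer store, and the queue names are all invented for this example:

```python
# Hypothetical sketch of two server-side CTI functions: routing a call based
# on the caller's number (ANI), and building the screen-pop data an agent's
# desktop would display. A real system would query a CRM database instead.

CUSTOMERS = {
    "15551234567": {"name": "Acme Corp", "account": "A-1001", "tier": "gold"},
}

QUEUES = {"gold": "priority_queue", "standard": "general_queue"}

def route_call(ani: str) -> str:
    """Pick a destination queue from a lookup of the caller's ANI."""
    record = CUSTOMERS.get(ani)
    tier = record["tier"] if record else "standard"
    return QUEUES[tier]

def screen_pop(ani: str) -> dict:
    """Return the data to display on the agent's screen when the call arrives."""
    record = CUSTOMERS.get(ani)
    if record is None:
        return {"ani": ani, "name": "Unknown caller"}
    return {"ani": ani, **record}
```

In a real deployment these functions would run on the telephony server (third-party call control) and push the pop to the agent's desktop over the local network.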
By connection type
Computer-phone connections can be split into two categories:
First-party call control
Operates as if there is a direct connection between the user's computer and the phone set. Examples are a modem or a phone plugged directly into the computer. Typically, only the computer associated with the phone can control it by sending commands directly to the phone and thus this type of connection is suitable for desktop applications only. The computer can generally control all the functions of the phone at the computer user's discretion.
Third-party call control
Interactions between arbitrary numbers of computers and telephones are made through and coordinated by a dedicated telephony server. Consequently, the server governs which information and functions are available to a user. The user's computer generally connects to the telephony server over the local network.
History and main CTI technologies
The origins of CTI can be found in simple screen population (or "screen pop") technology. This allows data collected from the telephone systems to be used as input data to query databases with customer information and populate that data instantaneously in the customer service representative screen. The net effect is the agent already |
https://en.wikipedia.org/wiki/CTI | CTI may stand for:
Companies and organizations
CTI Consultants, engineering consulting firm in Richmond, Virginia
CTI Electronics Corporation, a manufacturer of industrial computer peripherals
CTI Móvil, a Latin American mobile network operator
CTI Records, a jazz record label
Chung T'ien Television, CTi TV, a cable television network in Taiwan
City Telecom (Hong Kong), telecommunications provider
Garda Counter-Terrorism International, a section of the Irish national police
Connectivity Technologies, Inc., an American wire and cable company
Schools
CTI Education Group higher education institution, South Africa
Central Training Institute Jabalpur, India
Curtiss-Wright Technical Institute, trade school for aircraft maintenance training
DePaul University School of Computer Science, Telecommunications and Information Systems
Other
Cyber threat intelligence, information about threats and threat actors used to help defend computer systems
Canberra Tennis International, Australian tennis tournament
Comparative Tracking Index, for measuring the electrical breakdown (tracking) properties of insulating material
Computer telephony integration, technology allowing computers and telephones to be integrated or coordinated
Cuito Cuanavale Airport, an Angolan airport, IATA code
Cryptologic technician interpretive, a US Navy cryptologic technician rating
Ivory Coast, ITU code
Conquer the Island, a computer game mode where areas on a gamemap must be conquered |
https://en.wikipedia.org/wiki/Computer-aided%20manufacturing | Computer-aided manufacturing (CAM) also known as computer-aided modeling or computer-aided machining is the use of software to control machine tools in the manufacturing of work pieces. This is not the only definition for CAM, but it is the most common. It may also refer to the use of a computer to assist in all operations of a manufacturing plant, including planning, management, transportation and storage. Its primary purpose is to create a faster production process and components and tooling with more precise dimensions and material consistency, which in some cases, uses only the required amount of raw material (thus minimizing waste), while simultaneously reducing energy consumption.
CAM is a subsequent computer-aided process after computer-aided design (CAD) and sometimes computer-aided engineering (CAE): the model generated in CAD and verified in CAE can be input into CAM software, which then controls the machine tool. CAM is also used in many schools and other educational settings alongside CAD to create objects.
Overview
Traditionally, CAM has been considered a numerical control (NC) programming tool, wherein two-dimensional (2-D) or three-dimensional (3-D) models of components are generated in CAD. As with other "computer-aided" technologies, CAM does not eliminate the need for skilled professionals such as manufacturing engineers, NC programmers, or machinists. CAM leverages the value of the most skilled manufacturing professionals through advanced productivity tools, while building the skills of new professionals through visualization, simulation, and optimization tools.
A CAM tool generally converts a model into a language the target machine understands, typically G-code. Numerical control can be applied to machining tools or, more recently, to 3D printers.
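As a minimal sketch of that conversion step (not taken from any real CAM package), the following turns a 2-D polyline toolpath into linear G-code moves; the function name and the default feed rate are invented for illustration:

```python
# Illustrative sketch of the final CAM step: converting a 2-D toolpath
# (a list of (x, y) points) into G-code moves a machine controller can run.
# G0 is a rapid positioning move; G1 is a cutting move at feed rate F.

def toolpath_to_gcode(points, feed_rate=200):
    """Emit a rapid move to the start point, then feed-rate moves along the path."""
    x0, y0 = points[0]
    lines = [f"G0 X{x0:.3f} Y{y0:.3f}"]  # rapid move to the start of the cut
    for x, y in points[1:]:
        # linear interpolation to each subsequent vertex at the cutting feed rate
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_rate}")
    return "\n".join(lines)
```

A real CAM system also handles tool geometry, cutting depth, spindle speed, and the vendor-specific G-code dialect of the target control, which this sketch omits.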
History
Early commercial applications of CAM were in large companies in the automotive and aerospace industries; for example, Pierre Bézier's work developing the CAD/CAM application UNISURF in the 1960s for car body design and tooling at Renault. In 1950, Alexander Hammer at DeLaval Steam Turbine Company invented a technique to progressively drill turbine blades out of a solid block of metal, with the drill controlled by a punch card reader. Boeing first obtained NC machines in 1956, made by companies such as Kearney & Trecker, Stromberg-Carlson, and Thompson Ramo Wooldridge.
Historically, CAM software was seen to have several shortcomings that necessitated an overly high level of involvement by skilled CNC machinists. Fallows created the first CAD software, but it had severe shortcomings and was promptly taken back into development. CAM software would output code for the least capable machine, as each machine-tool control added to the standard G-code set for increased flexibility. In some cases, such as improperly set up CAM software or specific tools, the CNC machine required manual editing before the program would run properly.
https://en.wikipedia.org/wiki/Mind%20uploading | Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.
Substantial mainstream research in related areas is being conducted in neuroscience and computer science, including animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are under active development; however, they admit that others remain highly speculative, while arguing that these are still in the realm of engineering possibility.
Mind uploading may potentially be accomplished by either of two methods: copy-and-upload, or copy-and-delete by gradual replacement of neurons (which can be considered a gradual destructive uploading), until the original organic brain no longer exists and a computer program emulating the brain takes control over the body. In the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then storing and copying that information state into a computer system or another computational device. The biological brain may not survive the copying process, or may be deliberately destroyed during it in some variants of uploading. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer inside (or connected to, or remotely controlling) a (not necessarily humanoid) robot or a biological or cybernetic body.
Among some futurists and within part of the transhumanist movement, mind uploading is treated as an important proposed life extension or immortality technology (known as "digital immortality"). Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our "mind-file", to enable interstellar space travel, and a means for human culture to survive a global disaster by making a functional copy of a human society in a computing device. Whole-brain emulation is discussed by some futurists as a "logical endpoint" of the topical computational neuroscience and neuroinformatics fields, both concerned with brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and to at least weak superintelligence. Another approach is seed AI, which would not be based on existing
https://en.wikipedia.org/wiki/MacOS%20version%20history | The history of macOS, Apple's current Mac operating system formerly named Mac OS X until 2011 and then OS X until 2016, began with the company's project to replace its "classic" Mac OS. That system, up to and including its final release Mac OS 9, was a direct descendant of the operating system Apple had used in its Mac computers since their introduction in 1984. However, the current macOS is a UNIX operating system built on technology that had been developed at NeXT from the 1980s until Apple purchased the company in early 1997.
Although it was originally marketed as simply "version 10" of Mac OS (indicated by the Roman numeral "X"), it has a completely different codebase from Mac OS 9, as well as substantial changes to its user interface. The transition was a technologically and strategically significant one. To ease the transition for users and developers, versions through 10.4 were able to run Mac OS 9 and its applications in the Classic Environment, a compatibility layer.
macOS was first released in 1999 as Mac OS X Server 1.0. It was built using the technologies Apple acquired from NeXT, but did not include the signature Aqua user interface (UI). The desktop version aimed at regular users—Mac OS X 10.0—shipped in March 2001. Since then, several more distinct desktop and server editions of macOS have been released. Starting with Mac OS X 10.7 Lion, macOS Server is no longer offered as a standalone operating system; instead, server management tools are available for purchase as an add-on. The macOS Server app was discontinued on April 21, 2022 and will stop working on macOS 13 Ventura or later. Starting with the Intel build of Mac OS X 10.5 Leopard, most releases have been certified as Unix systems conforming to the Single UNIX Specification.
Lion was referred to by Apple as "Mac OS X Lion" and sometimes as "OS X Lion"; Mountain Lion was officially referred to as just "OS X Mountain Lion", with the "Mac" being completely dropped. The operating system was further renamed to "macOS" starting with macOS Sierra.
From the introduction of machines not supporting the classic Mac OS in 2003 until the introduction of iPhone OS in early 2007, Mac OS X was Apple's only software platform.
macOS retained the major version number 10 throughout its development history until the release of macOS 11 Big Sur in 2020.
Mac OS X 10.0 and 10.1 were given names of big cats as internal code names ("Cheetah" and "Puma"). Starting with Mac OS X 10.2 Jaguar, big cat names were used as marketing names; starting with OS X 10.9 Mavericks, names of locations in California were used as marketing names instead.
The current major version, macOS 14 Sonoma, was announced on June 5, 2023 at WWDC 2023 and released on September 26 of that year.
Development
Development outside Apple
After Apple removed Steve Jobs from management in 1985, he left the company and attempted to create the "next big thing", with funding from Ross Perot and himself. The result was the NeXT Comp |
https://en.wikipedia.org/wiki/Hacktivism | Internet activism, hacktivism, or hactivism (a portmanteau of hack and activism), is the use of computer-based techniques such as hacking as a form of civil disobedience to promote a political agenda or social change. With roots in hacker culture and hacker ethics, its ends are often related to free speech, human rights, or freedom of information movements.
Hacktivist activities span many political ideals and issues. Freenet, a peer-to-peer platform for censorship-resistant communication, is a prime example of translating political thought and freedom of speech into code. Hacking as a form of activism can be carried out through a network of activists, such as Anonymous and WikiLeaks, or through a singular activist, working in collaboration toward common goals without an overarching authority figure.
"Hacktivism" is a controversial term with several meanings. The word was coined to characterize electronic direct action as working toward social change by combining programming skills with critical thinking. But just as hack can sometimes mean cyber crime, hacktivism can be used to mean activism that is malicious, destructive, and undermining the security of the Internet as a technical, economic, and political platform.
According to the United States 2020-2022 Counterintelligence Strategy, in addition to state adversaries and transnational criminal organizations, "ideologically motivated entities such as hacktivists, leaktivists, and public disclosure organizations, also pose significant threats".
Origins and definitions
Writer Jason Sack first used the term hacktivism in a 1995 article in conceptualizing New Media artist Shu Lea Cheang's film Fresh Kill. However, the term is frequently attributed to the Cult of the Dead Cow (cDc) member "Omega," who used it in a 1996 e-mail to the group. Due to the variety of meanings of its root words, the definition of hacktivism is nebulous and there exists significant disagreement over the kinds of activities and purposes it encompasses. Some definitions include acts of cyberterrorism while others simply reaffirm the use of technological hacking to effect social change.
Forms and methods
Self-proclaimed "hacktivists" often work anonymously, sometimes operating in groups and at other times as a lone wolf with several cyber-personas, all corresponding to one activist within the cyberactivism umbrella, which has been gaining public interest and power in pop culture. Hacktivists generally operate under apolitical ideals and express uninhibited ideas or abuse without being scrutinized by society, while representing or defending themselves publicly under an anonymous identity, which gives them a sense of power in the cyberactivism community.
In order to carry out their operations, hacktivists might create new tools; or integrate or use a variety of software tools readily available on the Internet. One class of hacktivist activities includes increasing the accessibility of others to take politically motivated act |
https://en.wikipedia.org/wiki/Braided%20river | A braided river (also called braided channel or braided stream) consists of a network of river channels separated by small, often temporary, islands called braid bars or, in British English usage, aits or eyots.
Braided streams tend to occur in rivers with high sediment loads or coarse grain sizes, and in rivers with steeper slopes than typical rivers with straight or meandering channel patterns. They are also associated with rivers with rapid and frequent variation in the amount of water they carry, i.e., with "flashy" rivers, and with rivers with weak banks.
Braided channels are found in a variety of environments all over the world, including gravelly mountain streams, sand bed rivers, on alluvial fans, on river deltas, and across depositional plains.
Description
A braided river consists of a network of multiple shallow channels that diverge and rejoin around ephemeral braid bars. This gives the river a fancied resemblance to the interwoven strands of a braid. The braid bars, also known as channel bars, branch islands, or accreting islands, are usually unstable and may be completely covered at times of high water. The channels and braid bars are usually highly mobile, with the river layout often changing significantly during flood events. When the islets separating channels are stabilized by vegetation, so that they are more permanent features, they are sometimes called aits or eyots.
A braided river differs from a meandering river, which has a single sinuous channel. It is also distinct from an anastomosing river. Anastomosing rivers are similar to braided rivers in that they consist of multiple interweaving channels. However, anastomosing rivers consist of semi-permanent channels which are separated by floodplain rather than channel bars. These channels may themselves be braided.
Formation
The physical processes that determine whether a river will be braided or meandering are not fully understood. However, there is wide agreement that a river becomes braided when it carries an abundant supply of sediments.
Experiments with flumes suggest that a river becomes braided when a threshold level of sediment load or slope is reached. On timescales long enough for the river to evolve, a sustained increase in sediment load will increase the bed slope of the river, so that a variation of slope is equivalent to a variation in sediment load, provided the amount of water carried by the river is unchanged. A threshold slope was experimentally determined to be 0.016 (ft/ft) for a stream with poorly sorted coarse sand. Any slope over this threshold created a braided stream, while any slope under the threshold created a meandering stream or – for very low slopes – a straight channel. Also important to channel development is the proportion of suspended load sediment to bed load. An increase in suspended sediment allowed for the deposition of fine erosion-resistant material on the inside of a curve, which accentuated the curve and in some instances, caus |
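The threshold behaviour reported from those flume experiments can be sketched as a toy classifier using the experimentally determined slope of 0.016 for poorly sorted coarse sand. The cutoff for "very low" slopes below is invented purely for illustration, and real channel patterns depend on far more than slope alone:

```python
# Toy classifier for channel pattern based on bed slope alone, following the
# flume-experiment threshold quoted in the text (0.016 ft/ft for poorly
# sorted coarse sand). The straight-channel cutoff is an invented placeholder.

BRAID_THRESHOLD = 0.016  # experimentally determined for poorly sorted coarse sand

def channel_pattern(slope, straight_below=0.0005):
    """Classify a channel as braided, meandering, or straight from slope only."""
    if slope > BRAID_THRESHOLD:
        return "braided"       # above the threshold, flumes produced braiding
    if slope < straight_below:
        return "straight"      # very low slopes produced a straight channel
    return "meandering"        # intermediate slopes produced meandering
```

This deliberately ignores sediment load, bank strength, and discharge variability, all of which the surrounding text identifies as controls on braiding.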
https://en.wikipedia.org/wiki/A300%20%28disambiguation%29 | The Airbus A300 is a wide-body airliner.
A300 or A.300 may also refer to:
A300 road, a main road in Great Britain
Acorn Archimedes A300, a British home computer
Aero A.300, a 1938 Czechoslovakian bomber aircraft
Ansaldo A.300, a 1919 Italian general-purpose biplane aircraft
Midland Highway (Victoria), a highway in Victoria, Australia bears the designation A300 for most of its route
RFA Oakol (A300), a British fleet auxiliary vessel
Weishi A300, a version of Chinese 300 mm rocket artillery
AMD A300 platform, a system on a chip solution for AMD Ryzen processors |
https://en.wikipedia.org/wiki/GNU%20Compiler%20for%20Java | The GNU Compiler for Java (GCJ) is a discontinued free compiler for the Java programming language. It was part of the GNU Compiler Collection.
GCJ compiles Java source code to Java virtual machine (JVM) bytecode or to machine code for a number of CPU architectures. It could also compile class files and whole JARs that contain bytecode into machine code.
History
The original source of GCJ's runtime libraries comes from the GNU Classpath project, though the libgcj libraries have since diverged in code. GCJ 4.3 uses the Eclipse Compiler for Java as a front end.
In 2007, a lot of work was done to implement support for Java's two graphical APIs in GNU Classpath: AWT and Swing. Software support for AWT was still in development: "Once AWT support is working then Swing support can be considered. There is at least one free-software partial implementations of Swing that may be usable." GNU Classpath was never completed even to Java 1.2 status and now appears to have been abandoned completely.
As of 2015, there were no new developments announced from GCJ and the product was in maintenance mode, with open-source Java toolchain development mostly happening within OpenJDK. GCJ was removed from the GCC trunk on September 30, 2016. Announcement of its removal was made with the release of GCC 7.1, which does not contain it. GCJ remains part of GCC 6.
Performance
When compiling Java code to machine code, GCJ's output should have a faster start-up time than the equivalent bytecode launched in a JVM.
Compiled Native Interface (CNI)
The Compiled Native Interface (CNI), previously named "Cygnus Native Interface", is a software framework for the GCJ that allows Java code to call, and be called by, native applications (programs specific to a hardware and operating-system platform) and libraries written in C++.
CNI closely resembles the JNI (Java Native Interface) framework which comes as a standard with various Java virtual machines.
Comparison of language use
The authors of CNI claim various advantages over JNI:
CNI depends on Java classes appearing as C++ classes. For example,
given a Java class,
public class Int
{
public int i;
public Int(int i) { this.i = i; }
public static Int zero = new Int(0);
}
one can use the class thus:
#include <gcj/cni.h>
#include <Int>
Int *mult(Int *p, int k)
{
if (k == 0)
return Int::zero; // Static member access.
return new Int(p->i * k);
}
See also
Excelsior JET (Excelsior Java native code compiler)
IcedTea
Kaffe
SableVM
JamVM
Apache Harmony
Jikes
GraalVM - GraalVM's Native Image functionality is an ahead-of-time compilation technology that produces executable binaries of class files.
Java virtual machine
Free Java implementations
Kotlin - Kotlin/Native is a technology for compiling Kotlin to native binaries that run without any JVM. It comprises a LLVM-based backend for the Kotlin compiler and a native implementation of the Kotlin runtime library.
References
Exter |
https://en.wikipedia.org/wiki/Asynchrony | Asynchrony is the state of not being in synchronization.
Asynchrony or asynchronous may refer to:
Electronics and computing
Asynchrony (computer programming), the occurrence of events independent of the main program flow, and ways to deal with such events
Async/await
Asynchronous system, a system having no global clock, instead operating under distributed control
Asynchronous circuit, a sequential digital logic circuit not governed by a clock circuit or signal
Asynchronous communication, transmission of data without the use of an external clock signal
Asynchronous cellular automaton, a mathematical model of discrete cells which update their state independently
Asynchronous operation, a sequence of operations executed out of time coincidence with any event
Other uses
Asynchrony (game theory), when players in games update their strategies at different time intervals
Asynchronous learning, an educational method in which the teacher and student are separated in time
Asynchronous motor, a type of electric motor
Asynchronous multiplayer, a form of multiplayer gameplay in video games
Asynchronous muscles, muscles in which there is no one-to-one relationship between stimulation and contraction
Collaborative editing or asynchronous editing, the practice of groups producing works together through individual contributions
See also
async (album), 2017 album by Japanese musician and composer Ryuichi Sakamoto |
https://en.wikipedia.org/wiki/Bubble%20memory | Bubble memory is a type of non-volatile computer memory that uses a thin film of a magnetic material to hold small magnetized areas, known as bubbles or domains, each storing one bit of data. The material is arranged to form a series of parallel tracks that the bubbles can move along under the action of an external magnetic field. The bubbles are read by moving them to the edge of the material, where they can be read by a conventional magnetic pickup, and then rewritten on the far edge to keep the memory cycling through the material. In operation, bubble memories are similar to delay-line memory systems.
Bubble memory started out as a promising technology in the 1970s, offering memory density of an order similar to hard drives, but performance more comparable to core memory, while lacking any moving parts. This led many to consider it a contender for a "universal memory" that could be used for all storage needs. The introduction of dramatically faster semiconductor memory chips pushed bubble into the slow end of the scale, and equally dramatic improvements in hard-drive capacity made it uncompetitive in price terms. Bubble memory was used for some time in the 1970s and 1980s where its non-moving nature was desirable for maintenance or shock-proofing reasons. The introduction of flash storage and similar technologies rendered even this niche uncompetitive, and bubble disappeared entirely by the late 1980s.
History
Precursors
Bubble memory is largely the brainchild of a single person, Andrew Bobeck. Bobeck had worked on many kinds of magnetics-related projects through the 1960s, and two of his projects put him in a particularly good position for the development of bubble memory. The first was the development of the first magnetic-core memory system driven by a transistor-based controller, and the second was the development of twistor memory.
Twistor is essentially a version of core memory that replaces the "cores" with a piece of magnetic tape. The main advantage of twistor is its ability to be assembled by automated machines, as opposed to core, which was almost entirely manual. AT&T had great hopes for twistor, believing that it would greatly reduce the cost of computer memory and put them in an industry leading position. Instead, DRAM memories came onto the market in the early 1970s and rapidly replaced all previous random-access memory systems. Twistor ended up being used only in a few applications, many of them AT&T's own computers.
One interesting side effect of the twistor concept was noticed in production: under certain conditions, passing a current through one of the electrical wires running inside the tape would cause the magnetic fields on the tape to move in the direction of the current. If used properly, it allowed the stored bits to be pushed down the tape and pop off the end, forming a type of delay-line memory, but one where the propagation of the fields was under computer control, as opposed to automatically advancing at a s |
https://en.wikipedia.org/wiki/Freedom%20of%20information%20laws%20by%20country | Freedom of information laws allow access by the general public to data held by national governments and, where applicable, by state and local governments. The emergence of freedom of information legislation was a response to increasing dissatisfaction with the secrecy surrounding government policy development and decision making. In recent years Access to Information Act has also been used. They establish a "right-to-know" legal process by which requests may be made for government-held information, to be received freely or at minimal cost, barring standard exceptions. Also variously referred to as open records, or sunshine laws (in the United States), governments are typically bound by a duty to publish and promote openness. In many countries there are constitutional guarantees for the right of access to information, but these are usually unused if specific support legislation does not exist. Additionally, the United Nations Sustainable Development Goal 16 has a target to ensure public access to information and the protection of fundamental freedoms as a means to ensure accountable, inclusive and just institutions.
Introduction
Over 100 countries around the world have implemented some form of freedom of information legislation. Sweden's Freedom of the Press Act of 1766 is the oldest in the world.
Most freedom of information laws exclude the private sector from their jurisdiction thus information held by the private sector cannot be accessed as a legal right. This limitation has serious implications because the private sector performs many functions which were previously the domain of the public sector. As a result, information that was previously public is now within the private sector, and the private contractors cannot be forced to disclose information.
Other countries are working towards introducing such laws, and many regions of countries with national legislation have local laws. For example, all U.S. states have laws governing access to public documents belonging to the state and local taxing entities. Additionally, the U.S. Freedom of Information Act governs record management of documents in the possession of the federal government.
A related concept is open meetings legislation, which allows access to government meetings, not just to the records of them. In many countries, privacy or data protection laws may be part of the freedom of information legislation; the concepts are often closely tied together in political discourse.
A basic principle behind most freedom of information legislation is that the burden of proof falls on the body asked for information, not the person asking for it. The person making the request does not usually have to give an explanation for their actions, but if the information is not disclosed a valid reason has to be given.
In 2015, the UNESCO General Conference voted to designate 28 September as "International Day for the Universal Access to Information" or, as it is more commonly known, Access to Information Day.
https://en.wikipedia.org/wiki/Hijacking | Hijacking may refer to:
Common usage
Computing and technology
Bluejacking, the unsolicited transmission of data via Bluetooth
Brandjacking, the unauthorized use of a company's brand
Browser hijacking
Clickjacking (including likejacking and cursorjacking), a phenomenon of hijacking "clicks" in a website context
DLL hijacking
DNS hijacking
Domain hijacking
Hijack attack, in communication, a form of active wiretapping in which the attacker seizes control of a previously established communication association
BGP hijacking
Reverse domain hijacking
Session hijacking
Finance
Credit card hijacking
Transportation
Aircraft hijacking, the unlawful seizure of an aircraft by an individual or a group
Carjacking, a robbery in which the item stolen is a motor vehicle
Maritime hijacking, or piracy
Arts, entertainment, and media
Hijacking, in dance, a variation of lead and follow
A Hijacking, a 2012 Danish film
See also
Hi-Jacked, a 1950 film
"Hi-jacked" (Joe 90), a 1968 episode of Joe 90
Hijacked, a 2012 action, crime, thriller film directed by Brandon Nutt and starring Vinnie Jones, Rob Steinberg, and Craig Fairbrass
Hijack (disambiguation)
Hijacker (comics), three different Marvel Comics characters have used this moniker |
https://en.wikipedia.org/wiki/Silk%20Stalkings | Silk Stalkings is an American crime drama television series that premiered on CBS on November 7, 1991, as part of the network's late-night Crimetime After Primetime programming package. Broadcast for two seasons until CBS ended the Crimetime experiment in June 1993, the remaining six seasons ran exclusively on USA Network until the series finale on April 18, 1999. The show was creator Stephen J. Cannell's longest-running series. Its title is a wordplay on "silk stockings".
The series portrays the daily lives of two detectives who solve sexually-based crimes of passion ("silk stalkings") among the ultra-rich of Palm Beach, Florida. Most episodes were shot in San Diego, California, while others were filmed in Scottsdale, Arizona.
Synopsis
List of Silk Stalkings episodes
Chris and Rita
From 1991 to 1995, the lead characters were played by Rob Estes and Mitzi Kapture, as detectives Christopher Lorenzo and Rita Lee Lance, respectively. The story lines were told in a partially first-person perspective focusing on Lance, who would speak in voiceovers throughout the episodes. Early in the series, Ben Vereen played Rita's boss, Captain Hutchinson ("Hutch"). Vereen was compelled to retire from the show during the second season due to an off-screen accident, but he returned for a few guest appearances. His successor was Lt. Hudson, played by Robert Gossett, who stayed on until the third season. Chris and Rita's new boss, who would stay with the show for its duration, was Charlie Brill as Captain Harry Lipschitz. Brill went on to appear in the most episodes of any actor, 129, ahead of Kapture's 101 and Estes' 100. He was promoted to the opening credits starting with season six (none of the previous captains in the series had achieved this).
Brill's real-life wife Mitzi McCall played Lipschitz's free-spirited wife Frannie on the series, and the two provided some occasional comic relief amid the dramatic tension of the storylines. They also appeared in the second season playing completely different characters. Working prominently with Lorenzo and Lance was assistant district attorney George Donovan (William Anton). Various recurring characters came and went, notably Dennis Paladino as mob boss Donnie "Dogs" DiBarto (DiBelco in his first appearance); John Byner as Cotton Dunn, a cunning but likable con artist; Scott Atkins as Officer Perry, a rookie cop; Kim Morgan Greene as Melissa Cassidy, a late-night radio talk-show host, sex therapist, and old flame of Chris's; Danny Gans as Roger, a coroner who frequently (and unsuccessfully) tried to get Rita to go out with him; Marie Marshall as Solange, a local photographer with a faux French accent who crossed paths with Chris and Rita in the second season; and Lucy Lin, who played forensic expert Dr. Noriko Weinstein. Actress Freda Foh Shen took over Lin's role in later episodes. In the first season, Rita was shown to suffer from occasionally intense headaches, which were caused by a blood bubble in her brain.
https://en.wikipedia.org/wiki/FreeBSD%20Documentation%20License | The FreeBSD Documentation License is the license that covers most of the documentation for the FreeBSD operating system.
License
The license is very similar to the 2-clause Simplified BSD License used by FreeBSD itself, but it clarifies the meanings of "source code" and "compiled" in the context of documentation. It also includes a mandatory disclaimer about IEEE and Open Group material in some manual pages.
The FreeBSD Documentation License
Copyright 1994-2015 The FreeBSD Project. All rights reserved.
Redistribution and use in source (SGML DocBook) and 'compiled' forms (SGML, HTML, PDF, PostScript,
RTF and so forth) with or without modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code (SGML DocBook) must retain the above copyright notice, this list of
conditions and the following disclaimer as the first lines of this file unmodified.
2. Redistributions in compiled form (transformed to other DTDs, converted to PDF, PostScript, RTF
and other formats) must reproduce the above copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other materials provided with the distribution.
THIS DOCUMENTATION IS PROVIDED BY THE FREEBSD DOCUMENTATION PROJECT "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
THE FREEBSD DOCUMENTATION PROJECT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Manual Pages
Some FreeBSD manual pages contain text from the IEEE Std 1003.1, 2004 Edition, Standard for
Information Technology -- Portable Operating System Interface (POSIX®) specification. These manual
pages are subject to the following terms:
The Institute of Electrical and Electronics Engineers and The Open Group, have given us
permission to reprint portions of their documentation.
In the following statement, the phrase "this text" refers to portions of the system
documentation.
Portions of this text are reprinted and reproduced in electronic form in the FreeBSD manual
pages, from IEEE Std 1003.1, 2004 Edition, Standard for Information Technology --
Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue
6, Copyright (C) 2001-2004 by the Institute of Electrical and Electronics Engineers, Inc and
The Open Group. In the event of any discrepancy be |
https://en.wikipedia.org/wiki/Famicom%20Disk%20System | The Family Computer Disk System, commonly shortened to the Famicom Disk System or just Disk System, is a peripheral for Nintendo's Family Computer home video game console, released only in Japan on February 21, 1986. It uses proprietary floppy disks called "Disk Cards" for cheaper data storage, and it adds a new high-fidelity sound channel for supporting Disk System games.
Fundamentally, the Disk System serves simply to enhance some aspects already inherent to the base Famicom system, with better sound and cheaper games, though with the disadvantages of high initial price, slow speed, and lower reliability. However, this boost to the market of affordable and writable mass storage temporarily served as an enabling technology for the creation of new types of video games. This includes the vast, open world, progress-saving adventures of the best-selling The Legend of Zelda (1986) and Metroid (1986), games with a cost-effective and swift release such as the best-selling Super Mario Bros. 2, and nationwide leaderboards and contests via the in-store Disk Fax kiosks, which are considered to be forerunners of today's online achievement and distribution systems.
By 1989, the Famicom Disk System had been rendered obsolete by the improving semiconductor technology of game cartridges. The Disk System's lifetime sales reached 4.4 million units by 1990, making it the most successful console add-on of all time, despite not being sold outside of Japan. Its final game was released in 1992, its software was discontinued in 2003, and Nintendo officially discontinued its technical support in 2007.
History
By 1985, Nintendo's Family Computer was dominating the Japanese home video game market, selling over three million units within a year and a half. Because of its success, the company had difficulty keeping up with demand for new stock, and was often flooded with calls from retailers asking for more systems. Retailers also asked for cheaper games; the cost of chips and semiconductors made cartridges expensive to manufacture and costly for both stores and consumers, and chip shortages created additional supply issues. To satisfy these requests, Nintendo began considering ways to lower the cost of games. It turned to the home computer market for inspiration, specifically to floppy disks, which were quickly becoming the standard storage media for personal computers. Floppy disks were cheap to produce and rewritable, allowing games to be written to them easily during manufacturing. Seeing their potential, Nintendo began work on a disk-based peripheral for the Famicom.
For its proprietary diskette platform, which they dubbed the "Disk Card", Nintendo chose to base it on Mitsumi's Quick Disk media format, a cheaper alternative to floppy disks for Japanese home computers. The Disk Card format presented a number of advantages over cartridges, such as increased storage capacity that allowed for larger games, additional sound channels, and the abi |
https://en.wikipedia.org/wiki/Logarithmic%20scale | A logarithmic scale (or log scale) is a way of displaying numerical data over a very wide range of values in a compact way. Whereas on a linear number line every unit of distance corresponds to adding the same amount, on a logarithmic scale every unit of length corresponds to multiplying the previous value by the same amount. Hence, such a scale is nonlinear. On a logarithmic scale, the numbers 1, 2, 3, 4, 5, and so on would not be equally spaced; rather, the numbers 10, 100, 1000, 10000, and 100000 would be equally spaced. Likewise, the numbers 2, 4, 8, 16, 32, and so on would be equally spaced. Exponential growth curves are often displayed on a log scale, since otherwise they would increase too quickly to fit within a small graph.
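The defining property can be checked numerically. This minimal sketch (plain Python, standard library only) maps successive powers of ten to their positions on a base-10 log axis and confirms they come out equally spaced:

```python
import math

# On a linear axis, equal distances correspond to equal differences;
# on a logarithmic axis, equal distances correspond to equal ratios.
values = [10, 100, 1000, 10000, 100000]
positions = [math.log10(v) for v in values]  # axis positions on a log-10 scale

# Successive powers of ten land one unit apart: positions 1, 2, 3, 4, 5.
gaps = [round(b - a, 10) for a, b in zip(positions, positions[1:])]
print(gaps)  # [1.0, 1.0, 1.0, 1.0]
```

The same check with `values = [2, 4, 8, 16, 32]` and `math.log2` gives unit gaps as well: any fixed ratio maps to a fixed distance.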
Common uses
The markings on slide rules are arranged in a log scale for multiplying or dividing numbers by adding or subtracting lengths on the scales.
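The adding-lengths trick is just the product rule for logarithms, log a + log b = log(ab). A small sketch of the idea (the function name `slide_rule_multiply` is illustrative, not from the source):

```python
import math

def slide_rule_multiply(a, b):
    """Multiply two positive numbers the way a slide rule does:
    add their lengths along the log scale, then read the result
    back off the scale (i.e., exponentiate the summed logarithms)."""
    return 10 ** (math.log10(a) + math.log10(b))

# Division works the same way with subtracted lengths.
print(slide_rule_multiply(2, 8))
```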
The following are examples of commonly used logarithmic scales, where a larger quantity results in a higher value:
Richter magnitude scale and moment magnitude scale (MMS) for strength of earthquakes and movement in the Earth
Sound level, with units decibel
Neper for amplitude, field and power quantities
Frequency level, with units cent, minor second, major second, and octave for the relative pitch of notes in music
Logit for odds in statistics
Palermo Technical Impact Hazard Scale
Logarithmic timeline
Counting f-stops for ratios of photographic exposure
The rule of nines used for rating low probabilities
Entropy in thermodynamics
Information in information theory
Particle size distribution curves of soil
The following are examples of commonly used logarithmic scales, where a larger quantity results in a lower (or negative) value:
pH for acidity
Stellar magnitude scale for brightness of stars
Krumbein scale for particle size in geology
Absorbance of light by transparent samples
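pH illustrates the "larger quantity, lower value" direction of this second group: it is defined as the negative base-10 logarithm of the hydrogen ion concentration. A sketch under that standard definition:

```python
import math

def pH(hydrogen_molarity):
    # pH is a logarithmic scale on which a larger quantity
    # (hydrogen ion concentration, in mol/L) gives a lower value.
    return -math.log10(hydrogen_molarity)

# A tenfold increase in concentration lowers the pH by exactly one unit.
print(pH(1e-7))  # neutral water, pH of about 7
```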
Some of our senses operate in a logarithmic fashion (Weber–Fechner law), which makes logarithmic scales for these input quantities especially appropriate. In particular, our sense of hearing perceives equal ratios of frequencies as equal differences in pitch. In addition, studies of young children in an isolated tribe have shown logarithmic scales to be the most natural display of numbers in some cultures.
Graphic representation
The top left graph is linear in the X and Y axes, and the Y-axis ranges from 0 to 10. A base-10 log scale is used for the Y axis of the bottom left graph, and the Y axis ranges from 0.1 to 1,000.
The top right graph uses a log-10 scale for just the X axis, and the bottom right graph uses a log-10 scale for both the X axis and the Y axis.
Presentation of data on a logarithmic scale can be helpful when the data:
covers a large range of values, since the use of the logarithms of the values rather than the actual values reduces a wide range to a more manageable size;
may contain exponential laws or power laws, since these will show up a |