source | text |
|---|---|
https://en.wikipedia.org/wiki/Access%20point | Access point or Access Point may refer to:
Access Point (Antarctica), a rocky point on Anvers Island, Antarctica
Wireless access point, a device to connect to a wireless computer network
Subject access point, a method in a bibliographic database by which books, journals, and other documents are accessed
See also |
https://en.wikipedia.org/wiki/Hidden%20node%20problem | In wireless networking, the hidden node problem or hidden terminal problem occurs when a node can communicate with a wireless access point (AP), but cannot directly communicate with other nodes that are communicating with that AP. This leads to difficulties in medium access control sublayer since multiple nodes can send data packets to the AP simultaneously, which creates interference at the AP resulting in no packet getting through.
Although some loss of packets is normal in wireless networking, and the higher layers will resend them, if one of the nodes is transferring a lot of large packets over a long period, the other node may get very little goodput.
Practical protocol solutions exist to the hidden node problem. One example is the Request to Send/Clear to Send (RTS/CTS) mechanism, in which nodes send short packets to request permission from the access point before sending longer data packets. Because responses from the AP are heard by all the nodes, the nodes can synchronize their transmissions so as not to interfere. However, the mechanism introduces latency, and its overhead can often exceed its benefit, particularly for short data packets.
Background
Hidden nodes in a wireless network are nodes that are out of range of other nodes or a collection of nodes. Consider a physical star topology with an access point with many nodes surrounding it in a circular fashion: each node is within communication range of the AP, but the nodes cannot communicate with each other.
For example, in a wireless network, it is likely that the node at the far edge of the access point's range, which is known as A, can see the access point, but it is unlikely that the same node can communicate with a node on the opposite end of the access point's range, C. These nodes are known as hidden.
Another example would be where A and C are either side of an obstacle that reflects or strongly absorbs radio waves, but nevertheless they can both still see the same AP.
The problem arises when nodes A and C start to send packets simultaneously to the access point B. Because nodes A and C cannot receive each other's signals, they cannot detect a collision before or while transmitting; carrier-sense multiple access with collision detection (CSMA/CD) therefore does not work, and collisions occur, corrupting the data received by the access point.
To overcome the hidden node problem, request-to-send/clear-to-send (RTS/CTS) handshaking (IEEE 802.11 RTS/CTS) is implemented at the Access Point in conjunction with the Carrier sense multiple access with collision avoidance (CSMA/CA) scheme. The same problem exists in a mobile ad hoc network (MANET).
IEEE 802.11 uses 802.11 RTS/CTS acknowledgment and handshake packets to partly overcome the hidden node problem. RTS/CTS is not a complete solution and may decrease throughput even further, but adaptive acknowledgements from the base station can help too.
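To make the handshake concrete, here is a toy Python model of two hidden nodes contending at one access point; the node names and the single CTS "grant" flag are invented for the illustration, and real 802.11 timing, back-off, and NAV details are ignored.

# Toy model of RTS/CTS arbitration at a single access point.
# Node names and the "grant" flag are illustrative only; real 802.11
# RTS/CTS also involves timing (NAV), retries, and random back-off.
class AccessPoint:
    def __init__(self):
        self.granted_to = None            # node currently holding a CTS grant

    def request_to_send(self, node):
        # Every node in range hears the CTS, so only the named node transmits.
        if self.granted_to is None:
            self.granted_to = node
            return True                   # CTS issued: clear to send
        return False                      # medium busy: defer and retry later

    def transmission_done(self, node):
        if self.granted_to == node:
            self.granted_to = None

ap = AccessPoint()
print(ap.request_to_send("A"))            # True: A may send its data packet
print(ap.request_to_send("C"))            # False: hidden node C defers, no collision at the AP
ap.transmission_done("A")
print(ap.request_to_send("C"))            # True: C sends once A has finished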
The comparison with hidden stations shows that RTS/CTS packages in each traffic class are profi |
https://en.wikipedia.org/wiki/Biological%20database | Biological databases are libraries of biological sciences, collected from scientific experiments, published literature, high-throughput experiment technology, and computational analysis. They contain information from research areas including genomics, proteomics, metabolomics, microarray gene expression, and phylogenetics. Information contained in biological databases includes gene function, structure, localization (both cellular and chromosomal), clinical effects of mutations as well as similarities of biological sequences and structures.
Biological databases can be classified by the kind of data they collect (see below). Broadly, there are molecular databases (for sequences, molecules, etc.), functional databases (for physiology, enzyme activities, phenotypes, ecology, etc.), taxonomic databases (for species and other taxonomic ranks), images and other media, and specimen databases (for museum collections, etc.).
Databases are important tools in assisting scientists to analyze and explain a host of biological phenomena, from the structure of biomolecules and their interactions, to the whole metabolism of organisms, to the evolution of species. This knowledge helps facilitate the fight against diseases, assists in the development of medications, supports the prediction of certain genetic diseases, and aids in discovering basic relationships among species in the history of life.
Technical basis and theoretical concepts
Relational database concepts of computer science and Information retrieval concepts of digital libraries are important for understanding biological databases. Biological database design, development, and long-term management is a core area of the discipline of bioinformatics. Data contents include gene sequences, textual descriptions, attributes and ontology classifications, citations, and tabular data. These are often described as semi-structured data, and can be represented as tables, key delimited records, and XML structures.
Access
Most biological databases are available through web sites that organise data such that users can browse the data online. In addition, the underlying data are usually available for download in a variety of formats, including text, sequence data, protein structures, and links. Each of these can be obtained from particular sources, for example:
Text formats are provided by PubMed and OMIM.
Sequence data is provided by GenBank, in terms of DNA, and UniProt, in terms of protein.
Protein structures are provided by PDB, SCOP, and CATH.
Problems and challenges
Biological knowledge is distributed among countless databases. This sometimes makes it difficult to ensure the consistency of information, e.g. when different names are used for the same species or different data formats. As a consequence, inter-operability is a constant challenge for information exchange. For instance, if a DNA sequence database stores the DNA sequence along the name of a species, a name cha |
https://en.wikipedia.org/wiki/Macintosh%20Portable | The Macintosh Portable is a laptop designed, manufactured, and sold by Apple Computer, Inc. from September 1989 to October 1991. It was the first battery-powered Macintosh and garnered significant excitement from critics, but sales to customers were quite low. It featured a fast, sharp, and expensive monochrome active matrix LCD screen in a hinged design that covered the keyboard when the machine was not in use. The Portable was one of the early consumer laptops to employ an active matrix panel—only the most expensive of the initial PowerBook line, the PowerBook 170, had such a panel. The machine was designed to deliver high performance, at the cost of increased price and weight.
The Portable has features similar to the Atari STacy, a version of Atari's ST computer with a built-in keyboard and monitor. The Macintosh Portable can run Macintosh System 6.0.4 through System 7.5.5.
Hardware
The pointer was a built-in trackball that could be removed and located on either side of the keyboard. There were three drive configurations available for Macintosh Portable. A Portable could ship with one floppy drive, with two floppy drives, or with a hard drive and a floppy drive. The floppy drive is 1.44 MB. Most Macintosh Portable units came with a hard drive. It was a custom-engineered Conner CP-3045 (known by Apple as "Hard Disk 40SC"). It holds 40 MB of data, consumes less power compared to most hard drives of its time, and it has a proprietary SCSI connector; adapters that allow standard SCSI drives to be used on the Portable exist, but they are expensive. At 16 pounds (7.2 kilograms) and 4 inches (10 centimetres) thick, the Portable was a heavy and bulky portable computer. The main contributor to the Portable's weight and bulk was its lead-acid battery.
Display issues
Despite the dramatic improvement in ergonomics offered by the responsiveness, sharpness, and uniformity of its active matrix panel, one of the primary drawbacks of the Portable was poor readability in low-light situations. Consequently, in February 1991, Apple introduced a backlit Macintosh Portable (model M5126). The backlight was a welcome improvement, but it reduced the battery life by about half. An upgrade kit, which plugged into the ROM expansion slot, was also offered for the earlier model. The Portable used expensive SRAM memory in an effort to maximize battery life and to provide an "instant on" low-power sleep mode. In the newer backlit Portable, Apple changed SRAM memory to the less expensive (but more power-hungry) pseudo-SRAM, which reduced the total RAM expansion to 8 MB, and lowered the price.
Battery issues
The lead-acid battery on the non-backlit Portable offered up to ten hours of usage time, and the Portable draws the same amount of power whether it is turned off or in sleep mode. Unlike later portable computers from Apple and other manufacturers, the battery is wired in-series wit |
https://en.wikipedia.org/wiki/Largs | {{Infobox UK place
| gaelic_name = An Leargaidh Ghallda<ref>[http://www.ainmean-aite.org/database.asp?intent=details&id=530 Ainmean-Àite na h-Alba ~ Gaelic Place-names of Scotland]</ref>
| official_name = Largs
| type = Town
| static_image_name = DPP 00226.JPG
| static_image_caption = Part of the Largs seafront, showing part of the town centre
| static_image_width = 260px
| population =
| population_ref = ()
| unitary_scotland = North Ayrshire
| lieutenancy_scotland = Ayrshire and Arran
| constituency_westminster = North Ayrshire and Arran
| constituency_scottish_parliament = Cunninghame North
| country = Scotland
| coordinates =
| os_grid_reference = NS203592
| map_type =
| post_town = LARGS
| postcode_district = KA30
| postcode_area = KA
| dial_code = 01475
| edinburgh_distance_mi = 66
| london_distance_mi = 354
| website =
}}
Largs () is a town on the Firth of Clyde in North Ayrshire, Scotland, about from Glasgow. The original name means "the slopes" (An Leargaidh) in Scottish Gaelic.
A popular seaside resort with a pier, the town markets itself on its historic links with the Vikings, and an annual festival is held in early September. In 1263 it was the site of the Battle of Largs between the Norwegian and Scottish armies. The National Mòd has also been held here in the past.
History
There is evidence of human activity in the vicinity of Largs which can be dated to the Neolithic era. The Haylie Chambered Tomb in Douglas Park dates from c. 3000 BC.
Largs evolved from the estates of North Cunninghame over which the Montgomeries of Skelmorlie became temporal lords in the seventeenth century. Sir Robert Montgomerie built Skelmorlie Aisle in the ancient kirk of Largs in 1636 as a family mausoleum. Today the monument is all that remains of the old kirk.
From its beginnings as a small village around its kirk, Largs evolved into a busy and popular seaside resort in the nineteenth century. Large hotels appeared and the pier was constructed in 1834. It was not until 1895, however, that the railway made the connection to Largs, sealing the town's popularity.
It also became a fashionable place to live, and several impressive mansions were built, the most significant of which was 'Netherhall', the residence of William Thomson, Lord Kelvin, the physicist and engineer.
Largs has historical connections much further back, however. It was the site of the Battle of Largs in 1263, in which parts of a Scottish army attacked a small force of Norwegians attempting to salvage ships from a fleet carrying the armies of King Magnus Olafsson of |
https://en.wikipedia.org/wiki/Bar%20chart | A bar chart or bar graph is a chart or graph that presents categorical data with rectangular bars with heights or lengths proportional to the values that they represent. The bars can be plotted vertically or horizontally. A vertical bar chart is sometimes called a column chart.
A bar graph shows comparisons among discrete categories. One axis of the chart shows the specific categories being compared, and the other axis represents a measured value. Some bar graphs present bars clustered in groups of more than one, showing the values of more than one measured variable.
History
Many sources consider William Playfair (1759-1824) to have invented the bar chart and the Exports and Imports of Scotland to and from different parts for one Year from Christmas 1780 to Christmas 1781 graph from his The Commercial and Political Atlas to be the first bar chart in history. Diagrams of the velocity of a constantly accelerating object against time published in The Latitude of Forms (attributed to Jacobus de Sancto Martino or, perhaps, to Nicole Oresme) about 300 years before can be interpreted as "proto bar charts".
Usage
Bar graphs/charts provide a visual presentation of categorical data. Categorical data is a grouping of data into discrete groups, such as months of the year, age group, shoe sizes, and animals. These categories are usually qualitative. In a column (vertical) bar chart, categories appear along the horizontal axis and the height of the bar corresponds to the value of each category.
Bar charts have a discrete domain of categories, and are usually scaled so that all the data can fit on the chart. When there is no natural ordering of the categories being compared, bars on the chart may be arranged in any order. Bar charts arranged from highest to lowest incidence are called Pareto charts.
Grouped (clustered) and stacked
Bar graphs can also be used for more complex comparisons of data with grouped (or "clustered") bar charts, and stacked bar charts.
In grouped (clustered) bar charts, for each categorical group there are two or more bars color-coded to represent a particular grouping. For example, a business owner with two stores might make a grouped bar chart with different colored bars to represent each store: the horizontal axis would show the months of the year and the vertical axis would show revenue.
Alternatively, a stacked bar chart stacks bars on top of each other so that the height of the resulting stack shows the combined result. Stacked bar charts are not suited to data sets having both positive and negative values.
Grouped bar charts usually present the information in the same order in each grouping. Stacked bar charts present the information in the same sequence on each bar.
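As an illustration, the following Python/matplotlib sketch draws the same data both ways; the store names and revenue figures are invented for the example.

import numpy as np
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar"]            # categories on the horizontal axis
store_a = [10, 12, 9]                     # hypothetical revenue figures
store_b = [8, 15, 11]
x = np.arange(len(months))

fig, (grouped, stacked) = plt.subplots(1, 2, figsize=(8, 3))

# Grouped (clustered): one colour-coded bar per store, side by side in each month.
grouped.bar(x - 0.2, store_a, width=0.4, label="Store A")
grouped.bar(x + 0.2, store_b, width=0.4, label="Store B")
grouped.set_xticks(x)
grouped.set_xticklabels(months)
grouped.set_title("Grouped")
grouped.legend()

# Stacked: bars placed on top of each other; the stack height is the combined revenue.
stacked.bar(x, store_a, label="Store A")
stacked.bar(x, store_b, bottom=store_a, label="Store B")
stacked.set_xticks(x)
stacked.set_xticklabels(months)
stacked.set_title("Stacked")
stacked.legend()

plt.show()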
Variable-width (variwide)
Variable-width bar charts, sometimes abbreviated variwide (bar) charts, are bar charts having bars with non-uniform widths. Generally:
Bars represent quantities with respective rectangles of areas A that are respective arithmet |
https://en.wikipedia.org/wiki/Gift%20wrapping%20algorithm | In computational geometry, the gift wrapping algorithm is an algorithm for computing the convex hull of a given set of points.
Planar case
In the two-dimensional case the algorithm is also known as Jarvis march, after R. A. Jarvis, who published it in 1973; it has O(nh) time complexity, where n is the number of points and h is the number of points on the convex hull. Its real-life performance compared with other convex hull algorithms is favorable when n is small or h is expected to be very small with respect to n. In general cases, the algorithm is outperformed by many others (see Convex hull algorithms).
Algorithm
For the sake of simplicity, the description below assumes that the points are in general position, i.e., no three points are collinear. The algorithm may be easily modified to deal with collinearity, including the choice whether it should report only extreme points (vertices of the convex hull) or all points that lie on the convex hull. Also, the complete implementation must deal with degenerate cases when the convex hull has only 1 or 2 vertices, as well as with the issues of limited arithmetic precision, both of computer computations and input data.
The gift wrapping algorithm begins with i=0 and a point p0 known to be on the convex hull, e.g., the leftmost point, and selects the point pi+1 such that all points are to the right of the line pi pi+1. This point may be found in O(n) time by comparing polar angles of all points with respect to point pi taken for the center of polar coordinates. Letting i=i+1, and repeating with pi+1 until one reaches ph=p0 again, yields the convex hull in h steps. In two dimensions, the gift wrapping algorithm is similar to the process of winding a string (or wrapping paper) around the set of points.
The approach can be extended to higher dimensions.
Pseudocode
algorithm jarvis(S) is
    // S is the set of points
    // P will be the set of points which form the convex hull. Final set size is i.
    pointOnHull = leftmost point in S // which is guaranteed to be part of the CH(S)
    i := 0
    repeat
        P[i] := pointOnHull
        endpoint := S[0] // initial endpoint for a candidate edge on the hull
        for j from 0 to |S|-1 do
            // endpoint == pointOnHull is a rare case and can happen only when j == 1 and a better endpoint has not yet been set for the loop
            if (endpoint == pointOnHull) or (S[j] is on left of line from P[i] to endpoint) then
                endpoint := S[j] // found greater left turn, update endpoint
        i := i + 1
        pointOnHull = endpoint
    until endpoint = P[0] // wrapped around to first hull point
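A minimal Python rendering of the same procedure is sketched below; it assumes at least three points in general position and replaces the polar-angle comparison with a cross-product sign test.

def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o);
    # > 0 when b lies to the left of the directed line o -> a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def gift_wrapping(points):
    # Convex hull by Jarvis march; assumes >= 3 points, no three collinear.
    hull = []
    point_on_hull = min(points)             # leftmost point is certainly on the hull
    while True:
        hull.append(point_on_hull)
        endpoint = points[0]                # candidate edge hull[-1] -> endpoint
        for candidate in points:
            if endpoint == point_on_hull or cross(point_on_hull, endpoint, candidate) > 0:
                endpoint = candidate        # candidate is left of the current edge: take it
        point_on_hull = endpoint
        if endpoint == hull[0]:             # wrapped around to the first hull point
            return hull

print(gift_wrapping([(0, 0), (2, 1), (1, 1), (2, 3), (0, 2)]))
# [(0, 0), (0, 2), (2, 3), (2, 1)]  -- the interior point (1, 1) is excluded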
Complexity
The inner loop checks every point in the set S, and the outer loop repeats once for each point on the hull. Hence the total run time is O(nh). The run time depends on the size of the output, so Jarvis's march is an output-sensitive algorithm.
However, because the running time depends linearly on the number of hull v |
https://en.wikipedia.org/wiki/Graham%20scan | Graham's scan is a method of finding the convex hull of a finite set of points in the plane with time complexity O(n log n). It is named after Ronald Graham, who published the original algorithm in 1972. The algorithm finds all vertices of the convex hull ordered along its boundary. It uses a stack to detect and remove concavities in the boundary efficiently.
Algorithm
The first step in this algorithm is to find the point with the lowest y-coordinate. If the lowest y-coordinate exists in more than one point in the set, the point with the lowest x-coordinate out of the candidates should be chosen. Call this point P. This step takes O(n), where n is the number of points in question.
Next, the set of points must be sorted in increasing order of the angle they and the point P make with the x-axis. Any general-purpose sorting algorithm is appropriate for this, for example heapsort (which is O(n log n)).
Sorting in order of angle does not require computing the angle. It is possible to use any function of the angle which is monotonic in the interval [0, π). The cosine is easily computed using the dot product, or the slope of the line may be used. If numeric precision is at stake, the comparison function used by the sorting algorithm can use the sign of the cross product to determine relative angles.
If several points are of the same angle, either break ties by increasing distance (Manhattan or Chebyshev distance may be used instead of Euclidean for easier computation, since the points lie on the same ray), or delete all but the furthest point.
The algorithm proceeds by considering each of the points in the sorted array in sequence. For each point, it is first determined whether traveling from the two points immediately preceding this point constitutes making a left turn or a right turn. If a right turn, the second-to-last point is not part of the convex hull, and lies 'inside' it. The same determination is then made for the set of the latest point and the two points that immediately precede the point found to have been inside the hull, and is repeated until a "left turn" set is encountered, at which point the algorithm moves on to the next point in the set of points in the sorted array minus any points that were found to be inside the hull; there is no need to consider these points again. (If at any stage the three points are collinear, one may opt either to discard or to report it, since in some applications it is required to find all points on the boundary of the convex hull.)
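A condensed Python sketch of the scan is shown below; it sorts by angle using atan2 rather than one of the cheaper monotonic substitutes, uses the cross-product turn test described in the next paragraph, and simply discards collinear points.

from math import atan2

def cross(o, a, b):
    # > 0: left turn (counter-clockwise), < 0: right turn, 0: collinear
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    # Convex hull in counter-clockwise order; assumes >= 3 non-collinear points.
    p = min(points, key=lambda q: (q[1], q[0]))                 # lowest point, ties by x
    rest = sorted((q for q in points if q != p),
                  key=lambda q: (atan2(q[1] - p[1], q[0] - p[0]),             # angle with x-axis
                                 (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2))    # then distance
    hull = [p]
    for q in rest:
        while len(hull) > 1 and cross(hull[-2], hull[-1], q) <= 0:
            hull.pop()                      # right turn (or collinear): last point is not on the hull
        hull.append(q)
    return hull

print(graham_scan([(0, 0), (2, 1), (1, 1), (2, 3), (0, 2)]))
# [(0, 0), (2, 1), (2, 3), (0, 2)]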
Again, determining whether three points constitute a "left turn" or a "right turn" does not require computing the actual angle between the two line segments, and can actually be achieved with simple arithmetic only. For three points P1 = (x1, y1), P2 = (x2, y2), and P3 = (x3, y3), compute the z-coordinate of the cross product of the two vectors P1P2 and P1P3, which is given by the expression (x2 - x1)(y3 - y1) - (y2 - y1)(x3 - x1). If the result is 0, the points are collinear; if it is positive, the three points constitute a "left turn" or counter-clockwise o |
https://en.wikipedia.org/wiki/Metaprogramming | Metaprogramming is a programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyze or transform other programs, and even modify itself while running. In some cases, this allows programmers to minimize the number of lines of code to express a solution, in turn reducing development time. It also allows programs a greater flexibility to efficiently handle new situations without recompilation.
Metaprogramming can be used to move computations from run-time to compile-time, to generate code using compile time computations, and to enable self-modifying code. The ability of a programming language to be its own metalanguage is called reflection. Reflection is a valuable language feature to facilitate metaprogramming.
Metaprogramming was popular in the 1970s and 1980s using list processing languages such as LISP. LISP hardware machines were popular in the 1980s and enabled applications that could process code. They were frequently used for artificial intelligence applications.
Approaches
Metaprogramming enables developers to write programs and develop code that falls under the generic programming paradigm. Having the programming language itself as a first-class data type (as in Lisp, Prolog, SNOBOL, or Rebol) is also very useful; this is known as homoiconicity. Generic programming invokes a metaprogramming facility within a language by allowing one to write code without the concern of specifying data types since they can be supplied as parameters when used.
Metaprogramming usually works in one of three ways.
The first approach is to expose the internals of the run-time engine to the programming code through application programming interfaces (APIs) like that for the .NET IL emitter.
The second approach is dynamic execution of expressions that contain programming commands, often composed from strings, but which can also come from other methods using arguments or context, as in JavaScript. Thus, "programs can write programs." Although both approaches can be used in the same language, most languages tend to lean toward one or the other.
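A small Python illustration of this second approach, in which the program composes source text and then executes it at run time (the class name and field list are invented for the example):

# A program writing a program: build the source of a small class from a
# field list at run time, then execute it with exec().
fields = ["x", "y", "z"]

source = "class Point:\n"
source += "    def __init__(self, " + ", ".join(fields) + "):\n"
for name in fields:
    source += f"        self.{name} = {name}\n"

namespace = {}
exec(source, namespace)        # dynamic execution of the generated code
Point = namespace["Point"]

p = Point(1, 2, 3)
print(p.x, p.y, p.z)           # 1 2 3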
The third approach is to step outside the language entirely. General purpose program transformation systems such as compilers, which accept language descriptions and carry out arbitrary transformations on those languages, are direct implementations of general metaprogramming. This allows metaprogramming to be applied to virtually any target language without regard to whether that target language has any metaprogramming abilities of its own. One can see this at work with Scheme and how it allows tackling some limitations faced in C by using constructs that were part of the Scheme language itself to extend C.
Lisp is probably the quintessential language with metaprogramming facilities, both because of its historical precedence and because of the simplicity and power of its metaprogramming. In Lisp metaprogramming, the |
https://en.wikipedia.org/wiki/FIGlet | FIGlet is a computer program that generates text banners, in a variety of typefaces, composed of letters made up of conglomerations of smaller ASCII characters (see ASCII art). The name derives from "Frank, Ian and Glenn's letters".
Being free software, FIGlet is commonly included in many Unix-like operating system (Linux, BSD, etc.) distributions, but it has been ported to other platforms as well. The official FIGlet FTP site includes precompiled ports for the Acorn, Amiga, Apple II, Atari ST, BeOS, Mac, MS-DOS, NeXTSTEP, OS/2, and Microsoft Windows, as well as a reimplementation in Perl (Text::FIGlet). There are third-party reimplementations of FIGlet in Java (including one embedded in the JavE ASCII art editor), JavaScript, PHP, Python, and Go.
Behavior
FIGlet can read from standard input or accept a message as part of the command line. It prints to standard output. Some common arguments (options) are:
-f to select a font file.
-d to change the directory for fonts.
-c centers the output.
-l left-aligns the output.
-r right-aligns the output.
-t sets the output width to the terminal width.
-w specifies a custom output width.
-k enables kerning, printing each letter of the message individually, instead of merged into the adjacent letters.
Sample usage
An example of output generated by FIGlet is shown below.
[user@hostname ~]$ figlet Wikipedia
__ ___ _ _ _ _
\ \ / (_) | _(_)_ __ ___ __| (_) __ _
\ \ /\ / /| | |/ / | '_ \ / _ \/ _` | |/ _` |
\ V V / | | <| | |_) | __/ (_| | | (_| |
\_/\_/ |_|_|\_\_| .__/ \___|\__,_|_|\__,_|
|_|
The following command:
[user@hostname ~]$ figlet -ct -f roman Wikipedia
generates this output:
oooooo oooooo oooo o8o oooo o8o .o8 o8o
`888. `888. .8' `"' `888 `"' "888 `"'
`888. .8888. .8' oooo 888 oooo oooo oo.ooooo. .ooooo. .oooo888 oooo .oooo.
`888 .8'`888. .8' `888 888 .8P' `888 888' `88b d88' `88b d88' `888 `888 `P )88b
`888.8' `888.8' 888 888888. 888 888 888 888ooo888 888 888 888 .oP"888
`888' `888' 888 888 `88b. 888 888 888 888 .o 888 888 888 d8( 888
`8' `8' o888o o888o o888o o888o 888bod8P' `Y8bod8P' `Y8bod88P" o888o `Y888""8o
888
o888o
The -c and -t options center the text and make it take up the full width of the terminal. The -f roman option specifies the 'roman' font file.
Font examples
Invita
__ __)
(, ) | / , /) , /) ,
| /| / (/_ __ _ _(/ _
|/ |/ _(_/(___(_/_)__(/_(_(__(_( |
https://en.wikipedia.org/wiki/Warlock%20%28comics%29 | In comics, Warlock may refer to:
Warlock (New Mutants), a cybernetic alien member of the New Mutants superhero team in Marvel Comics
Warlock, a villain in the 1966 animated TV series The New Adventures of Superman
Adam Warlock, a space-traveling superhero in Marvel Comics
Maha Yogi, a Marvel Comics character who has also gone by the names Warlock and Mad Merlin
See also
Warlock (disambiguation)
fr:Warlock (comics) |
https://en.wikipedia.org/wiki/Association%20list | In computer programming and particularly in Lisp, an association list, often referred to as an alist, is a linked list in which each list element (or node) comprises a key and a value. The association list is said to associate the value with the key. In order to find the value associated with a given key, a sequential search is used: each element of the list is searched in turn, starting at the head, until the key is found. Associative lists provide a simple way of implementing an associative array, but are efficient only when the number of keys is very small.
Operation
An associative array is an abstract data type that can be used to maintain a collection of key–value pairs and look up the value associated with a given key. The association list provides a simple way of implementing this data type.
To test whether a key is associated with a value in a given association list, search the list starting at its first node and continuing either until a node containing the key has been found or until the search reaches the end of the list (in which case the key is not present).
To add a new key–value pair to an association list, create a new node for that key-value pair, set the node's link to be the previous first element of the association list, and replace the first element of the association list with the new node. Although some implementations of association lists disallow having multiple nodes with the same keys as each other, such duplications are not problematic for this search algorithm: duplicate keys that appear later in the list are ignored.
It is also possible to delete a key from an association list, by scanning the list to find each occurrence of the key and splicing the nodes containing the key out of the list. The scan should continue to the end of the list, even when the key is found, in case the same key may have been inserted multiple times.
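The three operations just described are short enough to sketch directly; the following Python rendering uses a simple linked node class, with names chosen only for the example.

class Node:
    def __init__(self, key, value, link=None):
        self.key, self.value, self.link = key, value, link

def lookup(alist, key):
    # Sequential search from the head; later duplicates are never reached.
    while alist is not None:
        if alist.key == key:
            return alist.value
        alist = alist.link
    return None                           # key not present

def add(alist, key, value):
    # The new node becomes the new head, shadowing any older binding of the same key.
    return Node(key, value, alist)

def delete(alist, key):
    # Remove every occurrence of key, scanning to the end of the list.
    if alist is None:
        return None
    rest = delete(alist.link, key)
    return rest if alist.key == key else Node(alist.key, alist.value, rest)

alist = add(add(add(None, "a", 1), "b", 2), "a", 3)
print(lookup(alist, "a"))                 # 3 (the most recent binding of "a")
print(lookup(delete(alist, "a"), "a"))    # None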
Performance
The disadvantage of association lists is that the time to search is O(n), where n is the length of the list. For large lists, this may be much slower than the times that can be obtained by representing an associative array as a binary search tree or as a hash table.
Additionally, unless the list is regularly pruned to remove elements with duplicate keys, multiple values associated with the same key will increase the size of the list, and thus the time to search, without providing any compensatory advantage.
One advantage of association lists is that a new element can be added in constant time. Additionally, when the number of keys is very small, searching an association list may be more efficient than searching a binary search tree or hash table, because of the greater simplicity of their implementation.
Applications and software libraries
In the early development of Lisp, association lists were used to resolve references to free variables in procedures. In this application, it is convenient to augment association lists with an additional operation, that reverses the addition of a |
https://en.wikipedia.org/wiki/Test-and-set | In computer science, the test-and-set instruction is an instruction used to write (set) 1 to a memory location and return its old value as a single atomic (i.e., non-interruptible) operation. The caller can then "test" the result to see if the state was changed by the call. If multiple processes may access the same memory location, and if a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process's test-and-set is finished. A central processing unit (CPU) may use a test-and-set instruction offered by another electronic component, such as dual-port RAM; a CPU itself may also offer a test-and-set instruction.
A lock can be built using an atomic test-and-set instruction as follows:
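A minimal sketch of the idea in Python, where a threading.Lock merely stands in for the hardware's atomicity (a real spinlock relies on the single atomic instruction rather than another lock):

import threading

class TestAndSetWord:
    # Emulates one word of memory with an atomic test-and-set; on real hardware
    # this is a single instruction, and the internal Lock only stands in for
    # that atomicity so the sketch can run as ordinary Python.
    def __init__(self):
        self.value = 0                      # memory location initialized to 0 (unlocked)
        self._guard = threading.Lock()

    def test_and_set(self):
        # Atomically write 1 and return the previous value.
        with self._guard:
            old = self.value
            self.value = 1
            return old

lock_word = TestAndSetWord()

def acquire():
    while lock_word.test_and_set() == 1:
        pass                                # spin until the old value was 0

def release():
    lock_word.value = 0                     # the holder simply writes 0 back

acquire()          # old value was 0: lock obtained
release()          # lock available again for another process/thread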
This code assumes that the memory location was initialized to 0 at some point prior to the first test-and-set. The calling process obtains the lock if the old value was 0, otherwise the while-loop spins waiting to acquire the lock. This is called a spinlock. At any point, the holder of the lock can simply set the memory location back to 0 to release the lock for acquisition by another--this does not require any special handling as the holder "owns" this memory location. "Test and test-and-set" is another example.
Maurice Herlihy (1991) proved that test-and-set (1-bit comparand) has a finite consensus number and can solve the wait-free consensus problem for at most two concurrent processes. In contrast, compare-and-swap (32-bit comparand) offers a more general solution to this problem, and in some implementations compare-double-and-swap (64-bit comparand) is also available for extended utility.
Hardware implementation of test-and-set
DPRAM test-and-set instructions can work in many ways. Here are two variations, both of which describe a DPRAM which provides exactly 2 ports, allowing 2 separate electronic components (such as 2 CPUs) access to every memory location on the DPRAM.
Variation 1
When CPU 1 issues a test-and-set instruction, the DPRAM first makes an "internal note" of this by storing the address of the memory location in a special place. If at this point, CPU 2 happens to issue a test-and-set instruction for the same memory location, the DPRAM first checks its "internal note", recognizes the situation, and issues a BUSY interrupt, which tells CPU 2 that it must wait and retry. This is an implementation of a busy waiting or spinlock using the interrupt mechanism. Since all this happens at hardware speeds, CPU 2's wait to get out of the spin-lock is very short.
Whether or not CPU 2 was trying to access the memory location, the DPRAM performs the test given by CPU 1. If the test succeeds, the DPRAM sets the memory location to the value given by CPU 1. Then the DPRAM wipes out its "internal note" that CPU 1 was writing there. At this point, CPU 2 could issue a test-and-set, which would succeed.
Variation 2
CPU 1 issues a test-and-set instruction to write to "memory location A". The DPRA |
https://en.wikipedia.org/wiki/Selection%20bias | Selection bias is the bias introduced by the selection of individuals, groups, or data for analysis in such a way that proper randomization is not achieved, thereby failing to ensure that the sample obtained is representative of the population intended to be analyzed. It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may be false.
Types
Sampling bias
Sampling bias is systematic error due to a non-random sample of a population, causing some members of the population to be less likely to be included than others, resulting in a biased sample, defined as a statistical sample of a population (or non-human factors) in which all participants are not equally balanced or objectively represented. It is mostly classified as a subtype of selection bias, sometimes specifically termed sample selection bias, but some classify it as a separate type of bias.
A distinction of sampling bias (albeit not a universally accepted one) is that it undermines the external validity of a test (the ability of its results to be generalized to the rest of the population), while selection bias mainly addresses internal validity for differences or similarities found in the sample at hand. In this sense, errors occurring in the process of gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias.
Examples of sampling bias include self-selection, pre-screening of trial participants, discounting trial subjects/tests that did not run to completion, migration bias (excluding subjects who have recently moved into or out of the study area), length-time bias, where slowly developing disease with better prognosis is detected, and lead time bias, where disease is diagnosed earlier in participants than in comparison populations, although the average course of disease is the same.
Time interval
Early termination of a trial at a time when its results support the desired conclusion.
A trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.
Exposure
Susceptibility bias
Clinical susceptibility bias, when one disease predisposes for a second disease, and the treatment for the first disease erroneously appears to predispose to the second disease. For example, postmenopausal syndrome gives a higher likelihood of also developing endometrial cancer, so estrogens given for the postmenopausal syndrome may receive more blame than they deserve for causing endometrial cancer.
Protopathic bias, when a treatment for the first symptoms of a disease or other outcome appear to cause the outcome. It is a potential bias when there is a lag time from the first symptoms and s |
https://en.wikipedia.org/wiki/Frieder%20Nake | Frieder Nake (born December 16, 1938 in Stuttgart, Germany) is a mathematician, computer scientist, and pioneer of computer art. He is best known internationally for his contributions to the earliest manifestations of computer art, a field of computing that made its first public appearances with three small exhibitions in 1965.
Art career
Nake had his first exhibition at Galerie Wendelin Niedlich in Stuttgart in November, 1965 alongside the artist Georg Nees.
Until 1969, Nake generated in rapid sequence a large number of works that he showed in many exhibitions over the years. He estimates his production at about 300 to 400 works during those years. A few were limited screenprint editions, single pieces and portfolios. The bulk were done as China ink on paper graphics, carried out by a flatbed high precision plotter called the Zuse Graphomat Z64.
Nake participated in the important group shows of the 1960s, such as, most prominently, Cybernetic Serendipity (London, UK, 1968), Tendencies 4: Computers and Visual Research (Zagreb, Yugoslavia, 1968), Ricerca e Progettazione. Proposte per una esposizione sperimentale (35th Venice Biennale, Italy, 1970), Arteonica (São Paulo, Brazil, 1971).
In 1971, he wrote a short and provocative note for Page, the Bulletin of the Computer Arts Society (of which he was, and still is, a member), under the title "There Should Be No Computer-Art" (Page No. 18, Oct. 1971, p. 1-2. Reprinted in Arie Altena, Lucas van der Velden (eds.): The anthology of computer art. Amsterdam: Sonic Acts 2006, p. 59-60). The note sparked a lively, controversial debate among those who had meanwhile started to build an active community of artists, writers, musicians, and designers in the digital domain. His statement was rooted in a moral position: the involvement of computer technology in the Vietnam War, and in massive attempts by capital to automate productive processes and thereby generate unemployment, should not allow artists to close their eyes and become silent servants of the ruling classes by reconciling high technology with the masses of the poor and suppressed.
His book Ästhetik als Informationsverarbeitung (1974) is one of the first to study connections between aesthetics, computing, and information theory, which has become important to the transdisciplinary area of digital media. This book and many of his ca. 300 publications (2012) evince his intellectual position between science and the humanities – a position that has always included an element of concern regarding the threats to a fully human society represented by computer technology, a concern that is on full display in a summary interview focused on what he describes as the "Algorithmic Revolution".
Academic career
Frieder Nake has been a professor of interactive computer graphics at the Department of Computer Science at Bremen, Germany, since 1972. Since 2005, he has also been teaching digital media design there. After studying mathematics at the University of Stuttgar |
https://en.wikipedia.org/wiki/Physics%20Analysis%20Workstation | The Physics Analysis Workstation (PAW) is an interactive, scriptable computer software tool for data analysis and graphical presentation in High Energy Physics (HEP).
The development of this software tool started at CERN in 1986; it was optimized for the processing of very large amounts of data. It was based on and intended for inter-operation with components of CERNLIB, an extensive collection of Fortran libraries.
PAW had been a standard tool in high energy physics for decades, yet was essentially unmaintained. Despite continuing popularity as of 2008, it has been losing ground to the C++-based ROOT package. Conversion tutorials exist. In 2014, development and support were stopped.
Sample script
PAW uses its own scripting language. Here is sample code which can be used to plot data gathered in files.
* read data
vector/read X,Y input_file.dat
* eps plot
fort/file 55 gg_ggg_dsig_dphid_179181.eps
meta 55 -113
opt linx | linear scale
opt logy | logarithmic scale
* here goes plot
set plci 1 | line color
set lwid 2 | line width
set dmod 1 | line type (solid, dotted, etc.)
graph 32 X Y AL | 32 stands for input data lines in input file
* plot title and comments
set txci 1
atitle '[f] (deg)' 'd[s]/d[f]! (mb)'
set txci 1
text 180.0 2e1 '[f]=179...181 deg' 0.12
close 55
References
External links
PAW (at CERN)
The PAW History Seen by the CERN Computer News Letters
CERNLIB (at CERN)
ROOT (at CERN)
Free science software
Free software programmed in Fortran
Physics software
CERN software |
https://en.wikipedia.org/wiki/Wireless%20gateway | A wireless gateway routes packets from a wireless LAN to another network, wired or wireless WAN. It may be implemented as software or hardware or a combination of both. Wireless gateways combine the functions of a wireless access point, a router, and often provide firewall functions as well. They provide network address translation (NAT) functionality, so multiple users can use the internet with a single public IP address. They also act as a dynamic host configuration protocol (DHCP) server to assign IP addresses automatically to devices connected to the network.
There are two kinds of wireless gateways. The simpler kind must be connected to a DSL modem or cable modem to connect to the internet via the internet service provider (ISP). The more complex kind has a built-in modem to connect to the internet without needing another device. This converged device saves desk space and simplifies wiring by replacing two electronic packages with one. It has a wired connection to the ISP, at least one LAN port (usually four), and an antenna for wireless users. The wireless gateway may support 802.11b and 802.11g with speeds up to 54 Mbit/s, 802.11n with speeds up to 300 Mbit/s, and more recently 802.11ac with speeds up to 1200 Mbit/s. The LAN interface may support 100 Mbit/s (Fast) or 1000 Mbit/s (Gigabit) Ethernet.
All wireless gateways have the ability to protect the wireless network using security encryption methods such as WEP, WPA, and WPA2; WPA2 with WPS disabled is the most secure configuration. There are many wireless gateway brands with models offering different features and quality. They can differ in wireless range and speed, number of LAN ports, and extra functionality. Some available brands in the market are Motorola, Netgear, and Linksys. However, most internet providers offer a free wireless gateway with their services, thus limiting the user's choice. On the other hand, the device provided by the ISP has the advantage that it comes pre-configured and ready to be installed. Another advantage of using these devices is the ability of the company to troubleshoot and fix any problem via remote access, which is very convenient for most users.
See also
Wi-Fi
IEEE 802.11
Residential gateway
References
Networking hardware |
https://en.wikipedia.org/wiki/Gymnasium%20Jur%20Hronec | Gymnázium Jura Hronca (GJH) is a gymnasium (grammar school) located in Bratislava, Slovakia.
The school has a focus on the study of natural sciences, mathematics, and computer sciences. However, owing to its affiliation with the International Baccalaureate, an active bilingual (English–Slovak) programme, and the option to study several foreign languages such as French and German, the school also has a strong reputation for the study of foreign languages.
In the school year 2005/2006, GJH launched lower programs of IB Primary Years Programme and IB Middle Years Programme.
The school has recently been known as the "Spojená škola Gymnázium Jura Hronca a ZŠ Košická" (United school of the Gymnázium Jura Hronca and the Košická Primary School) after a merger with the primary school Základná škola a osemročné gymnázium Košická, which shares the same building.
History
The school was founded on January 9, 1959 as an 11-year secondary school. In the school year 1969/70 the school was granted the status of a gymnasium and named after the Slovak mathematician Jur Hronec.
International Baccalaureate
Spojená škola Novohradská has been an IB World School since June 1994. It offers the IB Primary Years Programme (since 2009), IB Middle Years Programme (since 2009) and IB Diploma Programme (since 1994). It's the only school in Slovakia to offer all 3 programmes (as of 2023).
Alumni
2024 - Adam Gergely
Student Activities
Bratislava Model United Nations
Students from Gymnazium Jura Hronca organize the BratMUN conference held in Bratislava.
The Jur Hronec Cup
The Jur Hronec Cup (súťaž o džbán Jura Hronca) is a competition between all classes of the school.
During the year, classes earn points in different activities (Sports day, various competitions), and at the end of the year the winner receives money, school-free days for a class trip, and the right to hold The Jur Hronec Cup for the next year.
Eschenbach
The Gymnazium Jura Hronca organises an annual school exchange with the Gymnasium Eschenbach in Bavaria, Germany; the 2011 trip was the 20th and included a special trip to Belgium. The exchange is organized by Dr. Jan Mayer on the Slovak side and by Dr. Hans Schmid on the German side.
References
External links
Homepage
GJH Evaluation Report 2010/2011
GJH 2010/2011 Leavers Higher Education Statistics
Official BratMUN Homepage
An article about the 2010 BratMUN in the SME daily.
Education in Bratislava
International Baccalaureate schools |
https://en.wikipedia.org/wiki/Normal%20mapping | In 3D computer graphics, normal mapping, or Dot3 bump mapping, is a texture mapping technique used for faking the lighting of bumps and dents – an implementation of bump mapping. It is used to add details without using more polygons. A common use of this technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model or height map.
Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal.
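A small Python sketch of the usual encoding convention, mapping components from [-1, 1] into the 0-255 range (the rounding shown is illustrative; exact conventions vary between tools):

def encode_normal(n):
    # Map a unit normal with components in [-1, 1] to an 8-bit RGB triple.
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

def decode_normal(rgb):
    # Recover the (approximate) normal from an 8-bit RGB triple.
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

# A normal pointing straight out of the surface encodes to the typical
# light blue/purple seen in tangent-space normal maps.
print(encode_normal((0.0, 0.0, 1.0)))    # (128, 128, 255)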
History
In 1978 Jim Blinn described how the normals of a surface could be perturbed to make geometrically flat faces have a detailed appearance.
The idea of taking geometric details from a high polygon model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996, where this approach was used for creating displacement maps over nurbs. In 1998, two papers were presented with key ideas for transferring details with normal maps from high to low polygon meshes: "Appearance Preserving Simplification", by Cohen et al. SIGGRAPH 1998, and "A general method for preserving attribute values on simplified meshes" by Cignoni et al. IEEE Visualization '98. The former introduced the idea of storing surface normals directly in a texture, rather than displacements, though it required the low-detail model to be generated by a particular constrained simplification algorithm. The latter presented a simpler approach that decouples the high and low polygonal mesh and allows the recreation of any attributes of the high-detail model (color, texture coordinates, displacements, etc.) in a way that is not dependent on how the low-detail model was created. The combination of storing normals in a texture, with the more general creation process is still used by most currently available tools.
Spaces
The orientation of coordinate axes differs depending on the space in which the normal map was encoded.
A straightforward implementation encodes normals in object-space, so that red, green, and blue components correspond directly with X, Y, and Z coordinates.
In object-space the coordinate system is constant.
However object-space normal maps cannot be easily reused on multiple models, as the orientation of the surfaces differ.
Since color texture maps can be reused freely, and normal maps tend to correspond with a particular texture map,
it is desirable for artists that normal maps have the same property.
Normal map reuse is made possible by encoding maps in tangent space.
The tangent space is a vector space which is tangent to the model's surface.
The coordinate system varies smoothly (based on the derivatives of position with respect to texture coordinates) across the surface.
Tangent space normal maps can be identified by their dominant purple color, corresponding to a vector facing directly out from the surface.
See below.
Calculating tangent space
|
https://en.wikipedia.org/wiki/Common%20Public%20License | In computing, the Common Public License (CPL) is a free software / open-source software license published by IBM. The Free Software Foundation and Open Source Initiative have approved the license terms of the CPL.
Definition
The CPL has the stated aims of supporting and encouraging collaborative open-source development while still retaining the ability to use the CPL'd content with software licensed under other licenses, including many proprietary licenses. The Eclipse Public License (EPL) consists of a slightly modified version of the CPL.
The CPL has some terms that resemble those of the GNU General Public License (GPL), but some key differences exist. A similarity relates to distribution of a modified computer program: under either license (CPL or GPL), one must make the source code of a modified program available to others.
CPL, like the GNU Lesser General Public License, allows non-CPL-licensed software to link to a library under CPL without requiring the linked source code to be made available to the licensee.
CPL lacks compatibility with both versions of the GPL because it has a "choice of law" section in section 7, which restricts legal disputes to a certain court. Another source of incompatibility is the differing copyleft requirements.
To reduce the number of open source licenses, IBM and Eclipse Foundation agreed upon using solely the Eclipse Public License in the future. Open Source Initiative therefore lists the Common Public License as deprecated and superseded by EPL.
Projects using the Common Public License
Microsoft has released its Windows Installer XML (WiX) developer tool, Windows Template Library (WTL) and the FlexWiki engine under the CPL as SourceForge projects.
Some projects of the COIN-OR Foundation use the CPL.
See also
Software license
Software using the CPL (category)
References
External links
Open Source Initiative The CPL License
The CPL License from IBM
The COIN-OR web page
IBM
Free and open-source software licenses
Copyleft software licenses
de:Eclipse Public License |
https://en.wikipedia.org/wiki/Sinclair%20BASIC | Sinclair BASIC is a dialect of the programming language BASIC used in the 8-bit home computers from Sinclair Research, Timex Sinclair and Amstrad. The Sinclair BASIC interpreter was written by Nine Tiles Networks Ltd.
Designed to run in only 1 kB of RAM, the system made a number of design decisions to lower memory usage. This led to one of Sinclair BASIC's most notable features: keywords were entered using single keystrokes. Each of the possible keywords was mapped to a key on the keyboard; when pressed, the corresponding token would be placed into memory while the entire keyword was printed out on-screen. This made code entry easier whilst simplifying the parser.
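A toy Python model of that entry scheme is sketched below; the key assignments and token byte values are invented for the illustration rather than taken from the real ZX token tables.

# Toy model of single-keystroke keyword entry: one key press stores a single
# token byte in program memory while the full keyword is echoed on screen.
# Key assignments and token values are invented for this illustration.
KEY_TO_KEYWORD = {"P": "PRINT", "G": "GOTO", "L": "LET"}
KEYWORD_TO_TOKEN = {"PRINT": 0xF5, "GOTO": 0xEC, "LET": 0xF1}

program_memory = bytearray()
screen_lines = []

def press_keyword_key(key):
    keyword = KEY_TO_KEYWORD[key]
    program_memory.append(KEYWORD_TO_TOKEN[keyword])   # one byte stored, not the spelled-out word
    screen_lines.append(keyword)                        # ...but the whole keyword appears on screen

press_keyword_key("P")
print(screen_lines)             # ['PRINT']
print(list(program_memory))     # [245]  (a single token byte)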
The original ZX80 version supported only integer mathematics, which partially made up for some of the memory-saving design decisions that had a negative impact on performance. When the system was ported to the ZX81 in 1981, a full floating point implementation was added. This version was very slow, among the slowest BASICs on the market at the time, but given the limited capabilities of the machine this was not a serious concern.
Performance became a more serious issue with the release of the Sinclair Spectrum in 1982, whose BASIC ran too slowly to make full use of the machine's new features. This led to an entirely new BASIC for the following Sinclair QL, as well as a number of third-party BASICs for the Spectrum and its various clones. The original version continued to be modified and ported in the post-Sinclair era.
History
Clive Sinclair initially met with John Grant, the owner of Nine Tiles, in April 1979 to discuss a BASIC for Sinclair's new computer concept. Sinclair was inspired to make a new machine after watching his son enjoy their TRS-80, but that machine's £500 price tag appeared to be a serious limit on its popularity. He wanted a new kit that would expand on their previous MK14 and feature a built-in BASIC at the target price of £79.95. To meet this price point, the machine would ship with only 1 kB of RAM and 4 kB of ROM. Grant suggested using the Forth language instead, but the budget precluded this. Grant wrote the BASIC interpreter between June and July 1979, but the code initially came in at 5 kB and he spent the next month trimming it down. It was initially an incomplete implementation of the 1978 American National Standards Institute (ANSI) Minimal BASIC standard with integer arithmetic only, termed 4K BASIC.
Even before the ZX80 was introduced in February 1980, the constant downward price-pressure in the industry was allowing the already inexpensive design to be further reduced in complexity and cost. In particular, many of the separate circuits in the ZX80 were re-implemented in a single uncommitted logic array from Ferranti, which allowed the price to be reduced to only £49.95 while increasing the size of the ROM to 8 kB. This work was assigned to Steve Vickers, who joined Nine Tiles in January 1980. Whilst Grant worked on the code interfacing with hardware, Vickers used the large |
https://en.wikipedia.org/wiki/Legend%20Entertainment | Legend Entertainment Company was an American developer and publisher of computer games, best known for creating adventure titles throughout the 1990s. The company was founded by Bob Bates and Mike Verdu, both veterans of the interactive fiction studio Infocom that shut down in 1989. Legend's first two games, Spellcasting 101: Sorcerers Get All the Girls and Timequest, had strong sales that sustained the company. Legend also profited from negotiating licenses to popular book series, allowing them to create notable game adaptations such as Companions of Xanth (based on Demons Don't Dream by Piers Anthony) and Gateway (based on the eponymous novel by Frederik Pohl). Legend also earned a reputation for comedic adventures, with numerous awards for Eric the Unready in 1993. As the technology of the game industry changed, Legend continued to expand its game engine to take advantage of higher graphical fidelity, mouse support, and the increased media storage of the compact disc.
These industry changes led to difficult competition by the mid-1990s, especially in the adventure game genre. Legend secured an investment from book publishing company Random House and developed additional book adaptations, such as Death Gate and Shannara, as well as original titles such as Mission Critical. However, the company's expenses for graphics were rising without a similar increase in sales, causing Random House to exit the game industry. Legend found game publishers to take over marketing and distribution so it could focus its efforts exclusively on development. While the studio's adventure titles suffered in the changing marketplace, working with game publishers allowed Legend to experiment with more action-oriented titles such as Star Control 3. In its final years, Legend fully pivoted to first-person shooters thanks to a growing relationship with Unreal developer Tim Sweeney and an acquisition by publisher GT Interactive. The studio released the 1999 game adaptation of The Wheel of Time book series, designed using the Unreal Engine as a first-person action game. However, Legend's sales continued to dwindle, followed by the difficult development and commercial failure of Unreal II: The Awakening in 2003. The studio was shut down in January 2004, with staff moving to other game companies.
History
Origins
Legend Entertainment was founded in 1989 by Bob Bates and Mike Verdu. The duo met in the 1980s working at Infocom, a critically acclaimed developer of adventure games and interactive fiction. After the commercial success of the Zork series, Activision acquired Infocom in 1986. They closed the studio three years later due to rising costs, falling profits, and technical issues with MS-DOS. Bates decided to seek investment for a new game company, hoping to succeed where Infocom had declined. He told investors that the adventure genre was still viable, but it needed to evolve beyond just text. After securing funding from defense contractor American Systems Corporation |
https://en.wikipedia.org/wiki/Straddling%20checkerboard | A straddling checkerboard is a device for converting an alphanumeric plaintext into digits whilst simultaneously achieving fractionation (a simple form of information diffusion) and data compression relative to other schemes using digits. It also is known as a monôme-binôme cipher.
History
In 1555, Pope Paul IV created the office of Cipher Secretary to the Pontiff. In the late 1580s, this position was held by members of the Argenti family, most notably Giovanni Batista and his nephew, Matteo. Matteo is credited for designing what is now called the straddling checkerboard cipher.
In more modern times it was used by communist forces during the Spanish Civil War in order to protect their radio and written transmissions. It was later used as the basis for the message-to-digits step in the VIC cipher.
Mechanics
Setup
A straddling checkerboard is set up something like this:
The header row is populated with the ten digits, 0-9. They can be presented in order, as in the above table, or scrambled (based on a secret key value) for additional security. The second row is typically set up with eight high-frequency letters (mnemonics for the English language include 'ESTONIA-R', 'A SIN TO ER(R)', and 'AT ONE SIR'), leaving two blank spots; this row has no row coordinate in the first column. The remaining two rows are each labeled with one of the two digits that were not assigned a letter in the second row, and then filled out with the rest of the alphabet, plus the two symbols '.' and '/'.
The period '.' is used as a full stop or decimal separator,
The slash '/' is used as a numeric escape character (indicating that a numeral follows).
Similar to the ordering of the digits in the header row, the alphabet characters can be presented in order (as it is here), or scrambled based on a secret keyword/phrase.
Enciphering
Letter-Encipherment: To encipher a letter in the second row, it is simply replaced by the single digit labeling its column. Characters in the third and fourth rows are replaced by a two-digit number representing their row and column numbers (with the row coordinate written first, i.e. B=20).
Digit-Encipherment: To encipher a digit, there are a few possible methods (which must be known/agreed beforehand):
Single Digit Escape: Encode the numerical escape character (i.e. the slash '/') as per any letter, then write the required digit 'in-clear'. This means a digit is encrypted by 3 ciphertext characters; 2 for the escape character, 1 for the digit itself. In this scheme, each digit requires an escape character encoded before it.
Double-Digit Scheme: If the escape character is encoded by two different digits (e.g. '26' in the example above), then multiple digits can be encoded by writing each out twice. To 'escape' back to text the escape character is used. In this way a stream of digits can be encoded with only one escape character. This method cannot be used if the escape character is itself encoded by a double digit combination.
Triple-Digit Scheme: As |
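To make the mechanics concrete, here is a small Python sketch of letter encipherment and the single-digit escape scheme. The board layout below (blanks under 2 and 6, remaining characters in alphabetical order) is an illustrative assumption rather than the article's exact table, so the resulting codes, including the code for the escape character '/', may differ from any figures quoted above.
# Illustrative straddling checkerboard; the exact layout would be agreed as part of the key.
HIGH_FREQ = {'E': '0', 'T': '1', 'A': '3', 'O': '4', 'N': '5', 'R': '7', 'I': '8', 'S': '9'}
ROW_LABELS = ('2', '6')          # the two header digits left blank in the second row
REST = "BCDFGHJKLMPQUVWXYZ./"    # remaining letters plus '.' and the numeric escape '/'

TABLE = dict(HIGH_FREQ)
for i, ch in enumerate(REST):
    TABLE[ch] = ROW_LABELS[i // 10] + str(i % 10)   # row label followed by column digit

def encipher(text):
    out = []
    for ch in text.upper():
        if ch.isdigit():                  # single-digit escape scheme
            out.append(TABLE['/'] + ch)   # escape code, then the digit written in clear
        elif ch in TABLE:
            out.append(TABLE[ch])
    return ''.join(out)

print(encipher("ATTACK AT 5"))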
https://en.wikipedia.org/wiki/DRI | DRI or D.R.I. may stand for:
Business
DRI, the NYSE stock symbol for Darden Restaurants, Inc.
Digital Research Inc, first large software company in the business for microcomputers
Data Resources Inc., a former distributor of economic data, now part of IHS Global Insight
Diamond Resorts International Inc. a vacation ownership company
Computing
Declarative referential integrity, in databases
Direct Rendering Infrastructure, a software interface
Digital Research Infrastructure, tools and services underpinning and supporting scientific research
Government
United States District Court for the District of Rhode Island
Directorate of Revenue Intelligence, Intelligence Agency of India
Healthcare and bioscience
Dietary Reference Intake, dietary recommendation
Doncaster Royal Infirmary, South Yorkshire, UK
Dopamine reuptake inhibitor, class of drugs
Former Dundee Royal Infirmary, Scotland, UK
Other organisations
Desert Research Institute, environmental research
Direct Relief, health NGO
Directly Responsible Individual, Project specific title in Apple Computer's Corporate ecosystem.
Disaster Recovery Institute
Other
Direct reduced iron, used in steel production
D.R.I. (band), Dirty Rotten Imbeciles, an American crossover thrash band |
https://en.wikipedia.org/wiki/Dominator%20%28graph%20theory%29 | In computer science, a node d of a control-flow graph dominates a node n if every path from the entry node to n must go through d. Notationally, this is written as d dom n (or sometimes d ≫ n). By definition, every node dominates itself.
There are a number of related concepts:
A node d strictly dominates a node n if d dominates n and d does not equal n.
The immediate dominator or idom of a node n is the unique node that strictly dominates n but does not strictly dominate any other node that strictly dominates n. Every node, except the entry node, has an immediate dominator.
The dominance frontier of a node d is the set of all nodes n such that d dominates an immediate predecessor of n, but d does not strictly dominate n. It is the set of nodes where d's dominance stops.
A dominator tree is a tree where each node's children are those nodes it immediately dominates. The start node is the root of the tree.
History
Dominance was first introduced by Reese T. Prosser in a 1959 paper on analysis of flow diagrams. Prosser did not present an algorithm for computing dominance, which had to wait ten years for Edward S. Lowry and C. W. Medlock. Ron Cytron et al. rekindled interest in dominance in 1989 when they applied it to the problem of efficiently computing the placement of φ functions, which are used in static single assignment form.
Applications
Dominators, and dominance frontiers particularly, have applications in compilers for computing static single assignment form. A number of compiler optimizations can also benefit from dominators. The flow graph in this case comprises basic blocks.
Automatic parallelization benefits from postdominance frontiers. This is an efficient method of computing control dependence, which is critical to the analysis.
Memory usage analysis can benefit from the dominator tree to easily find leaks and identify high memory usage.
In hardware systems, dominators are used for computing signal probabilities for test generation, estimating switching activities for power and noise analysis, and selecting cut points in equivalence checking.
In software systems, they are used for reducing the size of the test set in structural testing techniques such as statement and branch coverage.
Algorithms
Let n0 be the source node of the control-flow graph. The dominators of a node n are given by the maximal solution to the following data-flow equations:
Dom(n0) = {n0}
Dom(n) = {n} ∪ (⋂ Dom(p) over all predecessors p of n), for n ≠ n0
The dominator of the start node is the start node itself. The set of dominators for any other node n is the intersection of the sets of dominators of all predecessors p of n. The node n is also in its own set of dominators.
An algorithm for the direct solution is:
// dominator of the start node is the start itself
Dom(n0) = {n0}
// for all other nodes, set all nodes as the dominators
for each n in N - {n0}
Dom(n) = N;
// iteratively eliminate nodes that are not dominators
while changes in any Dom(n)
for each n in N - {n0}:
Dom(n) = {n} union with intersection over Dom(p) for all p in pred(n) |
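The pseudocode above can be turned into a short runnable sketch; the Python below assumes the control-flow graph is given as a set of nodes plus a predecessor map, and the small example graph at the end is invented purely for illustration.
def dominators(nodes, preds, n0):
    # dominator of the start node is the start node itself
    dom = {n0: {n0}}
    # for all other nodes, start from the full node set
    for n in nodes - {n0}:
        dom[n] = set(nodes)
    changed = True
    while changed:                        # iterate until the maximal fixed point is reached
        changed = False
        for n in nodes - {n0}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# Assumed example CFG: 1 -> 2, 2 -> 3, 2 -> 4, 3 -> 4
preds = {1: [], 2: [1], 3: [2], 4: [2, 3]}
print(dominators({1, 2, 3, 4}, preds, 1))   # dominator sets: 1:{1}, 2:{1,2}, 3:{1,2,3}, 4:{1,2,4}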
https://en.wikipedia.org/wiki/World%20Cyber%20Games | The World Cyber Games (WCG) is an international esports competition with multi-game titles, also known as the Esports Olympics, in which hundreds of esports athletes from around the world participate in a variety of competitions. WCG events attempt to emulate a traditional sporting tournament, such as the Olympic Games; events included an official opening ceremony, and players from various countries competing for gold, silver, and bronze medals. WCG events are held every year in a different host city around the world. The WCG 2020 competition received nearly views worldwide.
General
World Cyber Games is one of the largest global esports tournaments, with divisions in various countries. The World Cyber Games, created by International Cyber Marketing CEO Yooseop Oh and backed financially by Samsung, was considered the e-sports Olympics; events included an official opening ceremony, and players from various countries competing for gold, silver and bronze medals. The organization itself had an official mascot, and used an Olympic Games-inspired logo. Organizations from each participating country conducted preliminary events at a regional level, before conducting national finals to determine the players best suited to represent them in the main World Cyber Games tournament event. All events had areas for spectators, but the tournament could also be viewed over internet video streams.
Besides providing a platform for tournament gaming, the World Cyber Games was used as a marketing tool; sponsors, such as Samsung, using the space around the venue to set up product demonstrations and stalls. In addition, advertisers saw the event as a good means to reach young male audiences, who may not be exposed to traditional advertising streams via television.
History
In 2000, the World Cyber Games was formed, and an event was held titled "The World Cyber Game Challenge", which began with an opening ceremony on 7 October. The event was sponsored by the Republic of Korea's Ministry of Culture and Tourism, Ministry of Information and Communications, and Samsung. It brought together teams from 17 countries to compete against each other in PC games including Quake III Arena, FIFA 2000, Age of Empires II, and StarCraft: Brood War. The tournament ended on 15 October 2000. The competition initially had 174 competitors from 17 different countries with a total prize purse of $20,000.
In 2001, the World Cyber Games held their first main event, hosted in Seoul, Korea, with a prize pool of US$300,000. National preliminaries were held between March and September, with the main tournament running from 5 to 9 December. The World Cyber Games quoted an attendance of 389,000 competitors in the preliminaries, with 430 players advancing to the final tournament; teams from 24 countries in total were involved in the tournament.
In 2002, the World Cyber Games held a larger event in Daejeon, Korea with a prize pool of US$1,300,000; 450,000 competitors took part in the preliminary ev |
https://en.wikipedia.org/wiki/WCG | WCG may refer to:
World Cyber Games, an international e-sports event
World Combat Games, a sports event
Wide Color Gamut
World Community Grid, for scientific research computing
Worldwide Church of God, renamed Grace Communion International in 2009
WCG (firm), an advertising agency
WCG (college), a group of UK colleges
WCG (Wide DC electric goods), a classification of locomotives of India |
https://en.wikipedia.org/wiki/Initialization | Initialization may refer to:
Booting, a process that starts computer operating systems
Initialism, an abbreviation formed using the initial letters of words or word parts
In computing, formatting a storage medium like a hard disk or memory. Also, making sure a device is available to the operating system.
Initialization (programming) |
https://en.wikipedia.org/wiki/Simulation%20%28computer%20science%29 | In theoretical computer science a simulation is a relation between state transition systems associating systems that behave in the same way in the sense that one system simulates the other.
Intuitively, a system simulates another system if it can match all of its moves.
The basic definition relates states within one transition system, but this is easily adapted to relate two separate transition systems by building a system consisting of the disjoint union of the corresponding components.
Formal definition
Given a labelled state transition system (S, Λ, →),
where S is a set of states, Λ is a set of labels and → is a set of labelled transitions (i.e., a subset of S × Λ × S),
a relation R ⊆ S × S is a simulation if and only if for every pair of states (p, q) in R and all labels α in Λ:
if p →α p', then there is q' such that q →α q' and (p', q') ∈ R.
Equivalently, in terms of relational composition: R⁻¹ ; →α ⊆ →α ; R⁻¹ for every label α.
Given two states p and q in S, p can be simulated by q, written p ≤ q, if and only if there is a simulation R such that (p, q) ∈ R. The relation ≤ is called the simulation preorder, and it is the union of all simulations: p ≤ q precisely when (p, q) ∈ R for some simulation R.
The set of simulations is closed under union; therefore, the simulation preorder is itself a simulation. Since it is the union of all simulations, it is the unique largest simulation. Simulations are also closed under reflexive and transitive closure; therefore, the largest simulation must be reflexive and transitive. From this follows that the largest simulation — the simulation preorder — is indeed a preorder relation. Note that there can be more than one relation which is both a simulation and a preorder; the term simulation preorder refers to the largest one of them (which is a superset of all the others).
Two states p and q are said to be similar if and only if p can be simulated by q and q can be simulated by p. Similarity is thus the maximal symmetric subset of the simulation preorder, which means it is reflexive, symmetric, and transitive; hence an equivalence relation. However, it is not necessarily a simulation, and precisely in those cases when it is not a simulation, it is strictly coarser than bisimilarity (meaning it is a superset of bisimilarity).
To witness, consider a similarity which is a simulation. Since it is symmetric, it is a bisimulation. It must then be a subset of bisimilarity, which is the union of all bisimulations. Yet it is easy to see that similarity is always a superset of bisimilarity. From this follows that if similarity is a simulation, it equals bisimilarity. And if it equals bisimilarity, it is naturally a simulation (since bisimilarity is a simulation). Therefore, similarity is a simulation if and only if it equals bisimilarity. If it does not, it must be its strict superset; hence a strictly coarser equivalence relation.
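As a concrete illustration of the definition, the following Python sketch checks whether a finite relation is a simulation over a small labelled transition system; the transition triples, state names, and candidate relation are assumptions made up for the example.
def is_simulation(trans, R):
    # trans: set of (p, a, p2) transition triples; R: set of candidate (p, q) pairs
    for (p, q) in R:
        for (x, a, p2) in trans:
            if x != p:
                continue
            # every move p -a-> p2 must be answered by some q -a-> q2 with (p2, q2) in R
            if not any(y == q and b == a and (p2, q2) in R for (y, b, q2) in trans):
                return False
    return True

# Assumed example: q can mirror every move of p, so R is a simulation witnessing p <= q
trans = {("p", "a", "p1"), ("q", "a", "q1"), ("q", "b", "q2")}
R = {("p", "q"), ("p1", "q1")}
print(is_simulation(trans, R))   # True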
Similarity of separate transition systems
When comparing two different transition systems (S', Λ', →') and (S", Λ", →"), the basic notions of simulation and similarity can be used by forming the disjoint compositi |
https://en.wikipedia.org/wiki/Birmingham%20Canal%20Navigations | Birmingham Canal Navigations (BCN) is a network of canals connecting Birmingham, Wolverhampton, and the eastern part of the Black Country. The BCN is connected to the rest of the English canal system at several junctions. It was owned and operated by the Birmingham Canal Navigation Company from 1767 to 1948.
At its working peak, the BCN contained about 160 miles (257 km) of canals; today just over 100 miles (160 km) are navigable, and the majority of traffic is from tourist and residential narrowboats.
History
The earliest mention of the Birmingham Canal Navigation appears in Aris's Birmingham Gazette on 11 April 1768. Here it was reported that on 25 March 1768, the first general assembly of the Company of Proprietors of the Birmingham Canal Navigation was held at the Swann Inn, Birmingham, to raise funds to submit for an Act of Parliament. The first canal to be built in the area was the Birmingham Canal, authorized by the Birmingham Canal Navigation Act 1768 and built from 1768 to 1772 under the supervision of James Brindley from the then edge of Birmingham, with termini at Newhall Wharf (since built over) and Paradise Wharf (also known as Old Wharf) near to Gas Street Basin to meet the Staffordshire and Worcestershire Canal at Aldersley (north of Wolverhampton). It opened for business on 14 September 1772.
In 1769 an Act was obtained to construct the canal through a detached portion of the county of Shropshire, near Oldbury, and it included powers to make reservoirs anywhere within 3 miles between Smethwick and Oldbury.
The Birmingham and Fazeley Canal, from Birmingham to Tamworth, followed in 1784 with the Birmingham Canal Company merging with the Birmingham and Fazeley Canal Company immediately, to form what was originally called the Birmingham and Birmingham and Fazeley Canal Company. This cumbersome name was short-lived, and the combined company became incorporated as the Birmingham Canal Navigations Company from 1794, as the network was expanded. The Birmingham Canal Navigation Act 1794 authorized the extension from Broadwater to Walsall, and the short cut between Bloomfield and Deepfield, where the Coseley Tunnel was constructed, which with a length of , avoided a detour around Tipton Hill of .
Between 1825 and 1829 the canal was improved by the cutting down by of the summit at Smethwick, which occupied two and a half years, and cost £560,000 (), and by cutting off bends and erecting steam engines which reduced the cost of haulage by 4d. per ton.
Between 1825 and 1837 the navigation was improved between Spon Lane, Deepfield and Wolverhampton, saving a distance of six miles, which reduced the toll on coal by 9d per ton. At the same time the Titford Canal was constructed at a cost upwards of £200,000 ().
The junction with the Warwick and Birmingham Canal was made under powers of an Act of 1815. These improvements were all consolidated under an Act of 1835.
From 1839 to 1843 the Tame Valley Canal was built, along with the Bentl |
https://en.wikipedia.org/wiki/Kenneth%20G.%20Wilson | Kenneth Geddes "Ken" Wilson (June 8, 1936 – June 15, 2013) was an American theoretical physicist and a pioneer in leveraging computers for studying particle physics. He was awarded the 1982 Nobel Prize in Physics for his work on phase transitions—illuminating the subtle essence of phenomena like melting ice and emerging magnetism. It was embodied in his fundamental work on the renormalization group.
Life
Wilson was born on June 8, 1936, in Waltham, Massachusetts, the oldest child of Emily Buckingham Wilson and E. Bright Wilson, a prominent chemist at Harvard University, who did important work on microwave emissions. His mother also trained as a physicist. He attended several schools, including Magdalen College School, Oxford, England, ending up at the George School in eastern Pennsylvania.
He went on to Harvard College at age 16, majoring in Mathematics and, on two occasions, in 1954 and 1956, ranked among the top five in the William Lowell Putnam Mathematical Competition.
He was also a star on the athletics track, representing Harvard in the Mile. During his summer holidays he worked at the Woods Hole Oceanographic Institution. He earned his PhD from Caltech in 1961, studying under Murray Gell-Mann. He did post-doc work at Harvard and CERN.
He joined Cornell University in 1963 in the Department of Physics as a junior faculty member, becoming a full professor in 1970. He also did research at SLAC during this period. In 1974, he became the James A. Weeks Professor of Physics at Cornell.
In 1982 he was awarded the Nobel Prize in Physics for his work on critical phenomena using the renormalization group.
He was a co-winner of the Wolf Prize in physics in 1980, together with Michael E. Fisher and Leo Kadanoff.
His other awards include the A.C. Eringen Medal, the Franklin Medal, the Boltzmann Medal, and the Dannie Heinemann Prize. He was elected a member of the National Academy of Science and a fellow of the American Academy of Arts and Science, both in 1975, and also was elected a member of the American Philosophical Society in 1984.
In 1985, he was appointed as Cornell's Director of the Center for Theory and Simulation in Science and Engineering (now known as the Cornell Theory Center), one of five national supercomputer centers created by the National Science Foundation. In 1988, Wilson joined the faculty at Ohio State University. Wilson moved to Gray, Maine in 1995. He continued his association with Ohio State University until he retired in 2008. Prior to his death, he was actively involved in research on physics education and was an early proponent of "active involvement" (i.e. Science by Inquiry) of K-12 students in science and math.
Some of his PhD students include H. R. Krishnamurthy, Roman Jackiw, Michael Peskin, Serge Rudaz, Paul Ginsparg, and Steven R. White.
Wilson's brother David was also a professor at Cornell in the department of Molecular Biology and Genetics until his death, and his wife since 1982, Alison Brown, is a |
https://en.wikipedia.org/wiki/Uccel | UCCEL Corp, previously called University Computing Company ("UCC"), was a data processing service bureau on the campus of Southern Methodist University in Dallas, Texas. It was founded by the Wyly brothers (Sam and Charles, Jr.) in 1963. The name change in the mid-1980s was brought about by Gregory Liemandt, placed as CEO by the majority stockholder, a Swiss citizen named Walter Haefner through Careal Holding AG of Zurich. By 1972, the company operated a middle-sized datacenter in Troy, Michigan, and a huge facility in Arlington, Texas, based on top-of-the-line IBM/370 processors.
Uccel's "big-ticket item" claim to fame was software called UCC-1/TMS (Tape Management System), an IBM mainframe product for managing the tape library in an OS/MVS operating system environment. In 1980, they developed their second "big hitter" and most profitable product, UCC-7 (job scheduler). The UCC-1, UCC-7, UCC-11 (batch job rerun/restart add-on) suite led the market for tape management and job scheduling.
In 1986, UCCEL Corporation purchased Cambridge Systems Group, Inc., which marketed for SKK, Inc. and their market-leading ACF2 mainframe security product. In June 1987, Uccel was unexpectedly bought out by its archrival, Computer Associates, which aggressively sold directly competing products CA-Dynam/TLMS (tape management), CA-Scheduler and batch job scheduling products originally from Capex Corporation (flagship products "Optimizer" and "TLMS") and Value Software, plus CA-Top Secret (security / mainframe discretionary access control).
References
https://web.archive.org/web/20070927231947/http://www.horatioalger.com/members/member_info.cfm?memberid=wyl70
http://www.samandcharleswyly.com/sam-wyly-business-industry-texas.htm
http://www.umich.edu/~msjrnl/backmsj/011397/wyly.html
Southern Methodist University
Service companies of the United States
CA Technologies |
https://en.wikipedia.org/wiki/DVR | DVR can refer to:
Dalnevostochnaya Respublika, a nominally independent state that existed from April 1920 to November 1922 in the easternmost part of the Russian Far East
Data validation and reconciliation
Derwent Valley Railway (disambiguation)
Devco Railway
Differential Voting Right, a kind of equity share
Digital video recorder
Discrete valuation ring
Discrete variable representation
Distance-vector routing
Direct volume rendering
Dynamic voltage restoration
DVR College of Engineering and Technology
Van Riebeeck Decoration (DVR), a South African military award |
https://en.wikipedia.org/wiki/Real%20computation | In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."
These hypothetical computing machines can be viewed as idealised analog computers which operate on real numbers, whereas digital computers are limited to computable numbers. They may be further subdivided into differential and algebraic models (digital computers, in this context, should be thought of as topological, at least insofar as their operation on computable reals is concerned). Depending on the model chosen, this may enable real computers to solve problems that are intractable on digital computers (for example, Hava Siegelmann's neural nets can have noncomputable real weights, making them able to compute nonrecursive languages), or vice versa. (Claude Shannon's idealized analog computer can only solve algebraic differential equations, while a digital computer can solve some transcendental equations as well. However, this comparison is not entirely fair, since in Claude Shannon's idealized analog computer computations are done immediately; i.e. computation is done in real time. Shannon's model can be adapted to cope with this problem.)
A canonical model of computation over the reals is Blum–Shub–Smale machine (BSS).
If real computation were physically realizable, one could use it to solve NP-complete problems, and even #P-complete problems, in polynomial time. Unlimited precision real numbers in the physical universe are prohibited by the holographic principle and the Bekenstein bound.
See also
Hypercomputation, for other such powerful machines.
References
Further reading
Theory of computation
Hypercomputation |
https://en.wikipedia.org/wiki/Bisimulation | In theoretical computer science a bisimulation is a binary relation between state transition systems, associating systems that behave in the same way in that one system simulates the other and vice versa.
Intuitively two systems are bisimilar if they, assuming we view them as playing a game according to some rules, match each other's moves. In this sense, each of the systems cannot be distinguished from the other by an observer.
Formal definition
Given a labeled state transition system (S, Λ, →),
where S is a set of states, Λ is a set of labels and → is a set of labelled transitions (i.e., a subset of S × Λ × S),
a bisimulation is a binary relation R ⊆ S × S,
such that both R and its converse R⁻¹ are simulations. From this follows that the symmetric closure of a bisimulation is a bisimulation, and that each symmetric simulation is a bisimulation. Thus some authors define bisimulation as a symmetric simulation.
Equivalently, R is a bisimulation if and only if for every pair of states (p, q) in R and all labels α in Λ:
if p →α p', then there is q' such that q →α q' and (p', q') ∈ R;
if q →α q', then there is p' such that p →α p' and (p', q') ∈ R.
Given two states p and q in S, p is bisimilar to q, written p ∼ q, if and only if there is a bisimulation R such that (p, q) ∈ R. This means that the bisimilarity relation ∼ is the union of all bisimulations: p ∼ q precisely when (p, q) ∈ R for some bisimulation R.
The set of bisimulations is closed under union; therefore, the bisimilarity relation is itself a bisimulation. Since it is the union of all bisimulations, it is the unique largest bisimulation. Bisimulations are also closed under reflexive, symmetric, and transitive closure; therefore, the largest bisimulation must be reflexive, symmetric, and transitive. From this follows that the largest bisimulation — bisimilarity — is an equivalence relation.
Alternative definitions
Relational definition
Bisimulation can be defined in terms of composition of relations as follows.
Given a labelled state transition system (S, Λ, →), a bisimulation relation is a binary relation R over S (i.e., R ⊆ S × S) such that, for every label α, both
R ; →α ⊆ →α ; R   and   R⁻¹ ; →α ⊆ →α ; R⁻¹
From the monotonicity and continuity of relation composition, it follows immediately that the set of bisimulations is closed under unions (joins in the poset of relations), and a simple algebraic calculation shows that the relation of bisimilarity—the join of all bisimulations—is an equivalence relation. This definition, and the associated treatment of bisimilarity, can be interpreted in any involutive quantale.
Fixpoint definition
Bisimilarity can also be defined in order-theoretical fashion, in terms of fixpoint theory, more precisely as the greatest fixed point of a certain function defined below.
Given a labelled state transition system (S, Λ, →), define F to be a function from binary relations over S to binary relations over S, as follows:
Let R be any binary relation over S. F(R) is defined to be the set of all pairs (p, q) in S × S such that, for every label α in Λ:
if p →α p', then there is q' such that q →α q' and (p', q') ∈ R; and if q →α q', then there is p' such that p →α p' and (p', q') ∈ R.
Bisimilarity is then defined to be the greatest fixed point of F.
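For a finite system, the greatest fixed point can be reached by starting from the full relation S × S and applying F until the relation stops shrinking. The Python sketch below follows that idea directly; the toy transition system at the end is an assumption for illustration, and no attempt is made at the efficiency of partition-refinement algorithms.
def F(trans, R, states, labels):
    # one application of the fixpoint function: keep pairs whose moves match into R
    def good(p, q):
        for a in labels:
            for (src, lab, dst) in trans:
                if src == p and lab == a:
                    # every a-move of p must be matched by an a-move of q, successors in R
                    if not any(s2 == q and l2 == a and (dst, d2) in R
                               for (s2, l2, d2) in trans):
                        return False
                if src == q and lab == a:
                    # every a-move of q must be matched by an a-move of p, successors in R
                    if not any(s2 == p and l2 == a and (d2, dst) in R
                               for (s2, l2, d2) in trans):
                        return False
        return True
    return {(p, q) for p in states for q in states if good(p, q)}

def bisimilarity(trans, states, labels):
    R = {(p, q) for p in states for q in states}     # start from the full relation
    while True:
        R2 = F(trans, R, states, labels)
        if R2 == R:
            return R                                  # greatest fixed point reached
        R = R2

# Assumed toy LTS: s and t each make a single 'a' step, so they are bisimilar
trans = {("s", "a", "s1"), ("t", "a", "t1")}
print(("s", "t") in bisimilarity(trans, {"s", "s1", "t", "t1"}, {"a"}))   # True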
Ehrenfeucht–Fraïssé game definition
Bisimulation can also be thought of i |
https://en.wikipedia.org/wiki/The%20Quill%20%28software%29 | The Quill is a program to write home computer adventure games. Written by Graeme Yeandle, it was published on the ZX Spectrum by Gilsoft in December 1983. Although available to the general public, it was used by several games companies to create best-selling titles; over 450 commercially published titles for the ZX Spectrum were written using The Quill.
Development
Yeandle has stated that the inspiration for The Quill was an article in the August 1980 issue of Practical Computing by Ken Reed in which Reed described the use of a database to produce an adventure game. After Yeandle wrote one database-driven adventure game, Timeline, for Gilsoft, he realised that a database editor was needed, and it was this software which became The Quill.
After the original ZX Spectrum version, The Quill was ported to the Amstrad CPC, Commodore 64, Atari 8-bit family, Apple II, and Oric. Versions were also published by CodeWriter, Inc. in North America (under the name AdventureWriter) and by Norace in Danish, Norwegian and Swedish. A French version was also made by CodeWriter. In 1985 Neil Fleming-Smith ported The Quill to the BBC Micro and Acorn Electron computers for Gilsoft. Although not credited in the article, Chris Hobson submitted a patch to Crash magazine which allowed the Spectrum version to save to a Microdrive; this was published in the September 1986 edition.
The Quill only allowed for the creation of text-only adventures, using a text interpretation process known as a verb–noun parser. Later an add-on called The Illustrator was made to let the user include graphics in the adventures. Further add-ons included The Press, The Patch, and The Expander, which enhanced the engine by adding text compression, split-screen text and graphics, and more efficient use of available RAM.
Critical reception
The Quill was generally very well received by the computer press at the time of its release. Micro Adventurer described it as "a product [...] to revolutionise the whole microcomputer scene" and rated it "10 out of 10", while Computer and Video Games described it as "worth every penny of the £14.95 price tag", and CRASH said it was "almost ludicrously underpriced for what it does and, more importantly, what it allows others to do". Sinclair User were initially somewhat less enthusiastic, saying "no package, even if it is brilliant in the production of games using the sausage machine technique, will provide an answer to properly machine-coded and original games", although later in 1984 they said that "The Quill produces programs on a par with handwritten commercial programs".
The Quill was awarded "Best Utility" in the CRASH Readers Awards 1984.
Sequel
Following the success of the original, a second generation Quill was produced with more capabilities and sold under the name Professional Adventure Writer for the ZX Spectrum and CP/M range.
References
External links
CASA: A Feather In His Cap - Graeme Yeandle and The Quill
The Digital Antiquarian: |
https://en.wikipedia.org/wiki/Semantics%20%28computer%20science%29 | In programming language theory, semantics is the rigorous mathematical study of the meaning of programming languages. Semantics assigns computational meaning to valid strings in a programming language syntax. It is closely related to, and often crosses over with, the semantics of mathematical proofs.
Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will be executed on a certain platform, hence creating a model of computation.
History
In 1967, Robert W. Floyd publishes the paper Assigning meanings to programs; his chief aim is "a rigorous standard for proofs about computer programs, including proofs of correctness, equivalence, and termination". Floyd further writes:
A semantic definition of a programming language, in our approach, is founded on a syntactic definition. It must specify which of the phrases in a syntactically correct program represent commands, and what conditions must be imposed on an interpretation in the neighborhood of each command.
In 1969, Tony Hoare publishes a paper on Hoare logic seeded by Floyd's ideas, now sometimes collectively called axiomatic semantics.
In the 1970s, the terms operational semantics and denotational semantics emerged.
Overview
The field of formal semantics encompasses all of the following:
The definition of semantic models
The relations between different semantic models
The relations between different approaches to meaning
The relation between computation and the underlying mathematical structures from fields such as logic, set theory, model theory, category theory, etc.
It has close links with other areas of computer science such as programming language design, type theory, compilers and interpreters, program verification and model checking.
Approaches
There are many approaches to formal semantics; these belong to three major classes:
Denotational semantics, whereby each phrase in the language is interpreted as a denotation, i.e. a conceptual meaning that can be thought of abstractly. Such denotations are often mathematical objects inhabiting a mathematical space, but it is not a requirement that they should be so. As a practical necessity, denotations are described using some form of mathematical notation, which can in turn be formalized as a denotational metalanguage. For example, denotational semantics of functional languages often translate the language into domain theory. Denotational semantic descriptions can also serve as compositional translations from a programming language into the denotational metalanguage and used as a basis for designing compilers.
Operational semantics, whereby the execution of the language is described directly (rather than by translation). Operational semantics loosely corresponds to interpretation, although again the "implementation language" of the interpreter is generall |
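As a toy illustration of the denotational idea (a sketch invented for exposition, not an example from the article), a tiny expression language can be given a compositional meaning function in a few lines of Python:
# Each phrase denotes a number, and the denotation of a compound phrase is
# built only from the denotations of its parts (compositionality).
def denote(expr):
    if isinstance(expr, int):
        return expr                       # a literal denotes itself
    op, left, right = expr
    if op == "+":
        return denote(left) + denote(right)
    if op == "*":
        return denote(left) * denote(right)
    raise ValueError("unknown operator")

print(denote(("+", 2, ("*", 3, 4))))      # the phrase 2 + 3 * 4 denotes 14
An operational semantics of the same language would instead describe step-by-step evaluation of expressions; the two descriptions can then be related and proved to agree.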
https://en.wikipedia.org/wiki/Computer%20club | Computer club or Computer Club may refer to:
Computer club (user group), a computer users' group
Computer Club (broadcast), a former German TV broadcast about computers
Computer Club (band), a music band by Ashley Jones |
https://en.wikipedia.org/wiki/Trivia%20%28disambiguation%29 | Trivia is information and data that are considered to be of little value.
Trivia may also refer to:
Trivia (album), a 1986 album by Utopia
Trivia (gastropod), a genus of small sea snails in the family Triviidae
Trivia, an epithet of the Roman goddesses Diana and Hecate, in their shared role as protector of the crossroads (trivia, “three ways”).
Trivia (poem), a poem by John Gay
"Trivia" (The Office), an episode of The Office
See also
Triviality (mathematics), technical simplicity of some aspects of proofs
Quantum triviality, a trait of classical theories that become trivial when viewed in quantum terms
Parkinson's law of triviality
Trivial (film), a 2007 film
Trivial name, a type of name in chemical nomenclature
Trivialism
Trivial Pursuit (disambiguation)
Trivium (disambiguation)
he:בוגר אוניברסיטה#מקור השם |
https://en.wikipedia.org/wiki/Leo%20Laporte | Leo Laporte (; born November 29, 1956) is the former host of The Tech Guy weekly radio show and a host on TWiT.tv, an Internet podcast network focusing on technology. He is also a former TechTV technology host (1998–2008) and a technology author. On November 19, 2022, actor, writer, musician, and comedian Steve Martin called into Laporte's radio show to announce Leo's retirement from The Tech Guy radio show. Laporte's last new radio show was December 18, 2022 with reruns for the remainder of the year. Rich DeMuro later appeared on the show to announce that he will be taking over in January with a weekly show, recorded on Saturdays, called "Rich On Tech."
Background
Laporte was born in New York City, the son of geologist Leo F. Laporte. He studied Chinese history at Yale University before dropping out in his junior year to pursue a career in radio broadcasting, where his early on-air names were Dave Allen and Dan Hayes. He began his association with computers with his first home computer, an Atari 400. By 1984 he owned a Macintosh and wrote a software review for Byte magazine.
Television and radio
Laporte has worked on technology-related broadcasting projects, including Dvorak on Computers in January 1991 (co-hosted with technology writer John C. Dvorak), and Laporte on Computers on KGO Radio and KSFO in San Francisco.
In 1997, Laporte was awarded a Northern California Emmy for his role as Dev Null, a motion capture character on the MSNBC show The Site.
In 1998, Laporte created and co-hosted The Screen Savers, and the original version of Call for Help on the cable and satellite network ZDTV (later TechTV).
Laporte hosted the daily television show The Lab with Leo Laporte, recorded in Vancouver, British Columbia, Canada. The program was formerly known as Call for Help when it was recorded in the US and Toronto, Ontario, Canada. The series aired on G4 Canada, on the HOW TO Channel in Australia, on several of Canada's Citytv affiliates, and on Google Video. On March 5, 2008, Laporte confirmed on net@nite that The Lab with Leo Laporte had been canceled by Rogers Communications. The HOW TO Channel did not air the remaining episodes after it was announced the show had been canceled.
He hosted, until December 2022, a weekend technology-oriented talk radio program show titled Leo Laporte: The Tech Guy. The show, started on KFI AM 640 (Los Angeles), was syndicated through Premiere Radio Networks. Laporte appeared on Friday mornings on KFI with Bill Handel, and previously on such shows as Showbiz Tonight, Live with Kelly, and World News Now.
He holds an amateur radio license, W6TWT.
Bibliography
Laporte has written technology-oriented books including:
He has published a yearly series of technology almanacs:
Leo Laporte's Technology Almanac
Poor Leo's Computer Almanac
Leo Laporte's 2006 Technology Almanac
Laporte announced in October 2006 that he would not renew his contract with Que Publishing, and had retired from publishing books.
In 2 |
https://en.wikipedia.org/wiki/Odra%20%28computer%29 | Odra was a line of computers manufactured in Wrocław, Poland. The name comes from the Odra river that flows through the city of Wrocław.
Overview
Production started in 1959–1960. Models 1001, 1002, 1003, 1013, 1103, and 1204 were of original Polish construction. Models 1304 and 1305 were functional counterparts of the ICL 1905 and 1906, thanks to a software agreement. The last model was the 1325, based on two ICL models.
The computers were built at the Elwro manufacturing plant, which was closed in 1993.
Odra 1002 was capable of only 100–400 operations per second.
In 1962, Witold Podgórski, an employee of Elwro, managed to create a computer game on a prototype of Odra 1003; it was an adaptation of a variant of Nim, as depicted in the film Last Year at Marienbad. The computer could play a perfect game and was guaranteed to win. The game was never distributed outside of the Elwro company, but its versions appeared elsewhere. It was probably the first Polish computer game in history.
The operating system used by the Odra 1204 is called SODA. It was designed to work on a small computer without magnetic storage and can run simultaneous loading and execution of programs.
An Odra 1204 computer was used by a team in Leningrad developing an ALGOL 68 compiler in 1976. The Odra 1204 ran the syntax analysis, while code generation ran on an IBM System/360.
Up until 30 April 2010 there was still one Odra 1305 working at the railway station in Wrocław Brochów. The system was shut down at 22:00 CEST and replaced with a contemporary computer system.
The Museum of the History of Computers and Information Technology (Muzeum Historii Komputerów i Informatyki) in Katowice, Poland started a project to recommission an Odra 1305 in 2017.
Literature
See also
History of computing in Poland
History of computer hardware in Eastern Bloc countries
References
Early computers
Science and technology in Poland |
https://en.wikipedia.org/wiki/History%20of%20computer%20hardware%20in%20Eastern%20Bloc%20countries | The history of computing hardware in the Eastern Bloc is somewhat different from that of the Western world. As a result of the CoCom embargo, computers could not be imported on a large scale from the Western Bloc.
Eastern Bloc manufacturers created copies of Western designs based on intelligence gathering and reverse engineering. This redevelopment led to some incompatibilities with International Electrotechnical Commission (IEC) and IEEE standards, such as spacing integrated circuit pins at 1/10 of a 25 mm length (colloquially a "metric inch") instead of 1/10 of a standard inch of 25.4 mm. This made Soviet chips unsellable on the world market outside the Comecon, and made test machinery more expensive.
History
By the end of the 1950s most COMECON countries had developed experimental computer designs, yet none of them had managed to create a stable computer industry.
In October 1962 the "Commission for Scientific Problems in Computing" (Комиссия Научные Вопросы Вычислительной Техники, КНВВТ) was founded in Warsaw and modelled after the International Federation for Information Processing.
Computer design and production began to be coordinated between the Comecon countries in 1964, when the Edinaya Sistema mainframe (Unified System, ES, also known as RIAD) was introduced. The project also included plans for the development of a joint Comecon computer network.
Each COMECON country was given a role in the development of the ES: Hungary was responsible for software development, while East Germany improved the design of disk storage devices. The ES-1040 was successfully exported to countries outside the Comecon, including India, Yugoslavia and China. Each country specialized in a model of the ES series: R-10 in the case of Hungary, R-20 in Bulgaria, R-20A in Czechoslovakia, R-30 in Poland and R-40 in East Germany.
Nairi-3, developed at the Armenian Institute for Computers, was the first third-generation computer in the Comecon area, using integrated circuits. Development on the Nairi system began in 1964, and it went into serial production in 1969.
In 1969 the Intergovernmental Commission for Computer Technology was founded to coordinate computer production. Other cooperation initiatives included the establishment of joint Comecon development facilities in Moscow and Kiev. The R-300 computer, released in 1969, demonstrated the technical and managerial skills of VEB Robotron, and established a leading role for East Germany in the joint development efforts. The relative success of Robotron was attributed to its greater organizational freedom, and the profit motive of securing export orders.
In 1970, Cuba produced its first digital computer, the CID-201.
By 1972 the Comecon countries had produced around 7,500 computers, compared to 120,000 in the rest of the world. The USSR, Czechoslovakia, East Germany, Poland, Bulgaria and Romania had all set up computer production and research institutes. Collaboration between Romania and the other countries was limited, due |
https://en.wikipedia.org/wiki/Leeching | Leeching may refer to:
Leeching (medical), also called Hirudotherapy, the use of leeches for bloodletting or medical therapy
Leeching (computing), using others' information or effort without providing anything in return
Image leeching, direct linking to an object, such as an image, on a remote site
See also
Leaching (disambiguation) |
https://en.wikipedia.org/wiki/List%20of%20computer-animated%20films | A computer-animated film is a feature film that has been computer-animated to appear three-dimensional. While traditional 2D animated films are now made primarily with the help of computers, the technique to render realistic 3D computer graphics (CG) or 3D computer-generated imagery (CGI), is unique to computers.
This is a list of theatrically released feature films that are entirely computer-animated.
Released films
The release date listed is the first public theatrical screening of the completed film. This means that the dates listed here may not reflect when the film came out in a particular country.
The country or countries listed reflects the places where the production companies for each title are based. This means that the countries listed for a film might not reflect the location where the film was produced or the countries where the film received a theatrical release. If a title is a multi-country production, the country listed first corresponds with the production company that had the most significant role in the film's creation.
Upcoming films
See also
History of animation
Timeline of CGI in film and television
List of animated feature films
List of stop motion films
Academy Award for Best Animated Feature
References
External links
List of animated features theatrically released in the United States (with US release dates)
Computer-animated
Computer-animated
Computer-animated films |
https://en.wikipedia.org/wiki/Disk%20swapping | Disk swapping refers to the practice of inserting and removing, or swapping, floppy disks in a floppy disk drive-based computer system. In the early days of personal computers, before hard drives became commonplace, most fully outfitted computer systems had two floppy drives (addressed as A: and B: on CP/M and MS-DOS—other systems had different conventions). Disk drives were expensive, however, and having two was seen as a luxury by many computer users who had to make do with a single drive.
The purpose of two floppy drives was so that the disk containing the application program could remain in the drive while the data disk containing the user's files could be accessed in the second drive. In order to use a function of the program not loaded into memory, the user would have to first remove the data disk, then insert the program disk. When the user then wanted to save their file, the reverse operation would have to be performed. On some less-than-user-friendly systems, this could result in data loss when, for example, files were accidentally saved onto the program disk.
Disk swapping was an infamous feature of early Macintosh 128K systems, which were extremely RAM-starved.
References
Floppy disk computer storage |
https://en.wikipedia.org/wiki/Intellectual%20giftedness | Intellectual giftedness is an intellectual ability significantly higher than average. It is a characteristic of children, variously defined, that motivates differences in school programming. It is thought to persist as a trait into adult life, with various consequences studied in longitudinal studies of giftedness over the last century. There is no generally agreed definition of giftedness for either children or adults, but most school placement decisions and most longitudinal studies over the course of individual lives have followed people with IQs in the top 2.5 percent of the population—that is, IQs above 130. Definitions of giftedness also vary across cultures.
The various definitions of intellectual giftedness include either general high ability or specific abilities. For example, by some definitions, an intellectually gifted person may have a striking talent for mathematics without equally strong language skills. In particular, the relationship between artistic ability or musical ability and the high academic ability usually associated with high IQ scores is still being explored, with some authors referring to all of those forms of high ability as "giftedness", while other authors distinguish "giftedness" from "talent". There is still much controversy and much research on the topic of how adult performance unfolds from trait differences in childhood, and what educational and other supports best help the development of adult giftedness.
Identification
Overview
The identification of giftedness first emerged after the development of IQ tests for school placement. It has since become an important issue for schools, as the instruction of gifted students often presents special challenges. During the twentieth century, gifted children were often classified via IQ tests; other identification procedures have been proposed but are only used in a minority of cases in most public schools in the English-speaking world. Developing useful identification procedures for students who could benefit from a more challenging school curriculum is an ongoing problem in school administration.
Because of the key role that gifted education programs in schools play in the identification of gifted individuals, both children and adults, it is worthwhile to examine how schools define the term "gifted".
Definitions
Since Lewis Terman in 1916, psychometricians and psychologists have sometimes equated giftedness with high IQ. Later researchers (e.g., Raymond Cattell, J. P. Guilford, and Louis Leon Thurstone) have argued that intellect cannot be expressed in such a unitary manner, and have suggested more multifaceted approaches to intelligence.
Research conducted in the 1980s and 1990s has provided data that supports notions of multiple components to intelligence. This is particularly evident in the reexamination of "giftedness" by Sternberg and Davidson in their collection of articles Conceptions of Giftedness (1986; second edition 2005). The many different concepti |
https://en.wikipedia.org/wiki/Foundation%20for%20Intelligent%20Physical%20Agents | The Foundation for Intelligent Physical Agents (FIPA) is a body for developing and setting computer software standards for heterogeneous and interacting agents and agent-based systems.
FIPA was founded as a Swiss not-for-profit organization in 1996 with the ambitious goal of defining a full set of standards for both implementing systems within which agents could execute (agent platforms) and specifying how agents themselves should communicate and interoperate in a standard way.
Within its lifetime the organization's membership included several academic institutions and a large number of companies including Hewlett-Packard, IBM, BT (formerly British Telecom), Sun Microsystems, Fujitsu and many more. A number of standards were proposed, however, despite several agent platforms adopting the "FIPA standard" for agent communication it never succeeded in gaining the commercial support which was originally envisaged. The Swiss organization was dissolved in 2005 and an IEEE standards committee was set up in its place.
The most widely adopted of the FIPA standards are the Agent Management and Agent Communication Language (FIPA-ACL) specifications.
The name FIPA is somewhat of a misnomer as the "physical agents" with which the body is concerned exist solely in software (and hence have no physical aspect).
Systems using FIPA standards
Gamma Platform. See FIPA interface
Fetch.AI
Jade
Jadex Agents (Java)
Java Intelligent Agent Componentware (JIAC) (Java)
The SPADE Multiagent and Organizations Platform (Python)
JACK Intelligent Agents (Java)
The April Agent Platform (AAP) and Language (April) (No longer actively developed)
Zeus Agent Building Toolkit (No longer actively developed)
The Fipa-OS agent platform (No longer actively developed)
AgentService (C#) (last update: 2009)
See also
Agent Communications Language
External links
FIPA Official web site
References
Information technology organisations based in Switzerland
Agent-based software
Standards organisations in Switzerland
1996 establishments in Switzerland |
https://en.wikipedia.org/wiki/Multidimensional%20scaling | Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset. MDS is used to translate "information about the pairwise 'distances' among a set of objects or individuals" into a configuration of points mapped into an abstract Cartesian space.
More technically, MDS refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix. It is a form of non-linear dimensionality reduction.
Given a distance matrix with the distances between each pair of objects in a set, and a chosen number of dimensions, N, an MDS algorithm places each object into N-dimensional space (a lower-dimensional representation) such that the between-object distances are preserved as well as possible. For N = 1, 2, and 3, the resulting points can be visualized on a scatter plot.
Core theoretical contributions to MDS were made by James O. Ramsay of McGill University, who is also regarded as the founder of functional data analysis.
Types
MDS algorithms fall into a taxonomy, depending on the meaning of the input matrix:
Classical multidimensional scaling
It is also known as Principal Coordinates Analysis (PCoA), Torgerson Scaling or Torgerson–Gower scaling. It takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes a loss function called strain, which is given by
Strain(x_1, ..., x_n) = ( Σ_{i,j} (b_{ij} − ⟨x_i, x_j⟩)² / Σ_{i,j} b_{ij}² )^(1/2)
where x_i denote vectors in N-dimensional space, ⟨x_i, x_j⟩ denotes the scalar product between x_i and x_j, and b_{ij} are the elements of the matrix B defined on step 2 of the following algorithm, which are computed from the distances.
Steps of a Classical MDS algorithm:
Classical MDS uses the fact that the coordinate matrix X can be derived by eigenvalue decomposition from B = XX'. The matrix B can be computed from the proximity matrix D by using double centering.
Set up the squared proximity matrix D^(2) = [d_{ij}²].
Apply double centering: B = −(1/2) J D^(2) J using the centering matrix J = I − (1/n) 11', where n is the number of objects, I is the n × n identity matrix, and 11' is an n × n matrix of all ones.
Determine the m largest eigenvalues λ_1, λ_2, ..., λ_m and corresponding eigenvectors e_1, e_2, ..., e_m of B (where m is the number of dimensions desired for the output).
Now, X = E_m Λ_m^(1/2), where E_m is the matrix of m eigenvectors and Λ_m is the diagonal matrix of m eigenvalues of B.
Classical MDS assumes metric distances. So this is not applicable for direct dissimilarity ratings.
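A compact numpy sketch of the four steps above; the helper name, and the toy distance matrix built from four points on a line, are assumptions made for illustration.
import numpy as np

def classical_mds(D, m=2):
    # D: n x n matrix of pairwise distances; m: number of output dimensions
    n = D.shape[0]
    D2 = D ** 2                                     # step 1: squared proximity matrix
    J = np.eye(n) - np.ones((n, n)) / n             # centering matrix J = I - (1/n) 11'
    B = -0.5 * J @ D2 @ J                           # step 2: double centering
    vals, vecs = np.linalg.eigh(B)                  # step 3: eigendecomposition of B
    idx = np.argsort(vals)[::-1][:m]                # keep the m largest eigenvalues
    L = np.diag(np.sqrt(np.maximum(vals[idx], 0)))  # guard against tiny negative values
    return vecs[:, idx] @ L                         # step 4: X = E_m Lambda_m^(1/2)

pts = np.array([[0.0], [1.0], [3.0], [6.0]])        # assumed 1-D test configuration
D = np.abs(pts - pts.T)                             # its pairwise distance matrix
print(classical_mds(D, m=1))                        # recovers the line up to shift/reflection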
Metric multidimensional scaling (mMDS)
It is a superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices of known distances with weights and so on. A useful loss function in this context is called stress, which is often minimized using a procedure called stress majorization. Metric MDS minimizes the cost function called “stress” which is a residual sum of squares:
Stress(x_1, ..., x_N) = ( Σ_{i≠j} (d_{ij} − ‖x_i − x_j‖)² )^(1/2)
Metric scaling uses a power transformation with a user-controlled exponent : and for distance. In classical scaling Non-metric scaling i |
https://en.wikipedia.org/wiki/List%20of%20graphical%20methods | This is a list of graphical methods with a mathematical basis.
Included are diagram techniques, chart techniques, plot techniques, and other forms of visualization.
There is also a list of computer graphics and descriptive geometry topics.
Simple displays
Area chart
Box plot
Dispersion fan diagram
Graph of a function
Logarithmic graph paper
Heatmap
Bar chart
Histogram
Line chart
Pie chart
Plotting
Scatterplot
Sparkline
Stemplot
Radar chart
Set theory
Venn diagram
Karnaugh diagram
Descriptive geometry
Isometric projection
Orthographic projection
Perspective (graphical)
Engineering drawing
Technical drawing
Graphical projection
Mohr's circle
Pantograph
Circuit diagram
Smith chart
Sankey diagram
Systems analysis
Binary decision diagram
Control-flow graph
Functional flow block diagram
Information flow diagram
IDEF
N2 chart
Sankey diagram
State diagram
System context diagram
Data-flow diagram
Cartography
Map projection
Orthographic projection (cartography)
Robinson projection
Stereographic projection
Dymaxion map
Topographic map
Craig retroazimuthal projection
Hammer retroazimuthal projection
Biological sciences
Cladogram
Punnett square
Systems Biology Graphical Notation
Physical sciences
Free body diagram
Greninger chart
Phase diagram
Wavenumber-frequency diagram
Bode plot
Nyquist plot
Dalitz plot
Feynman diagram
Carnot Plot
Business methods
Flowchart
Workflow
Gantt chart
Growth-share matrix (often called BCG chart)
Work breakdown structure
Control chart
Ishikawa diagram
Pareto chart (often used to prioritise outputs of an Ishikawa diagram)
Conceptual analysis
Mind mapping
Concept mapping
Conceptual graph
Entity-relationship diagram
Tag cloud, also known as word cloud
Statistics
Autocorrelation plot
Bar chart
Biplot
Box plot
Bullet graph
Chernoff faces
Control chart
Fan chart
Forest plot
Funnel plot
Galbraith plot
Histogram
Mosaic plot
Multidimensional scaling
np-chart
p-chart
Pie chart
Probability plot
Normal probability plot
Poincaré plot
Probability plot correlation coefficient plot
Q–Q plot
Rankit
Run chart
Seasonal subseries plot
Scatter plot
Skewplot
Ternary plot
Recurrence plot
Waterfall chart
Violin plot
Machine Learning
Hinton Diagram
Other
Ulam spiral
Nomogram
Fitness landscape
Weather map
Predominance diagram
One-line diagram
Autostereogram
Edgeworth box
Lineweaver-Burk diagram
Eadie-Hofstee diagram
Population pyramid
Parametric plot
Causality loop diagram
Ramachandran plot
V model
Sentence diagram
Tree structure
Treemapping
Airfield traffic pattern diagram
See also
List of information graphics software
Data and information visualization
External links
A periodic table of visualization methods.
Speaking of Graphics.
Graphical methods
Graphical methods |
https://en.wikipedia.org/wiki/Apache%20Nutch | Apache Nutch is a highly extensible and scalable open source web crawler software project.
Features
Nutch is coded entirely in the Java programming language, but data is written in language-independent formats. It has a highly modular architecture, allowing developers to create plug-ins for media-type parsing, data retrieval, querying and clustering.
The fetcher ("robot" or "web crawler") has been written from scratch specifically for this project.
History
Nutch originated with Doug Cutting, creator of both Lucene and Hadoop, and Mike Cafarella.
In June, 2003, a successful 100-million-page demonstration system was developed. To meet the multi-machine processing needs of the crawl and index tasks, the Nutch project has also implemented a MapReduce facility and a distributed file system. The two facilities have been spun out into their own subproject, called Hadoop.
In January, 2005, Nutch joined the Apache Incubator, from which it graduated to become a subproject of Lucene in June of that same year. Since April, 2010, Nutch has been considered an independent, top level project of the Apache Software Foundation.
In February 2014 the Common Crawl project adopted Nutch for its open, large-scale web crawl.
While it was once a goal for the Nutch project to release a global large-scale web search engine, that is no longer the case.
Release history
Scalability
IBM Research studied the performance of Nutch/Lucene as part of its Commercial Scale Out (CSO) project. Their findings were that a scale-out system, such as Nutch/Lucene, could achieve a performance level on a cluster of blades that was not achievable on any scale-up computer such as the POWER5.
The ClueWeb09 dataset (used in e.g. TREC) was gathered using Nutch, with an average speed of 755.31 documents per second.
Related projects
Hadoop – Java framework that supports distributed applications running on large clusters.
Search engines built with Nutch
Common Crawl – publicly available internet-wide crawls, started using Nutch in 2014.
Creative Commons Search – an implementation of Nutch, used in the period of 2004–2006.
DiscoverEd – Open educational resources search prototype developed by Creative Commons
Krugle uses Nutch to crawl web pages for code, archives and technically interesting content.
mozDex (inactive)
Wikia Search - launched 2008, closed down 2009
See also
Faceted search
Information extraction
Enterprise search
References
Bibliography
External links
Nutch
Internet search engines
Free search engine software
Java (programming language) libraries
Cross-platform free software
Free web crawlers |
https://en.wikipedia.org/wiki/List%20of%20political%20parties%20in%20Albania | Albania has a multi-party system with two major political parties and few smaller ones that are electorally successful. According to official data from the Central Election Commission, there were a total of 124 political parties listed in the party registry for the year 2014. Only 54 of these parties participated in the 2015 local elections.
Parties represented in the Parliament of Albania
This is a list of political parties with representation in the Albanian parliament following the general parliamentary elections of 2021.
Political parties in Albania (1921–present)
This is a list of noted political parties that have participated in Albania's elections from 1921 to present day.
See also
Politics of Albania
List of political parties by country
MJAFT!
Liberalism in Albania
References
Albania
Political parties
Political parties
Albania |
https://en.wikipedia.org/wiki/Scott%20Meyers | Scott Douglas Meyers (born April 9, 1959) is an American author and software consultant, specializing in the C++ computer programming language. He is known for his Effective C++ book series. During his career, he was a frequent speaker at conferences and trade shows.
Biography
He holds a Ph.D. in computer science from Brown University and an M.S. in computer science from Stanford University.
He conceived and, with Herb Sutter, Andrei Alexandrescu, Dan Saks, and Steve Dewhurst, co-organized and presented the boutique (limited-attendance) conference, The C++ Seminar, which took place three times in 2001-2002. He also conceived and, with Sutter and Alexandrescu, co-organized and presented another boutique conference, C++ and Beyond annually in 2010-2014.
Meyers has expressed opposition to asking programmers to solve design or programming problems during job interviews: "I hate anything that asks me to design on the spot. That's asking to demonstrate a skill rarely required on the job in a high-stress environment, where it is difficult for a candidate to accurately prove their abilities. I think it's fundamentally an unfair thing to request of a candidate."
In December 2015, Meyers announced his retirement from the world of C++.
Publications
1992. Effective C++: 50 Specific Ways to Improve Your Programs and Designs.
1995. More Effective C++: 35 New Ways to Improve Your Programs and Designs.
1998. Effective C++, Second Edition: 50 Specific Ways to Improve Your Programs and Designs.
2001. Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library.
2005. Effective C++, Third Edition: 55 Specific Ways to Improve Your Programs and Designs.
2010. Overview of The New C++ (C++11). Annotated training materials published by Artima Press. No ISBN.
2010. Effective C++ in an Embedded Environment. Annotated training materials published by Artima Press. No ISBN.
2014. Effective Modern C++: 42 Specific Ways to Improve Your Use of C++11 and C++14.
Awards and achievements
Meyers is known for his popular Effective C++ Software Development books.
In March 2009, Meyers was awarded the 2009 Dr. Dobb's Excellence in Programming Award.
References
External links
The Keyhole Problem Paper in PDF format
1959 births
Living people
Brown University alumni
Stanford University alumni
American computer programmers
C++ people |
https://en.wikipedia.org/wiki/Electric%20Sheep | Electric Sheep is a volunteer computing project for animating and evolving fractal flames, which are in turn distributed to the networked computers, which display them as a screensaver.
Process
The process is transparent to the casual user, who can simply install the software as a screensaver. Alternatively, the user may become more involved with the project, manually creating a fractal flame file for upload to the server where it is rendered into a video file of the animated fractal flame. As the screensaver entertains the user, their computer is also used for rendering commercial projects, sales of which keep the servers and developers running.
There are about 500,000 active users (monthly uniques).
According to Mitchell Whitelaw in his Metacreation: Art and Artificial Life, "On the screen they are luminous, twisting, elastic shapes, abstract tangles and loops of glowing filaments."
The name "Electric Sheep" is taken from the title of Philip K. Dick's novel Do Androids Dream of Electric Sheep?. The title mirrors the nature of the project: computers (androids) who have started running the screensaver begin rendering (dreaming) the fractal movies (sheep).
The sheep motif is carried over into other aspects of the project: the 100 or so sheep stored on the server at any time is referred to as 'the flock'; creating a new fractal by interpolating or combining the sheep's fractal code with that of another sheep is called mating/breeding; changes to the code are called mutations, etc.
The parameters that generate these movies (sheep) can be created in a few ways: they can be created and submitted by members of the electricsheep mailing list, members of the mailing list can download the parameters of existing sheep and tweak them, or sheep can be mated together automatically by the server or manually by server admins (nicknamed shepherds).
Users may vote on sheep that they like or dislike, and this voting is used for the genetic algorithm which generates new sheep. Each movie is a fractal flame with several of its parameters animated. The individual frames of which these movies consist are rendered using 'spare' processing cycles from idle computers on the distributed network of those running the screensaver application, and finished sheep (in the form of .avi files) are distributed to the network.
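The vote-driven breeding described above is, in outline, a standard genetic algorithm. The following is a generic and heavily simplified sketch of that idea, not Electric Sheep's actual code; in particular, representing a sheep's genome as a flat list of floats is an assumption made purely for illustration.

import random

def breed_generation(flock, n_children=10, mutation_rate=0.05):
    """flock: list of (genome, votes) pairs, genome being a list of floats."""
    # Fitness-proportional selection: well-liked sheep are more likely to become parents.
    weights = [max(votes, 0) + 1 for _, votes in flock]
    children = []
    for _ in range(n_children):
        (pa, _), (pb, _) = random.choices(flock, weights=weights, k=2)
        child = [random.choice(genes) for genes in zip(pa, pb)]        # "mating": mix parameters
        child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                 for g in child]                                       # occasional mutation
        children.append(child)
    return children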
The computer-generated sheep parameters and movies are distributed under the Creative Commons Attribution Noncommercial (CC-BY-NC) license; user-generated sheep parameters are under the Creative Commons Attribution (CC-BY) license. Both are automatically downloaded by the screen saver. The underlying copyright issues raised by generative, distributed digital art projects involve novel legal issues that the current copyright system can not understand or handle.
The screensaver was created and released as free software by Scott Draves in 1999 and continues to be developed by him and a team of about five engineers.
The 2.7.x series differs from the |
https://en.wikipedia.org/wiki/List%20of%20U.S.%20state%20and%20territory%20abbreviations | Several sets of codes and abbreviations are used to represent the political divisions of the United States for postal addresses, data processing, general abbreviations, and other purposes.
Table
This table includes abbreviations for three independent countries related to the United States through Compacts of Free Association, and other comparable postal abbreviations, including those now obsolete.
History
As early as October 1831, the United States Post Office recognized common abbreviations for states and territories. However, they accepted these abbreviations only because of their popularity, preferring that patrons spell names out in full to avoid confusion.
The traditional abbreviations for U.S. states and territories, widely used in mailing addresses prior to the introduction of two-letter U.S. postal abbreviations, are still commonly used for other purposes (such as legal citation), and are still recognized (though discouraged) by the Postal Service.
Modern two-letter abbreviated codes for the states and territories originated in October 1963, with the issuance of Publication 59: Abbreviations for Use with ZIP Code, three months after the Post Office introduced ZIP codes in July 1963. The purpose, rather than to standardize state abbreviations per se, was to make room in a line of no more than 23 characters for the city, the state, and the ZIP code.
Since 1963, only one state abbreviation has changed. Originally Nebraska was "NB"; but, in November 1969, the Post Office changed it to "NE" to avoid confusion with New Brunswick in Canada.
Prior to 1987, when the U.S. Secretary of Commerce approved the two-letter codes for use in government documents, the United States Government Printing Office (GPO) suggested its own set of abbreviations, with some states left unabbreviated. Today, the GPO supports the United States Postal Service standard.
Current use of traditional abbreviations
Legal citation manuals, such as The Bluebook and The ALWD Citation Manual, typically use the "traditional abbreviations" or variants thereof.
Codes for states and territories
ISO standard 3166
ANSI standard INCITS 38:2009
The American National Standards Institute (ANSI) established alphabetic and numeric codes for each state and outlying areas in ANSI standard INCITS 38:2009. ANSI standard INCITS 38:2009 replaced the Federal Information Processing Standard (FIPS) standards FIPS 5-2, FIPS 6-4, and FIPS 10-4. The ANSI alphabetic state code is the same as the USPS state code except for U.S. Minor Outlying Islands, which have an ANSI code "UM" but no USPS code—and U.S. Military Mail locations, which have USPS codes ("AA", "AE", "AP") but no ANSI code.
Postal codes
The United States Postal Service (USPS) has established a set of uppercase abbreviations to help process mail with optical character recognition and other automated equipment. There are also official USPS abbreviations for other parts of the address, such as street designators (street, avenue, road, |
https://en.wikipedia.org/wiki/List%20of%20interface%20bit%20rates | This is a list of interface bit rates, a measure of information transfer rates, or digital bandwidth capacity, at which digital interfaces in a computer or network can communicate over various kinds of buses and channels. The distinction can be arbitrary between a computer bus, often closer in space, and larger telecommunications networks. Many device interfaces or protocols (e.g., SATA, USB, SAS, PCIe) are used both inside many-device boxes, such as a PC, and one-device-boxes, such as a hard drive enclosure. Accordingly, this page lists both the internal ribbon and external communications cable standards together in one sortable table.
Factors limiting actual performance, criteria for real decisions
Most of the listed rates are theoretical maximum throughput measures; in practice, the actual effective throughput is almost inevitably lower in proportion to the load from other devices (network/bus contention), physical or temporal distances, and other overhead in data link layer protocols etc. The maximum goodput (for example, the file transfer rate) may be even lower due to higher layer protocol overhead and data packet retransmissions caused by line noise or interference such as crosstalk, or lost packets in congested intermediate network nodes. All protocols lose something, and the more robust ones that deal resiliently with very many failure situations tend to lose more maximum throughput to get higher total long term rates.
Device interfaces where one bus transfers data via another will be limited to the throughput of the slowest interface, at best. For instance, SATA revision 3.0 (6 Gbit/s) controllers on one PCI Express 2.0 (5 Gbit/s) channel will be limited to the 5 Gbit/s rate and have to employ more channels to get around this problem. Early implementations of new protocols very often have this kind of problem. The physical phenomena on which the device relies (such as spinning platters in a hard drive) will also impose limits; for instance, no spinning platter shipping in 2009 saturates SATA revision 2.0 (3 Gbit/s), so moving from this 3 Gbit/s interface to USB 3.0 at 4.8 Gbit/s for one spinning drive will result in no increase in realized transfer rate.
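In effect, the nominal ceiling of such a chain is simply the minimum of the rates along it; a small illustrative sketch (the rates are the nominal figures quoted above, in Gbit/s, and real-world throughput is lower still because of protocol overhead):

def effective_rate(chain):
    """Nominal ceiling of a chain of interfaces: the slowest link wins."""
    return min(rate for _, rate in chain)

# A SATA 3.0 device behind a single PCI Express 2.0 lane, as in the example above
chain = [("SATA 3.0", 6.0), ("PCIe 2.0 x1", 5.0)]
print(effective_rate(chain))   # 5.0: the PCIe 2.0 link caps the SATA 3.0 device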
Contention in a wireless or noisy spectrum, where the physical medium is entirely out of the control of those who specify the protocol, requires measures that also use up throughput. Wireless devices, BPL, and modems may produce a higher line rate or gross bit rate, due to error-correcting codes and other physical layer overhead. It is extremely common for throughput to be far less than half of theoretical maximum, though the more recent technologies (notably BPL) employ preemptive spectrum analysis to avoid this and so have much more potential to reach actual gigabit rates in practice than prior modems.
Another factor reducing throughput is deliberate policy decisions made by Internet service providers that are made for contractual, risk management, aggregation saturation, o |
https://en.wikipedia.org/wiki/CompUSA | CompUSA, Inc., was a retailer and reseller of personal computers, consumer electronics, technology products and computer services. Starting with one brick-and-mortar store in 1986 under the name Soft Warehouse, by the 1990s CompUSA had grown into a nationwide big box chain. At its peak, it operated at least 229 locations. Crushed by competition from other brick-and-mortar retailers, corporate oversight which was out of touch with evolving market realities, and a failure to make a strong transition to online sales, CompUSA began closing what they classified as "low performing" locations in 2006. By 2008 only 16 locations were left to be sold to Systemax. In 2012, remaining CompUSA and Circuit City stores were converted to TigerDirect stores, and later closed. As of 2023, the CompUSA online website redirects to an error page hosted on Wix.com.
History
Founded in 1984 as Soft Warehouse in Addison, Texas, a northern suburb of Dallas, Texas, by Errol Jacobson and Mike Henochowicz, the company began national expansion in 1988 with its first megastore opening in Atlanta, Georgia.
In 1991, the company's name was changed to CompUSA, and the company became publicly traded on the New York Stock Exchange. While under Nathan P. Morton's leadership, CompUSA grew to over $2 billion in revenues. Morton resigned in 1993.
Formerly headquartered in Miami, Florida, it was a wholly owned subsidiary of U.S. Commercial Corp S.A.B. de C.V. associated with Grupo Carso and indirectly controlled by a common shareholder, Carlos Slim.
On December 7, 2007, an affiliate of the restructuring and disposition firm Gordon Brothers Group, Specialty Equity, bought the company. Systemax purchased the CompUSA name, 16 retail locations and other company assets in January 2008.
Systemax operated CompUSA retail stores in California, Florida, Texas, Maryland, Georgia, Illinois, Delaware, New Jersey, North Carolina, Virginia, Washington and Puerto Rico, as well as CompUSA.com, a retail website and a dedicated catalog site for businesses.
On November 2, 2012, Systemax announced that it would drop both the CompUSA and Circuit City storefront names, consolidating their businesses under the name TigerDirect. On December 4, 2013, CompUSA intellectual properties were sold to JASALI 645 Realty LLC. On October 25, 2018, the intellectual properties of CompUSA were leased to DealCentral, which announced a relaunch of CompUSA.com that same day.
Timeline
1986 – Under the original name Soft Warehouse, the first of their super stores opens on Marsh Lane and Belt Line Road in Addison, Texas
1988 – Opened their second store in Atlanta, Georgia
1990 – Long running radio ad campaign featuring character "P.C. Modem" (played by actor Jack Riley) begins.
1993 – Began offering technical services at customer locations.
1996 – Launched retail sales on CompUSA.com.
1997 – Partners with Apple Computer in a "store within a store" concept for selling Macintosh computers. By January 19, 1998, 57 stores had |
https://en.wikipedia.org/wiki/Seymour%20I.%20Rubinstein | Seymour Ivan Rubinstein (born 1934) is an American businessman and software developer. With the founding of MicroPro International in 1978, he became a pioneer of personal computer software, publishing under it the extremely popular word processing package, WordStar. He grew up in Brooklyn, New York, and after a six-year stint in New Hampshire, later moved to California. Programs developed partially or entirely under his direction include WordStar, HelpDesk, Quattro Pro, and WebSleuth, among others. WordStar was the first truly successful program for the personal computer in a commercial sense and gave reasonably priced access to word processing for the general population for the first time.
Rubinstein began his involvement with microcomputers as director of marketing at IMSAI.
Early career
During his teenage years, Rubinstein was a television repairman. After his military service he became a technical writer and continued his undergraduate studies at night.
In 1964, he was given the opportunity to participate in the design and implementation of a classified system for identifying unknown vessels at sea by their sound fingerprint. Following his success with this and other related projects, he moved to New Hampshire to be put in charge of the computer software development for a line of IBM compatible programmable CRT terminals. As part of this assignment, Rubinstein went to San Francisco. Two years later, Rubinstein moved to the Bay Area and landed an assignment to implement a law office management system on a Varian Data Machines minicomputer. Following this, he formed the Systems Division of Prodata International Corporation, which was subsequently acquired by Varian Data Machines. As a consequence, Rubinstein temporarily moved to Zürich, Switzerland to utilize the technology he developed as part of a branch banking system for Credit Suisse.
Upon his return to California, he visited the Byte Shop of San Rafael and began his love affair with the microcomputer.
Business ventures
Rubinstein founded MicroPro International Corporation in June 1978. Subsequently, Rubinstein made an arrangement with Rob Barnaby, a programmer Rubinstein met at IMSAI. While at IMSAI, Barnaby wrote a screen editor which was called NED. Rubinstein had Barnaby totally rewrite NED into a new product, WordMaster. MicroPro was officially launched in September, 1978 using Barnaby’s first two programs, WordMaster and SuperSort. Feedback from the computer store dealers, who were MicroPro’s first customers, said they wanted a program with integrated printing.
Rubinstein developed the specifications for the new program including many innovations unavailable in commercial word processing at the time, such as showing page breaks, providing an integrated help system and a keyboard design specifically for touch typists. Barnaby did the initial foundation for MailMerge, which was finished by others.
The WordStar word processor was born in mid-1979. A year and a half later, several |
https://en.wikipedia.org/wiki/Weak%20consistency | The name weak consistency can be used in two senses. In the first sense, strict and more popular, weak consistency is one of the consistency models used in the domain of concurrent programming (e.g. in distributed shared memory, distributed transactions etc.).
A protocol is said to support weak consistency if:
All accesses to synchronization variables are seen by all processes (or nodes, processors) in the same order (sequentially) - these are synchronization operations. Accesses to critical sections are seen sequentially.
All other accesses may be seen in different order on different processes (or nodes, processors).
The set of both read and write operations in between different synchronization operations is the same in each process.
Therefore, there can be no access to a synchronization variable while there are pending write operations, and no new read/write operation can be started while the system is performing a synchronization operation.
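These rules can be illustrated with a toy model (purely illustrative Python, not a real memory system): ordinary writes are buffered per process and may remain invisible to other processes, while a synchronization operation drains every pending write so that all processes agree again.

class WeaklyConsistentMemory:
    """Toy model: ordinary writes are buffered; sync operations flush them."""
    def __init__(self, n_procs):
        self.shared = {}                              # the "real" memory
        self.pending = [[] for _ in range(n_procs)]   # per-process buffered writes

    def write(self, proc, var, value):
        self.pending[proc].append((var, value))       # ordinary access: not yet globally visible

    def read(self, proc, var):
        for v, val in reversed(self.pending[proc]):   # a process sees its own buffered writes
            if v == var:
                return val
        return self.shared.get(var)                   # otherwise a possibly stale shared value

    def sync(self, proc):
        for buf in self.pending:                      # no sync completes while writes are pending:
            for var, value in buf:                    # drain every buffer first
                self.shared[var] = value
            buf.clear()

mem = WeaklyConsistentMemory(2)
mem.write(0, "x", 1)
print(mem.read(1, "x"))   # None: process 1 need not see the write yet
mem.sync(0)
print(mem.read(1, "x"))   # 1: after the synchronization operation all processes agree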
In the second, more general, sense weak consistency may be applied to any consistency model weaker than sequential consistency.
A stricter condition is strong consistency, where parallel processes can observe only one consistent state.
References
The original paper on weak ordering: M. Dubois, C. Scheurich and F. A. Briggs, Memory Access Buffering in Multiprocessors, in Proceedings of 13th Annual International Symposium on Computer Architecture 14, 2 (June 1986), 434-442.
Sarita V. Adve, Mark D. Hill, Weak ordering - a new definition, in Proceedings of the 17th Annual International Symposium on Computer Architecture.
Consistency models |
https://en.wikipedia.org/wiki/Memory%20coherence | Memory coherence is an issue that affects the design of computer systems in which two or more processors or cores share a common area of memory.
In a uniprocessor system (where there exists only one core), there is only one processing element doing all the work and therefore only one processing element that can read or write from/to a given memory location. As a result, when a value is changed, all subsequent read operations of the corresponding memory location will see the updated value, even if it is cached.
Conversely, in multiprocessor (or multicore) systems, there are two or more processing elements working at the same time, and so it is possible that they simultaneously access the same memory location. Provided none of them changes the data in this location, they can share it indefinitely and cache it as they please. But as soon as one updates the location, the others might work on an out-of-date copy that, e.g., resides in their local cache. Consequently, some scheme is required to notify all the processing elements of changes to shared values; such a scheme is known as a memory coherence protocol, and if such a protocol is employed the system is said to have a coherent memory.
The exact nature and meaning of the memory coherency is determined by the consistency model that the coherence protocol implements. In order to write correct concurrent programs, programmers must be aware of the exact consistency model that is employed by their systems.
When implemented in hardware, the coherency protocol can, for example, be directory-based or snooping-based (also called sniffing). Specific protocols include the MSI protocol and its derivatives MESI, MOSI and MOESI.
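The following toy sketch illustrates the write-invalidate idea behind such snooping protocols; it is an illustration only and ignores the bus, timing, and most real protocol states and transitions (a real MSI-style protocol would, for instance, supply Modified data directly on a remote read miss instead of requiring an explicit write-back).

class Cache:
    def __init__(self, cache_id, memory, all_caches):
        self.id, self.memory, self.all = cache_id, memory, all_caches
        self.lines = {}                        # addr -> (state, value); states: 'M', 'S'

    def read(self, addr):
        if addr not in self.lines:             # miss: fetch from memory, cache as Shared
            self.lines[addr] = ('S', self.memory.get(addr, 0))
        return self.lines[addr][1]

    def write(self, addr, value):
        for other in self.all:                 # snoop: invalidate every other copy
            if other is not self:
                other.lines.pop(addr, None)
        self.lines[addr] = ('M', value)        # this cache now holds the line as Modified

    def evict(self, addr):
        state, value = self.lines.pop(addr, ('S', None))
        if state == 'M':                       # write back dirty data to memory
            self.memory[addr] = value

memory, caches = {}, []
for i in range(2):
    caches.append(Cache(i, memory, caches))
caches[0].write(0x10, 42)      # core 0 writes; any other copy of the line is invalidated
caches[0].evict(0x10)          # write-back makes the value visible in memory
print(caches[1].read(0x10))    # core 1 misses and reads the up-to-date value: 42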
See also
Cache coherence
Distributed shared memory
Race condition
References
Computer memory
Parallel computing |
https://en.wikipedia.org/wiki/Driverless | Driverless may refer to:
A computer able to configure itself, without explicit driver software, see Plug-'n'-Play.
A train without a human driver, see Automatic train operation.
A vehicle which navigates without human input, see Autonomous car.
Driverless tractor
Driverless (film), a 2010 Chinese film directed by Zhang Yang. |
https://en.wikipedia.org/wiki/Distributed%20shared%20memory | In computer science, distributed shared memory (DSM) is a form of memory architecture where physically separated memories can be addressed as a single shared address space. The term "shared" does not mean that there is a single centralized memory, but that the address space is shared—i.e., the same physical address on two processors refers to the same location in memory. Distributed global address space (DGAS) is a similar term for a wide class of software and hardware implementations, in which each node of a cluster has access to shared memory in addition to each node's private (i.e., not shared) memory.
Overview
A distributed-memory system, often called a multicomputer, consists of multiple independent processing nodes with local memory modules which are connected by a general interconnection network. Software DSM systems can be implemented in an operating system, or as a programming library, and can be thought of as extensions of the underlying virtual memory architecture. When implemented in the operating system, such systems are transparent to the developer, which means that the underlying distributed memory is completely hidden from the users. In contrast, software DSM systems implemented at the library or language level are not transparent and developers usually have to program them differently. However, these systems offer a more portable approach to DSM system implementations. A DSM system implements the shared-memory model on a physically distributed memory system.
DSM can be achieved via software as well as hardware. Hardware examples include cache coherence circuits and network interface controllers. There are three ways of implementing DSM:
Page-based approach using virtual memory
Shared-variable approach using routines to access shared variables
Object-based approach, ideally accessing shared data through object-oriented discipline
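As an illustration of the shared-variable approach listed above, the toy sketch below accesses shared data only through read and write routines that route each variable to a fixed home node. It is not modelled on any particular DSM library, and the peer dictionary stands in for what would really be a network and RPC layer.

class Node:
    """One node of a toy DSM: it owns some variables and asks other nodes for the rest."""
    def __init__(self, node_id, n_nodes):
        self.id, self.n = node_id, n_nodes
        self.store = {}                          # variables whose home is this node
        self.peers = {}                          # node_id -> Node (stand-in for the network)

    def home_of(self, name):
        return hash(name) % self.n               # static mapping of variable name -> home node

    def read(self, name):
        home = self.home_of(name)
        owner = self if home == self.id else self.peers[home]
        return owner.store.get(name)             # in a real system this is a network request

    def write(self, name, value):
        home = self.home_of(name)
        owner = self if home == self.id else self.peers[home]
        owner.store[name] = value

nodes = [Node(i, 3) for i in range(3)]
for node in nodes:
    node.peers = {other.id: other for other in nodes}
nodes[0].write("counter", 7)      # stored at the variable's home node, wherever that is
print(nodes[2].read("counter"))   # any node sees the same value through the same routines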
Advantages
Scales well with a large number of nodes
Message passing is hidden
Can handle complex and large databases without replication or sending the data to processes
Generally cheaper than using a multiprocessor system
Provides large virtual memory space
Programs are more portable due to common programming interfaces
Shield programmers from sending or receiving primitives
Disadvantages
Generally slower to access than non-distributed shared memory
Must provide additional protection against simultaneous accesses to shared data
May incur a performance penalty
Little programmer control over actual messages being generated
Programmers need to understand consistency models to write correct programs
Comparison with message passing
Software DSM systems also have the flexibility to organize the shared memory region in different ways. The page based approach organizes shared memory into pages of fixed size. In contrast, the object based approach organizes the shared memory region as an abstract space for storing shareable objects of variable sizes. Another commonly seen implementation us |
https://en.wikipedia.org/wiki/SyncML | SyncML (Synchronization Markup Language) is the former name for a platform-independent information synchronization standard. The project is currently referred to as Open Mobile Alliance Data Synchronization and Device Management. The purpose of SyncML is to offer an open standard as a replacement for existing data synchronization solutions, which have mostly been somewhat vendor-, application- or operating system specific. SyncML 1.0 specification was released on December 17, 2000, and 1.1 on February 26, 2002.
Internals
SyncML works by exchanging commands, which can be requests and responses. As an example:
the mobile sends an Alert command for signaling the wish to begin a refresh-only synchronization
the computer responds with a Status command for accepting the request
the mobile sends one or more Sync commands containing an Add sub-command for each item (e.g., phonebook entry); if the number of entries is large, it does not include the <Final/> tag;
in the latter case, the computer requests to continue with an appropriate Alert message, and the mobile sends another chunk of items; otherwise, the computer confirms it received all data with a Status command
Commands (Alert, Sync, Status, etc.) are grouped into messages. Each message and each of its commands has an identifier, so that the pair MsgID,CmdID uniquely determines a command. Responses like Status commands include the pair identifying the command they are responding to.
Before the commands, messages contain a header specifying various data regarding the transaction. An example message containing the Alert command for beginning a refresh synchronization, as in the previous example, is:
<?xml version="1.0"?>
<!DOCTYPE SyncML PUBLIC "-//SYNCML//DTD SyncML 1.2//EN" "http://www.openmobilealliance.org/tech/DTD/OMA-TS-SyncML_RepPro_DTD-V1_2.dtd">
<SyncML xmlns="SYNCML:SYNCML1.2">
<SyncHdr>
<VerDTD>1.1</VerDTD>
<VerProto>SyncML/1.1</VerProto>
<SessionID>1</SessionID>
<MsgID>1</MsgID>
<Target><LocURI>PC Suite</LocURI></Target>
<Source><LocURI>IMEI:3405623856456</LocURI></Source>
<Meta><MaxMsgSize xmlns="syncml:metinf">8000</MaxMsgSize></Meta>
</SyncHdr>
<SyncBody>
<Alert>
<CmdID>1</CmdID>
<Data>203</Data> <!-- 203 = mobile signals a refresh from it to computer -->
<Item>
<Target><LocURI>Events</LocURI></Target>
<Source><LocURI>/telecom/cal.vcs</LocURI></Source>
<Meta><Anchor xmlns="syncml:metinf"><Last>42</Last><Next>42</Next></Anchor></Meta>
</Item>
</Alert>
<Final/>
</SyncBody>
</SyncML>
The response from the computer could be an XML document like the following (comments added for the sake of explanation):
<?xml version="1.0"?>
<!DOCTYPE SyncML PUBLIC "-//SYNCML//DTD SyncML 1.2//EN" "http://www.openmobilealliance.org/tech/DTD/OMA-TS-SyncML_RepPro_DTD-V1_2.dtd">
<SyncML>
<SyncHdr>
<VerDTD>1.1</VerDTD>
<VerProto>SyncML/1.1</VerProto>
<SessionID>1</SessionID>
<MsgID>1</MsgID>
<Target><LocURI>IMEI:3405623856456</LocURI></Target>
<Source><LocURI>PC Su |
https://en.wikipedia.org/wiki/Jeff%20Hawkins | Jeffrey Hawkins is an American businessman, neuroscientist and engineer. He co-founded Palm Computing — where he co-created the PalmPilot and Treo — and Handspring.
He subsequently turned to work on neuroscience, founding the Redwood Center for Theoretical Neuroscience in 2002. In 2005 he founded Numenta, where he leads a team in efforts to reverse-engineer the neocortex and enable machine intelligence technology based on brain theory.
He is the co-author of On Intelligence (2004), which explains his memory-prediction framework theory of the brain, and the author of A Thousand Brains: A New Theory of Intelligence (2021).
Education
Hawkins attended Cornell University, where he received a bachelor's degree in electrical engineering in 1979.
His interest in pattern recognition for speech and text input to computers led him to enroll in the biophysics program at the University of California, Berkeley in 1986. While there he patented a "pattern classifier" for handwritten text, but his PhD proposal on developing a theory of the neocortex was rejected.
Career
Hawkins joined GRiD Systems in 1982, where he developed rapid application development (RAD) software called GRiDtask. As vice president of research from 1988 to 1992, he developed their pen-based computing initiative that in 1989 spawned the GRiDPad, one of the first tablet computers.
Hawkins founded Palm Inc., in January 1992. In 1998 he left the company along with Palm co-founders Donna Dubinsky and Ed Colligan to start Handspring.
In March 2005, Hawkins, together with Dubinsky (Palm's original CEO) and Dileep George, founded Numenta, Inc.
Neuroscience
In 2002, after two decades of finding little interest from neuroscience institutions that he did not have a stake in, Hawkins founded the Redwood Neuroscience Institute in Menlo Park, California.
In 2004, he co-authored On Intelligence with Sandra Blakeslee, laying out a theory on his "memory-prediction framework" of how the brain works.
One of Hawkins' areas of interest is cortical columns. In 2016, he hypothesized that cortical columns did not capture just a sensation, but also the relative location of that sensation, in three dimensions rather than two (situated capture), in relation to what was around it. Hawkins explains, "When the brain builds a model of the world, everything has a location relative to everything else".
In 2021, he published A Thousand Brains: A New Theory of Intelligence, a framework for intelligence and cortical computation. The book details the advances he and the Numenta team made in the development of their theory of how the brain understands the world and what it means to be intelligent. It also details how the "thousand brains" theory can affect machine intelligence, and how an understanding of the brain impacts the threats and opportunities facing humanity. It also offers a theory of what's missing in current AI.
Board and institute memberships
In 2003, Hawkins was elected as a member of the National |
https://en.wikipedia.org/wiki/Maven%20%28Scrabble%29 | Maven is an artificial intelligence Scrabble player, created by Brian Sheppard. It has been used in official licensed Hasbro Scrabble games.
Algorithms
Game phases
Maven's gameplay is sub-divided into three phases: The "mid-game" phase, the "pre-endgame" phase, and the "endgame" phase.
The "mid-game" phase lasts from the beginning of the game up until there are nine or fewer tiles left in the bag. The program uses a rapid algorithm to find all possible plays from the given rack, and then part of the program called the "kibitzer" uses simple heuristics to sort them into rough order of quality. The most promising moves are then evaluated by "simming", in which the program simulates the random drawing of tiles, plays forward a set number of plays, and compares the points spread of the moves' outcomes. By simulating thousands of random drawings, the program can give a very accurate quantitative evaluation of the different plays. (While a Monte Carlo search, Maven does not use Monte Carlo tree search because it evaluates game trees only 2-ply deep, rather than playing out to the end of the game, and does not reallocate rollouts to more promising branches for deeper exploration; in reinforcement learning terminology, the Maven search strategy might be considered "truncated Monte Carlo simulation". A true MCTS strategy is unnecessary because the endgame can be solved. The shallow search is because the Maven author argues that, due to the fast turnover of letters in one's bag, it is typically not useful to look more than 2-ply deep, because if one instead looked deeper, e.g. 4-ply, the variance of rewards will be larger and the simulations will take several times longer, while only helping in a few exotic situations: "We maintain that if it requires an extreme situation like CACIQUE to see the value of a four-ply simulation then they are not worth doing." As the board value can be evaluated with very high accuracy in Scrabble, unlike games such as Go, deeper simulations are unlikely to change the initial evaluation.)
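A toy sketch of this "simming" loop is shown below. It is not Maven's code: move_score and best_reply_score are stand-in evaluation functions, and the candidate moves are assumed to be pre-sorted by a fast heuristic, but the overall structure (random draws of the unseen tiles, a short fixed-depth playout, and comparison of average point spread) follows the description above.

import random

def sim_value(move, unseen_tiles, move_score, best_reply_score, n_sims=500):
    """Average point spread of `move` over random opponent racks (a 2-ply lookahead)."""
    total = 0.0
    for _ in range(n_sims):
        tiles = unseen_tiles[:]
        random.shuffle(tiles)
        opp_rack = tiles[:7]                     # one plausible draw for the unseen opponent rack
        total += move_score(move) - best_reply_score(opp_rack, move)
    return total / n_sims

def choose_move(candidates, unseen_tiles, move_score, best_reply_score):
    """Sim only the most promising candidates (already sorted by the fast heuristic)."""
    return max(candidates[:20],
               key=lambda m: sim_value(m, unseen_tiles, move_score, best_reply_score))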
The "pre-endgame" phase works in almost the same way as the "mid-game" phase, except that it is designed to attempt to yield a good end-game situation.
The "endgame" phase takes over as soon as there are no tiles left in the bag. In two-player games, this means that the players can now deduce from the initial letter distribution the exact tiles on each other's racks. Maven uses the B-star search algorithm to analyze the game tree during the endgame phase.
Move generation
Maven has used several algorithms for move generation, but the one that has stuck is the DAWG algorithm. The GADDAG algorithm is faster, but a DAWG for North American English is only 0.5 MB, compared to about 2.5 MB for a GADDAG. That makes a significant difference for download games, whereas the speed advantage is not important. (Note that unimportant does not mean that the difference is small, merely that users cannot tell the difference. The GADDAG is perhaps twice |
https://en.wikipedia.org/wiki/USB%20flash%20drive | A USB flash drive (also called a thumb drive in the US, or a memory stick in the UK & pen drive or pendrive in many countries) is a data storage device that includes flash memory with an integrated USB interface. It is typically removable, rewritable and much smaller than an optical disc. Since first appearing on the market in late 2000, as with virtually all other computer memory devices, storage capacities have risen while prices have dropped. Flash drives with anywhere from 8 to 256 gigabytes (GB) were frequently sold, while 512 GB and 1 terabyte (TB) units were less frequent. As of 2023, 2 TB flash drives were the largest currently in production. Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to physically last between 10 and 100 years under normal circumstances (shelf storage time).
Common uses of USB flash drives are for storage, supplementary back-ups, and transferring of computer files. Compared with floppy disks or CDs, they are smaller, faster, have significantly more capacity, and are more durable due to a lack of moving parts. Additionally, they are less vulnerable to electromagnetic interference than floppy disks, and are unharmed by surface scratches (unlike CDs). However, as with any flash storage, data loss from bit leaking due to prolonged lack of electrical power and the possibility of spontaneous controller failure due to poor manufacturing could make it unsuitable for long-term archiving of data. The ability to retain data is affected by the controller's firmware, internal data redundancy, and error correction algorithms.
Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after widespread adoption of USB ports and the larger USB drive capacity compared to the "1.44 megabyte" (1440 kilobyte) 3.5-inch floppy disk.
USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, Linux, and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer faster than much larger optical disc drives like CD-RW or DVD-RW drives and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems, and in a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices, due to their standardized form factor, which allows the card to be housed inside a device without protruding.
A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example. Some are equipped with an I/O indication LED that lights up or bl |
https://en.wikipedia.org/wiki/Schedule%20%28computer%20science%29 | In the fields of databases and transaction processing (transaction management), a schedule (or history) of a system is an abstract model to describe execution of transactions running in the system. Often it is a list of operations (actions) ordered by time, performed by a set of transactions that are executed together in the system. If the order in time between certain operations is not determined by the system, then a partial order is used. Examples of such operations are requesting a read operation, reading, writing, aborting, committing, requesting a lock, locking, etc. Not all transaction operation types should be included in a schedule, and typically only selected operation types (e.g., data access operations) are included, as needed to reason about and describe certain phenomena. Schedules and schedule properties are fundamental concepts in database concurrency control theory.
Formal description
The following is an example of a schedule:
D
In this example, the horizontal axis represents the different transactions in the schedule D. The vertical axis represents time order of operations. Schedule D consists of three transactions T1, T2, T3. The schedule describes the actions of the transactions as seen by the DBMS.
First T1 Reads and Writes to object X, and then Commits. Then T2 Reads and Writes to object Y and Commits, and finally, T3 Reads and Writes to object Z and Commits. This is an example of a serial schedule, i.e., sequential with no overlap in time because the actions of all three transactions are sequential, and the transactions are not interleaved in time.
Representing the schedule D above by a table (rather than a list) is just for the convenience of identifying each transaction's operations at a glance. This notation is used throughout the article below. A more common way in the technical literature for representing such a schedule is by a list:
D = R1(X) W1(X) Com1 R2(Y) W2(Y) Com2 R3(Z) W3(Z) Com3
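In code, such a schedule is naturally a time-ordered list of (transaction, operation, object) triples. The small sketch below (illustrative, not taken from any particular DBMS) represents D this way and checks whether a schedule is serial in the sense just described:

D = [(1, "R", "X"), (1, "W", "X"), (1, "Com", None),
     (2, "R", "Y"), (2, "W", "Y"), (2, "Com", None),
     (3, "R", "Z"), (3, "W", "Z"), (3, "Com", None)]

def is_serial(schedule):
    """Serial means no interleaving: once a transaction stops appearing, it never resumes."""
    seen, current = set(), None
    for tx, _, _ in schedule:
        if tx != current:
            if tx in seen:        # the transaction already ran earlier, so it was interleaved
                return False
            seen.add(tx)
            current = tx
    return True

print(is_serial(D))   # True: D runs T1, then T2, then T3, with no overlap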
Usually, for the purpose of reasoning about concurrency control in databases, an operation is modelled as atomic, occurring at a point in time, without duration. When this is not satisfactory, start and end time-points and possibly other point events are specified (rarely). Real executed operations always have some duration and specified respective times of occurrence of events within them (e.g., "exact" times of beginning and completion), but for concurrency control reasoning usually only the precedence in time of the whole operations (without looking into the quite complex details of each operation) matters, i.e., which operation is before, or after another operation. Furthermore, in many cases, the before/after relationships between two specific operations do not matter and should not be specified, while being specified for other pairs of operations.
In general, operations of transactions in a schedule can interleave (i.e., transactions can be executed concurrently), while time orders between operations in each transaction re |
https://en.wikipedia.org/wiki/Kyoto%20Municipal%20Subway | The , also known as Kyoto City Subway, is the rapid transit network in the city of Kyoto, Japan. Operated by the Kyoto Municipal Transportation Bureau, it has two lines.
Lines
The Kyoto Municipal Subway is made up of two lines: the 15-station Karasuma Line and the 17-station Tōzai Line, which together share one interchange station (Karasuma Oike Station):
Rolling stock
Karasuma Line
Kyoto Municipal Subway 10 series
Kyoto Municipal Subway 20 series
Kintetsu 3200 series
Kintetsu 3220 series
Tozai Line
Kyoto Municipal Subway 50 series
Keihan 800 series
Network Map
See also
Transport in Keihanshin
List of metro systems
Notes
References
External links
Kyoto City Bus & Subway Information Guide (Official website)
Network map (to scale)
Kyoto Municipal Subway map
Transport in Kyoto |
https://en.wikipedia.org/wiki/Macintosh%20SE | The Macintosh SE is a personal computer designed, manufactured, and sold by Apple Computer, from March 1987 to October 1990. It marked a significant improvement on the Macintosh Plus design and was introduced by Apple at the same time as the Macintosh II.
The SE retains the same Compact Macintosh form factor as the original Macintosh computer introduced three years earlier and uses the same design language used by the Macintosh II. An enhanced model, the SE/30, was introduced in January 1989; sales of the original SE continued. The Macintosh SE was updated in August 1989 to include a SuperDrive, with this updated version being called the "Macintosh SE FDHD" and later the "Macintosh SE SuperDrive". The Macintosh SE was replaced with the Macintosh Classic, a very similar model which retained the same central processing unit and form factor, but at a lower price point.
Overview
The Macintosh SE was introduced at the AppleWorld conference in Los Angeles on March 2, 1987. The "SE" is an initialism for "System Expansion". Its notable new features, compared to its similar predecessor, the Macintosh Plus, were:
First compact Macintosh with an internal drive bay for a hard disk (originally 20 MB or 40 MB) or a second floppy disk drive.
First compact Macintosh that featured an expansion slot.
First Macintosh to support the Apple Desktop Bus (ADB), previously only available on the Apple IIGS, for keyboard and mouse connections.
Improved SCSI support, providing faster data throughput (double that of the Macintosh Plus) and a standard 50-pin internal SCSI connector.
Better reliability and longer life expectancy (15 years of continuous use) due to the addition of a cooling fan.
25 percent greater speed when accessing RAM, resulting in a lower percentage of CPU time being spent drawing the screen. In practice this results in a 10-20 percent performance improvement.
Additional fonts and kerning routines in the Toolbox ROM
Disk First Aid is included on the system disk
The SE and Macintosh II were the first Apple computers since the Apple I to be sold without a keyboard. Instead the customer was offered the choice of the new ADB Apple Keyboard or the Apple Extended Keyboard.
Apple produced ten SEs with transparent cases as prototypes for promotional shots and employees. They are extremely rare and command a premium price for collectors.
Operating system
The Macintosh SE shipped with System 4.0 and Finder 5.4; this version is specific to this computer. (The Macintosh II, which was announced at the same time but shipped a month later, includes System 4.1 and Finder 5.5.) The README file included with the installation disks for the SE and II is the first place Apple ever used the term "Macintosh System Software", and after 1998 these two versions were retroactively given the name "Macintosh System Software 2.0.1".
Hardware
Processor: Motorola 68000, 8 MHz, with an 8 MHz system bus and a 16-bit data path
RAM: The SE came with 1 MB of RAM as standar |
https://en.wikipedia.org/wiki/Document-based%20question | In American Advanced Placement exams, a document-based question (DBQ), also known as data-based question, is an essay or series of short-answer questions that is constructed by students using one's own knowledge combined with support from several provided sources. Usually, it is employed on timed history tests.
In the United States
The document based question was first used for the 1973 AP United States History Exam published by the College Board, created as a joint effort between Development Committee members Reverend Giles Hayes and Stephen Klein. Both were unhappy with student performance on free-response essays, and often found that students were "groping for half-remembered information" and "parroted factual information with little historical analysis or argument" when they wrote their essays. The goal of the Document Based Question was for students to be "less concerned with the recall of previously learned information" and more engaged in deeper historical inquiry. Hayes, in particular, hoped students would "become junior historians and play the role of historians for that hour" as they engaged in the DBQ.
A typical DBQ is a packet of several original sources (anywhere from three to sixteen), labeled by letters (beginning with "Document A" or "Source A") or numbers. Usually all but one or two source(s) are textual, with the other source(s) being graphic (usually a political cartoon, map, or poster if primary and a chart or graph if secondary). In most cases, the sources are selected to provide different perspectives or views on the events or movements being analyzed.
On the Advanced Placement (AP) exams, only primary sources are provided; on the International Baccalaureate (IB) exams, both primary and secondary sources are provided. AP exams also require students to construct and defend a thesis based on one prompt, while IB exams focus on a series of questions, with at least one asking students to assess the "value and limitations" of a source, usually "with reference to the documents' origin or purpose."
The documents contained in the document-based questions are rarely familiar texts (for example, the Emancipation Proclamation and Declaration of Independence are not likely to be on a U.S. history test), though the documents' authors may be major historical figures. The documents vary in length and format.
On some tests students are not permitted to begin responding to the question or questions in the essay packet until after a mandatory reading time ("planning period"), usually around 10 to 15 minutes. During this time, students read the passage and, if desired, make notes or markings. After this period, students are permitted to respond, usually for around 45 minutes to an hour.
References
External links
The DBQ Project
Examinations |
https://en.wikipedia.org/wiki/Desktop%20Linux%20Consortium | The Desktop Linux Consortium (DLC) was a non-profit organization which aims at enhancing and promoting the use of the Linux operating system on desktop computers.
It was founded on 4 February 2003.
Members
Ark Linux
CodeWeavers
Debian
KDE
Linux Professional Institute (LPI)
The Linux Terminal Server Project (LTSP)
Lycoris (company)
Mandriva (formerly known as Mandrakesoft)
NeTraverse
OpenOffice.org Organisation, does not exist any more
Questnet (Support4Linux.com)
Samba
Sunwah Linux (rays Linux Distribution)
SUSE
theKompany
TransGaming Technologies
TrustCommerce
Xandros
Ximian
See also
Desktop Linux
References
External links
Archived version of the official website as of 2007
Linux organizations |
https://en.wikipedia.org/wiki/Patricia%20Ryan | Patricia Ryan may refer to:
Pat Nixon (1912–1993), née Patricia Ryan, wife of U.S. president Richard Nixon
Patricia Ryan (CFF), former director of the Cult Awareness Network, daughter of former U.S. Congressman Leo Ryan
Patricia Ryan (author) (born 1954), American writer
Patricia Ryan (actress) (1921–1949), active in old-time radio from childhood until her death at age 27
Patricia Ryan (equestrian) (born 1973), Irish equestrian
Patricia E. Ryan, American human rights advocate and women's rights lobbyist
Patricia Ryan (judge), Irish judge
Patricia Ryan (politician), Irish Sinn Féin politician for Kildare South
See also
Patty Ryan (born 1961), German singer
Pat Ryan (disambiguation) |
https://en.wikipedia.org/wiki/David%20Cone | David Brian Cone (born January 2, 1963) is an American former Major League Baseball (MLB) pitcher, and current color commentator for the New York Yankees on the YES Network and WPIX as well as for ESPN on Sunday Night Baseball. A third round draft pick of the Kansas City Royals in 1981 MLB Draft, he made his MLB debut in 1986 and continued playing until 2003, pitching for five different teams. Cone batted left-handed and threw right-handed.
Cone pitched the sixteenth perfect game in baseball history in 1999. On the final game of the 1991 regular season, he struck out 19 batters, tied for second-most ever in a game. The 1994 Cy Young Award winner, he was a five-time All-Star and led the major leagues in strikeouts each season from 1990 to 1992. A two-time 20 game-winner, he set the MLB record for most years between 20-win seasons with 10.
He was a member of five World Series championship teams — with the Toronto Blue Jays and with the New York Yankees. His 8–3 career postseason record came over 21 games and 111 innings pitched, with an earned run average (ERA) of 3.80; in World Series play, his ERA was 2.12.
Cone is the subject of the book, A Pitcher's Story: Innings With David Cone, by Roger Angell. Cone and Jack Curry co-wrote the autobiography Full Count: The Education of a Pitcher, which was released in May 2019 and made The New York Times Best Seller list shortly after its release.
Early years
Cone was born in Kansas City, Missouri, the son of Joan (née Curran; 1936–2016) and Edwin Cone (1934–2022). He attended Rockhurst High School, a Jesuit school, where he played quarterback on the football team, leading them to the district championship. He was also a point guard on the basketball team. Because Rockhurst did not have a baseball team, Cone instead played summer ball in the Ban Johnson League, a college summer league in Kansas City. At 16, he reported to an invitation-only tryout at Royals Stadium and an open tryout for the St. Louis Cardinals. He was also recruited to play college football and baseball. Upon graduation, he enrolled at the University of Missouri and was drafted by his hometown Kansas City Royals in the third round of the 1981 Major League Baseball draft.
Professional baseball career
Minor leagues and MLB debut: Kansas City Royals (1981–1986)
Cone went 22–7 with a 2.21 earned run average in his first two professional seasons. He sat out 1983 with an injury, and went 8–12 with a 4.28 ERA for the Double-A Memphis Chicks when he returned in 1984. During his second season with the Class AAA Omaha Royals (1986), Cone was converted to a relief pitcher, and he made his Major League debut on June 8, 1986, in relief of reigning Cy Young Award winner Bret Saberhagen. He made three more appearances out of the Royals' bullpen before returning to Omaha, where he went 8–4 with a 2.79 ERA. He returned to Kansas City when rosters expanded that September.
New York Mets (1987–1992)
Prior to the 1987 season, Cone was traded w |
https://en.wikipedia.org/wiki/Float%20%28project%20management%29 | In project management, float or slack is the amount of time that a task in a project network can be delayed without causing a delay to:
subsequent tasks ("free float")
project completion date ("total float").
Total float is associated with the path. If a project network chart/diagram has 4 non-critical paths then that project would have 4 total float values. The total float of a path is the combined free float values of all activities in a path.
The total float represents the schedule flexibility and can also be measured by subtracting early start dates from late start dates of path completion. Float is core to critical path method, with the total floats of noncritical activities key to computing the critical path drag of an activity, i.e., the amount of time it is adding to the project's duration.
Example
Consider the process of replacing a broken pane of glass in the window of your home. There are various component activities involved in the project as a whole: obtaining the glass and putty, installing the new glass, choosing the paint, obtaining a tin of it, painting the new pane once the putty has set, wiping the new glass free of finger smears etc.
Some of these activities can run concurrently e.g. obtaining the glass, obtaining the putty, choosing the paint etc., while others are consecutive e.g. the paint cannot be bought until it has been chosen, the new window cannot be painted until the window is installed and the new putty has set. Delaying the acquisition of the glass is likely to delay the entire project - this activity will be on the critical path and have no float, of any sort, attached to it and hence it is a 'critical activity'. A relatively short delay in the purchase of the paint may not automatically hold up the entire project as there is still some waiting time for the new putty to dry before it can be painted anyway - there will be some 'free float' attached to the activity of purchasing the paint and hence it is not a critical activity. However, a delay in choosing the paint, in turn, inevitably delays buying the paint which, although it may not subsequently mean any delay to the entire project, does mean that choosing the paint has no 'free float' attached to it - despite having no free float of its own, the choosing of the paint is involved with a path through the network which does have 'total float'.
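The floats discussed above fall out of the forward and backward passes of the critical path method. The sketch below applies them to a simplified version of the window example (the task names, durations and dependencies are invented purely for illustration):

def topo_order(durations, preds):
    order, done = [], set()
    def visit(t):
        if t in done:
            return
        for p in preds[t]:
            visit(p)
        done.add(t)
        order.append(t)
    for t in durations:
        visit(t)
    return order

def compute_total_float(durations, preds):
    """durations: task -> duration; preds: task -> list of predecessor tasks."""
    succs = {t: [] for t in durations}
    for t, ps in preds.items():
        for p in ps:
            succs[p].append(t)
    es, ef = {}, {}                                    # forward pass: early start / early finish
    for t in topo_order(durations, preds):
        es[t] = max((ef[p] for p in preds[t]), default=0)
        ef[t] = es[t] + durations[t]
    project_end = max(ef.values())
    ls, lf = {}, {}                                    # backward pass: late start / late finish
    for t in reversed(topo_order(durations, preds)):
        lf[t] = min((ls[s] for s in succs[t]), default=project_end)
        ls[t] = lf[t] - durations[t]
    return {t: ls[t] - es[t] for t in durations}       # total float; zero on the critical path

durations = {"glass": 1, "putty": 1, "install": 2, "choose_paint": 1, "buy_paint": 1, "paint": 1}
preds = {"glass": [], "putty": [], "install": ["glass", "putty"],
         "choose_paint": [], "buy_paint": ["choose_paint"], "paint": ["install", "buy_paint"]}
print(compute_total_float(durations, preds))   # choosing and buying the paint have float; the rest are critical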
See also
Critical path method
Glossary of project management
List of project management topics
References
Further reading
Schedule (project management) |
https://en.wikipedia.org/wiki/Swap | Swap or SWAP may refer to:
Finance
Swap (finance), a derivative in which two parties agree to exchange one stream of cash flows against another
Barter
Science and technology
Swap (computer programming), exchanging two variables in the memory of a computer
Swap partition, a partition of a computer data storage used for paging
SWAP (instrument) (Sun Watcher using Active Pixel System Detector and Image Processing), a space instrument aboard the PROBA2 satellite
SWAP (New Horizons) (Solar Wind At Pluto), a science instrument aboard the uncrewed New Horizons space probe
SWAP protein domain, in molecular biology
Size, weight and power (SWaP), see DO-297
Other
Swåp, an Anglo-Swedish folk music band
Sector-Wide Approach (SWAp), an approach to international development
Swap (film), a 2015 Philippine crime drama film
See also
Swaps (horse) (1952–1972), a California-bred American Thoroughbred racehorse
Swapping (disambiguation) |
https://en.wikipedia.org/wiki/Nmap | Nmap (Network Mapper) is a network scanner created by Gordon Lyon (also known by his pseudonym Fyodor Vaskovich). Nmap is used to discover hosts and services on a computer network by sending packets and analyzing the responses.
Nmap provides a number of features for probing computer networks, including host discovery and service and operating system detection. These features are extensible by scripts that provide more advanced service detection, vulnerability detection, and other features. Nmap can adapt to network conditions including latency and congestion during a scan.
Nmap started as a Linux utility and was ported to other systems including Windows, macOS, and BSD. It is most popular on Linux, followed by Windows.
Features
Nmap features include:
Fast scan (nmap -F [target]) – Performing a basic port scan for a fast result.
Host discovery – Identifying hosts on a network. For example, listing the hosts that respond to TCP and/or ICMP requests or have a particular port open.
Port scanning – Enumerating the open ports on target hosts.
Version detection – Interrogating network services on remote devices to determine application name and version number.
Ping Scan – Checking whether hosts are up by sending ping requests.
TCP/IP stack fingerprinting – Determining the operating system and hardware characteristics of network devices based on observations of network activity of said devices.
Scriptable interaction with the target – using Nmap Scripting Engine (NSE) and Lua programming language.
Nmap can provide further information on targets, including reverse DNS names, device types, and MAC addresses.
Typical uses of Nmap:
Auditing the security of a device or firewall by identifying the network connections which can be made to, or through it.
Identifying open ports on a target host in preparation for auditing.
Network inventory, network mapping, maintenance and asset management.
Auditing the security of a network by identifying new servers.
Generating traffic to hosts on a network, response analysis and response time measurement.
Finding and exploiting vulnerabilities in a network.
DNS queries and subdomain search
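A minimal sketch combining two of the items above: a fast scan whose machine-readable XML output is then post-processed with Python's standard library. This is illustrative only; the target name and output file are placeholders, and the XML element names reflect typical nmap output rather than details given in this article.

```python
# Sketch: run a fast scan with XML output, then list open TCP ports per host.
# Assumes nmap is installed; "scanme.example" and "scan.xml" are placeholders.
import subprocess
import xml.etree.ElementTree as ET

subprocess.run(["nmap", "-F", "-oX", "scan.xml", "scanme.example"], check=True)

tree = ET.parse("scan.xml")
for host in tree.getroot().findall("host"):
    addr = host.find("address").get("addr")
    for port in host.findall("./ports/port"):
        if port.find("state").get("state") == "open":
            print(addr, port.get("protocol"), port.get("portid"))
```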
User interfaces
NmapFE, originally written by Kanchan, was Nmap's official GUI for Nmap versions 2.2 to 4.22. For Nmap 4.50 (originally in the 4.22SOC development series) NmapFE was replaced with Zenmap, a new official graphical user interface based on UMIT, developed by Adriano Monteiro Marques.
Web-based interfaces exist that allow either controlling Nmap or analysing Nmap results from a web browser, such as IVRE.
Output
Four different output formats are offered by Nmap. Everything is saved to a file except the interactive output. Text processing software can be used to modify Nmap output, allowing the user to customize reports.
Interactive – presented and updated in real time when a user runs Nmap from the command line. Various options can be entered during the scan to facilitate monitoring.
XML – a format that can be further processed by |
https://en.wikipedia.org/wiki/Platform-independent%20model | A platform-independent model (PIM) in software engineering is a model of a software system or business system that is independent of the specific technological platform used to implement it (e.g. a programming language or a database).
The term platform-independent model is most frequently used in the context of the model-driven architecture approach. This model-driven architecture approach corresponds to the Object Management Group vision of model-driven engineering.
The main idea is that it should be possible to use a model transformation language to transform a platform-independent model into a platform-specific model. In order to achieve this transformation, one can use a language compliant with the newly defined QVT standard. Examples of such languages are VIATRA or ATLAS Transformation Language. This means that execution of the program is not restricted by the type of operating system used.
Related concepts
References
Software architecture |
https://en.wikipedia.org/wiki/Platform-specific%20model | A platform-specific model is a model of a software or business system that is linked to a specific technological platform (e.g. a specific programming language, operating system, document file format or database). Platform-specific models are indispensable for the actual implementation of a system.
For example, consider the need to implement an online shop. The system will need to store information regarding users, goods, credit cards, etc. The designer might decide to use an Oracle database for this purpose. To make this work, the designer will need to express concepts (e.g. the concept of a user) in a relational model using Oracle's SQL dialect. This Oracle-specific relational model is an example of a platform-specific model.
The term platform-specific model is most frequently used in the context of the MDA approach. This MDA approach corresponds to the OMG vision of model-driven engineering. The main idea is that it should be possible to use a model transformation language (MTL) to transform a platform-independent model into a platform-specific model. In order to achieve this transformation, one can use a language compliant with the newly defined QVT standard. Examples of such languages are AndroMDA, VIATRA or ATL.
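To make the PIM-to-PSM idea concrete, here is a toy sketch (not QVT, ATL or any MDA tooling) that turns a platform-independent description of the "User" concept from the online-shop example into an Oracle-flavoured SQL table definition. The dictionary layout and the type mapping are invented for illustration.

```python
# Toy model-to-model transformation: a platform-independent "User" concept
# is mapped onto a platform-specific (Oracle SQL) relational model.
# The type mapping below is an illustrative assumption, not a QVT/ATL rule.
pim_user = {
    "entity": "User",
    "attributes": [("name", "string"), ("email", "string"), ("age", "integer")],
}

ORACLE_TYPES = {"string": "VARCHAR2(255)", "integer": "NUMBER(10)"}

def to_oracle_ddl(entity):
    cols = ",\n  ".join(f"{n} {ORACLE_TYPES[t]}" for n, t in entity["attributes"])
    return f"CREATE TABLE {entity['entity'].lower()}s (\n  {cols}\n);"

print(to_oracle_ddl(pim_user))
```

The same platform-independent description could equally be mapped to a different platform-specific model (another SQL dialect, an XML schema, and so on), which is the point of keeping the PIM separate.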
Related Concepts
ATLAS Transformation Language (ATL)
Domain Specific Language (DSL)
Domain-specific modelling (DSM)
Eclipse Modeling Framework (EMF)
Generic Modeling Environment (GME)
Graphical Modeling Framework (GMF)
Meta-Object Facility (MOF)
Meta-modeling
Model-based testing (MBT)
Model-driven architecture (MDA)
Model Transformation Language (MTL)
Object Constraint Language (OCL)
Object-oriented analysis and design (OOAD)
Visual Automated model Transformations VIATRA
XML Metadata Interchange (XMI)
See also
Platform-independent model
References
Software architecture
Systems engineering |
https://en.wikipedia.org/wiki/Shadow%20volume | Shadow volumes are a technique used in 3D computer graphics to add shadows to a rendered scene. They were first proposed by Frank Crow in 1977 as the geometry describing the 3D shape of the region occluded from a light source. A shadow volume divides the virtual world in two: areas that are in shadow and areas that are not.
The stencil buffer implementation of shadow volumes is generally considered among the most practical general purpose real-time shadowing techniques for use on modern 3D graphics hardware. It has been popularized by the video game Doom 3, and a particular variation of the technique used in this game has become known as Carmack's Reverse.
Shadow volumes have become a popular tool for real-time shadowing, alongside the more venerable shadow mapping. The main advantage of shadow volumes is that they are accurate to the pixel (though many implementations have a minor self-shadowing problem along the silhouette edge, see construction below), whereas the accuracy of a shadow map depends on the texture memory allotted to it as well as the angle at which the shadows are cast (at some angles, the accuracy of a shadow map unavoidably suffers). However, the technique requires the creation of shadow geometry, which can be CPU intensive (depending on the implementation). The advantage of shadow mapping is that it is often faster, because shadow volume polygons are often very large in terms of screen space and require a lot of fill time (especially for convex objects), whereas shadow maps do not have this limitation.
Construction
In order to construct a shadow volume, project a ray from the light source through each vertex in the shadow casting object to some point (generally at infinity). These projections will together form a volume; any point inside that volume is in shadow, everything outside is lit by the light.
For a polygonal model, the volume is usually formed by classifying each face in the model as either facing toward the light source or facing away from the light source. The set of all edges that connect a toward-face to an away-face form the silhouette with respect to the light source. The edges forming the silhouette are extruded away from the light to construct the faces of the shadow volume. This volume must extend over the range of the entire visible scene; often the dimensions of the shadow volume are extended to infinity to accomplish this (see optimization below.) To form a closed volume, the front and back end of this extrusion must be covered. These coverings are called "caps". Depending on the method used for the shadow volume, the front end may be covered by the object itself, and the rear end may sometimes be omitted (see depth pass below).
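A minimal sketch of the silhouette-finding step just described, for a directional light. The mesh representation (vertex array plus triangle index list), the closed-mesh assumption and the helper names are assumptions made for illustration, not part of any particular engine's API.

```python
# Sketch: classify triangles as light-facing or not, then collect silhouette
# edges (edges shared by one light-facing and one away-facing triangle).
import numpy as np

def facing_light(tri, verts, light_dir):
    """True if the triangle's front side faces toward the light."""
    a, b, c = (verts[i] for i in tri)
    normal = np.cross(b - a, c - a)            # un-normalised face normal
    return float(np.dot(normal, light_dir)) < 0.0

def silhouette_edges(tris, verts, light_dir):
    """Edges with exactly one light-facing neighbour.

    Assumes a closed mesh, so every edge belongs to exactly two triangles.
    """
    facing_count = {}
    for tri in tris:
        towards = facing_light(tri, verts, light_dir)
        for i in range(3):
            edge = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            facing_count[edge] = facing_count.get(edge, 0) + int(towards)
    return [edge for edge, n in facing_count.items() if n == 1]

# Each silhouette edge would then be extruded away from the light (for a
# directional light, along light_dir) to build the sides of the shadow volume.
```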
There is also a problem with the shadow where the faces along the silhouette edge are relatively shallow. In this case, the shadow an object casts on itself will be sharp, revealing its polygonal facets, whereas the usual lighting model will have a gradual change in the light |
https://en.wikipedia.org/wiki/Static%20Shock | Static Shock is an American superhero animated television series based on the Milestone Media/DC Comics superhero Static. It premiered on September 23, 2000, on the WB Television Network's Kids' WB programming block. Static Shock ran for four seasons, with 52 half-hour episodes in total. The show revolves around Virgil Hawkins, a 14-year-old boy who uses the secret identity of "Static" after exposure to a mutagen gas during a gang fight which gave him electromagnetic powers. It was the first time that an African-American superhero was the titular character of their own broadcast animation series.
Static Shock was produced by Warner Bros. Animation from a crew composed mostly of people from the company's past shows, but also with the involvement of two of the comic's creators, Dwayne McDuffie and Denys Cowan. Static Shock had some alterations from the original comic book because it was oriented to a pre-teen audience. Although originally not intended to be part of the DC Animated Universe, it was incorporated into it as the fifth series beginning in the second season.
The show approached several social issues, which was positively received by most television critics. Static Shock was nominated for numerous awards, including the Daytime Emmy. Some criticism was directed towards its humor and animation, which was said to be unnatural and outdated. The series also produced some related merchandise, which sold poorly; McDuffie cited the low sales as one of the main factors behind the series' cancellation. In spite of this, its popularity revived interest in the original Milestone comic and introduced McDuffie to the animation industry.
Plot
Virgil Hawkins is a 14-year-old who lives with his older sister Sharon, and his widowed father Robert in Dakota City. He attends high school with his best friend Richie Foley, and has a crush on a girl named Frieda. He also has a dispute with a bully named Francis Stone, nicknamed "F-Stop." A gang leader named Wade recently helped Virgil, hoping to recruit him, but Virgil is hesitant, as he knows his mother died in an exchange of gunfire between gangs. Wade eventually leads Virgil to a restricted area for a fight against F-Stop's crew, but it was interrupted by police helicopters. During the dispute with the police, chemical containers explode, releasing a gas that causes mutations among the people in the vicinity (this event was later known as "The Big Bang"). As a result, Virgil obtains the ability to create, generate, absorb, and control electricity and magnetism—he takes up the alter-ego of "Static". The gas also gives others in the area their own powers, and several of them become supervillains known as "Bang Babies".
Characters
Virgil Ovid Hawkins / Static (voiced by Phil LaMarr) – A high school student in Dakota City. As a result of accidental exposure to an experimental mutagen in an event known as the Big Bang, he gained the ability to control and manipulate electromagnetism, and uses these powers to |
https://en.wikipedia.org/wiki/Tokyo%20Metro%20Chiyoda%20Line | The Tokyo Metro Chiyoda Line is a subway line owned and operated by Tokyo Metro in Tokyo, Japan. On average, the line carries 1,447,730 passengers daily (2017), the second highest of the Tokyo Metro network, behind the Tozai Line (1,642,378).
The line was named after the Chiyoda ward, under which it passes. On maps, diagrams and signboards, the line is shown using the color green, and its stations are given numbers using the letter "C".
Overview
The line serves the wards of Adachi, Arakawa, Bunkyō, Chiyoda, Minato and Shibuya, and a short stretch of tunnel in Taitō with no station. Its official name, rarely used, is .
Trains have through running onto other railway lines on both ends. More than half of these are trains to the northeast beyond Ayase onto the East Japan Railway Company (JR East) Joban Line to . The rest run to the southwest beyond Yoyogi-Uehara onto the Odakyu Odawara Line to .
According to the Tokyo Metropolitan Bureau of Transportation, as of June 2009 the Chiyoda Line was the second most crowded subway line in Tokyo, at its peak running at 181% capacity between and stations.
Basic data
Distance:
Double-tracking: Entire line
Railway signalling: New CS-ATC
Station list
All stations are located in Tokyo.
Stopping patterns:
Commuter Semi Express, Local, Semi Express, and Express trains stop at every station.
Odakyu Romancecar limited express services stop at stations marked "●" and do not stop at those marked "|".
Rolling stock
The following train types are used on the line, all running as ten-car formations unless otherwise indicated.
Tokyo Metro
16000 series (x37) (since November 2010)
05 series 3-car trains (x4) (since April 2014, used on Kita-Ayase Branch)
Odakyu
4000 series (since September 2007)
60000 series MSE (since spring 2008)
JR East
E233-2000 series (x19) (since summer 2009)
Former rolling stock
6000 series (x35) (from 1971 until November 2018)
JNR 103-1000 series (x16) (from 1971 until April 1986)
JR East 203 series (x17) (from August 27, 1982 until September 26, 2011)
JR East 209-1000 series (x2) (from 1999 until October 13, 2018)
JNR 207–900 series (x1) (from 1986 until December 2009)
5000 series 3-car trains (x2) (from 1969 until 2014, later used on branch line)
6000 series 3-car train (x1) (prototype of the series built in 1968 until 2014, used on branch line)
06 series (x1) (from 1993 until January 2015)
07 series (x1) (September 2008 – December 2008)
Odakyu 1000 series (1988–2010)
Odakyu 9000 series (1978–1990)
History
The Chiyoda Line was originally proposed in 1962 as a line from Setagaya in Tokyo to Matsudo, Chiba; the initial name was "Line 8". In 1964, the plan was changed slightly so that through service would be offered on the Joban Line north of Tokyo, and the number was changed to "Line 9".
Line 9 was designed to pass through built-up areas in Chiyoda, and |
https://en.wikipedia.org/wiki/Plessey%20System%20250 | Plessey System 250, also known as PP250, was the first operational computer to implement capability-based addressing, to check and balance the computation as a pure Church–Turing machine. Plessey built the systems for a British Army message routing project.
Description
A Church–Turing machine is a digital computer that encapsulates the symbols in a thread of computation as a chain of protected abstractions by enforcing the dynamic binding laws of Alonzo Church's lambda calculus. Other capability-based computers, which include CHERI and CAP computers, are hybrids. They retain default instructions that can access every word of accessible physical or logical (paged) memory.
This is an unavoidable characteristic of the von Neumann architecture, which is founded on shared random-access memory and trust in shared default access rights. For example, every word in every page managed by the virtual memory manager in an operating system using a memory management unit (MMU) must be trusted. Using a default privilege among many compiled programs allows corruption to grow without any method of error detection. Whether it is the range of virtual addresses given to the MMU or the range of physical addresses produced by the MMU that is shared, undetected corruption flows across the shared memory space from one software function to another. PP250 removed not only virtual memory and any centralized, precompiled operating system, but also the superuser, removing all default machine privileges.
It is default privileges that empower undetected malware and hacking in a computer. Instead, the pure object capability model of PP250 always requires a limited capability key to define the authority to operate. PP250 separated binary data from capability data to protect access rights, simplify the computer and speed garbage collection. The Church machine encapsulates and context limits the Turing machine by enforcing the laws of the lambda calculus. The typed digital media is program controlled by distinctly different machine instructions.
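The contrast with default-privilege addressing can be illustrated with a toy capability check. This is a generic object-capability sketch with invented names, not PP250's actual register set, instruction encoding or key format.

```python
# Toy illustration of capability-limited memory access (not PP250-specific):
# every load/store must present a capability key naming a segment and rights.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    base: int      # first word of the segment this key grants access to
    limit: int     # number of accessible words
    rights: str    # e.g. "r", "rw" -- illustrative encoding

class Memory:
    def __init__(self, size):
        self.words = [0] * size

    def load(self, cap, offset):
        if "r" not in cap.rights or not (0 <= offset < cap.limit):
            raise PermissionError("no capability for this access")
        return self.words[cap.base + offset]

    def store(self, cap, offset, value):
        if "w" not in cap.rights or not (0 <= offset < cap.limit):
            raise PermissionError("no capability for this access")
        self.words[cap.base + offset] = value

mem = Memory(1024)
key = Capability(base=128, limit=16, rights="rw")   # limited authority
mem.store(key, 3, 42)
print(mem.load(key, 3))   # 42; any access outside the segment is refused
```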
Mutable binary data is programmed with a 28-instruction RISC set for imperative and procedural programming, using binary data registers confined to a capability-limited memory segment. The immutable capability keys, exclusive to six Church instructions, navigate the computational context of a Turing machine through the separately programmed structure of the object-capability model.
Immutable capability keys represent named lambda calculus variables. This Church side is a lambda calculus meta-machine. The other side is an object-oriented machine of binary objects, programmed functions, capability lists defining function abstractions, storage for threads of computation (lambda calculus applications) or storage for the list of capability keys in a namespace. The laws of the lambda calculus are implemented by the Church instructions with micro-programmed access to the reserved (hidden) capability registers. The software is i |
https://en.wikipedia.org/wiki/PTEN | PTEN may mean:
Prime Time Entertainment Network
PTEN (gene), a human tumour suppressor gene on chromosome 10 (and its protein: phosphatase and tensin homolog)
See also
Akt/PKB signaling pathway
Discovery and development of mTOR inhibitors
PI3K/AKT/mTOR pathway
Akt inhibitor |
https://en.wikipedia.org/wiki/Prime%20Time%20Entertainment%20Network | The Prime Time Entertainment Network (PTEN) was an American television network that was operated by the Prime Time Consortium, a joint venture between the Warner Bros. Domestic Television subsidiary of Time Warner and Chris-Craft Industries. First launched on January 20, 1993, and operating until 1997, the network mainly aired drama programs aimed at adults between the ages of 18 and 54. At its peak, PTEN's programming was carried on 177 television stations, covering 93% of the country.
History
Origins
At the time of PTEN's founding, co-owner Chris-Craft Industries owned independent television stations in several large and mid-sized U.S. cities (among them its two largest stations, WWOR-TV in New York City and KCOP-TV in Los Angeles) through its BHC Communications and United Television divisions, which formed the nuclei of the network.
PTEN was launched as a potential fifth television network, and was created in reaction to the success of the Fox network (which debuted in October 1986, seven years before PTEN launched) as well as the successes of first-run syndicated programming during the late 1980s and early 1990s. It offered packaged nights of programming to participating television stations, beginning with a two-hour block on Wednesday evenings, with a second block (originally airing on Saturday, before moving to Monday for the 1994-95 season) being added in September 1993. Originally, the station groups involved in the Prime Time Consortium helped finance PTEN's programs; however, that deal was restructured at the beginning of the network's second year.
The service sought affiliations with various television stations not affiliated with the Big Three television networks. However, close to half of PTEN's initial affiliates were stations that were already affiliated with Fox; as a result, these stations usually scheduled PTEN programming around Fox's then five-night prime time schedule (although Fox would expand its schedule to seven nights with the addition of programming on Tuesdays and Wednesdays on January 19, 1993, the day before PTEN launched). PTEN launched on January 20, 1993, with two series: the science fiction series Time Trax and the action drama Kung Fu: The Legend Continues.
Demise
PTEN faced two obstacles created by its parent companies which would affect the network. On November 2, 1993, the Warner Bros. Entertainment division of Time Warner announced that it would form its own fifth network, The WB, as a joint venture with the Tribune Company. Six days earlier, on October 27, Chris-Craft Industries announced the launch of the United Paramount Network (UPN), in a programming partnership with the Paramount Television division of Viacom (which would become part-owner of the network in 1996). As a result, the core Chris-Craft independent stations (as well as those owned by Paramount) would serve as charter stations of the new network; Chris-Craft also chose to pull out of the partnership to focus on operating UPN.
The network al |
https://en.wikipedia.org/wiki/ADABAS | Adabas, a contraction of "adaptable database system", is a database package that was developed by Software AG to run on IBM mainframes. It was launched in 1971 as a non-relational database. As of 2019, Adabas is marketed for use on a wider range of platforms, including Linux, Unix, and Windows.
Adabas can store multiple data relationships in the same table.
History
Initially released by Software AG in 1971 on IBM mainframe systems using DOS/360, OS/MFT, or OS/MVT, Adabas is currently available on a range of enterprise systems, including BS2000, z/VSE, z/OS, Unix, Linux, and Microsoft Windows. Adabas is frequently used in conjunction with Software AG's programming language Natural; many applications that use Adabas as a database on the back end are developed with Natural. In 2016, Software AG announced that Adabas and Natural would be supported through the year 2050 and beyond.
Adabas is one of the three major inverted list DBMS packages, the other two being Computer Corporation of America’s Model 204 and ADR’s Datacom/DB.
4GL support
Since the 1979 introduction of Natural, the popularity of Adabas databases has grown. By 1990, SAS was supporting Adabas.
Non-relational
In a 2015 white paper, IBM said, "applications that are written in a pre-relational database, such as Adabas, are no longer mainstream and do not follow accepted IT industry standards." However, an Adabas database can be designed in accordance with the relational model. While there are tools and services to facilitate converting Adabas to various relational databases, such migrations are usually costly.
Hardware zIIP boost
IBM's zIIP (System z Integrated Information Processor) special purpose processors permit "direct, real-time SQL access to Adabas" (even though the data may still be stored in a non-relational form).
Adabas Data Model
Adabas is an acronym for Adaptable Data Base System (originally written in all caps; today only the initial capital is used for the product name).
Adabas is an inverted list database, with the following characteristics or terminology:
Works with tables (referred to as files) and rows (referred to as records) as the major organizational units
Columns (referred to as fields) are components of rows
No embedded SQL engine. SQL access via the Adabas SQL Gateway was introduced through an acquired company, CONNX, in 2004. It provides ODBC, JDBC, and OLE DB access to Adabas and enables SQL access to Adabas using COBOL programs.
Search facilities may use indexed fields or non-indexed fields or both.
Does not natively enforce referential integrity constraints, and parent-child relations must be maintained by application code.
Supports two methods of denormalization: repeating groups in a record ("periodic groups") and multiple value fields in a record ("multi-value fields").
Adabas is typically used in applications that require high volumes of data processing or in high transaction online analytical processing environments.
Adabas access is normally th |
https://en.wikipedia.org/wiki/Software%20AG | Software AG is a German multinational software corporation that develops enterprise software for business process management, integration, and big data analytics. Founded in 1969, the company is headquartered in Darmstadt, Germany, and has offices worldwide.
With over 10,000 enterprise customers in over 70 countries, it is the second largest software vendor in Germany, and the seventh largest in Europe. Software AG is traded on the Frankfurt Stock Exchange under the symbol "SOW" and is part of the technology index TecDAX.
In 2023, Silver Lake and Bain Capital made separate offers to buy the German company. In June, Software AG had most of its controlling interest acquired by Silver Lake, in a deal valued at 2.4 billion euros.
History
The company was founded in 1969 by six young employees at the consulting firm AIV (Institut für Angewandte Informationsverarbeitung). One of the founders was the mathematician Peter Schnell, who later became chairman of the board for many years.
ADABAS was launched in 1971 as a high-performance transactional database management system. In 1979, Natural, an English-like 4GL application development language mainly developed by Peter Pagé, was launched. The company continued to open offices and subsidiaries in North America (1971), Japan (1974), the UK (1977), France (1983), Spain (1984), Switzerland, Austria, Belgium, and Saudi Arabia (1985).
By 1987, Software AG had around 500 employees, 12 subsidiaries in Europe and offices in more than 50 countries. In 1999, Software AG was listed on Frankfurt Stock Exchange and soon after the company released Tamino Information Server and Tamino XML Server.
In January 2005, Software AG acquired Sabratec Ltd, a privately held legacy integration vendor headquartered in Israel. This was followed by the acquisition of its Israeli distributor SPL Software in March 2007 and the application modernization business of another Israeli company, Jacada in December 2007, which formed the basis for its research and development center in Israel.
The company launched the Centrasite SOA governance platform in 2006, and with the $546M acquisition of U.S. rival webMethods in 2007, Software AG became active in the enterprise service bus, business process management and service-oriented architecture (SOA) product space.
In July 2009, it announced a takeover offer for the Germany-based company IDS Scheer AG. Since February 2010, IDS Scheer has been part of the Software AG Group. In October 2010, the company acquired New Jersey-based Data Foundations, a leading master data management provider.
In May 2011, Software AG acquired Terracotta, Inc. and Metismo Ltd. Terracotta, Inc. works in the field of in-memory technology for high-performance applications and cloud services; the webMethods middleware platform in particular was expected to benefit from this acquisition.
In April 2012, Software AG (SAG) announced the acquisition of the British technology provider my-Channels. With it, Software AG gained in-house access to universal mess |
https://en.wikipedia.org/wiki/Lineage%20%28video%20game%29 | Lineage is a medieval fantasy, massively multiplayer online role-playing game (MMORPG) released in Korea and the United States in 1998 by the South Korean computer game developer NCSoft, based on a Korean comic book series of the same name. It is the first game in the Lineage series. It is most popular in Korea and is available in Chinese, Japanese, and English. The game was designed by Jake Song, who had previously designed Nexus: The Kingdom of the Winds, another MMORPG.
Lineage features 2D isometric-overhead graphics similar to those of Ultima Online and Diablo II. Lineage II: The Chaotic Chronicle, a "prequel" set 150 years before the time of Lineage, was released in 2003. By 2006, the Lineage franchise had attracted 43 million players. Lineage W and Project TL are planned sequels to be set after Lineage and will be the last two games in the Lineage series.
The North American servers were shut down on June 29, 2011 by NCSoft.
Gameplay
Lineage's stat, monster, and item system was originally largely borrowed from NetHack with MMO elements added.
Players can choose one of seven character classes: Elf, Dark Elf, Knight, Prince, Magician, Dragon Knight, or Illusionist. Princes are the only class that can lead a blood pledge (which is Lineage's term for a guild or clan).
Gameplay is based primarily upon a castle siege system which allows castle owners to set tax rates in neighboring cities and collect taxes on items purchased in stores within those cities. It features classic RPG elements reminiscent of Dungeons & Dragons, such as killing monsters and completing quests for loot and experience points, levels, character attributes (charisma, strength, wisdom, etc.), and alignments (neutral, chaotic or lawful). A character's alignment affects how monsters and town guards react to the player's character, often turning hostile to chaotic players and attacking on sight.
Player versus player combat (also known as PVP) is extensive in Lineage. Players can engage in combat with other player characters at any time as long as they are not in safe zones such as cities. By joining a "bloodpledge" (an association of players similar to a clan in other games) players become eligible to engage in castle sieges or wars between bloodpledges.
Development
The title Lineage came from a series of comic books with the same title Lineage by Shin Il-sook, and the servers of Lineage are named after the characters of the comic book. It is a fantasy story where a rightful prince reclaims the throne from the hands of a usurper. When first created, the game closely resembled the original work. As developers have added new features, however, the fictional universes of the two works have gradually diverged.
Reception
NCSoft reported that Lineage had at one point more than three million subscribers, most of them in Korea. The magnitude of the number of Korean subscribers compared to other countries has sparked a number of theories. A ban on some Japanese imports until 1 |
https://en.wikipedia.org/wiki/DOS%20memory%20management | In IBM PC compatible computing, DOS memory management refers to software and techniques employed to give applications access to more than 640 kibibytes (640*1024 bytes) (KiB) of "conventional memory". The 640 KiB limit was specific to the IBM PC and close compatibles; other machines running MS-DOS had different limits, for example the Apricot PC could have up to 768 KiB and the Sirius Victor 9000, 896 KiB. Memory management on the IBM family was made complex by the need to maintain backward compatibility to the original PC design and real-mode DOS, while allowing computer users to take advantage of large amounts of low-cost memory and new generations of processors. Since DOS has given way to Microsoft Windows and other 32-bit operating systems not restricted by the original arbitrary 640 KiB limit of the IBM PC, managing the memory of a personal computer no longer requires the user to manually manipulate internal settings and parameters of the system.
The 640 KiB limit imposed great complexity on hardware and software intended to circumvent it; the physical memory in a machine could be organised as a combination of base or conventional memory (including lower memory), upper memory, high memory (not the same as upper memory), extended memory, and expanded memory, all handled in different ways.
Conventional memory
The Intel 8088 processor used in the original IBM PC had 20 address lines and so could directly address 1 MiB (2^20 bytes) of memory. Different areas of this address space were allocated to different kinds of memory used for different purposes. Starting at the lowest end of the address space, the PC had read/write random access memory (RAM) installed, which was used by DOS and application programs. The first part of this memory was installed on the motherboard of the system (in very early machines, 64 KiB, later revised to 256 KiB). Additional memory could be added with cards plugged into the expansion slots; each card contained straps or switches to control what part of the address space accesses memory and devices on that card.
On the IBM PC, all the address space up to 640 KiB was available for RAM. This part of the address space is called "conventional memory" since it is accessible to all versions of DOS automatically on startup. Segment 0, the first 64 KiB of conventional memory, is also called low memory area. Normally expansion memory is set to be contiguous in the address space with the memory on the motherboard. If there was an unallocated gap between motherboard memory and the expansion memory, the memory would not be automatically detected as usable by DOS.
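As a small illustration of how the first megabyte of address space is carved up, the sketch below classifies a 20-bit physical address into conventional memory or the upper memory area described in the next section. The boundaries follow the figures given in this article; the segment:offset helper reflects real-mode addressing in general and is included only as an assumption for the example.

```python
# Sketch: classify a 20-bit real-mode physical address into the regions
# described in this article. The segment:offset formula is general
# real-mode behaviour, not something specific to the text above.
def physical(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF   # 20 address lines -> 1 MiB

def region(addr):
    if addr < 0xA0000:
        return "conventional memory (first 640 KiB)"
    if addr <= 0xFFFFF:
        return "upper memory area (0xA0000-0xFFFFF)"
    return "beyond the 8088's 1 MiB address space"

print(region(physical(0xB800, 0x0000)))   # video memory lies in the UMA
```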
Upper memory area
The upper memory area (UMA) refers to the address space between 640 KiB and 1024 KiB (0xA0000–0xFFFFF). The 128 KiB region between 0xA0000 and 0xBFFFF was reserved for VGA screen memory and legacy SMM. The 128 KiB region between 0xC0000 and 0xDFFFF was reserved for device Option ROMs, including Video BIOS. The 64 KiB of the address space from 0xE |
https://en.wikipedia.org/wiki/Visual%20inspection | Visual inspection is a common method of quality control, data acquisition, and data analysis.
Visual inspection, used in the maintenance of facilities, means inspection of equipment and structures using any or all of the raw human senses, such as vision, hearing, touch and smell, and/or any non-specialized inspection equipment.
Inspections requiring ultrasonic, X-ray, infrared, or similar equipment are not typically regarded as visual inspection, as these inspection methodologies require specialized equipment, training and certification.
Quality control
A study of the visual inspection of small integrated circuits found that the modal duration of eye fixations of trained inspectors was about 200 ms. The most accurate inspectors made the fewest eye fixations and were the fastest. When the same chip was judged more than once by an individual inspector the consistency of judgment was very high whereas the consistency between inspectors was somewhat less. Variation by a factor of six in inspection speed led to variation of less than a factor of two in inspection accuracy. Visual inspection had a false positive rate of 2% and a false negative rate of 23%.
Humorous terminology
To do an eyeball search is to look for something specific in a mass of code or data with one's own eyes, as opposed to using some sort of pattern matching software like grep or any other automated search tool. Also known as vgrep or ogrep, i.e., "visual/optical grep", and in the IBM mainframe world as IEBIBALL. The most important application of eyeball search / vgrep in software engineering is vdiff.
In various disciplines it is also called the "eyeball technique" or "eyeball method" (of data assessment).
"Eyeballing" is the most common and readily available method of initial data assessment.
Experts in pattern recognition maintain that the "eyeball" technique is still the most effective procedure for searching arbitrary, possibly unknown structures in data.
In the military, applying this sort of search to real-world terrain is often referred to as "using the Mark I Eyeball" device (pronounced as Mark One Eyeball), the U.S. military having adopted the term in the 1950s. The term is an allusion to military nomenclature, "Mark I" being the first version of a military vehicle or weapon.
See also
Automated optical inspection
Inspection
Inspection (medicine)
Statistical graphics
Visual search
Visual comparison
References
Quality control
Data analysis
Vision
Computer humor
Computer jargon
Military humor
Nondestructive testing |
https://en.wikipedia.org/wiki/JavaCC | JavaCC (Java Compiler Compiler) is an open-source parser generator and lexical analyzer generator written in the Java programming language.
JavaCC is similar to yacc in that it generates a parser from a formal grammar written in EBNF notation. Unlike yacc, however, JavaCC generates top-down parsers. JavaCC can resolve choices based on the next k input tokens, and so can handle LL(k) grammars automatically; by use of "lookahead specifications", it can also resolve choices requiring unbounded look ahead. JavaCC also generates lexical analyzers in a fashion similar to lex. The tree builder that accompanies it, JJTree, constructs its trees from the bottom up.
JavaCC is licensed under a BSD license.
History
In 1996, Sun Microsystems released a parser generator called Jack. The developers responsible for Jack created their own company called Metamata and changed the Jack name to JavaCC. Metamata eventually became part of WebGain. After WebGain shut down its operations, JavaCC was moved to its current home.
Uses
Software built using JavaCC includes:
Apache Derby
BeanShell
FreeMarker
PMD
Vaadin
Apache Lucene
JavaParser
Judoscript
See also
ANTLR
SableCC
Coco/R
parboiled
References
External links
Java Compiler Compiler (JavaCC) - The Java Parser Generator
JavaCC's New Official Website by April 2017
JavaCC Tutorial
JavaCC FAQ
A JavaCC book - Generating Parsers with JavaCC
Parser generators
Java development tools
Free software programmed in Java (programming language)
Software using the BSD license |
https://en.wikipedia.org/wiki/Omega%20%28video%20game%29 | Omega is a video game developed and published by Origin Systems in 1989. It was directed by Stuart B. Marks.
The player assumes the role of a cyber-tank designer and programmer, with the objective of creating tanks to defeat increasingly difficult opponents. The game emphasizes programming the tank, using a built-in text editor with artificial intelligence script commands similar to BASIC. Tanks can communicate and coordinate actions, and successful designs tend to be automated. Code is cross-platform, allowing Apple, Commodore, and IBM users to compete against each other.
The game received positive reviews, with Compute! praising its ease of use for newcomers to programming. Computer Gaming World acknowledged its similarities to RobotWar, while noting its improvements. Games International magazine awarded it 4 stars out of 5, highlighting its unique gameplay and requirement for strategic thinking.
Gameplay
The game puts the player in the role of a cyber-tank designer and programmer. Given a limited budget, the player must design a tank that can defeat a series of ever more challenging opponent tanks. Each successful design yields a higher security clearance and a larger budget, ultimately resulting in an OMEGA clearance and an unlimited budget. The focus of the game is not on the combat but on programming the tank itself.
Tanks are programmed using a built-in text editor that allows the player to use various artificial intelligence script commands, similar in structure to BASIC. These commands permit control of various aspects of the tank, and also allow teams of tanks to communicate and coordinate actions. While commands exist that enable a range of control over the tank, successful designs tend to be automated. Decision making is an important part of the design process, as the programming must reflect the equipment placed on the tank.
Code was cross-platform, so Apple, Commodore, and IBM users could compete against each other. Origin operated a bulletin board system for Omega owners.
Reception
Compute! praised Omega, stating that it made writing code for tanks easy and fun for those new to programming.
Russell Sipe of Computer Gaming World in 1989 gave the game a positive review, noting its similarities and improvements over RobotWar. In a 1992 survey of science fiction games the magazine gave the title two of five stars, stating that "Programmers loved this 'simulation' [but] it's all greek to me".
John Inglis reviewed Omega for Games International magazine, and gave it 4 stars out of 5, and stated that "In summary I would say that Omega is a unique game that has had a considerable amount of thought lavished on it. It is not a game for the shoot-em-up enthusiast, as you must put considerable though in before you get anywhere."
References
External links
Omega Game Documentation
1989 video games
Amiga games
Apple II games
Apple IIGS games
Atari ST games
Commodore 64 games
DOS games
Classic Mac OS games
NEC PC-9801 games
Progra |
https://en.wikipedia.org/wiki/Ray%20Solomonoff | Ray Solomonoff (July 25, 1926 – December 7, 2009) was the inventor of algorithmic probability, his General Theory of Inductive Inference (also known as Universal Inductive Inference), and was a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.
Solomonoff first described algorithmic probability in 1960, publishing the theorem that launched Kolmogorov complexity and algorithmic information theory. He first described these results at a conference at Caltech in 1960, and in a report, Feb. 1960, "A Preliminary Report on a General Theory of Inductive Inference." He clarified these ideas more fully in his 1964 publications, "A Formal Theory of Inductive Inference," Part I and Part II.
Algorithmic probability is a mathematically formalized combination of Occam's razor, and the Principle of Multiple Explanations.
It is a machine independent method of assigning a probability value to each hypothesis (algorithm/program) that explains a given observation, with the simplest hypothesis (the shortest program) having the highest probability and the increasingly complex hypotheses receiving increasingly small probabilities.
Solomonoff founded the theory of universal inductive inference, which is based on solid philosophical foundations and has its root in Kolmogorov complexity and algorithmic information theory. The theory uses algorithmic probability in a Bayesian framework. The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability. This enables Bayes' rule (of causation) to be used to predict the most likely next event in a series of events, and how likely it will be.
Although he is best known for algorithmic probability and his general theory of inductive inference, he made many other important discoveries throughout his life, most of them directed toward his goal in artificial intelligence: to develop a machine that could solve hard problems using probabilistic methods.
Life history through 1964
Ray Solomonoff was born on July 25, 1926, in Cleveland, Ohio, son of Jewish Russian immigrants Phillip Julius and Sarah Mashman Solomonoff. He attended Glenville High School, graduating in 1944. In 1944 he joined the United States Navy as Instructor in Electronics. From 1947–1951 he attended the University of Chicago, studying under Professors such as Rudolf Carnap and Enrico Fermi, and graduated with an M.S. in Physics in 1951.
From his earliest years he was motivated by the pure joy of mathematical discovery and by the desire to explore where no one had gone before. At the age of 16, in 1942, he began to search for a general method to solve mathematical problems.
In 1952 he met Marvin Minsky, John McCarthy and others interested in machine intelligence. In 1956 Minsky and McCarthy and others organized the Dartmouth Su |
https://en.wikipedia.org/wiki/Algorithmic%20probability | In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s.
It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the method together with Bayes' rule to obtain probabilities of prediction for an algorithm's future outputs.
In the mathematical formalism used, the observations have the form of finite binary strings viewed as outputs of Turing machines, and the universal prior is a probability distribution over the set of finite binary strings calculated from a probability distribution over programs (that is, inputs to a universal Turing machine). The prior is universal in the Turing-computability sense, i.e. no string has zero probability. It is not computable, but it can be approximated.
Overview
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning; given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal inductive inference theory, Solomonoff's is mathematically rigorous.
Four principal inspirations for Solomonoff's algorithmic probability were: Occam's razor, Epicurus' principle of multiple explanations, modern computing theory (e.g. use of a universal Turing machine) and Bayes’ rule for prediction.
Occam's razor and Epicurus' principle are essentially two different non-mathematical approximations of the universal prior.
Occam's razor: among the theories that are consistent with the observed phenomena, one should select the simplest theory.
Epicurus' principle of multiple explanations: if more than one theory is consistent with the observations, keep all such theories.
At the heart of the universal prior is an abstract model of a computer, such as a universal Turing machine. Any abstract computer will do, as long as it is Turing-complete, i.e. every computable function has at least one program that will compute its application on the abstract computer.
The abstract computer is used to give precise meaning to the phrase "simple explanation". In the formalism used, explanations, or theories of phenomena, are computer programs that generate observation strings when run on the abstract computer. Each computer program is assigned a weight corresponding to its length. The universal probability distribution is the probability distribution on all possible output strings with random input, assigning for each finite output prefix q the sum of the probabilities of the programs that compute something starting with q. Thus, a simple explanation is a short computer program. A complex explanation is a long |
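In symbols, the weighting just described assigns an output prefix q the total weight of all programs whose output begins with q. One standard way to write this, assuming a universal prefix Turing machine U and the usual 2^{-length} program weights, is:

```latex
% Algorithmic (Solomonoff) probability of an output prefix q,
% for a universal prefix Turing machine U:
M(q) \;=\; \sum_{p \,:\, U(p) = q*} 2^{-|p|}
% where |p| is the length of program p in bits and "U(p) = q*" means that
% the output of U on input p starts with q. Shorter programs (simpler
% explanations) contribute exponentially larger weight.
```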
https://en.wikipedia.org/wiki/Leonid%20Levin | Leonid Anatolievich Levin (born November 2, 1948) is a Soviet-American mathematician and computer scientist.
He is known for his work in randomness in computing, algorithmic complexity and intractability, average-case complexity, foundations of mathematics and computer science, algorithmic probability, theory of computation, and information theory. He obtained his master's degree at Moscow University in 1970 where he studied under Andrey Kolmogorov and completed the Candidate Degree academic requirements in 1972.
He and Stephen Cook independently discovered the existence of NP-complete problems. This NP-completeness theorem, often called the Cook–Levin theorem, was a basis for one of the seven Millennium Prize Problems declared by the Clay Mathematics Institute with a $1,000,000 prize offered. The Cook–Levin theorem was a breakthrough in computer science and an important step in the development of the theory of computational complexity.
Levin was awarded the Knuth Prize in 2012 for his discovery of NP-completeness and the development of average-case complexity.
He is a member of the US National Academy of Sciences and a fellow of the American Academy of Arts and Sciences.
Biography
He obtained his master's degree at Moscow University in 1970 where he studied under Andrey Kolmogorov and completed the Candidate Degree academic requirements in 1972. After researching algorithmic problems of information theory at the Moscow Institute of Information Transmission of the National Academy of Sciences in 1972–1973, and a position as senior research scientist at the Moscow National Research Institute of Integrated Automation for the Oil/Gas Industry in 1973–1977, he emigrated to the U.S. in 1978 and also earned a Ph.D. at the Massachusetts Institute of Technology (MIT) in 1979. His advisor at MIT was Albert R. Meyer.
He is well known for his work in randomness in computing, algorithmic complexity and intractability, average-case complexity, foundations of mathematics and computer science, algorithmic probability, theory of computation, and information theory.
His life is described in a chapter of the book Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists.
Levin and Stephen Cook independently discovered the existence of NP-complete problems. This NP-completeness theorem, often called the Cook–Levin theorem, was a basis for one of the seven Millennium Prize Problems declared by the Clay Mathematics Institute with a $1,000,000 prize offered. The Cook–Levin theorem was a breakthrough in computer science and an important step in the development of the theory of computational complexity. Levin's journal article on this theorem was published in 1973; he had lectured on the ideas in it for some years before that time (see Trakhtenbrot's survey), though complete formal writing of the results took place after Cook's publication.
Levin was awarded the Knuth Prize in 2012 for his discovery of NP-completeness and the devel |
https://en.wikipedia.org/wiki/Algorithmic%20complexity | Algorithmic complexity may refer to:
In algorithmic information theory, the complexity of a particular string in terms of all algorithms that generate it.
Solomonoff–Kolmogorov–Chaitin complexity, the most widely used such measure.
In computational complexity theory, although it would be a non-formal usage of the term, the time/space complexity of a particular problem in terms of all algorithms that solve it with computational resources (i.e., time or space) bounded by a function of the input's size.
Or it may refer to the time/space complexity of a particular algorithm with respect to solving a particular problem (as above), which is a notion commonly found in analysis of algorithms.
Time complexity is the amount of computer time it takes to run an algorithm. |
https://en.wikipedia.org/wiki/Interface%20bloat | In software design, interface bloat (also called fat interfaces by Bjarne Stroustrup and Refused Bequests by Martin Fowler) occurs when an interface incorporates too many operations on some data, only to find that most of the objects implementing it cannot perform all of the given operations.
Interface bloat is an example of an anti-pattern. One might consider using the visitor pattern, the adapter pattern, or interface segregation instead.
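A minimal sketch of the segregation alternative just mentioned, with invented interface and class names: instead of one fat interface that forces every implementor to claim every operation, clients depend only on the small role interfaces they actually use.

```python
# Fat interface: every document type is forced to promise it can do everything.
from abc import ABC, abstractmethod

class FatDocument(ABC):                 # interface bloat
    @abstractmethod
    def render(self): ...
    @abstractmethod
    def print_hardcopy(self): ...
    @abstractmethod
    def spell_check(self): ...          # meaningless for, say, an image

# Interface segregation: small role interfaces, implemented only where they apply.
class Renderable(ABC):
    @abstractmethod
    def render(self): ...

class SpellCheckable(ABC):
    @abstractmethod
    def spell_check(self): ...

class Image(Renderable):                # no bogus spell_check to refuse
    def render(self):
        return "pixels"
```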
Anti-patterns
Computer programming folklore
Software engineering folklore |
https://en.wikipedia.org/wiki/Social%20Liberal%20Party%20%28Tunisia%29 | The Social Liberal Party, abbreviated to PSL, is an opposition liberal political party in Tunisia. The party is a member of the Liberal International and the Africa Liberal Network.
The party was founded in September 1988 under the name "Social Party for Progress" (), but was renamed in October 1993 to reflect its liberal ideology. At the 1999 election, it won election for the first time, winning its first two seats in the Chamber of Deputies.
The party retained these two seats at the 2004 election, when its candidate Mohamed Mouni Béji also won 0.8% at the presidential elections. In 2005, Mongi Khamassi, one of the party's founders, split to form the Green Party for Progress. Despite this, the PSL quadrupled its seats to eight in the 2009 election, making it the fifth-largest party.
As well as liberal social and political reforms, the PSL advocates economic liberalisation, including the privatisation of state-owned firms.
See also
Liberalism
Social liberalism
Contributions to liberal theory
Liberalism worldwide
List of liberal parties
Liberal democracy
Notes
External links
Social Liberal Party archived official site
1988 establishments in Tunisia
Liberal International
Liberal parties in Tunisia
Political parties established in 1988
Political parties in Tunisia |
https://en.wikipedia.org/wiki/Maximum%20flow%20problem | In optimization theory, maximum flow problems involve finding a feasible flow through a flow network that obtains the maximum possible flow rate.
The maximum flow problem can be seen as a special case of more complex network flow problems, such as the circulation problem. The maximum value of an s-t flow (i.e., flow from source s to sink t) is equal to the minimum capacity of an s-t cut (i.e., cut severing s from t) in the network, as stated in the max-flow min-cut theorem.
History
The maximum flow problem was first formulated in 1954 by T. E. Harris and F. S. Ross as a simplified model of Soviet railway traffic flow.
In 1955, Lester R. Ford, Jr. and Delbert R. Fulkerson created the first known algorithm, the Ford–Fulkerson algorithm. In their 1955 paper, Ford and Fulkerson wrote that the problem of Harris and Ross is formulated as follows (see p. 5): Consider a rail network connecting two cities by way of a number of intermediate cities, where each link of the network has a number assigned to it representing its capacity. Assuming a steady state condition, find a maximal flow from one given city to the other. In their 1962 book Flows in Networks, Ford and Fulkerson wrote: It was posed to the authors in the spring of 1955 by T. E. Harris, who, in conjunction with General F. S. Ross (Ret.), had formulated a simplified model of railway traffic flow, and pinpointed this particular problem as the central one suggested by the model [11]. Here [11] refers to the 1955 secret report Fundamentals of a Method for Evaluating Rail net Capacities by Harris and Ross (see p. 5).
Over the years, various improved solutions to the maximum flow problem were discovered, notably the shortest augmenting path algorithm of Edmonds and Karp and independently Dinitz; the blocking flow algorithm of Dinitz; the push-relabel algorithm of Goldberg and Tarjan; and the binary blocking flow algorithm of Goldberg and Rao. The algorithms of Sherman and Kelner, Lee, Orecchia and Sidford, respectively, find an approximately optimal maximum flow but only work in undirected graphs.
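As a concrete reference point for the augmenting-path family mentioned above, here is a short Edmonds–Karp sketch: BFS finds a shortest (fewest-edge) augmenting path in the residual graph, and flow is pushed along it until no path remains. The dense capacity-matrix encoding is an assumption made for brevity.

```python
# Edmonds-Karp: repeatedly augment along a shortest (fewest-edge) s-t path
# found by BFS in the residual graph. Capacities are given as a dense matrix.
from collections import deque

def max_flow(capacity, s, t):
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:              # no augmenting path left: done
            break
        bottleneck, v = float("inf"), t  # smallest residual capacity on the path
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                    # push the bottleneck amount of flow
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
    return flow

# Tiny example: two disjoint s-t paths of capacity 3 and 2 give max flow 5.
cap = [[0, 3, 2, 0],
       [0, 0, 0, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # 5
```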
In 2013, James B. Orlin published a paper describing an O(|V||E|)-time algorithm.
In 2022, Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva published an almost-linear time algorithm running in O(|E|^(1+o(1))) time for the minimum-cost flow problem, of which the maximum flow problem is a particular case. For the single-source shortest path (SSSP) problem with negative weights (another particular case of the minimum-cost flow problem) an algorithm in almost-linear time has also been reported. Both algorithms were deemed best papers at the 2022 Symposium on Foundations of Computer Science.
Definition
First we establish some notation:
Let N = (V, E) be a network with s, t ∈ V being the source and the sink of N respectively.
If f is a function on the edges of N, then its value on an edge (u, v) is denoted by f_uv or f(u, v).
Definition. The capacity of an edge is the maximum amount of flow that can pass through an edge. Formally it is a |
https://en.wikipedia.org/wiki/Path%20%28computing%29 | A path is a string of characters used to uniquely identify a location in a directory structure. It is composed by following the directory tree hierarchy in which components, separated by a delimiting character, represent each directory. The delimiting character is most commonly the slash ("/"), the backslash character ("\"), or colon (":"), though some operating systems may use a different delimiter. Paths are used extensively in computer science to represent the directory/file relationships common in modern operating systems and are essential in the construction of Uniform Resource Locators (URLs). Resources can be represented by either absolute or relative paths.
History
Multics first introduced a hierarchical file system with directories (separated by ">") in the mid-1960s.
Around 1970, Unix introduced the slash character ("/") as its directory separator.
In 1981, the first version of Microsoft DOS was released. MS-DOS 1.0 did not support file directories. Also, a major portion of the utility commands packaged with MS-DOS 1.0 came from IBM and their command line syntax used the slash character as a 'switch' prefix. For example, dir /w runs the dir command with the wide list format option.
This use of slash can still be found in the command interface under Microsoft Windows. By contrast, Unix uses the dash ("-") character as a command-line switch prefix.
When directory support was added to MS-DOS in version 2.0, "/" was kept as the switch prefix character for backwards compatibility. Microsoft chose the backslash character ("\") as a directory separator, which looks similar to the slash character, though more modern versions of Windows are slash-agnostic, allowing both types of slashes to be mixed in a path.
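As an illustration of this slash-agnostic behaviour, Python's standard pathlib module (used here purely as an example; the path strings are invented) treats the two separators in a Windows-style path as equivalent:

```python
from pathlib import PureWindowsPath

# Both spellings name the same location; pathlib normalises to backslashes.
p1 = PureWindowsPath(r"C:\Users\alice\docs\report.txt")
p2 = PureWindowsPath("C:/Users/alice/docs/report.txt")
print(p1 == p2)   # True
print(p2)         # C:\Users\alice\docs\report.txt
```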
Absolute and relative paths
An absolute or full path points to the same location in a file system, regardless of the current working directory. To do that, it must include the root directory.
By contrast, a relative path starts from some given working directory, avoiding the need to provide the full absolute path. A filename can be considered as a relative path based at the current working directory. If the working directory is not the file's parent directory, a file not found error will result if the file is addressed by its name.
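A minimal sketch of the distinction in Python, using POSIX-style paths and invented directory names: the same relative name designates different files depending on which working directory it is resolved against, while an absolute path always names the same location.

```python
from pathlib import Path

rel = Path("data/report.txt")                    # relative: interpreted against the working directory
absolute = Path("/home/alice/data/report.txt")   # absolute: begins at the root, same file from anywhere

print(rel.is_absolute())        # False
print(absolute.is_absolute())   # True

# The same relative name points at different files under different working directories:
print(Path("/home/alice") / rel)   # /home/alice/data/report.txt
print(Path("/srv/app") / rel)      # /srv/app/data/report.txt
```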
Base URL
A base URL is the consistent part of an API path, to which endpoint paths are appended.
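For example, a client typically stores the base URL once and appends endpoint paths to it. The sketch below uses Python's urllib.parse.urljoin; the host and endpoint names are hypothetical.

```python
from urllib.parse import urljoin

BASE_URL = "https://api.example.com/v1/"   # hypothetical base URL (note the trailing slash)

# Endpoint paths are appended to the consistent base.
print(urljoin(BASE_URL, "users"))           # https://api.example.com/v1/users
print(urljoin(BASE_URL, "users/42/posts"))  # https://api.example.com/v1/users/42/posts
```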
Representations of paths by operating system and shell
Japanese and Korean versions of Windows often display the '¥' character or the '₩' character instead of the directory separator. In such cases the character code for a backslash is simply being drawn with these glyphs. Very early versions of MS-DOS replaced the backslash with these glyphs on the display so that programs that only understood 7-bit ASCII could still display them (other characters such as the square brackets were replaced as well; see ISO 646, Windows Codepage 932 (Japanese Shift JIS), and Codepage 949 (Korean)). Although even the first versio |
https://en.wikipedia.org/wiki/Radeon | Radeon () is a brand of computer products, including graphics processing units, random-access memory, RAM disk software, and solid-state drives, produced by Radeon Technologies Group, a division of AMD. The brand was launched in 2000 by ATI Technologies, which was acquired by AMD in 2006 for US$5.4 billion.
Radeon Graphics
Radeon Graphics is the successor to the Rage line. Three different families of microarchitectures can be roughly distinguished, the fixed-pipeline family, the unified shader model-families of TeraScale and Graphics Core Next. ATI/AMD have developed different technologies, such as TruForm, HyperMemory, HyperZ, XGP, Eyefinity for multi-monitor setups, PowerPlay for power-saving, CrossFire (for multi-GPU) or Hybrid Graphics. A range of SIP blocks is also to be found on certain models in the Radeon products line: Unified Video Decoder, Video Coding Engine and TrueAudio.
The brand was previously only known as "ATI Radeon" until August 2010, when it was renamed to increase AMD's brand awareness on a global scale. Products up to and including the HD 5000 series are branded as ATI Radeon, while the HD 6000 series and beyond use the new AMD Radeon branding.
On 11 September 2015, AMD's GPU business was split into a separate unit known as Radeon Technologies Group, with Raja Koduri as Senior Vice President and chief architect.
Radeon Graphics card brands
AMD does not distribute Radeon cards directly to consumers (though some exceptions can be found). Instead, it sells Radeon GPUs to third-party manufacturers, who build and sell the Radeon-based video cards to the OEM and retail channels. Manufacturers of the Radeon cards—some of whom also make motherboards—include ASRock, Asus, Biostar, Club 3D, Diamond, Force3D, Gainward, Gigabyte, HIS, MSI, PowerColor, Sapphire, VisionTek, and XFX.
Graphics processor generations
Early generations were identified with a number and major/minor alphabetic prefix. Later generations were assigned code names. New or heavily redesigned architectures have a prefix of R (e.g., R300 or R600) while slight modifications are indicated by the RV prefix (e.g., RV370 or RV635).
The first derivative architecture, RV200, did not follow the scheme used by later parts.
Fixed-pipeline family
R100/RV200
The Radeon, first introduced in 2000, was ATI's first graphics processor to be fully DirectX 7 compliant. R100 brought with it large gains in bandwidth and fill-rate efficiency through the new HyperZ technology.
The RV200 was a die-shrink of the former R100 with some core logic tweaks for clockspeed, introduced in 2001. The only release in this generation was the Radeon 7500, which introduced little in the way of new features but offered substantial performance improvements over its predecessors.
R200
ATI's second generation Radeon included a sophisticated pixel shader architecture. This chipset implemented Microsoft's pixel shader 1.4 specification for the first time.
Its performance relative to competitor |
https://en.wikipedia.org/wiki/Trigraph | Trigraph may refer to:
Computing
Digraphs and trigraphs, a group of characters used to symbolise one character
An octal or decimal representation of byte values
Mnemonics for machine language instructions
As language codes in ISO 639
Cryptography
As substitution group in a substitution cipher
As combinations in the Ling Qi Jing
Mathematics
As a generalization of graphs where there is a set of edges called semi-adjacent
Other uses
Trigraph (orthography), a sound representation in orthography
See also
Digraph (disambiguation)
Tetragraph
Multigraph (disambiguation) |