https://en.wikipedia.org/wiki/The%20Province | The Province is a daily newspaper published in tabloid format in British Columbia by Pacific Newspaper Group, a division of Postmedia Network, alongside the Vancouver Sun broadsheet newspaper. Together, they are British Columbia's only two major newspapers.
Formerly a broadsheet, The Province later switched to a tabloid paper size. It publishes daily except Saturdays, Mondays (as of October 17, 2022) and selected holidays.
History
The Province was established as a weekly newspaper in Victoria in 1894. A 1903 article in the Pacific Monthly described the Province as the largest and the youngest of Vancouver's important newspapers.
In 1923, the Southam family bought The Province. In 1945 the paper's printers went on strike. The Province had been the best-selling newspaper in Vancouver, ahead of the Vancouver Sun and News Herald; as a result of the six-week strike, it lost significant market share, at one point falling to third place. In 1957, The Province and the Vancouver Sun were sold to Pacific Press Limited, which was jointly owned by both newspaper companies.
A 1970 strike by Pacific Press employees shut down the Sun and Province for three months; in the interim, the Vancouver Express published daily editions. The strike ended on May 13 and resulted in increased pay for employees and a trustee pension fund governed by a board that included management and union representatives.
Circulation
The Province, like most Canadian daily newspapers, has seen a decline in circulation. Its total circulation dropped to 114,467 copies daily between 2009 and 2015.
Daily average
Notable journalists
Kim Bolan
Jim Coleman
Lukin Johnston
Hugh George Egioke Savage
Tony Gallagher
CFCB/CKCD radio station
At 2 p.m. on March 23, 1922, the Province launched radio station CFCB, with news and stock market reports. There were news bulletins throughout the day, followed by music. Sign off was at 10 p.m. The station's name changed to CKCD in 1923 and it moved to 730 kHz in 1925. In 1933 the paper turned its operations over to the Pacific Broadcasting Co., while continuing to supply news reports to the station.
In 1936, the newly formed Canadian Broadcasting Corporation, established to function as both broadcaster and broadcasting regulator (taking over the latter function from previous regulator the Department of Marine and Fisheries), asked CKCD to relinquish its licence, and the station signed off for the last time in February 1940.
See also
List of newspapers in Canada
Wait for Me, Daddy, 1940 photograph
Media in Vancouver
References
External links
Newspapers published in Vancouver
Postmedia Network publications
Newspapers established in 1898
Daily newspapers published in British Columbia
1898 establishments in British Columbia |
https://en.wikipedia.org/wiki/Assertion%20%28software%20development%29 | In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects.
For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception.
Details
The following code contains two assertions, x > 0 and x > 1, and they are indeed true at the indicated points during execution:
x = 1;
assert x > 0;
x++;
assert x > 1;
Programmers can use assertions to help specify programs and to reason about program correctness. For example, a precondition—an assertion placed at the beginning of a section of code—determines the set of states under which the programmer expects the code to execute. A postcondition—placed at the end—describes the expected state at the end of execution. For example: x > 0 { x++ } x > 1.
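The Hoare-style triple above can be rendered as runnable checks. The following is a minimal Python sketch (the function name is ours, for illustration), with the precondition asserted on entry and the postcondition on exit:

```python
def increment(x):
    # Precondition: the caller must supply a positive value.
    assert x > 0, "precondition violated: x must be > 0"
    x += 1
    # Postcondition: incrementing a positive integer yields a value > 1.
    assert x > 1, "postcondition violated: x must be > 1"
    return x
```

Calling `increment(1)` returns 2; calling `increment(0)` fails the precondition and raises an `AssertionError` rather than silently producing a wrong result.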
The example above uses the notation for including assertions used by C. A. R. Hoare in his 1969 article. That notation cannot be used in existing mainstream programming languages. However, programmers can include unchecked assertions using the comment feature of their programming language. For example, in C++:
x = 5;
x = x + 1;
// {x > 1}
The braces included in the comment help distinguish this use of a comment from other uses.
Libraries may provide assertion features as well. For example, in C using glibc with C99 support:
#include <assert.h>

int f(void)
{
    int x = 5;
    x = x + 1;
    assert(x > 1);
    return x;
}
Several modern programming languages include checked assertions – statements that are checked at runtime or sometimes statically. If an assertion evaluates to false at runtime, an assertion failure results, which typically causes execution to abort. This draws attention to the location at which the logical inconsistency is detected and can be preferable to the behaviour that would otherwise result.
The use of assertions helps the programmer design, develop, and reason about a program.
Usage
In languages such as Eiffel, assertions form part of the design process; other languages, such as C and Java, use them only to check assumptions at runtime. In both cases, they can be checked for validity at runtime but can usually also be suppressed.
Assertions in design by contract
Assertions can function as a form of documentation: they can describe the state the code expects to find before it runs (its preconditions), and the state the code expects to result in when it is finished running (postconditions); they can also specify invariants of a class. Eif |
https://en.wikipedia.org/wiki/Boat%20anchor%20%28metaphor%29 | In amateur radio and computing, a boat anchor or boatanchor is something obsolete, useless, and cumbersome – so-called because metaphorically its only productive use is to be thrown into the water as a boat mooring. Terms such as brick, doorstop, and paperweight are similar.
Amateur radio
In amateur radio, a boat anchor or boatanchor is an old piece of radio equipment. It is usually used in reference to large, heavy radio equipment of earlier decades that used tubes. In this context boat anchors are often prized by their owners and their strengths (e.g. immunity to EMP) emphasised, even if newer equipment is more capable.
An early use of the term appeared in a 1956 issue of CQ Amateur Radio Magazine. The magazine published a letter from a reader seeking "schematics or conversion data" for a war surplus Wireless Set No. 19 MK II transceiver in order to modify it for use on the amateur bands. The editor added this reply:
The editor's use of the term generated some reader interest, and in February 1957, CQ published a follow-up story that included photos.
Computers
The metaphor transfers directly from old radios to old computers. It also has been extended to refer to relic software.
Hardware
Early computers were physically large and heavy devices. As computers became more compact, the term boat anchor became popular among users to signify that the earlier, larger computer gear was obsolete and no longer useful.
Software
The term boat anchor has been extended to software code that is left in a system's codebase, typically in case it is needed later. This is an anti-pattern, and it can cause many problems for anyone maintaining the program that contains the obsolete code. The key problem is that programmers have a hard time distinguishing obsolete code that does nothing from working code that does something. For example, a programmer investigating a bug in the program's input handling may search the codebase for code that calls the input-handling API. If they come across obsolete input-handling code, they may well start editing and debugging it, wasting valuable time before realising that the code they are working with is never executed and is therefore not part of the problem they are trying to solve. Other problems include longer compile times and the risk that programmers accidentally link working code into the defunct code, inadvertently resurrecting it. A recommended solution is to remove boat anchors from the code base and keep them in a separate location where they can be consulted if necessary but will not be compiled or mistaken for working code (for example, deleting them outright, knowing they remain available in the project's source control history).
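A minimal sketch of the problem (the function names here are hypothetical, invented for illustration): the obsolete handler below is never called, yet a maintainer searching for input-handling code may stumble into it first.

```python
def legacy_parse_input(raw):
    # Boat anchor: superseded long ago and never called, but still present.
    # A maintainer grepping for "parse" may waste time debugging this.
    return raw.strip().split(";")

def parse_input(raw):
    # The code that actually runs: comma-separated fields, whitespace trimmed.
    return [field.strip() for field in raw.split(",") if field.strip()]

def handle(raw):
    # legacy_parse_input is unreachable from any live code path.
    return parse_input(raw)
```

Deleting `legacy_parse_input` (and relying on version control to preserve it) leaves `handle` behaving identically while removing the trap.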
See also
Anti-pattern
Vintage amateur radio
Legacy system
References
External links
Origin of Ham Speak
BoatAnchor Manual Archive
|
https://en.wikipedia.org/wiki/Assertion | Assertion or assert may refer to:
Computing
Assertion (software development), a computer programming technique
assert.h, a header file in the standard library of the C programming language
Assertion definition language, a specification language providing a formal grammar to specify behaviour and interfaces for computer software
Logic and language
Logical assertion, a statement that asserts that a certain premise is true
Proof by assertion, an informal fallacy in which a proposition is repeatedly restated
Time of assertion, in linguistics a secondary temporal reference in establishing tense
Assertive, a speech act that commits a speaker to the truth of the expressed proposition
Other uses
Assert (horse) (1979–1995), an Irish racehorse
Assertions (auditing), the set of information that the statement preparer is providing in a financial statement audit
See also |
https://en.wikipedia.org/wiki/Negative%20cache | In computer programming, negative cache is a cache that also stores "negative" responses, i.e. failures. This means that a program remembers the result indicating a failure even after the cause has been corrected. Usually negative cache is a design choice, but it can also be a software bug.
Examples
Consider a web browser which attempts to load a page while the network is unavailable. The browser will receive an error code indicating the problem, and may display this error message to the user in place of the requested page. However, it is incorrect for the browser to place the error message in the page cache, as this would lead it to display the error again when the user tries to load the same page - even after the network is back up. The error message must not be cached under the page's URL; until the browser is able to successfully load the page, whenever the user tries to load the page, the browser must make a new attempt.
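The browser behaviour described above can be sketched as follows (the fetch function and cache here are hypothetical stand-ins, not real browser code): only successful responses are stored, so a failed load is retried on the next request instead of being replayed from cache.

```python
page_cache = {}

def fetch(url, network_up):
    # Stand-in for a real network fetch; fails while the network is down.
    if not network_up:
        raise ConnectionError("network unavailable")
    return f"<html>contents of {url}</html>"

def load_page(url, network_up):
    if url in page_cache:
        return page_cache[url]
    try:
        body = fetch(url, network_up)
    except ConnectionError as exc:
        # Do NOT cache the failure: report it without storing it, so the
        # next attempt hits the network rather than replaying the error.
        return f"error: {exc}"
    page_cache[url] = body  # cache positive responses only
    return body
```

With the network down, `load_page` returns the error; once the network is back, the very same call succeeds, because no negative entry was ever cached under the page's URL.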
A frustrating aspect of negative caches is that the user may put great effort into troubleshooting the problem, only to find that the error persists even after the root cause has been identified and removed.
There are cases where failure-like states must be cached. For instance, DNS requires that caching nameservers remember negative responses as well as positive ones. If an authoritative nameserver returns a negative response, indicating that a name does not exist, this is cached. The negative response may be perceived as a failure at the application level; however, to the nameserver caching it, it is not a failure. The cache times for negative and positive caching may be tuned independently.
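The DNS behaviour described above can be sketched like this (a simplified illustration, not a real resolver; class and parameter names are ours): negative answers are cached alongside positive ones, each with its own time-to-live.

```python
import time

class NegativeCache:
    # DNS-style cache in which "name does not exist" answers (stored as
    # None) are cached too, with an independently tuned time-to-live.
    def __init__(self, positive_ttl=300.0, negative_ttl=60.0):
        self.positive_ttl = positive_ttl
        self.negative_ttl = negative_ttl
        self._entries = {}  # name -> (answer_or_None, expiry_time)

    def put(self, name, answer, now=None):
        now = time.monotonic() if now is None else now
        ttl = self.positive_ttl if answer is not None else self.negative_ttl
        self._entries[name] = (answer, now + ttl)

    def get(self, name, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(name)
        if entry is None or entry[1] <= now:
            return "miss"   # not cached (or expired): query upstream
        return entry[0]     # may be None: a cached negative answer
```

Here a cached `None` is a valid, fresh answer ("the name does not exist"), distinct from a cache miss; once its shorter TTL lapses, the resolver asks upstream again.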
Description
A negative cache is normally desirable only when failure is very expensive and the error condition arises automatically, without user action. It creates a situation in which the user is unable to isolate the cause of the failure: despite fixing everything they can think of, the program still refuses to work. When a failure is cached, the program should provide a clear indication of what must be done to clear the cache, in addition to a description of the cause of the error. In such conditions a negative cache is an example of a design anti-pattern.
A negative cache may still recover once the cached records expire.
See also
Perl Design Patterns Book
References
Software bugs
Software anomalies
Cache (computing) |
https://en.wikipedia.org/wiki/Code%20smell | In computer programming, a code smell is any characteristic in the source code of a program that possibly indicates a deeper problem. Determining what is and is not a code smell is subjective, and varies by language, developer, and development methodology.
The term was popularised by Kent Beck on WardsWiki in the late 1990s. Usage of the term increased after it was featured in the 1999 book Refactoring: Improving the Design of Existing Code by Martin Fowler. It is also a term used by agile programmers.
Definition
One way to look at smells is with respect to principles and quality: "Smells are certain structures in the code that indicate violation of fundamental design principles and negatively impact design quality". Code smells are usually not bugs; they are not technically incorrect and do not prevent the program from functioning. Instead, they indicate weaknesses in design that may slow down development or increase the risk of bugs or failures in the future. Bad code smells can be an indicator of factors that contribute to technical debt. Robert C. Martin calls a list of code smells a "value system" for software craftsmanship.
Often the deeper problem hinted at by a code smell can be uncovered when the code is subjected to a short feedback cycle, where it is refactored in small, controlled steps, and the resulting design is examined to see if there are any further code smells that in turn indicate the need for more refactoring. From the point of view of a programmer charged with performing refactoring, code smells are heuristics to indicate when to refactor, and what specific refactoring techniques to use. Thus, a code smell is a driver for refactoring.
A 2015 study utilizing automated analysis for half a million source code commits and the manual examination of 9,164 commits determined to exhibit "code smells" found that:
There exists empirical evidence for the consequences of "technical debt", but there exists only anecdotal evidence as to how, when, or why this occurs.
Common wisdom suggests that urgent maintenance activities and pressure to deliver features while prioritizing time-to-market over code quality are often the causes of such smells.
Tools such as Checkstyle, PMD, FindBugs, and SonarQube can automatically identify code smells.
Common code smells
Application-level smells
Mysterious name: functions, modules, variables or classes that are named in a way that does not communicate what they do or how to use them.
Duplicated code: identical or very similar code that exists in more than one location.
Contrived complexity: forced usage of overcomplicated design patterns where simpler design patterns would suffice.
Shotgun surgery: a single change that needs to be applied to multiple classes at the same time.
Uncontrolled side effects: side effects of coding that commonly cause runtime exceptions, with unit tests unable to capture the exact cause of the problem.
Variable mutations: mutations that vary widely enough that |
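Duplicated code, one of the smells listed above, can be sketched briefly (the pricing functions here are hypothetical, for illustration): the same arithmetic appears twice, so a change to the discount rule must be applied in two places, inviting shotgun surgery.

```python
# Smell: the same discount arithmetic duplicated in two locations.
def price_with_member_discount(price):
    return round(price - price * 0.10, 2)

def price_with_coupon_discount(price):
    return round(price - price * 0.10, 2)

# After refactoring: one definition, named once, changed in one place.
def apply_discount(price, rate=0.10):
    """Return price reduced by rate, rounded to cents."""
    return round(price * (1 - rate), 2)
```

After the refactoring, both call sites delegate to `apply_discount`, and a change to the rounding or rate logic is made exactly once.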
https://en.wikipedia.org/wiki/Citytv | Citytv (sometimes shortened to City, which was the network's official branding from 2012 to 2018) is a Canadian television network owned by the Rogers Sports & Media subsidiary of Rogers Communications. The network consists of six owned-and-operated (O&O) television stations located in the metropolitan areas of Toronto, Montreal, Winnipeg, Calgary, Edmonton, and Vancouver, a cable-only service that serves the province of Saskatchewan, and three independently owned affiliates serving smaller cities in Alberta and British Columbia.
The Citytv brand name originates from its flagship station, CITY-TV in Toronto, which went on the air on September 28, 1972 in the former Electric Circus nightclub and became known for an intensely local format based on newscasts aimed at younger viewers, nightly movies, and music and cultural programming. The Citytv brand first expanded with then-parent company CHUM Limited's acquisition of former Global owned-and-operated station CKVU-TV in Vancouver, followed by its purchase of Craig Media's stations and the re-branding of its A-Channel system in Central Canada as Citytv in August 2005. CHUM Limited was acquired by CTVglobemedia (now Bell Media) in 2007; to comply with Canadian Radio-television and Telecommunications Commission (CRTC) ownership limits, the Citytv stations were sold to Rogers. The network grew through further affiliations with three Jim Pattison Group-owned stations, along with Rogers' acquisition of the cable-only Saskatchewan Communications Network and Montreal's CJNT-DT.
While patterned after the original station in Toronto, since the 2000s, and particularly since its acquisition by Rogers, Citytv has moved towards a series-based prime time schedule much like its competitors, albeit one still focused on younger demographics.
History
The licence for the original Citytv station, granted the callsign CITY-TV by the CRTC, was awarded in Toronto on November 25, 1971, and the station began broadcasting under the "Citytv" brand on September 28, 1972, under the ownership of Channel Seventy-Nine Ltd., with its studios located at 99 Queen Street East near Church Street. The station was in debt by 1975. Multiple Access Ltd. (then-owners of CFCF-TV in Montreal) purchased a 45% interest in the station, and sold its stake to CHUM Limited three years later. CHUM Limited acquired the station outright in 1981. Broadcasting on UHF channel 79 during its first decade, the station moved to channel 57 in 1983, and then to channel 44 with the digital transition (though mapping as virtual channel 57.1). In 1987, the station moved its headquarters to 299 Queen Street West, formerly known as the Ryerson Press Building (later the CHUM-City Building), one of the most recognizable landmarks in the city.
Citytv gained a second station in Vancouver when CHUM bought CKVU-TV from Canwest Global Communications in 2001. The station became known as "Citytv Vancouver" on July 22, 2002. Prior to CH |
https://en.wikipedia.org/wiki/Constructor | Constructor may refer to:
Science and technology
Constructor (object-oriented programming), a special routine called to create and initialize an object
Constructors (Formula One), person or group who builds the chassis of a car in auto racing, especially Formula One
Constructor, an entity in Constructor theory, a theory of everything developed by physicist David Deutsch in 2012
Other uses
Constructor (video game), a 1997 PC game by Acclaim, the prequel of Constructor: Street Wars
Constructor Group AS, a Norwegian-based group specialising in shelving, racking and storage systems
Construction worker, a builder, especially a construction company or a manager of builders
See also
Assembler (disambiguation) |
https://en.wikipedia.org/wiki/Copy-and-paste%20programming | Copy-and-paste programming, sometimes referred to as just pasting, is the production of highly repetitive computer programming code, as produced by copy and paste operations. It is primarily a pejorative term; those who use the term are often implying a lack of programming competence. It may also be the result of technology limitations (e.g., an insufficiently expressive development environment) as subroutines or libraries would normally be used instead. However, there are occasions when copy-and-paste programming is considered acceptable or necessary, such as for boilerplate, loop unrolling (when not supported automatically by the compiler), or certain programming idioms, and it is supported by some source code editors in the form of snippets.
Origins
Copy-and-paste programming is often done by inexperienced or student programmers, who find the act of writing code from scratch difficult or irritating and prefer to search for a pre-written solution or partial solution they can use as a basis for their own problem solving.
(See also Cargo cult programming)
Inexperienced programmers who copy code often do not fully understand the pre-written code they are taking. As such, the problem arises more from their inexperience and lack of courage in programming than from the act of copying and pasting, per se. The code often comes from disparate sources such as friends' or co-workers' code, Internet forums, code provided by the student's professors/TAs, or computer science textbooks. The result risks being a disjointed clash of styles, and may have superfluous code that tackles problems for which new solutions are no longer required.
A further problem is that bugs can easily be introduced by assumptions and design choices made in the separate sources that no longer apply when placed in a new environment.
Such code may also, in effect, be unintentionally obfuscated, as the names of variables, classes, functions and the like are typically left unchanged, even though their purpose may be completely different in the new context.
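A hedged sketch of how an unchanged name survives a paste (the functions and field names here are invented for illustration): the second function was copied from the first to handle a different field, but one reference was never updated, so it silently computes the wrong total.

```python
def total_order_weight(items):
    # Original, correct code.
    total = 0.0
    for item in items:
        total += item["weight"]
    return total

def total_order_price(items):
    # Pasted from the function above; the key was never renamed, so this
    # function silently sums weights instead of prices.
    total = 0.0
    for item in items:
        total += item["weight"]   # bug: should be item["price"]
    return total
```

The pasted function runs without error on typical inputs, which is exactly why such defects escape casual testing: the stale name is valid in the new context, just wrong.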
Copy-and-paste programming may also be a result of poor understanding of features common in computer languages, such as loop structures, functions and subroutines.
Duplication
Applying library code
Copying and pasting is also done by experienced programmers, who often have their own libraries of well tested, ready-to-use code snippets and generic algorithms that are easily adapted to specific tasks.
Being a form of code duplication, copy-and-paste programming has some intrinsic problems; such problems are exacerbated if the code doesn't preserve any semantic link between the source text and the copies. In this case, if changes are needed, time is wasted hunting for all the duplicate locations. (This can be partially mitigated if the original code and/or the copy are properly commented; however, even then the problem remains of making the same edits multiple times. Also, because code maintenance often omits updating the comme |
https://en.wikipedia.org/wiki/Dynamic%20programming%20language | In computer science, a dynamic programming language is a class of high-level programming languages, which at runtime execute many common programming behaviours that static programming languages perform during compilation. These behaviors could include an extension of the program, by adding new code, by extending objects and definitions, or by modifying the type system. Although similar behaviors can be emulated in nearly any language, with varying degrees of difficulty, complexity and performance costs, dynamic languages provide direct tools to make use of them. Many of these features were first implemented as native features in the Lisp programming language.
Most dynamic languages are also dynamically typed, but not all are. Dynamic languages are frequently (but not always) referred to as scripting languages, although that term in its narrowest sense refers to languages specific to a given run-time environment.
Implementation
Eval
Some dynamic languages offer an eval function. This function takes a string or abstract syntax tree containing code in the language and executes it. If this code stands for an expression, the resulting value is returned. Erik Meijer and Peter Drayton distinguish the runtime code generation offered by eval from the dynamic loading offered by shared libraries, and warn that in many cases eval is used merely to implement higher-order functions (by passing functions as strings) or deserialization.
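Python's built-in `eval`, for example, behaves as described (a minimal sketch):

```python
# eval() evaluates a string as an expression in the running language.
expression = "2 * (3 + 4)"
result = eval(expression)  # → 14

# Names from the surrounding scope are visible by default.
x = 10
assert eval("x + 1") == 11

# Meijer and Drayton's caution applies: passing functions around as
# strings, as below, is usually better served by first-class functions.
square = eval("lambda n: n * n")
```

Because the string is parsed and evaluated at runtime, `eval` carries both a performance cost and a code-injection risk when applied to untrusted input.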
Object runtime alteration
A type or object system can typically be modified during runtime in a dynamic language. This can mean generating new objects from a runtime definition or based on mixins of existing types or objects. This can also refer to changing the inheritance or type tree, and thus altering the way that existing types behave (especially with respect to the invocation of methods).
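In Python, for instance, a method can be attached to a class while the program runs, and even instances created before the change observe it (a minimal sketch; the class is invented for illustration):

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()  # created before the class is altered

# Add a method to the existing class at runtime; all instances see it,
# because method lookup goes through the class object each call.
Greeter.shout = lambda self: self.greet().upper() + "!"
```

This kind of runtime alteration (often called monkey-patching) is exactly what static languages forbid: the class's set of methods is not fixed at compile time.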
Type inference
Because many dynamic languages have a dynamic type system, inferring the type of a value at runtime is a common task for the interpreter. Since a value's type may change during execution, such inference is routinely performed as part of primitive operations.
Variable memory allocation
Static programming languages (possibly indirectly) require the size of the memory a program uses to be fixed before compilation (unless worked around with pointer logic). Consistent with runtime alteration of objects, dynamic languages implicitly need to (re-)allocate memory in response to individual program operations.
Reflection
Reflection is common in many dynamic languages, and typically involves analysis of the types and metadata of generic or polymorphic data. It can, however, also include full evaluation and modification of a program's code as data, such as the features that Lisp provides in analyzing S-expressions.
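Python exposes this kind of reflection through built-ins such as `type`, `dir`, `hasattr`, and `getattr` (a minimal sketch):

```python
import math

value = 3.5

# Inspect the type and capabilities of a value at runtime.
kind = type(value).__name__          # the runtime type's name
has_sqrt = hasattr(math, "sqrt")     # does this module expose sqrt?
float_methods = [name for name in dir(value) if name == "is_integer"]

# getattr() retrieves a member by a name computed at runtime.
is_integer = getattr(value, "is_integer")
```

The last line shows the characteristic reflective move: the attribute name is ordinary data (a string), so it could equally have been read from a configuration file or user input.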
Macros
A limited number of dynamic programming languages provide features which combine code introspection (the ability to examine classes, functions, and keywords to know what they are, what they do and what they know) |
https://en.wikipedia.org/wiki/Concern%20%28computer%20science%29 | In computer science, a concern is a particular set of information that has an effect on the code of a computer program. A concern can be as general as the details of database interaction or as specific as performing a primitive calculation, depending on the level of conversation between developers and the program being discussed. IBM uses the term concern space to describe the sectioning of conceptual information.
Overview
Usually the code can be separated into logical sections, each addressing separate concerns, and so it hides the need for a given section to know particular information addressed by a different section. This leads to a modular program. Edsger W. Dijkstra coined the term "separation of concerns" to describe the mentality behind this modularization, which allows the programmer to reduce the complexity of the system being designed. Two different concerns that intermingle in the same section of code are called "highly coupled". Sometimes the chosen module divisions do not allow for one concern to be completely separated from another, resulting in cross-cutting concerns. The various programming paradigms address the issue of cross-cutting concerns to different degrees. Data logging is a common cross-cutting concern, being used in many other parts of the program other than the particular module(s) that actually log the data. Since changes to the logging code can affect other sections, it could introduce bugs in the operation of the program.
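Data logging, the cross-cutting concern mentioned above, can be factored into one place in Python with a decorator (a hedged sketch; the names are ours): the functions being logged need not contain any logging code themselves.

```python
import functools

log_lines = []

def logged(func):
    # The cross-cutting logging concern lives here, in one place,
    # instead of being repeated inside every function it touches.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        log_lines.append(f"{func.__name__}{args} -> {result}")
        return result
    return wrapper

@logged
def add(a, b):
    return a + b
```

A change to the logging format is now made once, in `logged`, rather than in every section of code that happens to log.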
Paradigms that specifically address the issue of concern separation:
Object-oriented programming, describing concerns as objects
Functional programming, describing concerns as functions
Aspect-oriented software development, treating concerns and their interaction as constructs of their own standing
See also
Cross-cutting concern
Separation of concerns
Issue (computers), a unit of work to accomplish an improvement in a data system
References
External links
Concerns in Rails, by DHH, the Rails creator
Software engineering terminology |
https://en.wikipedia.org/wiki/Traffic%20congestion | Traffic congestion is a condition in transport that is characterized by slower speeds, longer trip times, and increased vehicular queueing. Traffic congestion on urban road networks has increased substantially since the 1950s. When traffic demand is great enough that the interaction between vehicles slows the traffic stream, this results in congestion. While congestion is a possibility for any mode of transportation, this article will focus on automobile congestion on public roads.
As demand approaches the capacity of a road (or of the intersections along the road), extreme traffic congestion sets in. When vehicles are fully stopped for periods of time, this is known as a traffic jam or (informally) a traffic snarl-up or a tailback.
Drivers can become frustrated and engage in road rage. Drivers and driver-focused road planning departments commonly propose to alleviate congestion by adding another lane to the road. This is ineffective: increasing road capacity induces more demand for driving.
Mathematically, traffic is modeled as a flow through a fixed point on the route, analogously to fluid dynamics.
Causes
Traffic congestion occurs when a volume of traffic generates demand for space greater than the available street capacity; this point is commonly termed saturation. Several specific circumstances can cause or aggravate congestion; most of them reduce the capacity of a road at a given point or over a certain length, or increase the number of vehicles required for a given volume of people or goods. About half of U.S. traffic congestion is recurring, and is attributed to sheer weight of traffic; most of the rest is attributed to traffic incidents, road work and weather events. In terms of traffic operation, rainfall reduces traffic capacity and operating speeds, thereby resulting in greater congestion and road network productivity loss.
Traffic research still cannot fully predict under which conditions a "traffic jam" (as opposed to heavy, but smoothly flowing traffic) may suddenly occur. It has been found that individual incidents (such as crashes or even a single car braking heavily in a previously smooth flow) may cause ripple effects (a cascading failure) which then spread out and create a sustained traffic jam when, otherwise, the normal flow might have continued for some time longer.
Separation of work and residential areas
People often work and live in different parts of the city. Many workplaces are located in a central business district away from residential areas, resulting in workers commuting. According to a 2011 report published by the United States Census Bureau, a total of 132.3 million people in the United States commute between their work and residential areas daily.
Movement to obtain or provide goods and services
People may need to move about within the city to obtain goods and services, for instance to purchase goods or attend classes in a different part of the city. Brussels, a Belgian city with a strong service ec |
https://en.wikipedia.org/wiki/Separation%20of%20concerns | In computer science, separation of concerns is a design principle for separating a computer program into distinct sections. Each section addresses a separate concern, a set of information that affects the code of a computer program. A concern can be as general as "the details of the hardware for an application", or as specific as "the name of which class to instantiate". A program that embodies SoC well is called a modular program. Modularity, and hence separation of concerns, is achieved by encapsulating information inside a section of code that has a well-defined interface. Encapsulation is a means of information hiding. Layered designs in information systems are another embodiment of separation of concerns (e.g., presentation layer, business logic layer, data access layer, persistence layer).
Separation of concerns results in more degrees of freedom for some aspect of the program's design, deployment, or usage. Common among these is increased freedom for simplification and maintenance of code. When concerns are well-separated, there are more opportunities for module upgrade, reuse, and independent development. Hiding the implementation details of modules behind an interface enables improving or modifying a single concern's section of code without having to know the details of other sections and without having to make corresponding changes to those other sections. Modules can also expose different versions of an interface, which increases the freedom to upgrade a complex system in piecemeal fashion without interim loss of functionality.
Separation of concerns is a form of abstraction. As with most abstractions, separating concerns means adding additional code interfaces, generally creating more code to be executed. So despite the many benefits of well-separated concerns, there is often an associated execution penalty.
Implementation
The mechanisms for modular or object-oriented programming that are provided by a programming language are mechanisms that allow developers to provide SoC. For example, object-oriented programming languages such as C#, C++, Delphi, and Java can separate concerns into objects, and architectural design patterns like MVC or MVP can separate presentation and the data-processing (model) from content. Service-oriented design can separate concerns into services. Procedural programming languages such as C and Pascal can separate concerns into procedures or functions. Aspect-oriented programming languages can separate concerns into aspects and objects.
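As an illustration of the idea, here is a minimal Python sketch (all class and function names are invented) in which a presentation function depends only on a data-access object's small interface, so the storage mechanism can change without touching the rendering code:

```python
# A minimal sketch of layered separation of concerns; all names are invented.
class InMemoryUserRepository:
    """Data access layer: encapsulates how users are stored."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id, "<unknown>")

def render_user(repo, user_id):
    """Presentation layer: depends only on the repository's interface."""
    return f"User #{user_id}: {repo.get(user_id)}"

repo = InMemoryUserRepository()
repo.add(1, "Ada")
print(render_user(repo, 1))  # -> User #1: Ada
```

Swapping InMemoryUserRepository for, say, a database-backed class with the same add/get interface would leave render_user untouched, which is the point of the separation.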
Separation of concerns is an important design principle in many other areas as well, such as urban planning, architecture and information design. The goal is to more effectively understand, design, and manage complex interdependent systems, so that functions can be reused, optimized independently of other functions, and insulated from the potential failure of other functions.
Common examples include separating a space into rooms, so that activity in one room does not aff |
https://en.wikipedia.org/wiki/National%20Post | The National Post is a Canadian English-language broadsheet newspaper available in several cities in central and western Canada. The paper is the flagship publication of Postmedia Network and is published Mondays through Saturdays, with Monday released as a digital e-edition only. The newspaper is distributed in the provinces of Ontario, Quebec, Alberta and British Columbia. Weekend editions of the newspaper are also distributed in Manitoba and Saskatchewan.
The newspaper was founded in 1998 by Conrad Black in an attempt to compete with The Globe and Mail. In 2001, CanWest completed its acquisition of the National Post. In 2006, the newspaper ceased distribution in Atlantic Canada and the Canadian territories. Postmedia assumed ownership of the newspaper in 2010, after the National Post's CEO, Paul Godfrey, assembled an ownership group to acquire CanWest's chain of newspapers.
History
Conrad Black built the National Post around the Financial Post, a financial newspaper in Toronto which Hollinger Inc. purchased from Sun Media in 1997. Originally slated for an October 5, 1998 launch date, the debut of the paper was delayed until October 27 because of financial complications that stemmed from Black's acquisition of the Financial Post, which was retained as the name of the new newspaper's business section.
Outside Toronto, the Post was built on the printing and distribution infrastructure of Hollinger's national newspaper chain, formerly called Southam Newspapers, that included the newspapers Ottawa Citizen, Montreal Gazette, Edmonton Journal, Calgary Herald, and Vancouver Sun. The Post became Black's national flagship title, and Ken Whyte was appointed editor.
Beyond his political vision, Black attempted to compete directly with Kenneth Thomson's media empire led in Canada by The Globe and Mail, which Black and many others perceived as the platform of the Liberal establishment.
When the Post launched, its editorial stance was conservative. It advocated a "unite-the-right" movement to create a viable alternative to the Liberal government of Jean Chrétien, and supported the Canadian Alliance. The Post op-ed page has included dissenting columns by ideological liberals such as Linda McQuaig, as well as conservatives including Mark Steyn, Diane Francis, and David Frum. Original members of the Post editorial board included Ezra Levant, Neil Seeman, Jonathan Kay, Conservative Member of Parliament John Williamson and the author and historian Alexander Rose.
The Post's magazine-style graphic and layout design has won awards. The original design of the Post was created by Lucie Lacava, a design consultant based in Montreal. The Post now bears the motto "World's Best-Designed Newspaper" on its front page.
21st century
The Post was unable to maintain momentum in the market without continuing to operate with annual budgetary deficits. At the same time, Conrad Black was becoming preoccupied by his debt-heavy media empire, Hollinger International. Black |
https://en.wikipedia.org/wiki/Join%20point | In computer science, a join point is a point in the control flow of a program where the control flow can arrive via two different paths. In particular, it is a basic block that has more than one predecessor. In aspect-oriented programming, a set of join points is called a pointcut. A join point is a specification of when, in the corresponding main program, the aspect code should be executed.
The join point is a point of execution in the base code where the advice specified in a corresponding pointcut is applied.
See also
AspectJ, an aspect-oriented extension for the Java programming language
References
Aspect-oriented software development
Aspect-oriented programming
Control flow |
https://en.wikipedia.org/wiki/Pointcut | In aspect-oriented programming, a pointcut is a set of join points. A pointcut specifies where exactly to apply advice, which allows separation of concerns and helps modularize business logic. Pointcuts are often specified using class or method names, in some cases using regular expressions that match class or method names. Different frameworks support different pointcut expressions; the AspectJ syntax is considered the de facto standard. Frameworks supporting pointcuts are available for many programming languages, including Java, Perl, and Ruby.
Background
Due to limitations in various programming languages, cross-cutting concerns have historically not been modularized. A cross-cutting concern is a part of a program that cannot be cleanly assigned to a single module because it affects the whole system; security and logging are typical examples. Aspect-oriented programming addresses cross-cutting concerns by allowing programmers to write modules called aspects, which contain pieces of code executed at particular points. The expressions required to select those points led to the creation of pointcut expressions.
Execution
Whenever the program execution reaches one of the join points described in the pointcut, a piece of code associated with the pointcut (called advice) is executed. This allows a programmer to describe where and when additional code should be executed in addition to already-defined behavior. Pointcuts permit the addition of aspects to existing software, as well as the design of software with a clear separation of concerns, wherein the programmer weaves (merges) different aspects into a complete application.
Example
Suppose there is an application where we can modify records in a database. Whenever users modify the database, we want a log of who is modifying the records. The traditional way to log is to call a log method just before modifying the database. With aspect-oriented programming, we can instead apply a pointcut to the database-modifying method and attach advice that logs the required information.
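The logging example can be sketched in Python, using a decorator as a stand-in for AOP advice; modify_database, the log list, and the other names are invented for illustration, and a real AOP framework would select the method via a pointcut expression rather than explicit decoration:

```python
# Hypothetical sketch: a decorator plays the role of advice, and "being
# decorated" plays the role of matching a pointcut. All names are invented.
import functools

log = []  # stand-in for a real logging facility

def log_advice(func):
    """Advice executed before every matched join point (here: any call
    to a function we chose to decorate)."""
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        log.append(f"{user} called {func.__name__}")
        return func(user, *args, **kwargs)
    return wrapper

@log_advice
def modify_database(user, record, value):
    """The business logic stays free of any logging code."""
    return {record: value}

modify_database("alice", "row1", 42)
print(log)  # -> ['alice called modify_database']
```

The separation of concerns is visible in that modify_database contains no logging calls; the logging aspect is woven in from outside.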
Expressions
The following are some important pointcut expressions supported by AspectJ. These expressions can be combined using logical operators.
execution(void User.setPassword(String))
This pointcut matches execution of the User.setPassword method.
call(void User.getPassword())
When User.getPassword is called, this pointcut is matched.
handler(ArrayIndexOutOfBoundsException)
This pointcut matches when an ArrayIndexOutOfBoundsException is handled (that is, at the matching catch block).
this(UserType)
This pointcut matches when the currently executing object is of type UserType.
target(UserType)
This pointcut matches when the target object is of type UserType.
within(UserType)
This pointcut matches when the executing code belongs to UserType.
Criticisms
Pointcut languages impact important software properties like evolvability and comprehensibility in a negative way. There might be a possibility where there is a need to perform refactoring to define a correct aspect, whi |
https://en.wikipedia.org/wiki/Alternate%20Reality%20%28series%29 | Alternate Reality (AR) is an unfinished role-playing video game series. It was created by Philip Price, who formed a development company called Paradise Programming. Published by Datasoft, AR: The City was released in 1985 and AR: The Dungeon was released in 1987. Price was unable to complete the second game in the series, and The Dungeon was finished by Ken Jordan and Dan Pinal. Gary Gilbertson created the music for both games.
Concept
Aliens have captured the player from Earth, and the player suddenly finds themself in front of a gate topped by a slot-machine-like row of rotating numbers representing character statistics. Stepping through the gate freezes the numbers and turns the player into a new person, putting them into an "alternate reality", hence the name.
In 1988 Datasoft denied that the series would end after The Dungeon. The end of the series was supposed to conclude with the player discovering everyone's true bodies on the ship cocooned and effectively frozen, and that the ship is really a "pleasure world" of some kind for the aliens, leading to the player's ultimate decision of what to do to the ship, to the aliens, or even whether to return to Earth. However, the series was never completed.
During the late 1990s, Price intended to produce an MMORPG version of the game called Alternate Reality Online, or ARO, and teamed with Monolith. The deal ended for lack of funds to begin serious development: Monolith's available funds were committed to games already in its pipeline, and although it sought an external publisher, the project's many technical innovations, coupled with the then-unproven market for MMORPGs, made publishers unwilling to risk funding it. The publication deal ended and the rights to the game were returned. Monolith went on years later to create The Matrix Online.
The "Lost" Games
The original outline for the game series included plans for six games:
The City
The Arena
The Palace
The Wilderness
Revelations
Destiny
The first break from this outline came when Datasoft forced the early release of The City, and The Dungeon, which would have been included in it, became its own release. Nonetheless, the design was meant to let the player move between the games, so that, for example, a player attempting to leave the confines of The City was prompted to "Insert disk #1 of Alternate Reality: The Wilderness". The planned seamless migration never worked out, largely because the Datasoft developers did not implement the idea: only the Atari 8-bit version of The City had the ability to boot sequels, and since the final coding of the sequels was done by Datasoft, the matching code was not put into any of them, including the Atari 8-bit version of The Dungeon.
Although The Dungeon was completed and released, work on the remaining five installments never moved beyond theoretical outlines. A brief summary of these outlines follows:
The City (and its sewers, which became The Dungeon)
The player is thru |
https://en.wikipedia.org/wiki/MDF | MDF may refer to:
Computing
Master Database File, a Microsoft SQL Server file type
MES Development Framework, a .NET framework for building manufacturing execution system applications
Message Development Framework, a collection of models, methods and tools used by Health Level 7 v3.0 methodology
Media Descriptor File, a proprietary disc image file format developed for Alcohol 120%
Measurement Data Format, one of the data formats defined by the Association for Standardisation of Automation and Measuring Systems (ASAM)
Multiple Domain Facility; see Logical partition
Medicine
Map-dot-fingerprint dystrophy, a genetic disease affecting the cornea
Mean Diastolic Filling of the heart
Myocardial depressant factor, a low-molecular-weight peptide released from the pancreas into the blood in mammals during various shock states
Organizations
Hungarian Democratic Forum (Hungarian: Magyar Demokrata Fórum), a political party
Maraland Democratic Front, a political party in Mizoram, India
Maryland Defense Force, the state defense force of Maryland
Moscow House of Photography
Myotonic Dystrophy Foundation, a U.S. non-profit organization related to myotonic dystrophy
Telecommunication
Main distribution frame, a distribution frame where cables are cross-connected in telephony
Multidelay block frequency domain adaptive filter
Other uses
Made cut, did not finish, a golf term
Market development funds
Maryland Deathfest
Moksha language
Medium-density fibreboard, an engineered wood product made from fine wood fibres
https://en.wikipedia.org/wiki/Distributed%20memory | In computer science, distributed memory refers to a multiprocessor computer system in which each processor has its own private memory. Computational tasks can only operate on local data, and if remote data are required, the computational task must communicate with one or more remote processors. In contrast, a shared memory multiprocessor offers a single memory space used by all processors. Processors do not have to be aware where data resides, except that there may be performance penalties, and that race conditions are to be avoided.
In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with each other. The interconnect can be organised with point-to-point links, or separate hardware can provide a switching network. The network topology is a key factor in determining how the multiprocessor machine scales. The links between nodes can be implemented using some standard network protocol (for example Ethernet), using bespoke network links (as in, for example, the transputer), or using dual-ported memories.
Programming distributed memory machines
The key issue in programming distributed memory systems is how to distribute the data over the memories. Depending on the problem solved, the data can be distributed statically, or it can be moved through the nodes. Data can be moved on demand, or data can be pushed to the new nodes in advance.
As an example, if a problem can be described as a pipeline where data x is processed subsequently through functions f, g, h, etc. (the result is h(g(f(x)))), then this can be expressed as a distributed memory problem where the data is transmitted first to the node that performs f that passes the result onto the second node that computes g, and finally to the third node that computes h. This is also known as systolic computation.
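Under the assumption that message queues stand in for the interconnect (a real system would use MPI or network links), the pipeline h(g(f(x))) can be modeled in Python like this:

```python
# Thread-and-queue model of the distributed pipeline h(g(f(x))); queues stand
# in for the interconnect, and each thread for a node with private memory.
import threading
import queue

def node(func, inbox, outbox):
    x = inbox.get()        # receive data from the upstream node
    outbox.put(func(x))    # send the locally computed result downstream

f = lambda x: x + 1
g = lambda x: x * 2
h = lambda x: x - 3

links = [queue.Queue() for _ in range(4)]  # channels between the nodes
threads = [threading.Thread(target=node, args=(fn, links[i], links[i + 1]))
           for i, fn in enumerate((f, g, h))]
for t in threads:
    t.start()

links[0].put(10)           # inject x into the first node
result = links[-1].get()   # collect h(g(f(10)))
for t in threads:
    t.join()
print(result)              # -> 19, i.e. ((10 + 1) * 2) - 3
```

Each node touches only the data it receives, mirroring the rule that computational tasks operate on local data and must communicate for anything else.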
Data can be kept statically in nodes if most computations happen locally, and only changes on edges have to be reported to other nodes. An example of this is a simulation where data is modeled using a grid, and each node simulates a small part of the larger grid. On every iteration, nodes inform all neighboring nodes of their new edge data.
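A serial Python sketch of this edge-exchange pattern (all names invented; a real simulation would run the nodes on separate processors and send only the edge values over the network) might look like:

```python
# Serial model of the edge-exchange pattern: each "node" owns a slice of a
# 1-D grid plus ghost copies of its neighbours' boundary cells. Names invented.
def exchange_edges(nodes):
    """Each node reports its boundary cell to its neighbour's ghost slot."""
    for i in range(len(nodes) - 1):
        nodes[i]["right_ghost"] = nodes[i + 1]["cells"][0]
        nodes[i + 1]["left_ghost"] = nodes[i]["cells"][-1]

def step(node):
    """Local averaging update that needs only local cells plus ghosts."""
    c = node["cells"]
    left = [node["left_ghost"]] + c[:-1]
    right = c[1:] + [node["right_ghost"]]
    node["cells"] = [(l + m + r) / 3 for l, m, r in zip(left, c, right)]

nodes = [{"cells": [0.0, 0.0], "left_ghost": 0.0, "right_ghost": 0.0},
         {"cells": [9.0, 0.0], "left_ghost": 0.0, "right_ghost": 0.0}]
exchange_edges(nodes)
for n in nodes:
    step(n)
print(nodes[0]["cells"])  # -> [0.0, 3.0]: the edge value crossed the boundary
```

Only the single boundary cell per neighbour is communicated each iteration; the bulk of each node's grid slice stays private.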
Distributed shared memory
In distributed shared memory, each node of a cluster has access to a large shared memory in addition to its own limited non-shared private memory.
Shared memory vs. distributed memory vs. distributed shared memory
The advantage of (distributed) shared memory is that it offers a unified address space in which all data can be found.
The advantage of distributed memory is that it excludes race conditions, and that it forces the programmer to think about data distribution.
The advantage of distributed (shared) memory is that it is easier to design a machine that scales with the algorithm.
Distributed shared memory hides the mechanism of communication; it does not hide the latency of communication.
See also
Memory vi |
https://en.wikipedia.org/wiki/Dispatch%20table | In computer science, a dispatch table is a table of pointers or memory addresses to functions or methods. Use of such a table is a common technique when implementing late binding in object-oriented programming.
Perl implementation
The following shows one way to implement a dispatch table in Perl, using a hash to store references to code (also known as function pointers).
# Define the table using one anonymous code-ref and one named code-ref
my %dispatch = (
"-h" => sub { return "hello\n"; },
"-g" => \&say_goodbye
);
sub say_goodbye {
return "goodbye\n";
}
# Fetch the code ref from the table, and invoke it
my $sub = $dispatch{$ARGV[0]};
print $sub ? $sub->() : "unknown argument\n";
Running this Perl program as perl greet -h will produce "hello", and running it as perl greet -g will produce "goodbye".
JavaScript implementation
The following is a demonstration of implementing a dispatch table in JavaScript:
var thingsWeCanDo = {
doThisThing : function() { /* behavior */ },
doThatThing : function() { /* behavior */ },
doThisOtherThing : function() { /* behavior */ },
default : function() { /* behavior */ }
};
var doSomething = function(doWhat) {
    var thingToDo = thingsWeCanDo.hasOwnProperty(doWhat) ? doWhat : "default";
thingsWeCanDo[thingToDo]();
}
Virtual method tables
In object-oriented programming languages that support virtual methods, the compiler will automatically create a dispatch table for each class containing virtual methods, and each object of such a class holds a hidden pointer to its class's table. This table is called a virtual method table or vtable, and every call to a virtual method is dispatched through the vtable.
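As a rough illustrative model only (this is not how any particular compiler lays out its tables), the vtable mechanism can be mimicked in Python with one dictionary per class and a per-object reference to it:

```python
# Illustrative model only; real compilers emit static tables, not dictionaries.
animal_vtable = {"speak": lambda self: "..."}
dog_vtable = {**animal_vtable, "speak": lambda self: "woof"}  # override

def make_instance(vtable):
    # each "object" carries a hidden reference to its class's table
    return {"__vtable__": vtable}

def virtual_call(obj, method, *args):
    # every virtual call is dispatched through the object's vtable
    return obj["__vtable__"][method](obj, *args)

pet = make_instance(dog_vtable)
print(virtual_call(pet, "speak"))  # -> woof
```

The same call site (virtual_call) selects different code depending on which table the object points to, which is the essence of late binding.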
See also
Branch table
References
Diomidis Spinellis (2003). Code Reading: The Open Source Perspective. Boston, MA: Addison-Wesley.
Method (computer programming)
Articles with example Perl code |
https://en.wikipedia.org/wiki/Rule%20of%20three | Rule of three or Rule of Thirds may refer to:
Science and technology
Rule of three (aeronautics), a rule of descent in aviation
Rule of three (C++ programming), a rule of thumb about class method definitions
Rule of three (computer programming), a rule of thumb about code refactoring
Rule of three (hematology), a rule of thumb to check if blood count results are correct
Rule of three (mathematics), a method in arithmetic
Rule of three (medicinal chemistry), a rule of thumb for lead-like compounds
Rule of three (statistics), for calculating a confidence limit when no events have been observed
Rule of threes (survival), a rule of thumb prioritizing what is needed to survive
Arts and entertainment
Rule of Three, a podcast by Jason Hazeley and Joel Morris
Rule of Three, a series of one-act plays by Agatha Christie
The Bellman's Rule of Three in The Hunting of the Snark, a poem by Lewis Carroll
The Rule of Thirds, a 2008 album by Death In June
Other
Rule of threes (survival), the priorities in order to survive
Rule of Three (Wicca), a religious tenet
Rule of three (writing), a principle of writing, and of rhetoric
Rule of thirds, a rule of thumb for composing visual images
Rule of thirds (diving), a rule of thumb for divers
Rule of thirds (military), a rule of thumb in military planning
See also
Three-sigma rule, for a normal distribution in statistics
Triumvirate, a political regime dominated by three powerful individuals |
https://en.wikipedia.org/wiki/TLC%20%28TV%20network%29 | TLC is an American cable television channel owned by Warner Bros. Discovery. First established in 1980 as The Learning Channel, it initially focused on educational and instructional programming. By the late 1990s, after an acquisition by the owners of Discovery Channel earlier in the decade, the network began to pivot towards reality television programming—predominantly focusing on programming involving lifestyles and personal stories—to the point that the previous initialism of "The Learning Channel" was phased out.
As of February 2015, TLC is available to watch in approximately 95 million American households (81.6% of households with cable television) in the United States.
History
1972–1980: Early history as the Appalachian Educational Satellite Project
TLC's history traces to the 1972 formation of the Appalachian Educational Satellite Project (AESP), a distance education project formed by the Appalachian Regional Commission (ARC), in participation with the Education Satellite Communication Demonstration (ESCD), a partnership with the Department of Health, Education, and Welfare and NASA intended to transmit instructional, career and health programming via satellite to provide televised educational material to public schools and universities in the Appalachian region. ARC submitted a proposal to participate in the ESCD and use the ATS-6 communications satellite (launched into orbit in 1974) to disseminate "career education" programming to teachers at no cost; the consortium set up 15 earth station receiver sites across eight states in conjunction with local education service agencies.
The ATS-6 temporarily ceased service to the Appalachian region after being repositioned over India in September 1975; by the time the satellite was repositioned over the United States the following year, the number of earth receivers used to transmit AESP content had increased to 45 sites in Pennsylvania, Kentucky, Maryland, Virginia, West Virginia, Tennessee, Alabama, Georgia, North Carolina and South Carolina (some of which also acted as relays to local television stations in the region). All programming offered through the project was accepted for academic credit at 12 universities in the region. In October 1978, NASA disclosed that the ATS-6 would suspend transmissions for 12 months due to transmission problems with the satellite. As a result, ARC decided to purchase transponder time on the commercial Satcom I communications satellite in order to continue its distance education offerings.
1980–1998: From ACSN to The Learning Channel, "A place for learning minds"
The non-profit Appalachian Community Service Network (ACSN) was incorporated in April 1980, maintaining a board of directors appointed by the Appalachian Regional Commission. The ACSN television service launched in October 1980 as ACSN – The Learning Channel; unlike the closed-circuit AESP, the network distributed its programming available directly to cable systems for home viewing. Its programming also expanded to i |
https://en.wikipedia.org/wiki/Multilayer%20switch | A multilayer switch (MLS) is a computer networking device that switches on OSI layer 2 like an ordinary network switch and provides extra functions on higher OSI layers. The MLS was invented by engineers at Digital Equipment Corporation.
Switching technologies are crucial to network design, as they allow traffic to be sent only where it is needed in most cases, using fast, hardware-based methods. Switching uses different kinds of network switches. A standard switch is known as a layer-2 switch and is commonly found in nearly any LAN. Layer-3 or layer-4 switches require advanced technology (see managed switch) and are more expensive and thus are usually only found in larger LANs or in special network environments.
Multilayer switch
Multi-layer switching combines layer-2, -3 and -4 switching technologies and provides high-speed scalability with low latency. Multi-layer switching can move traffic at wire speed and also provide layer-3 routing. There is no performance difference between forwarding at different layers because the routing and switching are all hardware-based: routing decisions are made by specialized application-specific integrated circuits (ASICs) with the help of content-addressable memory.
Multi-layer switching can make routing and switching decisions based on the following:
MAC address in a data link frame
Protocol field in the data link frame
IP address in the network layer header
Protocol field in the network layer header
Port numbers in the transport layer header
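To make the list above concrete, the following Python sketch extracts those fields from a hand-crafted Ethernet/IPv4/TCP frame (all addresses and port values are invented; a switch performs this in ASIC hardware, not in software):

```python
# All field values below are invented; offsets follow the standard
# Ethernet II (14 bytes) + IPv4 (20 bytes, no options) + TCP layout.
import struct

frame = (
    bytes.fromhex("aabbccddeeff")            # destination MAC
    + bytes.fromhex("112233445566")          # source MAC
    + struct.pack("!H", 0x0800)              # EtherType: IPv4
    + struct.pack("!BBHHHBBH4s4s",
                  0x45, 0, 40, 1, 0, 64,
                  6,                          # network layer protocol: TCP
                  0,
                  bytes([10, 0, 0, 1]),       # source IP
                  bytes([10, 0, 0, 2]))       # destination IP
    + struct.pack("!HH", 443, 51000)          # TCP source/destination ports
)

dst_mac = frame[0:6].hex(":")                            # layer-2 decision input
ethertype, = struct.unpack("!H", frame[12:14])           # data link protocol field
protocol = frame[23]                                     # network layer protocol field
src_port, dst_port = struct.unpack("!HH", frame[34:38])  # transport layer ports
print(dst_mac, hex(ethertype), protocol, src_port, dst_port)
```

A layer-2 switch would stop at the MAC address; a multilayer switch can key its forwarding and QoS decisions on any of the deeper fields.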
MLSs implement QoS in hardware. A multilayer switch can prioritize packets by the 6-bit differentiated services code point (DSCP); these six bits were originally used for the type of service field. The following four mappings are normally available in an MLS:
From OSI layer 2, 3 or 4 to IP DSCP (for IP packets) or IEEE 802.1p
From IEEE 802.1p to IP DSCP
From IP DSCP to IEEE 802.1p
From VLAN IEEE 802.1p to port egress queue.
MLSs are also able to route IP traffic between VLANs like a common router. The routing is normally as quick as switching (at wire speed).
Layer-2 switching
Layer-2 switching uses the MAC address of the host's network interface cards (NICs) to decide where to forward frames. Layer-2 switching is hardware-based, which means switches use ASICs to build and maintain the Forwarding information base and to perform packet forwarding at wire speed. One way to think of a layer-2 switch is as a multiport bridge.
Layer-2 switching is highly efficient because there is no modification to the frame required. Encapsulation of the packet changes only when the data packet passes through dissimilar media (such as from Ethernet to FDDI). Layer-2 switching is used for workgroup connectivity and network segmentation (breaking up collision domains). This allows a flatter network design with more network segments than traditional networks joined by repeater hubs and routers.
Layer-2 switches have the same limitations as bridges. Bridges break up collision domains, but th |
https://en.wikipedia.org/wiki/Vulnerability%20scanner | A vulnerability scanner is a computer program designed to assess computers, networks or applications for known weaknesses. These scanners are used to discover the weaknesses of a given system. They are utilized in the identification and detection of vulnerabilities arising from misconfigurations or flawed programming within a network-based asset such as a firewall, router, web server, or application server. Modern vulnerability scanners allow for both authenticated and unauthenticated scans, and are typically available as SaaS (Software as a Service), provided over the internet and delivered as a web application. A modern vulnerability scanner can often customize vulnerability reports, as well as query the installed software, open ports, certificates and other host information as part of its workflow.
Authenticated scans allow for the scanner to directly access network based assets using remote administrative protocols such as secure shell (SSH) or remote desktop protocol (RDP) and authenticate using provided system credentials. This allows the vulnerability scanner to access low-level data, such as specific services and configuration details of the host operating system. It's then able to provide detailed and accurate information about the operating system and installed software, including configuration issues and missing security patches.
Unauthenticated scanning is a method that can result in a high number of false positives and is unable to provide detailed information about an asset's operating system and installed software. This method is typically used by threat actors or by security analysts trying to determine the security posture of externally accessible assets.
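The connect test at the heart of an unauthenticated scan can be sketched in a few lines of Python (the helper name is invented, and the demo probes only a listener it starts itself, so it touches nothing on the real network; real scanners add service fingerprinting and reporting on top):

```python
# Toy connect-test for an unauthenticated "scan"; all names are invented.
import socket

def tcp_port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Start a local listener on an ephemeral port so the demo is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
open_port = server.getsockname()[1]

print(tcp_port_open("127.0.0.1", open_port))  # -> True
server.close()
```

Authenticated scans, by contrast, log in over protocols such as SSH or RDP with supplied credentials rather than merely probing from outside.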
The CIS Critical Security Controls for Effective Cyber Defense designates continuous vulnerability scanning as a critical control for effective cyber defense.
See also
Browser security
Computer emergency response team
Information security
Internet security
Mobile security
Dynamic application security testing
Penetration testing
Pentesting software toolkits
◦ OpenVAS
◦ Nessus
◦ Metasploit Project
◦ Snort
References
External links
National Institute of Standards and Technology (NIST) Publication of their Security Content Automation Protocol (SCAP) outline.
Computer security software |
https://en.wikipedia.org/wiki/Robert%20Sedgewick%20%28computer%20scientist%29 | Robert Sedgewick (born December 20, 1946) is an American computer scientist. He is the founding chair and the William O. Baker Professor in Computer Science at Princeton University and was a member of the board of directors of Adobe Systems (1990–2016). He previously served on the faculty at Brown University and has held visiting research positions at Xerox PARC, Institute for Defense Analyses, and INRIA. His research expertise is in algorithm science, data structures, and analytic combinatorics. He is also active in developing the college curriculum in computer science and in harnessing technology to make that curriculum available to anyone seeking the opportunity to learn from it.
Early life
Sedgewick was born on December 20, 1946 in Willimantic, Connecticut. During his childhood he lived in Storrs, Connecticut, where his parents Charles Hill Wallace Sedgewick and Rose Whelan Sedgewick were professors at the University of Connecticut.
In 1958, he moved with his parents to Wheaton, Maryland, a suburb of Washington, D.C., where he attended Wheaton High School, graduating in 1964.
Education
Sedgewick earned his Bachelor of Science (1968) and Master of Science (1969) degrees in applied mathematics from Brown University, where he was a student of Andries van Dam. He went on to graduate work at Stanford University where he was an advisee of Donald E. Knuth, receiving his PhD in 1975. His thesis was entitled Quicksort and was named an outstanding dissertation in computer science.
Work and academic career
Sedgewick returned to Brown to start his academic career as an assistant professor in 1975, with promotion to associate professor in 1980 and full professor in 1983. At Brown, he participated in the founding of the computer science department, in 1979.
In 1985, Sedgewick joined the faculty at Princeton University as founding chair of the Department of Computer Science where he is now the William O. Baker *39 Professor of Computer Science. The first-year courses in computer science that he developed at Princeton are among the most popular courses ever offered at the university. He also pioneered the practice of replacing large live lectures with on-demand online videos.
Throughout his career, he has worked at research institutions outside of academia during summers and sabbatical leaves:
The Communications Research Division of the Institute for Defense Analyses in Princeton, New Jersey, an opportunity to work with the CRAY-1 supercomputer.
Xerox Palo Alto Research Center (PARC), an opportunity to see the personal computer come into existence.
The Institut National de Recherche en Informatique et en Automatique (INRIA) in France, a long and fruitful collaboration with Philippe Flajolet.
Research
Sedgewick developed red-black trees (with Leonidas J. Guibas), ternary search trees (with Jon Bentley), and pairing heaps (with R. E. Tarjan and Michael Fredman). He solved open problems left by Donald Knuth in the analysis of quicksort, shellsor |
https://en.wikipedia.org/wiki/Modal%20window | In user interface design for computer applications, a modal window is a graphical control element subordinate to an application's main window.
A modal window creates a mode that disables user interaction with the main window but keeps it visible, with the modal window as a child window in front of it. Users must interact with the modal window before they can return to the parent window. This avoids interrupting the workflow on the main window. Modal windows are sometimes called heavy windows or modal dialogs because they often display a dialog box.
User interfaces typically use modal windows to command user awareness and to display emergency states, though interaction designers argue they are ineffective for that use. Modal windows are prone to mode errors.
On the Web, they often show images in detail, such as those implemented by Lightbox library, or are used for hover ads.
The opposite of modal is modeless. Modeless windows don't block the main window, so the user can switch their focus between them, treating them as palette windows.
Relevance and usage
Use cases
Frequent uses of modal windows include:
Drawing attention to vital pieces of information. This use has been criticized as ineffective because users are bombarded with too many dialog boxes, and habituate to simply clicking "Close", "Cancel", or "OK" without reading or understanding the message.
Blocking the application flow until information required to continue is entered, as for example a password in a login process. Another example are file dialogs to open and save files in an application.
Collecting application configuration options in a centralized dialog. In such cases, typically the changes are applied upon closing the dialog, and access to the application is disabled while the edits are being made.
Warning that the effects of the current action are not reversible. This is a frequent interaction pattern for modal dialogs, but some usability experts criticize it as ineffective for its intended use (protection against errors in destructive actions) due to habituation. They recommend making the action reversible (providing an "undo" option) instead.
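The blocking behaviour that distinguishes these use cases can be sketched without a real GUI toolkit. In the toy `Dialog` class below (all names hypothetical), `run_modal` blocks the caller until valid input arrives, while `run_modeless` returns immediately and delivers its result through a callback:

```python
class Dialog:
    """Toy sketch of modal vs. modeless interaction; not a real toolkit."""

    def __init__(self, prompt, read_input):
        self.prompt = prompt
        self.read_input = read_input  # injected so the sketch is testable

    def run_modal(self):
        """Block until the user supplies a non-empty value (e.g. a password)."""
        while True:
            value = self.read_input(self.prompt)
            if value:  # application flow resumes only once input is valid
                return value

    def run_modeless(self, on_submit):
        """Return immediately; deliver the value later via a callback."""
        on_submit(self.read_input(self.prompt))

# Simulated user: first submits an empty field, then types a password.
answers = iter(["", "hunter2"])
dlg = Dialog("Password: ", lambda _prompt: next(answers))
print(dlg.run_modal())  # -> hunter2
```

The injected `read_input` stands in for the event loop; in a real toolkit the modal case would instead spin a nested event loop with input to other windows disabled.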
Modal sheets in Mac OS X
Many features that would typically be represented by modal windows are implemented as modal transient panels called "Sheets" on Mac OS X. Transient windows behave similarly to modal windows: they are always on top of the parent window and are not shown in the window list, but they do not disable the use of other windows in the application. Sheets slide out of the window's title bar, and usually must be dismissed before the user can continue to work in the window, but the rest of the application stays usable. Thus they create a mode inside the window that contains them, but are modeless with respect to the rest of the application.
Control of interaction flow
Modal windows are common in GUI toolkits for guiding user workflow. Alan Cooper contends that the importance of requiring the user to attend to i |
https://en.wikipedia.org/wiki/Jenny%20Calendar | Jenny Calendar is a fictional character in the fantasy television series Buffy the Vampire Slayer (1997–2003). Played by Robia LaMorte, Jenny is the computer teacher at Sunnydale High School. Unbeknownst to Buffy or anyone else, Jenny Calendar has been sent to Sunnydale to keep an eye on Angel.
In the first two seasons of the series, Jenny Calendar is Rupert Giles' primary romantic interest. She serves to counter his technophobia and is a rare adult female role model for the young women in Buffy's circle. During the second season her true identity is revealed: she is Janna of the Kalderash, a member of the Romani group that cursed Angel. In response to an elder's visions that Angel is suffering less due to his growing romance with Buffy, Jenny is instructed to impede their relationship. As a result of events during the second season storyline (specifically in Season 2, Episode 14, "Innocence") Angel loses his soul and reverts to Angelus, his evil alter ego, eventually making Jenny his victim. Among the main cast, she is the series' first recurring character to die, and the manner of her death is noted for its disturbing effect on audiences.
Creation and casting
Buffy the Vampire Slayer was first created by Joss Whedon as a feature film in 1992. Unhappy with the film, Whedon later revived for television the concept of an adolescent girl who is given superhuman powers by mystical forces to defeat evil. The film only touches on the adult world surrounding Buffy Summers, while the series explores it in greater depth.
Originally trained as a dancer who toured and appeared in music videos with Prince, Robia LaMorte won the part of Jenny Calendar. LaMorte had appeared in contemporary television series such as Beverly Hills, 90210, but remarked specifically that she knew at once the material given to her to read in the audition for Buffy was different: "Sometimes you get scripts, and you just know. The words just fit in your mouth a different way when you know you're supposed to speak them. And I kind of knew I was going to get it." Anthony Head, who plays Giles on the series, had already been cast and was scheduled to read with LaMorte so the casting department could gauge their chemistry—which Head acknowledged, later saying, "She's gorgeous, like a David Bailey picture." LaMorte spent a few minutes before the audition speaking and joking with Head, assuming he was a producer. When it came time for them to enter the audition room together, she handed him the chewing gum from her mouth only to learn that he was the actor cast to play opposite her.
Progression
Season 1
Jenny Calendar's first appearance is in the episode "I Robot, You Jane", which deals with the risks of online romance. Willow Rosenberg (Alyson Hannigan), one of Buffy's friends, is spending time online with someone she knows as Malcolm, who turns out to be a demon named Moloch the Corrupter. The series regularly employs monsters and elements of dark fantasy to represent real-life p |
https://en.wikipedia.org/wiki/Softmodem | A software modem, commonly referred to as a softmodem, is a modem with minimal hardware that uses software running on the host computer, and the computer's resources (especially the central processing unit, random access memory, and sometimes audio processing), in place of the hardware in a conventional modem.
Softmodems are also sometimes called winmodems due to limited support for platforms other than Windows. By analogy, a linmodem is a softmodem that can run on Linux.
Softmodems are sometimes used as an example of a hard real-time system. The audio signals to be transmitted must be computed on a tight interval (on the order of every 5 or 10 milliseconds); they cannot be computed in advance, and they cannot be late or the receiving modem will lose synchronization.
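The per-buffer work can be made concrete with a sketch of FSK sample generation. The constants below are illustrative (Bell-103-style tones at 300 baud), not a full modem: the point is that each bit's samples must be computed before the sound card's next 5-10 ms deadline, with phase carried across bits so the waveform stays continuous.

```python
import math

SAMPLE_RATE = 8000               # samples per second (illustrative)
BAUD = 300                       # Bell 103-style rate (illustrative)
FREQ = {0: 1070.0, 1: 1270.0}    # space/mark tones of the originating modem

def fsk_samples(bits, phase=0.0):
    """Compute the audio samples for a run of bits.

    The phase accumulator is returned so the next buffer can continue
    the waveform without a discontinuity, as a real softmodem must.
    """
    samples = []
    per_bit = SAMPLE_RATE // BAUD
    for bit in bits:
        step = 2 * math.pi * FREQ[bit] / SAMPLE_RATE
        for _ in range(per_bit):
            samples.append(math.sin(phase))
            phase = (phase + step) % (2 * math.pi)
    return samples, phase

buf, phase = fsk_samples([1, 0, 1])
print(len(buf))  # -> 78, i.e. 3 bits x (8000 // 300) samples each
```

At 8000 Hz, a 10 ms deadline leaves time for only 80 samples; missing it desynchronizes the receiving modem, which is why this qualifies as hard real-time.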
History
The first generations of hardware modems (including acoustic couplers) and their protocols used relatively simple modulation techniques such as FSK or ASK at low speeds. Under these conditions, modems could be built with the analog discrete component technology used during the late 70s and early 80s.
As more sophisticated transmission schemes were devised, the circuits grew substantially in complexity. New modulation schemes required mixing analog and digital components, and eventually incorporating multiple integrated circuits (ICs) such as logic gates, PLLs and microcontrollers. Later techniques used in modern V.34, V.90 and V.92 protocols (such as a 1664-point QAM constellation) are so complex that implementing them with discrete components or general-purpose ICs became impractical.
Furthermore, improved compression and error correction schemes were introduced in the newest protocols, requiring extra processing power in the modem itself. This made the construction of a mainly analog/discrete component modem impossible. Finally, compatibility with older protocols using completely different modulation schemes would have required a modem made with discrete electronics to contain multiple complete implementations.
Initially the solution was to use LSI ASICs which shrank the various implementations into a small number of components, but since standards continued to change, there was a desire to create modems that could be upgraded.
In 1993, Digicom marketed the Connection 96 Plus, a modem based around a DSP which was programmed by an application on startup. Because the program was replaceable, the modem could be upgraded as standards improved. Digicom branded this technology SoftModem, perhaps originating the term.
Likewise, the term "Winmodem" may have originated with USRobotics' Sportster Winmodem, a similarly upgradable DSP-based design.
In 1996, two types of modem began to reach the market: host-based modems, which offloaded some work onto the host CPU, and software-only modems which transferred all work onto the host system's CPU. In 1997, the AC'97 standard for computer audio would introduce channels for modem use, making software modem technology common in PCs.
Sinc |
https://en.wikipedia.org/wiki/Comedy%20Central | Comedy Central is an American adult-oriented basic cable channel owned by Paramount Global through its network division's MTV Entertainment Group unit, based in Manhattan. The channel carries comedy programming in the form of original, licensed, and syndicated series, stand-up comedy specials, and feature films. It is available to approximately 86.73 million households in the United States.
Since the early 2000s, Comedy Central has expanded globally with localized channels in Europe (including the United Kingdom), India, Southeast Asia, Latin America, Australia and New Zealand, Middle East, Africa and in the Commonwealth of Independent States. The international channels are operated by Paramount International Networks.
History
1989–1991: Pre-launch as The Comedy Channel
On November 15, 1989, Time-Life, the owners of HBO, launched The Comedy Channel as the first cable channel devoted exclusively to comedy-based programming. On April 1, 1990, Viacom (who owned MTV, VH1, and Nickelodeon) launched a rival channel called Ha! that featured reruns of situation comedies and some original sketch comedy.
The Comedy Channel's programs were broadcast from the HBO Downtown Studios at 120 East 23rd Street in Manhattan. The format prior to the merger with Ha! included several original and unconventional programs such as Onion World with Rich Hall and Mystery Science Theater 3000, as well as laid-back variety/talk shows hosted by comedians, including The Sweet Life with Rachel Sweet, Night After Night with Allan Havey, Sports Monster, and The Higgins Boys and Gruber, the latter of whom performed sketches in between showings of vintage television series like Supercar, Clutch Cargo, and Lancelot Link, Secret Chimp.
The standard format for The Comedy Channel's shows usually involved the various hosts introducing clips culled from the acts of stand-up comedians as well as classic comedies of the 1970s and 1980s, such as Young Frankenstein and Kentucky Fried Movie, presented in a style similar to music videos. In the early days, certain hours of the day when clips were shown without "host segments" were dubbed Short Attention Span Theater. In 1990, Jon Stewart and Patty Rosborough were introduced as hosts under this title. Comedian Marc Maron also hosted the series.
While The Comedy Channel broadcast mostly low-budget original programming, Ha!'s schedule featured sitcom and sketch comedy reruns (many of which had been previously licensed for sister network Nick at Nite) as well as complete 90-minute reruns of Saturday Night Live from the sixth through 16th seasons.
After two years of limited distribution, the two channels merged into one, relaunching on April 1, 1991, as CTV: The Comedy Network. On June 1, 1991, the network changed its name to Comedy Central to prevent issues with the Canadian broadcast television network CTV, which would eventually be its Canadian content partner through The Comedy Network when that channel started operations six years |
https://en.wikipedia.org/wiki/Gaius%20Julius%20Solinus |
Gaius Julius Solinus, better known simply as Solinus, was a Latin grammarian, geographer, and compiler who probably flourished in the early 3rd century AD. Historical scholar Theodor Mommsen dates him to the middle of the 3rd century.
Solinus was the author of De mirabilibus mundi ("The wonders of the world") which circulated both under the title Collectanea rerum memorabilium ("Collection of Curiosities"), and Polyhistor, though the latter title was favoured by the author himself. The work is indeed a description of curiosities in a chorographic framework. Adventus, to whom it is dedicated, is identified with Oclatinius Adventus, Roman consul in AD 218. It contains a short description of the ancient world, with remarks on historical, social, religious, and natural history questions. The greater part is taken from Pliny's Natural History and the geography of Pomponius Mela.
According to Mommsen, Solinus also relied upon a chronicle (possibly by Cornelius Bocchus) and a Chorographia pliniana, an epitome of Pliny's work with additions made about the time of Hadrian. Schanz, however, suggests the Roma and Prata of Suetonius.
A greatly revised version of his original text was made, perhaps by Solinus himself. This version contains a letter that Solinus wrote as an introduction to the work, which gives the work the title Polyhistor ("multi-descriptive"). Both versions of the work circulated widely and eventually Polyhistor was taken for the author's name. It was popular in the Middle Ages, hexameter abridgments being current under the names of Theodericus and Petrus Diaconus.
The commentary by Saumaise in his Plinianae exercitationes (1689) was considered indispensable; the 1895 edition by Mommsen includes a valuable introduction on the manuscripts, the authorities used by Solinus, and subsequent compilers. See also Teuffel, Hist. of Roman Literature (Eng. trans., 1900), 389; and Schanz, Geschichte der römischen Litteratur (1904), iv. I. There is an early modern English translation by Arthur Golding (1587) and a modern one with commentary by Dr. Arwen Apps of Macquarie University.
Editions
Kai Brodersen, Solinus: Wunder der Welt. Collectanea rerum mirabilium. Lateinisch und Deutsch. Edition Antike. Darmstadt: Wiss. Buchgesellschaft 2014.
Arwen Elizabeth Apps, Gaius Iulius Solinus and His Polyhistor, Macquarie University, 2011 (PhD Dissertation)
References
Citations
Bibliography
Hermann Walter, Die ‘Collectanea rerum memorabilium’ des C. Julius Solinus. Ihre Entstehung und die Echtheit ihrer Zweitfassung, Wiesbaden, 1969 (Hermes. Einzelschriften, 22).
Kai Brodersen (ed.), Solinus. New Studies. Heidelberg: Verlag Antike 2014.
External links
Gaius Julius Solinus, the Polyhistor, English translation by Arwen Apps (in ToposText.org, from her PhD diss., Macquarie University, 2011)
Ivlii Solini de sitv et memorabilibvs orbis capitvla, Editio princeps, Venice 1473, at the Bavarian State Library
Gaii Iulii Solini de Mirabilibus Mundi at The Latin |
https://en.wikipedia.org/wiki/SYN%20flood | A SYN flood is a form of denial-of-service attack on data communications in which an attacker rapidly initiates a connection to a server without finalizing the connection. The server has to spend resources waiting for half-opened connections, which can consume enough resources to make the system unresponsive to legitimate traffic.
The packet that the attacker sends is the SYN packet, a part of TCP's three-way handshake used to establish a connection.
Technical details
When a client attempts to start a TCP connection to a server, the client and server exchange a series of messages which normally runs like this:
The client requests a connection by sending a SYN (synchronize) message to the server.
The server acknowledges this request by sending SYN-ACK back to the client.
The client responds with an ACK, and the connection is established.
This is called the TCP three-way handshake, and is the foundation for every connection established using the TCP protocol.
A SYN flood attack works by not responding to the server with the expected ACK code. The malicious client can either simply not send the expected ACK, or spoof the source IP address in the SYN, causing the server to send the SYN-ACK to a falsified IP address – which will not send an ACK because it "knows" that it never sent a SYN.
The server will wait for the acknowledgement for some time, as simple network congestion could also be the cause of the missing ACK. However, in an attack, the half-open connections created by the malicious client bind resources on the server and may eventually exceed the resources available on the server. At that point, the server cannot connect to any clients, whether legitimate or otherwise. This effectively denies service to legitimate clients. Some systems may also malfunction or crash when other operating system functions are starved of resources in this way.
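The exhaustion mechanism above can be sketched with a toy model of a listener's half-open (SYN-RECEIVED) queue. The class, states, and backlog size are illustrative, not a real TCP stack:

```python
import collections

class TinyTcpListener:
    """Toy model of a listener's half-open connection queue."""

    def __init__(self, backlog=4):
        self.backlog = backlog
        self.half_open = collections.OrderedDict()  # (src_ip, src_port) -> state

    def on_syn(self, src):
        if len(self.half_open) >= self.backlog:
            return "dropped"             # queue full: further SYNs are refused
        self.half_open[src] = "SYN-RECEIVED"
        return "syn-ack sent"

    def on_ack(self, src):
        if self.half_open.pop(src, None):
            return "established"         # handshake completed, slot freed
        return "ignored"

srv = TinyTcpListener(backlog=4)
# Attacker sends SYNs from spoofed addresses and never ACKs:
for i in range(4):
    srv.on_syn(("10.0.0.%d" % i, 1234))
# A legitimate client now finds the queue full:
print(srv.on_syn(("192.0.2.7", 5555)))  # -> dropped
```

Real stacks additionally time out SYN-RECEIVED entries, which is why the countermeasures below include reducing that timer and recycling the oldest half-open connection.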
Countermeasures
There are a number of well-known countermeasures listed in RFC 4987 including:
Filtering
Increasing backlog
Reducing SYN-RECEIVED timer
Recycling the oldest half-open TCP
SYN cache
SYN cookies
Hybrid approaches
Firewalls and proxies
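Of the countermeasures listed, SYN cookies are the most distinctive: the server stores no half-open state at all, instead encoding it in the initial sequence number it sends back, and recovering it when the client's ACK echoes that number. A simplified sketch follows; the bit layout, secret handling, and MSS table are illustrative (real implementations follow D. J. Bernstein's scheme, with a slowly incrementing time counter checked for freshness):

```python
import hashlib
import struct

SECRET = b"server-secret"              # illustrative; real servers use a random key
MSS_TABLE = [536, 1220, 1440, 1460]    # a few encodable MSS values (illustrative)

def make_cookie(saddr, sport, daddr, dport, counter, mss_index):
    """Pack connection state into a 32-bit initial sequence number."""
    mac = hashlib.sha256(
        SECRET + struct.pack("!4sH4sHI", saddr, sport, daddr, dport, counter)
    ).digest()
    h = struct.unpack("!I", mac[:4])[0] & 0x07FFFFFF       # low 27 bits: MAC
    # top 3 bits: time counter (kept tiny here for brevity), next 2: MSS index
    return ((counter & 0x7) << 29) | ((mss_index & 0x3) << 27) | h

def check_cookie(cookie, saddr, sport, daddr, dport):
    """Validate the value echoed back in the client's ACK; None if forged."""
    counter = (cookie >> 29) & 0x7
    mss_index = (cookie >> 27) & 0x3
    if make_cookie(saddr, sport, daddr, dport, counter, mss_index) == cookie:
        return MSS_TABLE[mss_index]                        # recovered MSS
    return None

cookie = make_cookie(b"\x0a\x00\x00\x01", 40000, b"\xc0\x00\x02\x01", 80,
                     counter=3, mss_index=2)
print(check_cookie(cookie, b"\x0a\x00\x00\x01", 40000, b"\xc0\x00\x02\x01", 80))  # -> 1440
```

Because the cookie is verifiable from the ACK alone, a flooded server can discard every SYN immediately and still complete handshakes with legitimate clients.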
See also
Fraggle attack
Internet Control Message Protocol
IP address spoofing
Ping flood
Smurf attack
UDP flood attack
References
External links
Official CERT advisory on SYN Attacks
Attacks against TCP
Denial-of-service attacks |
https://en.wikipedia.org/wiki/TB | TB or Tb may refer to:
Science and technology
Computing
Terabyte (TB), a unit of information (often measuring storage capacity)
Terabit (Tb), a unit of information (often measuring data transfer)
Thunderbolt (interface)
Test bench
Vehicles
T.B. (Thompson Brothers), a three-wheeled cyclecar manufactured by Thompson Brothers of Bilston, England, from 1919 until 1924
Torpedo boat, a relatively small and fast naval vessel designed to carry torpedoes into battle
Boeing TB, an American torpedo bomber biplane designed by the US Navy and built by Boeing in 1927
Other uses in science and technology
Terbium, symbol Tb, a chemical element
Terrific broth, a bacterial growth medium for E. coli
Tuberculosis (TB), a chronic infectious disease
Tubercle bacillus, another name for Mycobacterium tuberculosis, the pathogen causing tuberculosis.
Brightness temperature (Tb), in astrophysics
Sports
TB Tvøroyri (Tvøroyrar Bóltfelag), a Faroese football club from Tvøroyri
tailback, a sub-position of Halfback (American football)
Tampa Bay Buccaneers, a professional American football team in the Tampa Bay Area
Tampa Bay Lightning, a professional ice hockey team in the Tampa Bay Area
Tampa Bay Rays, a professional baseball team in the Tampa Bay Area
Total bases, a baseball statistic
Terry Bradshaw, professional football quarterback
Tom Brady, professional football quarterback
Other uses
Tablespoon (tb), a rough, culinary unit of volume
Taco Bell, an American chain of fast-food restaurants
Teboil (until 1966 TB), a Finnish gas station company
Tekkaman Blade, a Japanese animated TV series
Tommy Brown (record producer) (also known as TB or TB Hits), American record producer and songwriter
TotalBiscuit, pseudonym of English video game commentator John Bain
TowneBank, a bank serving Virginia and North Carolina
Places in the United States
TB, Maryland, an unincorporated community
Tampa Bay, Florida |
https://en.wikipedia.org/wiki/SCO%20Group | The SCO Group (often referred to as SCO and later called The TSG Group) was an American software company in existence from 2002 to 2012 that became known for owning Unix operating system assets that had belonged to the Santa Cruz Operation (the original SCO), including the UnixWare and OpenServer technologies, and then, under CEO Darl McBride, pursuing a series of high-profile legal battles known as the SCO-Linux controversies.
The SCO Group began in 2002 with a renaming of Caldera International, accompanied by McBride becoming CEO and a major change in business strategy and direction. The SCO brand was re-emphasized and new releases of UnixWare and OpenServer came out. The company also attempted some initiatives in the e-commerce space with the SCOBiz and SCOx programs. In 2003, the SCO Group claimed that the increasingly popular free Linux operating system contained substantial amounts of Unix code that IBM had improperly put there. The SCOsource division was created to monetize the company's intellectual property by selling Unix license rights to use Linux. The SCO v. IBM lawsuit was filed, asking for billion-dollar damages and setting off one of the top technology battles in the history of the industry. By a year later, four additional lawsuits had been filed involving the company.
Reaction to SCO's actions from the free and open source software community was intensely negative and the general IT industry was not enamored of the actions either. SCO soon became, as Businessweek headlined, "The Most Hated Company in Tech". SCO Group stock rose rapidly during 2003, but then SCOsource revenue became erratic and the stock began a long fall. Despite the industry's attention to the lawsuits, SCO continued to maintain a product focus as well, putting out a major new release of OpenServer that incorporated the UnixWare kernel inside it. SCO also made a major push in the burgeoning smartphones space, launching the Me Inc. platform for mobility services. But despite these actions, the company steadily lost money and shrank in size.
In 2007, SCO suffered a major adverse ruling in the SCO v. Novell case that rejected SCO's claim of ownership of Unix-related copyrights and undermined much of the rest of its legal position. The company filed for Chapter 11 bankruptcy protection soon after and attempted to continue operations. Its mobility and Unix software assets were sold off in 2011, to McBride and UnXis respectively. Renamed to The TSG Group, the company converted to Chapter 7 bankruptcy in 2012. A portion of the SCO v. IBM case continued on until 2021, when a settlement was reached for a tiny fraction of what The SCO Group had initially sued for.
Initial history
Background
The Santa Cruz Operation had been an American software company, founded in 1979 in Santa Cruz, California, that found success during the 1980s and 1990s selling Unix-based operating system products for Intel x86-based server systems. SCO built a large community of |
https://en.wikipedia.org/wiki/Limbo%20%28programming%20language%29 | Limbo is a programming language for writing distributed systems and is the language used to write applications for the Inferno operating system. It was designed at Bell Labs by Sean Dorward, Phil Winterbottom, and Rob Pike.
The Limbo compiler generates architecture-independent object code which is then interpreted by the Dis virtual machine or compiled just before runtime to improve performance. Therefore all Limbo applications are completely portable across all Inferno platforms.
Limbo's approach to concurrency was inspired by Hoare's communicating sequential processes (CSP), as implemented and amended in Pike's earlier Newsqueak language and Winterbottom's Alef.
Language features
Limbo supports the following features:
modular programming
concurrent programming
strong type checking at compile time and run time
interprocess communication over typed channels
automatic garbage collection
simple abstract data types
Virtual machine
The Dis virtual machine that executes Limbo code is a CISC-like VM, with instructions for arithmetic, control flow, data motion, process creation, synchronizing and communicating between processes, loading modules of code, and support for higher-level data-types: strings, arrays, lists, and communication channels. It uses a hybrid of reference counting and a real-time garbage-collector for cyclic data.
Aspects of the design of Dis were inspired by the AT&T Hobbit microprocessor, as used in the original BeBox.
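A hybrid of reference counting and a tracing collector for cycles can be demonstrated in CPython, which pairs the same two mechanisms. This is an analogy to, not the implementation of, the Dis collector: plain reference counting cannot reclaim objects that reference each other, so a separate cycle detector is needed.

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

# Build a two-node cycle, then drop all external references.
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

# Each node still holds a reference to the other, so reference counts
# never reach zero; the cycle collector finds the garbage instead.
unreachable = gc.collect()
print(unreachable >= 2)  # -> True
```

In Dis the split serves real-time goals: reference counting frees most memory promptly and predictably, while the background collector handles the cyclic remainder.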
Examples
Limbo uses Ada-style definitions as in:
name := type value;
name0,name1 : type = value;
name2,name3 : type;
name2 = value;
Hello world
implement Command;
include "sys.m";
sys: Sys;
include "draw.m";
include "sh.m";
init(nil: ref Draw->Context, nil: list of string)
{
sys = load Sys Sys->PATH;
sys->print("Hello World!\n");
}
Books
The 3rd edition of the Inferno operating system and Limbo programming language are described in the textbook Inferno Programming with Limbo (Chichester: John Wiley & Sons, 2003), by Phillip Stanley-Marbell. Another textbook The Inferno Programming Book: An Introduction to Programming for the Inferno Distributed System, by Martin Atkins, Charles Forsyth, Rob Pike and Howard Trickey, was started, but never released.
See also
The Inferno operating system
Alef, the predecessor of Limbo
Plan 9 from Bell Labs, operating system
Go, similar language from Google
AT&T Hobbit, a processor architecture which inspired the Dis VM
References
External links
Vita Nuova page on Limbo
A Descent into Limbo by Brian Kernighan
The Limbo Programming Language by Dennis M. Ritchie and Addendum by Vita Nuova.
Inferno Programming with Limbo by Phillip Stanley-Marbell
Threaded programming in the Bell Labs CSP style
.
.
.
C programming language family
Concurrent programming languages
Free compilers and interpreters
Inferno (operating system)
Programming languages created in 1995
Virtual machines |
https://en.wikipedia.org/wiki/FishBase | FishBase is a global species database of fish species (specifically finfish). It is the largest and most extensively accessed online database on adult finfish on the web. Over time it has "evolved into a dynamic and versatile ecological tool" that is widely cited in scholarly publications.
FishBase provides comprehensive species data, including information on taxonomy, geographical distribution, biometrics and morphology, behaviour and habitats, ecology and population dynamics as well as reproductive, metabolic and genetic data. There is access to tools such as trophic pyramids, identification keys, biogeographical modelling and fishery statistics and there are direct species level links to information in other databases such as LarvalBase, GenBank, the IUCN Red List and the Catalog of Fishes.
FishBase included descriptions of 35,300 species and subspecies, with 327,900 common names, 63,800 pictures, and references to 60,200 works in the scientific literature. The site has about 700,000 visits per month.
History
The origins of FishBase go back to the 1970s, when the fisheries scientist Daniel Pauly found himself struggling to test a hypothesis on how the growing ability of fish was affected by the size of their gills. Hypotheses, such as this one, could be tested only if large amounts of empirical data were available. At the time, fisheries management used analytical models which required estimates for fish growth and mortality. It can be difficult for fishery scientists and managers to get the information they need on the species that concern them, because the relevant facts can be scattered across and buried in numerous journal articles, reports, newsletters and other sources. It can be particularly difficult for people in developing countries who need such information. Pauly believed that the only practical way fisheries managers could access the volume of data they needed was to assemble and consolidate all the data available in the published literature into some central and easily accessed repository. Such a database would be particularly useful if the data has also been standardised and validated. This would mean that when scientists or managers need to test a new hypothesis, the available data will already be there in a validated and accessible form, and there will be no need to create a new dataset and then have to validate it.
Pauly recruited Rainer Froese, and a software database along these lines was first coded in 1988. This database, initially confined to tropical fish, became the prototype for FishBase. FishBase was subsequently extended to cover all finfish, and was launched on the Web in August 1996. It is now the largest and most accessed online database for fish in the world. In 1995 the first CD-ROM was released as "FishBase 100". Subsequent CDs have been released annually. The software runs on Microsoft Access, which operates only on Microsoft Windows.
FishBase covers adult finfish, but does not detail the |
https://en.wikipedia.org/wiki/Stylus | A stylus (plural: styli or styluses) is a writing utensil or a small tool for some other form of marking or shaping, for example, in pottery. It can also be a computer accessory that is used to assist in navigating or providing more precision when using touchscreens. It usually refers to a narrow elongated staff, similar to a modern ballpoint pen. Many styluses are heavily curved to be held more easily. Another widely used writing tool is the stylus used by blind users in conjunction with the slate for punching out the dots in Braille.
Etymology
The English word stylus has two plurals: styli and styluses. The original Latin word was spelled stilus; the spelling stylus arose from an erroneous connection with Greek stylos (στῦλος), 'pillar'.
The Latin word had several meanings, including "a long, sharply pointed piece of metal; the stem of a plant; a pointed instrument for incising letters; the stylus (as used in literary composition), 'pen'". The last meaning is the origin of style in the literary sense. The Latin word is probably derived from the Indo-European root 'to prick', also found in the words 'a goad, stimulus' and 'to incite, instigate'.
Ancient styluses
Styluses were first used by the ancient Mesopotamians in order to write in cuneiform. They were mostly made of reeds and had a slightly curved trapezoidal section. Egyptians (Middle Kingdom) and the Minoans of Crete (Linear A and Cretan Hieroglyphic) made styluses in various materials: reeds that grew on the sides of the Tigris and Euphrates rivers and in marshes and down to Egypt where the Egyptians used styluses from sliced reeds with sharp points; bone and metal styluses were also used. Cuneiform was entirely based on the "wedge-shaped" mark that the end of a cut reed made when pushed into a clay tablet; the name derives from Latin cuneus, 'wedge'. The linear writings of Crete in the first half of the second millennium BC were made on clay tablets that were left to dry in the sun until they became "leather" hard before being incised with the stylus. The linear nature of the writing was also dictated by the use of the stylus.
In Western Europe styluses were widely used until the late Middle Ages. For learning purposes the stylus was gradually replaced by a writing slate. From the mid-14th century improved water-powered paper mills produced large and cheap quantities of paper and the wax tablet and stylus disappeared completely from daily life.
Use in arts
Styluses are still used in various arts and crafts. Example situations: rubbing off dry transfer letters, tracing designs onto a new surface with carbon paper, and hand embossing. Styluses are also used to engrave into materials like metal or clay.
Styluses are used to make dots as found in folk art and Mexican pottery artifacts. Oaxaca dot art is created using styluses.
Smartphones and computing
Modern day devices, such as phones, can often be used with a stylus to accurately navigate through menus, send messages etc.
Today, the term stylus often refers to an inp |
https://en.wikipedia.org/wiki/Internet%20Group%20Management%20Protocol | The Internet Group Management Protocol (IGMP) is a communications protocol used by hosts and adjacent routers on IPv4 networks to establish multicast group memberships. IGMP is an integral part of IP multicast and allows the network to direct multicast transmissions only to hosts that have requested them.
IGMP can be used for one-to-many networking applications such as online streaming video and gaming, and allows more efficient use of resources when supporting these types of applications.
IGMP is used on IPv4 networks. Multicast management on IPv6 networks is handled by Multicast Listener Discovery (MLD) which is a part of ICMPv6 in contrast to IGMP's bare IP encapsulation.
Architecture
A network designed to deliver a multicast service using IGMP might use this basic architecture:
IGMP operates between a host and a local multicast router. Switches featuring IGMP snooping also derive useful information by observing these IGMP transactions. Protocol Independent Multicast (PIM) is then used between the local and remote multicast routers to direct multicast traffic from hosts sending multicasts to hosts that have registered through IGMP to receive them.
IGMP operates on the network layer (layer 3), just the same as other network management protocols like ICMP.
The IGMP protocol is implemented on hosts and within routers. A host requests membership to a group through its local router while a router listens for these requests and periodically sends out subscription queries. A single router per subnet is elected to perform this querying function. Some multilayer switches include an IGMP querier capability to allow their IGMP snooping features to work in the absence of an IGMP-capable router in the layer 2 network.
IGMP is vulnerable to some attacks, and firewalls commonly allow the user to disable it if not needed.
Versions
There are three versions of IGMP.
IGMPv1 was defined in 1989. IGMPv2, defined in 1997, improved IGMPv1 by adding the ability for a host to signal its desire to leave a multicast group.
In 2002, IGMPv3 improved IGMPv2 by adding support for source-specific multicast and introducing membership report aggregation. The support for source-specific multicast was improved in 2006.
The three versions of IGMP are backward compatible: a router supporting IGMPv3 can also serve clients running IGMPv1, IGMPv2 and IGMPv3. IGMPv1 uses a query-response model; queries are sent to the all-hosts address 224.0.0.1, and membership reports are sent to the group's multicast address. IGMPv2 accelerates the process of leaving a group and adjusts other timeouts: leave-group messages are sent to the all-routers address 224.0.0.2, group-specific queries (sent to the group's multicast address) are introduced, and a means for routers to select a single IGMP querier for the network is added. IGMPv3 introduces source-specific multicast capability; its membership reports are sent to 224.0.0.22.
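The query/report exchange described above uses a compact fixed-size header. As an illustrative sketch (not tied to any particular implementation), the following Python builds the 8-byte IGMPv2 membership report (type 0x16: max response time, checksum, group address) and verifies its RFC 1071 Internet checksum; actually transmitting it would additionally need a raw socket and the IP Router Alert option, which are omitted here:

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum, as used in IGMP headers."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def igmpv2_report(group: str) -> bytes:
    """Build an 8-byte IGMPv2 membership report (type 0x16) for `group`."""
    addr = socket.inet_aton(group)
    # Pack with checksum = 0, compute the checksum, then re-pack with it.
    unchecked = struct.pack("!BBH4s", 0x16, 0, 0, addr)
    return struct.pack("!BBH4s", 0x16, 0, inet_checksum(unchecked), addr)

pkt = igmpv2_report("239.1.2.3")
# A correct Internet checksum makes the whole message sum to zero:
assert len(pkt) == 8 and inet_checksum(pkt) == 0
```

The same checksum routine applies to IGMPv1 and IGMPv3 messages; only the type codes and payload layout differ.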
Messages
There are several types of IGMP messages:
General membership queries
Sent by multicast routers to determine which multicast addresses are of interest to systems attached to their networks. |
https://en.wikipedia.org/wiki/Amortized%20analysis | In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, or how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking at the worst-case run time can be too pessimistic. Instead, amortized analysis averages the running times of operations in a sequence over that sequence.
Amortized analysis is a useful tool that complements other techniques such as worst-case and average-case analysis.
For a given operation of an algorithm, certain situations (e.g., input parametrizations or data structure contents) may imply a significant cost in resources, whereas other situations may not be as costly. The amortized analysis considers both the costly and less costly operations together over the whole sequence of operations. This may include accounting for different types of input, length of the input, and other factors that affect its performance.
History
Amortized analysis initially emerged from a method called aggregate analysis, which is now subsumed by amortized analysis. The technique was first formally introduced by Robert Tarjan in his 1985 paper Amortized Computational Complexity, which addressed the need for a more useful form of analysis than the common probabilistic methods used. Amortization was initially used for very specific types of algorithms, particularly those involving binary trees and union operations. However, it is now ubiquitous and comes into play when analyzing many other algorithms as well.
Method
Amortized analysis requires knowledge of which series of operations are possible. This is most commonly the case with data structures, which have state that persists between operations. The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.
There are generally three methods for performing amortized analysis: the aggregate method, the accounting method, and the potential method. All of these give correct answers; the choice of which to use depends on which is most convenient for a particular situation.
Aggregate analysis determines the upper bound T(n) on the total cost of a sequence of n operations, then calculates the amortized cost to be T(n) / n.
The accounting method is a form of aggregate analysis which assigns to each operation an amortized cost which may differ from its actual cost. Early operations have an amortized cost higher than their actual cost, which accumulates a saved "credit" that pays for later operations having an amortized cost lower than their actual cost. Because the credit begins at zero, the actual cost of a sequence of operations equals the amortized cost minus the accumulated credit. Because the credit is required to be non-negative, the amortized cost is an upper bound on the actual cost. Usually, many short-running operations accumulate such credit in small increments, while rare long-running operations decrease it drastically. |
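Both the aggregate and the accounting views can be seen in the classic doubling dynamic array, where occasional expensive appends (which copy the whole buffer) are paid for by the many cheap ones. A minimal Python sketch (the class and its copy counter are illustrative, not from any library):

```python
class DynArray:
    """Doubling array: append runs in O(1) amortized time.

    Aggregate method: n appends trigger copies of sizes 1 + 2 + 4 + ... < 2n,
    so total work T(n) < 3n and the amortized cost T(n)/n is below 3.
    """
    def __init__(self):
        self.cap, self.n = 1, 0
        self.buf = [None]
        self.copies = 0                      # total elements ever copied

    def append(self, x):
        if self.n == self.cap:               # full: double capacity, copy all
            self.buf = self.buf + [None] * self.cap
            self.cap *= 2
            self.copies += self.n
        self.buf[self.n] = x
        self.n += 1

a = DynArray()
for i in range(1000):
    a.append(i)
assert a.copies < 2 * a.n                    # copy work is linear overall
```

In the accounting view, charging each append a constant 3 units (1 to store, 2 of credit toward future copies) keeps the credit non-negative throughout.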
https://en.wikipedia.org/wiki/GCS | GCS may refer to:
Cartography
Galactic coordinate system
Geographic coordinate system
Computing
Game creation system
Gauss Centre for Supercomputing, in Germany
Google Cloud Storage
Group communication system
Group Control System, an IBM VM Operating system component
Education
Gadsden County School District, in Florida, United States
Gallantry Cross, Silver, an honour of the Republic of Venda
Gaston Christian School, in Lowell, North Carolina, United States
German Church School, in Addis Ababa, Ethiopia
Glenelg Country School, in Ellicott City, Maryland, United States
Gorey Community School, in County Wexford, Ireland
Government College of Science, Lahore, Pakistan
Grace Christian School (Florida), in Valrico, Florida, United States
Grace Church School, in New York City
Granville County Schools, in North Carolina, United States
Greenfield Community School, in Dubai
Greenville Christian School, in Mississippi, United States
Greenwood College School, in Toronto, Ontario, Canada
Guadalupe Catholic School, in Makati, Philippines
Guildford County School, in England
Medicine
Gamma-glutamylcysteine synthetase
Gender confirming surgery
Glasgow Coma Scale
Glucocorticosteroids
Glycine cleavage system
Other uses
General Campaign Star (Canada), a Canadian Forces medal
Grand Central Station, in New York City
Satellite ground control station
UAV ground control station
Global Civic Sharing, a South Korean charity
Global Combat Ship, of the Royal Navy
Gold Coast Suns, an Australian Football League team
See also
GC (disambiguation) |
https://en.wikipedia.org/wiki/Markov%20chain%20Monte%20Carlo | In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution. Various algorithms exist for constructing chains, including the Metropolis–Hastings algorithm.
Application domains
MCMC methods are primarily used for calculating numerical approximations of multi-dimensional integrals, for example in Bayesian statistics, computational physics, computational biology and computational linguistics.
In Bayesian statistics, the recent development of MCMC methods has made it possible to compute large hierarchical models that require integrations over hundreds to thousands of unknown parameters.
In rare event sampling, they are also used for generating samples that gradually populate the rare failure region.
General explanation
Markov chain Monte Carlo methods create samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, as its expected value or variance.
Practically, an ensemble of chains is generally developed, starting from a set of points arbitrarily chosen and sufficiently distant from each other. These chains are stochastic processes of "walkers" which move around randomly according to an algorithm that looks for places with a reasonably high contribution to the integral to move into next, assigning them higher probabilities.
Random walk Monte Carlo methods are a kind of random simulation or Monte Carlo method. However, whereas the random samples of the integrand used in a conventional Monte Carlo integration are statistically independent, those used in MCMC are autocorrelated. Correlation among samples introduces the need to use the Markov chain central limit theorem when estimating the error of mean values.
These algorithms create Markov chains such that they have an equilibrium distribution which is proportional to the function given.
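A minimal instance of such a chain is the random-walk Metropolis algorithm: propose a symmetric step, accept it with probability min(1, f(x')/f(x)), and the chain's equilibrium distribution is proportional to f. This Python sketch (names are illustrative) samples a standard normal known only up to its normalising constant:

```python
import math
import random

def metropolis(f, x0, step, n):
    """Random-walk Metropolis sampler targeting density proportional to f."""
    x, samples = x0, []
    for _ in range(n):
        prop = x + random.uniform(-step, step)   # symmetric proposal
        # Accept with probability min(1, f(prop) / f(x)); only the ratio
        # matters, so f need not be normalised.
        if random.random() < f(prop) / f(x):
            x = prop
        samples.append(x)                        # rejected moves repeat x
    return samples

random.seed(0)
target = lambda x: math.exp(-x * x / 2)          # unnormalised N(0, 1)
chain = metropolis(target, x0=0.0, step=1.0, n=50_000)
mean = sum(chain) / len(chain)                   # close to 0
var = sum((x - mean) ** 2 for x in chain) / len(chain)  # close to 1
```

Because successive states are autocorrelated, error estimates for `mean` must account for the effective sample size rather than the raw chain length.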
Reducing correlation
While MCMC methods were created to address multi-dimensional problems better than generic Monte Carlo algorithms, when the number of dimensions rises they too tend to suffer the curse of dimensionality: regions of higher probability tend to stretch and get lost in an increasing volume of space that contributes little to the integral. One way to address this problem could be shortening the steps of the walker, so that it doesn't continuously try to exit the highest probability region, though this way the process would be highly autocorrelated and expensive (i.e. many steps would be required for an accurate result). More sophisticated methods such as Hamiltonian Monte Carlo and the Wang and Landau algorithm use various ways of reducing this autocorrelation. |
https://en.wikipedia.org/wiki/3DMark | 3DMark is a computer benchmarking tool created and developed by UL (formerly Futuremark) to determine the performance of a computer's 3D graphic rendering and CPU workload processing capabilities. Running 3DMark produces a 3DMark score, with higher numbers indicating better performance. The 3DMark measurement unit is intended to give a normalized means for comparing different PC hardware configurations (mostly graphics processing units and central processing units), which proponents such as gamers and overclocking enthusiasts assert is indicative of end-user performance capabilities.
Many versions of 3DMark have been released since 1998. Scores cannot be compared across versions as each test is based on a specific version of the DirectX API. 3DMark 11 and earlier versions, being no longer suitable to test modern hardware, have been made available as freeware by providing keys to unlock the full version on the UL website.
Versions
See also
Benchmark (computing)
PCMark
Futuremark
References
External links
3DMark website
UL benchmarks
1998 software
Benchmarks (computing)
Software developed in Finland |
https://en.wikipedia.org/wiki/X3D | X3D (Extensible 3D) is a set of royalty-free ISO/IEC standards for declaratively representing 3D computer graphics. X3D includes multiple graphics file formats, programming-language API definitions, and run-time specifications for both delivery and integration of interactive network-capable 3D data. X3D version 4.0 has been approved by Web3D Consortium, and is under final review by ISO/IEC as a revised International Standard (IS).
X3D is specifically designed to work across diverse devices by using the Web Architecture. X3D provides a range of 3D functionality through Profiles, from basic asset Interchange and CADInterchange to Interactive, MPEG-4 Interactive, Medical, Immersive, and Full Profiles. Anatomically thorough support is also available for Humanoid Animation (HAnim) body structure and motion. The 'X' in X3D means Extensible: custom vendor and research component extensions can be added to standard functionality.
X3D file format support includes XML, ClassicVRML, Compressed Binary Encoding (CBE) and a draft JSON encoding. Semantic Web support has also been demonstrated by a Turtle encoding. X3D became the successor to the Virtual Reality Modeling Language (VRML) in 2001. X3D provides multiple extensions to VRML (e.g. CAD, geospatial, humanoid animation, NURBS, etc.), the ability to encode the scene using an XML syntax as well as the Open Inventor-like syntax of VRML97, or binary compression, with strongly typed APIs including ECMAScript, Java, Python and other programming languages.
X3D rendering includes both classic (e.g. Blinn-Phong) and modern physically based rendering (PBR) methods matching glTF 2.0 capabilities. Use of custom shaders using three platform-specific shader languages is also defined. Authors can employ rich multimedia capabilities including various image and movie formats. Fully spatialized aural rendering applies W3C Web Audio API capabilities, plus audio inputs digitized using MIDI 2.0 or other sound formats.
All X3D file encodings and programming-language APIs have equivalent expressive power, matching functional definitions in the X3D Architecture standard. Thus X3D can work with open standards including XML, Document Object Model (DOM), XPath and others.
Example
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE X3D PUBLIC "ISO//Web3D//DTD X3D 4.0//EN" "http://www.web3d.org/specifications/x3d-4.0.dtd">
<X3D profile="Interchange" version="4.0"
xmlns:xsd="http://www.w3.org/2001/XMLSchema-instance"
xsd:noNamespaceSchemaLocation="http://www.web3d.org/specifications/x3d-4.0.xsd">
<Scene>
<Shape DEF="MyTriangle">
<IndexedFaceSet coordIndex="0 1 2">
<Coordinate point="0 0 0 1 0 0 0.5 1 0"/>
</IndexedFaceSet>
</Shape>
</Scene>
</X3D>
The VRML representation of this scene is the same, except that the version numbers are changed to reflect the latest X3D standard (#X3D V4.0 utf8). An identifying DEF name is also applied as a node identifier (id).
For JSON and binary formats, |
https://en.wikipedia.org/wiki/Utah%20teapot | The Utah teapot, or the Newell teapot, is a 3D test model that has become a standard reference object and an in-joke within the computer graphics community. It is a mathematical model of an ordinary Melitta-brand teapot that appears solid with a nearly rotationally symmetrical body. Using a teapot model is considered the 3D equivalent of a "Hello, World!" program, a way to create an easy 3D scene with a somewhat complex model acting as the basic geometry for a scene with a light setup. Some programming libraries, such as the OpenGL Utility Toolkit, even have functions dedicated to drawing teapots.
The teapot model was created in 1975 by early computer graphics researcher Martin Newell, a member of the pioneering graphics program at the University of Utah. It was one of the first to be modeled using Bézier curves rather than precisely measured.
History
For his work, Newell needed a simple mathematical model of a familiar object. His wife, Sandra Newell, suggested modelling their tea set since they were sitting down for tea at the time. He sketched the teapot free-hand using graph paper and a pencil. Following that, he went back to the computer laboratory and edited Bézier control points on a Tektronix storage tube, again by hand.
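The control points Newell edited define Bézier patches, whose underlying curve evaluation can be sketched with de Casteljau's algorithm: repeatedly interpolate between adjacent control points until one point remains. The function below is an illustrative aside, not Newell's original code:

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t (0..1) by repeated linear
    interpolation of the control points (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Lerp each adjacent pair of points, reducing the list by one.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic curve through three 2-D control points; t=0 and t=1 hit the ends.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
assert de_casteljau(ctrl, 0.0) == (0.0, 0.0)
assert de_casteljau(ctrl, 1.0) == (2.0, 0.0)
mid = de_casteljau(ctrl, 0.5)   # (1.0, 1.0)
```

The teapot itself uses bicubic patches, i.e. a 4x4 grid of such control points evaluated along two parameters.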
The teapot shape contained a number of elements that made it ideal for the graphics experiments of the time: it was round, contained saddle points, had a genus greater than zero because of the hole in the handle, could project a shadow on itself, and could be displayed accurately without a surface texture.
Newell made the mathematical data that described the teapot's geometry (a set of three-dimensional coordinates) publicly available, and soon other researchers began to use the same data for their computer graphics experiments. These researchers needed something with roughly the same characteristics that Newell had, and using the teapot data meant they did not have to laboriously enter geometric data for some other object. Although technical progress has meant that the act of rendering the teapot is no longer the challenge it was in 1975, the teapot continued to be used as a reference object for increasingly advanced graphics techniques.
Over the following decades, editions of computer graphics journals (such as the ACM SIGGRAPH's quarterly) regularly featured versions of the teapot: faceted or smooth-shaded, wireframe, bumpy, translucent, refractive, even leopard-skin and furry teapots were created.
Having no surface to represent its base, the original teapot model was not intended to be seen from below. Later versions of the data set fixed this.
The real teapot is 33% taller (ratio 4:3) than the computer model. Jim Blinn stated that he scaled the model on the vertical axis during a demo in the lab to demonstrate that they could manipulate it. They preferred the appearance of this new version and decided to save the file out of that preference.
Versions of the teapot model — or sample scenes containing it — ar |
https://en.wikipedia.org/wiki/Distributed%20web%20crawling | Distributed web crawling is a distributed computing technique whereby Internet search engines employ many computers to index the Internet via web crawling. Such systems may allow for users to voluntarily offer their own computing and bandwidth resources towards crawling web pages. By spreading the load of these tasks across many computers, costs that would otherwise be spent on maintaining large computing clusters are avoided.
Types
Cho and Garcia-Molina studied two types of policies:
Dynamic assignment
With this type of policy, a central server assigns new URLs to different crawlers dynamically. This allows the central server to, for instance, dynamically balance the load of each crawler.
With dynamic assignment, typically the systems can also add or remove downloader processes. The central server may become the bottleneck, so most of the workload must be transferred to the distributed crawling processes for large crawls.
There are two configurations of crawling architectures with dynamic assignments that have been described by Shkapenyuk and Suel:
A small crawler configuration, in which there is a central DNS resolver and central queues per Web site, and distributed downloaders.
A large crawler configuration, in which the DNS resolver and the queues are also distributed.
Static assignment
With this type of policy, there is a fixed rule stated from the beginning of the crawl that defines how to assign new URLs to the crawlers.
For static assignment, a hashing function can be used to transform URLs (or, even better, complete website names) into a number that corresponds to the index of the corresponding crawling process. As there are external links that will go from a Web site assigned to one crawling process to a website assigned to a different crawling process, some exchange of URLs must occur.
To reduce the overhead due to the exchange of URLs between crawling processes, the exchange should be done in batch, several URLs at a time, and the most cited URLs in the collection should be known by all crawling processes before the crawl (e.g.: using data from a previous crawl).
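A hedged sketch of such a static assignment in Python: hashing the host name rather than the full URL keeps every page of one site on the same crawling process, which minimizes the URL exchange described above (the helper below is illustrative, not taken from any real crawler):

```python
import hashlib

def assign_crawler(url: str, num_crawlers: int) -> int:
    """Static assignment: hash the host name (not the full URL) so that
    all pages of one website map to the same crawling process."""
    # Crude host extraction; a real crawler would use urllib.parse.
    host = url.split("/")[2] if "//" in url else url.split("/")[0]
    digest = hashlib.sha1(host.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_crawlers

# All URLs of the same site land on one process:
a = assign_crawler("http://example.org/page1", 8)
b = assign_crawler("http://example.org/dir/page2", 8)
assert a == b
```

Using a stable hash (rather than Python's seeded built-in `hash`) matters here: every crawling process must compute the same assignment independently, without consulting a central server.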
Implementations
As of 2003, most modern commercial search engines use this technique. Google and Yahoo use thousands of individual computers to crawl the Web.
Newer projects are attempting to use a less structured, more ad hoc form of collaboration by enlisting volunteers to join the effort using, in many cases, their home or personal computers. LookSmart was the largest search engine to use this technique, through its Grub distributed web-crawling project. Wikia (now known as Fandom) acquired Grub from LookSmart in 2007.
This solution uses computers that are connected to the Internet to crawl Internet addresses in the background. Upon downloading crawled web pages, they are compressed and sent back, together with a status flag (e.g. changed, new, down, redirected) to the powerful central servers. The servers, which manage a large database, send out new URLs |
https://en.wikipedia.org/wiki/Euronews | Euronews (styled euronews) is a European television news network, headquartered in Brussels, Belgium. The network began broadcasting on 1 January 1993 and covers world news from a European perspective.
It is a provider of livestreamed news, which can be viewed in most of the world via its website, on YouTube, and on various mobile devices and digital media players.
History
Timeline
Background
In 1992, following the Persian Gulf War, during which CNN's position as the preeminent source of 24-hour news programming was cemented, the European Broadcasting Union (EBU) proposed a channel to present information from a counterpart European perspective.
Euronews SA was founded by a consortium of ten EBU members (national public broadcasters), titled SOCEMIE:
CyBC, Cyprus
France Télévisions, France
RAI, Italy
RTBF, Belgium
RTP, Portugal
RTVE, Spain
TMC, Monaco
Yle, Finland
ERTU, Egypt
The BBC, as well as German public broadcasters ARD and ZDF, opted not to join the consortium. The Swiss public broadcaster SRG SSR was admitted later as a non-founding member.
The channel's headquarters was to be established in Lyon, which was chosen ahead of other candidate cities including Munich, Bologna and Valencia.
1993–2015: Launch, geographic and linguistic expansion
The inaugural Euronews broadcast was on 1 January 1993 from Écully, Lyon. In 1996, an additional broadcast studio was set up in London.
In late 1997, the British news broadcaster ITN purchased a 49% share of Euronews for £5.1 million from Alcatel-Lucent. ITN supplied the content of the channel along with the remaining shareholders.
In 1999, the broadcast switched from solely analogue to mainly digital transmission. The same year, a Portuguese audio track was added.
In 2001, the All-Russia State Television and Radio Broadcasting Company (VGTRK) acquired a 1.8% stake in SOCEMIE. A Russian-language service was launched later in the year.
In April 2003, ITN sold its stake in Euronews as part of its drive to streamline operations and focus on news-gathering rather than channel management.
On 6 February 2006, Ukrainian public broadcaster Natsionalna Telekompanya Ukraïny (NTU) purchased a one percent interest in SOCEMIE.
In 2007, Euronews won the European Commission's tender for an Arabic-language news channel, with a service agreement being signed on 6 December. The Arabic service would eventually be launched in July 2008.
On 27 May 2008, Spanish public broadcaster RTVE decided to leave Euronews, citing legal requirements to maintain low debt levels through careful spending as a factor influencing its decision to leave, as well as to promote its international channel TVE Internacional.
In February 2009, the Turkish public broadcaster TRT became a shareholder in the channel and joined its supervisory board. TRT purchased 15.70% of the channel's shares and became the fourth main partner after France Télévisions (23.93%), RAI (21.54%), and VGTRK (16.94%). Subsequently, Turkish was added as t |
https://en.wikipedia.org/wiki/AspectJ | AspectJ is an aspect-oriented programming (AOP) extension created at PARC for the Java programming language. It is available in Eclipse Foundation open-source projects, both stand-alone and integrated into Eclipse. AspectJ has become a widely used de facto standard for AOP by emphasizing simplicity and usability for end users. It uses Java-like syntax and has included IDE integrations for displaying crosscutting structure since its initial public release in 2001.
Simple language description
All valid Java programs are also valid AspectJ programs, but AspectJ lets programmers define special constructs called aspects. Aspects can contain several entities unavailable to standard classes. These are:
Extension methods: allow a programmer to add methods, fields, or interfaces to existing classes from within the aspect. This example adds an acceptVisitor (see visitor pattern) method to the Point class:
aspect VisitAspect {
void Point.acceptVisitor(Visitor v) {
v.visit(this);
}
}
Pointcuts: allow a programmer to specify join points (well-defined moments in the execution of a program, like method call, object instantiation, or variable access). All pointcuts are expressions (quantifications) that determine whether a given join point matches. For example, this pointcut matches the execution of any instance method in an object of type Point whose name begins with set:
pointcut set() : execution(* set*(..) ) && this(Point);
Advice: allows a programmer to specify code to run at a join point matched by a pointcut. The actions can be performed before, after, or around the specified join point. Here, the advice refreshes the display every time something on Point is set, using the pointcut declared above:
after () : set() {
Display.update();
}
AspectJ also supports limited forms of pointcut-based static checking and aspect reuse (by inheritance). See the AspectJ Programming Guide for a more detailed description of the language.
AspectJ compatibility and implementations
AspectJ can be implemented in many ways, including source-weaving or bytecode-weaving, and directly in the virtual machine (VM). In all cases, the AspectJ program becomes a valid Java program that runs in a Java VM. Classes affected by aspects are binary-compatible with unaffected classes (to remain compatible with classes compiled with the unaffected originals). Supporting multiple implementations allows the language to grow as technology changes, and being Java-compatible ensures platform availability.
Key to its success has been engineering and language decisions that make the language usable and programs deployable. The original Xerox AspectJ implementation used source weaving, which required access to source code. When Xerox contributed the code to Eclipse, AspectJ was reimplemented using the Eclipse Java compiler and a bytecode weaver based on BCEL, so developers could write aspects for code in binary (.class) form. At this time the AspectJ language was restricted to support a |
https://en.wikipedia.org/wiki/Information%20privacy | Information privacy is the relationship between the collection and dissemination of data, technology, the public expectation of privacy, contextual information norms, and the legal and political issues surrounding them. It is also known as data privacy or data protection.
Data privacy is challenging since it attempts to use data while protecting an individual's privacy preferences and personally identifiable information. The fields of computer security, data security, and information security all design and use software, hardware, and human resources to address this issue.
Authorities
Laws
Authorities by country
Information types
Various types of personal information often come under privacy concerns.
Cable television
This describes the ability to control what information one reveals about oneself over cable television, and who can access that information. For example, third parties can track IP TV programs someone has watched at any given time. "The addition of any information in a broadcasting stream is not required for an audience rating survey, additional devices are not requested to be installed in the houses of viewers or listeners, and without the necessity of their cooperations, audience ratings can be automatically performed in real-time."
Educational
In the United Kingdom in 2012, the Education Secretary Michael Gove described the National Pupil Database as a "rich dataset" whose value could be "maximised" by making it more openly accessible, including to private companies. Kelly Fiveash of The Register said that this could mean "a child's school life including exam results, attendance, teacher assessments and even characteristics" could be available, with third-party organizations being responsible for anonymizing any publications themselves, rather than the data being anonymized by the government before being handed over. An example of a data request that Gove indicated had been rejected in the past, but might be possible under an improved version of privacy regulations, was for "analysis on sexual exploitation".
Financial
Information about a person's financial transactions, including the amount of assets, positions held in stocks or funds, outstanding debts, and purchases can be sensitive. If criminals gain access to information such as a person's accounts or credit card numbers, that person could become the victim of fraud or identity theft. Information about a person's purchases can reveal a great deal about that person's history, such as places they have visited, whom they have contact with, products they have used, their activities and habits, or medications they have used. In some cases, corporations may use this information to target individuals with marketing customized to that individual's personal preferences, which that person may or may not approve of.
Internet
The ability to control the information one reveals about oneself over the internet, and who can access that information, has become a growing concern. |
https://en.wikipedia.org/wiki/Distributed%20artificial%20intelligence | Distributed Artificial Intelligence (DAI) also called Decentralized Artificial Intelligence is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems.
Multi-agent systems and distributed problem solving are the two main DAI approaches. There are numerous applications and tools.
Definition
Distributed Artificial Intelligence (DAI) is an approach to solving complex learning, planning, and decision-making problems. It is embarrassingly parallel, thus able to exploit large scale computation and spatial distribution of computing resources. These properties allow it to solve problems that require the processing of very large data sets. DAI systems consist of autonomous learning processing nodes (agents), that are distributed, often at a very large scale. DAI nodes can act independently, and partial solutions are integrated by communication between nodes, often asynchronously. By virtue of their scale, DAI systems are robust and elastic, and by necessity, loosely coupled. Furthermore, DAI systems are built to be adaptive to changes in the problem definition or underlying data sets due to the scale and difficulty in redeployment.
DAI systems do not require all the relevant data to be aggregated in a single location, in contrast to monolithic or centralized Artificial Intelligence systems which have tightly coupled and geographically close processing nodes. Therefore, DAI systems often operate on sub-samples or hashed impressions of very large datasets. In addition, the source dataset may change or be updated during the course of the execution of a DAI system.
Development
In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into multi-agent systems and distributed problem solving. In multi-agent systems the main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized.
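The decompose-and-synthesize pattern of distributed problem solving can be sketched in a few lines of Python. Here the task (summing squares over shards of a dataset) and the four-worker pool are purely illustrative stand-ins for autonomous agents working on sub-problems:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(shard):
    """An 'agent': works on its own slice of the data independently."""
    return sum(x * x for x in shard)

data = list(range(10_000))
# Decompose: each of four nodes gets an interleaved slice of the data.
shards = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(solve_subproblem, shards))
# Synthesize: combine the partial solutions into the global answer.
total = sum(partials)
assert total == sum(x * x for x in data)
```

In a real DAI system the agents would run on separate machines and exchange partial results asynchronously over a network, rather than through an in-process thread pool.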
Goals
The objectives of Distributed Artificial Intelligence are to solve the reasoning, planning, learning and perception problems of artificial intelligence, especially if they require large data, by distributing the problem to autonomous processing nodes (agents). To reach the objective, DAI requires:
A distributed system with robust and elastic computation on unreliable and failing resources that are loosely coupled
Coordination of the actions and communication of the nodes
Subsamples of large data sets and online machine learning
There are many reasons for wanting to distribute intelligence or cope with multi-agent systems. Mainstream prob |
https://en.wikipedia.org/wiki/Omoikane | Omoikane may refer to:
Omoikane (Shinto) is the god of knowledge in Shinto
Omoikane (Nadesico) is the main computer of the Nadesico starship from the series Martian Successor Nadesico |
https://en.wikipedia.org/wiki/Malbolge | Malbolge is a public domain esoteric programming language invented by Ben Olmstead in 1998, named after the eighth circle of hell in Dante's Inferno, the Malebolge. It was specifically designed to be almost impossible to use, via a counter-intuitive 'crazy operation', base-three arithmetic, and self-altering code. It builds on the difficulty of earlier challenging esoteric languages (such as Brainfuck and Befunge), but exaggerates this aspect to an extreme degree, playing on the entangled histories of computer science and encryption. Despite this design, it is possible to write useful Malbolge programs.
Programming in Malbolge
Malbolge was very difficult to understand when it arrived, taking two years for the first Malbolge program to appear. The author himself has never written a Malbolge program. The first program was not written by a human being; it was generated by a beam search algorithm designed by Andrew Cooke and implemented in Lisp.
Later, Lou Scheffer posted a cryptanalysis of Malbolge and provided a program to copy its input to its output. He also saved the original interpreter and specification after the original site stopped functioning, and offered a general strategy of writing programs in Malbolge as well as some thoughts on its Turing completeness.
Olmstead believed Malbolge to be a linear bounded automaton. There has been discussion about whether one can implement sensible loops in Malbolge; it took many years before the first non-terminating one was introduced. A correct 99 Bottles of Beer program, which deals with non-trivial loops and conditions, was not announced for seven years; the first correct one was by Hisashi Iizawa in 2005. Iizawa et al. also proposed a guide for programming in Malbolge for the purpose of obfuscation for software protection.
In 2020, Kamila Szewczyk published a Lisp interpreter in Malbolge Unshackled.
Example programs
Hello, World!
This program displays "Hello, World".
(=<`#9]~6ZY327Uv4-QsqpMn&+Ij"'E%e{Ab~w=_:]Kw%o44Uqp0/Q?xNvL:`H%c#DD2^WV>gY;dts76qKJImZkj
cat program
This program reads a string from a user and prints that string, similar to the Unix command-line utility cat.
(=BA#9"=<;:3y7x54-21q/p-,+*)"!h%B0/.
~P<
<:(8&
66#"!~}|{zyxwvu
gJ%
Design
Malbolge is machine language for a ternary virtual machine, the Malbolge interpreter.
The standard interpreter and the official specification do not match perfectly. One difference is that the compiler stops execution with data outside the 33–126 range. Although this was initially considered a bug in the compiler, Ben Olmstead stated that it was intended and there was in fact "a bug in the specification".
Registers
Malbolge has three registers, a, c, and d. When a program starts, the value of all three registers is zero.
a stands for 'accumulator', set to the value written by all write operations on memory and used for standard I/O. c, the code pointer, is special: it points to the current instruction. d is the data pointer. It is autom |
https://en.wikipedia.org/wiki/De%20Wallen | De Wallen () is the largest and best known red-light district in Amsterdam. It consists of a network of alleys containing approximately 300 one-room cabins rented by prostitutes who offer their sexual services from behind a window or glass door, typically illuminated with red lights and blacklight. Window prostitution is the most visible and typical kind of red-light district sex work in Amsterdam.
De Wallen, together with the prostitution areas Singelgebied and Ruysdaelkade, forms the Rosse Buurt (red-light areas) of Amsterdam. Of these, De Wallen is the oldest and largest area, and it is one of the city's major tourist attractions. From Thursday through Sunday, access to De Wallen is now restricted from 1:00 am, with bars and restaurants closing at 2:00 am and brothels at 3:00 am.
The area also has a number of sex shops, sex theatres, peep shows, a sex museum, a cannabis museum, and a number of coffee shops that sell cannabis.
History
The Rokin and Damrak run along the original course of the river Amstel. These two roads meet in Dam Square which marks the spot where a bridge was built across the river in 1270. It had doors which were used to dam the river at certain times to avoid flooding. The Damrak then became a harbor and it was around this area that the red-light district first appeared. The walled canals led to the names De Wallen and Walletjes (little walls).
Historically, because of its proximity to the harbor, the area attracted both prostitution and migrant populations, and these are the features for which it is best known today.
From late medieval times, the trade started to be restricted. Married men and priests were forbidden to enter the area. In 1578 during the Dutch Revolt against Spanish rule a Protestant city board was formed with fornication deemed punishable. Sex workers were banned and forced underground. They would work for a madam who provided room and board, protection and advice. Often the madam and girls would venture out at night visiting pubs and inns to pick up clients. Parlours remained illegal but tolerated if kept hidden. Trade remained small-scale though spread across the city. Well-known areas were De Haarlemmerdijk, De Houttuinen, Zeedijk and around the harbor.
In the 18th century, wealthy men would meet prostitutes at gambling houses on De Gelderskade and Zeedijk. The women would then take the men back to the parlors where they came from. However, these were often unappealing to a gentleman of means. A solution to this problem was for the gambling houses to provide board for the women. This suited everyone including the authorities. The gambling houses invested in luxury furnishings and gradually they became brothels employing up to 30 women. Famous brothels included De Pijl in Pijlstraat, De Fonteyn in Nieuwmarkt and Madame Therese on the Prinsengracht. For those who could not afford entry to these houses, there were still women to be found around Oudekerksplein and unofficial policies of tolerance remained, although prostitutio |
https://en.wikipedia.org/wiki/End-to-end%20principle | The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes, such as gateways and routers, that exist to establish the network, may implement these to improve efficiency but cannot guarantee end-to-end correctness.
The essence of what would later be called the end-to-end principle was contained in the work of Paul Baran and Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s. The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark, and its meaning has been continuously reinterpreted ever since. Noteworthy formulations of the end-to-end principle can also be found before the seminal 1981 Saltzer, Reed, and Clark paper.
A basic premise of the principle is that the payoffs from adding certain features required by the end application to the communication subsystem quickly diminish. The end hosts have to implement these functions for correctness. Implementing a specific function incurs some resource penalties regardless of whether the function is used or not, and implementing a specific function in the network adds these penalties to all clients, whether they need the function or not.
Concept
The fundamental notion behind the end-to-end principle is that for two processes communicating with each other via some communication means, the reliability obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high-reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of reliability by positive end-to-end acknowledgments and retransmissions (referred to as PAR or ARQ). Put differently, it is far easier to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network rather than in the intermediary nodes, especially when the latter are beyond the control of, and not accountable to, the former. Positive end-to-end acknowledgments with infinite retries can obtain arbitrarily high reliability from any network with a higher than zero probability of successfully transmitting data from one end to another.
The end-to-end principle does not extend to functions beyond end-to-end error control and correction, and security. E.g., no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. In a 2001 paper, Blumenthal and Clark note: "[F]rom the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the endpoints; if implementation inside the ne |
https://en.wikipedia.org/wiki/C-sharp | C-sharp, C♯, or C# may refer to:
C♯ (musical note)
C-sharp major, a musical scale
C-sharp minor, a musical scale
C# (programming language), a programming language pronounced as "C-sharp" |
https://en.wikipedia.org/wiki/Holddown | Holddown works by having each router start a timer when it first receives information that a network is unreachable. Until the timer expires, the router discards any subsequent route messages that indicate the route is in fact reachable. It can solve the case where multiple routers are connected indirectly, for which there are realistic scenarios where split horizon and split horizon with poisoned reverse can do nothing.
In other words, a holddown keeps a router from accepting route updates until the network appears to be stable—until either an interface stops changing state (flapping) or a better route is learned.
Holddowns are usually implemented with timers. If the router detects that a network is unreachable, the timer is started. The router then waits a preset number of seconds to let the network stabilize. Only when the timer expires will the router again accept routing updates for that route from other routers. For example, in RIP the default holddown timer is set to 180 seconds.
References
Routing protocols |
https://en.wikipedia.org/wiki/Business%20Operating%20System%20%28software%29 | The Business Operating System, or BOS, was an early cross-platform operating system, originally developed for Intel 8080 and Motorola 6800 microprocessors. The technology was used in Zilog Z80-based computers and later on most microcomputers of the 1980s. The system was developed by CAP Ltd, a British company that later became one of the world's largest information technology consulting firms. BOS and BOS applications were designed to be platform-independent.
Via a management buyout (MBO) in 1981, BOS was spun off to three interlinked companies, MPSL (MicroProducts Software Ltd) which looked after the sales and marketing of BOS, MPPL (MicroProducts Programming Ltd) which looked after both the development of BOS and various horizontal software packages, and MicroProducts Training Ltd. BOS was distributed on a global basis, mainly to the United States and the British Commonwealth, by a variety of independent and MPSL-owned companies.
A popular version was implemented on the Sage/Stride 68000 family based computers, and sold well in Australia. The Sage itself was initially developed using UCSD Pascal and p-code, so it fitted well with the basic BOS design.
The small BOS dealer/distributor network, as well as the system's command-line interface, contributed to its decline, especially at a time when graphical user interfaces (GUIs) were becoming popular. In 2013, the system gained an integrated GUI intended to provide a "simple to use" solution that "learned" from its user's input.
MPSL developed numerous products for BOS, generally targeting horizontal markets, leaving vertical (industry-specific) markets to independent software vendors (ISVs). Examples of MPSL developed software include BOS/Finder (database), BOS/Planner (spreadsheet), BOS/Writer (word processor) and BOS/AutoClerk (report generator). Companies sold various BOS accounting software suites in the UK and United States. In the UK, BOS accounting packages were considered to be the industry standard by some accountants.
The accounts software was split into four sections: Sales Ledger, Invoices, Purchase Ledger, Daybook and Journal Entries. Data entry and ledger reports were compatible with the Autoclerk report generator. This feature was especially favoured by accountants and tax officials as it meant that a consultant could sit down with a programmer/operator of the BOS system to plan out and ensure that accounting information was presented in exactly the right way for official acceptance. In the early adoption of business microcomputers not having accounts correctly laid out was one of the biggest complaints by tax officials.
An interesting feature of the command line input was the use of the ESCape key for line entry. This freed up the ENTER key (also called RETURN, as per typewriter keyboards) to allow the input of long lines of code and long spans of data entry.
BOS had its own job control |
https://en.wikipedia.org/wiki/Power-line%20communication | Power-line communication (also known as power-line carrier), abbreviated as PLC, carries data on a conductor that is also used simultaneously for AC electric power transmission or electric power distribution to consumers.
In the past, power lines were used solely for transmitting electricity. With the advent of advanced networking technologies, including broadband, utilities and service providers have been pushed to find cost-effective, high-performance solutions, and only recently have businesses started to seriously consider using power lines for data networking. The possibility of using power lines as a universal medium to transmit not just electricity or control signals, but also high-speed data and multimedia, is now under investigation.
A wide range of power-line communication technologies are needed for different applications, ranging from home automation to Internet access, which is often called broadband over power lines (BPL). Most PLC technologies limit themselves to one type of wiring (such as premises wiring within a single building), but some can cross between two levels (for example, both the distribution network and premises wiring). Typically, transformers prevent the signal from propagating, so multiple technologies are required to form very large networks. Various data rates and frequencies are used in different situations.
A number of difficult technical problems are common between wireless and power-line communication, notably those of spread spectrum radio signals operating in a crowded environment. Radio interference, for example, has long been a concern of amateur radio groups.
Basics
Power-line communications systems operate by adding a modulated carrier signal to the wiring system. Different types of power-line communications use different frequency bands. Since the power distribution system was originally intended for transmission of AC power at typical frequencies of 50 or 60 Hz, power wire circuits have only a limited ability to carry higher frequencies. The propagation problem is a limiting factor for each type of power-line communications.
The main issue determining the frequencies used for power-line communication is regulation limiting interference with radio services. Many nations regulate unshielded wired emissions as if they were radio transmitters. These jurisdictions usually require unlicensed uses to be below 500 kHz or in unlicensed radio bands. Some jurisdictions (such as the EU) regulate wire-line transmissions further. The U.S. is a notable exception, permitting limited-power wide-band signals to be injected into unshielded wiring, as long as the wiring is not designed to propagate radio waves in free space.
Data rates and distance limits vary widely over many power-line communication standards. Low-frequency (about 100–200 kHz) carriers impressed on high-voltage transmission lines may carry one or two analog voice circuits, or telemetry and control circuits with an equivalent data rate of a few hund |
https://en.wikipedia.org/wiki/United%20World%20Colleges | United World Colleges (UWC) is an international network of schools and educational programmes with the shared aim of "making education a force to unite people, nations and cultures for peace and a sustainable future." The organization was founded on the principles of German educator Kurt Hahn in 1962 to promote intercultural understanding.
Today, UWC consists of 18 colleges on four continents. Young people from more than 155 countries are selected through a system of national committees and pursue the International Baccalaureate Diploma; some of the schools are also open to younger years (from kindergarten). UWC runs the world's largest scholarship programme in international secondary education, with over 80% of students selected by UWC national committees to attend one of the colleges receiving financial support. To date, there are almost 60,000 UWC alumni from all over the world.
The current President of UWC is Queen Noor of Jordan (1995–present). Former South African President Nelson Mandela was joint President (1995–1999) alongside Queen Noor, and subsequently Honorary President of UWC (1999–2013). Former UWC presidents are Lord Mountbatten (1968–1977) and when he was the Prince of Wales, King Charles III (1978–1995).
The movement, including the colleges and national committees, are linked and coordinated by UWC International, which consists of the UWC International Board, the UWC International Council, and the UWC International Office (UWCIO), based in London and Berlin. These entities work together to set the global strategy for the movement, oversee fundraising, and approve new colleges. Faith Abiodun, who joined the movement in 2021, serves as executive director of the International Office, and Musimbi Kanyoro has been the chair of the International Board since 2019.
History
UWC was originally founded in the early 1960s to bridge the social, national and cultural divides apparent during the Second World War, and exacerbated by the Cold War. The first college in the movement, UWC Atlantic College in Wales, United Kingdom, was founded in 1962 by Kurt Hahn, a German educator who had previously founded Schule Schloss Salem in Germany, Gordonstoun in Scotland, the Outward Bound movement, and the Duke of Edinburgh's Award Scheme.
Hahn envisaged a college educating boys and girls aged 16 to 19. He believed that schools should not simply be a means for preparing to enter university, but should help students prepare for life by developing resilience and the ability to experience both successes and failures. The selection would be based on personal motivation and potential, regardless of any social, economic or cultural factors. A scholarship programme would facilitate the recruitment of young people from different socio-economic backgrounds.
Louis Mountbatten was involved with Atlantic College from its early days, and encouraged the organization to adopt the name "United World Colleges" and to open an international office with operations di |
https://en.wikipedia.org/wiki/Quiet%20PC | A quiet, silent or fanless PC is a personal computer that makes very little or no noise. Common uses for quiet PCs include video editing, sound mixing and home theater PCs, but noise reduction techniques can also be used to greatly reduce the noise from servers. There is currently no standard definition for a "quiet PC", and the term is generally not used in a business context, but by individuals and the businesses catering to them.
A proposed general definition is that the sound emitted by such PCs should not exceed 30 dBA, but in addition to the average sound pressure level, the frequency spectrum and dynamics of the sound are important in determining if the sound of the computer is noticed. Sounds with a smooth frequency spectrum (lacking audible tonal peaks), and little temporal variation are less likely to be noticed. The character and amount of other noise in the environment also affects how much sound will be noticed or masked, so a computer may be quiet with relation to a particular environment or set of users.
History
Prior to about 1975, computers were typically large industrial or commercial machines, often kept in a centralized location with a dedicated room-sized cooling system, so noise was not an important issue.
The first home computers, such as the Commodore 64, were very low power, and therefore could run fanless or, like the IBM PC, with a low-speed fan only used to cool the power supply, so noise was seldom an issue.
By the mid 1990s as CPU clock speeds increased above 60 MHz, "spot-cooling" was added by means of a fan over the CPU heatsink to blow air onto the processor. Over time, more fans were included to provide spot-cooling in more locations where heat dissipation was needed, including the 3D graphics cards as they grew more powerful. Computer cases increasingly needed to add fans to extract heated air from the case, but unless very carefully designed, this would add more noise.
Energy Star, in 1992, and similar programs led to the widespread adoption of sleep mode among consumer electronics, and the TCO Certified program promoted lower energy consumption. Both added features that allow systems to consume only as much power as is needed at a particular moment, reducing overall power consumption. Similarly, the first low-power, energy-conserving CPUs were developed for use in laptops, but they can be used in any machine to reduce power requirements, and hence noise.
Causes of noise
The main causes of PC noise are:
Mechanical friction generated by disk drives and fan bearings
Vibration from disk drives and fans
Air turbulence caused by obstructions in the flow of air
Air vortex effects from fan blade edges
Electrical whine: noise generated by electrical coils or transformers used in power supplies, motherboards, video cards or LCD monitors.
Many of these sources increase with the power of the computer. More or faster transistors use more power, which releases more heat. Increasing the rot |
https://en.wikipedia.org/wiki/A/UX | A/UX is a Unix-based operating system from Apple Computer for Macintosh computers, integrated with System 7's graphical interface and application compatibility. It is Apple's first official Unix-based operating system, launched in 1988 and discontinued in 1995 with version 3.1.1. A/UX requires select 68k-based Macintosh models with an FPU and a paged memory management unit (PMMU), including the Macintosh II, SE/30, Quadra, and Centris series.
Described by InfoWorld as "an open systems solution with the Macintosh at its heart", A/UX is based on UNIX System V Release 2.2, with features from System V Releases 3 and 4 and BSD versions 4.2 and 4.3. It is POSIX- and System V Interface Definition (SVID)-compliant and includes TCP/IP networking since version 2. Having a Unix-compatible, POSIX-compliant operating system enabled Apple to bid for large contracts to supply computers to U.S. federal government institutes.
Features
A/UX provides a graphical user interface including the familiar Finder windows, menus, and controls. The A/UX Finder is a customized version of the System 7 Finder, adapted to run as a Unix process and designed to interact with the underlying Unix file systems. A/UX includes the CommandShell terminal program, which offers a command-line interface to the underlying Unix system. An X Window System server application (called MacX) with a terminal program can also be used to interface with the system and run X applications alongside the Finder. Alternatively, the user can choose to run a fullscreen X11R4 session without the Finder.
Apple's compatibility layer allows A/UX to run Macintosh System 7.0.1, Unix, and hybrid applications. A hybrid application uses functions from both the Macintosh toolbox and the Unix system. For example, it can run a Macintosh application which calls Unix system functions, or a Unix application which calls Macintosh Toolbox functions (such as QuickDraw), or a HyperCard stack graphical frontend for a command-line Unix application. A/UX's compatibility layer uses some existing Toolbox functions in the computer's ROM, while other function calls are translated into native Unix system calls; and it cooperatively multitasks all Macintosh apps in a single address space by using a token-passing system for their access to the Toolbox.
A/UX includes a utility called Commando (similar to a tool of the same name included with Macintosh Programmer's Workshop) to assist users with entering Unix commands. Opening a Unix executable file from the Finder opens a dialog box that allows the user to choose command-line options for the program using standard controls such as radio buttons and check boxes, and display the resulting command line argument for the user before executing the command or program. This feature is intended to ease the learning curve for users new to Unix, and decrease the user's reliance on the Unix manual. A/UX has a utility that allows the user to reformat third-party SCSI drives in such a way that t |
https://en.wikipedia.org/wiki/Loebner%20Prize | The Loebner Prize was an annual competition in artificial intelligence that awarded prizes to the computer programs considered by the judges to be the most human-like. The prize is reported as defunct since 2020. The format of the competition was that of a standard Turing test. In each round, a human judge simultaneously held textual conversations with a computer program and a human being via computer. Based upon the responses, the judge would attempt to determine which was which.
The contest was launched in 1990 by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies, Massachusetts, United States. Beginning in 2014 it was organised by the AISB at Bletchley Park.
It has also been associated with Flinders University, Dartmouth College, the Science Museum in London, University of Reading and Ulster University, Magee Campus, Derry, UK City of Culture.
In 2004 and 2005, it was held in Loebner's apartment in New York City. Within the field of artificial intelligence, the Loebner Prize is somewhat controversial; the most prominent critic, Marvin Minsky, called it a publicity stunt that does not help the field along.
In 2019 the format of the competition changed. There was no panel of judges. Instead, the chatbots were judged by the public and there were to be no human competitors.
Prizes
Originally, $2,000 was awarded for the most human-seeming program in the competition. The prize was $3,000 in 2005 and $2,250 in 2006. In 2008, $3,000 was awarded.
In addition, there were two one-time-only prizes that were never awarded. $25,000 was offered for the first program that judges could not distinguish from a real human and which could convince judges that the human was the computer program. $100,000 was the reward for the first program that judges could not distinguish from a real human in a Turing test that included deciphering and understanding text, visual, and auditory input. Once this was achieved, the annual competition would end.
Competition rules and restrictions
The rules varied over the years. Early competitions featured restricted-conversation Turing tests, but since 1995 the discussion has been unrestricted.
For the three entries in 2007, Robert Medeksza, Noah Duncan and Rollo Carpenter, some basic "screening questions" were used by the sponsor to evaluate the state of the technology. These included simple questions about the time, what round of the contest it is, etc.; general knowledge ("What is a hammer for?"); comparisons ("Which is faster, a train or a plane?"); and questions demonstrating memory for preceding parts of the same conversation. "All nouns, adjectives and verbs will come from a dictionary suitable for children or adolescents under the age of 12." Entries did not need to respond "intelligently" to the questions to be accepted.
For the first time in 2008 the sponsor allowed introduction of a preliminary phase to the contest opening up the competition to previously disallowed web-based entries judged by |
https://en.wikipedia.org/wiki/Lhasa%20%28computing%29 | In computing, Lhasa () refers to two different applications.
File archives
Lhasa is a Japanese computer program used to "unpack" or decompress compressed files in LHA, ZIP, and other formats.
Synthetic analysis
It is also the name of a computer program developed in the research group of Elias James Corey at the Harvard University Department of Chemistry which uses AI techniques to discover sequences of reactions which may be used to synthesize a compound. LHASA in this case is an acronym for Logic and Heuristics Applied to Synthetic Analysis. This program was one of the first to use a graphical interface to input and display chemical structures.
References
External links
Susie no heya — author of Lhasa
Data compression software |
https://en.wikipedia.org/wiki/Bootstrap%20Protocol | The Bootstrap Protocol (BOOTP) is a computer networking protocol used in Internet Protocol networks to automatically assign an IP address to network devices from a configuration server. BOOTP was originally defined in a specification published in 1985.
While some parts of BOOTP have been effectively superseded by the Dynamic Host Configuration Protocol (DHCP), which adds the feature of leases, parts of BOOTP are used to provide service to the DHCP protocol. DHCP servers also provide the legacy BOOTP functionality.
When a network-connected computer boots up, its IP stack broadcasts BOOTP network messages requesting an IP-address assignment. A BOOTP configuration-server replies to the request by assigning an IP address from a pool of addresses, which is preconfigured by an administrator.
BOOTP is implemented using the User Datagram Protocol (UDP) for transport protocol, port number 67 is used by the (DHCP) server for receiving client-requests and port number 68 is used by the client for receiving (DHCP) server responses. BOOTP operates only on IPv4 networks.
Historically, BOOTP has also been used for Unix-like diskless workstations to obtain the network location of their boot image, in addition to the IP address assignment. Enterprises used it to roll out a pre-configured client (e.g., Windows) installation to newly installed PCs.
Initially requiring the use of a boot floppy disk to establish the initial network connection, manufacturers of network cards later embedded the protocol in the BIOS of the interface cards as well as system boards with on-board network adapters, thus allowing direct network booting.
History
BOOTP was first defined in September 1985 as a replacement for the Reverse Address Resolution Protocol (RARP), which had been published in June 1984. The primary motivation for replacing RARP with BOOTP was that RARP was a link layer protocol. This made implementation difficult on many server platforms and required that a server be present on each individual IP subnet. BOOTP introduced the innovation of relay agents, which forwarded BOOTP packets from the local network using standard IP routing, so that one central BOOTP server could serve hosts on many subnets.
An increasing set of BOOTP vendor information extensions was defined to supply BOOTP clients with relevant information about the network, such as the default gateway, the name server IP address, and the domain name.
With the advent of the Dynamic Host Configuration Protocol, the BOOTP vendor information extensions were incorporated as DHCP option fields, to allow DHCP servers to also serve BOOTP clients.
Operation
Case 1: Client and server on same network
When a BOOTP client is started, it has no IP address, so it broadcasts a message containing its MAC address onto the network. This message is called a “BOOTP request”, and it is picked up by the BOOTP server, which replies to the client with the following information that the client needs:
The client's IP address, subnet mask, and default ga |
https://en.wikipedia.org/wiki/C%20file%20input/output | The C programming language provides many standard library functions for file input and output. These functions make up the bulk of the C standard library header <stdio.h>. The functionality descends from a "portable I/O package" written by Mike Lesk at Bell Labs in the early 1970s, and officially became part of the Unix operating system in Version 7.
The I/O functionality of C is fairly low-level by modern standards; C abstracts all file operations into operations on streams of bytes, which may be "input streams" or "output streams". Unlike some earlier programming languages, C has no direct support for random-access data files; to read from a record in the middle of a file, the programmer must create a stream, seek to the middle of the file, and then read bytes in sequence from the stream.
The stream model of file I/O was popularized by Unix, which was developed concurrently with the C programming language itself. The vast majority of modern operating systems have inherited streams from Unix, and many languages in the C programming language family have inherited C's file I/O interface with few if any changes (for example, PHP).
Overview
This library uses what are called streams to operate with physical devices such as keyboards, printers, terminals or with any other type of files supported by the system. Streams are an abstraction to interact with these in a uniform way. All streams have similar properties independent of the individual characteristics of the physical media they are associated with.
Functions
Most of the C file input/output functions are defined in <stdio.h> (or in the C++ header <cstdio>, which contains the standard C functionality but in the std namespace).
Constants
Constants defined in the header include:
Variables
Variables defined in the header include:
Member types
Data types defined in the header include:
FILE – also known as a file handle, this is an opaque type containing the information about a file or text stream needed to perform input or output operations on it, including:
platform-specific identifier of the associated I/O device, such as a file descriptor
the buffer
stream orientation indicator (unset, narrow, or wide)
stream buffering state indicator (unbuffered, line buffered, fully buffered)
I/O mode indicator (input stream, output stream, or update stream)
binary/text mode indicator
end-of-file indicator
error indicator
the current stream position and multibyte conversion state (an object of type mbstate_t)
reentrant lock (required as of C11)
fpos_t – a non-array type capable of uniquely identifying the position of every byte in a file and every conversion state that can occur in all supported multibyte character encodings
size_t – an unsigned integer type which is the type of the result of the sizeof operator.
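Because a stream position may include multibyte conversion state, the portable way to save and restore a position is to treat it as an opaque value via fgetpos and fsetpos, rather than doing arithmetic on a long offset. A small illustrative sketch (the function name is an assumption):

```c
#include <stdio.h>

/* Save the current stream position, read n bytes twice from the same
   spot, and confirm both reads succeeded. Returns n on success, -1 on
   failure. fpos_t is opaque: it is only ever passed back to fsetpos. */
long read_twice(FILE *fp, char *a, char *b, size_t n)
{
    fpos_t pos;
    if (fgetpos(fp, &pos) != 0)    /* save position + conversion state */
        return -1;
    size_t r1 = fread(a, 1, n, fp);
    if (fsetpos(fp, &pos) != 0)    /* rewind to the saved position */
        return -1;
    size_t r2 = fread(b, 1, n, fp);
    return (r1 == n && r2 == n) ? (long)n : -1;
}
```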
Extensions
The POSIX standard defines several extensions to <stdio.h> in its Base Definitions, among which are a function that allocates memory, the fdopen and fileno functions that establish the link between FILE objects and file descriptors, and a group |
https://en.wikipedia.org/wiki/ThinkPad | ThinkPad is a line of business-oriented laptop computers and tablets, the early models of which were designed, developed and marketed by IBM, starting in 1992. In 2005 IBM sold its PC business, including laptops, to Lenovo. The Chinese manufacturer further developed the line and was still selling new models in 2023.
ThinkPads have a distinct black, boxy design, which originated in 1990 and is still used in some models. Most models also feature a red-colored trackpoint on the keyboard, which has become an iconic and distinctive design characteristic associated with the ThinkPad line.
The ThinkPad line was first developed at the IBM Yamato Facility in Japan, and the first ThinkPads were released in October 1992. It has seen significant success in the business market. ThinkPad laptops have been used in outer space and for many years were the only laptops certified for use on the International Space Station. ThinkPads have also for several years been one of the preferred laptops used by the United Nations.
History
The ThinkPad was developed to compete with Toshiba and Compaq, who had created the first two portable notebooks, with an emphasis on sales to the Harvard Business School. The task of creating a notebook was given to the Yamato Facility in Japan, headed by Arimasa Naitoh, a Japanese engineer and product designer who had joined IBM in the 1970s and is now known as the "Father of ThinkPad".
The name "ThinkPad" was a product of IBM's corporate history and culture. Thomas J. Watson Sr. first introduced "Think" as an IBM slogan in the 1920s. With every minicomputer and mainframe IBM installed (almost all were leased, not sold), a blue plastic sign was placed atop the operator's console, with the text "Think" printed on an aluminum plate.
For decades IBM had also distributed small notepads with the word "THINK" emblazoned on their cover to customers and employees. The name "ThinkPad" was suggested by IBM employee Denny Wainwright, who had one such notepad in his pocket. The name was opposed by the IBM corporate naming committee as all the names for IBM computers were numeric at that time, but "ThinkPad" was kept due to praise from journalists and the public.
Early models
In April 1992, IBM announced the first ThinkPad model, the 700, later renamed the 700T after the release of three newer models, the 300, (new) 700 and 700C in October 1992. The 700T was a tablet computer.
This machine was the first product produced under IBM's new "differentiated product personality" strategy, a collaboration between Richard Sapper and Tom Hardy, head of the corporate IBM Design Program. Development of the 700C also involved a close working relationship between Sapper and Kazuhiko Yamazaki, lead notebook designer at IBM's Yamato Design Center in Japan and liaison between Sapper and Yamato engineering.
This 1990–1992 "pre-Internet" collaboration between Italy and Japan was facilitated by a special Sony digital communications system that transmitted high-res images over te |
https://en.wikipedia.org/wiki/Pointing%20stick | A pointing stick (or trackpoint, also referred to generically as a nub or nipple) is a small analog stick used as a pointing device typically mounted centrally in a computer keyboard. Like other pointing devices such as mice, touchpads or trackballs, operating system software translates manipulation of the device into movements of the pointer on the computer screen. Unlike other pointing devices, it reacts to sustained force or strain rather than to gross movement, so it is called an "isometric" pointing device. IBM introduced it commercially in 1992 on its laptops under the name "TrackPoint", and patented it in 1997 (but the patent expired in 2017). It has been used for business laptops, such as Acer's TravelMate, Dell's Latitude, HP's EliteBook and Lenovo's ThinkPad.
The pointing stick senses applied force by using two pairs of resistive strain gauges. A pointing stick can be used by pushing with the fingers in the general direction the user wants the pointer to move. The velocity of the pointer depends on the applied force so increasing pressure causes faster movement. The relation between pressure and pointer speed can be adjusted, just as mouse speed is adjusted.
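A hypothetical force-to-velocity mapping of this kind might look like the following sketch; the dead zone, gain, and function name are illustrative assumptions, not taken from any real driver:

```c
/* Map the force read from the strain gauges to a pointer speed.
   A dead zone suppresses resting pressure and sensor noise; beyond
   it, harder pushes produce proportionally faster pointer movement.
   The gain parameter plays the role of an adjustable "mouse speed". */
double pointer_speed(double force, double dead_zone, double gain)
{
    if (force <= dead_zone)
        return 0.0;                     /* ignore light resting pressure */
    return gain * (force - dead_zone);  /* speed grows with applied force */
}
```

A real driver would apply such a curve independently to the two sensed axes and typically use a nonlinear acceleration profile, but the isometric principle is the same: sustained force, not movement, drives the pointer.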
On a QWERTY keyboard, the stick is typically embedded between the G, H and B keys, and the mouse buttons are placed just below the space bar. The mouse buttons can be operated right-handed or left-handed due to their placement below the keyboard along the centerline. This pointing device has also appeared next to screens on compact-sized laptops such as the Toshiba Libretto and Sony VAIO UX.
Variants
Pointing sticks typically have a replaceable rubber cap, called a nub, which can have a slightly rough "eraser head" texture and comes in several shapes.
The cap is red on ThinkPads, but is also found in other colors on other machines. It may be gray, pink, black or blue on some Dell models, blue on some HP/Compaq laptops, and green or gray on most Toshiba laptops produced before the 2000s.
Button configurations vary depending on vendor and laptop model. ThinkPads have a prominent middle mouse button, but some models have no physical buttons. Toshiba employs concentric arcs.
In the early 1990s, Zenith Data Systems shipped a number of laptop computers equipped with a device called "J-Mouse", which essentially used a special keyswitch under the J key to allow the J keycap to be used as a pointing stick.
In addition to appearing between the G, H and B keys on a QWERTY keyboard, these devices or similar can also appear on gaming devices as an alternative to a D-pad or analog stick. On certain Toshiba Libretto mini laptops, the pointing stick was located next to the display. IBM sold a mouse with a pointing stick in the location where a scroll wheel is common now.
Optical pointing sticks are also used on some Ultrabook tablet hybrids, such as the Sony Duo 11, ThinkPad Tablet and Samsung Ativ Q.
On the Gateway 2000 Liberty laptop the pointing stick is above the enter key on the right side of th |
https://en.wikipedia.org/wiki/Backtracking | Backtracking is a class of algorithms for finding solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
The classic textbook example of the use of backtracking is the eight queens puzzle, that asks for all arrangements of eight chess queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned.
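A minimal C sketch of this row-by-row approach, counting solutions rather than printing them (the helper names are illustrative):

```c
#include <stdlib.h>

/* col[i] holds the column of the queen placed in row i; the partial
   candidate is the first k rows. A placement is rejected as soon as
   it attacks an earlier queen (same column or same diagonal). */
static int safe(const int *col, int k, int c)
{
    for (int i = 0; i < k; i++)
        if (col[i] == c || abs(col[i] - c) == k - i)
            return 0;
    return 1;
}

/* Try every column for row k; extend viable partial candidates and
   abandon ("backtrack" from) the rest. Returns the solution count. */
static int place(int *col, int k, int n)
{
    if (k == n)
        return 1;                  /* all n rows filled: one solution */
    int count = 0;
    for (int c = 0; c < n; c++)
        if (safe(col, k, c)) {
            col[k] = c;
            count += place(col, k + 1, n);
        }
    return count;
}

int queens(int n)                  /* n up to 16 for this sketch */
{
    int col[16];
    return place(col, 0, n);
}
```

For the standard board, `queens(8)` yields the well-known 92 arrangements; the point of the pruning in `safe` is that entire subtrees of candidates are discarded with a single test.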
Backtracking can be applied only for problems which admit the concept of a "partial candidate solution" and a relatively quick test of whether it can possibly be completed to a valid solution. It is useless, for example, for locating a given value in an unordered table. When it is applicable, however, backtracking is often much faster than brute-force enumeration of all complete candidates, since it can eliminate many candidates with a single test.
Backtracking is an important tool for solving constraint satisfaction problems, such as crosswords, verbal arithmetic, Sudoku, and many other puzzles. It is often the most convenient technique for parsing, for the knapsack problem and other combinatorial optimization problems. It is also the basis of the so-called logic programming languages such as Icon, Planner and Prolog.
Backtracking depends on user-given "black box procedures" that define the problem to be solved, the nature of the partial candidates, and how they are extended into complete candidates. It is therefore a metaheuristic rather than a specific algorithm – although, unlike many other meta-heuristics, it is guaranteed to find all solutions to a finite problem in a bounded amount of time.
The term "backtrack" was coined by American mathematician D. H. Lehmer in the 1950s. The pioneer string-processing language SNOBOL (1962) may have been the first to provide a built-in general backtracking facility.
Description of the method
The backtracking algorithm enumerates a set of partial candidates that, in principle, could be completed in various ways to give all the possible solutions to the given problem. The completion is done incrementally, by a sequence of candidate extension steps.
Conceptually, the partial candidates are represented as the nodes of a tree structure, the potential search tree. Each partial candidate is the parent of the candidates that differ from it by a single extension step; the leaves of the tree are the partial candidates that cannot be extended any further.
The backtracking algorithm traverses this search tree recursively, from the root down, in depth-first order. At each node c, the algorithm checks whether c can be completed to a valid solution. If |
https://en.wikipedia.org/wiki/Serviceability | Serviceability may refer to:
Serviceability (structure)
Serviceability (computer)
Serviceability (banking) |
https://en.wikipedia.org/wiki/Programming%20Research%20Group | The Programming Research Group (PRG) was part of the Oxford University Computing Laboratory (OUCL) in Oxford, England, along with the Numerical Analysis Group, until OUCL became the Department of Computer Science in 2011.
The PRG was founded by Christopher Strachey (1916–1975) in 1965. It was originally located at 45 Banbury Road.
After Strachey's untimely death, C. A. R. Hoare, FRS, took over the leadership in 1977. The PRG ethos is summed up by the following quotation from Strachey, found and promulgated by Tony Hoare after he arrived at the PRG:
The PRG moved to 8–11 Keble Road in 1984. During the later 1980s and early 1990s, some members of the PRG were housed at 2 South Parks Road, including Joseph Goguen (who was at the PRG during 1988–1996). Tony Hoare retired in 1999 and the PRG was led by Samson Abramsky from 2000. The PRG continued until the renaming of the Oxford University Computing Laboratory to the Department of Computer Science on 1 June 2011, under the leadership of Bill Roscoe, a former member of the PRG.
The PRG was a centre of excellence in the field of formal methods, playing a leading role in the development of the Z notation (initiated by a visit of Jean-Raymond Abrial) and CSP (together with the associated Occam programming language). It won Queen's Awards with IBM and Inmos for work in this area.
References
External links
PRG website (Archive.org, 2010)
Educational institutions established in 1965
1965 establishments in England
2011 disestablishments in England
Educational institutions disestablished in 2011
Departments of the University of Oxford
Formal methods organizations
Computer science institutes in the United Kingdom
Oxford University Computing Laboratory |
https://en.wikipedia.org/wiki/Boston%20and%20Maine%20Railroad | The Boston and Maine Railroad was a U.S. Class I railroad in northern New England. Originally chartered in 1835, it became part of what was the Pan Am Railways network in 1983 (most of which was purchased by CSX in 2022).
At the end of 1970, B&M operated on of track, not including Springfield Terminal. That year it reported 2,744 million ton-miles of revenue freight and 92 million passenger-miles.
History
The Andover and Wilmington Railroad was incorporated March 15, 1833, to build a branch from the Boston and Lowell Railroad at Wilmington, Massachusetts, north to Andover, Massachusetts. The line opened to Andover on August 8, 1836. The name was changed to the Andover and Haverhill Railroad on April 18, 1837, reflecting plans to build further to Haverhill, Massachusetts (opened later that year), and yet further to Portland, Maine, with renaming to the Boston and Portland Railroad on April 3, 1839, opening to the New Hampshire state line in 1840.
The Boston and Maine Railroad was chartered in New Hampshire on June 27, 1835, and the Maine, New Hampshire and Massachusetts Railroad was incorporated March 12, 1839, in Maine, both companies continuing the proposed line to South Berwick, Maine. The railroad opened in 1840 to Exeter, New Hampshire, and on January 1, 1842, the two companies merged with the Boston and Portland to form a new Boston and Maine Railroad.
On February 23, 1843, the B&M opened to Agamenticus, on the line of the Portland, Saco and Portsmouth Railroad in South Berwick. On January 28 of that year, the B&M and Eastern Railroad came to an agreement to both lease the PS&P as a joint line to Portland.
The Boston and Maine Railroad Extension was incorporated on March 16, 1844, due to a dispute with the Boston and Lowell Railroad over trackage rights rates between Wilmington and Boston. That company was merged into the main B&M on March 19, 1845, and opened on July 1, leading to the abandonment of the old connection to the B&L (later reused by the B&L for its Wildcat Branch). In 1848, another original section was abandoned, as a new alignment was built from Wilmington north to North Andover, Massachusetts in order to better serve Lawrence, Massachusetts.
A new alignment to Portland opened in 1873, splitting from the old route at South Berwick, Maine. The old route was later abandoned. This completed the B&M "main line", which would become known as the Western Route to distinguish it from the Eastern Route (described below), which also connected Boston and Portland.
Acquisitions
As the B&M grew, it also gained control of former rivals, including:
Eastern
On March 28, 1883, the boards of directors of B&M and the Eastern Railroad Company voted to ratify the proposition that Eastern Railroad would be leased by B&M. However, a disagreement about the wording of the contract delayed its execution until December 2, 1884. On May 9, 1890, B&M purchased Eastern Railroad outright. This provided a second route to Maine, ending competitio |
https://en.wikipedia.org/wiki/BitTorrent | BitTorrent, also referred to as simply torrent, is a communication protocol for peer-to-peer file sharing (P2P), which enables users to distribute data and electronic files over the Internet in a decentralized manner. The protocol is developed and maintained by Rainberry, Inc., and was first released in 2001.
To send or receive files, users use a BitTorrent client on their Internet-connected computer; clients are available for a variety of computing platforms and operating systems, including an official client. BitTorrent trackers provide a list of files available for transfer and allow the client to find peer users, known as "seeds", who may transfer the files. BitTorrent downloading is considered to be faster than HTTP ("direct downloading") and FTP because there is no central server that could become a bandwidth bottleneck.
BitTorrent is one of the most common protocols for transferring large files, such as digital video files containing TV shows and video clips, or digital audio files containing songs. In 2019, BitTorrent was a dominant file sharing protocol and generated a substantial amount of Internet traffic, with 2.46% of downstream, and 27.58% of upstream traffic.
History
Programmer Bram Cohen, a University at Buffalo alumnus, designed the protocol in April 2001, and released the first available version on 2 July 2001. Cohen and Ashwin Navin founded BitTorrent, Inc. (later renamed Rainberry, Inc.) to further develop the technology in 2004.
The first release of the BitTorrent client had no search engine and no peer exchange. Up until 2005, the only way to share files was by creating a small text file called a "torrent", which the uploader would post to a torrent index site. The first uploader acted as a seed, and downloaders would initially connect as peers. Those who wished to download the file would download the torrent, which their client would use to connect to a tracker that had a list of the IP addresses of other seeds and peers in the swarm. Once a peer completed a download of the complete file, it could in turn function as a seed. These torrent files contain metadata about the files to be shared and the trackers which keep track of the other seeds and peers.
In 2005, first Vuze and then the BitTorrent client introduced distributed tracking using distributed hash tables which allowed clients to exchange data on swarms directly without the need for a torrent file.
In 2006, peer exchange functionality was added allowing clients to add peers based on the data found on connected nodes.
In 2017, BitTorrent, Inc. released the BitTorrent v2 protocol specification. BitTorrent v2 is intended to work seamlessly with previous versions of the BitTorrent protocol. The main reason for the update was that the old cryptographic hash function, SHA-1, is no longer considered safe from malicious attacks by the developers, and as such, v2 uses SHA-256. To ensure backwards compatibility, the v2 .torrent file format supports a hybrid mode where the torrents are hashed through |
https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp%20algorithm | In computer science, the Edmonds–Karp algorithm is an implementation of the Ford–Fulkerson method for computing the maximum flow in a flow network in O(|V||E|²) time. The algorithm was first published by Yefim Dinitz (whose name is also transliterated "E. A. Dinic", notably as author of his early papers) in 1970 and independently published by Jack Edmonds and Richard Karp in 1972. Dinic's algorithm includes additional techniques that reduce the running time to O(|V|²|E|).
Algorithm
The algorithm is identical to the Ford–Fulkerson algorithm, except that the search order when finding the augmenting path is defined. The path found must be a shortest path that has available capacity. This can be found by a breadth-first search, where we apply a weight of 1 to each edge. The running time of O(|V||E|²) is found by showing that each augmenting path can be found in O(|E|) time, that every time at least one of the |E| edges becomes saturated (an edge which has the maximum possible flow), the distance from the saturated edge to the source along the augmenting path must be longer than the last time it was saturated, and that the length is at most |V|. Another property of this algorithm is that the length of the shortest augmenting path increases monotonically. There is an accessible proof in Introduction to Algorithms.
Pseudocode
algorithm EdmondsKarp is
input:
graph (graph[v] should be the list of edges coming out of vertex v in the
original graph and their corresponding constructed reverse edges
which are used for push-back flow.
Each edge should have a capacity, flow, source and sink as parameters,
as well as a pointer to the reverse edge.)
s (Source vertex)
t (Sink vertex)
output:
flow (Value of maximum flow)
flow := 0 (Initialize flow to zero)
repeat
(Run a breadth-first search (bfs) to find the shortest s-t path.
We use 'pred' to store the edge taken to get to each vertex,
so we can recover the path afterwards)
q := queue()
q.push(s)
pred := array(graph.length)
while not empty(q) and pred[t] = null
cur := q.pop()
for Edge e in graph[cur] do
if pred[e.t] = null and e.t ≠ s and e.cap > e.flow then
pred[e.t] := e
q.push(e.t)
if not (pred[t] = null) then
(We found an augmenting path.
See how much flow we can send)
df := ∞
for (e := pred[t]; e ≠ null; e := pred[e.s]) do
df := min(df, e.cap - e.flow)
(And update edges by that amount)
for (e := pred[t]; e ≠ null; e := pred[e.s]) do
e.flow := e.flow + df
e.rev.flow := e.rev.flow - df
flow := flow + df
until pred[t] = null (i.e., until no augmenting path was found)
return flow
Exampl |
https://en.wikipedia.org/wiki/D.%20R.%20Fulkerson | Delbert Ray Fulkerson (August 14, 1924 – January 10, 1976) was an American mathematician who co-developed the Ford–Fulkerson algorithm, one of the most well-known algorithms to solve the maximum flow problem in networks.
Early life and education
D. R. Fulkerson was born in Tamms, Illinois, the third of six children of Elbert and Emma Fulkerson. Fulkerson became an undergraduate at Southern Illinois University. His academic career was interrupted by military service during World War II. Having returned to complete his degree after the war, he went on to do a Ph.D. in mathematics at the University of Wisconsin–Madison under the supervision of Cyrus MacDuffee, who was a student of L. E. Dickson. Fulkerson received his Ph.D. in 1951.
Career
After graduation, Fulkerson joined the mathematics department at the RAND Corporation. In 1956, he and L. R. Ford Jr. described the Ford–Fulkerson algorithm. In 1962 they produced a book-length description of their method.
In 1971 he moved to Cornell University as the Maxwell Upson Professor of Engineering. He was diagnosed with Crohn's disease and was limited in his teaching. In despair, he committed suicide in 1976.
Fulkerson was the supervisor of Jon Folkman at RAND and Tatsuo Oyama at GRIPS. After Folkman committed suicide in 1969, Fulkerson blamed himself for failing to notice Folkman's suicidal behaviors.
In 1979, the renowned Fulkerson Prize was established; it is now awarded every three years, jointly by the Mathematical Programming Society and the American Mathematical Society, for outstanding papers in discrete mathematics.
See also
Out-of-kilter algorithm
List of people diagnosed with Crohn's disease
References
External links
Delbert Ray Fulkerson prize
Fulkerson biography at Cornell
Biography of D. R. Fulkerson from the Institute for Operations Research and the Management Sciences
20th-century American mathematicians
Combinatorialists
1924 births
1976 suicides
University of Wisconsin–Madison College of Letters and Science alumni
RAND Corporation people
People from Alexander County, Illinois
Mathematicians from Illinois
1976 deaths
People with Crohn's disease
Suicides in New York (state) |
https://en.wikipedia.org/wiki/LotusScript | LotusScript is an object oriented programming language used by Lotus Notes (since version 4.0) and other IBM Lotus Software products.
LotusScript is similar to Visual Basic. Developers familiar with one can easily understand the syntax and structure of code in the other. The major differences between the two are in their respective integrated development environments and in the product-specific object classes each provides. VB includes a richer set of classes for UI manipulation, whereas LotusScript includes a richer set of application-specific classes for Lotus Notes, Lotus Word Pro and Lotus 1-2-3. In the case of Lotus Notes, there are classes to work with Notes databases, documents (records) in those databases, etc. These classes can also be used as OLE Automation objects outside the Lotus Notes environment, for example from Visual Basic.
LotusScript also allows the definition of user-defined types and classes, although it is not possible to inherit from the product-specific classes. LotusScript programs can access Microsoft Office documents through OLE Automation, using the MS Office libraries.
See also
Visual Basic for Applications
Microsoft Power Fx
References
External links
IBM Notes Domino Overview
Lotus Domino Designer documentation
IBM Redbook - "LotusScript for VisualBasic Programmers"
Integrating IBM Lotus Notes with Microsoft Office using LotusScript
BASIC programming language family
Lotus Software software
Scripting languages |
https://en.wikipedia.org/wiki/Software%20patents%20under%20the%20European%20Patent%20Convention | The patentability of software, computer programs and computer-implemented inventions under the European Patent Convention (EPC) is the extent to which subject matter in these fields is patentable under the Convention on the Grant of European Patents of October 5, 1973. The subject also includes the question of whether European patents granted by the European Patent Office (EPO) in these fields (sometimes called "software patents") are regarded as valid by national courts.
Under the EPC, and in particular its Article 52, "programs for computers" are not regarded as inventions for the purpose of granting European patents, but this exclusion from patentability only applies to the extent to which a European patent application or European patent relates to a computer program as such. As a result of this partial exclusion, and although the EPO subjects patent applications in this field to much stricter scrutiny than its American counterpart, it does not follow that all inventions including some software are de jure not patentable.
Article 52 of the European Patent Convention
The European Patent Convention (EPC), Article 52, paragraph 2, excludes from patentability, in particular
Paragraph 3 then says:
The words "as such" have caused patent applicants, attorneys, examiners, and judges a great deal of difficulty since the EPC came into force in 1978. The Convention, as with all international conventions, should be construed using a purposive approach. However, the purpose behind the words and the exclusions themselves is far from clear.
One interpretation, which is followed by the Boards of Appeal of the EPO, is that an invention is patentable if it provides a new and non-obvious "technical" solution to a technical problem. The problem, and the solution, may be entirely resident within a computer such as a way of making a computer run faster or more efficiently in a novel and inventive way. Alternatively, the problem may be how to make the computer easier to use, such as in T 928/03.
Patentability under European Patent Office case law
Like the other exclusions in paragraph 2, computer programs are open to patenting to the extent that they provide a technical contribution to the prior art. In the case of computer programs and according to the case law of the Boards of Appeal, a technical contribution typically means a further technical effect that goes beyond the normal physical interaction between the program and the computer.
Though many argue that there is an inconsistency in how the EPO now applies Article 52, the practice of the EPO is fairly consistent regarding the treatment of the different elements of Article 52(2). A mathematical method is not patentable, but an electrical filter designed according to this method would not be excluded from patentability by Article 52(2) and (3).
According to the jurisprudence of the Boards of Appeal of the EPO, a technical effect provided by a computer program can be, for e |
https://en.wikipedia.org/wiki/David%20Chaum | David Lee Chaum (born 1955) is an American computer scientist, cryptographer, and inventor. He is known as a pioneer in cryptography and privacy-preserving technologies, and widely recognized as the inventor of digital cash. His 1982 dissertation "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups" is the first known proposal for a blockchain protocol. Complete with the code to implement the protocol, Chaum's dissertation proposed all but one element of the blockchain later detailed in the Bitcoin whitepaper. He has been referred to as "the father of online anonymity", and "the godfather of cryptocurrency".
He is also known for developing ecash, an electronic cash application that aims to preserve a user's anonymity, and inventing many cryptographic protocols like the blind signature, mix networks and the Dining cryptographers protocol. In 1995 his company DigiCash created the first digital currency with eCash. His 1981 paper, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms", laid the groundwork for the field of anonymous communications research.
Life and career
Chaum is Jewish and was born to a Jewish family in Los Angeles. He gained a doctorate in computer science from the University of California, Berkeley in 1982. Also that year, he founded the International Association for Cryptologic Research (IACR), which currently organizes academic conferences in cryptography research. Subsequently, he taught at the New York University Graduate School of Business Administration and at the University of California, Santa Barbara (UCSB). He also formed a cryptography research group at CWI, the Dutch National Research Institute for Mathematics and Computer Science in Amsterdam. He founded DigiCash, an electronic cash company, in 1990.
Chaum received the Information Technology European Award for 1995. In 2004, he was named an IACR Fellow. In 2010, at the RSA Conference, he was honored with the RSA Award for Excellence in Mathematics. In 2019, he was awarded the honorary title of Dijkstra Fellow by CWI. He received an honorary doctorate from the University of Lugano in 2021.
Chaum resides in Sherman Oaks, Los Angeles.
Notable research contributions
Vault systems
As credited in Alan Sherman's "On the Origins and Variations of Blockchain Technologies", Chaum's 1982 Berkeley dissertation proposed every element of the blockchain found in Bitcoin except proof of work. The proposed vault system lays out a plan for achieving consensus state between nodes, chaining the history of consensus in blocks, and immutably time-stamping the chained data. The paper also lays out the specific code to implement such a protocol.
Digital cash
Chaum is credited as the inventor of secure digital cash for his 1983 paper, which also introduced the cryptographic primitive of a blind signature. These ideas have been described as the technical roots of the vision of the Cypherpunk movement that began in the late |
https://en.wikipedia.org/wiki/Table%20of%20handgun%20and%20rifle%20cartridges | This is a table of selected pistol/submachine gun and rifle/machine gun cartridges by common name. Data values are the highest found for the cartridge, and might not occur in the same load (e.g. the highest muzzle energy might not be in the same load as the highest muzzle velocity, since the bullet weights can differ between loads).
Legend
Factory loadings: number of manufacturers producing complete cartridges - e.g. Norma, RWS, Hornady, Winchester, Federal, Remington, Sellier & Bellot, Prvi Partizan. May be none for obsolete and wildcat cartridges.
H/R: Handgun (H) or rifle (R) - dominant usage of the cartridge (although several dual-purpose cartridges exist)
Size: Metric size - may not be official
MV: Muzzle velocity, in feet per second
ME: Muzzle energy, in foot-pounds
P: Momentum, in pounds-force seconds (lbf·s). A guide to the recoil from the cartridge, and an indicator of bullet penetration potential. The .30-06 Springfield (at 2.064 lbf·s) is considered the upper limit of tolerable recoil for inexperienced rifle shooters.
Chg: Propellant charge, in grains
Dia: Bullet diameter, in inches
BC: Ballistic coefficient, G1 model
L: Case length (mm)
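The momentum column follows directly from bullet weight and muzzle velocity. A sketch of the conversion (the 165 gr / 2,800 ft/s load below is an illustrative .30-06-class example, not a value taken from the table):

```python
GRAINS_PER_POUND = 7000.0
G0_FT_PER_S2 = 32.174  # standard gravity; converts pounds-mass to slugs

def momentum_lbf_s(bullet_grains, muzzle_velocity_fps):
    """Momentum p = m * v, with bullet mass converted from grains to slugs
    so that the result comes out in pound-force seconds (lbf*s)."""
    mass_slugs = (bullet_grains / GRAINS_PER_POUND) / G0_FT_PER_S2
    return mass_slugs * muzzle_velocity_fps

# Illustrative .30-06-class load: 165 gr bullet at 2,800 ft/s.
print(round(momentum_lbf_s(165, 2800), 3))  # → 2.051
```

The result lands near the 2.064 lbf·s figure cited in the legend, which is why momentum tracks recoil: it depends linearly on both bullet weight and velocity, unlike muzzle energy, which grows with the square of velocity.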
See also
Firearm
History of the firearm
Physics of firearms
Terminal ballistics
External ballistics
Internal ballistics
Stopping power
Hydrostatic shock
Point-blank range
References
External links
Terminal Ballistics Research: Detailed history and terminal performance discussion for numerous hunting cartridges, organized by bullet diameter.
Cartridges pistol and rifle
handgun and rifle |
https://en.wikipedia.org/wiki/Extended%20Display%20Identification%20Data | Extended Display Identification Data (EDID) and Enhanced EDID (E-EDID) are metadata formats for display devices to describe their capabilities to a video source (e.g., graphics card or set-top box). The data format is defined by a standard published by the Video Electronics Standards Association (VESA).
The EDID data structure includes manufacturer name and serial number, product type, phosphor or filter type (as chromaticity data), timings supported by the display, display size, luminance data and (for digital displays only) pixel mapping data.
DisplayID is a VESA standard targeted to replace EDID and E-EDID extensions with a uniform format suited for both PC monitor and consumer electronics devices.
Background
EDID structure (base block) versions range from v1.0 to v1.4; all these define upwards-compatible 128-byte structures. Version 2.0 defined a new 256-byte structure but it has been deprecated and replaced by E-EDID which supports multiple extension blocks. HDMI versions 1.0–1.3c use E-EDID v1.3.
Before Display Data Channel (DDC) and EDID were defined, there was no standard way for a graphics card to know what kind of display device it was connected to. Some VGA connectors in personal computers provided a basic form of identification by connecting one, two or three pins to ground, but this coding was not standardized.
This problem is solved by EDID and DDC, which enable the display to send information to the graphics card it is connected to. The transmission of EDID information usually uses the Display Data Channel protocol, specifically DDC2B, which is based on the I²C bus (DDC1 used a different serial format which never gained popularity). The data is transmitted via the cable connecting the display and the graphics card; VGA, DVI, DisplayPort and HDMI are supported.
The EDID is often stored in the monitor in a firmware chip called a serial EEPROM (electrically erasable programmable read-only memory) and is accessible via the I²C bus at address . The EDID PROM can often be read by the host PC even if the display itself is turned off.
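Two invariants of the 128-byte base block make a quick sanity check of raw EDID bytes possible: the block opens with a fixed 8-byte header, and all 128 bytes sum to zero modulo 256 (the final byte is a checksum). A minimal sketch; the synthetic block and the "ABC" manufacturer ID are made-up illustrations:

```python
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def check_base_block(edid: bytes) -> bool:
    """An EDID 1.x base block is 128 bytes, starts with the fixed
    8-byte header, and sums to 0 modulo 256."""
    return (len(edid) == 128
            and edid[:8] == EDID_HEADER
            and sum(edid) % 256 == 0)

def manufacturer_id(edid: bytes) -> str:
    """Bytes 8-9 pack three 5-bit letters (1 = 'A'), big-endian."""
    word = (edid[8] << 8) | edid[9]
    return "".join(chr(((word >> shift) & 0x1F) + ord("A") - 1)
                   for shift in (10, 5, 0))

# Build a synthetic block for the hypothetical ID "ABC", then fix its checksum.
blk = bytearray(128)
blk[:8] = EDID_HEADER
blk[8:10] = ((1 << 10) | (2 << 5) | 3).to_bytes(2, "big")  # A=1, B=2, C=3
blk[127] = (-sum(blk)) % 256
assert check_base_block(bytes(blk))
assert manufacturer_id(bytes(blk)) == "ABC"
```

Tools like read-edid, mentioned below, perform these same checks before decoding the timing descriptors.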
Many software packages can read and display the EDID information, such as read-edid for Linux and DOS, PowerStrip for Microsoft Windows and the X.Org Server for Linux and BSD unix. Mac OS X natively reads EDID information and programs such as SwitchResX or DisplayConfigX can display the information as well as use it to define custom resolutions.
E-EDID was introduced at the same time as E-DDC; it supports multiple extension blocks and deprecates the EDID version 2.0 structure (which can be incorporated into E-EDID as an optional extension block). Data fields for preferred timing, range limits, and monitor name are required in E-EDID. E-EDID also supports dual GTF timings and aspect ratio change.
With the use of extensions, an E-EDID structure can be lengthened to up to 32 KB.
EDID Extensions assigned by VESA
Timing Extension ()
Additional Timing Data Block (CTA EDID Timing Extension) ()
Video Timing Block Exte |
https://en.wikipedia.org/wiki/Knowledge%20base | In computer science, a knowledge base (KB) is a set of sentences, each sentence given in a knowledge representation language, with interfaces to tell new sentences and to ask questions about what is known, where either of these interfaces might use inference. It is a technology used to store complex structured data used by a computer system. The initial use of the term was in connection with expert systems, which were the first knowledge-based systems.
Original usage of the term
The original use of the term knowledge base was to describe one of the two sub-systems of an expert system. A knowledge-based system consists of a knowledge base, representing facts about the world, and an inference engine, providing ways of reasoning about those facts to deduce new facts or highlight inconsistencies.
Properties
The term "knowledge-base" was coined to distinguish this form of knowledge store from the more common and widely used term database. During the 1970s, virtually all large management information systems stored their data in some type of hierarchical or relational database. At this point in the history of information technology, the distinction between a database and a knowledge-base was clear and unambiguous.
A database had the following properties:
Flat data: Data was usually represented in a tabular format with strings or numbers in each field.
Multiple users: A conventional database needed to support more than one user or system logged into the same data at the same time.
Transactions: An essential requirement for a database was to maintain integrity and consistency among data accessed by concurrent users. These are the so-called ACID properties: Atomicity, Consistency, Isolation, and Durability.
Large, long-lived data: A corporate database needed to support not just thousands but hundreds of thousands or more rows of data. Such a database usually needed to persist past the specific uses of any individual program; it needed to store data for years and decades rather than for the life of a program.
The first knowledge-based systems had data needs that were the opposite of these database requirements. An expert system requires structured data: not just tables with numbers and strings, but pointers to other objects that in turn have additional pointers. The ideal representation for a knowledge base is an object model (often called an ontology in artificial intelligence literature) with classes, subclasses and instances.
Early expert systems also had little need for multiple users or the complexity that comes with requiring transactional properties on data. The data for the early expert systems was used to arrive at a specific answer, such as a medical diagnosis, the design of a molecule, or a response to an emergency. Once the solution to the problem was known, there was not a critical demand to store large amounts of data back to a permanent memory store. A more precise statement would be that given the technologies available, researchers compromised and did without the |
https://en.wikipedia.org/wiki/Extract%2C%20transform%2C%20load | In computing, extract, transform, load (ETL) is a three-phase process where data is extracted, transformed (cleaned, sanitized, scrubbed) and loaded into an output data container. The data can be collated from one or more sources and it can also be output to one or more destinations. ETL processing is typically executed using software applications but it can also be done manually by system operators. ETL software typically automates the entire process and can be run manually or on reoccurring schedules either as single jobs or aggregated into a batch of jobs.
A properly designed ETL system extracts data from source systems and enforces data type and data validity standards and ensures it conforms structurally to the requirements of the output. Some ETL systems can also deliver data in a presentation-ready format so that application developers can build applications and end users can make decisions.
The ETL process is often used in data warehousing. ETL systems commonly integrate data from multiple applications (systems), typically developed and supported by different vendors or hosted on separate computer hardware. The separate systems containing the original data are frequently managed and operated by different stakeholders. For example, a cost accounting system may combine data from payroll, sales, and purchasing.
Extract
Data extraction involves extracting data from homogeneous or heterogeneous sources; data transformation processes data by data cleaning and transforming it into a proper storage format/structure for the purposes of querying and analysis; finally, data loading describes the insertion of data into the final target database such as an operational data store, a data mart, data lake or a data warehouse.
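The three phases can be sketched end-to-end in a few lines. The CSV source, the order schema, and the in-memory SQLite target below are all hypothetical stand-ins for whatever source and target systems a real pipeline would use:

```python
import csv
import io
import sqlite3

# Hypothetical source: order records as CSV text from some source system.
RAW = "order_id,amount\n1, 19.99 \n2,not-a-number\n3,5.00\n"

def extract(text):
    """Extract: pull rows out of the source format."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: clean values and reject rows that fail validation."""
    good, rejected = [], []
    for row in rows:
        try:
            good.append((int(row["order_id"]), float(row["amount"].strip())))
        except ValueError:
            rejected.append(row)   # rejected in part, per the validation step
    return good, rejected

def load(rows, conn):
    """Load: insert the conformed rows into the target store."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
good, rejected = transform(extract(RAW))
load(good, conn)
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 2
assert len(rejected) == 1   # the 'not-a-number' row failed validation
```

Keeping the three phases as separate functions mirrors the separation described above: each phase can be swapped (a different source format, a different target database) without touching the others.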
ETL processing involves extracting the data from the source system(s). In many cases, this represents the most important aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes. Most data-warehousing projects combine data from different source systems. Each separate system may also use a different data organization and/or format. Common data-source formats include relational databases, flat-file databases, XML, and JSON, but may also include non-relational database structures such as IBM Information Management System or other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even formats fetched from outside sources by means such as a web crawler or data scraping. The streaming of the extracted data source and loading on-the-fly to the destination database is another way of performing ETL when no intermediate data storage is required.
An intrinsic part of the extraction involves data validation to confirm whether the data pulled from the sources has the correct/expected values in a given domain (such as a pattern/default or list of values). If the data fails the validation rules, it is rejected entirely or in part. The reje |
https://en.wikipedia.org/wiki/Jeeves%20and%20Wooster | Jeeves and Wooster is a British comedy-drama television series adapted by Clive Exton from P. G. Wodehouse's "Jeeves" stories. It aired on the ITV network from 22 April 1990 to 20 June 1993, with the last series nominated for a British Academy Television Award for Best Drama Series. Set in the UK and the US in an unspecified period between the late 1920s and the 1930s, the series starred Hugh Laurie as Bertie Wooster, an affable young gentleman and member of the idle rich, and Stephen Fry as Jeeves, his highly intelligent and competent valet. Bertie and his friends, who are mainly members of the Drones Club, are extricated from all manner of societal misadventures by the indispensable Jeeves.
When Fry and Laurie began the series, they were already a popular comedic double act for their regular appearances on Channel 4's Saturday Live and their own show A Bit of Fry & Laurie (BBC, 1987–95).
In the television documentary Fry and Laurie Reunited (2010), the actors, reminiscing about their involvement in the series, revealed that they were initially reluctant to play the parts of Jeeves and Wooster, but eventually decided to do so because the series was going to be made with or without them, and they felt no one else would do the parts justice.
The series was a collaboration between Brian Eastman of Picture Partnership Productions and Granada Television.
Theme and opening credits
The theme (called "Jeeves and Wooster") is an original piece of music in the jazz/swing style written by composer Anne Dudley for the programme. Dudley uses variations of the theme as a basis for all of the episodes' scores and was nominated for a British Academy Television Award for her work on the third series.
Characters
Many of the programme's supporting roles – including significant characters such as Aunt Agatha, Madeline Bassett and Gussie Fink-Nottle – were played by more than one actor. One prominent character, Aunt Dahlia, was played by a different actress in each of the four series. Francesca Folan played two very different characters: Madeline Bassett in series one and Lady Florence Craye in series four. The character of Stiffy Byng was played by Charlotte Attenborough in series two and by Amanda Harris in series three and then by Attenborough again in series four. Richard Braine, who took over the role of Gussie Fink-Nottle in series three and four, also appeared as the conniving Rupert Steggles in series one.
Episodes
Reception
The third series of Jeeves and Wooster won a British Academy Television Award for Best Design for Eileen Diss. The final series won a British Academy Television Award for Best Graphics for Derek W. Hayes and was nominated for a British Academy Television Award for Best Drama Series; it also earned a British Academy Television Award for Best Original Television Music for Anne Dudley and a British Academy Television Award for Best Costume Design for Dany Everett.
In retrospect, Michael Brooke of BFI Screenonline called screen |
https://en.wikipedia.org/wiki/F%20Sharp%20%28programming%20language%29 | F# (pronounced F sharp) is a functional-first, general-purpose, strongly typed, multi-paradigm programming language that encompasses functional, imperative, and object-oriented programming methods. It is most often used as a cross-platform Common Language Infrastructure (CLI) language on .NET, but can also generate JavaScript and graphics processing unit (GPU) code.
F# is developed by the F# Software Foundation, Microsoft and open contributors. An open source, cross-platform compiler for F# is available from the F# Software Foundation. F# is a fully supported language in Visual Studio and JetBrains Rider. Plug-ins supporting F# exist for many widely used editors including Visual Studio Code, Vim, and Emacs.
F# is a member of the ML language family and originated as a .NET Framework implementation of a core of the programming language OCaml. It has also been influenced by C#,
Python, Haskell, Scala and Erlang.
History
Versions
Language evolution
F# uses an open development and engineering process.
The language evolution process is managed by Don Syme from Microsoft Research as the benevolent dictator for life (BDFL) for the language design, together with the F# Software Foundation.
Earlier versions of the F# language were designed by Microsoft and Microsoft Research using a closed development process.
F# was first included in Visual Studio in the 2010 edition, at the same level as Visual Basic and C# (albeit as an option), and has remained in subsequent editions, thus making the language widely available and well-supported.
F# originates from Microsoft Research, Cambridge, UK. The language was originally designed and implemented by Don Syme, according to whom, on the F# team, they say the F is for "Fun".
Andrew Kennedy contributed to the design of units of measure. The Visual F# Tools for Visual Studio are developed by Microsoft. The F# Software Foundation developed the F# open-source compiler and tools, incorporating the open-source compiler implementation provided by the Microsoft Visual F# Tools team.
Language overview
Functional programming
While supporting object-oriented features available in C#, F# is a strongly typed functional-first language with a large number of capabilities that are normally found only in functional programming languages. Together, these features allow F# programs to be written in a completely functional style and also allow functional and object-oriented styles to be mixed.
Examples of functional features are:
Everything is an expression
Type inference (using Hindley–Milner type inference)
Functions as first-class citizens
Anonymous functions with capturing semantics (i.e., closures)
Immutable variables and objects
Lazy evaluation support
Higher-order functions
Nested functions
Currying
Pattern matching
Algebraic data types
Tuples
List comprehension
Monad pattern support (called computation expressions)
Tail call optimisation
F# is an expression-based language using eager evaluation |
https://en.wikipedia.org/wiki/Lawrence%20Klein | Lawrence Robert Klein (September 14, 1920 – October 20, 2013) was an American economist. For his work in creating computer models to forecast economic trends in the field of econometrics in the Department of Economics at the University of Pennsylvania, he was awarded the Nobel Memorial Prize in Economic Sciences in 1980 specifically "for the creation of econometric models and their application to the analysis of economic fluctuations and economic policies." Due to his efforts, such models have become widespread among economists. Harvard University professor Martin Feldstein told the Wall Street Journal that Klein "was the first to create the statistical models that embodied Keynesian economics," tools still used by the Federal Reserve Bank and other central banks.
Life and career
Klein was born in Omaha, Nebraska, the son of Blanche (née Monheit) and Leo Byron Klein. He graduated from Los Angeles City College, where he learned calculus, and from the University of California, Berkeley, where he began his computer modeling and earned a BA in Economics in 1942. He earned his PhD in Economics at the Massachusetts Institute of Technology (MIT) in 1944, where he was Paul Samuelson's first doctoral student.
Early model-building
Klein then moved to the Cowles Commission for Research in Economics, which was then at the University of Chicago, now the Cowles Foundation. There he built a model of the United States economy to forecast the development of business fluctuations and to study the effects of government economic policy. After World War II Klein used his model to correctly predict, against the prevailing expectation, that there would be an economic upturn rather than a depression due to increasing consumer demand from returning servicemen. Similarly, he correctly predicted a mild recession at the end of the Korean War.
Klein briefly joined the Communist Party during the 1940s, which led to trouble years later.
At the University of Michigan, Klein developed enhanced macroeconomic models, in particular the famous Klein–Goldberger model with Arthur Goldberger, which was based on foundations laid by Jan Tinbergen of the Netherlands, later winner of the first economics prize in 1969. Klein differed from Tinbergen in using an alternative economic theory and a different statistical technique.
McCarthyism and move to England
In 1954, Klein's brief membership in the Communist Party was made public and he was denied tenure at the University of Michigan, in the wake of the McCarthy era. Klein moved to the University of Oxford, and developed an economic model of the United Kingdom known as the Oxford model with Sir James Ball. Additionally, at the Institute of Statistics Klein assisted with the creation of the British Savings Surveys, based upon the Michigan Surveys.
Return to the U.S.
In 1958 Klein returned to the U.S. to join the Department of Economics at the University of Pennsylvania. In 1959 he was awarded the John Bates Clark Medal, |
https://en.wikipedia.org/wiki/Shar | In the Unix operating system, shar (an abbreviation of shell archive) is an archive format created with the Unix shar utility. A shar file is a type of self-extracting archive, because it is a valid shell script, and executing it will recreate the files. To extract the files, only the standard Unix Bourne shell sh is usually required.
Note that the shar command is not specified by the Single Unix Specification, so it is not formally a component of Unix, but a legacy utility.
Details
While the shar format has the advantage of being plain text, it poses a risk due to being executable; for this reason the older and more general tar file format is usually preferred even for transferring text files. GNU provides its own version of shar in the GNU Sharutils collection.
unshar programs have been written for other operating systems but are not always reliable; shar files are shell scripts and can theoretically do anything that a shell script can do (including using incompatible features of enhanced or workalike shells), limiting their utility outside the Unix world.
The drawback of self-extracting shell scripts (of any kind, not just shar) is that they may rely on a particular implementation of programs; shell archives created with older versions of makeself, such as the original Unreal Tournament for Linux installer, fail to run on bash 3.x due to a change in how missing arguments to the trap built-in command are handled.
History and variants
James Gosling is credited with writing the first version of the shar utility in 1982, and also wrote an early example (allegedly 1978-79) of the concept in the form of this simple shell script:
# shar -- Shell archiver
AR=$1
shift
for i do
        echo a - $i
        echo "echo x - $i" >>$AR
        echo "cat >$i <<'!Funky!Stuff!'" >>$AR
        cat $i >>$AR
        echo "!Funky!Stuff!" >>$AR
done
The following variants of shar are known:
shar 1.x (1982) by Gosling. Public domain shell script.
Current FreeBSD shar. 3-clause BSD license, shell script. Adds md5sum.
shar2 or xshar (1988) by William Davidsen. Public domain, C program.
shar3 (1989) by Warren Tucker.
shar 3.49 (1990) by Richard H. Gumpertz. Adds uuencode support.
Current GNU sharutils. GPLv3, C program.
cshar (1984) by Michael A. Thompson and Mark Smith, now lost to bitrot. C program.
cshar (1988) by Rich Salz, C program. Likely influenced shar 3.49.
ccshar (1996), a modification to output a csh script instead. Rarely used on Usenet.
GNU shar is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
Similar formats
A version of the same concept, but for the VMS operating system, was written in 1987 by Michael Bednarek from The Melbourne Institute of Applied Economic and Social Research as a DCL script, VMS_SHAR.COM. This was later maintained and extended by James A. Gray from Xerox, and Andy Harper from King's College London.
makeself (2001–) is a shell script that generates self-ex |
https://en.wikipedia.org/wiki/KDD | KDD may refer to:
Khuzdar Airport (IATA code: KDD), Balochistan, Pakistan
Knowledge discovery in databases, a form of data mining
KDD – Kriminaldauerdienst (Berlin Crime Squad), a German television series broadcast from 2007 to 2010
KDD Group, a Ukrainian real estate development company
KDDI, a Japanese telecommunications company, formerly known as KDD
Yankunytjatjara dialect (ISO 639:kdd)
See also
Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) |
https://en.wikipedia.org/wiki/Data%20Resources | Data Resources Inc or DRI was co-founded in 1969 by Donald Marron and Otto Eckstein. Marron is best known as the former CEO of PaineWebber and founder of Lightyear Capital. Eckstein was a Harvard University economics professor, economic consultant to Lyndon Baines Johnson and member of the Council of Economic Advisors; he is best known for the development of the theory of core inflation.
DRI became the largest non-governmental distributor of economic data in the world. The company also built the largest macroeconometric model of its era. Allen Sinai was a leading architect. Richard Hokenson did much of the maintenance work.
DRI was a major customer of Burroughs Computer. During the 1970s era of rapid expansion, DRI used the Burroughs 6700 and 7700 mainframes. DRI also developed innovative software, including the PRIMA and AID database languages; EPL Econometric Programming Language; MODSIM for solving models; and MODEL for solving econometric models in particular. Later the functionality of all these programs was merged into the EPS Econometric Programming System by the chief architect of all this software, Robert P. Lacey. Other programmers in this effort included John Ahlstrom, Greg George, and Joe Polak.
The DRI Review was published monthly and summarized what the models said for the economic outlook. This information was presented in outlook conferences. DRI also held educational seminars.
General corporate timeline
1979: Eckstein and Marron sold DRI to McGraw-Hill for over $100 million. Joseph Kasputys was named president in 1981.
1984: After Eckstein's death, DRI was slow to adopt new technology. DRI was wed to mainframe computers when the industry was moving this kind of analytic work to personal computers.
1987: Wharton Econometric Forecasting Associates (WEFA) merged with Chase Econometrics, a competitor to DRI and WEFA.
2001: DRI merged with WEFA to form Global Insight.
2008: Global Insight was bought by IHS Inc., thus inheriting 50 years of experience and more than 200 full-time economists, country risk analysts, and consultants.
External links
McGraw-Hill company website
Global Insight company website
Otto Eckstein C.V. at the New School NYC
References
Defunct financial services companies of the United States
Financial data vendors
Defunct financial data vendors
Research and analysis firms of the United States
Defunct research and analysis firms
Defunct macroeconomics consulting firms |
https://en.wikipedia.org/wiki/Advanced%20SCSI%20Programming%20Interface | In computing, ASPI (Advanced SCSI Programming Interface) is an Adaptec-developed programming interface which standardizes communication on a computer bus between a SCSI driver module on the one hand and SCSI (and ATAPI) peripherals on the other.
ASPI structure
The ASPI manager software provides an interface between ASPI modules (device drivers or applications with direct SCSI support), a SCSI host adapter, and SCSI devices connected to the host adapter. The ASPI manager is specific to the host adapter and operating system; its primary role is to abstract the host adapter specifics and provide a generic software interface to SCSI devices.
On Windows 9x and Windows NT, the ASPI manager is generic and relies on the services of SCSI miniport drivers. On those systems, the ASPI interface is designed for applications which require SCSI pass-through functionality (such as CD-ROM burning software).
The primary operations supported by ASPI are discovery of host adapters and attached devices, and submitting SCSI commands to devices via SRBs (SCSI Request Blocks). ASPI supports concurrent execution of SCSI commands.
History
Originally inspired by a driver architecture developed by Douglas W. Goodall for Ampro Computers in 1983, ASPI was developed by Adaptec around 1990. It was initially designed to support DOS, OS/2, Windows 3.x, and Novell NetWare. It was originally written to support SCSI devices; support for ATAPI devices was added later. Most other SCSI host adapter vendors (for example BusLogic, DPT, AMI, Future Domain, DTC) shipped their own ASPI managers with their hardware.
Adaptec also developed generic SCSI disk and CD-ROM drivers for DOS (ASPICD.SYS and ASPIDISK.SYS).
Microsoft licensed the interface for use with Windows 9x series. At the same time Microsoft developed SCSI Pass Through Interface (SPTI), an in-house substitute that worked on the NT platform. Microsoft did not include ASPI in Windows 2000/XP, in favor of its own SPTI. Users may still download ASPI from Adaptec. A number of CD/DVD applications also continue to offer their own implementations of ASPI layer.
To support USB drives under DOS, Panasonic developed a universal ASPI driver (USBASPI.SYS) that bypasses the lack of native USB support by DOS.
Driver
ASPI was provided by the following drivers.
See also
SCSI Pass-Through Direct (SPTD)
SCSI Pass Through Interface (SPTI)
References
Application programming interfaces
SCSI
AT Attachment
Device drivers |
https://en.wikipedia.org/wiki/Nato.0%2B55%2B3d | NATO.0+55+3d was an application software for realtime video and graphics, released by 0f0003 Maschinenkunst and the Netochka Nezvanova collective in 1999 for the classic Mac OS operating system.
Being one of the earliest applications to allow realtime video manipulation and display, it was used by artists for a large variety of purposes, prominently for live performance, VJing and interactive installation.
Design
Running in the framework of Max (a visual programming interface for rapid prototyping and development of audio software), NATO.0+55+3d extended Max by allowing users to access and manipulate all the media types that QuickTime supports (films, images, 3D models, QuickTime VR, etc.). Its functionality included image generation, image processing, control over MIDI and numerical data, integration with the Internet, 3D, text and sound.
History
At the time of its release (the summer of 1999), NATO.0+55+3d was in demand as it appeared several years before other similar infrastructures such as GEM and Jitter (released by the makers of Max/MSP in October 2002). Earlier software such as Image/ine, developed in 1997 at STEIM, was moving in a similar direction, but the fact that NATO.0+55+3d operated inside the Max/MSP framework, using its "visual programming" protocol, provided both greater ease of use and more flexibility, allowing users to create their own applications and tools. It gained popularity among video artists and performers, who used it for a large variety of purposes, prominently for live performance and interactive installation.
The last version of NATO.0+55+3d modular was released in November 2000, while additional NATO objects were developed until June 2001.
Version history
Applications
Artists used the software to "manipulate video for live performance and installations" (Mieszkowski 2002). The flexibility of the interface provided artists with "a uniquely suitable environment for the creation of new synesthesiac applications and experiences" (Meta 2001) and "opened up tremendous possibilities for working with realtime video" (Gilje 2005).
As NATO was distributed with a software development kit, several artists and programmers created third party extensions (e.g. the PeRColate and Auvi object libraries), or developed entire applications based on NATO.
NATO.0+55 pilots
Some of the most prominent users of NATO.0+55:
242.pilots (Kurt Ralske, HC Gilje, Lukasz Lysakowski) – live video improvisation ensemble, winners of the Transmediale award 2003 in the category "Image" for their video performance DVD Live In Bruxelles, released on the Carpark imprint in November 2002.
Farmers Manual – the Austrian collective was among the first artists to integrate NATO visuals into their performances. Their twelve-hour performance "Help Us Stay Alive", which was presented and awarded at FCMM festival in Montreal, October 1999, was using the NATO software. The group held a max/nato/pd workshop at Avanto festival in 2001. |
https://en.wikipedia.org/wiki/Lisa%27s%20Rival | "Lisa's Rival" is the second episode of the sixth season of the American animated television series The Simpsons. It originally aired on the Fox network in the United States on September 11, 1994. Winona Ryder guest stars as Allison Taylor, a new student at Springfield Elementary School. Lisa Simpson begins to feel threatened by Allison because she is smarter, younger, and a better saxophone player. The episode's subplot sees Homer steal a large pile of sugar from a crashed truck and sell it door-to-door.
The episode was written by Mike Scully and directed by Mark Kirkland. "Lisa's Rival" was the first episode written by Scully. The episode was originally pitched by former writer Conan O'Brien, while the subplot was suggested by George Meyer. It features references to films such as The Fugitive and Scarface. Production of the episode was affected by the 1994 Northridge earthquake.
Plot
Lisa feels her status as the top student in her class is threatened when a new and exceptionally intelligent student named Allison arrives at Springfield Elementary. Lisa immediately befriends her since they share many traits, but she soon sees that Allison's gifts far exceed hers and begins to doubt herself. At a band audition, the girls stage a saxophone duel that results in Lisa passing out from overexertion twice over and thinking it was a dream both times. Allison wins the audition, much to Lisa's horror.
Even the kids who used to tease Lisa for being smart start to tease Allison instead. Still wanting to be better friends with her, Lisa visits her house after school but is dismayed at her vast number of awards. She plays a word game with Allison and her father that makes her seem dim-witted. Their rivalry comes to a head during Springfield Elementary's annual diorama competition. Bart offers to help Lisa sabotage Allison's entry so she can win the contest. He distracts the teachers and other students to allow Lisa to switch Allison's diorama of "The Tell-Tale Heart" with one featuring a cow's heart. When Principal Skinner discovers the cow's heart diorama, he humiliates Allison in front of the students and faculty. Overcome by guilt, Lisa retrieves Allison's real diorama from its hiding place under the floor. However, Skinner, unimpressed by both Allison and Lisa's entries, declares Ralph's collection of Star Wars figurines the winner. Lisa apologizes to Allison for sabotaging the contest. They become friends, picking up Ralph after he accidentally trips and breaks his action figures.
Meanwhile, Homer steals hundreds of pounds of sugar he finds at the site of Moleman's truck accident. Homer hatches a scheme to get rich by selling sugar door-to-door. He keeps the sugar piled in his backyard, where he obsessively guards it against thieves. Marge grows annoyed by this and tells him to get rid of the sugar pile at once. He refuses and accuses her of trying to sabotage his chances of wealth. Soon the sugar attracts bees from a local apiary; the beekeepers tr |
https://en.wikipedia.org/wiki/MEDLINE | MEDLINE (Medical Literature Analysis and Retrieval System Online, or MEDLARS Online) is a bibliographic database of life sciences and biomedical information. It includes bibliographic information for articles from academic journals covering medicine, nursing, pharmacy, dentistry, veterinary medicine, and health care. MEDLINE also covers much of the literature in biology and biochemistry, as well as fields such as molecular evolution.
Compiled by the United States National Library of Medicine (NLM), MEDLINE is freely available on the Internet and searchable via PubMed and NLM's National Center for Biotechnology Information's Entrez system.
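MEDLINE's PubMed gateway can also be queried programmatically through NCBI's Entrez E-utilities. A minimal sketch in Python — the ESearch endpoint is NCBI's real service, but the search term and result limit shown here are illustrative, and this sketch only constructs the query URL rather than fetching it:

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint; db=pubmed searches the MEDLINE-backed PubMed index.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term: str, retmax: int = 20) -> str:
    """Build an ESearch URL querying PubMed for `term`, returning at most `retmax` IDs."""
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term, "retmax": retmax})
```

Fetching the resulting URL returns an XML (or, with `retmode=json`, JSON) list of PubMed identifiers that can then be passed to EFetch for full records.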
History
MEDLARS (Medical Literature Analysis and Retrieval System) is a computerised biomedical bibliographic retrieval system. It was launched by the National Library of Medicine in 1964 and was the first large-scale, computer-based, retrospective search service available to the general public.
Initial development of MEDLARS
Since 1879, the National Library of Medicine has published Index Medicus, a monthly guide to medical articles in thousands of journals. The huge volume of bibliographic citations was manually compiled. In 1957 the staff of the NLM started to plan the mechanization of the Index Medicus, prompted by a desire for a better way to manipulate all this information, not only for Index Medicus but also to produce subsidiary products. By 1960 a detailed specification was prepared, and by the spring of 1961, requests for proposals were sent out to 72 companies to develop the system. As a result, a contract was awarded to the General Electric Company. A Minneapolis-Honeywell 800 computer, which was to run MEDLARS, was delivered to the NLM in March 1963, and Frank Bradway Rogers (Director of the NLM 1949 to 1963) said at the time, "...If all goes well, the January 1964 issue of Index Medicus will be ready to emerge from the system at the end of this year. It may be that this will mark the beginning of a new era in medical bibliography."
MEDLARS cost $3 million to develop, and at the time of its completion in 1964, no other publicly available, fully operational electronic storage and retrieval system of its magnitude existed. The original computer configuration operated from 1964 until its replacement by MEDLARS II in January 1975.
MEDLARS Online
In late 1971, an online version called MEDLINE ("MEDLARS Online") became available as a way to do online searching of MEDLARS from remote medical libraries. This early system covered 239 journals and boasted that it could support as many as 25 simultaneous online users (remotely logged in from distant medical libraries) at one time. However, this system remained primarily in the hands of libraries, with researchers able to submit pre-programmed search tasks to librarians and obtain results on printouts, but rarely able to interact with the NLM computer output in real-time. This situation continued through the beginning of the 1990s and the rise of the Wor |
https://en.wikipedia.org/wiki/Cochrane%20Library | The Cochrane Library (named after Archie Cochrane) is a collection of databases in medicine and other healthcare specialties provided by Cochrane and other organizations. At its core is the collection of Cochrane Reviews, a database of systematic reviews and meta-analyses which summarize and interpret the results of medical research. The Cochrane Library aims to make the results of well-conducted controlled trials readily available and is a key resource in evidence-based medicine.
Access and use
The Cochrane Library is a subscription-based database, originally published by Update Software and now published by John Wiley & Sons, Ltd. as part of Wiley Online Library. In many countries, including parts of Canada, the United Kingdom, Ireland, the Scandinavian countries, New Zealand, Australia, India, South Africa, and Poland, it has been made available free to all residents by "national provision" (typically a government or Department of Health pays for the license). There are also arrangements for free access in much of Latin America and in "low-income countries", typically via HINARI. All countries have free access to two-page abstracts of all Cochrane Reviews and to short plain-language summaries of selected articles.
Cochrane Reviews appear to be relatively underused in the United States for two reasons:
1) public access to the Cochrane Library in the US is limited (the state of Wyoming is an exception, having paid for a licence to enable free access to Cochrane Reviews for all residents of Wyoming);
2) the government-funded U.S. National Library of Medicine maintains an alternative database, MEDLINE, which is free of charge to everyone and has significantly larger coverage than Cochrane.
From 26 March to 26 May 2020, the Cochrane Library provided temporary unrestricted access to everyone in every country in response to the COVID-19 pandemic.
Contents
The Cochrane Library consists of the following databases after significant changes in 2018:
The Cochrane Database of Systematic Reviews (Cochrane Reviews). Contains all the peer-reviewed systematic reviews and protocols (Cochrane Protocols) prepared by the Cochrane Review Groups.
The Cochrane Central Register of Controlled Trials (CENTRAL). CENTRAL is a database containing details of controlled trials and other studies of healthcare interventions, drawn from bibliographic databases (mainly MEDLINE and EMBASE) and from other published and unpublished sources that are difficult to access, including trial registries such as the International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov. However, systematic reviewers need to search not only CENTRAL but also ICTRP and ClinicalTrials.gov to identify unpublished studies.
Cochrane Clinical Answers. These evidence summaries on a variety of questions of interest to healthcare professionals have a user-friendly presentation with graphics and high-level conclusions of the research evidence based on Cochr |
https://en.wikipedia.org/wiki/Kylin%20%28operating%20system%29 | Kylin () is an operating system developed by academics at the National University of Defense Technology in the People's Republic of China since 2001. It is named after the mythical beast qilin. The first versions were based on FreeBSD and were intended for use by the Chinese military and other government organizations. With version 3.0 Kylin became Linux-based, and there is a version called NeoKylin which was announced in 2010.
By 2019, the NeoKylin variant was compatible with more than 4,000 software and hardware products, and it shipped pre-installed on most computers sold in China. Together, Kylin and NeoKylin hold a 90% market share in the government sector.
A separate project using Ubuntu as the base Linux operating system was announced in 2013. The first version of Ubuntu Kylin was released in April 2013.
In August 2020, v10 of Kylin OS was launched. It is compatible with 10,000 hardware and software products and it "supports Google's Android ecosystem".
In July 2022, an open-source version of Kylin, titled openKylin was released.
FreeBSD version
Development of Kylin began in 2001, when the National University of Defense Technology was assigned the mission of developing an operating system under the 863 Program intended to make China independent of foreign technology. The aim was "to support several kinds of server platforms, to achieve high performance, high availability and high security, as well as conforming to international standards of Unix and Linux operating systems". It was created using a hierarchy model, including "the basic kernel layer which is similar to Mach, the system service layer which is similar to BSD and the desktop environment which is similar to Windows". It was designed to comply with the UNIX standards and to be compatible with Linux applications.
In February 2006, "China Military Online" (a website sponsored by PLA Daily of the Chinese People's Liberation Army) reported the "successful development of the Kylin server operating system", which it said was "the first 64-bit operating system with high security level (B2 class)" and "also the first operating system without Linux kernel that has obtained Linux global standard authentication by the international Free Standards Group".
In April 2006, it was said that the Kylin operating system was largely based on FreeBSD 5.3. An anonymous Chinese student in Australia, who used the pseudonym "Dancefire", carried out a kernel similarity analysis and showed that the similarities between the two operating systems reached 99.45 percent. One of Kylin's developers confirmed that Kylin was based on FreeBSD during a speech at the international conference EuroBSDCon 2006.
In 2009, a report presented to the US-China Economic and Security Review Commission stated that the purpose of Kylin is to make Chinese computers impenetrable to competing countries in the cyberwarfare arena. The Washington Post reported that:
China has developed more secure operating software for its tens of mil |
https://en.wikipedia.org/wiki/For%20each | For each may refer to:
In mathematics, Universal quantification. Also read as: "for all"
In computer science, foreach loop
See also
Each (disambiguation) |
https://en.wikipedia.org/wiki/.hack%20%28video%20game%20series%29 | .hack () is a series of single-player action role-playing video games developed for the PlayStation 2 console by CyberConnect2 and published by Bandai Namco Entertainment. The four games, .hack//Infection, .hack//Mutation, .hack//Outbreak, and .hack//Quarantine, all feature a "game within a game", a fictional massively multiplayer online role-playing game (MMORPG) called The World which does not require the player to connect to the Internet. Players may transfer their characters and data between games in the series. Each game comes with an extra DVD containing an episode of .hack//Liminality, the accompanying original video animation (OVA) series which details fictional events that occur concurrently with the games.
The games are part of a multimedia franchise called Project .hack, which explores the mysterious origins of The World. Set after the events of the anime series, .hack//Sign, the games focus on a player character named Kite and his quest to discover why some users have become comatose in the real world as a result of playing The World. The search evolves into a deeper investigation of the game and its effects on the stability of the Internet.
Critics gave the series mixed reviews. It was praised for its unique setting and its commitment to preserve the suspension of disbelief, as well as the character designs. However, it was criticized for uneven pacing and a lack of improvement between games in the series. The commercial success of the franchise led to the production of .hack//frägment—a Japan-only remake of the series with online capabilities—and .hack//G.U., another video game trilogy which was released for the PlayStation 2 between 2006 and 2007. A remastered collection of the latter was released for the PlayStation 4 and Microsoft Windows in 2017, titled .hack//G.U. Last Recode. The collection was later released on the Nintendo Switch on March 11, 2022.
Gameplay
.hack simulates an MMORPG; players assume the role of a participant in a fictional game called The World. The player controls the on-screen player character Kite from a third-person perspective but first-person mode is available. The player manually controls the viewing perspective using the game controller. Within the fictional game, players explore monster-infested fields and dungeons, and "Root Towns" that are free of combat. They can also log off from The World and return to a computer desktop interface which includes in-game e-mail, news, message boards, and desktop and background music customization options. The player may save the game to a memory card both from the desktop and within The World at a Save Shop. A Data Flag appears on the save file after the player completes the game, allowing the transfer of all aspects of the player character and party members to the next game in the series.
The series is typical of action role-playing games, in which players attack enemies in real time. The game's action pauses whenever the menu is opened to select magic to c |
https://en.wikipedia.org/wiki/S-box | In cryptography, an S-box (substitution-box) is a basic component of symmetric key algorithms which performs substitution. In block ciphers, they are typically used to obscure the relationship between the key and the ciphertext, thus ensuring Shannon's property of confusion. Mathematically, an S-box is a nonlinear vectorial Boolean function.
In general, an S-box takes some number of input bits, m, and transforms them into some number of output bits, n, where n is not necessarily equal to m. An m×n S-box can be implemented as a lookup table with 2m words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard (DES), but in some ciphers the tables are generated dynamically from the key (e.g. the Blowfish and the Twofish encryption algorithms).
Example
One good example of a fixed table is the S-box from DES (S5), mapping 6-bit input into a 4-bit output:
Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits (the first and last bits), and the column using the inner four bits. For example, an input "011011" has outer bits "01" and inner bits "1101"; the corresponding output would be "1001".
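The selection rule can be sketched in Python. The table below is the standard DES S5 as published in FIPS 46; the worked input "011011" reproduces the output given above:

```python
# DES S-box S5 (FIPS 46): 4 rows of 16 four-bit values.
# The row is chosen by the outer two input bits, the column by the inner four.
S5 = [
    [2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9],
    [14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6],
    [4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14],
    [11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3],
]

def sbox5(bits: str) -> str:
    """Map a 6-bit input string to S5's 4-bit output string."""
    row = int(bits[0] + bits[5], 2)   # outer two bits (first and last)
    col = int(bits[1:5], 2)           # inner four bits
    return format(S5[row][col], "04b")
```

For the article's example, `sbox5("011011")` selects row 1 ("01") and column 13 ("1101"), yielding "1001".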
Analysis and properties
When DES was first published in 1977, the design criteria of its S-boxes were kept secret to avoid compromising the technique of differential cryptanalysis (which was not yet publicly known). As a result, research in what made good S-boxes was sparse at the time. Rather, the eight S-boxes of DES were the subject of intense study for many years out of a concern that a backdoor (a vulnerability known only to its designers) might have been planted in the cipher. As the S-boxes are the only nonlinear part of the cipher, compromising those would compromise the entire cipher.
The S-box design criteria were eventually published after the public rediscovery of differential cryptanalysis, showing that they had been carefully tuned to increase resistance against this specific attack such that it was no better than brute force. Biham and Shamir found that even small modifications to an S-box could significantly weaken DES.
Any S-box where any linear combination of output bits is produced by a bent function of the input bits is termed a perfect S-box.
S-boxes can be analyzed using linear cryptanalysis and differential cryptanalysis in the form of a Linear approximation table (LAT) or Walsh transform and Difference Distribution Table (DDT) or autocorrelation table and spectrum. Its strength may be summarized by the nonlinearity (bent, almost bent) and differential uniformity (perfectly nonlinear, almost perfectly nonlinear).
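A difference distribution table counts, for every input difference, how often each output difference occurs. A short sketch (the 3-bit S-box here is invented for the example, not taken from any real cipher):

```python
def ddt(sbox, m, n):
    """Difference distribution table for an m-bit -> n-bit S-box given as a list:
    table[dx][dy] counts inputs x with sbox[x] ^ sbox[x ^ dx] == dy."""
    table = [[0] * (1 << n) for _ in range(1 << m)]
    for dx in range(1 << m):
        for x in range(1 << m):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

# Toy 3-bit permutation, purely illustrative.
toy = [0, 1, 3, 6, 7, 4, 5, 2]
t = ddt(toy, 3, 3)
```

Each row sums to 2^m, and row 0 is concentrated entirely at dy = 0; a differentially uniform S-box keeps the largest entry in every other row as small as possible.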
See also
Bijection, injection and surjection
Boolean function
Nothing-up-my-sleeve number
Permutation box (P-box)
Permutation cipher
Rijndael S-box
Substitution cipher
References
Further reading
Sources
External links
A literature survey on S-box design
John Savard's "Questions of S-box Design"
"Substitution Box Design based on Gaussian D |
https://en.wikipedia.org/wiki/Internet%20Speculative%20Fiction%20Database | The Internet Speculative Fiction Database (ISFDB) is a database of bibliographic information on genres considered speculative fiction, including science fiction and related genres such as fantasy, alternate history, and horror fiction. The ISFDB is a volunteer effort, with the database being open for moderated editing and user contributions, and a wiki that allows the database editors to coordinate with each other. The site has catalogued 2,002,324 story titles from 232,816 authors.
The code for the site has been used in books and tutorials as examples of database schema and organizing content. The ISFDB database and code are available under Creative Commons licensing. The site won the Wooden Rocket Award in the Best Directory Site category in 2005.
Purpose
The ISFDB database indexes speculative fiction (science fiction, fantasy, horror, and alternate history) authors, novels, short fiction, essays, publishers, awards, and magazines in print, electronic, and audio formats. It supports author pseudonyms, series, and cover art plus interior illustration credits, which are combined into integrated author, artist, and publisher bibliographies with brief biographical data. An ongoing effort is verification of publication contents and secondary bibliographic sources against the database, with the goals of ensuring data accuracy and improving the coverage of speculative fiction to 100 percent.
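A bibliographic database of this kind relates titles to authors while letting pseudonyms point back at a canonical author record. The sketch below is a deliberately miniature schema in that spirit, built in SQLite; the table and column names are invented for this illustration and are not ISFDB's actual schema:

```python
import sqlite3

# In-memory toy schema: authors may be pseudonyms of other authors,
# and each title links to its credited author.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (
    author_id     INTEGER PRIMARY KEY,
    canonical_name TEXT NOT NULL,
    pseudonym_of  INTEGER REFERENCES authors(author_id)
);
CREATE TABLE titles (
    title_id  INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    year      INTEGER,
    author_id INTEGER REFERENCES authors(author_id)
);
""")
conn.execute("INSERT INTO authors VALUES (1, 'Cordwainer Smith', NULL)")
conn.execute("INSERT INTO titles VALUES (1, 'Scanners Live in Vain', 1950, 1)")
row = conn.execute(
    "SELECT a.canonical_name, t.title "
    "FROM titles t JOIN authors a USING (author_id)"
).fetchone()
```

An integrated author bibliography then amounts to joining titles (and, in a fuller schema, publications and awards) against the canonical author record, following `pseudonym_of` where needed.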
History
Several speculative fiction author bibliographies were posted to the USENET newsgroup rec.arts.sf.written from 1984 to 1994 by Jerry Boyajian, Gregory J. E. Rawlins and John Wenn. A more or less standard bibliographic format was developed for these postings. Many of these bibliographies can still be found at The Linköping Science Fiction Archive. In 1993, a searchable database of awards information was developed by Al von Ruff. In 1994, John R. R. Leavitt created the Speculative Fiction Clearing House (SFCH). In late 1994, he asked for help in displaying awards information, and von Ruff offered his database tools. Leavitt declined, because he wanted code that could interact with other aspects of the site. In 1995, Al von Ruff and "Ahasuerus" (a prolific contributor to rec.arts.sf.written) started to construct the ISFDB, based on experience with the SFCH and the bibliographic format finalized by John Wenn. The first version of ISFDB went live on 8 September 1995, and a URL was published in January 1996.
The ISFDB was first located at an ISP in Champaign, Illinois, but it suffered from constrained resources in disk space and database support, which limited its growth. In October 1997 the ISFDB moved to SF Site, a major SF portal and review site. Due to the rising costs of remaining with SF Site, the ISFDB moved to its own domain in December 2002, but it was shut down by the hosting ISP due to high resource usage. In February 2003, it began to be hosted by The Cushing Library Science Fiction and Fantasy Research Collection and Institute for Scientific Comput
https://en.wikipedia.org/wiki/Basename | basename is a standard computer program on Unix and Unix-like operating systems. When basename is given a pathname, it will delete any prefix up to the last slash ('/') character and return the result. basename is described in the Single UNIX Specification and is primarily used in shell scripts.
History
basename was introduced in X/Open Portability Guide Issue 2 of 1987. It was inherited into the first version of POSIX and the Single Unix Specification. It first appeared in 4.4BSD.
The version of basename bundled in GNU coreutils was written by David MacKenzie.
The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
Usage
The Single UNIX Specification synopsis for basename is:
basename string [suffix]
string
A pathname
suffix
If specified, basename will also delete the suffix.
Examples
basename will retrieve the last name from a pathname, ignoring any trailing slashes:
$ basename /home/jsmith/base.wiki
base.wiki
$ basename /home/jsmith/
jsmith
$ basename /
/
basename can also be used to remove the end of the base name, but not the complete base name:
$ basename /home/jsmith/base.wiki .wiki
base
$ basename /home/jsmith/base.wiki ki
base.wi
$ basename /home/jsmith/base.wiki base.wiki
base.wiki
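The behaviour shown in the examples above can be sketched as a small Python function. This is a simplified model of the POSIX rules (trailing slashes stripped, all-slash paths reduced to "/", and a suffix removed only when it is a proper trailing part of the name), not the GNU coreutils implementation:

```python
def posix_basename(path: str, suffix: str = "") -> str:
    """Simplified model of POSIX basename(1) semantics."""
    if path == "":
        return "."                       # POSIX leaves '' vs '.' unspecified
    stripped = path.rstrip("/")
    if stripped == "":
        return "/"                       # path consisted entirely of slashes
    name = stripped[stripped.rfind("/") + 1:]
    # Remove the suffix only if it matches part, but not all, of the name.
    if suffix and name != suffix and name.endswith(suffix):
        name = name[:-len(suffix)]
    return name
```

Run against the examples in this section, it yields `base.wiki`, `jsmith`, `/`, `base`, `base.wi`, and `base.wiki` respectively.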
See also
List of Unix commands
dirname
Path (computing)
References
External links
Basename
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands
IBM i Qshell commands |
https://en.wikipedia.org/wiki/PAVE%20PAWS | PAVE PAWS (PAVE Phased Array Warning System) is a complex Cold War early warning radar and computer system developed in 1980 to "detect and characterize a sea-launched ballistic missile attack against the United States". The first solid-state phased array deployed, it uses a pair of Raytheon AN/FPS-115 phased array radar sets at each site to cover an azimuth angle of 240 degrees. Two sites were deployed in 1980 at the periphery of the contiguous United States, then two more in 1987–95 as part of the United States Space Surveillance Network. One system was sold to Taiwan and is still in service.
Mission
The radar was built in the Cold War to give early warning of a nuclear attack, to allow time for US bombers to get off the ground and land-based US missiles to be launched, decreasing the chance that a preemptive strike could destroy US strategic nuclear forces. The deployment of submarine-launched ballistic missiles (SLBMs) by the Soviet Union by the 1970s significantly decreased the warning time between the detection of an incoming enemy missile and its reaching its target, because SLBMs can be launched closer to the US than ICBMs, which have a long flight path from the Soviet Union to the continental US. Thus there was a need for a radar system with faster reaction time than existing radars. PAVE PAWS later acquired a second mission of tracking satellites and other objects in Earth orbit as part of the United States Space Surveillance Network.
A notable feature of the system is its phased array antenna technology; it was one of the first large phased array radars. A phased array was used because a conventional mechanically rotated radar antenna cannot turn fast enough to track multiple ballistic missiles. A nuclear strike on the US would consist of hundreds of ICBMs and SLBMs incoming simultaneously. The beam of the phased array radar is steered electronically without moving the fixed antenna, so it can be pointed in a different direction in milliseconds, allowing it to track many incoming missiles at the same time.
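Electronic steering works by feeding each radiating element a progressive phase shift proportional to the sine of the desired steering angle. The sketch below uses the textbook uniform-linear-array model with assumed numbers (a 435 MHz carrier in the UHF band and 0.35 m element spacing); these are illustrative values, not the actual AN/FPS-115 parameters, and the sign convention is arbitrary:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def element_phases(n: int, spacing_m: float, freq_hz: float, steer_deg: float):
    """Per-element phase shifts (radians, mod 2*pi) for a uniform linear
    array steered steer_deg off boresight -- textbook model only."""
    k = 2 * math.pi * freq_hz / C                      # wavenumber 2*pi/lambda
    dphi = k * spacing_m * math.sin(math.radians(steer_deg))
    return [(i * dphi) % (2 * math.pi) for i in range(n)]
```

Changing `steer_deg` changes only the phase settings, not any mechanical pointing, which is why the beam can be repositioned in milliseconds.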
Description
The AN/FPS-115 radar consists of two phased arrays of antenna elements mounted on two sloping sides of the 105 ft high transmitter building, which are oriented 120° apart in azimuth. The beam from each array can be deflected up to 60° from the array's central boresight axis, allowing each array to cover an azimuth angle of 120°, thus the entire radar can cover an azimuth of 240°. The building sides are sloped at an angle of 20°, and the beam can be directed at any elevation angle between 3° and 85°. The beam is kept at least 100 ft above the ground over public-accessible land to avoid the possibility of exposing the public to significant electromagnetic fields.
Each array is a circle 72.5 ft (22.1 m) in diameter consisting of 2,677 crossed dipole antenna elements, of which 1,792 are powered and serve as both transmitting and receiving antennas, with the rest functioning as receiving antenna |
https://en.wikipedia.org/wiki/XSS%20%28disambiguation%29 | XSS is cross-site scripting, a type of computer security vulnerability.
XSS may also refer to:
XSS file, a Microsoft Visual Studio Dataset Designer Surface Data file
X11 Screen Saver extension, of X11
Assan language (ISO 639-3 code)
Xbox Series S, a digital-only video game console
See also
Experimental Satellite System-11 (XSS 11), a spacecraft
CSS (disambiguation)
|
https://en.wikipedia.org/wiki/Asia%20Pacific%20Institute%20of%20Information%20Technology | The Asia Pacific Institute of Information Technology (APIIT) is an educational organisation specializing in providing education and training programs in computing and information technology. Founded by Datuk Dr Parmjit Singh and based originally in Malaysia, APIIT has since established other centers in Pakistan, India and Sri Lanka. The institute works in collaboration with selected universities in the United Kingdom and has produced more than 14,000 graduates.
History
Asia Pacific IIT was founded in 1993 as part of an initiative by the Malaysian Government to address the shortage of IT Professionals in the country. The newly formed Institute was based in Damansara Heights, Kuala Lumpur and offered Diploma courses in computing and IT.
In the following year, co-operative links were established with Monash University in Australia, leading to the launch of a twinning program in 1995 for bachelor's degrees. This was followed in 1996 by a twinning relationship with Staffordshire University in the UK for master's degree courses.
Expansion led to the opening of the Kuala Lumpur city campus in 1997, followed by campuses in Karachi, Pakistan (1998), Colombo, Sri Lanka (2000), Lahore, Pakistan (2000), Panipat, India (2001) and Perth, Australia (2004). In 2003 the Malaysian campus moved to new premises at Technology Park Malaysia, where it is now known as APIIT TPM.
The curriculum has since developed, with the institution recognized as a SUN and Microsoft authorized training center in 1998, a SAP University Alliance Partner in 1999 and a Microsoft Certified Technical Education Center in 2001. APIIT gained University College status in 2004.
Programs and courses
Courses are run at four levels: foundation, diploma, degree and postgraduate.
It also offers English courses at its English Language Center for international and local students. Students interested in engineering courses offered by APIIT can apply to its BTech course through the NAT 2016 online exam, conducted on 3 June 2016.
Foundation program (Pre-university)
The one-year foundation program caters for students who have completed Form 5 studies and prepares them to move on to a degree course.
Diploma programs
Diploma in Information and Communication Technology
Diploma in Information and Communication Technology with a specialism in Software Engineering
Diploma in Business with IT
Staffordshire University degree
BSc (Hons) in Computing with specialisms in:
Computing
Web Development
Multimedia Computing
Software Engineering
Mobile Computing
Artificial Intelligence
Knowledge Management
Computer Security
Biometrics
Data Analytics
Information Systems
BSc (Hons) in Business Computing with specialisms in:
Business Computing
Management
E-Marketing
BSc (Hons) in Business Information Technology
BSc (Hons) in E-Commerce
BSc (Hons) Engineering
Postgraduate programs
Seven MSc programs are run: in "Software Engineering", "Technology Management", "Information Technology Management", |
https://en.wikipedia.org/wiki/Avie%20Tevanian | Avadis "Avie" Tevanian (born 1961) is an American-Armenian software engineer. At Carnegie Mellon University, he was a principal designer and engineer of the Mach operating system (also known as the Mach Kernel). He leveraged that work at NeXT Inc. as the foundation of the NeXTSTEP operating system. He was senior vice president of software engineering at Apple from 1997 to 2003, and then chief software technology officer from 2003 to 2006. There, he redesigned NeXTSTEP to become macOS. Apple's macOS and iOS both incorporate the Mach Kernel, and iPadOS, watchOS, and tvOS are all derived from iOS. He was a longtime friend of Steve Jobs.
Early life
Tevanian is from Westbrook, Maine. He is of Armenian descent. Tevanian cloned the 1980s arcade game Missile Command, giving it the same name in a version for the Xerox Alto, and Mac Missiles! for the Macintosh platform. He has a B.A. degree in mathematics from the University of Rochester and M.S. and Ph.D. degrees in computer science from Carnegie Mellon University. There, he was a principal designer and engineer of the Mach operating system.
Career
NeXT Inc.
He was Vice President of Software Engineering at NeXT Inc. and was responsible for managing NeXT's software engineering department. There, he designed the NeXTSTEP operating system, based upon his previous academic work on Mach.
Apple Inc.
He was senior vice president of software engineering at Apple from 1997 to 2003, and then chief software technology officer from 2003 to 2006. There, he redesigned NeXTSTEP to become macOS, which later served as the basis for iOS.
In United States v. Microsoft in 2001, he was a witness for the United States Department of Justice, testifying against Microsoft.
Theranos and Dolby Labs
Tevanian left Apple on March 31, 2006, and joined the boards of both Dolby Labs and Theranos, Inc. He resigned from the Theranos board in late 2007 on acrimonious terms, as he faced legal threats and was forced to waive his right to buy a company cofounder's shares, actions he believed were retaliation for the skepticism he was often alone in expressing at board meetings about the company's finances and its progress in developing its technology. In May 2006, he joined the board of Tellme Networks, which was later sold to Microsoft. On January 12, 2010, he became managing director of Elevation Partners. In July 2015, he cofounded NextEquity Partners and as of 2017 is serving as Managing Director.
References
External links
Avie Tevanian, oral history, Computer History Museum
1961 births
American computer scientists
American people of Armenian descent
Apple Inc. employees
Apple Inc. executives
Armenian scientists
Carnegie Mellon University alumni
Date of birth missing (living people)
Kernel programmers
Living people
Macintosh operating systems people
NeXT
People from Westbrook, Maine
Place of birth missing (living people)
Theranos people
University of Rochester alumni |