| source | text |
|---|---|
https://en.wikipedia.org/wiki/Capability-based%20security | Capability-based security is a concept in the design of secure computing systems, one of the existing security models. A capability (known in some systems as a key) is a communicable, unforgeable token of authority. It refers to a value that references an object along with an associated set of access rights. A user program on a capability-based operating system must use a capability to access an object. Capability-based security refers to the principle of designing user programs such that they directly share capabilities with each other according to the principle of least privilege, and to the operating system infrastructure necessary to make such transactions efficient and secure. Capability-based security is to be contrasted with an approach that uses traditional UNIX permissions and Access Control Lists.
Although most operating systems implement a facility which resembles capabilities, they typically do not provide enough support to allow for the exchange of capabilities among possibly mutually untrusting entities to be the primary means of granting and distributing access rights throughout the system. A capability-based system, in contrast, is designed with that goal in mind.
Introduction
Capabilities achieve their objective of improving system security by being used in place of forgeable references. A forgeable reference (for example, a path name) identifies an object, but does not specify which access rights are appropriate for that object and the user program which holds that reference. Consequently, any attempt to access the referenced object must be validated by the operating system, based on the ambient authority of the requesting program, typically via the use of an access-control list (ACL). In contrast, in a system with capabilities, the mere fact that a user program possesses a capability entitles it to use the referenced object in accordance with the rights that are specified by that capability. In theory, a system with capabilities removes the need for any access control list or similar mechanism by giving all entities all and only the capabilities they will actually need.
A capability is typically implemented as a privileged data structure that consists of a section that specifies access rights, and a section that uniquely identifies the object to be accessed. The user does not access the data structure or object directly, but instead via a handle. In practice, it is used much like a file descriptor in a traditional operating system (a traditional handle), but to access every object on the system. Capabilities are typically stored by the operating system in a list, with some mechanism in place to prevent the program from directly modifying the contents of the capability (so as to forge access rights or change the object it points to). Some systems have also been based on capability-based addressing (hardware support for capabilities), such as Plessey System 250.
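The handle-plus-table arrangement can be sketched as a toy model (this is not any real OS API; the class, method names, and reserved-handle convention are invented for illustration):

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy model of a capability list: user code holds only opaque integer
// handles (like file descriptors); the rights and object reference live
// in a kernel-side table the program cannot modify, so capabilities
// cannot be forged.
public class CapabilitySketch {
    enum Right { READ, WRITE }

    // A capability pairs an object identifier with a set of access rights.
    record Capability(String objectId, Set<Right> rights) {}

    private final Map<Integer, Capability> capTable = new HashMap<>();
    private int nextHandle = 3; // 0-2 reserved, by analogy with stdio descriptors

    // The "kernel" grants a capability and hands back a handle.
    int grant(String objectId, Set<Right> rights) {
        capTable.put(nextHandle, new Capability(objectId, rights));
        return nextHandle++;
    }

    // An access check consults only the capability named by the handle;
    // no ambient authority or ACL lookup is involved.
    boolean access(int handle, Right wanted) {
        Capability c = capTable.get(handle);
        return c != null && c.rights().contains(wanted);
    }

    public static void main(String[] args) {
        CapabilitySketch kernel = new CapabilitySketch();
        int fd = kernel.grant("log-object", EnumSet.of(Right.READ));
        System.out.println(kernel.access(fd, Right.READ));   // true
        System.out.println(kernel.access(fd, Right.WRITE));  // false
        System.out.println(kernel.access(99, Right.READ));   // unknown handle: false
    }
}
```

Note that the access check never asks "who is calling?" — possession of the handle is the entire authorization, which is the distinction from ACL-based checks drawn above.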
Programs possessing capabilities can perform functions on them, s |
https://en.wikipedia.org/wiki/ID3 | ID3 is a metadata container most often used in conjunction with the MP3 audio file format. It allows information such as the title, artist, album, track number, and other information about the file to be stored in the file itself.
ID3 is a de facto standard for metadata in MP3 files; no standardization body was involved in its creation nor has such an organization given it a formal approval status. It competes with the APE tag in this area.
There are two unrelated versions of ID3: ID3v1 and ID3v2. In ID3v1, the metadata is stored in a 128-byte segment at the end of the file. In ID3v2, an extensible set of "frames" located at the start of the file are used. Subvariants of both versions exist.
ID3v1
When the MP3 standard was published in 1995, it did not include a method for storing file metadata. In 1996 Eric Kemp proposed adding a 128-byte suffix to MP3 files in which useful information such as an artist's name or a related album title could be stored. Kemp deliberately placed the tag data (which is demarcated with the 3-byte string "TAG") at the end of the file, where at worst it would cause a short burst of static to be played by older media players that did not support the tag. The method, now known as ID3v1, quickly became the de facto standard for storing metadata in MP3s despite internationalization and localization weaknesses arising from the standard's use of the ISO-8859-1 character encoding rather than the more globally compatible Unicode.
The v1 tag allows 30 bytes each for the title, artist, album, and a "comment", 4 bytes for the year, and 1 byte to identify the genre of the song from a predefined list of values.
ID3v1.1
In 1997, a modification to ID3v1 was proposed by Michael Mutschler in which two bytes formerly allocated to the comment field were used instead to store a track number so that albums stored across multiple files could be correctly ordered. The modified format became known as ID3v1.1.
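The fixed v1 layout and the v1.1 track-number convention can be illustrated with a short reader (a sketch; the sample tag bytes are invented):

```java
import java.nio.charset.StandardCharsets;

// Sketch of an ID3v1/v1.1 reader. Layout of the 128-byte tag:
// "TAG" (3 bytes) + title (30) + artist (30) + album (30)
// + year (4) + comment (30) + genre index (1).
// ID3v1.1: if comment byte 28 is zero and comment byte 29 is nonzero,
// byte 29 holds the track number.
public class Id3v1Reader {
    // Decode a fixed-width ISO-8859-1 field, stopping at the first NUL.
    static String field(byte[] tag, int off, int len) {
        String s = new String(tag, off, len, StandardCharsets.ISO_8859_1);
        int nul = s.indexOf('\0');
        return (nul >= 0 ? s.substring(0, nul) : s).trim();
    }

    public static void main(String[] args) {
        byte[] tag = new byte[128]; // invented sample tag
        System.arraycopy("TAG".getBytes(StandardCharsets.ISO_8859_1), 0, tag, 0, 3);
        System.arraycopy("My Title".getBytes(StandardCharsets.ISO_8859_1), 0, tag, 3, 8);
        System.arraycopy("1996".getBytes(StandardCharsets.ISO_8859_1), 0, tag, 93, 4);
        tag[126] = 7;  // ID3v1.1 track number (comment byte 29)
        tag[127] = 17; // genre index from the predefined list

        boolean hasTag = field(tag, 0, 3).equals("TAG");
        String title = field(tag, 3, 30);
        String year = field(tag, 93, 4);
        int track = (tag[125] == 0 && tag[126] != 0) ? tag[126] & 0xFF : 0;
        System.out.println(hasTag + " | " + title + " | " + year + " | track " + track);
    }
}
```

A pre-1.1 tag simply uses all 30 comment bytes, which is why the track test requires the zero byte at comment position 28.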
ID3v1.2
In 2002 or 2003, BirdCage Software proposed ID3v1.2, which enlarged many of the fields from 30 to 60 bytes and added a subgenre field while retaining backward compatibility with v1.1 by placing its new "enhanced" tag in front of a standard v1.1 tag. Adoption of ID3v1.2 was limited.
ID3v2
In 1998, a new specification called ID3v2 was created by multiple contributors. Although it bears the name ID3, its structure is completely distinct from that of ID3v1. ID3v2 tags are of variable size and are usually placed at the start of the file, which enables metadata to load immediately, even when the file as a whole is loading incrementally during streaming.
An ID3v2 tag consists of a number of optional frames, each of which contains a piece of metadata up to 16 MB in size. For example, a frame may be included to contain a title. The entire tag may be as large as 256 MB, and strings may be encoded in Unicode.
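The 256 MB overall tag limit follows from the ID3v2 header's size field: four bytes of which only 7 bits each are used ("syncsafe" integers, so the size bytes can never mimic an MP3 frame-sync pattern), giving at most 2^28 − 1 bytes. A minimal sketch of the decoding:

```java
// Decode an ID3v2 "syncsafe" size: four bytes, 7 significant bits each.
// The top bit of every byte is always zero, so the stored bytes can
// never be mistaken for an MPEG audio frame-sync marker.
public class SyncsafeSize {
    static int decodeSyncsafe(byte[] b) {
        return (b[0] & 0x7F) << 21 | (b[1] & 0x7F) << 14
             | (b[2] & 0x7F) << 7  | (b[3] & 0x7F);
    }

    public static void main(String[] args) {
        // Maximum encodable value: 0x7F 0x7F 0x7F 0x7F -> 2^28 - 1 (~256 MB).
        byte[] max = {0x7F, 0x7F, 0x7F, 0x7F};
        System.out.println(decodeSyncsafe(max)); // 268435455
        // A small tag: 0x00 0x00 0x02 0x01 -> 2*128 + 1 = 257 bytes.
        byte[] small = {0x00, 0x00, 0x02, 0x01};
        System.out.println(decodeSyncsafe(small)); // 257
    }
}
```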
ID3v2.2
The first public variant of v2, ID3v2.2, used three-character frame identifiers (e.g., ), which later versions replaced with four-character frame identifiers |
https://en.wikipedia.org/wiki/Redway | Redway may refer to:
Redway (surname)
Redway, California, United States, a census-designated place located in Humboldt County
Redways, a network of shared-use paths in Milton Keynes, England
Red Way, an airline based in Lincoln, NE, United States
See also
Redway School (disambiguation)
Redwater (disambiguation) |
https://en.wikipedia.org/wiki/Don%20Coppersmith | Don Coppersmith (born 1950) is a cryptographer and mathematician. He was involved in the design of the Data Encryption Standard block cipher at IBM, particularly the design of the S-boxes, strengthening them against differential cryptanalysis.
In 1994, he improved the quantum Fourier transform discovered by Peter Shor earlier that same year. He has also worked on algorithms for computing discrete logarithms, the cryptanalysis of RSA, methods for rapid matrix multiplication (see Coppersmith–Winograd algorithm) and IBM's MARS cipher. He is also a co-designer of the SEAL and Scream ciphers.
In 1972, Coppersmith obtained a bachelor's degree in mathematics from the Massachusetts Institute of Technology, followed by a master's degree and a Ph.D. in mathematics from Harvard University in 1975 and 1977 respectively. He was a Putnam Fellow each year from 1968 to 1971, becoming the first four-time Putnam Fellow in history. In 1998, he started Ponder This, an online monthly column on mathematical puzzles and problems. In October 2005, the column was taken over by James Shearer. Around that same time, he left IBM and began working at the IDA Center for Communications Research, Princeton.
In 2002, Coppersmith won the RSA Award for Excellence in Mathematics.
See also
Coppersmith's attack
Coppersmith method
References
External links |
https://en.wikipedia.org/wiki/Compatibility%20layer | In software engineering, a compatibility layer is an interface that allows binaries for a legacy or foreign system to run on a host system. This translates system calls for the foreign system into native system calls for the host system. With some libraries for the foreign system, this will often be sufficient to run foreign binaries on the host system. A hardware compatibility layer consists of tools that allow hardware emulation.
Software
Examples include:
Wine, which runs some Microsoft Windows binaries on Unix-like systems using a program loader and the Windows API implemented in DLLs
Windows's application compatibility layers, which attempt to run poorly written applications or those written for earlier versions of the platform.
Lina, which runs some Linux binaries on Windows, Mac OS X and Unix-like systems with native look and feel.
KernelEX, which runs some Windows 2000/XP programs on Windows 98/Me.
Executor, which runs 68k-based "classic" Mac OS programs in Windows, Mac OS X and Linux.
Anbox, an Android compatibility layer for Linux.
Hybris, library that translates Bionic into glibc calls.
Darling, a translation layer that attempts to run Mac OS X and Darwin binaries on Linux.
Windows Subsystem for Linux v1, which runs Linux binaries on Windows via a compatibility layer which translates Linux system calls into native windows system calls.
Cygwin, a POSIX-compatible environment that runs natively on Windows.
2ine, a project to run OS/2 applications on Linux
Rosetta 2, Apple's translation layer bundled with macOS Big Sur to allow x86-64 exclusive applications to run on ARM hardware.
ACL allows Android apps to natively execute on Tizen, webOS, or MeeGo phones.
Alien Dalvik allows Android apps to run on MeeGo and Maemo. Alien Dalvik 2.0 was also revealed for iOS on an iPad; however, unlike the MeeGo and Maemo versions, it ran from the cloud.
touchHLE is a compatibility layer (referred to as a “high-level emulator”) for Windows and macOS, created by the Swedish developer Andrea "hikari_no_yume" in early 2023 to run legacy 32-bit iOS software. As of version 0.1.0, it could run only one application, Super Monkey Ball; version 0.1.2 added support for the Lite version of Super Monkey Ball, as well as Crash Bandicoot Nitro Kart 3D and Touch & Go. The author has said that fans will have to "be patient" for anything else to be emulated. touchHLE uses code translation along with CPU emulation when necessary, and the author has stated that she does not intend to support 64-bit software. A later pull request added Android support, allowing Android devices to run Super Monkey Ball for iOS.
ipasim is a compatibility layer for Windows that uses WinObjC to translate Objective-C code to native Windows code.
aah (sic) is a program for macOS to run iOS apps on macOS 10.15 "Catalina" on x86 processors via translation of the programs via the Catalyst framework.
brs-emu is a compatibility layer to run Roku software v |
https://en.wikipedia.org/wiki/EyeToy | The EyeToy is a color webcam for use with the PlayStation 2. Supported games use computer vision and gesture recognition to process images taken by the EyeToy. This allows players to interact with the games using motion, color detection, and also sound, through its built-in microphone. It was released in 2003.
The camera was manufactured by Logitech, although newer EyeToys were manufactured by Namtai. The camera is mainly used for playing EyeToy games developed by Sony and other companies. It is not intended for use as a normal PC camera, although some programmers have written unofficial drivers for it. The EyeToy is compatible with the PlayStation 3 and can be used for video chatting. As of November 6, 2008, the EyeToy had sold 10.5 million units worldwide.
History
The EyeToy was conceived by Richard Marks in 1999, after witnessing a demonstration of the PlayStation 2 at the 1999 Game Developers Conference in San Jose, California. Marks' idea was to enable natural user interface and mixed reality video game applications using an inexpensive webcam, using the computational power of the PlayStation 2 to implement computer vision and gesture recognition technologies. He joined Sony Computer Entertainment America (SCEA) that year, and worked on the technology as Special Projects Manager for Research and Development.
Marks' work drew the attention of Phil Harrison, then Vice President of Third Party Relations and Research and Development at SCEA. Soon after being promoted to Senior Vice President of Product Development at Sony Computer Entertainment Europe (SCEE) in 2000, Harrison brought Marks to the division's headquarters in London to demonstrate the technology to a number of developers. At the demonstration, Marks was joined by Ron Festejo of Psygnosis (which would later merge to become London Studio) to begin developing a software title using the technology, which would later become EyeToy: Play. Originally called the iToy (short for "interactive toy") by the London branch, the webcam was later renamed the EyeToy by Harrison. It was first demonstrated to the public at the PlayStation Experience event in August 2002 with four minigames.
Already planned for release in Europe, the EyeToy was picked up by SCE's Japanese and American branches after the successful showing at the PlayStation Experience. In 2003, the EyeToy was released in a bundle with EyeToy: Play: in Europe on July 4, and in North America on November 4. By the end of the year, the EyeToy had sold over 2 million units in Europe and 400,000 units in the United States. On February 11, 2004, the EyeToy was released in Japan.
Design
The camera is mounted on a pivot, allowing for positioning. Focusing the camera is performed by rotating a ring around the lens. It comes with two LED lights on the front. A blue light turns on when the PS2 is on, indicating that it is ready to be used, while the red light flashes when there is insufficient light in the room. It also contains a built-in microphone |
https://en.wikipedia.org/wiki/Java%20Metadata%20Interface | Given that metadata is a set of descriptive, structural and administrative data about a group of computer data (for example such as a database schema), Java Metadata Interface (or JMI) is a platform-neutral specification that defines the creation, storage, access, lookup and exchange of metadata in the Java programming language.
History
The JMI specification was developed under the Java Community Process and is defined by JSR 40 (a JSR is the formal document that describes proposed specifications and technologies for addition to the Java platform).
JMI is based on the Meta-Object Facility (or MOF) specification from the Object Management Group (or OMG). The MOF is a metamodel (a model of any kind of metadata) used notably to define the Unified Modeling Language (or UML).
It supports the exchange of metadata through XMI. XMI is a standard for exchanging metadata information via Extensible Markup Language (or XML). The MOF/XMI specifications are used for the exchange of UML models.
Usage
Essentially, JMI can be used to write tools in Java for manipulating UML models, which can be used in Model Driven Architecture and/or Model Driven Development. There are many implementations of JMI, including the Reference Implementation from Unisys, SAP NetWeaver, and Sun Microsystems's open-source implementation from the NetBeans group. JMI is compatible with Java SE 1.3 and above through:
Standardized mappings from the MOF modeling constructs to Java;
Reflective APIs for generic discovery and navigation of metadata models and instances.
See also
External links
JSR 40
Metadata Interface
Metadata |
https://en.wikipedia.org/wiki/Java%20Management%20Extensions | Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices (such as printers) and service-oriented networks. Those resources are represented by objects called MBeans (for Managed Bean). In the API, classes can be dynamically loaded and instantiated.
Management and monitoring applications can be designed and developed using the Java Dynamic Management Kit.
JSR 003 of the Java Community Process defined JMX 1.0, 1.1 and 1.2. JMX 2.0 was being developed under JSR 255, but this JSR was subsequently withdrawn. The JMX Remote API 1.0 for remote management and monitoring is specified by JSR 160. An extension of the JMX Remote API for Web Services was being developed under JSR 262.
Adopted early on by the J2EE community, JMX has been a part of J2SE since version 5.0. "JMX" is a trademark of Oracle Corporation.
Architecture
JMX uses a three-level architecture:
The Probe level – also called the Instrumentation level – contains the probes (called MBeans) instrumenting the resources
The Agent level, or MBeanServer – the core of JMX. It acts as an intermediary between the MBean and the applications.
The Remote Management level enables remote applications to access the MBeanServer through connectors and adaptors. A connector provides full remote access to the MBeanServer API over various communication protocols (RMI, IIOP, JMS, WS-* …), while an adaptor adapts the API to another protocol (SNMP, …) or to a Web-based GUI (HTML/HTTP, WML/HTTP, …).
Applications can be generic consoles (such as JConsole and MC4J) or domain-specific (monitoring) applications. External applications can interact with the MBeans through the use of JMX connectors and protocol adapters. Connectors serve to connect an agent with a remote JMX-enabled management application. This form of communication involves a connector in the JMX agent and a connector client in the management application.
Protocol adapters provide a management view of the JMX agent through a given protocol. Management applications that connect to a protocol adapter are usually specific to the given protocol.
Managed beans
A managed bean – sometimes simply referred to as an MBean – is a type of JavaBean, created with dependency injection. Managed Beans are particularly used in the Java Management Extensions technology – but with Java EE 6 the specification provides for a more detailed meaning of a managed bean.
An MBean represents a resource running in the Java virtual machine, such as an application or a Java EE technical service (transactional monitor, JDBC driver, etc.). MBeans can be used for collecting statistics on concerns like performance, resource usage, or problems (pull); for getting and setting application configuration or properties (push/pull); and for notifying about events like faults or state changes (push).
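The standard-MBean convention can be shown with a minimal example: a public ClassNameMBean interface, an implementing class, and registration with the platform MBeanServer (the Counter example and the com.example object name are invented for illustration):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxSketch {
    // Management interface: by the standard-MBean naming convention it
    // must be public and named <ImplementationClass>MBean.
    public interface CounterMBean {
        int getCount();   // exposed as the read-only attribute "Count"
        void reset();     // exposed as an operation
    }

    public static class Counter implements CounterMBean {
        private int count = 0;
        public int getCount() { return count; }
        public void reset()   { count = 0; }
        public void increment() { count++; } // not part of the MBean interface
    }

    public static void main(String[] args) throws Exception {
        // The Agent level: the JVM's built-in platform MBeanServer.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=Counter"); // illustrative name
        Counter counter = new Counter();
        server.registerMBean(counter, name);

        counter.increment();
        // A console such as JConsole would read this attribute remotely;
        // locally the same lookup goes through the MBeanServer:
        System.out.println(server.getAttribute(name, "Count"));
    }
}
```

Only the members declared in the MBean interface are visible to management applications; `increment()` stays internal to the program.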
Java EE 6 provides that a managed bean is a bean that is implemented by a Java class, which is called its bean class. A top-leve |
https://en.wikipedia.org/wiki/JSR | JSR may refer to:
Computing
Jump to subroutine, an assembly language instruction
Java Specification Request, documents describing proposed additions to the Java platform
Research, science & technology
Joint spectral radius, in mathematics
Jonathan's Space Report, an online newsletter
Journal of Sedimentary Research
The Journal of Sex Research
Journal for the Study of Religion
Journal of Service Research
Journal of Synchrotron Radiation
Journal of Spacecraft and Rockets
Other uses
JSR Corporation, a Japanese company active in the semiconductor industry
Jacobinte Swargarajyam, a 2016 Indian Malayalam-language film
Jessore Airport, in Bangladesh
Jet Set Radio, a video game
John Septimus Roe Anglican Community School, in Perth, Western Australia
Jai Shri Ram, a popular Hindu slogan and greeting |
https://en.wikipedia.org/wiki/European%20Coalition%20for%20Just%20and%20Effective%20Drug%20Policies | The European Coalition for Just and Effective Drug Policies (ENCOD), originally European NGO Council On Drugs and development, is a network of European non-governmental organisations and citizens concerned with the impact of current international drug policies on the lives of the most affected sectors in Europe and the Global South.
Since 1994 it has been working to advocate more just and effective drug control policies, which include an integrated solution for all problems related to the global drugs phenomenon.
History
ENCOD was set up in 1993 thanks to the support of the European Commission, as an NGO counterpart to the European Monitoring Centre on Drugs and Drug Addiction. However, the Management Board of the EMCDDA later decided to ignore any NGO involvement in the work of the EMCDDA.
In 1998 a Manifesto for just and effective drug policies was drafted by 14 NGOs from Europe, Africa and South America, then signed by hundreds of organizations, companies, political parties, and citizens.
ENCOD is a self-financed and independent network, legally based in Belgium, and is steered by a committee composed of citizens from different EU countries (5 people after the 2013 General Assembly).
Mission
The organization declares three primary objectives:
To improve understanding of the causes and effects of the drugs trade,
To contribute to the elaboration of just and effective drugs control policies,
To bring about greater consistency between drugs control efforts and economic and social policies.
ENCOD is implementing these objectives in three ways:
it facilitates coordination, information exchange and joint analysis between its members,
it carries out joint information campaigns, aimed at the general public,
it carries out joint advocacy activities, aimed at policy-makers and the media.
Activities
Advising and organizational activities
In 1998, publication of the Manifesto for just and effective drug policies
Annual participation in the round table organized by the Commission on Narcotic Drugs (CND)
In 2006 ENCOD organised, with the support of the Greens–European Free Alliance and the European United Left, a meeting in the European Parliament in Brussels
From 7 to 9 March 2008 ENCOD co-organised an international meeting in Vienna for a European alternative in drug policies called Drug Peace Days
In the early 2000s, setup of the Cannabis Social Club concept and rules
Advice and counselling about possible changes to drug policies.
In March 2014, participation in the annual meeting of the Commission on Narcotic Drugs (CND), with the set-up of an alternative press center during the meeting.
Public campaigns
In 2003, 'Spread the Seeds' campaign
Series of conferences at the United Nations Office on Drugs and Crime: "Vienna 2003: a Chance for the World" (March 2003) and "The Road to Vienna" (November 2006).
In 2009, conference "Coca 2009, from persecution to proposal" on the possibilities of a European approach towards the coca leaf, at the European Pa |
https://en.wikipedia.org/wiki/Gist | Gist or GIST may refer to:
Computing
GiST (Generalized Search Tree), a flexible data structure for building search trees
gist, an upper ontology in information science
Gist, a pastebin service operated by GitHub
Gist (graphics software), a scientific graphics library written in the C programming language
Gist (contact manager), an online contact management service acquired by BlackBerry
Ginān Index & Search Tool, a tool to make Ginans available to researchers and scholars
Medicine
Gastrointestinal stromal tumor, a neoplasm of the gastrointestinal tract
Gist processing, a cognitive process in fuzzy-trace theory
Organizations
German Institute of Science and Technology (Singapore), a research and education institute
Gist Communications, a former Internet-based TV listings and entertainment news provider
Global Institute of Science & Technology, Haldia, West Bengal, India
Gwangju Institute of Science and Technology, a research university in Gwangju City, South Korea
Global Innovation through Science and Technology initiative, an entrepreneur coaching organization
Places
Gist, Texas, a community in the US
Mount Gist, Queen Maud Land, Antarctica
Other uses
Gist (surname)
The Gist (podcast), an American podcast by Slate magazine
Gist, a term used in Nigerian English to refer to idle chat or gossip
See also
Jist (disambiguation) |
https://en.wikipedia.org/wiki/Cut-through%20switching | In computer networking, cut-through switching, also called cut-through forwarding is a method for packet switching systems, wherein the switch starts forwarding a frame (or packet) before the whole frame has been received, normally as soon as the destination address and outgoing interface is determined. Compared to store and forward, this technique reduces latency through the switch and relies on the destination devices for error handling. Pure cut-through switching is only possible when the speed of the outgoing interface is at least equal or higher than the incoming interface speed.
Adaptive switching dynamically selects between cut-through and store and forward behaviors based on current network conditions.
Cut-through switching is closely associated with wormhole switching.
Use in Ethernet
When cut-through switching is used in Ethernet, the switch is not able to verify the integrity of an incoming frame before forwarding it.
The technology was developed by Kalpana, the company that introduced the first Ethernet switch.
The primary advantage of cut-through Ethernet switches, compared to store-and-forward Ethernet switches, is lower latency.
Cut-through Ethernet switches can support an end-to-end network delay latency of about ten microseconds.
End-to-end application latencies below 3 microseconds require specialized hardware such as InfiniBand.
A cut-through switch will forward corrupted frames, whereas a store and forward switch will drop them. Fragment free is a variation on cut-through switching that partially addresses this problem by assuring that collision fragments are not forwarded. Fragment free will hold the frame until the first 64 bytes are read from the source to detect a collision before forwarding. This is only useful if there is a chance of a collision on the source port.
The theory here is that frames damaged by collisions are often shorter than the minimum valid Ethernet frame size of 64 bytes. A fragment-free switch buffers the first 64 bytes of each frame, updates the source MAC address and port in its forwarding table if necessary, reads the destination MAC address, and forwards the frame. If the frame is less than 64 bytes, it is discarded. Frames that are smaller than 64 bytes are called runts; this is why fragment-free switching is sometimes called “runt-less” switching. Because the switch only ever buffers 64 bytes of each frame, fragment-free is a faster mode than store-and-forward, but there still exists a risk of forwarding bad frames.
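The fragment-free rule can be sketched as a small decision function (illustrative logic only, not a real switch implementation; the method and label names are invented):

```java
// Fragment-free forwarding decision: buffer the first 64 bytes of each
// frame; a frame that ends before 64 bytes is a runt (likely a collision
// fragment) and is dropped, while anything reaching 64 bytes is forwarded
// immediately, without waiting to check the frame check sequence.
public class FragmentFree {
    static final int MIN_FRAME = 64; // minimum valid Ethernet frame size

    static String decide(int bytesReceived, boolean frameComplete) {
        if (bytesReceived < MIN_FRAME && frameComplete) return "drop (runt)";
        if (bytesReceived >= MIN_FRAME) return "forward"; // FCS not yet seen
        return "buffer"; // still collecting the first 64 bytes
    }

    public static void main(String[] args) {
        System.out.println(decide(40, true));  // a 40-byte fragment: drop (runt)
        System.out.println(decide(64, false)); // threshold reached: forward
        System.out.println(decide(30, false)); // mid-frame: keep buffering
    }
}
```

This also makes the residual risk visible: a frame longer than 64 bytes but corrupted later in its payload is still forwarded.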
There are certain scenarios that force a cut-through Ethernet switch to buffer the entire frame, acting like a store-and-forward Ethernet switch for that frame:
Speed: When the outgoing port is faster than the incoming port, the switch must buffer the entire frame received from the lower-speed port before the switch can start transmitting that frame out the high-speed port, to prevent underrun. (When the outgoing port is slower than the incoming port, the switch can perform cut-through switching and start tra |
https://en.wikipedia.org/wiki/OJ%20%28programming%20tool%29 | OJ, formerly named OpenJava, is a programming tool that parses and analyzes Java source code. It uses a metaobject protocol (MOP) to provide services for language extensions. Michiaki Tatsubori was the lead developer of OpenJava. Its first release was back to 1997, and won the Student Encouragement Prize at the Java Conference Grandprix '97 held in Japan.
This isn't to be confused with OpenJDK, which is the open source release of the Java compiler runtime and tools.
OpenJava was renamed OJ in October 2007 at the request of Sun Microsystems.
References
External links
OJ Homepage |
https://en.wikipedia.org/wiki/Informal%20value%20transfer%20system | An informal value transfer system (IVTS) is any system, mechanism, or network of people that receives money for the purpose of making the funds or an equivalent value payable to a third party in another geographic location, whether or not in the same form. Informal value transfers generally take place outside of the conventional banking system through non-bank financial institution or other business entities whose primary business activity may not be the transmission of money. The IVTS transactions occasionally interconnect with formal banking systems, such as through the use of bank accounts held by the IVTS operator.
History
An informal value transfer system is an alternative and unofficial remittance and banking system that pre-dates modern banking. Such systems were established as a means of settling accounts within and between villages, and have existed for more than 4,000 years.
Their use as global networks for financial transactions spread as expatriates from the original countries settled abroad. Today, IVTS operations are found in most countries. Depending on the ethnic group, IVTS are called by a variety of names including, for example, hawala (Middle East, Afghanistan, Indian Sub-Continent); fei ch’ien (飞钱 or "flying money"; China); phoe kuan (Thailand); and Black Market Peso Exchange (South America).
Individuals or groups engaged in operating IVTS may do so on a full-time, part-time, or ad hoc basis. They may work independently, or as part of a multi-person network. IVTS are based on trust; operators rarely misappropriate the funds entrusted to them.
How IVTS works
The sender gives money to an IVTS agent, and the agent's counterpart in the receiving region or country delivers the money. The agent calls or faxes instructions to the counterpart, and the money is delivered within a few hours. In the past, the message could be carried by couriers, whether people or even animals (such as pigeons). Settlements are made either with a private delivery service or by a wire transfer in the opposite direction. Another method of balancing the books is to under-invoice goods shipped abroad, so that the receiver can resell the products at a higher market price.
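The bookkeeping behind this flow can be modeled as a toy ledger between two agents (all names and amounts are invented; this is a sketch of the settlement idea, not a description of any real system):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an IVTS channel: agent A receives funds from senders,
// instructs counterpart B, B pays the recipients, and an inter-agent
// balance accumulates until a periodic settlement clears it.
public class IvtsSketch {
    private long owedToCounterpart = 0;          // what A owes B
    private final Map<String, Long> payouts = new HashMap<>();

    // Sender hands cash to agent A; counterpart B delivers it.
    void transfer(String recipient, long amount) {
        payouts.merge(recipient, amount, Long::sum); // B pays the recipient
        owedToCounterpart += amount;                 // A's debt to B grows
    }

    // Settlement in the opposite direction (e.g., a wire or a reverse
    // transfer) reduces the accumulated balance.
    void settle(long amount) { owedToCounterpart -= amount; }

    long owed() { return owedToCounterpart; }
    long paidTo(String recipient) { return payouts.getOrDefault(recipient, 0L); }

    public static void main(String[] args) {
        IvtsSketch channel = new IvtsSketch();
        channel.transfer("recipient-1", 500);
        channel.transfer("recipient-2", 300);
        System.out.println(channel.owed());   // 800 outstanding
        channel.settle(800);                  // books balanced
        System.out.println(channel.owed());   // 0
    }
}
```

The point of the model is that no money crosses the border per transaction; only the netted balance is ever settled.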
Use of IVTS
IVTS are used by a variety of individuals, businesses, organisations, and even governments to remit funds domestically and abroad. Expatriates and immigrants often use IVTS to send money back to their families and friends in their home countries (in the case of workers abroad) or to foreign countries (in the case of merchants who need extra money to start a business). IVTS operations are also used by legitimate companies, traders, organisations, and government agencies needing to conduct business in countries with basic or no formal financial systems.
In some countries, IVTS-type networks operate in parallel with formal financial institutions or as a substitute or alternative for them. Besides citizens of |
https://en.wikipedia.org/wiki/Cohesion%20%28computer%20science%29 | In computer programming, cohesion refers to the degree to which the elements inside a module belong together. In one sense, it is a measure of the strength of relationship between the methods and data of a class and some unifying purpose or concept served by that class. In another sense, it is a measure of the strength of relationship between the class's methods and data themselves.
Cohesion is an ordinal type of measurement and is usually described as “high cohesion” or “low cohesion”. Modules with high cohesion tend to be preferable, because high cohesion is associated with several desirable traits of software including robustness, reliability, reusability, and understandability. In contrast, low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, or even understand.
Cohesion is often contrasted with coupling. High cohesion often correlates with loose coupling, and vice versa. The software metrics of coupling and cohesion were invented by Larry Constantine in the late 1960s as part of Structured Design, based on characteristics of “good” programming practices that reduced maintenance and modification costs. Structured Design, including cohesion and coupling, was published in the article Stevens, Myers & Constantine (1974) and the book Yourdon & Constantine (1979); the latter two terms subsequently became standard in software engineering.
High cohesion
In object-oriented programming, if the methods that serve a class tend to be similar in many aspects, then the class is said to have high cohesion. In a highly cohesive system, code readability and reusability is increased, while complexity is kept manageable.
Cohesion is increased if:
The functionalities embedded in a class, accessed through its methods, have much in common.
Methods carry out a small number of related activities, by avoiding coarsely grained or unrelated sets of data.
Related methods are in the same source file or otherwise grouped together; for example, in separate files but in the same sub-directory/folder.
Advantages of high cohesion (or "strong cohesion") are:
Reduced module complexity (they are simpler, having fewer operations).
Increased system maintainability, because logical changes in the domain affect fewer modules, and because changes in one module require fewer changes in other modules.
Increased module reusability, because application developers will find the component they need more easily among the cohesive set of operations provided by the module.
While in principle a module can have perfect cohesion by consisting of a single, atomic element – a single function, for example – in practice complex tasks are not expressible as a single, simple element. A single-element module therefore has an element that is either too complicated for its task or too narrow, and thus tightly coupled to other modules. Thus cohesion is balanced against both unit complexity and coupling.
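As a minimal illustration (the class and method names below are invented for this sketch), a class whose methods all serve one purpose and operate on the same data has high cohesion, whereas a grab-bag "utility" class does not:

```cpp
#include <string>

// High cohesion: every method serves one purpose -- managing the account
// balance -- and all of them operate on the same data member.
class Account {
public:
    explicit Account(long cents) : balance_cents_(cents) {}
    void deposit(long cents)  { balance_cents_ += cents; }
    void withdraw(long cents) { balance_cents_ -= cents; }
    long balance() const      { return balance_cents_; }
private:
    long balance_cents_;
};

// Low cohesion: unrelated responsibilities (arithmetic, logging,
// e-mail formatting) lumped into one class; the methods share no data
// and no unifying concept.
class MiscUtils {
public:
    long add(long a, long b) { return a + b; }
    std::string make_log_line(const std::string& msg) { return "[log] " + msg; }
    std::string email_subject(const std::string& user) { return "Hello, " + user; }
};
```

The first class can be tested and reused as a unit; the second tends to accumulate unrelated dependencies, which is the maintainability cost associated with low cohesion.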
Types of cohesion
Cohesion is |
https://en.wikipedia.org/wiki/Copy%20constructor%20%28C%2B%2B%29 | In the C++ programming language, a copy constructor is a special constructor for creating a new object as a copy of an existing object. Copy constructors are the standard way of copying objects in C++, as opposed to cloning, and have C++-specific nuances.
The first argument of such a constructor is a reference to an object of the same type as is being constructed (const or non-const), which might be followed by parameters of any type (all having default values).
Normally the compiler automatically creates a copy constructor for each class (known as an implicit copy constructor), but in special cases the programmer writes one, known as a user-defined copy constructor; in such cases, the compiler does not create one. Hence, a class always has a copy constructor, defined either by the user or by the system.
A user-defined copy constructor is generally needed when an object owns pointers or non-shareable references, such as to a file, in which case a destructor and an assignment operator should also be written (see Rule of three).
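A minimal sketch of such a class (the names are illustrative): it owns a raw array, so a user-defined copy constructor deep-copies the buffer, and an assignment operator and destructor are defined alongside it per the Rule of three:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>

class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}

    // User-defined copy constructor: deep-copies the owned array, so the
    // copy does not share the pointer with the original.
    Buffer(const Buffer& other)
        : size_(other.size_), data_(new int[other.size_]) {
        std::copy(other.data_, other.data_ + size_, data_);
    }

    // Copy assignment via copy-and-swap: the by-value parameter invokes
    // the copy constructor; swapping keeps the operation exception-safe.
    Buffer& operator=(Buffer other) {
        std::swap(size_, other.size_);
        std::swap(data_, other.data_);
        return *this;
    }

    ~Buffer() { delete[] data_; }

    int& at(std::size_t i)   { return data_[i]; }  // no bounds checking
    std::size_t size() const { return size_; }

private:
    std::size_t size_;
    int* data_;
};
```

Without the user-defined copy constructor, the implicit one would copy only the pointer, and both objects would later delete the same array.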
Definition
Copying of objects is achieved by the use of a copy constructor and an assignment operator. A copy constructor has as its first parameter a (possibly const or volatile) reference to its own class type. It can have more arguments, but the rest must have default values associated with them. The following would be valid copy constructors for class X:
X(const X& copy_from_me);
X(X& copy_from_me);
X(volatile X& copy_from_me);
X(const volatile X& copy_from_me);
X(X& copy_from_me, int = 0);
X(const X& copy_from_me, double = 1.0, int = 42);
...
The first one should be used unless there is a good reason to use one of the others. One of the differences between the first and the second is that temporaries can be copied with the first. For example:
X a = X(); // valid given X(const X& copy_from_me) but not valid given X(X& copy_from_me)
// because the second wants a non-const X&
// to create a, the compiler first creates a temporary by invoking the default constructor
// of X, then uses the copy constructor to initialize as a copy of that temporary.
// A temporary object is an rvalue and cannot bind to a non-const lvalue
// reference, so the const qualifier is required here.
// For some compilers both versions actually work, but this behaviour should
// not be relied upon because it is non-standard.
A similar difference applies when directly attempting to copy a const object:
const X a;
X b = a; // valid given X(const X& copy_from_me) but not valid given X(X& copy_from_me)
// because the second wants a non-const X&
The X& form of the copy constructor is used when it is necessary to modify the copied object. This is very rare, but it can be seen in the standard library's std::auto_ptr, whose copy constructor modifies its argument to transfer ownership. An lvalue must be provided:
X a;
X b = a; // valid if any of the copy constructors are defined
|
https://en.wikipedia.org/wiki/Btrieve | Btrieve is a transactional database (navigational database) software product. It is based on Indexed Sequential Access Method (ISAM), which is a way of storing data for fast retrieval. There have been several versions of the product for DOS, Linux, older versions of Microsoft Windows, 32-bit IBM OS/2 and for Novell NetWare.
It was originally a record manager published by SoftCraft. Btrieve was written by Doug Woodward and Nancy Woodward and initial funding was provided in part by Doug's brother Loyd Woodward. Around the same time as the release of the first IBM PCs, Doug received 50% of the company as a wedding gift and later purchased the remainder from his brother. After gaining market share and popularity, it was acquired from Doug and Nancy Woodward by Novell in 1987, for integration into their NetWare operating system in addition to continuing with the DOS version. The product gained significant market share as a database embedded in mid-market applications in addition to being embedded in every copy of NetWare 2.x, 3.x and 4.x since it was available on every NetWare network. After some reorganization within Novell, it was decided in 1994 to spin off the product and technology to Doug and Nancy Woodward along with Ron Harris, to be developed by a new company known as Btrieve Technologies, Inc. (BTI).
Btrieve was modularized starting with version 6.15 and became one of two database front-ends that plugged into a standard software interface called the MicroKernel Database Engine. The Btrieve front-end supported the Btrieve API and the other front-end was called Scalable SQL, a relational database product based upon the MKDE that used its own variety of Structured Query Language, otherwise known as SQL. After these versions were released (Btrieve 6.15 and ScalableSQL v4) the company was renamed to Pervasive Software prior to their IPO. Shortly thereafter the Btrieve and ScalableSQL products were combined into the products sold as Pervasive.SQL or PSQL, and later Actian Zen. Btrieve continued for a few years while ScalableSQL was quickly dropped. Customers were encouraged to upgrade to Pervasive.SQL, which supported both SQL and Btrieve applications.
Architecture
Btrieve is not a relational database management system (RDBMS). Early descriptions of Btrieve referred to it as a record manager (Pervasive initially used the term navigational database, but later changed this to transactional database) because it only deals with the underlying record creation, data retrieval, record updating and data deletion primitives. It uses ISAM as its underlying indexing and storage mechanism. A key part of Pervasive's architecture is the use of a MicroKernel Database Engine, which allows different database backends to be modularised and integrated easily into their DBMS package, Pervasive.SQL. This has enabled them to support both their Btrieve navigational database engine and an SQL-based engine, Scalable SQL.
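The record-level primitives described above can be sketched as follows (a purely illustrative in-memory model: a std::map stands in for Btrieve's on-disk ISAM index, and the method names are invented, not the Btrieve API):

```cpp
#include <map>
#include <optional>
#include <string>

// Illustrative record manager: create/retrieve/update/delete over a
// key-ordered index (std::map standing in for an ISAM structure).
class RecordManager {
public:
    bool insert(int key, const std::string& rec) {
        return index_.emplace(key, rec).second;   // false if key exists
    }
    std::optional<std::string> get(int key) const {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;
        return it->second;
    }
    bool update(int key, const std::string& rec) {
        auto it = index_.find(key);
        if (it == index_.end()) return false;
        it->second = rec;
        return true;
    }
    bool erase(int key) { return index_.erase(key) == 1; }

    // Navigational access: step to the next key in index order, the kind
    // of positioning an ISAM-style "get next" operation provides.
    std::optional<int> next_key(int key) const {
        auto it = index_.upper_bound(key);
        if (it == index_.end()) return std::nullopt;
        return it->first;
    }

private:
    std::map<int, std::string> index_;
};
```

The point is only that the interface consists of record-level operations and key-based positioning, not a query language; SQL support in Pervasive.SQL is layered on top of such an engine.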
Current versions of Btrieve support syst |
https://en.wikipedia.org/wiki/AOP | AOP may refer to:
Organisations
Aama Odisha Party, political party, India
Association of Optometrists, a British trade association
American Opera Projects
Assembly of the Poor, an NGO network in Thailand
Association of Photographers, a British trade association
Australian Orangutan Project
Army of the Potomac, the major Union Army in the Eastern Theater of the American Civil War.
Science, mathematics and technology
An abbreviation of prebediolone acetate (21-acetoxypregnenolone)
Advanced oxidation process
Adverse outcome pathway, a framework linking molecular-level events to adverse effects in biology
Agent-oriented programming
All one polynomial
Annals of Probability, a mathematics journal
Apnea of prematurity
Argument of periapsis, an orbital element of an orbiting body (for orbits around the Sun, 'argument of perihelion')
Aspect-oriented programming
Attribute-oriented programming
AOP (IRC), AutoOp, an Internet Relay chat access level
Food
Appellation d'origine protégée (protected designation of origin), a quality label of the European Union
Appellation d'origine protégée, a food certification of Switzerland
Other
aop, advance online publication
AŌP, a Japanese idol group
Age of Persuasion, a Canadian radio programme
The Authors of Pain, a professional wrestling tag team
Air observation post, artillery spotter aircraft
Apocalypse of Peter, a New Testament apocryphal text.
Annual Operations Plan, part of Sales and operations planning |
https://en.wikipedia.org/wiki/PL/M | The PL/M programming language
(an acronym of Programming Language for Microcomputers)
is a high-level language conceived and developed by
Gary Kildall in 1973 for Hank Smith at Intel for its microprocessors.
Overview
The language incorporated ideas from PL/I, ALGOL and XPL, and had an integrated macro processor. As a graduate of the University of Washington, Kildall had used its Burroughs B5500 computer, and was thus aware of the potential of high-level languages such as ESPOL for systems programming.
Unlike other contemporary languages such as Pascal, C or BASIC, PL/M had no standard input or output routines. It included features targeted at the low-level hardware specific to the target microprocessors, and as such, it could support direct access to any location in memory, I/O ports and the processor interrupt flags in a very efficient manner. PL/M was the first higher level programming language for microprocessor-based computers and was the original implementation language for those parts of the CP/M operating system which were not written in assembler. Many Intel and Zilog Z80-based embedded systems were programmed in PL/M during the 1970s and 1980s. For instance, the firmware of the Service Processor component of CISC IBM AS/400 was written in PL/M.
The original PL/M compiler targeted the Intel 8008. An updated version (PL/M-80) generated code for the 8080 processor, which would also run on the newer Intel 8085 as well as on the Zilog Z80 family (as it is backward-compatible with the 8080). Later followed compilers for the Intel 8048 and Intel 8051-microcontroller family (PL/M-51) as well as for the 8086 (8088) (PL/M-86), 80186 (80188) and subsequent 8086-based processors, including the advanced 80286 and the 32-bit 80386. There were also PL/M compilers developed for later microcontrollers, such as the Intel 8061 and 8096 / MCS-96 architecture family (PL/M-96).
While some PL/M compilers were "native", meaning that they ran on systems using that same microprocessor, e.g. for the Intel ISIS operating system, there were also cross compilers, for instance PLMX, which ran on other operating environments such as Digital Research CP/M, Microsoft's DOS, and Digital Equipment Corporation's VAX/VMS.
PL/M is no longer supported by Intel, but aftermarket tools like PL/M-to-C source-code translators exist.
PL/M sample code
FIND: PROCEDURE(PA,PB) BYTE;
    DECLARE (PA,PB) BYTE;
    /* FIND THE STRING IN SCRATCH STARTING AT PA AND ENDING AT PB */
    DECLARE J ADDRESS,
        (K, MATCH) BYTE;
    J = BACK;
    MATCH = FALSE;
    DO WHILE NOT MATCH AND (MAXM > J);
        LAST,J = J + 1;     /* START SCAN AT J */
        K = PA;             /* ATTEMPT STRING MATCH AT K */
        DO WHILE SCRATCH(K) = MEMORY(LAST) AND
            NOT (MATCH := K = PB);
            /* MATCHED ONE MORE CHARACTER */
            K = K + 1; LAST = LAST + 1;
        END;
    END;
    IF MATCH THEN /* MOVE STORAGE */
        DO; LAST = LAST - 1; CALL MOVER;
|
https://en.wikipedia.org/wiki/Rankit | In statistics, rankits of a set of data are the expected values of the order statistics of a sample from the standard normal distribution the same size as the data. They are primarily used in the normal probability plot, a graphical technique for normality testing.
Example
This is perhaps most readily understood by means of an example. If an i.i.d. sample of six items is taken from a normally distributed population with expected value 0 and variance 1 (the standard normal distribution) and then sorted into increasing order, the expected values of the resulting order statistics are:
−1.2672, −0.6418, −0.2016, 0.2016, 0.6418, 1.2672.
Suppose the numbers in a data set are
65, 75, 16, 22, 43, 40.
Then one may sort these and line them up with the corresponding rankits; in order they are
16, 22, 40, 43, 65, 75,
which yields the points: (−1.2672, 16), (−0.6418, 22), (−0.2016, 40), (0.2016, 43), (0.6418, 65), (1.2672, 75).
These points are then plotted as the vertical and horizontal coordinates of a scatter plot.
Alternative method
Alternatively, rather than sort the data points, one may rank them, and rearrange the rankits accordingly. This yields the same pairs of numbers, but in a different order.
For:
65, 75, 16, 22, 43, 40,
the corresponding ranks are:
5, 6, 1, 2, 4, 3,
i.e., the number appearing first is the 5th-smallest, the number appearing second is the 6th-smallest, the number appearing third is the smallest, the number appearing fourth is the 2nd-smallest, etc. One rearranges the expected normal order statistics accordingly, getting the rankits of this data set: 0.6418, 1.2672, −1.2672, −0.6418, 0.2016, −0.2016.
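Either procedure can be reproduced mechanically. The sketch below (the function name is invented; the size-6 rankit values are hard-coded from the table given earlier) sorts the data and pairs each value with the expected normal order statistic of its rank:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <utility>
#include <vector>

// Pair each data value with the rankit (expected normal order statistic)
// of its rank. Assumes exactly six values, matching the table for n = 6.
std::vector<std::pair<double, double>> rankit_pairs(std::vector<double> data) {
    // Rankits for a sample of size 6 from the standard normal distribution.
    const std::array<double, 6> rankits = {
        -1.2672, -0.6418, -0.2016, 0.2016, 0.6418, 1.2672};
    std::sort(data.begin(), data.end());
    std::vector<std::pair<double, double>> pts;
    for (std::size_t i = 0; i < data.size(); ++i)
        pts.emplace_back(rankits[i], data[i]);  // (horizontal, vertical)
    return pts;
}
```

For a general sample size the rankits themselves must be computed (or looked up) for that n; only the pairing step is shown here.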
Rankit plot
A graph plotting the rankits on the horizontal axis and the data points on the vertical axis is called a rankit plot or a normal probability plot. Such a plot is necessarily nondecreasing. In large samples from a normally distributed population, such a plot will approximate a straight line. Substantial deviations from straightness are considered evidence against normality of the distribution.
Rankit plots are usually used to visually demonstrate whether data are from a specified probability distribution.
A rankit plot is a kind of Q–Q plot – it plots the order statistics (quantiles) of the sample against certain quantiles (the rankits) of the assumed normal distribution. Q–Q plots may use other quantiles for the normal distribution, however.
History
The rankit plot and the word rankit were introduced by the biologist and statistician Chester Ittner Bliss (1899–1979).
See also
Probit analysis developed by C. I. Bliss in 1934.
External links
Engineering Statistics Handbook
Statistical charts and diagrams
Normal distribution |
https://en.wikipedia.org/wiki/The%20Phil%20Silvers%20Show | The Phil Silvers Show, originally titled You'll Never Get Rich, is a sitcom which ran on the CBS Television Network from 1955 to 1959. A pilot titled "Audition Show" was made in 1955, but it was never broadcast. 143 other episodes were broadcast – all half-an-hour long except for a 1959 one-hour live special. The series starred Phil Silvers as Master Sergeant Ernest G. Bilko of the United States Army.
The series was created by Nat Hiken and won three consecutive Emmy Awards for Best Comedy Series. The show is sometimes titled Sergeant Bilko or simply Bilko in reruns, and it is very often referred to by these names, both on-screen and by viewers. The show's success transformed Silvers from a journeyman comedian into a star and writer-producer Hiken from a highly regarded behind-the-scenes comedy writer into a publicly recognized creator.
Production
By 1955, the American television business was already moving westward to Los Angeles, but Nat Hiken insisted on filming the series in New York City. He believed this location was more conducive to comedic creativity and the show's humor. Early episodes were filmed at DuMont's television center in New York City – now home to WNYW-TV – with later episodes shot at the CBS "Hi Brown" Studios in Chelsea, Manhattan.
Most of the series was filmed to simulate a live performance. The actors memorized their lines and performed the scenes in sequence before a studio audience. Thus, there are occasional flubs and awkward pauses. Actor Paul Ford, playing Bilko's commanding officer, was notorious for forgetting his lines; when he would get a blank expression on his face, Silvers and the rest of the cast would improvise something to save the scene, like "Oh, you remember, Colonel, the top brass is coming..." At that point, Ford would pick up where he left off.
Creator Nat Hiken wrote or co-wrote 70 of the first 71 episodes, missing only episode 70 (the third-season finale). He left the show after that season. In the fourth and fifth seasons there were numerous staff writers, among whom Neil Simon gained prominence as the show went on. Simon wrote or co-wrote 20 episodes, including the series finale.
Later episodes were filmed in California. Producer Mike Todd, making a guest appearance, insisted that his show should be filmed like a movie, out of sequence. The cast and crew tried it and soon found that Todd's way was easier. Production continued in this manner until the series ended in 1959.
The fact that Silvers and Hiken were both sports fans inspired some of the character names. Bilko was named after Steve Bilko, a minor league baseball player (it also had the connotation, to bilk someone). Cpl Barbella was named after middleweight boxing champion Rocky Graziano (whose birth name was Rocco Barbella). Pvt Paparelli was named after the baseball umpire Joe Paparella. According to Silvers, Pvt Doberman was so named because actor Maurice Gosfield resembled a doberman pinscher.
Premise
The series was original |
https://en.wikipedia.org/wiki/Mbone | Mbone (short for "multicast backbone") was an experimental backbone and virtual network built on top of the Internet for carrying IP multicast traffic on the Internet. It was developed in the early 1990s and required specialized hardware and software. Since the operators of most Internet routers have disabled IP multicast due to concerns regarding bandwidth tracking and billing, the Mbone was created to connect multicast-capable networks over the existing Internet infrastructure.
History
Mbone was created by Van Jacobson, Steve Deering and Stephen Casner in 1992 based on a suggestion by Allison Mankin.
On May 23, 1993, Wax or the Discovery of Television Among the Bees was streamed over the Mbone, becoming "the first movie to be transmitted on the Internet."
On June 24, 1993, the band Severe Tire Damage was the first to perform live on the Mbone.
On November 11, 1993, Sky Cries Mary performed on the Mbone from Bellevue, WA, sponsored by Starwave.
On August 23, 1994 the band Deth Specula broadcast the first live concert over the Mbone.
A November 1994 Rolling Stones concert at the Cotton Bowl in Dallas with 50,000 fans was the "first major cyberspace multicast concert." Mick Jagger opened the concert by saying, "I wanna say a special welcome to everyone that's, uh, climbed into the Internet tonight and, uh, has got into the M-bone. And I hope it doesn't all collapse."
A year later the Mbone was used, this time symmetrically (simultaneous transmission and reception, with no hierarchy among participants), for an early experiment in real-time graphical interaction without any central intermediary (the Poietic Generator).
By 1995, there were M-bone links in Russia, as well as at the McMurdo Sound research station in Antarctica. Mbone was predominantly used by research and scientific entities, including NASA.
Mbone was used for shared communication such as video teleconferences or shared collaborative workspaces. It was not generally connected to commercial Internet service providers, but often to universities and research institutions. Some other projects and network testbeds, such as Internet2's Abilene Network, made Mbone obsolete.
A "virtual room video conferencing system" (VRVS) started operation in 1997 using the Mbone, and was in operation through 2008.
A revived mboned (mbone deployment) working group was chartered by the Internet Engineering Task Force in 2014, as a forum to coordinate and document multicast deployment challenges and best practices.
Details
The purpose of Mbone was to minimize the amount of data required for multipoint audio/video-conferencing.
Mbone was free, and it used a network of routers that supported IP multicast, enabling access to real-time interactive multimedia on the Internet. Many older routers did not support IP multicast. To cope with this, tunnels had to be set up on both ends: multicast packets were encapsulated in unicast packets and sent through a tunnel. Mbone uses a sma
https://en.wikipedia.org/wiki/Random-access%20machine | In computer science, random-access machine (RAM) is an abstract machine in the general class of register machines. The RAM is very similar to the counter machine but with the added capability of 'indirect addressing' of its registers. Like the counter machine, The RAM has its instructions in the finite-state portion of the machine (the so-called Harvard architecture).
The RAM's equivalent of the universal Turing machinewith its program in the registers as well as its datais called the random-access stored-program machine or RASP. It is an example of the so-called von Neumann architecture and is closest to the common notion of a computer.
Together with the Turing machine and counter-machine models, the RAM and RASP models are used for computational complexity analysis. Van Emde Boas (1990) calls these three plus the pointer machine "sequential machine" models, to distinguish them from "parallel random-access machine" models.
Introduction to the model
The concept of a random-access machine (RAM) starts with the simplest model of all, the so-called counter machine model. Two additions move it away from the counter machine, however. The first enhances the machine with the convenience of indirect addressing; the second moves the model toward the more conventional accumulator-based computer with the addition of one or more auxiliary (dedicated) registers, the most common of which is called "the accumulator".
Formal definition
A random-access machine (RAM) is an abstract computational-machine model identical to a multiple-register counter machine with the addition of indirect addressing. As directed by the instruction from its finite state machine's TABLE, the machine derives a "target" register's address either (i) directly from the instruction itself, or (ii) indirectly from the contents (e.g. number, label) of the "pointer" register specified in the instruction.
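The two addressing modes can be sketched as follows (the struct layout and the tiny instruction set are invented for illustration, not part of any formal RAM definition):

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of direct vs indirect register addressing in a RAM.
struct Ram {
    std::vector<unsigned long> reg;  // registers, addressed 0..n-1

    explicit Ram(std::size_t n) : reg(n, 0) {}

    // Direct: the instruction itself names the target register r.
    void inc_direct(std::size_t r) { ++reg[r]; }

    // Indirect: the instruction names a pointer register p; the target is
    // the register whose address is the *contents* of p.
    void inc_indirect(std::size_t p) { ++reg[reg[p]]; }
};
```

Indirect addressing is precisely what lets a fixed, finite program reach an unbounded set of registers, which a plain counter machine cannot do.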
By definition: A register is a location with both an address (a unique, distinguishable designation/locator equivalent to a natural number) and a content (a single natural number). For precision we will use the quasi-formal symbolism from Boolos-Burgess-Jeffrey (2002) to specify a register, its contents, and an operation on a register:
[r] means "the contents of register with address r". The label "r" here is a "variable" that can be filled with a natural number or a letter (e.g. "A") or a name.
→ means "copy/deposit into", or "replaces", but without destruction of the source
Example: [3] +1 → 3; means "The contents of source register with address "3", plus 1, is put into destination register with address "3" (here source and destination are the same place). If [3]=37, that is, the contents of register 3 is the number "37", then 37+1 = 38 will be put into register 3.
Example: [3] → 5; means "The contents of source register with address "3" is put into destination register with address "5". If [3]=38, that is, the contents of register 3 is the number 38, then this number will be put |
https://en.wikipedia.org/wiki/Fugaku | is another name for Mount Fuji.
Fugaku may also refer to:
Nakajima G10N Fugaku, a planned Japanese heavy bomber designed during World War II
Fugaku (supercomputer), a Japanese supercomputer
Fugaku Uchiha, a Naruto character
See also
Thirty-six Views of Mount Fuji, the ukiyo-e series created by Hokusai
https://en.wikipedia.org/wiki/LabVIEW | Laboratory Virtual Instrument Engineering Workbench (LabVIEW) is a system-design platform and development environment for a visual programming language from National Instruments.
The graphical language is named "G" (not to be confused with G-code); the G dataflow language originated with LabVIEW. LabVIEW is commonly used for data acquisition, instrument control, and industrial automation on a variety of operating systems (OSs), including macOS and other versions of Unix and Linux, as well as Microsoft Windows.
The latest versions of LabVIEW are LabVIEW 2023 Q1 (released in April 2023) and LabVIEW NXG 5.1 (released in January 2021). NI released the free for non-commercial use LabVIEW and LabVIEW NXG Community editions on April 28, 2020.
Dataflow programming
The programming paradigm used in LabVIEW, sometimes called G, is based on data availability. If there is enough data available to a subVI or function, that subVI or function will execute. The execution flow is determined by the structure of a graphical block diagram (the LabVIEW source code) on which the programmer connects different function-nodes by drawing wires. These wires propagate variables, and any node can execute as soon as all its input data become available. Since this might be the case for multiple nodes simultaneously, LabVIEW can execute inherently in parallel. Multi-processing and multi-threading hardware is exploited automatically by the built-in scheduler, which multiplexes multiple OS threads over the nodes ready for execution.
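The data-availability rule can be imitated in ordinary code. The toy sketch below (the node and wire representation is invented and far simpler than LabVIEW's actual scheduler) fires a node as soon as all of its inputs have arrived and pushes the output along its wires:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Toy single-shot dataflow graph: a node fires once all of its declared
// inputs have arrived, mirroring G's data-availability execution rule.
struct Node {
    std::size_t needed;                                    // inputs expected
    std::vector<double> inputs;                            // inputs arrived
    std::function<double(const std::vector<double>&)> op;  // node body
    std::vector<std::size_t> downstream;                   // wired receivers
};

// Deliver a value to node `id`; fire it (and propagate) once ready.
// Outputs of sink nodes (no downstream wires) are collected in `results`.
void deliver(std::vector<Node>& g, std::size_t id, double v,
             std::vector<double>& results) {
    Node& n = g[id];
    n.inputs.push_back(v);
    if (n.inputs.size() < n.needed) return;  // still waiting on other wires
    double out = n.op(n.inputs);
    if (n.downstream.empty()) results.push_back(out);
    for (std::size_t d : n.downstream) deliver(g, d, out, results);
}
```

In LabVIEW the analogous scheduling is done for the programmer; several ready nodes may also fire in parallel, which this sequential sketch does not model.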
Graphical programming
LabVIEW integrates the creation of user interfaces (termed front panels) into the development cycle. LabVIEW programs and subroutines are termed virtual instruments (VIs). Each VI has three components: a block diagram, a front panel, and a connector pane. The last is used to represent the VI in the block diagrams of other, calling VIs. The front panel is built using controls and indicators. Controls are inputs: they allow a user to supply information to the VI. Indicators are outputs: they indicate, or display, the results based on the inputs given to the VI. The back panel, which is a block diagram, contains the graphical source code. All of the objects placed on the front panel will appear on the back panel as terminals. The back panel also contains structures and functions which perform operations on controls and supply data to indicators. The structures and functions are found on the Functions palette and can be placed on the back panel. Collectively controls, indicators, structures, and functions are referred to as nodes. Nodes are connected using wires, e.g., two controls and an indicator can be wired to the addition function so that the indicator displays the sum of the two controls. Thus a virtual instrument can be run as either a program, with the front panel serving as a user interface, or, when dropped as a node onto the block diagram, the front panel defines the inputs and outputs for the node through t
https://en.wikipedia.org/wiki/Viva | Viva may refer to:
Companies and organisations
Viva (network operator), a Dominican mobile network operator
Viva Air, a Spanish airline taken over by flag carrier Iberia
Viva Air Dominicana
VIVA Bahrain, a telecommunication company
Viva Energy, an Australian petroleum company
Viva Entertainment, a Philippine media company
Viva Films, a Philippine film company
Viva Media, an interactive entertainment company based in New York City
Visi Media Asia (branded as VIVA), a subsidiary of Bakrie Group
Viva Records (Philippines), a Philippine record label
Viva Records (U.S.), subsidiary of Snuff Garrett Records
Viva! (organisation), a British animal rights group, which focuses on promoting veganism
Vision with Values (branded as ViVa), political party in Guatemala
Victoria-Vanuatu Physician Project (branded as ViVa), a Canadian organization that sends doctors to Vanuatu
Voices In Vital America, a Vietnam-era advocacy group
Film
Viva (2007 film), a 2007 film directed by Anna Biller
Viva (2015 film), a 2015 Irish film directed by Paddy Breathnach
Viva Las Vegas, a 1964 film starring Elvis Presley
Magazines
Viva (American magazine), an adult woman's magazine that premiered in 1973
Viva (Canadian magazine), a magazine focusing on holistic medicine that premiered in 2004
Viva (Dutch magazine), a Dutch weekly magazine for women that premiered in 2012
Music
Bands
Viva (band), an Indian pop girl group
Viva, a 1990s British band, part of the Romo movement
Albums
Viva (Bananarama album), 2009
Viva (La Düsseldorf album) or the title song, 1978
Viva! (Roxy Music album), 1976
Viva (Xmal Deutschland album), 1987
Viva!, by Jimsaku, 1992
Songs
"Viva!", a song by Bond
Radio
Viva (Sirius XM), a channel on the Sirius XM Radio network
Viva 963, a former UK radio channel
Television
Channels and networks
Viva (Brazilian TV channel), a Globosat TV channel
Oprah Winfrey Network (Canadian TV channel), formerly known as Viva
Television channels and programming block operated by Viva Communications in the Philippines
Pinoy Box Office, a film channel launched in 1996, formerly known as Viva Cinema until 2003
Viva TV (Philippine TV channel), a general entertainment television channel launched in 2009, formerly known as Viva Cinema until 2012
Viva Cinema, former name of Viva TV, a Philippine channel
VIVA Media, a German music television network
Viva (UK and Irish TV channel), a former music and entertainment channel
VIVA Austria, German music and entertainment channel
VIVA Germany, music network which was available throughout Europe
VIVA Poland, a music and entertainment channel
Television shows
Viva La Bam, an American reality television series that aired on MTV
Viva Variety, an American sketch comedy series that aired on Comedy Central
Transportation
Automobiles
Chevrolet Viva, a Russian subcompact sedan
Perodua Viva, a Malaysian supermini hatchback
Vauxhall Viva, a British compact car
Daewoo Lacetti, a Korean compact car, so |
https://en.wikipedia.org/wiki/DMN | DMN or dmn may refer to:
Science
Default mode network, a network of brain regions
Dorsal motor nucleus, a nerve nucleus for the vagus nerve
Dorsomedial nucleus, a nerve nucleus for the hypothalamus in the brain
Dimethylnitrosamine, a chemical
Other uses
DMN (group), a Brazilian rap group
The Dallas Morning News
Darjah Utama Seri Mahkota Negara (D.M.N.), Malaysian Federal Award (second order of precedence)
Decision Model and Notation, an Object Management Group standard
Dunman Secondary School, a secondary school in Tampines, Singapore
Dynamic Manufacturing Network, a virtual alliance of enterprises who collectively constitute a dispersed manufacturing network
D.Mn., an abbreviation used for the United States District Court for the District of Minnesota
dmn, the ISO 639-5 code for the Mande languages
https://en.wikipedia.org/wiki/Object%20binding | Several object binding times exist in object oriented systems. Java, for example, has late binding leading to more loosely coupled systems (at least for deployment).
Object-oriented programming |
https://en.wikipedia.org/wiki/Security%20Account%20Manager | The Security Account Manager (SAM) is a database file in Windows XP, Windows Vista, Windows 7, 8.1, 10 and 11 that stores users' passwords. It can be used to authenticate local and remote users. Beginning with Windows 2000 SP4, Active Directory authenticates remote users. SAM uses cryptographic measures to prevent unauthenticated users accessing the system.
The user passwords are stored in hashed form in a registry hive, either as an LM hash or as an NTLM hash. The file is located at %SystemRoot%\System32\config\SAM and is mounted on HKLM\SAM; SYSTEM privileges are required to view it.
In an attempt to improve the security of the SAM database against offline software cracking, Microsoft introduced the SYSKEY function in Windows NT 4.0. When SYSKEY is enabled, the on-disk copy of the SAM file is partially encrypted, so that the password hash values for all local accounts stored in the SAM are encrypted with a key (usually also referred to as the "SYSKEY"). It can be enabled by running the syskey program. As of Windows 10 version 1709, syskey was removed, owing both to its insecure cryptography and to its abuse by scammers as a form of ransomware to lock users out of their systems.
Cryptanalysis
In 2012, it was demonstrated that every possible 8-character NTLM password hash permutation can be cracked in under 6 hours.
In 2019, this time was reduced to roughly 2.5 hours by using more modern hardware.
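The back-of-the-envelope arithmetic behind those figures is straightforward. Assuming an alphabet of the 95 printable ASCII characters (the benchmarks cited above may have used a different character set), the keyspace and the implied hash rate are:

```python
# Exhausting every 8-character password over the 95 printable ASCII
# characters (an assumed alphabet, for illustration only).
keyspace = 95 ** 8                    # 6,634,204,312,890,625 candidates
seconds_2012 = 6 * 3600               # "under 6 hours"
rate_2012 = keyspace / seconds_2012   # ~3.1e11 NTLM hashes per second
print(f"{keyspace:,} candidates, ~{rate_2012:.2e} hashes/s needed")
```

The fast, unsalted NTLM hash (MD4 over the UTF-16LE password) is what makes rates of this magnitude feasible on GPU hardware.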
In the case of online attacks, it is not possible to simply copy the SAM file to another location. The SAM file cannot be moved or copied while Windows is running, since the Windows kernel obtains and keeps an exclusive filesystem lock on the SAM file, and will not release that lock until the operating system has shut down or a "Blue Screen of Death" exception has been thrown. However, the in-memory copy of the contents of the SAM can be dumped using various techniques (including pwdump), making the password hashes available for offline brute-force attack.
Removing LM hash
The LM hash is a cryptographically weak hashing algorithm and has been superseded by the NTLM hash. Most versions of Windows can be configured to disable the creation and storage of valid LM hashes when the user changes their password. Windows Vista and later versions of Windows disable the LM hash by default. Note: enabling this setting does not immediately clear the LM hash values from the SAM, but rather enables an additional check during password change operations that will instead store a "dummy" value in the location in the SAM database where the LM hash is otherwise stored. (This dummy value has no relationship to the user's password; it is the same value used for all user accounts.)
Related attacks
In Windows NT 3.51, NT 4.0 and 2000, an attack was devised to bypass the local authentication system. If the SAM file is deleted from the hard drive (e.g. mounting the Windows OS volume into an alternate operating system), the attacker could log in as any account with no password. This flaw was corrected with Windows XP, which sho |
https://en.wikipedia.org/wiki/Experian | Experian is a multinational data analytics and consumer credit reporting company headquartered in Dublin, Ireland. Experian collects and aggregates information on over 1 billion people and businesses including 235 million individual U.S. consumers and more than 25 million U.S. businesses.
The company operates in 37 countries with offices in Brazil, the United Kingdom, and the United States. The company employs approximately 17,000 people and had a reported revenue of US$5.18 billion for the fiscal year ended in March 2020. It is listed on the London Stock Exchange and is a constituent of the FTSE 100 Index. Experian is a partner in USPS address validation. It is one of the "Big Three" credit-reporting agencies, alongside TransUnion and Equifax.
In addition to its credit services, Experian also sells decision analytic and marketing assistance to businesses, including individual fingerprinting and targeting. Its consumer services include online access to credit history and products meant to protect from fraud and identity theft. Like all credit reporting agencies, the company is required by U.S. law to provide consumers with one free credit report every year.
History
The company has its origins in Credit Data Corporation, a business which was acquired by TRW Inc. in 1968, and subsequently renamed TRW Information Systems and Services Inc.
In November 1996, TRW sold the unit, as Experian, to Bain Capital and Thomas H. Lee Partners. Just one month later, the two firms sold Experian to The Great Universal Stores Limited in Manchester, England, a retail conglomerate with millions of customers paying for goods on credit (later renamed GUS). GUS merged its own credit-information business, CCN, which at the time was the largest credit-service company in the UK, into Experian.
In October 2006, Experian was demerged from GUS and listed on the London Stock Exchange.
In August 2005, Experian accepted a settlement with the Federal Trade Commission (FTC) over charges that Experian had violated a previous settlement with the FTC. The FTC alleged that ads for the "free credit report" did not adequately disclose that Experian customers would automatically be enrolled in Experian's $79.95 credit-monitoring program.
In January 2008, Experian announced that it would cut more than 200 jobs at its Nottingham office.
Experian shut down its Canadian operations on 14 April 2009.
In March 2017, the U.S. Consumer Financial Protection Bureau fined Experian $3 million for providing invalid credit scores to consumers.
In October 2017, Experian acquired Clarity Services, a credit bureau specialising in alternative consumer data.
Operations
In the United States, like the other major credit reporting bureaus, Experian is chiefly regulated by the Fair Credit Reporting Act (FCRA). The Fair and Accurate Credit Transactions Act of 2003, signed into law in 2003, amended the FCRA to require the credit reporting companies to provide consumers with one free copy of their credit |
https://en.wikipedia.org/wiki/State%20space%20search | State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the intention of finding a goal state with the desired property.
Problems are often modelled as a state space, a set of states that a problem can be in. The set of states forms a graph where two states are connected if there is an operation that can be performed to transform the first state into the second.
State space search often differs from traditional computer science search methods because the state space is implicit: the typical state space graph is much too large to generate and store in memory. Instead, nodes are generated as they are explored, and typically discarded thereafter. A solution to a combinatorial search instance may consist of the goal state itself, or of a path from some initial state to the goal state.
Representation
In state space search, a state space is formally represented as a tuple ⟨S, A, Action(s), Result(s, a), Cost(s, a)⟩, in which:
S is the set of all possible states;
A is the set of possible actions, not related to a particular state but regarding all the state space;
Action(s) is the function that establishes which actions are possible to perform in state s;
Result(s, a) is the function that returns the state reached by performing action a in state s;
Cost(s, a) is the cost of performing action a in state s. In many state spaces the cost is a constant, but this is not always true.
Examples of state-space search algorithms
Uninformed search
According to Poole and Mackworth, the following are uninformed state-space search methods, meaning that they do not have any prior information about the goal's location.
Traditional depth-first search
Breadth-first search
Iterative deepening
Lowest-cost-first search / Uniform-cost search (UCS)
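An uninformed method such as breadth-first search can be sketched over an implicit state space, generating successors on demand rather than storing the whole graph. The toy arithmetic problem below (reach 11 from 1 using "add 3" and "double") is invented for illustration:

```python
from collections import deque

def bfs(start, is_goal, successors):
    """Breadth-first search over an implicit state space: successor
    states are generated lazily and never enumerated up front."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path          # shortest path in number of operations
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

path = bfs(1, lambda s: s == 11, lambda s: [s + 3, s * 2])
print(path)  # [1, 4, 8, 11]
```

Because nodes are produced by the `successors` function only when expanded, the same skeleton works even when the full graph is far too large to materialize.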
Informed search
These methods take the goal's location in the form of a heuristic function. Poole and Mackworth cite the following examples as informed search algorithms:
Informed/Heuristic depth-first search
Greedy best-first search
A* search
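The informed algorithms above exploit a heuristic h(n) estimating the remaining cost to the goal. A minimal A* sketch follows; the 4x4 grid world and Manhattan-distance heuristic are invented for the example:

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: the frontier is ordered by f(n) = g(n) + h(n), where
    g is the cost incurred so far and h a heuristic estimate."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy 4x4 grid: unit-cost moves between integer points.
def moves(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]

h = lambda p: abs(p[0] - 3) + abs(p[1] - 3)   # admissible Manhattan distance
path, cost = a_star((0, 0), (3, 3), moves, h)
print(cost)  # 6
```

With an admissible heuristic (one that never overestimates), A* is guaranteed to return a minimum-cost path.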
See also
State space
State space planning
Branch and bound - a method for making state-space search more efficient by pruning subsets of it.
References
Stuart J. Russell and Peter Norvig (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.
Search algorithms |
https://en.wikipedia.org/wiki/State%20space%20%28computer%20science%29 | In computer science, a state space is a discrete space representing the set of all possible configurations of a "system". It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory.
For instance, the toy problem Vacuum World has a discrete finite state space in which there are a limited set of configurations that the vacuum and dirt can be in. A "counter" system, where states are the natural numbers starting at 1 and are incremented over time, has an infinite discrete state space. The angular position of an undamped pendulum is a continuous (and therefore infinite) state space.
Definition
State spaces are useful in computer science as a simple model of machines. Formally, a state space can be defined as a tuple [N, A, S, G] where:
N is a set of states
A is a set of arcs connecting the states
S is a nonempty subset of N that contains start states
G is a nonempty subset of N that contains the goal states.
Properties
A state space has some common properties:
complexity, where branching factor is important
structure of the space, see also graph theory:
directionality of arcs
tree
rooted graph
For example, the Vacuum World has a branching factor of 4, as the vacuum cleaner can end up in 1 of 4 adjacent squares after moving (assuming it cannot stay in the same square nor move diagonally). The arcs of Vacuum World are bidirectional, since any square can be reached from any adjacent square, and the state space is not a tree since it is possible to enter a loop by moving between any 4 adjacent squares.
State spaces can be either infinite or finite, and discrete or continuous.
Size
The size of the state space for a given system is the number of possible configurations of the space.
Finite
If the size of the state space is finite, calculating the size of the state space is a combinatorial problem. For example, in the Eight queens puzzle, the state space can be calculated by counting all possible ways to place 8 pieces on an 8x8 chessboard. This is the same as choosing 8 positions without replacement from a set of 64, or C(64, 8) = 4,426,165,368.
This is significantly greater than the number of legal configurations of the queens, 92. In many games the effective state space is small compared to all reachable/legal states. This property is also observed in Chess, where the effective state space is the set of positions that can be reached by game-legal moves. This is far smaller than the set of positions that can be achieved by placing combinations of the available chess pieces directly on the board.
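The binomial count above can be checked directly with Python's standard-library `math.comb`:

```python
import math

# Ways to place 8 indistinguishable queens on 64 squares:
placements = math.comb(64, 8)
print(f"{placements:,}")  # 4,426,165,368

# Compare with the 92 solutions of the Eight queens puzzle: only
# about one placement in 48 million is a legal configuration.
print(placements // 92)
```

The gap between raw placements and the 92 legal solutions is the finite-state analogue of the chess observation in the text.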
Infinite
All continuous state spaces can be described by a corresponding continuous function and are therefore infinite. Discrete state spaces can also have (countably) infinite size, such as the state space of the time-dependent "counter" system, similar to the system in queueing theory defining the number of customers in a line, which would have state space {0, 1, 2, 3, ...}.
|
https://en.wikipedia.org/wiki/The%20Art%20of%20Unix%20Programming | The Art of Unix Programming by Eric S. Raymond is a book about the history and culture of Unix programming from its earliest days in 1969 to 2003 when it was published, covering both genetic derivations such as BSD and conceptual ones such as Linux.
The author utilizes a comparative approach to explaining Unix by contrasting it to other operating systems including desktop-oriented ones such as Microsoft Windows and the classic Mac OS to ones with research roots such as EROS and Plan 9 from Bell Labs.
The book was published by Addison-Wesley, September 17, 2003, and is also available online, under a Creative Commons license with additional clauses.
Contributors
The book contains many contributions, quotations and comments from UNIX gurus past and present. These include:
Ken Arnold (author of curses and co-author of Rogue)
Steve Bellovin
Stuart Feldman
Jim Gettys
Stephen C. Johnson
Brian Kernighan
David Korn
Mike Lesk
Doug McIlroy
Marshall Kirk McKusick
Keith Packard
Henry Spencer
Ken Thompson
See also
Unix philosophy
The Hacker Ethic and the Spirit of the Information Age
References
External links
Online book (HTML edition)
The Art of Unix Programming at FAQs
2003 non-fiction books
Books by Eric S. Raymond
Computer programming books
Creative Commons-licensed books
Unix books |
https://en.wikipedia.org/wiki/Soft%20heap | In computer science, a soft heap is a variant on the simple heap data structure that has constant amortized time complexity for 5 types of operations. This is achieved by carefully "corrupting" (increasing) the keys of at most a constant number of values in the heap.
Definition and performance
The constant time operations are:
create(S): Create a new soft heap
insert(S, x): Insert an element into a soft heap
meld(S, S' ): Combine the contents of two soft heaps into one, destroying both
delete(S, x): Delete an element from a soft heap
findmin(S): Get the element with minimum key in the soft heap
Other heaps such as Fibonacci heaps achieve most of these bounds without any corruption, but cannot provide a constant-time bound on the critical delete operation. The amount of corruption can be controlled by the choice of a parameter ε, but the lower this is set, the more time insertions require: expressed using Big-O notation, the amortized time will be O(log 1/ε) for an error rate of ε. Some versions of soft heaps allow the create, insert, and meld operations to take constant time in the worst case, producing amortized rather than worst-case performance only for findmin and delete. As with comparison sort, these algorithms access the keys only by comparisons; if arithmetic operations on integer keys are allowed, the time dependence on ε can be reduced further.
More precisely, the error guarantee offered by the soft heap is the following: each soft heap is initialized with a parameter ε, chosen between 0 and 1/2. Then at any point in time it will contain at most εn corrupted keys, where n is the number of elements inserted so far. Note that this does not guarantee that only a fixed percentage of the keys currently in the heap are corrupted: in an unlucky sequence of insertions and deletions, it can happen that all elements in the heap will have corrupted keys. Similarly, there is no guarantee that in a sequence of elements extracted from the heap with findmin and delete, only a fixed percentage will have corrupted keys: in an unlucky scenario only corrupted elements are extracted from the heap. When a key is corrupted, the value stored for it in the soft heap is higher than its initially given value; corruption can never decrease the value of any key. The findmin operation finds the minimum value among the currently stored keys, including the corrupted ones.
The soft heap was designed by Bernard Chazelle in 2000. The term "corruption" in the structure is the result of what Chazelle called "carpooling" in a soft heap. Each node in the soft heap contains a linked list of keys and one common key. The common key is an upper bound on the values of the keys in the linked list. Once a key is added to the linked list, it is considered corrupted because its value is never again relevant in any of the soft heap operations: only the common keys are compared. This is what makes soft heaps "soft"; one cannot be sure whether any particular value put into |
https://en.wikipedia.org/wiki/List%20of%20CBC%20Television%20stations | CBC Television is a Canadian English language public television network made up of fourteen owned-and-operated stations. Some privately owned stations were formerly affiliated with the network until as late as August 2016. This is a table listing of CBC Television's stations, arranged by market. This article also includes former self-supporting stations later operating as rebroadcasters of regional affiliates, stations no longer affiliated with CBC Television, and stations purchased by the CBC that formerly operated as private CBC Television affiliates.
The station's virtual channel number (if applicable) follows the call letters. The number in parentheses that follows is the station's actual digital channel number; digital channels allocated for future use listed in parentheses are italicized.
CBC Television's O&Os operate for the most part as a seamless national service, with few deviations from the national schedule. The network's former private affiliates had some flexibility to carry a reduced schedule of network programming if they chose.
Over the years the CBC has gradually reduced the number of private affiliates; until 2006 this usually involved either opening a new station (or new rebroadcast transmitters) in a market previously served by a private affiliate, or purchasing the affiliate outright. In most cases since 2006 (when CFJC-TV disaffiliated), it declined to open new rebroadcasters in the affected markets for budgetary reasons, and since then has wound down its remaining affiliation agreements, with the last expiring on August 31, 2016. These disaffiliations, along with the CBC's decision to shut down its TV rebroadcaster network in 2012, have significantly reduced the network's terrestrial coverage; however, under Canadian Radio-television and Telecommunications Commission (CRTC) regulations, all cable, satellite, and IPTV service providers are required to include a CBC Television signal in their basic service, even if one is not available terrestrially in the applicable service area.
Since September 2014, all CBC Television O&Os have also been separately affiliated with the temporary and (since April 2015) permanent part-time networks operated by Rogers Media for the purposes of distributing the Rogers-produced Hockey Night in Canada broadcasts. This was required by the CRTC as, under the current arrangement for National Hockey League rights between Rogers and the CBC, Rogers exercises editorial control and sells all advertising time during the HNIC broadcasts, even though for promotional purposes they are still treated as part of the CBC Television schedule. Although the CRTC decision which approved the permanent network only referred to stations owned by the CBC itself, CBC Television's private affiliates also continued to carry the Rogers-produced HNIC broadcasts until disaffiliation.
See also
List of Ici Radio-Canada Télé television stations for stations affiliated with or owned by the CBC's French-language television |
https://en.wikipedia.org/wiki/List%20of%20Ici%20Radio-Canada%20T%C3%A9l%C3%A9%20stations | Ici Radio-Canada Télé operates as a Canadian French language television network owned by the Canadian Broadcasting Corporation (known in French as Société Radio-Canada) made up of thirteen owned-and-operated stations and seven private affiliates. This is a table listing of Radio-Canada affiliates, with stations owned by Radio-Canada separated from privately owned affiliates, and arranged by market. This article also includes former self-supporting stations currently operating as rebroadcasters of regional affiliates, stations no longer affiliated with Télévision de Radio-Canada and stations purchased by the CBC that formerly operated as private Radio-Canada affiliates.
The station's advertised channel number follows the call letters; in most cases, this is their over-the-air broadcast frequency. The number in parentheses which follows a virtual channel number is the station's actual digital channel number, digital channels allocated for future use listed in parentheses are italicized.
Note:
Two boldface asterisks appearing following a station's call letters (**) indicate a Radio-Canada station that was built and signed-on by the Canadian Broadcasting Corporation.
Ici Radio-Canada Télé owned-and-operated stations
Former Radio-Canada-owned self-supporting stations
Former affiliates
Notes:
1 ) Also affiliated with the English CBC network, 1959-1968;
2 ) Also affiliated with the English CBC network, 1954-1957;
3 ) Also affiliated with the English CBC network, 1957-1962;
4 ) Affiliated with both CBC and Radio-Canada, 1956-1974; now TVA affiliate.
5 ) Affiliated with both CBC and Radio-Canada, 1956 until CBFOT (now CBLFT-3) established, which rebroadcasts CBLFT Toronto;
Affiliates later purchased by Radio-Canada
See also
List of CBC television stations for stations affiliated with or owned by the CBC's English-language television network CBC Television
List of assets owned by Canadian Broadcasting Corporation
List of defunct CBC and Radio-Canada television transmitters - decommissioned on July 31, 2012
References |
https://en.wikipedia.org/wiki/James%20van%20Hoften | James Dougal Adrianus "Ox" van Hoften (born June 11, 1944 ) is an American civil and hydraulic engineer, retired U.S. Navy officer and aviator, and a former astronaut for NASA.
Personal data
Van Hoften was born June 11, 1944, in Fresno, California. He was active in the Boy Scouts of America where he achieved its second-highest rank, Life Scout. He considers Burlingame, California, to be his hometown. He is of Dutch descent. Van Hoften is married to the former Vallarie Davis of Pasadena, with three children: Jennifer Lyn (born October 31, 1971), Jamie Juliana (born August 24, 1977), and Victoria Jane (born March 17, 1981). He enjoys skiing, playing handball and racquetball, and jogging. In college, he was a member of the Alpha Sigma chapter of Pi Kappa Alpha.
Education
Graduated from Mills High School, Millbrae, California, in 1962; received a Bachelor of Science degree in Civil Engineering from the University of California, Berkeley, in 1966; and Master of Science and Doctor of Philosophy degrees in Hydraulic Engineering from Colorado State University in 1968 and 1976, respectively.
Flight experience
From 1969 to 1974, Van Hoften was a pilot in the United States Navy. He received flight training at Pensacola, Florida, and completed jet pilot training at Beeville, Texas, in November 1970. He was then assigned to the Naval Air Station, Miramar, California, to fly F-4 Phantoms, and subsequently to VF-121 Replacement Air Group. As a pilot with VF-154 assigned to the aircraft carrier USS Ranger in 1972, Van Hoften participated in two cruises to Southeast Asia where he flew approximately 60 combat missions during the Vietnam War. He resumed his academic studies in 1974, and completed a dissertation on the interaction of waves and turbulent channel flow for his doctorate. In September 1976, he accepted an assistant professorship of Civil Engineering at the University of Houston, and until his selection as an astronaut candidate, taught fluid mechanics and conducted research on biomedical fluid flows concerning flows in artificial internal organs and valves. Dr. Van Hoften has published a number of papers on turbulence, waves, and cardiovascular flows. From 1977 until 1980 he flew F-4N's with Naval Reserve Fighter Squadron 201 at NAS Dallas and then three years as a member of the Texas Air National Guard with the 147th Fighter Interceptor Group at Ellington Field as a pilot in the F-4C.
He has logged 3,300 hours flying time, the majority in jet aircraft.
NASA career
Dr. Van Hoften was selected as an astronaut candidate by NASA in January 1978. He completed a 1-year training and evaluation period in August 1979.
From 1979 through the first flight, STS-1, Van Hoften supported the Space Shuttle entry and on-orbit guidance, navigation and flight control testing at the Flight Systems Laboratory at Downey, California. Subsequently, he was lead of the Astronaut Support Team at Kennedy Space Center, Florida, responsible for the Space Shuttle turn-around t |
https://en.wikipedia.org/wiki/1-Wire | 1-Wire is a wired half duplex serial bus designed by Dallas Semiconductor that provides low-speed (16.3 kbit/s) data communication and supply voltage over a single conductor.
1-Wire is similar in concept to I²C, but with lower data rates and longer range. It is typically used to communicate with small inexpensive devices such as digital thermometers and weather instruments. A network of 1-Wire devices with an associated master device is called a MicroLAN. The protocol is also used in small electronic keys known as a Dallas key or iButton.
One distinctive feature of the bus is the possibility of using only two conductors — data and ground. To accomplish this, 1-Wire devices integrate a small capacitor (~800pF) to store charge, which powers the device during periods when the data line is active.
Usage example
1-Wire devices are available in different packages: integrated circuits, a TO-92-style package (as typically used for transistors), and a portable form called an iButton or Dallas key, which is a small stainless-steel package that resembles a watch battery. Manufacturers also produce devices more complex than a single component that use the 1-Wire bus to communicate.
1-Wire devices can fit in different places in a system. It might be one of many components on a circuit board within a product. It also might be a single component within a device such as a temperature probe. It could be attached to a device being monitored. Some laboratory systems connect to 1-Wire devices using cables with modular connectors or CAT-5 cable. In such systems, RJ11 connectors (6P2C or 6P4C modular plugs, commonly used for telephones) are popular.
Systems of sensors and actuators can be built by wiring together many 1-Wire components. Each 1-Wire component contains all of the logic needed to operate on the 1-Wire bus. Examples include temperature loggers, timers, voltage and current sensors, battery monitors, and memory. These can be connected to a PC using a bus converter. USB, RS-232 serial, and parallel port interfaces are popular solutions for connecting a MicroLan to the host PC. 1-Wire devices can also be interfaced directly to microcontrollers from various vendors.
iButtons are connected to 1-Wire bus systems by means of sockets with contacts that touch the "lid" and "base" of the canister. Alternatively, the connection can be semi-permanent with a socket into which the iButton clips, but from which it is easily removed.
Each 1-Wire chip has a unique identifier code. This feature makes the chips, especially iButtons, suitable as electronic keys. Some uses include locks, burglar alarms, computer systems, manufacturer-approved accessories, time clocks, and courier and maintenance keys for smart safes. iButtons have been used as Akbil smart tickets for public transport in Istanbul.
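The unique 64-bit ROM identifier ends in a CRC-8 check byte computed with the Dallas/Maxim polynomial x^8 + x^5 + x^4 + 1, which lets a bus master validate an ID it has just read. A sketch of that checksum (the sample ROM bytes below are hypothetical):

```python
def crc8_maxim(data: bytes) -> int:
    """CRC-8 as used in 1-Wire ROM IDs: polynomial x^8 + x^5 + x^4 + 1
    (0x8C in bit-reversed form), initial value 0, LSB-first."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8C if crc & 1 else crc >> 1
    return crc

# Appending the CRC byte to a message drives the checksum to zero,
# which is how the master verifies a 64-bit ROM ID after reading it.
rom = bytes([0x28, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06])  # hypothetical ID
assert crc8_maxim(rom + bytes([crc8_maxim(rom)])) == 0
```

The same CRC also protects scratchpad reads on devices such as 1-Wire temperature sensors.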
Power supplies
Apple MagSafe and MagSafe 2 connector-equipped power supplies, displays, and Mac laptops use the 1-Wire protocol to send and receive data to and from the connected Mac laptop, via the middle pin of the connector. D |
https://en.wikipedia.org/wiki/Watts%20Humphrey | Watts S. Humphrey (July 4, 1927 – October 28, 2010) was an American pioneer in software engineering who was called the "father of software quality."
Biography
Watts Humphrey (whose grandfather and father also had the same name) was born in Battle Creek, Michigan on July 4, 1927.
His uncle was US Secretary of the Treasury George M. Humphrey.
In 1944, he graduated from high school and served in the United States Navy.
Despite dyslexia, he received a bachelor of science in physics from the University of Chicago, a master of science in physics from the Illinois Institute of Technology, and a master of business administration from the University of Chicago Graduate School of Business.
In 1953 he went to Boston and worked at Sylvania Labs.
In 1959 he joined IBM.
In the late 1960s, Humphrey headed the IBM software team that introduced the first software license. Humphrey was a vice president at IBM.
In the 1980s at the Software Engineering Institute (SEI) at Carnegie Mellon University Humphrey founded the Software Process Program, and served as director of that program from 1986 until the early 1990s. This program was aimed at understanding and managing the software engineering process because this is where big and small organizations or individuals encounter the most serious difficulties and where, thereafter, lies the best opportunity for significant improvement.
The program resulted in the development of the Capability Maturity Model, published in 1989 in Humphrey's "Managing the Software Process" and inspired the later development of the personal software process (PSP) and the team software process (TSP) concepts.
Humphrey received an honorary doctor of software engineering from the Embry-Riddle Aeronautical University in 1998.
The Watts Humphrey Software Quality Institute in Chennai, India was named after him in 2000.
In 2003, Humphrey was awarded the National Medal of Technology.
Humphrey became a fellow of the SEI and of the Association for Computing Machinery in 2008.
See also
Personal software process (PSP)
Software quality
Team software process (TSP)
Publications
Humphrey is the author of several books, including
2011. Leadership, Teamwork, and Trust: Building a Competitive Software Capability. Addison-Wesley, Reading, MA.
2010. Reflections on Management: How to Manage Your Software Projects, Your Teams, Your Boss, and Yourself. Addison-Wesley, Reading, MA.
2006. TSP, Coaching Development Teams. Addison-Wesley, Reading, MA.
2006. TSP, Leading a Development Team. Addison-Wesley, Reading, MA.
2005. PSP, A Self-Improvement Process for Software Engineers. Addison-Wesley, Reading, MA.
2001. Winning with Software: An Executive Strategy. Addison-Wesley, Reading, MA.
1999. Introduction to the Team Software Process. Addison-Wesley, Reading, MA.
1997. Introduction to the Personal Software Process. Addison-Wesley, Reading, MA.
1996. Managing Technical People - Innovation, Teamwork and Software Process. Addison-Wes |
https://en.wikipedia.org/wiki/Apple%20Filing%20Protocol | The Apple Filing Protocol (AFP), formerly AppleTalk Filing Protocol, is a proprietary network protocol, and part of the Apple File Service (AFS), that offers file services for macOS, classic Mac OS, and Apple II computers. In OS X 10.8 Mountain Lion and earlier, AFP was the primary protocol for file services. Starting with OS X 10.9 Mavericks, Server Message Block (SMB) was made the primary file sharing protocol, with the ability to run an AFP server removed later in macOS 11 Big Sur. AFP supports Unicode file names, POSIX and access-control list permissions, resource forks, named extended attributes, and advanced file locking.
Compatibility
AFP versions 3.0 and greater rely exclusively on TCP/IP (port 548) for establishing communication, supporting AppleTalk only as a service discovery protocol. The AFP 2.x family supports both TCP/IP (using Data Stream Interface) and AppleTalk for communication and service discovery. Many third-party AFP implementations use AFP 2.x, thereby supporting AppleTalk as a connection method. Still earlier versions rely exclusively on AppleTalk. For this reason, some older literature refers to AFP as "AppleTalk Filing Protocol". Other literature may refer to AFP as "AppleShare", the name of the Mac OS 9 (and earlier) AFP client.
Notable current compatibility topics are:
Mac OS X v10.4 and later eliminates support for AFP servers that rely solely on AppleTalk for communication.
Computers using classic Mac OS can connect to AFP 3.x servers, with some limitations. For example, the maximum file size in Mac OS 8 is 2 gigabytes. Typically, Mac OS 9.1 or later is recommended for connecting to AFP 3.x servers; for versions of original Mac OS prior to 9.1, installation of the AppleShare client 3.8.8 is required.
AFP 3.0 and later is required for network home directories, since Mac OS X requires POSIX permissions on user home directories. Single sign-on using Kerberos requires AFP 3.1.
APFS: AFP is incompatible with sharing of APFS volumes but is still usable as a Time Machine destination in High Sierra.
History
Early implementations of AFP server software were available in Mac OS starting with System 6, in AppleShare and AppleShare IP, and in early "1.x" releases of Mac OS X Server. In client operating systems, AFP was called "Personal File Sharing", and supported up to ten simultaneous connections. These AFP implementations relied on version 1.x or 2.x of the protocol. AppleShare IP 5.x, 6.x, and the "1.x" releases of Mac OS X Server introduced AFP version 2.2. This was the first version to offer transport connections using TCP/IP as well as AppleTalk. It also increased the maximum share point size from four gigabytes to two terabytes, although the maximum file size that could be stored remained at two gigabytes due to limitations in the original Mac OS.
Changes made in AFP since version 3.0 represent major advances in the protocol, introducing features designed specifically for Mac OS X clients.
However, like the A |
https://en.wikipedia.org/wiki/MkLinux | MkLinux (for Microkernel Linux) is an open-source operating system begun by the Open Software Foundation Research Institute and Apple Computer in February 1996 to port Linux to the PowerPC platform and Macintosh computers. The name refers to the Linux kernel being adapted to run as a server hosted on the Mach microkernel, version 3.0.
History
MkLinux started as a project sponsored by Apple Computer and OSF Research Institute, to get "Linux on Mach" ported to the Macintosh computer and for Apple to explore alternative kernel technologies on the Mac platform. At the time, there was no officially sponsored PowerPC port of Linux, and none specifically for Macintosh hardware. The OSF Institute, owner of the Mach microkernel and several other Unix-based technologies, was interested in promoting Mach on other platforms. Unlike the design of the later macOS versions 10 and newer (not to be confused with the contemporaneous Mac OS versions 9 and older), MkLinux was designed to take full advantage of the Mach microkernel.
The effort was spearheaded by Apple's VP of Development Tools Ike Nassi and Brett Halle at Apple, and development was later split between two main people: Michael Burg on device drivers and distribution at Apple in Cupertino, California; and Nick Stephen on Mach porting and development at the OSF in Grenoble, France. Other key individuals to work on the project included François Barbou at OSF, and Vicki Brown and Gilbert Coville at Apple.
MkLinux was officially announced at the 1996 World Wide Developers Conference (WWDC). A free CD containing a binary distribution of MkLinux was handed out to the attendees.
In mid 1998, the community-led MkLinux Developers Association took over development of the operating system.
The MkLinux distribution was much too large for casual users to download over the slow dial-up Internet access of the day, even using 56k modems. However, the official CDs were available in a book from Prime Time Freeware, published in English and in Japanese. The book covered installation, management, and use of the OS, and served as a hardcopy manual.
After Apple released the Open Firmware-based Power Macintosh computers, an official PowerPC branch of the Linux kernel was created, spearheaded by the LinuxPPC project. MkLinux and LinuxPPC developers traded many ideas back and forth as both worked on their own ways of running Linux. Debian also released a traditional monolithic-kernel distribution for PowerPC, as did SUSE, and Terra Soft Solutions with Yellow Dog Linux.
When Apple dropped support for MkLinux, the developer community struggled to improve the Mach kernel, and to support various Power Macintosh models. MkLinux continued to be the only option for Macintosh NuBus computers until June 2000, when PPC/Linux for NuBus Power Macs was released.
Reception
MacTech magazine observed this of the general state of Linux on Macintosh in 1999: "Seen as a Windows NT or commercial Unix kil |
https://en.wikipedia.org/wiki/Common%20subexpression%20elimination | In compiler theory, common subexpression elimination (CSE) is a compiler optimization that searches for instances of identical expressions (i.e., they all evaluate to the same value), and analyzes whether it is worthwhile replacing them with a single variable holding the computed value.
Example
In the following code:
a = b * c + g;
d = b * c * e;
it may be worth transforming the code to:
tmp = b * c;
a = tmp + g;
d = tmp * e;
if the cost of storing and retrieving tmp is less than the cost of calculating b * c an extra time.
Principle
The opportunity to perform CSE is identified by available expression analysis (a data flow analysis). An expression b*c is available at a point p in a program if:
every path from the initial node to p evaluates b*c before reaching p,
and there are no assignments to b or c after the evaluation but before p.
The cost/benefit analysis performed by an optimizer will calculate whether the cost of the store to tmp is less than the cost of the multiplication; in practice other factors such as which values are held in which registers are also significant.
Compiler writers distinguish two kinds of CSE:
local common subexpression elimination works within a single basic block
global common subexpression elimination works on an entire procedure.
Both kinds rely on data flow analysis of which expressions are available at which points in a program.
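To make the mechanism concrete, local CSE within a single basic block can be sketched in a few lines. The three-address statement format, the "copy" pseudo-op, and the variable names below are invented for this illustration; real compilers operate on their own intermediate representations.

```python
def local_cse(block):
    """Eliminate duplicate expressions within one basic block.

    Each statement is (dest, op, arg1, arg2). Purely illustrative.
    """
    available = {}  # (op, arg1, arg2) -> variable currently holding that value
    out = []
    for dest, op, a1, a2 in block:
        key = (op, a1, a2)
        if key in available:
            # The expression was already computed: reuse the stored value.
            out.append((dest, "copy", available[key], None))
        else:
            out.append((dest, op, a1, a2))
        # dest now holds a new value, so kill every expression that used
        # the old dest as an operand or as the cached result.
        available = {k: v for k, v in available.items()
                     if dest not in (k[1], k[2]) and v != dest}
        if dest not in (a1, a2):
            available[key] = dest
    return out

# The example from above: b*c appears twice but is computed only once.
block = [
    ("t1", "*", "b", "c"),   # a = b * c + g
    ("a", "+", "t1", "g"),
    ("t2", "*", "b", "c"),   # d = b * c * e
    ("d", "*", "t2", "e"),
]
print(local_cse(block))
```

The second b * c becomes a copy from t1, which a later copy-propagation pass would typically remove entirely.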
Benefits
The benefits of performing CSE are great enough that it is a commonly used optimization.
In simple cases like in the example above, programmers may manually eliminate the duplicate expressions while writing the code. The greatest source of CSEs are intermediate code sequences generated by the compiler, such as for array indexing calculations, where it is not possible for the developer to manually intervene. In some cases language features may create many duplicate expressions. For instance, C macros, where macro expansions may result in common subexpressions not apparent in the original source code.
Compilers need to be judicious about the number of temporaries created to hold values. An excessive number of temporary values creates register pressure possibly resulting in spilling registers to memory, which may take longer than simply recomputing an arithmetic result when it is needed.
See also
Global value numbering
Loop-invariant code motion
References
Steven S. Muchnick, Advanced Compiler Design and Implementation (Morgan Kaufmann, 1997) pp. 378–396
John Cocke. "Global Common Subexpression Elimination." Proceedings of a Symposium on Compiler Construction, ACM SIGPLAN Notices 5(7), July 1970, pages 850-856.
Briggs, Preston, Cooper, Keith D., and Simpson, L. Taylor. "Value Numbering." Software-Practice and Experience, 27(6), June 1997, pages 701-724.
Compiler optimizations |
https://en.wikipedia.org/wiki/PIPE%20Networks | PIPE Networks (also known as PIPE) is an Australian telecommunications company, based in Brisbane, Queensland. It is a subsidiary of TPG Telecom. Its primary business is setting up peering exchanges. PIPE itself stands for "Public Internet Peering Exchange". The company also provides services such as co-location, telehousing, and fibre networks.
PIPE listed on the then Australian Stock Exchange on 17 May 2005 as PIPE Networks Limited with a stock code of: PWK.
Australian ISPs which use PIPE's metropolitan fibre networks include Eftel, iiNet, Internode, Netspace and iPrimus amongst others.
In March 2010, shareholders accepted a takeover offer from TPG Telecom Limited. The company was noted for having recently increased its revenues, in contrast to the general trend in its industry.
Peering exchanges
PIPE currently runs six metropolitan exchange networks.
PIPE International
In January 2008, PIPE Networks announced it would be constructing a $200 million international link, known as PPC-1 (Pipe Pacific Cable), from Sydney to Guam, with a branch connecting Madang in Papua New Guinea. It is operated by a newly formed PIPE subsidiary, PIPE International.
In April 2008, PIPE Networks entered into a joint venture with New Zealand-based Kordia to build an undersea fibre optic cable between New Zealand and Australia. This cable will be known as PPC-2.
Takeover offer
In March 2010, shareholders voted to accept a $373 million takeover offer by TPG Telecom Ltd. for $6.30 per share (TPG Annual Report 2010, p48). The takeover was subject to approval by the Queensland Supreme Court. Shares of TPG rose 11 per cent after the news was released.
See also
Internet in Australia
List of Internet exchange points
Pipe Pacific Cable
References
External links
PIPE Networks
PIPE International
Media presentation for PPC-1
PIPE resources in South Australia/NT
Companies formerly listed on the Australian Securities Exchange
Internet exchange points in Australia
Companies based in Brisbane
Australian companies established in 2001 |
https://en.wikipedia.org/wiki/Experimental%20economics | Experimental economics is the application of experimental methods to study economic questions. Data collected in experiments are used to estimate effect size, test the validity of economic theories, and illuminate market mechanisms. Economic experiments usually use cash to motivate subjects, in order to mimic real-world incentives. Experiments are used to help understand how and why markets and other exchange systems function as they do. Experimental economics has also expanded to understand institutions and the law (experimental law and economics).
A fundamental aspect of the subject is design of experiments. Experiments may be conducted in the field or in laboratory settings, whether of individual or group behavior.
Variants of the subject outside such formal confines include natural and quasi-natural experiments.
Experimental topics
One can loosely classify economic experiments using the following topics:
Markets
Games
Evolutionary game theory
Decision making
Bargaining
Contracts
Auctions
Coordination
Social preferences
Learning
Matching
Field experiments
Within economics education, one application involves experiments used in the teaching of economics. An alternative approach with experimental dimensions is agent-based computational modeling. It is important to consider the potential and constraints of games for understanding rational behavior and solving human conflict.
Coordination games
Coordination games are games with multiple pure strategy Nash equilibria. There are two general sets of questions that experimental economists typically ask when examining such games: (1) Can laboratory subjects coordinate, or learn to coordinate, on one of multiple equilibria, and if so are there general principles that can help predict which equilibrium is likely to be chosen? (2) Can laboratory subjects coordinate, or learn to coordinate, on the Pareto best equilibrium and if not, are there conditions or mechanisms which would help subjects coordinate on the Pareto best equilibrium? Deductive selection principles are those that allow predictions based on the properties of the game alone. Inductive selection principles are those that allow predictions based on characterizations of dynamics. Under some conditions at least groups of experimental subjects can coordinate even complex non-obvious asymmetric Pareto-best equilibria. This is even though all subjects decide simultaneously and independently without communication. The way by which this happens is not yet fully understood.
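A minimal numeric illustration of such a game, with hypothetical payoffs in the spirit of the stag hunt: both (A, A) and (B, B) below are pure-strategy Nash equilibria, but (A, A) Pareto-dominates (B, B), which is exactly the selection problem these experiments probe.

```python
# (row_action, col_action) -> (row_payoff, col_payoff); values invented.
payoffs = {
    ("A", "A"): (4, 4),  # Pareto-best equilibrium
    ("A", "B"): (0, 3),
    ("B", "A"): (3, 0),
    ("B", "B"): (2, 2),  # "safe" but Pareto-inferior equilibrium
}

def is_pure_nash(r, c):
    # Neither player can gain by unilaterally deviating.
    row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in "AB")
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in "AB")
    return row_best and col_best

equilibria = [(r, c) for r in "AB" for c in "AB" if is_pure_nash(r, c)]
print(equilibria)  # two pure-strategy equilibria, one Pareto-better
```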
Learning experiments
Economic theories often assume that economic incentives can shape behavior even when individual agents have limited understanding of the environment. The relationship between economic incentives and outcomes may be indirect: The economic incentives determine the agents’ experience, and these experiences may then drive future actions.
Learning experiments can be classified as individual choice tasks or games, where games typically refer to |
https://en.wikipedia.org/wiki/Macintosh%20128K | The Apple Macintosh—later rebranded as the Macintosh 128K—is the original Apple Macintosh personal computer. It played a pivotal role in establishing desktop publishing as a general office function. The motherboard, a CRT monitor, and a floppy drive were housed in a beige case with integrated carrying handle; it came with a keyboard and single-button mouse. It sold for . The Macintosh was introduced by a television commercial entitled "1984" shown during Super Bowl XVIII on January 22, 1984 and directed by Ridley Scott. Sales of the Macintosh were strong at its initial release on January 24, 1984, and reached 70,000 units on May 3, 1984. Upon the release of its successor, the Macintosh 512K, it was rebranded as the Macintosh 128K. The computer's model number was M0001.
Development
1978–1984: Development
In 1978, Apple began to organize the Apple Lisa project, aiming to build a next-generation machine similar to an advanced Apple II or the yet-to-be-introduced IBM PC. In 1979, Apple co-founder Steve Jobs learned of the advanced work on graphical user interfaces (GUI) taking place at Xerox PARC. He arranged for Apple engineers to be allowed to visit PARC to see the systems in action. The Apple Lisa project was immediately redirected to use a GUI, which at that time was well beyond the state of the art for microprocessor abilities; the Xerox Alto required a custom processor that spanned several circuit boards in a case which was the size of a small refrigerator. Things had changed dramatically with the introduction of the 16/32-bit Motorola 68k in 1979, which offered at least an order of magnitude better performance than existing designs and made a software GUI machine a practical possibility. The basic layout of the Lisa was largely complete by 1982, at which point Jobs's continual suggestions for improvements led to him being kicked off the project.
At the same time that the Lisa was becoming a GUI machine in 1979, Jef Raskin began the Macintosh project. The design at that time was for a low-cost, easy-to-use machine for the average consumer. Instead of a GUI, it was intended to use a text-based user interface that allowed multitasking, and special command keys on the keyboard that accessed standardized commands in the programs. Bud Tribble, a member of the Macintosh team, asked Burrell Smith to integrate the Apple Lisa's 68k microprocessor into the Macintosh so that it could run graphical programs. By December 1980, Smith had succeeded in designing a board that integrated an 8 MHz Motorola 68k. Smith's design used less RAM than the Lisa, which made producing the board significantly more cost-efficient. The final Mac design was self-contained and had the complete QuickDraw picture language and interpreter in 64 KB of ROM – far more than most other computers, which typically had around 4 to 8 KB of ROM; it had 128 kB of RAM, in the form of sixteen 64-kilobit (kb) RAM chips soldered to the logic board. The final product's screen was a , 512x342 pixel mono |
https://en.wikipedia.org/wiki/Macintosh%20512K | The Macintosh 512K is a personal computer that was designed, manufactured and sold by Apple Computer from September 1984 to April 1986. It is the first update to the original Macintosh 128K. It was virtually identical to the previous Macintosh, differing primarily in the amount of built-in random-access memory. The increased memory turned the Macintosh into a more business-capable computer and gained the ability to run more software. It is the earliest Macintosh model that can be used as an AppleShare server and, with a bridge Mac, communicate with modern devices.
The Mac 512K originally shipped with Macintosh System 1.1 but was able to run all versions of Mac OS up to System 4.1. It was replaced by the Macintosh 512Ke and the Macintosh Plus. All support for the Mac 512K was discontinued on September 1, 1998.
Features
Processor and memory
Like the Macintosh 128K before it, the 512K contained a Motorola 68000 connected to 512 KB of DRAM by a 16-bit data bus. Though the memory had been quadrupled, it could not be upgraded. The large increase earned it the nickname Fat Mac. A 64 KB ROM chip boosts the effective memory to 576 KB, but this is offset by the display's 22 KB framebuffer, which is shared with the DMA video controller. This shared arrangement reduces CPU performance by up to 35%. It shared a revised logic board with the rebadged Macintosh 128K (previously just called the Macintosh), which streamlined manufacturing. The resolution of the display was the same, at 512 × 342.
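The framebuffer figure can be checked directly from the display dimensions: at one bit per pixel, a 512 × 342 monochrome screen works out to just under 22 KB.

```python
pixels = 512 * 342                 # display resolution, as stated above
framebuffer_bytes = pixels // 8    # one bit per pixel, monochrome
print(framebuffer_bytes, "bytes =", round(framebuffer_bytes / 1024, 2), "KiB")
```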
Apple sold a memory upgrade for the Macintosh 128K for $995 initially, and reduced the price when 256 kb DRAM prices fell months later.
Software
The applications MacPaint and MacWrite were still bundled with the Mac. Soon after this model was released, several other applications became available, including MacDraw, MacProject, Macintosh Pascal and others. In particular, Microsoft Excel, which was written specifically for the Macintosh, required a minimum of 512 KB of RAM, but solidified the Macintosh as a serious business computer. Models with the enhanced ROM also supported Apple's Switcher, allowing cooperative multitasking among (necessarily few) applications.
New uses
The LaserWriter printer became available shortly after the 512K's introduction, along with other peripherals. It utilized Apple's built-in networking scheme, LocalTalk, which allows sharing of devices among several users. The 512K was the oldest Macintosh capable of supporting Apple's AppleShare file sharing network when it was introduced in 1987. The expanded memory in the 512K allowed it to better handle large word-processing documents, make better use of the graphical user interface, and generally run faster than the 128K model.
Color Systems Technology used an army of 512K units connected to a custom Intel 80186-based machine to colorize numerous black-and-white films in the mid-1980s.
System software
The original 512K could ac |
https://en.wikipedia.org/wiki/Metasearch%20engine | A metasearch engine (or search aggregator) is an online information retrieval tool that uses the data of a web search engine to produce its own results. Metasearch engines take input from a user and immediately query search engines for results. Sufficient data is gathered, ranked, and presented to the user.
Problems such as spamming reduce the accuracy and precision of results. The process of fusion aims to improve the engineering of a metasearch engine.
Examples of metasearch engines include Skyscanner and Kayak.com, which aggregate search results of online travel agencies and provider websites and Searx, a free and open-source search engine which aggregates results from internet search engines.
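One simple family of fusion methods is rank-based scoring, in which each engine awards points inversely related to a result's position and the summed scores produce the merged ranking. The engines and result identifiers below are made up for illustration; production metasearch engines use more elaborate schemes.

```python
def fuse(rankings):
    """Borda-style rank fusion: higher-ranked results earn more points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for rank, result in enumerate(ranking):
            scores[result] = scores.get(result, 0) + (n - rank)
    # Sort by total score, best first.
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["page1", "page2", "page3"]
engine_b = ["page2", "page4", "page1"]
print(fuse([engine_a, engine_b]))
```

Here "page2" wins the merged ranking because both engines placed it near the top, even though neither ranked it identically.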
History
The first person to incorporate the idea of meta searching was Daniel Dreilinger of Colorado State University. He developed SearchSavvy, which let users search up to 20 different search engines and directories at once. Although fast, the search engine was restricted to simple searches and thus wasn't reliable. University of Washington student Eric Selberg released a more "updated" version called MetaCrawler. This search engine improved on SearchSavvy's accuracy by adding its own search syntax behind the scenes and matching that syntax to those of the search engines it was probing. MetaCrawler reduced the number of search engines queried to six, but although it produced more accurate results, it still wasn't considered as accurate as searching a query in an individual engine.
On May 20, 1996, HotBot, then owned by Wired, was a search engine with search results coming from the Inktomi and Direct Hit databases. It was known for its fast results and as a search engine with the ability to search within search results. Upon being bought by Lycos in 1998, development for the search engine staggered and its market share fell drastically. After going through a few alterations, HotBot was redesigned into a simplified search interface, with its features being incorporated into Lycos' website redesign.
A metasearch engine called Anvish was developed by Bo Shu and Subhash Kak in 1999; the search results were sorted using instantaneously trained neural networks. This was later incorporated into another metasearch engine called Solosearch.
Ixquick is a search engine known for its privacy policy statement. Developed and launched in 1998 by David Bodnick, it is owned by Surfboard Holding BV. In June 2006, Ixquick began to delete private details of its users, following the same practice as Scroogle. Ixquick's privacy policy includes no recording of users' IP addresses, no identifying cookies, no collection of personal data, and no sharing of personal data with third parties. It also uses a unique ranking system in which results are ranked by stars: the more stars a result has, the more search engines agreed on it.
In April 2005, Dogpile, then owned and operated by InfoSpace, Inc., collaborated with researchers from the University of Pittsburgh and Penn |
https://en.wikipedia.org/wiki/This%20Is%20Your%20Day | This Is Your Day is a Christian television show hosted by Pastor Benny Hinn and broadcast several times a week in the United States and globally by the Trinity Broadcasting Network, INSP Networks, The God Channel and various local affiliates to an estimated four million followers. The program began airing in 1990 and is a half-hour long.
Synopsis
During the program, Benny Hinn and his guests teach, read letters, pray, and show highlights from Hinn's "Miracle Healing Services." Hinn and his crew travel the world frequently, and a large part of the show is devoted to his global services, in which Hinn is said to imbue people with the power of the Holy Spirit. Many claim to have risen from wheelchairs, or to have been healed of other ailments. Towards the final portion of the program Hinn offers gifts such as books, CDs, DVDs and downloadable materials as a thank-you to viewers who donate to the ministry. He then prays for the prayer needs of his viewing audience. Finally he concludes with an invitation for viewers to receive Jesus as their personal savior.
Controversy
The program has generated controversy due to widespread skepticism about Hinn's faith healings depicted in the show. Investigative news programs such as Inside Edition, Dateline NBC, and the fifth estate claim that Hinn uses the power of suggestion to make crusade attendees fall on stage and believe they're cured.
See also
Good Morning, Holy Spirit
Live Prayer
References
Religious mass media in the United States
Television series about Christianity
Evangelicalism in the Church of England
American faith healers
American non-fiction television series
1990 American television series debuts
2000s American television series
2010s American television series |
https://en.wikipedia.org/wiki/Format%20war | A format war is a competition between similar but mutually incompatible technical standards that compete for the same market, such as for data storage devices and recording formats for electronic media. It is often characterized by political and financial influence on content publishers by the developers of the technologies. Developing companies may be characterized as engaging in a format war if they actively oppose or avoid interoperable open-industry technical standards in favor of their own.
A format war emergence can be explained because each vendor is trying to exploit cross-side network effects in a two-sided market. There is also a social force to stop a format war: when one of them wins as de facto standard, it solves a coordination problem for the format users.
19th century
Rail gauge. The Gauge War in Britain pitted the Great Western Railway, which used broad gauge, against other rail companies, which used what would come to be known as standard gauge. Ultimately standard gauge prevailed.
Similarly, in the United States there was incompatibility between railroads built to the standard gauge and those built to the so-called Russian gauge. During the initial period of railroad building, standard gauge was adopted in most of the northeastern United States, while the wider gauge, later called "Russian", was preferred in most of the southern states. In 1886, the southern railroads agreed to coordinate changing gauge on all their tracks. By June 1886, all major railroads in North America were using what was effectively the same gauge.
Direct current vs. alternating current: The 1880s saw the spread of electric lighting, supplied by large utilities and manufacturing companies. The systems initially ran on direct current (DC) and alternating current (AC), with low voltage DC used for interior lighting and high voltage DC and AC running very bright exterior arc lighting. With the invention of the AC transformer in the mid-1880s, alternating current could be stepped up in voltage for long-range transmission and stepped down again for domestic use, making it a much more efficient transmission standard that competed directly with DC for the indoor lighting market. In the U.S., Thomas Edison's Edison Electric Light Company tried to protect its patent-controlled DC market by playing on the public's fears of the dangers of high voltage AC, portraying its main AC competitor, George Westinghouse's Westinghouse Electric Company, as a purveyor of an unsafe system, and even promoting AC for the electric chair. This back-and-forth financial and propaganda competition came to be known as the war of the currents. AC, with its more economical transmission, would prevail, supplanting DC.
Musical boxes: Several manufacturers introduced musical boxes that utilised interchangeable steel disks that carried the tune. The principal players were Polyphon, Symphonion (in Europe) and Regina (in the United States). Each manufacturer used its own |
https://en.wikipedia.org/wiki/Windows%20Internet%20Name%20Service | Windows Internet Name Service (WINS) is the Microsoft implementation of NetBIOS Name Service (NBNS), a name server and service for NetBIOS computer names. It is a legacy name registration and resolution service that maps computer NetBIOS names to IP addresses. Effectively, WINS is to NetBIOS names what DNS is to domain names: a central mapping of host names to network addresses. Like DNS, it is implemented in two parts: a server service (which manages the embedded Jet database, server-to-server replication, service requests, and conflicts) and a TCP/IP client component, which manages the client's registration and renewal of names and handles queries.
WINS is Microsoft's predecessor to DNS for name resolution. Though WINS has not been deprecated, Microsoft advises against new deployments.
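At its core the service maintains a dynamic mapping from NetBIOS names to IP addresses. A toy sketch of that mapping, ignoring renewal, release, replication, and conflict handling; the names and addresses below are made up:

```python
registry = {}  # NetBIOS name -> IP address

def register(name, ip):
    # NetBIOS names carry at most 15 characters (a 16th byte is a service
    # suffix, omitted in this sketch) and are case-insensitive.
    registry[name.upper()[:15]] = ip

def resolve(name):
    # Returns the registered address, or None if the name is unknown.
    return registry.get(name.upper()[:15])

register("fileserver1", "192.0.2.10")
print(resolve("FILESERVER1"))
```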
References
External links
Official sources
Microsoft TechNet: Windows Internet Name Service Overview (Chapter 12 of the downloadable book "TCP/IP Fundamentals for Microsoft Windows")
Microsoft TechNet: WINS Technical Reference
Microsoft TechNet: WINS Concepts
MSKB837391: Exchange Server 2003 and Exchange 2000 Server require NetBIOS name resolution for full functionality
Other
Name Resolution chapter in Using Samba online book (also published by O'Reilly as ), which talks about WINS.
Microsoft server technology
Windows communication and services |
https://en.wikipedia.org/wiki/Dd%20%28Unix%29 | dd is a command-line utility for Unix, Plan 9, Inferno, and Unix-like operating systems and beyond, the primary purpose of which is to convert and copy files. On Unix, device drivers for hardware (such as hard disk drives) and special device files (such as /dev/zero and /dev/random) appear in the file system just like normal files; dd can also read and/or write from/to these files, provided that function is implemented in their respective driver. As a result, dd can be used for tasks such as backing up the boot sector of a hard drive, and obtaining a fixed amount of random data. The program can also perform conversions on the data as it is copied, including byte order swapping and conversion to and from the ASCII and EBCDIC text encodings.
History
In 1974, the dd command appeared as part of Version 5 Unix. According to Dennis Ritchie, the name is an allusion to the DD statement found in IBM's Job Control Language (JCL), in which it is an abbreviation for "Data Definition". According to Douglas McIlroy, dd was "originally intended for converting files between the ASCII, little-endian, byte-stream world of DEC computers and the EBCDIC, big-endian, blocked world of IBM", thus explaining the cultural context of its syntax. Eric S. Raymond believes "the interface design was clearly a prank", due to the command's syntax resembling a JCL statement more than other Unix commands do.
In 1987, the command was specified in issue 2 of the X/Open Portability Guide. This specification was inherited by IEEE Std 1003.1-2008 (POSIX), which is part of the Single UNIX Specification.
In 1990, David MacKenzie announced GNU fileutils (now part of coreutils), which includes the dd command, written by Paul Rubin, David MacKenzie, and Stuart Kemp. Jim Meyering has been its maintainer since 1991.
In 1995, the second edition of Plan 9 was released; its dd command interface was redesigned to use a traditional command-line option style instead of the JCL statement style.
Since at least 1999, UnxUtils has provided a native Win32 port of GNU fileutils, including dd, for Microsoft Windows.
dd is sometimes humorously called "Disk Destroyer", due to its drive-erasing capabilities involving typos.
Usage
The command line syntax of dd differs from many other Unix programs. It uses the syntax option=value for its command-line options, rather than the more standard --option value or -option value formats. By default, dd reads from stdin and writes to stdout, but these can be changed by using the if= (input file) and of= (output file) options.
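The key=value style can be seen in a toy re-implementation of dd's basic copy loop. This sketch handles only if=, of=, bs=, and count=, and is purely illustrative; the real dd does far more (conversions, seeking, partial-block accounting).

```python
import sys

def toy_dd(args):
    """Copy up to count blocks of bs bytes, dd-style. Illustrative only."""
    opts = dict(arg.split("=", 1) for arg in args)  # "bs=512" -> ("bs", "512")
    bs = int(opts.get("bs", 512))
    count = int(opts["count"]) if "count" in opts else None
    src = open(opts["if"], "rb") if "if" in opts else sys.stdin.buffer
    dst = open(opts["of"], "wb") if "of" in opts else sys.stdout.buffer
    blocks = 0
    try:
        while count is None or blocks < count:
            buf = src.read(bs)
            if not buf:  # end of input
                break
            dst.write(buf)
            blocks += 1
        dst.flush()
    finally:
        if "if" in opts:
            src.close()
        if "of" in opts:
            dst.close()
    return blocks

# e.g. toy_dd(["if=disk.img", "of=boot.bin", "bs=512", "count=1"])
# copies the first 512-byte block, mirroring dd with the same operands.
```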
Certain features of dd will depend on the computer system capabilities, such as dd's ability to implement an option for direct memory access. Sending a SIGINFO signal (or a USR1 signal on Linux) to a running dd process makes it print I/O statistics to standard error once and then continue copying. dd can read standard input from the keyboard. When end-of-file (EOF) is reached, dd will exit. Signals and EOF are determined by the software. For example, Unix tools ported to Windows vary as to the EOF: Cygwin uses Ctrl+D (the usual Unix EOF) and MKS To |
https://en.wikipedia.org/wiki/Mankind%20%28video%20game%29 | Mankind was a massively multiplayer online real-time strategy (MMORTS) computer game.
Gameplay
Equipped with one construction unit, a Vibz-type starship, and a small amount of credits, players start out in a guarded star system ("Imperial system") with the goal of eventually creating their own empire. Typical first steps in Mankind consist of building a small base on one of the nearby planets and mining available resources, which can either be sold or used to construct further units. Later, a player can leave the safety of the Imperial systems behind and colonize their own star system.
Environments
Planet surfaces as well as the space in star systems are realized as separate two-dimensional square game maps, called "environments" in game jargon. While space maps have borders, planetary maps are virtually borderless - units leaving the map at the eastern border reappear in the west, those leaving in the north reappear in the south. Each environment can contain player units and installations. Some restrictions exist, such as land vehicles only being able to operate on planetary maps, or specific starships not being able to enter planetary environments. Only one environment per player can be active at a time. Players can switch between maps by loading the unit content of a new environment, thereby leaving the old one.
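The wraparound rule for planetary maps amounts to toroidal coordinates, which can be sketched in a single modulo operation (the map dimensions below are invented for illustration):

```python
WIDTH, HEIGHT = 100, 100  # hypothetical planetary map size

def move(x, y, dx, dy):
    # East-west and north-south wraparound, as described above:
    # a unit leaving the eastern border reappears in the west, etc.
    return ((x + dx) % WIDTH, (y + dy) % HEIGHT)

print(move(99, 50, 1, 0))   # crosses the eastern border, reappears in the west
print(move(50, 0, 0, -1))   # crosses the northern border, reappears in the south
```

Space maps, by contrast, would simply clamp or reject moves past their borders.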
Game universe
The game takes place in the so-called "Mankind galaxy". The galactic map available for navigation is divided into sectors of space ("cubes" in game jargon), each of which might contain between zero and about 25 stars. Each star system contains between 5 and 8 planets. Early game reviews talked about a total sum of 900 million available planets, each with their own climate, seasons and population, a figure that was repeated in advertising text on the game box and even topped by the official website, which claimed several million systems and billions of planets.
In fact, a majority of these planets and star systems were unavailable ("closed") at the initial release of the game and have never been opened afterwards. During the two game resets since its release, the layout of the Mankind galaxy was changed and its size reduced. The last released galaxy consists of 73,251 star systems with 476,265 planets.
The persistent universe feature means that even when players are not involved in the game their mines extract ore, factories create equipment, ships continue commerce, and combat units continue to do battle. The game also has option to allow the user be notified via cell phone text message if their units came under attack.
Development
Mankind was initially published in December 1998 by the French computer game developer Vibes Online Gaming. After the transfer of Vibes to its Asian partner, the game was bought by O2 Online Entertainment Ltd.; it has been primarily maintained by Quantex Online Entertainment since 2008.
Estimates of the number of active players are hard to come by - while the official site claimed both 145,000 and "m |
https://en.wikipedia.org/wiki/ATV%20%28Australian%20TV%20station%29 | ATV is a television station in Melbourne, Australia, part of Network 10 – one of the three major Australian free-to-air commercial television networks. The station is owned by Paramount Networks UK & Australia.
History
In April 1963, the licence to operate Melbourne's third commercial television station was awarded to Austarama Television, owned by transport magnate Reginald Ansett. The new channel, ATV-0 (pronounced as the letter O, never the number zero), began transmission on 1 August 1964 from a large modern studio complex located in the then-outer eastern suburb of Nunawading, in the locality now known as Forest Hill, but referred to at the time as Burwood East.
The new station opened with a preview program hosted by Barry McQueen and Nancy Cato followed by a variety program, This Is It!. Reception difficulties in parts of the city resulted in the station's virtually permanent third position in the Melbourne television ratings.
In 1964, under Reg Ansett, ATV-0 opened its studios in Nunawading, at the time the first purpose-built commercial television station in Melbourne. It was also the studio where the first colour broadcast in Australia would be filmed, leading to its consideration for heritage status in 2018.
ATV-0 had been experimenting with colour transmissions from 1967, when the station was the first to mount a colour outside broadcast in Australia, from the Pakenham races. Many other colour test transmissions occurred subsequently. Full-time colour transmission was introduced to ATV-0 in March 1975 in line with other stations around the country.
Rupert Murdoch gained a controlling interest in Sydney television station TEN-10 in 1979 and had bought a controlling stake in transport company Ansett, owner of Austarama Television (licensee of ATV-0). That triggered a government inquiry into media ownership, the main concern being Murdoch having a controlling interest in television stations in Australia's two largest cities, ignoring the fact that the Kerry Packer-owned Australian Consolidated Press had controlled the Nine Network channels in Melbourne and Sydney for many years.
Due to problems in reception and falling ratings, and the desire to move TV stations out of the VHF band so as to enable FM radio in Australia, the station moved frequency and call-sign from ATV-0 to ATV-10, after getting the agreement of neighbouring Gippsland station GLV-10 to change its frequency to become GLV-8.
On 20 January 1980, the revamped ATV-10 was launched with a jingle campaign ("You're on Top With Ten"), Graham Kennedy's introductory presentation and 10's Summer Sunday, a 3-hour live outside broadcast from Torquay Beach. Later in the evening, You're On Top With Ten with Kennedy provided a preview of upcoming shows on the new channel, followed by the movie-length pilot for new drama series Arcade.
On 11 February 1980, Eyewitness News was relaunched with David Johnston and Jana Wendt as chief newsreaders. By May, Eyewitness |
https://en.wikipedia.org/wiki/Verse%20protocol | Verse is a networking protocol allowing real-time communication between computer graphics software. For example, several architects can build a house in the same virtual environment using their own computers, even if they are using different software. If one architect builds a spiral staircase, it instantly appears on the screens of all other users. Verse is designed to use the capacity of one or multiple computers over the Internet: for example, allowing a user with a hand-held computer in Spain to work with the rendering power of a supercomputer in Japan. Its principles are very general, allowing its use in contexts that are advantageous to collaboration such as gaming and visual presentations.
Uni-Verse
The Swedish Royal Institute of Technology (KTH), with several collaborators including the Interactive Institute, set up an EU project called Uni-Verse. The EU Commission granted them nearly SEK 18 million over the next several years to develop a system for graphics, sound, and acoustics using Verse and making it into an Open Source platform.
Verse-enabled projects
Blender
Crystal Space
Love
External links
Verse Project Page
Love the Game which uses the Verse protocol
Video podcast covering the Verse project
Community site for Verse developers
New Verse Protocol by Jiri Hnidek
(Defunct as of October 2012)
(Defunct)
Network protocols
Computer graphics |
https://en.wikipedia.org/wiki/List%20of%20Vienna%20U-Bahn%20stations | The following is a list of the 98 stations in the Vienna U-Bahn metro system in Vienna, Austria. The Vienna U-Bahn network consists of five lines operating on of route.
Legend
Boldface: Terminus station
List
Future stations
Closed stations
References
External links
Vienna
Transport in Vienna
Vienna U-Bahn |
https://en.wikipedia.org/wiki/Department%20for%20Transport | The Department for Transport (DfT) is a department of His Majesty's Government responsible for the English transport network and a limited number of transport matters in Scotland, Wales, and Northern Ireland that have not been devolved. The department is run by the Secretary of State for Transport, currently (since 25 October 2022), Mark Harper.
The expenditure, administration, and policy of the Department for Transport are scrutinised by the Transport Committee.
History
The Ministry of Transport was established by the Ministry of Transport Act 1919 which provided for the transfer to the new ministry of powers and duties of any government department in respect of railways, light railways, tramways, canals and inland waterways, roads, bridges and ferries, and vehicles and traffic thereon, harbours, docks and piers.
In September 1919, all the powers of the Road Board, the Ministry of Health, and the Board of Trade in respect of transport, were transferred to the new ministry. Initially, the department was organised to carry out supervisory, development and executive functions, but the end of railway and canal control by 1921, and the settlement of financial agreements relating to the wartime operations of the railways reduced its role. In 1923, the department was reorganised into three major sections: Secretarial, Finance and Roads.
The ministry's functions were exercised initially throughout the United Kingdom. An Irish Branch was established in 1920, but then was taken over by the government of the Irish Free State on the transfer of functions in 1922.
The department took over transport functions of Scottish departments in the same year, though certain functions relating to local government, loan sanction, byelaws and housing were excepted. In May 1937, power to make provisional orders for harbour, pier and ferry works was transferred to the Secretary of State for Scotland.
The growth of road transport increased the responsibilities of the ministry, and in the 1930s, and especially with defence preparations preceding the outbreak of war, government responsibilities for all means of transport increased significantly.
Government control of transport and diverse associated matters has been reorganised a number of times in modern history, being the responsibility of:
1919–1941: Ministry of Transport
1941–1946: Ministry of War Transport, after absorption of Ministry of Shipping
1946–1953: Ministry of Transport
1953–1959: Ministry of Transport and Civil Aviation
1959–1970: Ministry of Transport
1970–1976: Department of the Environment
1976–1997: Department of Transport
1997–2001: Department for the Environment, Transport and the Regions
2001–2002: Department for Transport, Local Government and the Regions
2002–present: Department for Transport
The name "Ministry of Transport" lives on in the annual MOT test, a test of vehicle safety, roadworthiness, and exhaust emissions, which most vehicles used on public roads in the UK are required to pass |
https://en.wikipedia.org/wiki/New%20York%20Institute%20of%20Technology%20Computer%20Graphics%20Lab | The Computer Graphics Lab was a computer lab located at the New York Institute of Technology (NYIT) in the late 1970s and 1980s, founded by Dr. Alexander Schure. It was originally located at the "pink building" on the NYIT campus.
The lab was initially founded to produce a high-quality feature film under the project name The Works. The film, which was never completed, was to be a 90-minute feature and the first entirely computer-generated CGI movie. Production mainly centred on DEC PDP and VAX machines.
Many of the original CGL team now form the elite of the CG and computer world with members going on to Silicon Graphics, Microsoft, Cisco, NVIDIA and others, including Pixar president, co-founder and Turing laureate Ed Catmull, Pixar co-founder and Microsoft graphics fellow Alvy Ray Smith, Pixar co-founder Ralph Guggenheim, Walt Disney Animation Studios chief scientist Lance Williams, Netscape and Silicon Graphics founder Jim Clark, Tableau co-founder and Turing laureate Pat Hanrahan, Microsoft graphics fellow Jim Blinn, Thad Beier, Oscar and Bafta nominee Jacques Stroweis, Andrew Glassner, and Tom Brigham. Systems programmer Bruce Perens went on to co-found the Open Source Initiative.
Researchers at the New York Institute of Technology Computer Graphics Lab created the tools that made entirely 3D CGI films possible. Among NYIT CG Lab's innovations was an eight-bit paint system to ease computer animation. NYIT CG Lab was regarded as the top computer animation research and development group in the world during the late 70s and early 80s.
References
External links
NYIT Computer Graphics Lab, People Behind The Pixels
Brief History of the Computer Graphics Lab at NYIT
Computer graphics
Laboratories in the United States
New York Institute of Technology
Computer science institutes in the United States
History of computing
Research institutes in New York (state) |
https://en.wikipedia.org/wiki/The%20Works%20%28film%29 | The Works is a shelved 3D computer-animated feature film, partially produced from 1979 to 1986. It would have been the first entirely 3D CGI film if it had been finished as intended, and included contributions from individuals who would go on to work at digital animation pioneers Pixar and DreamWorks Animation.
The film was developed by the staff of the Computer Graphics Lab in association with the New York Institute of Technology in Old Westbury, New York. The name was inspired by the original meaning of the word "robot", derived from "robota" ("work"), a word found in many Slavic languages. It was originally intended to be approximately 90 minutes long, although fewer than 10 minutes are known to have been produced. A trailer for the film was screened at SIGGRAPH in 1982. The project also resulted in other groundbreaking computer animations such as 3DV, Sunstone, Inside a Quark and some segments of the short film The Magic Egg from 1984.
Plot
The story, written by Lance Williams, was never finalized but centered around "Ipso Facto", a charming elliptical robot, and the heroine, a young female pilot nicknamed "T-Square". The story was set at some time in the distant future when a malfunctioning computer, "The Works", triggered a devastating last World War but then, realizing what it had done, set out to repopulate the planet entirely with robots. T-Square, who worked and lived in a nearby asteroid belt, vowed to journey to Earth and fight to make it safe for the return of her fellow space-faring humanity. Many staff-members contributed designs and modeled characters and sets under the coordination of art director Bil Maher who created blueprint-style designs for T-Square and many of the 25 robots called for by the script. Dick Lundin, legendary for his exhaustive and elaborate creations, designed and animated a huge mining ship and the gigantic robot "Ant" which was to be one of the villains in control of the Earth.
Pre-production
The founder of NYIT, entrepreneur and eccentric millionaire Dr. Alexander Schure, had a long and ardent interest in animation. He was a great admirer of Walt Disney and dreamed of making animated features like those from the golden age of theatrical animation. He had already created a traditional animation facility at NYIT. After visiting the University of Utah and seeing the potential of the computer technology in the form of the computer drawing program Sketchpad created by Ivan Sutherland, he told his people to pore over the Utah research center and get him one of everything they had. He then established the NYIT Computer Graphics Lab, buying state-of-the-art equipment and hiring major researchers from throughout the computer graphics field.
At first, one of CGL's main goals was to use computers to produce 2D animation and invent tools to assist traditional animators in their work. Schure reasoned that it should be possible to develop computer technology that would make the animation process cheaper and faster. An earl |
https://en.wikipedia.org/wiki/TimesTen | Oracle TimesTen In-Memory Database is an in-memory, relational database management system with persistence and high availability. Originally designed and implemented at Hewlett-Packard labs in Palo Alto, California, TimesTen spun out into a separate startup in 1996 and was acquired by Oracle Corporation in 2005.
TimesTen databases are persistent and can be highly available. Because it is an in-memory database it provides very low latency and high throughput. It provides standard relational database APIs and interfaces such as the SQL and PL/SQL languages. Applications access TimesTen using standard database APIs such as ODBC and JDBC.
TimesTen can be used as a standalone database, and is also often used as a cache in front of another relational database such as Oracle Database. It is frequently used in very high volume OLTP applications such as prepaid telecom billing and financial trading. It is also used for read-intensive applications such as very large websites and location-based services.
TimesTen can be configured as a shared-nothing clustered system (TimesTen Scaleout) supporting databases much larger than the RAM available on a single machine, and providing scalable throughput and high availability. It can also be configured in replicated active/standby pairs of databases (TimesTen Classic) providing high availability and microsecond response time.
TimesTen runs on Linux, Solaris and AIX and also supports client applications running on Windows and macOS.
Technology
TimesTen is an in-memory database that provides very fast data access times. It ensures that all data reside in physical memory (RAM) at run time. This allows its internal search and data-management algorithms to be simplified, resulting in very low response times even on commodity hardware. TimesTen can make use of the RAM available on its host machine, up to terabytes in size; using TimesTen Scaleout, databases much larger than the RAM of a single machine are supported.
Database Concepts
TimesTen supports standard relational database concepts. Tables consist of rows; rows consist of columns of specific data types. Data is manipulated using SQL. Transactions allow data to be manipulated with appropriate levels of atomicity and isolation; TimesTen supports all standard ACID properties expected of relational databases.
Datatypes supported by TimesTen are in general a subset of those supported by Oracle Database, including NUMBER, VARCHAR and LOBs; TimesTen specific datatypes such as binary integers are also supported.
Applications access TimesTen databases using standard relational APIs such as ODBC, JDBC, OCI, and ODPI-C. This allows applications to be written in many programming languages and environments. Applications use those APIs to access and manipulate data using standard SQL. Stored procedures can also be implemented and executed using PL/SQL.
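The relational concepts described above — tables, rows, SQL, and ACID transactions — can be illustrated with Python's built-in sqlite3 module as a stand-in (fittingly, also used here as an in-memory database). This is not TimesTen's actual API; a real TimesTen application would use ODBC, JDBC, OCI, or ODPI-C through the TimesTen driver.

```python
import sqlite3

# In-memory database as a stand-in for TimesTen (not TimesTen's API).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance NUMERIC)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

# Atomicity: a transfer that fails midway leaves both rows untouched.
try:
    with conn:  # the with-block is one transaction
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE id = 1")
        raise RuntimeError("simulated failure before the matching credit")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
# balances == {1: 100, 2: 50} -- the partial debit was rolled back
```

The same SQL and transaction semantics apply in TimesTen; the in-memory engine changes where the data lives, not the relational programming model.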
Persistence
Though an in-memory database, TimesTen databases are persistent and can be highly ava |
https://en.wikipedia.org/wiki/National%20Climatic%20Data%20Center | The United States National Climatic Data Center (NCDC), previously known as the National Weather Records Center (NWRC), in Asheville, North Carolina, was the world's largest active archive of weather data.
In 2015, the NCDC merged with two other federal environmental records agencies to become the National Centers for Environmental Information (NCEI).
History
In 1934, the U.S. government established a tabulation unit in New Orleans, Louisiana, to process weather records. Climate records and upper-air observations were punched onto cards in 1936. This organization was transferred to Asheville, North Carolina, in 1951, where it became the National Weather Records Center (NWRC), housed in the Grove Arcade Building.
Processing of the climate data was accomplished at Weather Records Processing Centers at Chattanooga, Tennessee; Kansas City, Missouri; and San Francisco, California, until January 1, 1963, when it was consolidated with the NWRC.
In 1967, the agency was renamed the National Climatic Data Center.
In 1995, the NCDC moved into the newly completed Veach-Baley Federal Complex in downtown Asheville.
In 2015, the NCDC merged with the National Geophysical Data Center and the National Oceanographic Data Center to become the National Centers for Environmental Information (NCEI).
Sources
Data were received from a wide variety of sources, including weather satellites, radar, automated airport weather stations, National Weather Service (NWS) Cooperative Observers, aircraft, ships, radiosondes, wind profilers, rocketsondes, solar radiation networks, and NWS Forecast/Warnings/Analyses Products.
Climate focus
The Center provided historical perspectives on climate which were vital to studies on global climate change, the greenhouse effect, and other environmental issues. The Center stored information essential to industry, agriculture, science, hydrology, transportation, recreation, and engineering. These services are still provided by the NCEI.
The NCDC said:
Evidence is mounting that global climate is changing. While it is generally accepted that humans are negatively influencing the climate, the extent to which humans are responsible is still under study. Regardless of the causes, it is essential that a baseline of long-term climate data be compiled; therefore, global data must be acquired, quality controlled, and archived. Working with international institutions such as the International Council of Scientific Unions, the World Data Centers, and the World Meteorological Organization, NCDC develops standards by which data can be exchanged and made accessible. NCDC provides the historical perspective on climate. Through the use of over a hundred years of weather observations, reference data bases are generated. From this knowledge the clientele of NCDC can learn from the past to prepare for a better tomorrow. Wise use of our most valuable natural resource, climate, is the goal of climate researchers, state and re
https://en.wikipedia.org/wiki/Smeg | Smeg or SMEG may refer to:
Smeg (appliances), an Italian company
Smeg Virus Construction Kit, for computer viruses
Shanghai Media & Entertainment Group, a media conglomerate in China
SMEG (menu editor), Simple Menu Editor for GNOME
Société Monégasque de l'Electricité et du Gaz, Monaco's supplier of electricity and gas
Smeg, a fictional profanity from the British science fiction sitcom Red Dwarf |
https://en.wikipedia.org/wiki/World%20Data%20Center | The World Data Centre (WDC) system was created to archive and distribute data collected from the observational programmes of the 1957–1958 International Geophysical Year by the International Council of Science (ICSU). The WDCs were funded and maintained by their host countries on behalf of the international science community.
Originally established in the United States, Europe, Soviet Union, and Japan, the WDC system expanded to other countries and to new scientific disciplines. The WDC system included up to 52 Centres in 12 countries. All data held in WDCs were available for the cost of copying and sending the requested information.
At the end of 2008, following the ICSU General Assembly in Maputo (Mozambique), the World Data Centres were reformed and a new ICSU World Data System (WDS) established in 2009 building on the 50-year legacy of the ICSU World Data Centre system (WDC) and the ICSU Federation of Astronomical and Geophysical data-analysis Services.
External links
ICSU International Council for Science
List of former WDCs
Open data
International scientific organizations |
https://en.wikipedia.org/wiki/Metalinguistic%20abstraction | In computer science, metalinguistic abstraction is the process of solving complex problems by creating a new language or vocabulary to better understand the problem space. More generally, it also encompasses the ability or skill of a programmer to think outside of the pre-conceived notions of a specific language in order to exploratorily investigate a problem space in search of the kind of solutions which are most natural or cognitively ergonomic to it. It is a recurring theme in the seminal MIT textbook Structure and Interpretation of Computer Programs, which uses Scheme, a dialect of Lisp, as a framework for constructing new languages.
Explanation
For example, consider modelling an airport inside a computer. The airport has elements like passengers, bookings, employees, budgets, planes, luggage, arrivals and departures, and transit services.
A procedural (e.g. C) programmer might create data structures to represent these elements of an airport and procedures or routines to operate on those data structures and update them, modelling the airport as a series of processes undergone by its various elements. E.g., bookings is a database used to keep passengers and planes synchronised via updates logged as arrivals and departures; budgets are similar but for money: airports are a lot of things that need to get done in the right order to see that passengers get where they're going.
An object-oriented (e.g. Java) programmer might create objects to represent the elements of the airport with methods which represent their behaviors, modelling the airport as a collection of possibly related things which characteristically interact with each other. E.g., passengers, employees, and planes possess location attributes which can be modified via applicable transit methods: transit services have methods to bring employees and passengers to and from airports, planes have methods to bring passengers along with themselves between different airports: airports are a grouping of things working together as intended.
A functional (e.g. Scheme) programmer might create higher-order functions representing both the elements and behaviors or processes of the airport, modelling the airport as a map of relations between elements in its various domains and those in their assorted codomains. E.g., airports map budgets to bookings schedules, each of which is itself a map of elements to elements: balances of income and expenditure, and balances of arrivals and departures, each of which is, recursively, its own mapping of elements and their own mappings in kind, collectively comprising a set of morphisms: airports are, transitively, the evaluative transformation of a certain spacetime economy.
Finally, a metalinguistic programmer might abstract the problem by creating new domain-specific languages for modelling airports, with peculiar primitives and types for doing so. The new language could encompass any or all of the above approaches where most suitable, potentially enabling |
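A minimal sketch of the metalinguistic approach in Python (the literature's examples typically use Scheme). All names here — board, depart, itinerary — are hypothetical illustrations, not drawn from any real system: the point is that the programmer first invents a tiny domain vocabulary, then writes the model in that new language.

```python
# Each "word" in the mini-language is a function that returns a step:
# a transformation of the airport's world state.

def board(passenger, plane):
    def step(world):
        return {**world, passenger: plane}    # passenger is now on the plane
    return step

def depart(plane, destination):
    def step(world):
        return {**world, plane: destination}  # plane moves; passengers go with it
    return step

def itinerary(*steps):
    """Sequence domain 'sentences' into a program in the new language."""
    def run(world):
        for step in steps:
            world = step(world)
        return world
    return run

trip = itinerary(board("alice", "flight-7"), depart("flight-7", "Tokyo"))
world = trip({"alice": "terminal", "flight-7": "gate-3"})
# world["alice"] == "flight-7" and world["flight-7"] == "Tokyo"
```

The model is now written in airport vocabulary rather than in Python's general-purpose terms; the procedural, object-oriented, and functional styles above could each serve as the implementation substrate beneath such a vocabulary.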
https://en.wikipedia.org/wiki/GRASS%20GIS | Geographic Resources Analysis Support System (commonly termed GRASS GIS) is a geographic information system (GIS) software suite used for geospatial data management and analysis, image processing, producing graphics and maps, spatial and temporal modeling, and visualizing. It can handle raster, topological vector, image processing, and graphic data.
GRASS GIS contains over 350 modules to render maps and images on monitor and paper; manipulate raster and vector data including vector networks; process multispectral image data; and create, manage, and store spatial data.
It is licensed and released as free and open-source software under the GNU General Public License (GPL). It runs on multiple operating systems, including Windows and Linux. Users can interface with the software features through a graphical user interface (GUI) or by plugging into GRASS via other software such as QGIS. They can also interface with the modules directly through a bespoke shell that the application launches or by calling individual modules directly from a standard shell. The latest stable release version (LTS) is GRASS GIS 7, which has been available since 2015.
The GRASS development team is a multinational group consisting of developers at many locations. GRASS is one of the eight initial software projects of the Open Source Geospatial Foundation.
Architecture
GRASS supports raster and vector data in two and three dimensions. The vector data model is topological, meaning that areas are defined by boundaries and centroids; boundaries cannot overlap within one layer. In contrast, OpenGIS Simple Features defines vectors more freely, much as a non-georeferenced vector illustration program does.
GRASS is designed as an environment in which tools that perform specific GIS computations are executed. Unlike GUI-based application software, the GRASS user is presented with a Unix shell containing a modified environment that supports execution of GRASS commands, termed modules. The environment has a state that includes parameters such as the geographic region covered and the map projection in use. All GRASS modules read this state and additionally are given specific parameters (such as input and output maps, or values to use in a computation) when executed. Most GRASS modules and abilities can be operated via a graphical user interface (provided by a GRASS module), as an alternative to manipulating geographic data in a shell.
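The execution model described above — a shared environment state that every module reads, plus per-call parameters — can be mimicked in a short Python toy. This is illustrative only: real GRASS modules are standalone programs (e.g. g.region, r.info) invoked from the modified shell, and the names below are invented for the sketch.

```python
# Toy imitation of the GRASS execution model: modules consult a shared
# environment (here, the geographic region) plus their own parameters.
ENVIRONMENT = {"n": 100, "s": 0, "e": 200, "w": 0, "res": 1}

def g_region(**params):
    """'Module' that alters the shared region state (cf. GRASS g.region)."""
    ENVIRONMENT.update(params)

def raster_cells(name):
    """'Module' that reads the shared state plus its own parameter."""
    rows = (ENVIRONMENT["n"] - ENVIRONMENT["s"]) // ENVIRONMENT["res"]
    cols = (ENVIRONMENT["e"] - ENVIRONMENT["w"]) // ENVIRONMENT["res"]
    return f"{name}: {rows} rows x {cols} cols"

g_region(res=10)                      # change the region resolution once...
print(raster_cells("elevation"))      # ...and every later module sees it
```

Because the region is set once in the environment, every subsequent module call operates on the same extent and resolution without repeating those parameters — the property the GRASS design relies on.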
The GRASS distribution includes over 350 core modules. Over 100 add-on modules created by users are offered on its website. The libraries and core modules are written in C. Other modules are written in C, C++, Python, Unix shell, Tcl, or other scripting languages. The modules are designed under the Unix philosophy and hence can be combined using Python or shell scripting to build more complex or specialized modules, by users, without knowledge of C programming.
There is cooperation between the GRASS and Quantum GIS (QGIS) projects. Recent versions of QG |
https://en.wikipedia.org/wiki/Trailways%20Transportation%20System | The Trailways Transportation System is an American network of approximately 70 independent bus companies that have entered into a brand licensing agreement. The company is headquartered in Fairfax, Virginia.
History
The predecessor to Trailways Transportation System was founded February 5, 1936, by Burlington Transportation Company, Santa Fe Trails Transportation Company, Missouri Pacific Stages, Safeway Lines, Inc., and Frank Martz Coach Company.
The system originated with coast-to-coast service as the National Trailways Bus System (NTBS). Greyhound Lines had grown so quickly in the 1920s and 1930s that the Interstate Commerce Commission encouraged smaller independent operators to form the NTBS to provide competition. Unlike Greyhound, which centralized ownership, Trailways member companies became a formidable competitor while staying an association of almost 100 separate companies. In the 1950s, Morgan W. Walker, Sr., of Alexandria, Louisiana, became head of the southern division of the company. He had entered the business on a small scale during World War II as the Interurban Transportation Company of Alexandria. During the 1950s and 1960s, consolidation among bus operators resulted in four of the five original Trailways members becoming part of a new company, Continental Trailways, which eventually operated the majority of Trailways routes.
In 1968, under the leadership of major stockholder Kemmons Wilson, Holiday Inn acquired Continental Trailways, which remained a subsidiary of Holiday Inn until 1979, when Holiday Inn sold Trailways to private investor Henry Lea Hillman Sr., of Pittsburgh, Pennsylvania. In the years during which Trailways was a subsidiary of Holiday Inn, television commercials for Holiday Inn frequently showed a Trailways bus stopping at a Holiday Inn hotel.
Regular route bus ridership in the United States had been declining steadily since World War II despite minor gains during the 1973 and 1979 energy crises. By 1986, Greyhound Lines had been spun off from its parent company, becoming solely a bus transportation company; it was sold to new owners headed by Fred Currey, a former executive with the largest member of the National Trailways Bus System. The old Greyhound parent had changed its name to Dial Corporation.
Under the new ownership in 1987, led by Currey, Greyhound Lines later acquired the former Continental Trailways company, the largest member of the Trailways system, effectively eliminating a large portion of bus competition. Although Greyhound negotiated cooperative schedules with Carolina Coach Company and Southeastern Trailways, two of the larger members of the Trailways system, many smaller carriers were effectively forced out of business. Greyhound later acquired Carolina and the intercity operations of Southeastern. Most of the survivors diversified into charters and tours.
Current members
Today Trailways members are spread across North Ame |
https://en.wikipedia.org/wiki/Mike%20Cowlishaw | Mike Cowlishaw is a visiting professor at the Department of Computer Science at the University of Warwick, and a Fellow of the Royal Academy of Engineering. He is a retired IBM Fellow, and was a Fellow of the Institute of Engineering and Technology, and the British Computer Society. He was educated at Monkton Combe School and the University of Birmingham.
Career at IBM
Cowlishaw joined IBM in 1974 as an electronic engineer but is best known as a programmer and writer. He is known for designing and implementing the Rexx programming language (1984), his work on colour perception and image processing that led to the formation of JPEG (1985), the STET folding editor (1977), the LEXX live parsing editor with colour highlighting for the Oxford English Dictionary (1985), electronic publishing, SGML applications, the IBM Jargon File IBMJARG (1990), a programmable OS/2 world globe PMGlobe (1993), MemoWiki based on his GoServe Gopher/http server, and the Java-related NetRexx programming language (1997).
He has contributed to various computing standards, including ISO (SGML, COBOL, C, C++), BSI (SGML, C), ANSI (REXX), IETF (HTTP 1.0/RFC 1945), W3C (XML Schema), ECMA (JavaScript/ECMAScript, C#, CLI), and IEEE (754 decimal floating-point). He retired from IBM in March 2010.
Decimal arithmetic
Cowlishaw has worked on aspects of decimal arithmetic; his proposal for an improved Java BigDecimal class (JSR 13) is now included in Java 5.0, and in 2002, he invented a refinement of Chen–Ho encoding known as densely packed decimal encoding. Cowlishaw's decimal arithmetic specification formed the proposal for the decimal parts of the IEEE 754 standard, as well as being followed by many implementations, such as Python and SAP NetWeaver. His decNumber decimal package is also available as open source under several licenses and is now part of GCC, and his proposals for decimal hardware have been adopted by IBM and others. They are integrated into the IBM POWER6 and IBM System z10 processor cores, and in numerous IBM software products such as DB2, TPF (in Sabre), WebSphere MQ, operating systems, and C and PL/I compilers.
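Python's decimal module is based on Cowlishaw's General Decimal Arithmetic Specification, so the difference between binary floating point and specification-conformant decimal arithmetic is easy to demonstrate:

```python
from decimal import Decimal, getcontext

# Binary floating point cannot represent 0.1 exactly:
print(0.1 + 0.2 == 0.3)                                   # False
# Decimal arithmetic, per the specification, can:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Precision and rounding are governed by an arithmetic context,
# another feature taken from the specification:
getcontext().prec = 4
print(Decimal(1) / Decimal(3))                            # 0.3333
```

The same specification underlies the decimal parts of IEEE 754 and the decNumber C library mentioned above.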
Other activities
Cowlishaw wrote an emulator for the Acorn System 1, and collected related documentation. Outside computing, he caved in the UK, New England, Spain, and Mexico
and continues to cave and hike in Spain. He is a life member of the National Speleological Society (NSS), wrote articles in the 1970s and 1980s on battery technology and on the shock strength of caving ropes, and designed LED-based caving lamps.
His current programming projects include MapGazer and PanGazer.
Publications (primary author)
The NetRexx Language, Cowlishaw, Michael F., , Prentice-Hall, 1997
The REXX Language, Cowlishaw, Michael F., in English: , (second edition) 1990; in German: , Carl Hanser Verlag, 1988; in Japanese: , Kindai-kagaku-sha, 1988
, Cowlishaw, Michael F., Proceedings 16th IEEE Symposium on Computer Arithmetic (ARITH 16), , pp. 104–111, IEEE Comp. S |
https://en.wikipedia.org/wiki/OPL | OPL may stand for:
Computing and technology
Open Programming Language
Optical path length
Optimization Programming Language, a modelling language designed for the CPLEX Optimization software
FM Operator Type-L, a series of sound chips made by Yamaha:
YM3526 (OPL)
YM3812 (OPL2)
YMF262 (OPL3)
YMF278 (OPL4)
Libraries
Oakville Public Library, in Oakville, Ontario
Oshawa Public Library, in Oshawa, Ontario
Ottawa Public Library, in Ottawa, Ontario
One-person library
Omaha Public Library, in Omaha, Nebraska
Sports
Oceanic Pro League, a former professional League of Legends league in Oceania
Oman Professional League, an association football league in Oman
Other
Luxembourg Philharmonic Orchestra, abbreviated to OPL
On Patrol: Live, an American docuseries that uses the abbreviation OPL
Optique & Précision de Levallois (1911-1964), a former French optical company
Open Publication License, license predating Creative Commons licenses
Operating lease
Organisation du Peuple en Lutte ("Struggling People's Organization"), a Haitian political party |
https://en.wikipedia.org/wiki/WIMP%20%28computing%29 | In human–computer interaction, WIMP stands for "windows, icons, menus, pointer", denoting a style of interaction using these elements of the user interface. Other expansions are sometimes used, such as substituting "mouse" and "mice" for menus, or "pull-down menu" and "pointing" for pointer.
Though the acronym has fallen into disuse, it is often used as an approximate synonym for graphical user interface (GUI). Any interface that uses graphics can be called a GUI, and WIMP systems derive from such systems. However, while all WIMP systems use graphics as a key element (the icon and pointer elements), and therefore are GUIs, the reverse is not true. Some GUIs are not based in windows, icons, menus, and pointers. For example, most mobile phones represent actions as icons and menus, but often do not rely on a conventional pointer or containerized windows to host program interactions.
WIMP interaction was developed at Xerox PARC (see Xerox Alto, developed in 1973) and popularized with Apple's introduction of the Macintosh in 1984, which added the concepts of the "menu bar" and extended window management.
The WIMP interface has the following components:
A window runs a self-contained program, isolated from other programs that (if in a multi-program operating system) run at the same time in other windows.
These individual program containers enable users to move fluidly between different windows.
The window manager software is typically designed such that it is clear which window is currently active. Design principles of spacing, grouping, and simplicity help the user maintain focus when working between more than one window.
An icon acts as a shortcut to an action the computer performs (e.g., execute a program or task).
Text labels can be used alongside icons to help identification for small icon sets.
A menu is a text or icon-based selection system that selects and executes programs or tasks. Menus may change depending on context in which they are accessed.
The pointer is an onscreen symbol that represents movement of a physical device that the user controls to select icons, data elements, etc.
This style of system improves human–computer interaction (HCI) by emulating real-world interactions and providing greater ease of use for non-technical people. Because programs contained by a WIMP interface subsequently rely on the same core input methods, the interactions throughout the system are standardized. This consistency allows users' skills to carry from one application to another.
Criticism
Some human–computer interaction researchers consider WIMP to be ill-suited for some applications, especially those requiring precise human input or more than three dimensions of input. Drawing and writing are examples of these limitations: a traditional pointer is limited to two dimensions, and consequently does not capture the pressure applied when using a physical writing utensil. Pressure-sensitive graphics tablets are often used to overcome this limitat
https://en.wikipedia.org/wiki/OSX%20%28disambiguation%29 | OS X is a former name of Apple's operating system macOS.
OSX or OS X may also refer to:
DC/OSx, 1980s-era Unix operating system by Pyramid Technology
Old Saxon (ISO 639-3 language code), an early form of Low German
OSX, a Brazilian shipbuilding company, part of the EBX Group
OS-X series, a series of sounding rockets built by OneSpace
See also
OS 10 (disambiguation)
System 10 (disambiguation)
System X (disambiguation)
https://en.wikipedia.org/wiki/Double%20click%20%28disambiguation%29 | Double click may refer to:
Double-click, the act of pressing a computer mouse twice quickly without moving it
DoubleClick, a subsidiary of Google that develops and provides Internet ad serving services
Doubleclick (musician), a UK musician
The Doubleclicks, an American musical duo
https://en.wikipedia.org/wiki/U.N.C.L.E. | U.N.C.L.E. is an acronym for the fictional United Network Command for Law and Enforcement, a secret international intelligence agency featured in the 1960s American television series The Man from U.N.C.L.E. and The Girl from U.N.C.L.E..
U.N.C.L.E. is a fictional organisation consisting of agents of all nationalities. Responsible for "maintaining political and legal order anywhere in the world", it is characterized as multinational in its composition and international in scope, protecting and defending nations regardless of size or political persuasion. Within the series, U.N.C.L.E. operates in Communist and Third World countries the same way that it does in the Western nations. In the episode "The Shark Affair" (season 1, episode 4, 1964), U.N.C.L.E. enforcement agent Napoleon Solo reveals that U.N.C.L.E. is sponsored by the US, the Soviet Union, the United Kingdom, the Netherlands, Greece, Spain, Italy and Yugoslavia. Its primary opponent is the independent international criminal organisation THRUSH (Technological Hierarchy for the Removal of Undesirables and the Subjugation of Humanity).
Fictional Headquarters
U.N.C.L.E. headquarters are situated in New York City, in the lower East 40s near the United Nations.
The headquarters has four levels: one ground level, two higher levels (Waverly's office is on the top floor), and one sub-level. The roof has radar, a laser beam weapon, a helipad, and communication antennas disguised as billboards with a worldwide reach. Below the sub-level is an underground docking area and a tunnel that runs under the United Nations headquarters giving U.N.C.L.E.'s boats access to the East River.
The headquarters are designed as a fortress hidden in the center of a block of buildings with Brownstone apartments serving as the exterior façade. On one end of the block there is a public parking garage (complete with machine gun bays hidden in the ceiling). On the other end there is a three-story Whitestone building. The first and second floors of the Whitestone are occupied by The Masque Club, a private, members-only "key club" (like the Playboy Club) in which the waitresses wear masks. On the third floor there are offices of U.N.C.L.E.'s propaganda front, a charity fundraising organization.
There are four primary entrances to U.N.C.L.E. headquarters. In the daytime, field agents are admitted by way of Del Floria's, a small, nondescript tailor/dry-cleaning shop located one flight below street level. The agents go to the single fitting booth and turn the coat hook on the back wall. Outside in the shop, an operator activates a mechanism on the pressing machine that releases the disguised armored door. The wall swings inward and an agent finds him/herself in the main admissions area. There, a receptionist pins on a security badge (white or later, yellow for highest security clearance; red and green for low clearance and visitors). A chemical on the receptionist's fingers activates the badge. There are also |
https://en.wikipedia.org/wiki/Wild%20Arms | , also written as Wild ARMs, is a media franchise developed by Media.Vision and owned by Sony Computer Entertainment. The franchise consists of several role-playing video games and related media. Since the launch of the original Wild Arms title in 1996, the series has gone on to encompass several media, including toys, manga, mobile phone applications, and a 22-episode anime.
The series has largely been overseen by producer Akifumi Kaneko.
Series development
Production
Wild Arms was the first role-playing video game project of Media.Vision, a company that had been known primarily for their shooter game series Crime Crackers and Rapid Reload. Looking for a way to capitalize on the growing role-playing game market of the mid-1990s, Sony commissioned Media.Vision to create a game that would combine elements of a traditional RPG with limited 3D graphics to promote the hardware of their newly released PlayStation console. Supervised and designed primarily by Akifumi Kaneko and Takashi Fukushima, 1996's Wild Arms, while still retaining traditional two-dimensional characters and backgrounds, became one of the first role-playing titles released to showcase 3D battle sequences.
Drawing inspiration from western-themed manga such as Yasuhiro Nightow's Trigun, Kaneko and Fukushima crafted a video game world that resembles the contemporary fantasy environment seen in similar titles. References to seminal role-playing game elements influenced by European fantasy, such as castles, magic, dragons, and monsters, were added to attract players to a familiar concept, as well as to accommodate scenario writers from other projects. Other cultural and regional influences include Norse mythology, animism, and Japanese mythology.
Music
The background music of Wild Arms is reminiscent of Western films. The groundwork for the series' music was laid by composer Michiko Naruke, who had previously only written the scores to Super Nintendo Entertainment System titles. Recurring instrumentation includes acoustic guitars, mandolins, drums, woodwind and brass instruments, and pianos, accompanied by clapping and whistling samples. While classically influenced, the music of each game often diverges into other genres, including folk, rock, electronic, swing, and choral. Naruke composed the soundtracks for the first three Wild Arms titles herself, yet she contributed to the soundtrack for Wild Arms 4 along with Nobuyuki Shimizu, Ryuta Suzuki, and Masato Kouda, who emulated her now-established style. Music for Wild Arms 5, the only video game title where Naruke did not contribute, was provided by Kouda along with series newcomer Noriyasu Agematsu.
Recurring themes
The usage of firearms factors heavily into the Wild Arms mythos. Called "ARMs", these weapons are often associated with ancient technology and represent a more violent and warlike age; thus, a social stigma is often given to anyone possessing or using them. Though the exact nature varies from one game to the next, they are see |
https://en.wikipedia.org/wiki/Thomas%20E.%20Kurtz | Thomas Eugene Kurtz (born February 22, 1928) is a retired Dartmouth professor of mathematics and computer scientist, who along with his colleague John G. Kemeny set in motion the then revolutionary concept of making computers as freely available to college students as library books were, by implementing the concept of time-sharing at Dartmouth College. In his mission to allow non-expert users to interact with the computer, he co-developed the BASIC programming language (Beginners All-purpose Symbolic Instruction Code) and the Dartmouth Time Sharing System during 1963 to 1964.
A native of Oak Park, Illinois, United States, Kurtz graduated from Knox College in 1950, and was awarded a Ph.D. degree from Princeton University in 1956, where his advisor was John Tukey, and joined the Mathematics Department of Dartmouth College that same year, where he taught statistics and numerical analysis.
In 1983, Kurtz and Kemeny co-founded a company called True BASIC, Inc. to market True BASIC, an updated version of the language.
Kurtz has also served as Council Chairman and Trustee of EDUCOM, as well as Trustee and Chairman of NERComP, and on the Pierce Panel of the President's Scientific Advisory Committee. Kurtz also served on the steering committees for the CONDUIT project and the CCUC conferences on instructional computing.
In 1974, the American Federation of Information Processing Societies gave an award to Kurtz and Kemeny at the National Computer Conference for their work on BASIC and time-sharing. In 1991, the Computer Society honored Kurtz with the IEEE Computer Pioneer Award, and in 1994, he was inducted as a Fellow of the Association for Computing Machinery.
Early life and education
In 1951, Kurtz's first experience with computing came at the Summer Session of the Institute for Numerical Analysis at the University of California, Los Angeles. His interests have included numerical analysis, statistics, and computer science ever since. He graduated from Knox College in 1950 with a bachelor's degree in mathematics, and in 1956, at the age of 28, he received his PhD from Princeton University; his thesis was on a problem of multiple comparisons in mathematical statistics. Kurtz wrote his first computer program in 1951 while working with computers at UCLA's Institute for Numerical Analysis, shortly after finishing his undergraduate studies and one year into his graduate work at Princeton University.
Dartmouth
In 1963 to 1964, Kurtz and Kemeny developed the first version of the Dartmouth Time-Sharing System, a time-sharing system for university use, and the BASIC language.
From 1966 to 1975, Kurtz served as Director of the Kiewit Computation Center at Dartmouth, and from 1975 to 1978, Director of the Office of Academic Computing. From 1980 to 1988 Kurtz was Director of the Computer and Information Systems program at Dartmouth, a ground-breaking multidisciplinary graduate program to develop information system (IS) leaders for industry. Su |
https://en.wikipedia.org/wiki/GTV%20%28Australian%20TV%20station%29 | GTV is a commercial television station in Melbourne, Australia, owned by the Nine Network. The station is currently based at studios at 717 Bourke Street, Docklands.
History
GTV-9 was amongst the first television stations to begin regular transmission in Australia. Test transmissions began on 27 September 1956, introduced by former 3DB radio announcer Geoff Corke, and were based at the Mount Dandenong transmitter, as the studios in Richmond were not yet ready. As part of its test transmissions the station covered the 1956 Summer Olympics, which Melbourne hosted, the 1956 Carols By Candlelight and the Davis Cup tennis.
The station was officially opened on 19 January 1957 by Victorian Governor Sir Dallas Brooks from the studios in Bendigo Street, Richmond; GTV-9 was the third television station to launch in Victoria, after HSV-7 and ABV-2. A clip from the ceremony has featured in a number of GTV-9 retrospectives, in which the Governor advises viewers that if they did not like the programs, they could just turn off.
The Richmond building, bearing the name Television City, had been converted from a Heinz tinned food factory, also occupied in the past by the Wertheim Piano Company (from 1908 to 1935). A cornerstone, now visible from the staff canteen courtyard, was laid when construction of the Piano factory began.
Eric Pearce was appointed senior newsreader in the late 1960s, after having been the first newsreader at rival station HSV-7. He held that position for almost twenty years. In 1957, GTV-9's first large-scale production was the nightly variety show In Melbourne Tonight ("IMT"), hosted by Graham Kennedy. Kennedy was a radio announcer at 3UZ in Melbourne before being 'discovered' by GTV-9 producer Norm Spencer, when appearing on a GTV-9 telethon. Bert Newton moved from HSV-7 to join Kennedy. IMT continued for thirteen years, dominating Melbourne's television scene for most of that time. It set a precedent for a number of subsequent live variety programmes from the station.
Ownership has changed over the decades. The station was first licensed to the General Television Corporation Ltd., a consortium of two newspapers, The Argus and The Age, together with cinema chains Hoyts, Greater Union, Sir Arthur Warner's Electronic Industries, JC William's Theatres, Cinesound Productions, and radio stations 3XY, 3UZ, 3KZ. In early 1957 The Argus was acquired by The Herald and Weekly Times Ltd, and the paper was closed on the same day that GTV-9 officially opened. The Herald in turn sold its interests in the station to Electronic Industries, later acquired by UK television manufacturer Pye, in 1960. Because of the restriction on foreign ownership of television stations, GTV-9 was then sold to Frank Packer's Australian Consolidated Press, which already owned TCN-9 in Sydney, resulting in the formation of the country's first commercially owned television network. Prior to this GTV-9 was affiliated with ATN-7 in Sydney. Son Clyde Packer r |
https://en.wikipedia.org/wiki/TEN%20%28TV%20station%29 | TEN is Network 10's flagship station in Sydney. It was originally owned and operated by United Telecasters Sydney Limited (UTSL), and began transmission on 5 April 1965 with the highlight of the opening night being the variety special TV Spells Magic. It also serves as the Australian headquarters of Paramount.
History
Ten commenced broadcasting on 5 April 1965 after United Telecasters was granted a Sydney commercial broadcasting licence. Shareholders in United Telecasters included Amalgamated Wireless, Colonial Sugar Refining and Email with 14% each, Bank of New South Wales with 7.5% and the NRMA with 2.5%.
TEN often lagged in the ratings behind the more established commercial channels TCN (Nine) and ATN (Seven) who had dominated viewing habits in Sydney for eight years. The turning point came in 1972 with the premiere of the raunchy soap opera series Number 96 which immediately lifted TEN's overall profile and helped raise the ailing network to No. 1 position by 1973.
TEN launched Australia's first metropolitan nightly one-hour news bulletin in 1975, while NBN-3 in Newcastle was first to air a one-hour news service in Australia in 1972. In 1978, Katrina Lee became only the third female TV newsreader on Australian TV – the first being Melody Iliffe on QTQ-9. The current anchor for the 10 News First 5pm Sydney news bulletin on weeknights is Sandra Sully.
TEN commenced digital television transmission on 1 January 2001, broadcasting on VHF Channel 11 while maintaining analogue transmission on VHF Channel 10.
The analogue signal for TEN was shut off at 9.00am AEDST, Tuesday, 3 December 2013.
Since 2021, the Pyrmont premises also houses office facilities for Network 10 sister channels MTV and Nick.
Digital multiplex
Studio facilities
TEN's broadcast facilities have been in the inner city suburb of Pyrmont since 1997. These studios feature a large open plan newsroom and news-set where all Ten's national and local Sydney news bulletins are produced. This facility is also the network's head office and broadcasts the network signal to other cities. When TEN-10 opened in 1965, it operated from newly built studio facilities at North Ryde, these were sold in the 1990s when the network underwent financial turmoil. The North Ryde complex, which was used by Global Television in recent years, was demolished in September 2007. Following the move from North Ryde in 1991, TEN relocated to a small warehouse in Ultimo, and then to new studios in nearby Pyrmont in May 1997. Most series are produced on location or at external studios by external companies, but a few programs are made in-house by TEN.
North Ryde (1965–March 1991)
Ultimo (March 1991–May 1997)
Pyrmont (May 1997–present)
Current programs produced at Ten's Pyrmont Studios
10 News First (Sydney Edition)
10 News First (Queensland Edition) (Sep 2020–present)
10 News First (Adelaide edition) (2023–present)
10 News First (National Weekend Edition)
10 News First: Midday (Weekdays, 2023–present)
|
https://en.wikipedia.org/wiki/Autoresponder | An autoresponder is a computer program that automatically answers e-mail sent to it. They can be very simple or quite complex.
The first autoresponders were created within mail transfer agents that found they could not deliver an e-mail to a given address. These create bounce messages such as "your e-mail could not be delivered because..." type responses. Today's autoresponders need to be careful to not generate e-mail backscatter, which can result in the autoresponses being considered e-mail spam.
An autoresponder can also be used to send email messages automatically to people who have elected to receive them (subscribers). For example, a site offering a free report, template, guide, or other helpful piece of content can let visitors access it in exchange for their email address.
Such follow-up autoresponders can be divided into two categories:
Outsourced ASP model – these autoresponders operate on the provider's infrastructure and are usually configurable via a web-based control panel. The customer pays a monthly usage fee. This is easiest to implement for the end-user.
Server-side – enables users to install the autoresponder system on their own server. This requires technical skills.
Autoresponders are also incorporated into electronic mailing list software, to confirm subscriptions, unsubscriptions, posts, and other list activities. Popular email clients such as Microsoft Outlook and Gmail contain features to allow users to create autoresponses.
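The core job shared by all of these systems is composing an automatic reply to an incoming message. A minimal sketch in Python's standard email library (the addresses and reply text are hypothetical; a real autoresponder would receive mail via a mail transfer agent or IMAP and send the reply over SMTP):

```python
# Hypothetical auto-reply construction using the stdlib email package.
from email.message import EmailMessage

def build_auto_reply(incoming: EmailMessage) -> EmailMessage:
    reply = EmailMessage()
    reply["To"] = incoming["From"]
    reply["From"] = incoming["To"]
    reply["Subject"] = "Auto: " + (incoming["Subject"] or "")
    # Mark the message as automatic so other autoresponders do not
    # answer it in turn (helps avoid mail loops and backscatter).
    reply["Auto-Submitted"] = "auto-replied"
    if incoming["Message-ID"]:
        reply["In-Reply-To"] = incoming["Message-ID"]
    reply.set_content("I am away and will reply on my return.")
    return reply

# Example incoming message (hypothetical addresses):
incoming = EmailMessage()
incoming["From"] = "alice@example.com"
incoming["To"] = "bob@example.com"
incoming["Subject"] = "Meeting"
incoming["Message-ID"] = "<123@example.com>"

reply = build_auto_reply(incoming)
print(reply["To"], "|", reply["Auto-Submitted"])
# alice@example.com | auto-replied
```

Setting the Auto-Submitted header is the standard defence against the backscatter problem mentioned above: well-behaved autoresponders decline to answer messages carrying it.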
Autoresponder sequence
An autoresponder sequence uses an autoresponder as part of a mailing list manager. Marketers use these to deliver a queued sequence of messages to mailing list subscribers. Messages are sent relative to the date of subscription to the list or, within single-list systems, relative to opt-ins that result in the addition of new mailing list tags. The length of the sequence should be chosen so that it is sufficient to accomplish the sender's goal.
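Scheduling messages relative to the subscription date amounts to adding a per-message day offset to that date. A small sketch (the sequence contents and offsets are hypothetical):

```python
# Hypothetical drip-sequence scheduler: each queued message is sent a
# fixed number of days after the subscriber joins the list.
from datetime import date, timedelta

sequence = [
    (0, "Welcome"),          # sent on the day of subscription
    (2, "Getting started"),  # two days later
    (7, "Weekly tips"),      # a week after subscribing
]

def schedule(subscribed: date):
    """Return (send_date, subject) pairs for one subscriber."""
    return [(subscribed + timedelta(days=offset), subject)
            for offset, subject in sequence]

plan = schedule(date(2024, 1, 1))
print(plan[2])
# (datetime.date(2024, 1, 8), 'Weekly tips')
```

Tag-based systems work the same way, except the reference date is the opt-in that added the tag rather than the original subscription.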
See also
Procmail
Mail delivery agent
Squeeze page
References
External links
Why are auto responders bad? (a SpamCop FAQ)
What is an autoresponder?
: Recommendations for Automatic Responses to Electronic Mail
: Sieve Email Filtering: Vacation Extension
Email |
https://en.wikipedia.org/wiki/Steven%20Levy | Steven Levy (born 1951) is an American journalist and Editor at Large for Wired who has written extensively for publications on computers, technology, cryptography, the internet, cybersecurity, and privacy. He is the author of the 1984 book Hackers: Heroes of the Computer Revolution, which chronicles the early days of the computer underground. Levy published eight books covering computer hacker culture, artificial intelligence, cryptography, and multi-year exposés of Apple, Google, and Facebook. His most recent book, Facebook: The Inside Story, recounts the history and rise of Facebook from three years of interviews with employees, including Chamath Palihapitiya, Sheryl Sandberg, and Mark Zuckerberg.
Career
In 1978, Steven Levy rediscovered Albert Einstein's brain in the office of the pathologist who removed and preserved it.
In 1984, his book Hackers: Heroes of the Computer Revolution was published. He described a "hacker ethic", which became a guideline to understanding how computers have advanced into the machines that we know and use today. He identified this hacker ethic as consisting of key points such as that all information should be free, and that this information should be used to "change life for the better".
Levy was a contributing editor to Popular Computing and wrote a monthly column in the magazine, initially called "Telecomputing" and later named "Micro Journal" and "Computer Journal", from April 1983 to the magazine's closure in December 1985.
Levy was a contributor to Stewart Brand's Whole Earth Software Catalog, first published in 1984.
Levy won the "Computer Press Association Award" for a report he co-wrote in 1998 on the Year 2000 problem.
Levy is writer and Editor at Large for Wired. He was previously chief technology writer and a senior editor for Newsweek. Levy has had articles published in Harper's, Macworld, The New York Times Magazine, The New Yorker, Premiere, and Rolling Stone. In December 1986, Levy founded the Macworld Game Hall of Fame, which Macworld published annually until 2009.
He is regarded as a prominent and respected critic of Apple Inc. In July 2004, Levy wrote a cover story for Newsweek (which also featured an interview with Apple CEO Steve Jobs) which unveiled the 4th generation of the iPod to the world before Apple had officially done so.
Education and personal life
Levy received his bachelor's degree from Temple University and earned a master's degree in literature from Pennsylvania State University. He lives in New York City with his wife, Pulitzer Prize winner Teresa Carpenter, and son.
Bibliography
Books
Hackers: Heroes of the Computer Revolution (1984)
The Unicorn's Secret: Murder in the Age of Aquarius (1988)
Artificial Life: The Quest for a New Creation (1992)
Insanely Great: The Life and Times of Macintosh, the Computer That Changed Everything (1994)
Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age (2001)
The Perfect Thing: How the iPod Shuffles Commerce, |
https://en.wikipedia.org/wiki/16%3A9%20aspect%20ratio | 16:9 (1.78:1) is a widescreen aspect ratio with a width of 16 units and height of 9 units.
Once seen as exotic, since 2009, it has become the most common aspect ratio for televisions and computer monitors, and is also the international standard image format for UHD, HDTV, Full HD, and SD digital television today.
16:9 (1.78:1) ("sixteen-nine") is the international standard format of widescreen and Wide-aspect Clear-vision. Japan's Hi-Vision originally started with a 5:3 (1.67:1) ratio but converted when the international standards group introduced a wider ratio of 16:9. Many digital video cameras have the capability to record in 16:9, and 16:9 is the only widescreen aspect ratio natively supported by the Ultra HD Blu-ray standard. It is also the native aspect ratio of Ultra HD Blu-ray discs, but Ultra HD Blu-ray producers can also choose to show even wider ratios such as 2.00:1 and 2.40:1 within the 16:9 frame adding black bars within the image itself.
History
Dr. Kerns H. Powers, a member of the SMPTE Working Group on High-Definition Electronic Production, first proposed the 16:9 (1.78:1) aspect ratio in 1984. The popular choices in 1980 were 4:3 (based on the TV standard's ratio at the time), 15:9 (5:3, the European "flat" 1.67:1 ratio), 1.85:1 (the American "flat" ratio) and 2.35:1 (the CinemaScope/Panavision ratio) for anamorphic widescreen.
Powers cut out rectangles with equal areas, shaped to match each of the popular aspect ratios. When overlapped with their center points aligned, he found that all of those aspect ratio rectangles fit within an outer rectangle with an aspect ratio of 1.78:1, and all of them also covered a smaller common inner rectangle with the same 1.78:1 aspect ratio. The value found by Powers is the geometric mean of the extreme aspect ratios, 4:3 and 2.35:1: √((4/3) × 2.35) ≈ 1.77:1, which is coincidentally close to 16:9. Applying the same geometric-mean technique to 16:9 and 4:3 yields an aspect ratio of approximately 1.54:1, sometimes approximated as 14:9 (1.56:1), which is likewise used as a compromise between these ratios.
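Powers' compromise can be checked with a few lines of arithmetic: the geometric mean of the narrowest and widest popular ratios of the time lands very close to 16:9.

```python
# Geometric mean of the extreme 1980 aspect ratios (Powers' analysis).
import math

narrowest = 4 / 3    # traditional TV
widest = 2.35        # CinemaScope/Panavision

compromise = math.sqrt(narrowest * widest)
print(round(compromise, 4))   # 1.7701, close to 16:9 = 1.7778

# The same technique applied to 16:9 and 4:3 gives the 14:9-style
# compromise used for mixed HD/legacy broadcasts:
mid = math.sqrt((16 / 9) * (4 / 3))
print(round(mid, 4))          # 1.5396
```

The geometric mean is the natural choice here because it equalizes the relative letterboxing/pillarboxing penalty paid at each extreme.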
While 16:9 (1.78:1) was initially selected as a compromise format, the subsequent popularity of HD broadcasting has solidified 16:9 as perhaps the most common video aspect ratio in use. Most 4:3 (1.33:1) and 2.40:1 video is now recorded using a "shoot and protect" technique that keeps the main action within a 16:9 (1.78:1) inner rectangle to facilitate 16:9 conversion and viewing. Conversely, it is quite common to use a technique known as center-cutting to present material shot in 16:9 to both an HD and a legacy 4:3 audience simultaneously without having to compromise image size for either audience. Content creators frame critical content or graphics to fit within the central 1.33:1 raster space. This has similarities to a filming technique called open matte.
In 1993, the European Union instituted the 16:9 Action Plan, to accelerate the development of the advanced television services in 16:9 |
https://en.wikipedia.org/wiki/Gosu | Gosu (고수) is a Korean term used to refer to a highly skilled person. In computer gaming the term is usually used to refer to a person who dominated games like StarCraft, Counter-Strike, Tekken, Warcraft III, Diablo II, DotA, League of Legends, Heroes of the Storm, Overwatch, Overwatch 2, Apex Legends and others. The term was adopted by gaming communities in many countries because of a large South Korean presence in online gaming communities.
Origin
The term is Sino-Korean vocabulary, and cognates in other East Asian languages that feature the same hanja (高手, literally "high hand") include gāoshǒu (Mandarin, "expert; ace; master"), and cao thủ (Vietnamese, "skilled person; master"). In the dialect of the Gyeongnam province, gosu also has the meaning of "leader". Figuratively meaning pro or highly skilled at something, gosu's pre-computing usage usually referred to martial arts or the game of go.
Related terms
Though not as popular, there are also several other commonly used Korean words for describing gamers with various skill levels. Jungsu (hangul: 중수, hanja: 中手, literally "middle hand") stands for "a moderately good player", hasu (hangul: 하수, hanja: 下手, literally "low hand") for "a poor player" or "a person with no skill" and chobo (hangul: 초보, hanja: 初步, literally "first step") for "a novice player". Hasu and chobo are the same skill level, but hasu is disrespectful or derogatory (whereas chobo is not). The English equivalent to hasu would be "scrub" and chobo would be "beginner" or "newbie".
Synonyms
leet or 1337
Über
Pro
Master
See also
List of English words of Korean origin
Pansori
History of Go
Gosu (programming language)
References
Korean words and phrases
South Korean popular culture
Video game culture |
https://en.wikipedia.org/wiki/System%20identification | The field of system identification uses statistical methods to build mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is called black box system identification.
Overview
A dynamic mathematical model in this context is a mathematical description of the dynamic behavior of a system or process in either the time or frequency domain. Examples include:
physical processes such as the movement of a falling body under the influence of gravity;
economic processes such as stock markets that react to external influences.
One of the many possible applications of system identification is in control systems. For example, it is the basis for modern data-driven control systems, in which concepts of system identification are integrated into the controller design, and lay the foundations for formal controller optimality proofs.
Input-output vs output-only
System identification techniques can utilize both input and output data (e.g. eigensystem realization algorithm) or can include only the output data (e.g. frequency domain decomposition). Typically an input-output technique would be more accurate, but the input data is not always available.
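As an illustrative sketch (not from the article), input-output identification can be demonstrated on a hypothetical noiseless first-order system y[t] = a·y[t−1] + b·u[t−1], estimated by least squares; the system order, signal length, and coefficients here are all assumptions:

```python
import numpy as np

# Hypothetical example: identify a first-order ARX model
#   y[t] = a*y[t-1] + b*u[t-1]
# from measured input u and output y, using least squares.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)            # measured input signal
y = np.zeros(200)
for t in range(1, 200):                 # simulate the "unknown" system
    y[t] = a_true * y[t - 1] + b_true * u[t - 1]

# Stack the regressors [y[t-1], u[t-1]] and solve for (a, b)
Phi = np.column_stack([y[:-1], u[:-1]])
a_est, b_est = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
# With noiseless data, the estimates recover (0.8, 0.5) up to rounding
```

Because the simulated data are noiseless and the regressor matrix has full rank, the least-squares solution recovers the true coefficients; with measurement noise, the estimates would only converge to them statistically.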
Optimal design of experiments
The quality of system identification depends on the quality of the inputs, which are under the control of the systems engineer. Therefore, systems engineers have long used the principles of the design of experiments. In recent decades, engineers have increasingly used the theory of optimal experimental design to specify inputs that yield maximally precise estimators.
White- and black-box
One could build a so-called white-box model based on first principles, e.g. a model for a physical process from the Newton equations, but in many cases, such models will be overly complex and possibly even impossible to obtain in reasonable time due to the complex nature of many systems and processes.
A much more common approach is therefore to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into the details of what is actually happening inside the system. This approach is called system identification. Two types of models are common in the field of system identification:
grey box model: although the peculiarities of what is going on inside the system are not entirely known, a model is constructed from both insight into the system and experimental data. This model does, however, still have a number of unknown free parameters which can be estimated using system identification.
https://en.wikipedia.org/wiki/Return%20type | In computer programming, the return type (or result type) defines and constrains the data type of the value returned from a subroutine or method. In many programming languages (especially statically-typed programming languages such as C, C++, Java) the return type must be explicitly specified when declaring a function.
In the Java example:
public void setShuma(int n1, int n2) {
    Shuma = n1 + n2;
}
public int getShuma() {
return Shuma;
}
the return type is int. The program can therefore rely on the method returning a value of type int. Various mechanisms are used for the case where a subroutine does not return any value; e.g., a return type of void is used in some programming languages:
public void returnNothing()
Returning a value from a method
A method returns to the code that invoked it when it completes all the statements in the method, reaches a return statement, or
throws an exception, whichever occurs first.
You declare a method's return type in its method declaration. Within the body of the method, you use the return statement to return the value.
Any method declared void doesn't return a value. It does not need to contain a return statement, but it may do so. In such a case, a return statement can be used to branch out of a control flow block and exit the method and is simply used like this:
return;
If you try to return a value from a method that is declared void, you will get a compiler error.
Any method that is not declared void must contain a return statement with a corresponding return value, like this:
return returnValue;
The data type of the return value must match the method's declared return type; you can't return an integer value from a method declared to return a boolean.
The getArea() method in the Rectangle class that was discussed in the sections on objects returns an integer:
// A method for computing the area of the rectangle
public int getArea() {
return width * height;
}
This method returns the integer that the expression width * height evaluates to.
The getArea method returns a primitive type. A method can also return a reference type. For example, in a program to manipulate Bicycle objects, we might have a method like this:
public Bicycle seeWhosFastest(Bicycle myBike, Bicycle yourBike,
Environment env) {
Bicycle fastest;
// Code to calculate which bike is
// faster, given each bike's gear
// and cadence and given the
// environment (terrain and wind)
return fastest;
}
References
Subroutines
Articles with example Java code |
https://en.wikipedia.org/wiki/British%20Computer%20Society | The British Computer Society (BCS), branded BCS, The Chartered Institute for IT, since 2009, is a professional body and a learned society that represents those working in information technology (IT), computing, software engineering and computer science, both in the United Kingdom and internationally. Founded in 1957, BCS has played an important role in educating and nurturing IT professionals, computer scientists, software engineers, computer engineers, upholding the profession, accrediting chartered IT professional status, and creating a global community active in promoting and furthering the field and practice of computing.
Overview
With a worldwide membership of 57,625 members as of 2021, BCS is a registered charity and was incorporated by Royal Charter in 1984. Its objectives are to promote the study and application of communications technology and computing technology and to advance knowledge of education in ICT for the benefit of professional practitioners and the general public.
BCS is a member institution of Engineering Council, through which it is licensed to award the designation of Incorporated Engineer and Chartered Engineer and therefore is responsible for the regulation of ICT and computer science fields within the UK. The BCS is also a member of the Council of European Professional Informatics Societies, the Seoul Accord for international tertiary degree recognition, and the European Quality Assurance Network for Informatics Education EQANIE. BCS was previously a member organisation of the Science Council through which it was licensed to award the designation of Chartered Scientist.
BCS has offices in the City of London. The main administrative offices are in Swindon, Wiltshire, west of London. It also has two overseas offices in Sri Lanka and Mauritius.
Members are sent the quarterly IT professional magazine ITNOW (formerly The Computer Bulletin).
BCS is a member organisation of the Federation of Enterprise Architecture Professional Organizations (FEAPO), a worldwide association of professional organisations which have come together to provide a forum to standardise, professionalise, and otherwise advance the discipline of Enterprise Architecture.
History
The forerunner of BCS was the "London Computer Group" (LCG), founded in 1956. BCS was formed a year later from the merger of the LCG and an unincorporated association of scientists into an unincorporated club. In October 1957, BCS was incorporated, by Articles of Association, as "The British Computer Society Ltd": the first President of BCS was Sir Maurice Wilkes (1913–2010), FRS.
In 1966, the BCS was granted charitable status and in 1970, the BCS was given Armorial Bearings including the shield and crest.
The major ethical responsibilities of BCS are emphasised by the leopard's face, surmounting the whole crest and depicting eternal vigilance over the integrity of the Society and its members.
The BCS patron is The Duke of Kent, KG. He became patron in December 1976 and |
https://en.wikipedia.org/wiki/Selection%20algorithm | In computer science, a selection algorithm is an algorithm for finding the kth smallest value in a collection of ordered values, such as numbers. The value that it finds is called the kth order statistic. Selection includes as special cases the problems of finding the minimum, median, and maximum element in the collection. Selection algorithms include quickselect and the median of medians algorithm. When applied to a collection of n values, these algorithms take linear time, O(n), as expressed using big O notation. For data that is already structured, faster algorithms may be possible; as an extreme case, selection in an already-sorted array takes constant time.
Problem statement
An algorithm for the selection problem takes as input a collection of values and a number k. It outputs the kth smallest of these values, or, in some versions of the problem, a collection of the k smallest values. For this to be well-defined, it should be possible to sort the values into an order from smallest to largest; for instance, they may be integers, floating-point numbers, or some other kind of object with a numeric key. However, they are not assumed to have been already sorted. Often, selection algorithms are restricted to a comparison-based model of computation, as in comparison sort algorithms, where the algorithm has access to a comparison operation that can determine the relative ordering of any two values, but may not perform any other kind of arithmetic operations on these values.
To simplify the problem, some works on this problem assume that the values are all distinct from each other, or that some consistent tie-breaking method has been used to assign an ordering to pairs of items with the same value as each other. Another variation in the problem definition concerns the numbering of the ordered values: is the smallest value obtained by setting k = 0, as in zero-based numbering of arrays, or is it obtained by k = 1, following the usual English-language conventions for the smallest, second-smallest, etc.? This article follows the conventions used by Cormen et al., according to which all values are distinct and the minimum value is obtained from k = 1.
With these conventions, the maximum value, among a collection of n values, is obtained by setting k = n. When n is an odd number, the median of the collection is obtained by setting k = (n + 1)/2. When n is even, there are two choices for the median, obtained by rounding this choice of k down or up, respectively: the lower median with k = n/2 and the upper median with k = n/2 + 1.
Algorithms
Sorting and heapselect
As a baseline algorithm, selection of the kth smallest value in a collection of n values can be performed by the following two steps:
Sort the collection
If the output of the sorting algorithm is an array, retrieve its kth element; otherwise, scan the sorted sequence to find the kth element.
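The two steps above can be sketched as follows (a minimal illustration, using the convention that k = 1 yields the minimum; the function name is an assumption):

```python
def select_kth_smallest(values, k):
    """Baseline selection: sort the collection, then index into it.

    Uses the one-based convention in which k = 1 gives the minimum.
    Runs in O(n log n) time, dominated by the sorting step.
    """
    if not 1 <= k <= len(values):
        raise ValueError("k out of range")
    return sorted(values)[k - 1]

print(select_kth_smallest([5, 2, 9, 1, 7], 1))  # 1 (the minimum)
print(select_kth_smallest([5, 2, 9, 1, 7], 3))  # 5 (the median)
```

Specialized selection algorithms such as quickselect avoid sorting the whole collection and achieve linear expected time instead.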
The time for this method is dominated by the sorting step, which requires O(n log n) time using a comparison sort. Even when integer sorting algorithms may be used, these are generally slower than the linear time that may be achieved using specialized selection algorithms.
https://en.wikipedia.org/wiki/Privacy-Enhanced%20Mail | Privacy-Enhanced Mail (PEM) is a de facto file format for storing and sending cryptographic keys, certificates, and other data, based on a set of 1993 IETF standards defining "privacy-enhanced mail." While the original standards were never broadly adopted and were supplanted by PGP and S/MIME, the textual encoding they defined became very popular. The PEM format was eventually formalized by the IETF in RFC 7468.
Format
Many cryptography standards use ASN.1 to define their data structures, and Distinguished Encoding Rules (DER) to serialize those structures. Because DER produces binary output, it can be challenging to transmit the resulting files through systems, like electronic mail, that only support ASCII.
The PEM format solves this problem by encoding the binary data using base64. PEM also defines a one-line header, consisting of -----BEGIN, a label, and -----, and a one-line footer, consisting of -----END, a label, and -----. The label determines the type of message encoded. Common labels include CERTIFICATE, CERTIFICATE REQUEST, PRIVATE KEY, and X509 CRL.
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----
PEM data is commonly stored in files with a ".pem" suffix, a ".cer" or ".crt" suffix (for certificates), or a ".key" suffix (for public or private keys). The label inside a PEM file represents the type of the data more accurately than the file suffix, since many different types of data can be saved in a ".pem" file. In particular PEM refers to the header and base64 wrapper for a binary format contained within, but does not specify any type or format for the binary data, so that a PEM file may contain "almost anything base64 encoded and wrapped with BEGIN and END lines".
A PEM file may contain multiple instances. For instance, an operating system might provide a file containing a list of trusted CA certificates, or a web server might be configured with a "chain" file containing an end-entity certificate plus a list of intermediate certificates.
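The wrapping and unwrapping described above can be sketched as follows (an illustrative simplification of the RFC 7468 grammar, not a full parser; function names are assumptions):

```python
import base64
import textwrap

def pem_encode(label, data):
    """Wrap binary data in a PEM block: base64 body between
    -----BEGIN label----- and -----END label----- lines."""
    body = base64.b64encode(data).decode("ascii")
    lines = textwrap.wrap(body, 64)          # conventional 64-char lines
    return ("-----BEGIN " + label + "-----\n"
            + "\n".join(lines)
            + "\n-----END " + label + "-----\n")

def pem_decode_all(text):
    """Return a list of (label, bytes) for every PEM block in text,
    e.g. each certificate in a CA bundle or chain file."""
    blocks, label, body = [], None, []
    for line in text.splitlines():
        if line.startswith("-----BEGIN ") and line.endswith("-----"):
            label, body = line[11:-5], []
        elif line.startswith("-----END ") and label is not None:
            blocks.append((label, base64.b64decode("".join(body))))
            label = None
        elif label is not None:
            body.append(line)
    return blocks

pem = pem_encode("CERTIFICATE", b"not a real certificate")
assert pem_decode_all(pem) == [("CERTIFICATE", b"not a real certificate")]
```

Concatenating several encoded blocks and feeding the result to pem_decode_all mirrors how chain files with multiple certificates are processed.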
Privacy-enhanced mail
The PEM format was first developed in the privacy-enhanced mail series of RFCs: RFC 1421, RFC 1422, RFC 1423, and RFC 1424. These standards assumed prior deployment of a hierarchical public key infrastructure (PKI) with a single root. Such a PKI was never deployed, due to operational cost and legal liability concerns. These standards were eventually obsoleted by PGP and S/MIME, competing e-mail encryption standards.
History
The initiative to develop Privacy Enhanced Mail began in 1985 on behalf of the PSRG (Privacy and Security Research Group) also known as the Internet Research Task Force. This task force is a subsidiary of the Internet Architecture Board (IAB) and their efforts have resulted in the Requests for Comment (RFCs) which are suggested Internet guidelines.
References
Cryptographic protocols
Computer file formats |
https://en.wikipedia.org/wiki/Red%20Heart | Red Heart was a joint venture between the Seven Network and Granada PLC between c. 1999 and 2001. It brought together all of its Australian parents' TV production resources, except those used for Seven's news and soaps.
One theory states that Granada was looking to buy Seven's core, through it eventually owning 100% of Red Heart. Red Heart may have also eventually controlled Seven's news and soaps. This was a time when "content is king" was a popular idea.
Australia's Broadcasting Services Act 1992 disallows foreign entities owning more than 15% of any TV station licence holder, but owning a company that provides a station with 100% of its content for 99% of its income is fine. (CanWest found another way around the law to own a majority stake of Network Ten).
However, Granada decided to pull out of Red Heart when it realised it could make more money selling Australian programs to other Australian broadcasters than it would just supplying Seven. It has been rare to see Australian-made Granada programs on the screen.
References
https://books.google.com/books?id=NUXIAgAAQBAJ&dq=%22Red+Heart%22+granada+-yarn&pg=PA186
https://www.broadcastnow.co.uk/special-mip-tv-granada-shuts-red-heart-after-clash/1173431.article
http://www.c21media.net/granada-walks-away-from-red-heart/
https://www.theguardian.com/media/2001/apr/06/citynews.broadcasting
https://www.smh.com.au/news/business/vizard-finally-ends-long-time-at-the-top/2005/07/05/1120329446704.html
https://variety.com/2008/scene/news/itv-s-paul-jackson-exits-1117992326/
Joint ventures
Television production companies of Australia |
https://en.wikipedia.org/wiki/Posit | Posit or POSIT may refer to:
Postulate
Posit (number format), a universal number (unum type III) format since 2016
POSIT, a computer vision algorithm that performs 3D pose estimation
Posit Software, PBC (formerly known as RStudio, PBC)
See also
Postulator, one who guides a cause for Catholic beatification or canonization |
https://en.wikipedia.org/wiki/Akamai%20%28disambiguation%29 | Akamai may refer to:
Akamai Technologies, a company that develops software for web content and application delivery
Akamai Techs., Inc. v. Limelight Networks, Inc., a patent case involving when patent infringement may be found when a patented method is performed by a group of persons |
https://en.wikipedia.org/wiki/Chemical%20database | A chemical database is a database specifically designed to store chemical information. This information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data.
Types of chemical databases
Bioactivity database
Bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs.
Chemical structures
Chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper (2D structural formulae). While these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. Small molecules (also called ligands in drug design applications), are usually represented using lists of atoms and their connections. Large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks.
Large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory.
Literature database
Chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. This type of database includes STN, Scifinder, and Reaxys. Links to literature are also included in many databases that focus on chemical characterization.
Crystallographic database
Crystallographic databases store X-ray crystal structure data. Common examples include Protein Data Bank and Cambridge Structural Database.
NMR spectra database
NMR spectra databases correlate chemical structure with NMR data. These databases often include other characterization data such as FTIR and mass spectrometry.
Reactions database
Most chemical databases store information on stable molecules, but reaction databases also store intermediates and short-lived, unstable molecules. Reaction databases contain information about products, educts, and reaction mechanisms.
Thermophysical database
Thermophysical data are information about
phase equilibria including vapor–liquid equilibrium, solubility of gases in liquids, liquids in solids (SLE), heats of mixing, vaporization, and fusion.
caloric data like heat capacity, heat of formation and combustion,
transport properties like viscosity and thermal conductivity
Chemical structure representation
There are two principal techniques for representing chemical structures in digital databases
As connection tables / adjacency matrices / lists with additional information on bond (edges) and atom attributes (nodes), such as:
MDL Molfile, PDB, CML
As a linear string notation based on depth first or breadth first traversal, such as:
SMILES/SMARTS, SLN, WLN, InChI
These approaches have been refined to allow representation of stereochemical differences and charges as well as special kinds |
https://en.wikipedia.org/wiki/Robert%20G.%20Gallager | Robert Gray Gallager (born May 29, 1931) is an American electrical engineer known for his work on information theory and communications networks.
Gallager was elected a member of the National Academy of Engineering (NAE) in 1979 for contributions to coding and communications theory and practice. He was also elected an IEEE Fellow in 1968, a member of the National Academy of Sciences (NAS) in 1992, and a Fellow of the American Academy of Arts and Sciences (AAAS) in 1999.
He received the Claude E. Shannon Award from the IEEE Information Theory Society in 1983. He also received the IEEE Centennial Medal in 1984, the IEEE Medal of Honor in 1990 "For fundamental contributions to communications coding techniques", the Marconi Prize in 2003, and a
Dijkstra Prize in 2004, among other honors. For most of his career he was a professor of electrical engineering and computer science at the Massachusetts Institute of Technology.
Biography
Gallager received the B.S.E.E. degree from the University of Pennsylvania in 1953. He was a member of the technical staff at the Bell Telephone Laboratories in 1953–1954 and then served in the U.S. Signal Corps 1954–1956.
He returned to graduate school at the Massachusetts Institute of Technology (MIT), and received the S.M. degree in 1957 and Sc.D. in 1960 in electrical engineering.
He has been a faculty member at MIT since 1960 where he was co-director of the Laboratory for Information and Decision Systems from 1986 to 1998, was named Fujitsu Professor in 1988, and became Professor Emeritus in 2001. He was a visiting associate professor at the University of California, Berkeley, in 1965 and a visiting professor at the École Nationale Supérieure des Télécommunications, Paris, in 1978.
Gallager's 1960 Sc.D. thesis, on low-density parity-check codes, was published by the MIT Press as a monograph in 1963.
The codes, which have remained useful for over 50 years, are sometimes called "Gallager codes".
An abbreviated version appeared in January 1962 in the IRE Transactions on Information Theory and was republished in the 1974 IEEE Press volume, Key Papers in The Development of Information Theory, edited by Elwyn Berlekamp. This paper won an IEEE Information Theory Society Golden-Jubilee Paper Award in 1998 and its subject matter is a very active area of research today. Gallager's January 1965 paper in the IEEE Transactions on Information Theory, "A Simple Derivation of the Coding Theorem and some Applications", won the 1966 IEEE W.R.G. Baker Award "for the most outstanding paper, reporting original work, in the Transactions, Journals and Magazines of the IEEE Societies, or in the Proceedings of the IEEE" and also won another IEEE Information Theory Society Golden-Jubilee Paper Award in 1998. His book, Information Theory and Reliable Communication, Wiley 1968, placed Information Theory on a sound mathematical foundation and is still considered by many as the standard textbook on information theory.
Gallager consulted for Melpar as a |
https://en.wikipedia.org/wiki/Comparison%20%28disambiguation%29 | Comparison is the act of examining the similarities and differences between things. Comparison may also refer to:
Computer science and technology
Comparison (computer programming), a code that makes decisions and selects alternatives based on them
Comparison microscope, a dual microscope for analyzing side-by-side specimens
Comparison sort, a type of data sort algorithm
File comparison, the automatic comparison of data such as files and texts by computer programs
Price comparison service, an Internet service
Language
Comparison (grammar), the modification of adjectives and adverbs to express the relative degree
Mass comparison, a test for the relatedness of languages
Mathematics
Comparison (mathematics), a notation for comparing variable values
Comparison of topologies, an order relation on the set of all topologies on one and the same set
Multiple comparisons, a procedure of statistics
a synonym for co-transitivity, in constructive mathematics
Psychology
Pairwise comparison, a test of psychology
Social comparison theory, a branch of social psychology
Other uses
Compare: A Journal of Comparative and International Education
Cross-cultural studies, which involve cross-cultural comparisons
See also
Comparability, a mathematical definition
Comparative (disambiguation)
Comparator (disambiguation) |
https://en.wikipedia.org/wiki/Haiku%20%28operating%20system%29 | Haiku is a free and open-source operating system capable of running applications written for the now-discontinued BeOS, which it is modeled after. Its development began in 2001, and the operating system became self-hosting in 2008. The first alpha release was made in September 2009, and the last alpha was released on November 2012; the first beta was released in September 2018, followed by beta 2 in June 2020, then beta 3 in July 2021. The fourth beta was released on December 23, 2022, still keeping BeOS 5 compatibility in its x86 32-bit images, with a greatly increased number of modern drivers, GTK3 apps, Wine port, and Xlib (X11) and Wayland compatibility layers.
Haiku is supported by Haiku, Inc., a non-profit organization based in Rochester, New York, United States, founded in 2003 by former project leader Michael Phipps. During the most recent release cycle, Haiku, Inc. employed a developer.
History
In 2001, Be, Inc. was bought by Palm, Inc. and BeOS development was discontinued. The OpenBeOS project began to support the BeOS user community by creating an open-source, backward-compatible replacement for BeOS. The first project by OpenBeOS was a community-created "stop-gap" update for BeOS 5.0.3 in 2002. In 2004, OpenBeOS was renamed Haiku to avoid infringing on Palm's trademarks. In 2009, the first alpha release followed eight years of development. A community poll was launched to redefine the future of Haiku beyond a free software refactoring of BeOS from the late 1990s, and decided to expand vision to supporting basic contemporary systems and protocols. On October 27, 2009, Haiku obtained Qt4 support.
At the end of 2010, a FOSDEM talk was titled: "Haiku has No Future". It cited Lee Edelman on queer futurity and Matthew Fuller's (critical) software studies writing when addressing the situation and stating that Haiku is a "queer" operating system. "Our work will not ever define the future of operating systems, but what it does do is undermine the monotone machinery of the competition. It is in this niche that we can operate best." This gives the opportunity for a "playful approach" in development: "even though we have no future, it does not mean that there will not arrive one eventually. Let us get there the most pleasant way possible."
Branding and style
In 2003, the non-profit organization Haiku, Inc. was registered in Rochester, New York, to financially support development, and in 2004, after a notification of infringement of Palm's trademark of the BeOS name was sent to OpenBeOS, the project was renamed Haiku. The original logo was designed by Stuart "stubear" McCoy, who was apparently heavily involved in the early days of the Haiku Usability & Design Team and created mockups for Haiku R2.
Haiku developer and artist Stephan "Stippi" Assmus, who co-developed graphic editing software WonderBrush for Haiku, updated it and developed the HVIF icon vector format used by Haiku, and a Haiku icon set chosen by popular vote in a contest in 2007.
Mi |
https://en.wikipedia.org/wiki/Crossover%20%28genetic%20algorithm%29 | In genetic algorithms and evolutionary computation, crossover, also called recombination, is a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and is analogous to the crossover that happens during sexual reproduction in biology. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions may be mutated before being added to the population.
Different algorithms in evolutionary computation may use different data structures to store genetic information, and each genetic representation can be recombined with different crossover operators. Typical data structures that can be recombined with crossover are bit arrays, vectors of real numbers, or trees.
The list of operators presented below is by no means complete and serves mainly as an exemplary illustration of this dyadic genetic operator type. More operators and more details can be found in the literature.
Crossover for binary arrays
Traditional genetic algorithms store genetic information in a chromosome represented by a bit array. Crossover methods for bit arrays are popular and an illustrative example of genetic recombination.
One-point crossover
A point on both parents' chromosomes is picked randomly, and designated a 'crossover point'. Bits to the right of that point are swapped between the two parent chromosomes. This results in two offspring, each carrying some genetic information from both parents.
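A minimal sketch of this operator on bit lists (the function name and use of Python's random module are illustrative assumptions):

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    """Pick a random crossover point and swap the tails of the two
    parent chromosomes, producing two offspring."""
    assert len(parent1) == len(parent2) and len(parent1) >= 2
    point = rng.randrange(1, len(parent1))  # point strictly inside the chromosome
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

c1, c2 = one_point_crossover([0, 0, 0, 0], [1, 1, 1, 1])
```

With complementary all-zero and all-one parents, each offspring starts with one parent's prefix and ends with the other's suffix, making the swapped tail easy to see.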
Two-point and k-point crossover
In two-point crossover, two crossover points are picked randomly from the parent chromosomes. The bits in between the two points are swapped between the parent organisms.
Two-point crossover is equivalent to performing two single-point crossovers with different crossover points. This strategy can be generalized to k-point crossover for any positive integer k, picking k crossover points.
Uniform crossover
In uniform crossover, typically, each bit is chosen from either parent with equal probability. Other mixing ratios are sometimes used, resulting in offspring which inherit more genetic information from one parent than the other.
In a uniform crossover, we do not divide the chromosome into segments; rather, we treat each gene separately. In essence, we flip a coin for each gene to decide whether the offspring inherits it from the first or the second parent.
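The per-gene coin flip can be sketched as follows (the default mixing ratio p = 0.5 and the function name are assumptions for illustration):

```python
import random

def uniform_crossover(parent1, parent2, p=0.5, rng=random):
    """For each gene position, the first child inherits from parent1
    with probability p (the mixing ratio) and from parent2 otherwise;
    the second child always receives the gene the first did not take."""
    child1, child2 = [], []
    for g1, g2 in zip(parent1, parent2):
        if rng.random() < p:
            child1.append(g1)
            child2.append(g2)
        else:
            child1.append(g2)
            child2.append(g1)
    return child1, child2

u1, u2 = uniform_crossover([0] * 8, [1] * 8)
```

Setting p to a value other than 0.5 biases the offspring toward one parent, which corresponds to the non-equal mixing ratios mentioned above.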
Crossover for integer or real-valued genomes
For the crossover operators presented above and for most other crossover operators for bit strings, it holds that they can also be applied accordingly to integer or real-valued genomes whose genes each consist of an integer or real-valued number. Instead of individual bits, integer or real-valued numbers are then simply copied into the child genome. The offspring lie on the remaining corners of the hyperbody spanned by the two parents x and y, as exemplified in
https://en.wikipedia.org/wiki/Density%20estimation | In statistics, probability density estimation or simply density estimation is the construction of an estimate, based on observed data, of an unobservable underlying probability density function. The unobservable density function is thought of as the density according to which a large population is distributed; the data are usually thought of as a random sample from that population.
A variety of approaches to density estimation are used, including Parzen windows and a range of data clustering techniques, including vector quantization. The most basic form of density estimation is a rescaled histogram.
Example
We will consider records of the incidence of diabetes. The following is quoted verbatim from the data set description:
A population of women who were at least 21 years old, of Pima Indian heritage and living near Phoenix, Arizona, was tested for diabetes mellitus according to World Health Organization criteria. The data were collected by the US National Institute of Diabetes and Digestive and Kidney Diseases. We used the 532 complete records.
In this example, we construct three density estimates for "glu" (plasma glucose concentration), one conditional on the presence of diabetes,
the second conditional on the absence of diabetes, and the third not conditional on diabetes.
The conditional density estimates are then used to construct the probability of diabetes conditional on "glu".
The "glu" data were obtained from the MASS package of the R programming language. Within R, ?Pima.tr and ?Pima.te give a fuller account of the data.
The mean of "glu" in the diabetes cases is 143.1 and the standard deviation is 31.26.
The mean of "glu" in the non-diabetes cases is 110.0 and the standard deviation is 24.29.
From this we see that, in this data set, diabetes cases are associated with greater levels of "glu".
This will be made clearer by plots of the estimated density functions.
The first figure shows density estimates of p(glu | diabetes=1), p(glu | diabetes=0), and p(glu).
The density estimates are kernel density estimates using a Gaussian kernel. That is, a Gaussian density function is placed at each data point, and the sum of the density functions is computed over the range of the data.
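This construction can be sketched directly (the toy data and bandwidth below are illustrative assumptions, not the article's Pima dataset):

```python
import math

def gaussian_kde(sample, x, bandwidth):
    """Evaluate a Gaussian kernel density estimate at x: the average
    of Gaussian densities centred at each data point, scaled by the
    bandwidth h."""
    phi = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(phi((x - xi) / bandwidth) for xi in sample) / (len(sample) * bandwidth)

data = [1.0, 2.0, 2.5, 3.0]              # toy sample
density_at_2 = gaussian_kde(data, 2.0, bandwidth=0.5)
```

Because each kernel integrates to one and the sum is divided by n·h, the resulting estimate is itself a probability density that integrates to one over the real line.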
From the density of "glu" conditional on diabetes, we can obtain the probability of diabetes conditional on "glu" via Bayes' rule. For brevity, "diabetes" is abbreviated "db." in this formula: p(db. = 1 | glu) = p(glu | db. = 1) p(db. = 1) / p(glu).
The second figure shows the estimated posterior probability p(diabetes=1 | glu). From these data, it appears that an increased level of "glu" is associated with diabetes.
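The construction of the posterior from the two conditional estimates can be sketched as follows; the density values and the prior prevalence here are illustrative assumptions, not figures computed from the data set:

```python
def posterior_db(p_glu_given_db, p_glu_given_no_db, prior_db):
    """p(db = 1 | glu) via Bayes' rule, from the two conditional
    density values at a given "glu" level and the prior prevalence."""
    num = p_glu_given_db * prior_db
    den = num + p_glu_given_no_db * (1.0 - prior_db)
    return num / den

# Illustrative values: conditional densities at one "glu" level, and a
# prevalence of roughly one third of cases in the sample
p = posterior_db(0.012, 0.004, 0.33)
```

When the two conditional densities are equal, the posterior reduces to the prior, as Bayes' rule requires.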
Application and purpose
A very natural use of density estimates is in the informal investigation of the properties of a given set of data. Density estimates can give a valuable indication of such features as skewness and multimodality in the data. In some cases they will yield conclusions that may then be regarded as self-evidently true, while in others all they will do is to point the way to f |
https://en.wikipedia.org/wiki/Non-blocking%20algorithm | In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.
The word "non-blocking" was traditionally used to describe telecommunications networks that could route a connection through a set of relays "without having to re-arrange existing calls" (see Clos network). Also, if the telephone exchange "is not defective, it can always make the connection" (see nonblocking minimal spanning switch).
Motivation
The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority or real-time task, it would be highly undesirable to halt its progress.
Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs.
Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use in interrupt handlers: even though the preempted thread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock—but this can be rectified easily by masking the interrupt request during the critical section.
A lock-free data structure can be used to improve performance: it increases the amount of time spent in parallel rather than serial execution, improving throughput on a multi-core processor, because access to the shared data structure does not need to be serialized to stay coherent.
Implementation
With few exceptions, non-blocking algorithms use atomic read-modify-write primitives that the hardware must p |
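Such a read-modify-write primitive can be sketched conceptually. Python exposes no user-level compare-and-swap instruction, so the `AtomicInt` class below merely simulates the hardware primitive with an internal lock; only the retry loop in `lock_free_increment` reflects the actual lock-free pattern:

```python
import threading

class AtomicInt:
    """Stand-in for a hardware atomic word; the internal lock simulates
    the atomicity a real CAS instruction would provide."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """Set to `new` iff the current value equals `expected`;
        report whether the swap happened."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def lock_free_increment(counter):
    # Classic CAS retry loop: read, compute, try to publish; if another
    # thread raced ahead, re-read and try again.
    while True:
        current = counter.load()
        if counter.compare_and_swap(current, current + 1):
            return

counter = AtomicInt()
threads = [
    threading.Thread(
        target=lambda: [lock_free_increment(counter) for _ in range(1000)]
    )
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No thread ever blocks waiting for another in the retry loop itself: a failed CAS means some other thread made progress, which is exactly the lock-free guarantee.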
https://en.wikipedia.org/wiki/Bicentennial%20Minutes | Bicentennial Minutes was a series of short educational American television segments commemorating the bicentennial of the American Revolution. The segments were produced by the CBS Television Network and broadcast nightly from July 4, 1974, until December 31, 1976. (The series was originally slated to end on July 4, 1976, airing a total of 732 episodes, but was extended to the end of the year.) The segments were sponsored by Shell Oil Company, then later by Raid from July 1976 onward.
Description
The series was created by Ethel Winant and Lewis Freedman of CBS, who had overcome the objections of network executives who considered it to be an unworthy use of program time. The producer of the series was Paul Waigner, the executive producer was Bob Markell, and the executive story editor and writer was Bernard Eismann from 1974 to 1976. He was followed by Jerome Alden. Associate producer Meryle Evans researched the historical facts for the broadcasts. In 1976, the series received an Emmy Award in the category of Special Classification of Outstanding Program and Individual Achievement. It also won a Special Christopher Award in 1976.
The videotaped segments were each one minute long and were broadcast each night during prime time hours, generally at approximately 8:27 or 8:57 P.M. Eastern time. The format of the segments did not change, although each segment featured a different narrator, often a CBS network television star. The narrator, after introducing himself or herself, would say "Two hundred years ago today..." and describe a historical event or personage prominent on that particular date two hundred years before and during the American Revolution. The segment would close with the narrator saying, "I'm (his/her name), and that's the way it was." This was an offhand reference to the close of the weeknight CBS Evening News with Walter Cronkite, who always ended each news telecast by saying, "And that's the way it is."
The Bicentennial Minute on July 3, 1976, was narrated by Vice President Nelson Rockefeller. The Bicentennial Minute on July 4, 1976, was narrated by First Lady Betty Ford. The final Bicentennial Minute, broadcast on December 31, 1976, was narrated by President Gerald Ford (his was also the longest Bicentennial Minute). After the series ended, the time slot of the Bicentennial Minute came to be occupied by a brief synopsis of news headlines ("Newsbreak") read by a CBS anchor.
In popular culture
The Bicentennial Minute achieved a high cultural profile during its run and was widely referenced and parodied. For example, in the All in the Family episode "Mike's Move" (originally broadcast on February 2, 1976), the character Mike Stivic responded to a typical monologue by his father-in-law Archie Bunker about the history of American immigration and the meaning of the Statue of Liberty with the sarcastic comment: "I think we just heard Archie Bunker's Bicentennial Minute." Another Norman Lear-produced sitcom, Sanford and Son, feature |
https://en.wikipedia.org/wiki/Mutation%20%28genetic%20algorithm%29 | Mutation is a genetic operator used to maintain genetic diversity of the chromosomes of a population of a genetic or, more generally, an evolutionary algorithm (EA). It is analogous to biological mutation.
The classic example of a mutation operator of a binary coded genetic algorithm (GA) involves a probability that an arbitrary bit in a genetic sequence will be flipped from its original state. A common method of implementing the mutation operator involves generating a random variable for each bit in a sequence. This random variable tells whether or not a particular bit will be flipped. This mutation procedure, based on the biological point mutation, is called single point mutation. Other types of mutation operators are commonly used for representations other than binary, such as floating-point encodings or representations for combinatorial problems.
The purpose of mutation in EAs is to introduce diversity into the sampled population. Mutation operators are used in an attempt to avoid local minima by preventing the population of chromosomes from becoming too similar to each other, thus slowing or even stopping convergence to the global optimum. This reasoning also leads most EAs to avoid only taking the fittest of the population in generating the next generation, but rather selecting a random (or semi-random) set with a weighting toward those that are fitter.
The following requirements apply to all mutation operators used in an EA:
every point in the search space must be reachable by one or more mutations.
there must be no preference for parts or directions in the search space (no drift).
small mutations should be more probable than large ones.
For different genome types, different mutation types are suitable. Common examples include Gaussian, uniform, zigzag, scramble, insertion, inversion, and swap mutation. An overview and further operators beyond those presented below can be found in the introductory book by Eiben and Smith.
Bit string mutation
The mutation of bit strings ensues through bit flips at random positions.
The probability of a mutation of a bit is 1/l, where l is the length of the binary vector. Thus, a mutation rate of 1 per mutation and individual selected for mutation is reached.
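The single point mutation described above can be sketched as follows; the chromosome values are made up for illustration:

```python
import random

def bit_flip_mutation(chromosome, rate=None):
    """Flip each bit independently with probability `rate`,
    defaulting to 1/l for a length-l chromosome."""
    l = len(chromosome)
    if rate is None:
        rate = 1.0 / l
    return [bit ^ 1 if random.random() < rate else bit
            for bit in chromosome]

random.seed(1)
parent = [1, 0, 1, 0, 1, 1, 0, 0]
child = bit_flip_mutation(parent)   # on average, one bit differs
```

With the default rate of 1/l, the expected number of flipped bits per mutated individual is exactly one, matching the rate stated above.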
Mutation of real numbers
Many EAs, such as the evolution strategy or the real-coded genetic algorithms, work with real numbers instead of bit strings, as this type of coding has proven effective in practice.
The value of a real-valued gene can either be changed or redetermined. A mutation that implements the latter should only ever be used in conjunction with the value-changing mutations and then only with comparatively low probability, as it can lead to large changes.
In practical applications, the decision variables to be changed of the optimisation problem to be solved are usually limited. Accordingly, the values of the associated genes are each restricted to an interval . Mutations may or may not take these restr |
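A value-changing mutation that respects such an interval restriction can be sketched as a Gaussian perturbation clamped to the bounds; the gene value, interval, and `sigma` scale below are assumptions for illustration:

```python
import random

def gaussian_mutation(gene, low, high, sigma=0.1):
    """Perturb a real-valued gene with Gaussian noise scaled to the
    interval width, then clamp the result into [low, high]."""
    mutated = gene + random.gauss(0.0, sigma * (high - low))
    return min(high, max(low, mutated))

random.seed(0)
x = gaussian_mutation(0.5, 0.0, 1.0)
```

Because small noise values are the most probable under a Gaussian, this operator satisfies the requirement above that small mutations be more likely than large ones. Clamping is only one boundary policy; reflecting the excess back into the interval is a common alternative that avoids piling probability mass onto the bounds.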
https://en.wikipedia.org/wiki/RT | RT may refer to:
Arts and media
RT (TV network), a Russian television news channel (formerly Russia Today)
RT America, defunct U.S. channel (2010–2022)
RT UK, defunct British channel (2014–2022)
RT France, defunct French channel (2014–2022)
RT Arabic, Arabic-language channel
RT Spanish, Spanish-language channel
RT Documentary, RT's documentary channel
RT!, Canadian music-video director
Radio Times, a British listings magazine
Radio Thailand, a Thai public radio station
Rooster Teeth, an entertainment production company
Rotten Tomatoes, a review aggregator website
Science and technology
Biology and medicine
Radiation therapy or radiotherapy
Rapid test
Reaction time, a term used in psychology
Respiratory therapist
Resuscitative thoracotomy
Reverse transcriptase, an enzyme that transcribes RNA to DNA
Richter's transformation, in chronic leukemia
Rt, the effective reproduction number, a measure of the spread of an infection in epidemiology
Computing and telecommunications
IBM RT PC, a computer
Windows RT, for ARM processors
Radiotechnique, a French electronics manufacturer
Radiotelephone
Request Tracker, a ticketing system
Retweeting, a sharing function on Twitter
RT-Mobile, spun off from Rostelecom in Russia
Other uses in science and technology
RT (energy), the product of the gas constant (R) and temperature
Relevance theory, a linguistic framework for understanding utterance interpretation
Sports
Russian Time, a Russian motor racing team
Right tackle, in American and Canadian football
Transportation
R/T (Road/Track), a Dodge car performance designator
AEC Regent III RT, London Transport bus, 1938–1979
Rapid transit
Line 3 Scarborough of the Toronto Subway, Scarborough RT or "The RT"
Sacramento Regional Transit District, SacRT or RT
RT, abbreviation for route number in the US
Airline UVT Aero (IATA code RT)
Other uses
Ruby Tuesday (restaurant) (NYSE symbol), a restaurant chain
Rukun tetangga, an administrative division of Indonesia
See also
Real-time (disambiguation)
The Right Honourable, abbreviated "Rt Hon"
Arty (disambiguation)
RT1 (disambiguation) |
https://en.wikipedia.org/wiki/Num%20Lock | Num Lock or Numeric Lock (⇭) is a key on the numeric keypad of most computer keyboards. It is a lock key, like Caps Lock and Scroll Lock. Its state affects the function of the numeric keypad commonly located to the right of the main keyboard and is commonly displayed by an LED built into the keyboard.
The Num Lock key exists because earlier 84-key IBM PC keyboards did not have cursor control or arrow keys separate from the numeric keypad. Most earlier computer keyboards had separate number keys and cursor control keys; however, to reduce cost, IBM chose to combine the two in their early PC keyboards, with Num Lock selecting between the two functions. On some laptop computers, the Num Lock key is used to convert part of the main keyboard to act as a (slightly skewed) numeric keypad rather than letters. On other laptop computers, the Num Lock key is absent and replaced by a key combination.
Since Apple keyboards never combined arrow keys with the numeric keypad (though some lacked arrow keys, function keys, and a numeric keypad altogether), Apple ships keyboards with a separate numeric keypad but no functional Num Lock key. Apple keyboards instead provide a Clear key in its place, although not all Apple keyboards include it.
|
https://en.wikipedia.org/wiki/Caps%20Lock | Caps Lock is a button on a computer keyboard that causes all letters of bicameral scripts to be generated in capital letters. It is a toggle key: each press reverses the previous action. Some keyboards also implement a light to give visual feedback about whether it is on or off. Exactly what Caps Lock does depends on the keyboard hardware, the operating system, the device driver, and the keyboard layout. Usually, the effect is limited to letter keys. Letters of non-bicameral scripts (e.g. Arabic, Hebrew, Hindi) and non-letter characters are generated normally.
History
The Caps Lock key originated as a Shift lock key on mechanical typewriters. An early innovation in typewriters was the introduction of a second character on each typebar, thereby doubling the number of characters that could be typed, using the same number of keys. The second character was positioned above the first on the face of each typebar, and the typewriter's Shift key caused the entire type apparatus to move, physically shifting the positioning of the typebars relative to the ink ribbon. Just as in modern computer keyboards, the shifted position was used to produce capitals and secondary characters.
The Shift lock key was introduced so the shift operation could be maintained indefinitely without continuous effort. It mechanically locked the typebars in the shifted position, causing the upper character to be typed upon pressing any key. Because the two shift keys on a typewriter required more force to operate and were meant to be pressed by the little finger, it could be difficult to hold the shift down for more than two or three consecutive strokes, therefore the introduction of the Shift lock key was also meant to reduce finger muscle pain caused by repetitive typing.
Mechanical typewriter shift lock is typically set by pushing both Shift and lock at the same time, and released by pressing Shift by itself. Computer Caps Lock is set and released by the same key, and the Caps Lock behavior in most QWERTY keyboard layouts differs from the Shift lock behavior in that it capitalizes letters but does not affect other keys, such as numbers or punctuation. Some early computer keyboards, such as the Commodore 64, had a Shift lock but no Caps Lock; others, such as the BBC Micro, had both, only one of which could be enabled at a time.
Abolition
There are some proposals to abolish the caps-lock key as being obsolete. Pieter Hintjens, the CEO of iMatix, started a "Capsoff" organization proposing hardware manufacturers delete the Caps Lock key. Google has removed the Caps Lock on the Chromebook keyboard, replacing it with the "Everything Button"; the caps-lock function is then reproduced using an "alt" key combination.
In fact, the current German keyboard layout standard DIN 2137-01:2023-08 (like its preceding edition from 2018) specifies the function of the key as optional, to be replaced by other keys or key combinations. It recommends the function only to be invoked when it is p |
https://en.wikipedia.org/wiki/Type%20conversion | In computer science, type conversion, type casting, type coercion, and type juggling are different ways of changing an expression from one data type to another. An example would be the conversion of an integer value into a floating point value or its textual representation as a string, and vice versa. Type conversions can take advantage of certain features of type hierarchies or data representations. Two important aspects of a type conversion are whether it happens implicitly (automatically) or explicitly, and whether the underlying data representation is converted from one representation into another, or a given representation is merely reinterpreted as the representation of another data type. In general, both primitive and compound data types can be converted.
Each programming language has its own rules on how types can be converted. Languages with strong typing typically do little implicit conversion and discourage the reinterpretation of representations, while languages with weak typing perform many implicit conversions between data types. Weakly typed languages often allow forcing the compiler to arbitrarily interpret a data item as having different representations; this can be a non-obvious programming error, or a technical method to directly deal with underlying hardware.
In most languages, the word coercion is used to denote an implicit conversion, either during compilation or during run time. For example, in an expression mixing integer and floating point numbers (like 5 + 0.1), the compiler will automatically convert integer representation into floating point representation so fractions are not lost. Explicit type conversions are either indicated by writing additional code (e.g. adding type identifiers or calling built-in routines) or by coding conversion routines for the compiler to use when it otherwise would halt with a type mismatch.
In most ALGOL-like languages, such as Pascal, Modula-2, Ada and Delphi, conversion and casting are distinctly different concepts. In these languages, conversion refers to either implicitly or explicitly changing a value from one data type storage format to another, e.g. a 16-bit integer to a 32-bit integer. The storage needs may change as a result of the conversion, including a possible loss of precision or truncation. The word cast, on the other hand, refers to explicitly changing the interpretation of the bit pattern representing a value from one type to another. For example, 32 contiguous bits may be treated as an array of 32 booleans, a 4-byte string, an unsigned 32-bit integer or an IEEE single precision floating point value. Because the stored bits are never changed, the programmer must know low level details such as representation format, byte order, and alignment needs, to meaningfully cast.
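The distinction can be sketched in Python, though the article's examples are language-neutral; here `struct` packing and unpacking stands in for the C-style reinterpreting cast, an assumption of this illustration:

```python
import struct

# Conversion: a new representation is produced and the value preserved
i = 16
f = float(i)    # the integer 16 becomes the floating point value 16.0

# Cast-style reinterpretation: the same 32 bits, read as another type
bits = struct.pack('<f', 1.0)            # IEEE single-precision 1.0
as_int = struct.unpack('<I', bits)[0]    # same bits as an unsigned int
```

Because the stored bits are never changed by the reinterpretation, unpacking `bits` as a float again yields exactly 1.0; as the text notes, byte order (here little-endian, '<') must be known for the cast to be meaningful.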
In the C family of languages and ALGOL 68, the word cast typically refers to an explicit type conversion (as opposed to an implicit conversion), causing some ambiguity about whether this is a re-interpreta |