https://en.wikipedia.org/wiki/SyQuest%20Technology
SyQuest Technology, Inc. was an early entrant into the hard disk drive market for personal computers. The company was founded on January 27, 1982 by Syed Iftikar, who had been a founder of Seagate, along with Ben Alaimo, Bill Krajewski, Anil Nigam and George Toldi. Its earliest products were the SQ306R, a 5 MB 3.9" (100 mm) cartridge disk drive and associated Q-Pak cartridge for IBM XT compatibles. Subsequently, a non-removable medium version was announced, the SQ306F. For many years, SyQuest was the most popular means of transferring large desktop publishing documents such as advertisements to professional printers. SyQuest marketed its products as able to give personal computer users "endless" hard drive space for data-intensive applications like desktop publishing, Internet information management, pre-press, multimedia, audio, video, digital photography, fast backup, data exchange and archiving, along with confidential data security and easy portability for the road. History The company was named partially after the founder, Syed Iftikar: at a company meeting it was decided that "SyQuest" would serve as a shortened form of "Sy's Quest". Its earliest product family of 3.9" (100 mm) cartridge disk drives and associated Q-Pak cartridges achieved limited success in government markets where removable media were required for security purposes. In 1986, SyQuest announced the SQ555 and its associated SQ400 cartridge, a 44 MB 5¼-inch removable cartridge hard disk drive, using the industry-standard 130 mm disk as its medium. Double-capacity versions, the SQ5110 and SQ800, were introduced in 1991. This generation of products became the de facto standard in the Apple Macintosh world for storing, transferring and backing up large amounts of data, such as that generated by graphic artists, musicians and engineers. SyQuest went public on the NASDAQ in 1991. Bankruptcy In early 1996, the company cut 60% of its workforce; later that year, company namesake Syed Iftikar was fired "in a management shakeup." After 1997, SyQuest did not fare well in the market. Its core desktop publishing customers began increasingly to use CD-R media and FTP to transfer files, while Iomega's Zip drives dominated the small office/home office (SOHO) market. Over the period 1995 to 1997, sales declined, resulting in a series of losses. In the first quarter of 1997 those losses had been reduced to $6.8 million, on net revenues of $48.3 million. This compares to a net loss of $33.8 million, or $2.98 per share, on net revenues of $78.7 million for the same period the year before. In August 1998, the company cut 50% of its staff, and SyQuest subsequently filed for bankruptcy; portions of the company were purchased by Iomega Corp. in January 1999. SyQuest retained the rights to sell its remaining inventory, on condition of renaming itself SYQT in order to continue operations. For several subsequent years, a Web site at www.SYQT.com sold disk drives and media, and
https://en.wikipedia.org/wiki/Surrogate%20key
A surrogate key (or synthetic key, pseudokey, entity identifier, factless key, or technical key) in a database is a unique identifier for either an entity in the modeled world or an object in the database. The surrogate key is not derived from application data, unlike a natural (or business) key. Definition There are at least two definitions of a surrogate: Surrogate (1) – Hall, Owlett and Todd (1976) A surrogate represents an entity in the outside world. The surrogate is internally generated by the system but is nevertheless visible to the user or application. Surrogate (2) – Wieringa and De Jonge (1991) A surrogate represents an object in the database itself. The surrogate is internally generated by the system and is invisible to the user or application. The Surrogate (1) definition relates to a data model rather than a storage model and is used throughout this article. See Date (1998). An important distinction between a surrogate and a primary key depends on whether the database is a current database or a temporal database. Since a current database stores only currently valid data, there is a one-to-one correspondence between a surrogate in the modeled world and the primary key of the database. In this case the surrogate may be used as a primary key, resulting in the term surrogate key. In a temporal database, however, there is a many-to-one relationship between primary keys and the surrogate. Since there may be several objects in the database corresponding to a single surrogate, we cannot use the surrogate as a primary key; another attribute is required, in addition to the surrogate, to uniquely identify each object. Although Hall et al. (1976) say nothing about this, others have argued that a surrogate should have the following characteristics: the value is never reused; the value is system-generated; the value is not manipulable by the user or application; the value contains no semantic meaning; the value is not visible to the user or application; and the value is not composed of several values from different domains. Surrogates in practice In a current database, the surrogate key can be the primary key, generated by the database management system and not derived from any application data in the database. The only significance of the surrogate key is to act as the primary key. A surrogate key may also exist in addition to another system-generated identifier (for example, an employee may have both an HR number and a database-generated UUID). A surrogate key is frequently a sequential number (e.g. a Sybase or SQL Server "identity column", a PostgreSQL or Informix serial, an Oracle or SQL Server SEQUENCE, or a column defined with AUTO_INCREMENT in MySQL). Some databases provide UUID/GUID as a possible data type for surrogate keys (e.g. PostgreSQL UUID or SQL Server UNIQUEIDENTIFIER). Having the key independent of all other columns insulates the database relationships from changes in data values or database design (making t
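As a concrete illustration of a surrogate key in practice, here is a minimal sketch using SQLite's C API. The table and column names ("employee", "badge_no") are illustrative, not taken from the article; the point is that the DBMS generates the key itself, independent of any application data:

```c
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    /* "id" is a system-generated surrogate key; "badge_no" is the natural
     * (business) key that carries application meaning. */
    sqlite3_exec(db,
        "CREATE TABLE employee ("
        "  id       INTEGER PRIMARY KEY,"  /* auto-assigned by SQLite */
        "  badge_no TEXT UNIQUE NOT NULL,"
        "  name     TEXT NOT NULL);",
        0, 0, 0);

    sqlite3_exec(db,
        "INSERT INTO employee (badge_no, name) VALUES ('HR-0042', 'Ada');",
        0, 0, 0);

    /* The surrogate value was chosen by the DBMS, not by the application. */
    printf("surrogate key assigned: %lld\n",
           (long long)sqlite3_last_insert_rowid(db));

    sqlite3_close(db);
    return 0;
}
```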
https://en.wikipedia.org/wiki/List%20of%20railway%20stations%20in%20Melbourne
The Melbourne railway network comprises 17 railway lines organised into six groups and is operated by Metro Trains Melbourne. The first section of the network opened in 1854, making the Melbourne metropolitan rail network the oldest rail system in Australia. Most of the network is above ground, with the main underground section being the City Loop. These 17 lines consist of 16 electrified lines and 1 diesel shuttle service. The 16 electrified lines are the Alamein, Belgrave, Glen Waverley and Lilydale lines, which are part of the Burnley Group; the Cranbourne and Pakenham lines, which are part of the Caulfield Group; the Hurstbridge and Mernda lines, which are part of the Clifton Hill Group; the Frankston, Werribee and Williamstown lines, which are part of the Cross City Group; the Craigieburn, Sunbury and Upfield lines, which are part of the Northern Group; and the Sandringham and Flemington Racecourse lines. In addition to these 16 electrified lines, there is also the Stony Point line, which operates as a diesel shuttle service between Frankston and Stony Point. There are 222 suburban railway stations currently operational in Melbourne. In addition to the currently open stations, there are a further 73 closed to passengers, 19 used by heritage/tourist railways and 11 expected to open in the near future. The network is broken up into two Myki ticketing zones. These zones determine how much it costs to travel from one station to another, with cross-zone travel costing more than travelling within the same zone. Stations Future Stations Network Map Heritage and tourist railways See also Railways in Melbourne List of closed Melbourne railway stations List of proposed Melbourne rail extensions List of regional railway stations in Victoria Related List of suburban and commuter rail systems Transportation in Australia References External links Public Transport Victoria – official website of Victoria's public transport Vicsig – Victorian railways information Victorian Railway Stations Accessibility features at suburban railway stations
https://en.wikipedia.org/wiki/Brian%20Behlendorf
Brian Behlendorf (born March 30, 1973) is an American technologist, executive, computer programmer and leading figure in the open-source software movement. He was a primary developer of the Apache Web server, the most popular web server software on the Internet, and a founding member of the Apache Group, which later became the Apache Software Foundation. Behlendorf served as president of the foundation for three years. He has served on the board of the Mozilla Foundation since 2003, Benetech since 2009, and the Electronic Frontier Foundation since 2013. Behlendorf served as the General Manager of the Open Source Security Foundation (OpenSSF) from 2021 to 2023 and is currently the Chief Technology Officer of the OpenSSF. Career Behlendorf, raised in Southern California, became interested in the development of the Internet while he was a student at the University of California, Berkeley, in the early 1990s. One of his first projects was an electronic mailing list and online music resource, SFRaves, which a friend persuaded him to start in 1992. This would soon develop into the Hyperreal.org website, an online resource devoted to electronic music and related subcultures. In 1993, Behlendorf, Jonathan Nelson, Matthew Nelson and Cliff Skolnick co-founded Organic, Inc., the first business dedicated to building commercial web sites. While developing the first online, for-profit media project (the HotWired web site for Wired magazine) in 1994, they realized that the most commonly used web server software at the time (developed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign) could not handle the user registration system that the company required. So, Behlendorf patched the open-source code to support HotWired's requirements. It turned out that Behlendorf was not the only one busy patching the NCSA code at the time, so he and Skolnick put together an electronic mailing list to coordinate the work of the other programmers. By the end of February 1995, eight core contributors to the project started Apache as a fork of the NCSA codebase. Working loosely together, they eventually rewrote the entire original program as the Apache HTTP Server. In 1999, the project incorporated as the Apache Software Foundation. Behlendorf served as president of the Foundation for three years. Behlendorf was the CTO of the World Economic Forum. He is also a former director and CTO of CollabNet, a company he co-founded with O'Reilly & Associates (now O'Reilly Media) in 1999 to develop tools for enabling collaborative distributed software development. CollabNet used to be the primary corporate sponsor of the open source version control system Subversion, before it became a project of the Apache Software Foundation. He continues to be involved with electronic music community events such as Chillits, and speaks often at open-source conferences worldwide. In 2003, he was named to the MIT Technology Review TR100 as one of the
https://en.wikipedia.org/wiki/Mix%20in
Mix in may refer to: Mix-in, a type of confectionery added to ice cream; Mixin, a class in object-oriented programming languages
https://en.wikipedia.org/wiki/Runtime%20library
In computer programming, a runtime library is a set of low-level routines used by a compiler to invoke some of the behaviors of a runtime environment, by inserting calls to the runtime library into the compiled executable binary. The runtime environment implements the execution model, built-in functions, and other fundamental behaviors of a programming language. During execution (run time) of that computer program, execution of those calls to the runtime library causes communication between the executable binary and the runtime environment. A runtime library often includes built-in functions for memory management or exception handling. Therefore, a runtime library is always specific to the platform and compiler. The runtime library may implement a portion of the runtime environment's behavior, but if one reads the code of the available calls, they are typically only thin wrappers that simply package information and send it to the runtime environment or operating system. However, sometimes the term runtime library is meant to include the code of the runtime environment itself, even though much of that code cannot be directly reached via a library call. For example, some language features that can be performed only (or are more efficient or accurate) at runtime are implemented in the runtime environment and may be invoked via the runtime library API, e.g. some logic errors, array bounds checking, dynamic type checking, exception handling, and possibly debugging functionality. For this reason, some programming bugs are not discovered until the program is tested in a "live" environment with real data, despite sophisticated compile-time checking and testing performed during development. As another example, a runtime library may contain code of built-in low-level operations too complicated for their inlining during compilation, such as implementations of arithmetic operations not directly supported by the targeted CPU, or various miscellaneous compiler-specific operations and directives. The concept of a runtime library should not be confused with an ordinary program library like that created by an application programmer or delivered by a third party, nor with a dynamic library, meaning a program library linked at run time. For example, the C programming language requires only a minimal runtime library (commonly called crt0), but defines a large standard library (called the C standard library) that has to be provided by each implementation. See also Static build References External links What is the C runtime library? (StackExchange)
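To make the "operations not directly supported by the targeted CPU" point concrete, here is a minimal sketch, assuming a 32-bit target without a native 64-bit divide instruction. On such targets GCC typically lowers the division below into a call to a helper in its runtime support library, libgcc (for signed 64-bit division, __divdi3), without the programmer writing any explicit call:

```c
#include <stdio.h>

/* On a 32-bit target the compiler cannot emit a single divide instruction
 * for 64-bit operands, so it inserts a call to a runtime-library helper
 * (e.g. __divdi3 from libgcc when compiling with GCC). */
long long divide(long long numerator, long long denominator) {
    return numerator / denominator;  /* may compile to a runtime-library call */
}

int main(void) {
    /* printf itself comes from the C standard library, which the article
     * distinguishes from the compiler's runtime support library. */
    printf("%lld\n", divide(1LL << 40, 3LL));
    return 0;
}
```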
https://en.wikipedia.org/wiki/John%20Mashey
John R. Mashey (born 1946) is an American computer scientist, director and entrepreneur. Career Mashey holds a Ph.D. in computer science from Pennsylvania State University, where he developed the ASSIST assembler language teaching software. He worked on the PWB/UNIX operating system at Bell Labs from 1973 to 1983, authoring the PWB shell, also known as the "Mashey Shell". He then moved to Silicon Valley to join Convergent Technologies, ending as director of software. He joined MIPS Computer Systems in early 1985, managing operating systems development, and helping design the MIPS RISC architecture, as well as specific CPUs, systems and software. He continued similar work at Silicon Graphics (1992–2000), contributing to the design of the NUMAflex modular computer architecture using NUMAlink, ending as VP and chief scientist. Mashey was one of the founders of the Standard Performance Evaluation Corporation (SPEC) benchmarking group, was an ACM National Lecturer for four years, has been guest editor for IEEE Micro, and one of the long-time organizers of the Hot Chips conferences. He chaired technical conferences on operating systems and CPU chips, and gave public talks on software engineering, RISC design, performance benchmarking and supercomputing. He has been credited for being the first to spread the term and concept of big data in the 1990s. He became a consultant for venture capitalists and high-tech companies and a trustee of the Computer History Museum in 2001. In 1997 he received Pennsylvania State University's first Outstanding Engineering Alumni Award for Computer Science and Engineering. In 2012, he received the USENIX Lifetime Achievement Award ("Flame Award") "for his contributions to the UNIX community since its early days". He has written articles for the Skeptical Inquirer regarding climate change denial. In 2010 he published a 250-page critical report on the Wegman Report. Mashey's report concluded that the Wegman Report contained plagiarized text. This story was featured in USA Today, and he was interviewed in Science magazine, which stated that he was "spending his retirement years compiling voluminous critiques of what he calls the 'real conspiracy' to produce 'climate antiscience'." His research has investigated the secretive funding of climate contrarian thinktanks. Mashey blogs at DeSmogBlog, which focuses on global warming. Mashey became a scientific and technical consultant for the Committee for Skeptical Inquiry in 2015. Personal life Mashey is married to Angela Hey, a Cambridge University and Waterloo University graduate with a Ph.D. from Imperial College, London. References
https://en.wikipedia.org/wiki/ASSIST%20%28computing%29
ASSIST (the Assembler System for Student Instruction and Systems Teaching) is an IBM System/370-compatible assembler and interpreter developed in the early 1970s at Penn State University by Graham Campbell and John Mashey, plus student assistants. In the late 1960s, computer science education expanded rapidly and university computer centers were faced with a large growth in usage by students, whose needs sometimes differed from those of professionals in batch processing environments. Students needed to run short programs on decks of punched cards with fast turnaround (minutes, not overnight), as their programs more often included syntax errors. Once programs compiled, they would often fault quickly, so optimization and flexibility were far less important than low overhead. WATFIV was a successful pioneering effort to build a FORTRAN compiler tuned for student use. Universities began running it in a dedicated "fast-batch" memory partition with a small run-time limit (such as 5 seconds on an IBM System/360 Model 67). The low limit enabled fast turnaround and avoided waste of time by programs stuck in infinite loops. WATFIV's success helped inspire development of ASSIST, PL/C and other student-oriented programs that fit the "fast-batch" model that became widely used among universities. ASSIST was enhanced and promoted by others, such as Northern Illinois University's Wilson Singletary & Ross Overbeek and the University of Tennessee's Charles Hughes and Charles Pfleeger, who reported in 1978 that ASSIST was being used in 200+ universities. In the 1980s, NIU did a new implementation on IBM PCs, ASSIST/I (Interactive), used by computer scientist John Ehrman to teach a "boot camp" course in assembly programming at SHARE meetings, at least through 2011, but perhaps for several years after. On March 1, 1998, Penn State declared that ASSIST was no longer copyrighted and that the program was freely available, as per the last release notes. The original ASSIST code seems to still get some use, as seen in a 2017 demonstration video assembling its source and running it in MVS 3.8 emulation on a laptop. IBM System/360 and /370 computers used 24-bit addressing and ignored the high-order 8 bits. Assembly programmers of the era, including those who wrote ASSIST, often saved precious memory by using the high-order 8 bits for flags, which required a compatibility mode when IBM introduced 31-bit and then 64-bit addressing. References External links ASSIST Introductory Assembler User's Manual ASSIST - Assembler System for Student Instruction & Systems Teaching (IBM System /370 Reference Summary) Assist distribution archive maintained by NIU's Michael Stack
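The flag-packing trick described in the last paragraph can be sketched in C. This is a hypothetical illustration of the general technique only; the names, masks, and layout are invented here, not taken from ASSIST's source:

```c
#include <stdint.h>
#include <stdio.h>

/* Pack flag bits into the high-order byte of a 32-bit word that holds a
 * 24-bit System/360-style address, which the hardware of the era ignored. */
#define ADDR_MASK 0x00FFFFFFu   /* low 24 bits: the address */

static uint32_t pack(uint32_t addr, uint8_t flags) {
    return (addr & ADDR_MASK) | ((uint32_t)flags << 24);
}

static uint32_t addr_of(uint32_t word)  { return word & ADDR_MASK; }
static uint8_t  flags_of(uint32_t word) { return (uint8_t)(word >> 24); }

int main(void) {
    uint32_t w = pack(0x123456u, 0x80u);  /* address 0x123456, one flag set */
    printf("addr=%06X flags=%02X\n",
           (unsigned)addr_of(w), (unsigned)flags_of(w));
    /* Under 31- or 64-bit addressing the high byte is no longer ignored by
     * the hardware, which is why such code needed a compatibility mode. */
    return 0;
}
```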
https://en.wikipedia.org/wiki/Japan%20Air%20System
Japan Air System (JAS) was the smallest of the big three Japanese airlines. In contrast to the other two, JAL and ANA, JAS' international route network was very small, but its domestic network incorporated many smaller airports that were not served by the two larger airlines. As an independent company, it was last headquartered in the JAS M1 Building at Haneda Airport in Ōta, Tokyo. It has since merged with Japan Airlines. JAS was famous for its variety of aircraft liveries; Amy Chavez of The Japan Times described the rainbow liveries as "abstract." Many of its color schemes in the 1990s were designed by film director Akira Kurosawa. The airline's slogan was "Good Speed Always". History Formation The company was originally formed as Toa Domestic Airlines (TDA) in a merger between Toa Airways and Japan Domestic Airlines on May 15, 1971. It adopted the Japan Air System (JAS) name on April 1, 1988. Start of international service In 1988, Japan Air System began service from Narita to Seoul, South Korea, and Taiwan, and by 1993 JAS was also flying to Singapore, Honolulu and Indonesia. In 1995 the airline had 99 domestic routes, some international routes, 64 offices in Japan, one office in Seoul, South Korea, and one office in Guangzhou, People's Republic of China. JAS entered into a partnership with Northwest Airlines in 1999 following several years of negotiations, allowing Northwest to codeshare on JAS domestic routes from Kansai Airport in Osaka and JAS to codeshare on Northwest flights between Japan and the US. On Northwest's fifth-freedom flights between Japan and Asia, JAS was limited to codesharing on Northwest routes that JAS also had the authority to fly, such as Tokyo-Seoul. Boeing 777 livery design contest In 1996, Japan Air System held a contest for designing the livery of the Boeing 777. The youngest entrant was three years of age while the oldest was 84. A total of 10,364 participants from 42 countries submitted entries. The judges included Akira Kurosawa, Masuo Ikeda, Kenshi Hirokane and Yoshiko Sakurai. A thirteen-year-old boy named Watanabe, a second-year (Grade 8) junior high school student living near Chitose Airport, won the award. The Japan Air System Boeing 777, painted in Watanabe's design, premiered in April 1997 to commemorate the 25th anniversary of Japan Air System. Merger with Japan Airlines JAS and Japan Airlines announced their merger in November 2001. It was the first major airline industry realignment in Japan in three decades, and partly a consequence of the slump in worldwide air traffic following the September 11, 2001 attacks in the United States. At the time, JAL had only a 25% share of the Japanese domestic air travel market, half that of rival All Nippon Airways, and saw the merger as a means of providing stronger competition to ANA domestically. JAS and JAL prepared an integrated timetable in August 2002. On October 2, 2002, they established a new holding company, Japan Airlines System, with Isao Kaneko as CEO. A new "Arc of the Sun" livery for the JAL group was announced in Se
https://en.wikipedia.org/wiki/Road%20system
Road system may refer to: Road designation or abbreviation; Road network, a system of interconnecting lines and points that represent a system of streets or roads for a given area
https://en.wikipedia.org/wiki/Printf
The printf family of functions in the C programming language are a set of functions that take a format string as input along with a variable-sized list of other values and produce as output a string that corresponds to the format specifier and given input values. The string is written in a simple template language: characters are usually copied literally into the function's output, but format specifiers, which start with a % character, indicate the location and method to translate a piece of data (such as a number) to characters. The design has been copied to expose similar functionality in other programming languages. "printf" is the name of one of the main C output functions, and stands for "print formatted". printf format strings are complementary to scanf format strings, which provide formatted input (lexing, a.k.a. parsing). In both cases these provide simple functionality and fixed format compared to more sophisticated and flexible template engines or lexers/parsers, but are sufficient for many purposes. Many languages other than C copy the printf format string syntax closely or exactly in their own I/O functions. Mismatches between the format specifiers and the type of the data can cause crashes and other vulnerabilities. The format string itself is very often a string literal, which allows static analysis of the function call. However, it can also be the value of a variable, which allows for dynamic formatting but also a security vulnerability known as an uncontrolled format string exploit. History Early programming languages such as Fortran used special statements with completely different syntax from other calculations to build formatting descriptions. In this example, the format is specified on line 601, and the WRITE command refers to it by line number: WRITE OUTPUT TAPE 6, 601, IA, IB, IC, AREA 601 FORMAT (4H A= ,I5,5H B= ,I5,5H C= ,I5, & 8H AREA= ,F10.2, 13H SQUARE UNITS) ALGOL 68 had a more function-like API, but still used special syntax (the $ delimiters surround special formatting syntax): printf(($"Color "g", number1 "6d,", number2 "4zd,", hex "16r2d,", float "-d.2d,", unsigned value"-3d"."l$, "red", 123456, 89, BIN 255, 3.14, 250)); But using normal function calls and data types simplifies the language and compiler, and allows the implementation of the input/output to be written in the same language. These advantages outweigh the disadvantages (such as a complete lack of type safety in many instances), and in most newer languages I/O is not part of the syntax. C's printf has its origins in BCPL's writef function (1966). In comparison to C, *N is a BCPL language escape sequence representing a newline character (for which C uses the escape sequence \n), and the order of the format specification's field width and type is reversed in writef: WRITEF("%I2-QUEENS PROBLEM HAS %I5 SOLUTIONS*N", NUMQUEENS, COUNT) Probably the first copying of the syntax outside the C language was the Unix printf shell command, which first appeared i
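A minimal C example of the template language described above; the values mirror those in the ALGOL 68 sample, and the particular field widths chosen here are illustrative:

```c
#include <stdio.h>

int main(void) {
    /* Literal characters are copied through; each %-specifier consumes the
     * next argument and controls how it is rendered. */
    printf("Color %s, number1 %d, number2 %4d, hex %x, float %.2f, "
           "unsigned value %u.\n",
           "red", 123456, 89, 255u, 3.14, 250u);
    /* A mismatch such as printf("%d", "red") is undefined behavior: the
     * specifier/type mismatch the article warns about. */
    return 0;
}
```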
https://en.wikipedia.org/wiki/Bagle%20%28computer%20worm%29
Bagle (also known as Beagle) was a mass-mailing computer worm affecting Microsoft Windows. The first strain, Bagle.A, did not propagate widely. A second variant, Bagle.B, was considerably more virulent. Overview Bagle used its own SMTP engine to mass-mail itself as an attachment to recipients gathered from the infected computer by combing through all of the computer's .htm, .html, .txt, and .wab files for any email addresses. It did not mail itself to addresses containing certain strings such as "@hotmail.com", "@msn.com", "@microsoft", "@avp", or ".r1". Bagle pretended to be a different file type (a 15,872-byte Windows Calculator for Bagle.A and an 11,264-byte audio file for Bagle.B), with a randomized name, and it would then open that file type as a cover for opening its own .exe file. It copied itself to the Windows system directory (under different filenames for Bagle.A and Bagle.B), added HKCU run keys to the registry, and opened a backdoor on a TCP port (6777 for Bagle.A and 8866 for Bagle.B). Using an HTTP GET request, Bagle.B also informed the virus's programmer that the machine had been successfully infected. Bagle variants, including Bagle.A and Bagle.B, generally had a date at which they would stop spreading included in their programming. Computers infected with older versions of Bagle were updated when newer ones were released. History The initial strain, Bagle.A, was first sighted on January 18, 2004, seemingly originating in Australia. The original file name for the Bagle virus was Beagle, but computer scientists decided to call it Bagle instead as a way to spite Bagle's programmer. Although it started strong with more than 120,000 infected computers, it quickly dwindled in efficacy. Sometimes accompanied by Trojan.Mitglieder.C, it stopped spreading after January 28, 2004, as designed. The second strain, Bagle.B, was first sighted on February 17, 2004. It was much more widespread and appeared in large numbers; Network Associates rated it a "medium" threat. It was designed to stop spreading after February 25, 2004. At one point in 2004, the Bagle and Netsky viruses exchanged insults and harsh words with each other in their codes, beginning with Bagle.I on March 3, 2004. Notably, Bagle.J contained the message "Hey, NetSky, fuck off you bitch, don't ruine our bussiness, wanna start a war?", and Netsky-R included, "Yes, true, you have understand it. Bagle is a shitty guy, he opens a backdoor and he makes a lot of money. Netsky not, Netsky is Skynet, a good software, Good guys behind it. Believe me, or not. We will release thousands of our Skynet versions, as long as bagle is there ...". Additionally, Bagle and Netsky both tried to remove each other from an infected system. Subsequent variants were later discovered. By July 26, 2004, there were 35 variants of Bagle, and by April 22, 2005, that number had increased to over 100. Although they have not all been successful, a number remain notable threats. Additionally, on July 3 and 4, 2004, Bagle.AD and Bagle.AE
https://en.wikipedia.org/wiki/Bagle
Bagle may refer to: Bagle (computer worm); The Bagles. See also: Bagel (disambiguation); Baggle; Beagle (disambiguation)
https://en.wikipedia.org/wiki/Fast%20user%20switching
Fast user switching is a feature of a multi-user operating system which allows users to switch between user accounts without quitting applications and logging out. In Linux The Linux kernel's VT subsystem dates back to 1993 and does not understand the concept of multiple "seats", meaning that of up to 63 VTs, only one VT can be active at any given time. Despite this kernel limitation, multi-seat is supported on Linux. Fast user switching has less demanding requirements than multi-seat does, because the multiple users are not working simultaneously. The most straightforward solution for elegant multi-seat is kmscon/systemd-consoled in combination with systemd-logind. The available desktop environments such as GNOME or KDE Software Compilation adapt their graphical login and session managers (e.g. GDM, SDDM, LightDM, etc.) to the underlying solution and have to be configured to implement fast user switching that way. For installations with older environments, the functionality must be enabled in the appropriate configuration files; then a hot-key sequence such as Ctrl-Alt-F8 is pressed. A separate login window then appears and the second user can log in (or even the first user again). Alternatively, in the default install, new X sessions can be started at will by using different display parameters to have them run in different virtual terminals (e.g. "startx -- :1" or "X :1 -query localhost"). Again, hot-key sequences allow the user switching to take place. Fast user switching may potentially introduce various security-related complications, and is handled differently among operating systems, each approach having its advantages and disadvantages. One possibility, simple and secure, is that only the first user gets ownership of resources. A second option is to grant ownership of resources to each new user; the last one to log in takes ownership. A third is to allow all users access to shared resources. This is easier and more intuitive, but allows (for example) one user to record another user's conversation. In Windows, shared resources, such as sound, are available to all sessions. In Red Hat Linux, the default behavior is to give ownership of "console resources" to the first connected session, but it can share resources among groups of console users or be configured to manage console ownership differently. In Microsoft Windows Fast user switching in Windows is based on Remote Desktop Services technology. In Windows XP, GINA, a component of Winlogon with which fast user switching interacts, can be programmatically called to automate a fast user switch. A PowerToy known as Super fast user switcher was offered in 2002 by Microsoft. It allowed fast user switching using a keyboard hotkey (Win+Q) (similar to Alt-Tab) without even going to the Welcome screen. It was later made unavailable when the original set of PowerToys was replaced by updated versions, but still works with Windows XP SP3 (32-bit) when running as a
https://en.wikipedia.org/wiki/Service%20pack
In computing, a service pack comprises a collection of updates, fixes, or enhancements to a software program delivered in the form of a single installable package. Companies often release a service pack when the number of individual patches to a given program reaches a certain (arbitrary) limit, or when the software release has been shown to have stabilized, with a limited number of remaining issues, based on user feedback and bug reports. In large software applications such as office suites, operating systems, database software, or network management, it is not uncommon to have a service pack issued within the first year or two of a product's release. Installing a service pack is easier and less error-prone than installing many individual patches, even more so when updating multiple computers over a network, where service packs are common. Service packs are usually numbered, and thus referred to in short as SP1, SP2, SP3, etc. Besides bug fixes, they may also bring entirely new features, as is the case with SP2 of Windows XP (e.g. Windows Security Center), or SP3 and SP4 of the heavily database-dependent Trainz 2009: World Builder Edition. Incremental and cumulative SPs Service packs for Microsoft Windows were cumulative through Windows XP. This means that the problems that are fixed in a service pack are also fixed in later service packs. For example, Windows XP SP3 contains all the fixes that are included in Windows XP Service Pack 2 (SP2). Windows Vista SP2 was not cumulative, however, but incremental, requiring that SP1 be installed first. Office XP, Office 2003, Office 2007, Office 2010 and Office 2013 service packs have been cumulative. Impact on installation of additional software components Application service packs replace existing files with updated versions that typically fix bugs or close security holes. If, at a later time, additional components are added to the software using the original media, there is a risk of accidentally mixing older and updated components. Depending on the operating system and deployment methods, it may then be necessary to manually reinstall the service pack after each such change to the software. This was, for example, necessary for Windows NT service packs; from Windows 2000 onwards, however, Microsoft redirected setup programs to use updated service pack files instead of files from the original installation media, in order to prevent the need for manual reinstallation. See also Adaptation Kit Update Apple Software Update Hotfix IBM Program temporary fix Point release Slipstream (computing) Software release life cycle Windows Update References External links Microsoft Support Lifecycle (includes Microsoft's service pack policy) Windows Service Packs List of fixes that are included in Windows XP Service Pack 3
https://en.wikipedia.org/wiki/Software%20engineering%20professionalism
Software engineering professionalism is a movement to make software engineering a profession, with aspects such as degree and certification programs, professional associations, professional ethics, and government licensing. The field is a licensed discipline in Texas in the United States (Texas Board of Professional Engineers, since 2013) and in many provinces in Canada; Engineers Australia has accredited software engineering courses since 2001 but does not license practitioners. History In 1993 the IEEE and ACM began a joint effort called JCESEP, which evolved into SWECC in 1998, to explore making software engineering into a profession. The ACM pulled out of SWECC in May 1999, objecting to its support for the Texas professionalization efforts to establish state licenses for software engineers. The ACM determined that the state of knowledge and practice in software engineering was too immature to warrant licensing, and that licensing would give false assurances of competence even if the body of knowledge were mature. The IEEE continued to support making software engineering a branch of traditional engineering. In Canada the Canadian Information Processing Society established the Information Systems Professional certification process. Also, by the late 1990s (1999 in British Columbia) the discipline of software engineering as a professional engineering discipline was officially created. This has caused some disputes between the provincial engineering associations and companies who call their developers software engineers, even though these developers have not been licensed by any engineering association. In 1999, the Panel of Software Engineering was formed as part of the settlement between Engineering Canada and the Memorial University of Newfoundland over the school's use of the term "software engineering" in the name of a computer science program. Concerns were raised that inappropriate use of the name "software engineering" to describe non-engineering programs could lead to student and public confusion, and ultimately threaten public safety. The Panel issued recommendations to create a Software Engineering Accreditation Board, but the task force created to carry out the recommendations was unable to get the various stakeholders to agree to concrete proposals, resulting in separate accreditation boards. Ethics Software engineering ethics is a large field. In some ways it began as an unrealistic attempt to define bugs as unethical. More recently it has been defined as the application of both computer science and engineering philosophy, principles, and practices to the design and development of software systems. Due to this engineering focus and the increased use of software in mission-critical and human-critical systems, where failure can result in large losses of capital and, more importantly, lives (as with the Therac-25 system), many ethical codes have been developed by a number of societies, associations and organizations. These entities, such as the ACM, IEEE, EGBC and Institute fo
https://en.wikipedia.org/wiki/Intel%20iAPX%20432
The iAPX 432 (Intel Advanced Performance Architecture) is a discontinued computer architecture introduced in 1981. It was Intel's first 32-bit processor design. The main processor of the architecture, the general data processor, is implemented as a set of two separate integrated circuits, due to technical limitations at the time. Although some early 8086, 80186 and 80286-based systems and manuals also used the iAPX prefix for marketing reasons, the iAPX 432 and the 8086 processor lines are completely separate designs with completely different instruction sets. The project started in 1975 as the 8800 (after the 8008 and the 8080) and was intended to be Intel's major design for the 1980s. Unlike the 8086, which was designed the following year as a successor to the 8080, the iAPX 432 was a radical departure from Intel's previous designs, meant for a different market niche and completely unrelated to the 8080 or x86 product lines. The iAPX 432 project is considered a commercial failure for Intel, and was discontinued in 1986. Description The iAPX 432 was referred to as a "micromainframe", designed to be programmed entirely in high-level languages. The instruction set architecture was also entirely new and a significant departure from Intel's previous 8008 and 8080 processors, as the iAPX 432 programming model is a stack machine with no visible general-purpose registers. It supports object-oriented programming, garbage collection and multitasking as well as more conventional memory management directly in hardware and microcode. Direct support for various data structures is also intended to allow modern operating systems to be implemented using far less program code than for ordinary processors. Intel iMAX 432 is a discontinued operating system for the 432, written entirely in Ada, and Ada was also the intended primary language for application programming. In some aspects, it may be seen as a high-level language computer architecture. These properties and features resulted in a hardware and microcode design that was more complex than most processors of the era, especially microprocessors. However, internal and external buses are (mostly) not wider than 16 bits, and, just like in other 32-bit microprocessors of the era (such as the 68000 or the 32016), 32-bit arithmetical instructions are implemented by a 16-bit ALU, via random logic and microcode or other kinds of sequential logic. The iAPX 432's enlarged address space over the 8080 was also limited by the fact that linear addressing of data could still only use 16-bit offsets, somewhat akin to Intel's first 8086-based designs, including the contemporary 80286 (the new 32-bit segment offsets of the 80386 architecture were described publicly in detail in 1984). Using the semiconductor technology of its day, Intel's engineers were not able to translate the design into a very efficient first implementation. Along with the lack of optimization in a premature Ada compiler, this contributed to rather slow but
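A hedged sketch of the narrow-ALU point above: a 32-bit addition built from two 16-bit operations with an explicit carry, the way a 16-bit ALU handles 32-bit operands in two passes. This illustrates the general idea in C, not Intel's actual microcode:

```c
#include <stdint.h>
#include <stdio.h>

/* Compose a 32-bit addition from two 16-bit additions plus a carry,
 * mimicking how a 16-bit ALU processes 32-bit operands in two passes. */
static uint32_t add32_via_16bit_alu(uint32_t a, uint32_t b) {
    uint32_t lo = (a & 0xFFFFu) + (b & 0xFFFFu);  /* low halves; bit 16 is the carry */
    uint32_t carry = lo >> 16;
    uint32_t hi = ((a >> 16) + (b >> 16) + carry) & 0xFFFFu;  /* high halves + carry */
    return (hi << 16) | (lo & 0xFFFFu);
}

int main(void) {
    uint32_t a = 0x0001FFFFu, b = 0x00010001u;
    /* Both lines print 00030000: the two-pass result matches a native add. */
    printf("two-pass: %08X  native: %08X\n",
           (unsigned)add32_via_16bit_alu(a, b), (unsigned)(a + b));
    return 0;
}
```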
https://en.wikipedia.org/wiki/Duffy%27s%20Tavern
Duffy's Tavern is an American radio situation comedy that ran for a decade on several networks (CBS, 1941–42; NBC-Blue Network, 1942–44; and NBC, 1944–51), concluding with the December 28, 1951, broadcast. The program often featured celebrity guest stars but always built their appearances around the misadventures of Archie, the tavern's manager, portrayed by Ed Gardner. Archie was prone to involvement in get-rich-quick schemes and romantic missteps, and constantly spoke in malapropisms and mixed metaphors. Gardner had performed the character of Archie, talking about Duffy's Tavern, as early as November 9, 1939, when he appeared on NBC's Good News of 1940. Characters and story In the early 1940s, Gardner worked as a director, writer, and producer for radio programs. In 1941, he created a character for This Is New York, a program that he was producing. The character, which Gardner played, became Archie of Duffy's Tavern. In the familiar opening, "When Irish Eyes Are Smiling" (performed either solo on an old-sounding piano or by a larger orchestra) is interrupted by the ring of a telephone and Gardner's New Yorkese accent as he answers, "Hello, Duffy's Tavern, where the elite meet to eat. Archie the manager speakin'. Duffy ain't here—oh, hello, Duffy." Owner Duffy was never heard nor seen, either on the radio program or in the 1945 film adaptation or the short-lived 1954 TV series. Archie constantly bantered with Duffy's man-crazy daughter, Miss Duffy, played by several actresses, beginning with Gardner's real-life first wife, Shirley Booth, followed by Florence Halop and, later, by actress Hazel Shermet, and especially with Clifton Finnegan (Charlie Cantor, later Sid Raymond), a likeable soul with several screws loose and a knack for falling for every other salesman's scam. Eddie the Waiter was played by Eddie Green. The pianist Fats Pichon took over the role after Green's death in 1950. Hoping to take advantage of the income-tax-free status of Puerto Rico, Gardner moved Duffy's Tavern there in 1949. Unfortunately, many guest personalities declined to make the journey to appear on the show, and it eventually went off the air in 1951. Guest stars The series featured many high-profile guest stars, including Fred Allen, Mel Allen, Lucille Ball, Joan Bennett, Nigel Bruce, Billie Burke, Bing Crosby, Gracie Fields, Rex Harrison, Susan Hayward, Bob Hope, Lena Horne, Boris Karloff, Alan Ladd, Veronica Lake, Peter Lorre, Tony Martin, Marie McDonald, Vincent Price, Gene Tierney, Arthur Treacher, and Shelley Winters. As the series progressed, Archie slipped in and out of a variety of quixotic, self-imploding plotlines, from writing an opera to faking a fortune to marry an heiress. Such situations mattered less than did the clever depiction of earthbound-but-dreaming New York life and its individualistic, often bizarre characters. Duffy's Tavern was Gardner's creation, and he oversaw its writing intently enough, drawing also on his earlier experience as a suc
https://en.wikipedia.org/wiki/Pseudo-spectral%20method
Pseudo-spectral methods, also known as discrete variable representation (DVR) methods, are a class of numerical methods used in applied mathematics and scientific computing for the solution of partial differential equations. They are closely related to spectral methods, but complement the basis by an additional pseudo-spectral basis, which allows representation of functions on a quadrature grid. This simplifies the evaluation of certain operators, and can considerably speed up the calculation when using fast algorithms such as the fast Fourier transform. Motivation with a concrete example Take an initial-value problem with periodic boundary conditions. This specific example is the Schrödinger equation for a particle in a potential V(x), but the structure is more general. In many practical partial differential equations, one has a term that involves derivatives (such as a kinetic energy contribution) and a multiplication with a function (for example, a potential). In the spectral method, the solution is expanded in a suitable set of basis functions, for example plane waves. Insertion and equating identical coefficients yields a set of ordinary differential equations for the coefficients, where the matrix elements of the potential are calculated through its explicit Fourier transform. The solution would then be obtained by truncating the expansion to N basis functions and finding a solution for the coefficients. In general, this is done by numerical methods, such as Runge–Kutta methods. For the numerical solution, the right-hand side of the ordinary differential equation has to be evaluated repeatedly at different time steps. At this point, the spectral method has a major problem with the potential term V(x). In the spectral representation, the multiplication with the function transforms into a vector-matrix multiplication, which scales as O(N²). Also, the matrix elements need to be evaluated explicitly before the differential equation for the coefficients can be solved, which requires an additional step. In the pseudo-spectral method, this term is evaluated differently. Given the coefficients, an inverse discrete Fourier transform yields the value of the function at discrete grid points. At these grid points, the function is then multiplied by the potential, and the result is Fourier-transformed back. This yields a new set of coefficients that are used instead of the matrix product. It can be shown that both methods have similar accuracy. However, the pseudo-spectral method allows the use of a fast Fourier transform, which scales as O(N log N), and is therefore significantly more efficient than the matrix multiplication. Also, the function V(x) can be used directly without evaluating any additional integrals. Technical discussion In a more abstract way, the pseudo-spectral method deals with the multiplication of two functions as part of a partial differential equation. To simplify the notation, the time dependence is dropped. Conceptually, it consists of three steps: the functions are expanded in a finite set of basi
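The displayed formulas of this article were lost in extraction. The following is a hedged reconstruction of the scheme under standard conventions (not verbatim from the original): one spatial dimension, units with hbar = m = 1, period 2π, and N plane-wave basis functions.

```latex
% Model problem: the 1D Schroedinger equation, periodic on [0, 2\pi].
\[
  i\,\partial_t \psi(x,t) = -\tfrac{1}{2}\,\partial_x^2\,\psi(x,t) + V(x)\,\psi(x,t),
  \qquad \psi(x,0) = \psi_0(x).
\]
% Spectral expansion in N plane waves and the resulting ODEs for the
% coefficients c_n(t), with V_k the Fourier coefficients of the potential:
\[
  \psi(x,t) = \sum_{n} c_n(t)\, e^{inx}
  \quad\Longrightarrow\quad
  i\,\dot{c}_n = \tfrac{1}{2}\, n^2 c_n + \sum_{m} V_{n-m}\, c_m,
  \qquad
  V_k = \frac{1}{2\pi}\int_0^{2\pi} V(x)\, e^{-ikx}\, dx.
\]
% Pseudo-spectral evaluation of the potential term: O(N log N) via the FFT
% instead of the O(N^2) matrix product \sum_m V_{n-m} c_m.
\[
  \{c_n\}
  \;\xrightarrow{\text{inverse DFT}}\;
  \{\psi(x_j)\}
  \;\xrightarrow{\;\times V(x_j)\;}\;
  \{V(x_j)\,\psi(x_j)\}
  \;\xrightarrow{\text{DFT}}\;
  \{(\widehat{V\psi})_n\}.
\]
```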
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%202003
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. In 2003, there were 11 singles that topped the chart. During the year, ten acts achieved a first U.S. number-one single, either as a lead artist or featured guest. Those acts were B2K, LL Cool J, 50 Cent, Sean Paul, Nate Dogg, Clay Aiken, Murphy Lee, Ludacris, and Shawnna. Beyoncé, despite having hit number one with Destiny's Child, also earned her first number-one song as a solo act. P. Diddy, 50 Cent, Sean Paul, and Beyoncé were the only acts to hit number one more than once, each of them reaching number one twice throughout the year. 50 Cent and Knowles had the longest-running singles of 2003: 50 Cent's first number-one single "In da Club" and Knowles' second number-one single "Baby Boy" both topped the Hot 100 for nine straight weeks, longer than any other single that topped the chart this year. Hip hop duo OutKast's "Hey Ya!" charted at the top spot for nine consecutive weeks, three of which were in 2003. Although "Hey Ya!" continued its pole position until early 2004, Billboard magazine credited it as one of the longest-running number-one singles of 2003. Rapper Eminem's "Lose Yourself" continued its 12-week chart run at the top spot in this year, but is credited as the longest-running number-one single of 2002. Knowles had the most weeks on top with 17, combining the chart runs of both "Crazy in Love" and "Baby Boy". She surpassed the record previously set by 50 Cent during this year, the latter of whom gained a total of 13 weeks at number one combining "In da Club", which is the best-performing single of 2003, and "21 Questions". With the combined chart run of "Get Busy" and "Baby Boy", the latter of which features him, dancehall artist Sean Paul became the most successful Jamaican-born artist in terms of chart performance on the Billboard Hot 100. Seven collaboration singles topped the chart in 2003, setting a record for the most number-one collaborations in a calendar year since the onset of the rock era in 1955; the first number-one collaboration to have topped the chart was in 1967 with "Somethin' Stupid", a song by singer Frank Sinatra and his daughter Nancy Sinatra. The record was later tied in 2004, and broken in 2006 with eight number-one collaborations. American Idol contestant Clay Aiken's debut single "This Is the Night" debuted at number one on the Billboard Hot 100, becoming the first single to do so since 1998. "Crazy in Love", which topped the chart for eight straight weeks in July and August, has been credited as 2003's Song of the Summer. Chart history Number-one artists See also 2003 in music List of Billboard number-one singles Billboard Year-End Hot 100 singles of 2003 References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition Joel Whitburn's Top Pop Singles 1955-200
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%202004
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. In 2004, there were 11 singles that topped the chart. Although there were 12 singles that claimed the top spot in the 52 issues of the chart, hip hop duo OutKast's "Hey Ya!" began its peak position in 2003, and is thus excluded. In 2004, 13 acts achieved their first U.S. number-one single, either as a lead artist or featured guest: Sleepy Brown, Twista, Kanye West, Jamie Foxx, Lil Jon, Fantasia Barrino, Juvenile, Soulja Slim, Terror Squad, Ciara, Petey Pablo, Snoop Dogg, and Pharrell. Barrino and Ciara were the only acts to have earned a number-one debut single this year. R&B singer Usher had four number-one singles that appeared in the 2004 issues, and OutKast had two. Soulja Slim became the sixth artist to have a number-one song posthumously, after his death in November 2003. During the year, seven collaboration singles reached the number-one position, tying the record set in 2003. Usher's "Yeah!" is the longest-running number-one single of 2004, remaining in that position for 12 straight weeks. It is followed by his other single "Burn", which spent eight non-consecutive weeks at the top spot. Other singles with extended chart runs include Ciara's "Goodies", which features Petey Pablo, and Usher's "My Boo", a duet with Alicia Keys, which topped the chart for seven and six weeks, respectively. Usher is the most successful act of 2004. He had four singles that topped the Billboard Hot 100: "Yeah!", "Burn", "Confessions Part II", and "My Boo"; he is the only act in 2004 to have earned multiple number-one singles. Overall, Usher had 28 weeks on top in a calendar year, becoming the first act to have achieved such an extended chart run on the Billboard Hot 100. The feat broke the record set by Glenn Miller and His Orchestra in 1940; their records spent 26 consecutive weeks at the top spot of Record Buying Guide, a jukebox chart Billboard magazine published in the late 1930s and early 1940s. "Yeah!" is the best-performing single of the calendar year, having topped the Top Hot 100 Hits of 2004. Following periods of fluctuating success, urban music attained commercial dominance during the early 2000s, which featured massive crossover success on the Billboard charts by R&B and hip hop artists. In 2004, all 12 songs that topped the Billboard Hot 100 were performed by African-American recording artists and accounted for 80% of the number-one R&B hits that year. Along with Usher's streak of singles, Top 40 radio and both pop and R&B charts were topped by OutKast's "Hey Ya!", Snoop Dogg's "Drop It Like It's Hot", Terror Squad's "Lean Back", and Ciara's "Goodies". Chris Molanphy of The Village Voice later remarked that "by the early 2000s, urban music was pop music." Chart history Number-one artists
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%202002
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. In 2002, there were seven singles that topped the chart, the lowest number of singles to top the chart in a single year ever (if the two songs which peaked in 2001 are included, 2002 would have the second-lowest number of chart-topping singles in a year, behind 2005). Although nine singles reached number one in fifty-two issues of the magazine in the calendar year, two songs began their peak position in 2001 and are thus excluded. In 2002, five acts earned their first U.S. number-one single, either as a lead artist or featured guest. These artists were Ashanti, Nelly, Kelly Clarkson, and Eminem. Kelly Rowland, despite having hit number one with Destiny's Child, also earned her first number-one song as a solo act. In 2002, Ja Rule, Ashanti, and Nelly each had two number-one singles on the Billboard Hot 100. Most of the number-one singles in 2002 were extended chart-toppers. "Lose Yourself" is the longest-running single, topping the Billboard Hot 100 for 12 consecutive weeks, eight of which were in this calendar year. "Foolish" and "Dilemma" both stayed at number one for 10 weeks, the latter of which was non-consecutive. "Ain't It Funny" by Jennifer Lopez, in its remix version with Ja Rule, spent six weeks at number one. Rock band Nickelback's "How You Remind Me", which first peaked at number one in 2001, is the best-performing single of 2002. "Lose Yourself", which is from the soundtrack to the 2002 film 8 Mile, is the second-most-successful soundtrack song in the entire rock era, behind Whitney Houston's version of "I Will Always Love You", which topped the chart for 14 weeks. "Lose Yourself" is also the longest-running Oscar-winning number-one song since singer-actor Bing Crosby's "White Christmas" had 14 weeks on top in the 1940s. "A Moment Like This" is noted for its fifty-two-to-one leap in 2002, breaking the 38-year-old record set by The Beatles' "Can't Buy Me Love", which jumped from number twenty-seven to one. Nelly became the first act to have consecutive number-one singles as the lead artist since 1994, when Boyz II Men had consecutive number ones. Chart history Number-one artists See also 2002 in music List of Billboard number-one singles Billboard Year-End Hot 100 singles of 2002 References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition Joel Whitburn's Top Pop Singles 1955-2008, 12th Edition Joel Whitburn Presents the Billboard Hot 100 Charts: The 2000s Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine.
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%202001
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. In 2001, there were 14 singles that topped the chart in 52 issue dates. Although 15 singles claimed the top position throughout the year, Destiny's Child's "Independent Women Part I" is credited to 2000, and is thus excluded. During the year, 13 acts achieved a first U.S. number-one single, namely: Shaggy, Ricardo "RikRok" Ducent, OutKast, Mystikal, Crazy Town, Rayvon, Lil' Kim, Mýa, Pink, Alicia Keys, Ja Rule, Mary J. Blige, and Nickelback. Destiny's Child, Usher and Shaggy each had two number-one singles in 2001. Janet Jackson's "All for You" is the longest-running single of the year, staying at number one for seven consecutive weeks. 2001 was the first year since 1993 without at least one number-one hit with a double-digit week run at the top. "All for You" also gave Jackson the tenth Hot 100 number one of her career, making her the female artist with the fourth-most number ones in the rock era. Other singles with extended chart runs include Alicia Keys' "Fallin'" and Mary J. Blige's "Family Affair", both of which stayed on top for six weeks. Chart history Number-one artists See also 2001 in music List of Billboard number-one singles Billboard Year-End Hot 100 singles of 2001 References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition Joel Whitburn's Top Pop Singles 1955-2008, 12th Edition Joel Whitburn Presents the Billboard Hot 100 Charts: The 2000s Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine.
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%202000
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. There were 18 number-one singles in 2000. The first of these, Santana's "Smooth", spent two weeks at the top in January, concluding a 12-week run that had begun in October 1999. During the year, 10 acts each achieved a first U.S. number-one single, either as a lead artist or featured guest: Joe, 98 Degrees, Lonestar, The Product G&B, Aaliyah, Vertical Horizon, Matchbox Twenty, NSYNC, Sisqó, and Creed. Destiny's Child and Christina Aguilera were the only acts to earn two number-one singles in this year. There were two collaboration singles that reached number one on the chart: "Maria Maria" by Santana featuring The Product G&B, and "Thank God I Found You" by Mariah Carey featuring Joe and 98 Degrees. With the latter single, Carey set the record for most consecutive years charting a number-one single on the Billboard Hot 100, with 11 years from 1990 (beginning with "Vision of Love") through 2000; the song was her 15th number-one single on the chart. Destiny's Child's "Independent Women" is the second-longest-running single of 2000, topping the chart for seven consecutive weeks, with another four consecutive weeks in the 2001 chart year. Santana's "Maria Maria" is the longest-running single, staying at number one for 10 straight weeks. Other singles with extended chart runs include pop singer Madonna's "Music" and Christina Aguilera's "Come On Over Baby (All I Want Is You)", each of which topped the chart for four weeks. Chart history Number-one artists See also 2000 in music List of Billboard number-one singles References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition () Joel Whitburn's Top Pop Singles 1955-2008, 12th Edition () Joel Whitburn Presents the Billboard Hot 100 Charts: The 2000s () Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine. 2000 record charts 2000
https://en.wikipedia.org/wiki/Code%20Red%20II
Code Red II is a computer worm similar to the Code Red worm. Released two weeks after Code Red on August 4, 2001, it is similar in behavior to the original, but analysis showed it to be a new worm instead of a variant. Unlike the first worm, Code Red II carries no attack payload of its own; instead it installs a backdoor on the infected machine that allows later attacks. The worm was designed to exploit a security hole in the indexing software included as part of Microsoft's Internet Information Server (IIS) web server software. A typical signature of the Code Red II worm appears in a web server log as: GET /default.ida?XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX %u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801 %u9090%u6858%ucbd3%u7801%u9090%u9090%u8190%u00c3 %u0003%u8b00%u531b%u53ff%u0078%u0000%u00=a HTTP/1.0 While the original worm tried to infect other computers at random, Code Red II tries to infect machines on the same subnet as the infected machine. Microsoft had released a security patch for IIS on June 18, 2001, that fixed the security hole; however, not everyone had patched their servers, including Microsoft itself. See also Nimda Timeline of computer viruses and worms References External links Original Analysis of Code Red II - analysis by Steve Friedl ANALYSIS: CodeRed II Worm - analysis by eEye Digital Security Exploit-based worms Hacking in the 2000s
https://en.wikipedia.org/wiki/Data%20striping
In computer data storage, data striping is the technique of segmenting logically sequential data, such as a file, so that consecutive segments are stored on different physical storage devices. Striping is useful when a processing device requests data more quickly than a single storage device can provide it. By spreading segments across multiple devices which can be accessed concurrently, total data throughput is increased. It is also a useful method for balancing I/O load across an array of disks. Striping is used across disk drives in redundant array of independent disks (RAID) storage, network interface controllers, disk arrays, different computers in clustered file systems and grid-oriented storage, and RAM in some systems. Method One method of striping interleaves sequential segments on storage devices in a round-robin fashion from the beginning of the data sequence. This works well for streaming data, but subsequent random accesses require knowing which device contains the data. If the data is stored such that the physical address of each data segment maps one-to-one to a particular device, the device holding any requested segment can be calculated from the address alone, without knowing the offset of the data within the full sequence. Other methods might be employed in which sequential segments are not stored on sequential devices. Such non-sequential interleaving can have benefits in some error correction schemes. Advantages and disadvantages The main advantages of striping are performance and throughput: interleaving accesses in time allows the lower throughput of each individual storage device to be multiplied, in aggregate, by the number of storage devices employed. Increased throughput allows the data processing device to continue its work without interruption, and thereby finish its procedures more quickly. This is manifested in improved performance of the data processing. Because different segments of data are kept on different storage devices, the failure of any one device corrupts the full data sequence. In effect, the failure rate of the array of storage devices approaches the sum of the failure rates of the individual devices. This disadvantage of striping can be overcome by the storage of redundant information, such as parity, for the purpose of error correction. In such a system, the disadvantage is overcome at the cost of requiring extra storage. Terminology The segments of sequential data written to or read from a disk before the operation continues on the next disk are usually called chunks, strides or stripe units, while their logical groups forming single striped operations are called strips or stripes. The amount of data in one chunk (stripe unit), often denominated in bytes, is variously referred to as the chunk size, stride size, stripe size, stripe depth or stripe length. The number of data disks in the array is sometimes called the stripe width, but
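The device selection under round-robin striping is simple modular arithmetic. The following sketch (in Python; the function name, parameters, and example values are illustrative assumptions, not taken from any particular RAID implementation) maps a logical byte offset to a device index and an offset on that device:

def locate(logical_offset, chunk_size, n_devices):
    """Map a logical byte offset to (device index, byte offset on that device)."""
    chunk_index = logical_offset // chunk_size   # which stripe unit holds the byte
    within = logical_offset % chunk_size         # position inside that stripe unit
    device = chunk_index % n_devices             # round-robin choice of device
    stripe = chunk_index // n_devices            # full stripes preceding this unit
    return device, stripe * chunk_size + within

# Example: 64 KiB stripe units across 4 devices.
print(locate(300_000, 64 * 1024, 4))             # (0, 103392)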
https://en.wikipedia.org/wiki/Udi%20Manber
Udi Manber is an Israeli computer scientist. He is one of the authors of agrep and GLIMPSE. After a career in engineering and management, he worked on medical research. Education He earned his bachelor's degree in mathematics in 1975 and his master's degree in 1978, both from the Technion in Israel. At the University of Washington, he earned another master's degree in 1981 and his PhD in computer science in 1982. Career He won a Presidential Young Investigator Award in 1985, three best-paper awards, and the annual Usenix Software Tools User Group Award in 1999. Together with Gene Myers he developed the suffix array, a data structure for string matching. He was a professor at the University of Arizona and authored several articles while there, including "Using Induction to Design Algorithms", which summarizes his textbook Introduction to Algorithms: A Creative Approach (still in print). He became the chief scientist at Yahoo! in 1998. In 2002, he joined Amazon.com, where he became "chief algorithms officer" and a vice president. He later was appointed CEO of the Amazon subsidiary company A9.com, and filed a patent on behalf of Amazon. In 2004, Google promoted sponsored listings for its own recruiting whenever someone searched for his name on Google's search engine. In 2006, he was hired by Google as one of their vice presidents of engineering. In December 2007, he announced Knol, Google's project to create a knowledge repository. By October 2010, he was responsible for all of Google's search products. In October 2014, Manber was named the vice president of engineering at YouTube. In February 2015, Manber announced that he was leaving YouTube for the National Institutes of Health. He left the role in 2016. In February 2017, Manber went to work for the Department of Medicine at the University of California, San Francisco, and became a technical advisor to UCSF's Institute for Computational Health Sciences. In October 2018, it was reported that Manber was joining Anthem as its chief AI officer. Manber serves on the board of directors for Twiggle and is an advisor to Amino Health. In 2020, Manber announced a new venture called Weekly Medical News. References External links IT Conversations podcast about Google search Google employees American technology chief executives American computer businesspeople American computer scientists Israeli computer scientists Jewish American writers Year of birth missing (living people) Living people University of Arizona faculty University of Washington alumni Technion – Israel Institute of Technology alumni University of California, San Francisco staff 21st-century American Jews
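The suffix array mentioned above can be illustrated with a naive construction (Python; this O(n² log n) one-liner is for intuition only, whereas Manber and Myers' actual algorithm builds the array in O(n log n)):

s = "banana"
sa = sorted(range(len(s)), key=lambda i: s[i:])   # indices of suffixes in sorted order
print(sa)   # [5, 3, 1, 0, 4, 2]: "a" < "ana" < "anana" < "banana" < "na" < "nana"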
https://en.wikipedia.org/wiki/LightWave%203D
LightWave 3D is a 3D computer graphics program developed by LightWave Digital. It has been used in films, television, motion graphics, digital matte painting, visual effects, video game development, product design, architectural visualizations, virtual production, music videos, pre-visualizations and advertising. Overview LightWave is a software package used for rendering 3D images, both animated and static. It includes a fast rendering engine that supports such advanced features as realistic reflection, radiosity, caustics, and 999 render nodes. The 3D modeling component supports both polygon modeling and subdivision surfaces. The animation component has features such as inverse and forward kinematics for character animation, particle systems and dynamics. Programmers can expand LightWave's capabilities using an included SDK which offers Python and LScript (a proprietary scripting language) scripting interfaces as well as a C language interface. History In 1988, Allen Hastings created a rendering and animation program called VideoScape 3D, and his friend Stuart Ferguson created a complementary 3D modeling program called Modeler, both sold by Aegis Software. NewTek planned to incorporate VideoScape and Modeler into its video editing suite, Video Toaster. The product was originally to be called "NewTek 3D Animation System for the Amiga", but Hastings later came up with the name "LightWave 3D", inspired by two contemporary high-end 3D packages: Intelligent Light and Wavefront. In 1990, the Video Toaster suite was released, incorporating LightWave 3D, and running on the Amiga computer. LightWave 3D has been available as a standalone application since 1994, and version 9.3 runs on both Mac OS X and Windows platforms. Starting with the release of version 9.3, the Mac OS X version has been updated to be a Universal Binary. The last known standalone revision for the Amiga was LightWave 5.0, released in 1995. Shortly after the release of the first PC version, NewTek discontinued the Amiga version, citing the platform's uncertain future. Versions were soon released for the DEC Alpha, Silicon Graphics (SGI), and Macintosh platforms. LightWave was used to create special effects for the television series Babylon 5, Star Trek: Voyager, Space: Above and Beyond, seaQuest DSV, Lost, and Battlestar Galactica. The program was also utilized in the production of Titanic as well as Avatar, Sin City, and 300. The short film 405 was produced by two artists from their homes using LightWave. In the Finnish Star Trek parody Star Wreck: In the Pirkinning, most of the visual effects were done in LightWave by Finnish filmmaker Samuli Torssonen, who produced the VFX work for the feature film Iron Sky. The film Jimmy Neutron: Boy Genius was made entirely in LightWave 6 and messiah:Studio. In 2007, the first feature film to be 3D animated entirely by one person made its debut, Flatland the Film by Ladd Ehlinger Jr. It was animated entirely in LightWave 3D 7.5 and 8.0. In its ninth version, the market f
https://en.wikipedia.org/wiki/Program%20derivation
In computer science, program derivation is the derivation of a program from its specification, by mathematical means. To derive a program means to write a formal specification, which is usually non-executable, and then apply mathematically correct rules in order to obtain an executable program satisfying that specification. The program thus obtained is then correct by construction. Program and correctness proof are constructed together. The approach usually taken in formal verification is to first write a program, and then provide a proof that it conforms to a given specification. The main problems with this are that: the resulting proof is often long and cumbersome; no insight is given as to how the program was developed; it appears "like a rabbit out of a hat"; should the program happen to be incorrect in some subtle way, the attempt to verify it is likely to be long and certain to be fruitless. Program derivation tries to remedy these shortcomings by: keeping proofs shorter, by development of appropriate mathematical notations; making design decisions through formal manipulation of the specification. Terms that are roughly synonymous with program derivation are: transformational programming, algorithmics, deductive programming. The Bird-Meertens Formalism is an approach to program derivation. Approaches to achieving correctness in distributed computing include research languages such as the P programming language. See also Automatic programming Hoare logic Program refinement Design by contract Program synthesis Proof-carrying code References Edsger W. Dijkstra, Wim H. J. Feijen, A Method of Programming, Addison-Wesley, 1988, 188 pages Edward Cohen, Programming in the 1990s, Springer-Verlag, 1990 Anne Kaldewaij, Programming: The Derivation of Algorithms, Prentice-Hall, 1990, 216 pages David Gries, The Science of Programming, Springer-Verlag, 1981, 350 pages Carroll Morgan, Programming from Specifications, International Series in Computer Science (2nd ed.), Prentice-Hall, 1998. Eric C.R. Hehner, a Practical Theory of Programming, 2008, 235 pages A.J.M. van Gasteren. On the Shape of Mathematical Arguments. Lecture Notes in Computer Science #445, Springer-Verlag, 1990. Teaches how to write proofs with clarity and precision. Martin Rem. "Small Programming Exercises", appeared in Science of Computer Programming, Vol.3 (1983) through Vol.14 (1990). Roland Backhouse. Program Construction: Calculating Implementations from Specifications. Wiley, 2003. . Derrick G. Kourie, Bruce W. Watson. The Correctness-by-Construction Approach to Programming. Springer-Verlag, 2012. . Provides a step-by-step explanation of how to derive mathematically correct algorithms using small and tractable refinements.
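As a toy illustration of the style (our own minimal sketch, not taken from the works cited above), consider deriving a summation loop from the specification "establish s = a[0] + ... + a[n-1]". The derivation chooses an invariant first and lets the loop body follow from it; here in Python, with the invariant made explicit as an assertion:

def summation(a):
    """Postcondition: the returned s equals the sum of all elements of a."""
    s, i = 0, 0
    # Invariant chosen during derivation: s == sum(a[:i]) and 0 <= i <= len(a).
    while i != len(a):
        assert s == sum(a[:i])        # the invariant holds before each step
        s, i = s + a[i], i + 1        # chosen precisely to re-establish it
    # Invariant plus negated guard (i == len(a)) yields the postcondition.
    assert s == sum(a)
    return s

print(summation([3, 1, 4, 1, 5]))     # 14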
https://en.wikipedia.org/wiki/Skipjack%20%28cipher%29
In cryptography, Skipjack is a block cipher—an algorithm for encryption—developed by the U.S. National Security Agency (NSA). Initially classified, it was originally intended for use in the controversial Clipper chip. Subsequently, the algorithm was declassified. History of Skipjack Skipjack was proposed as the encryption algorithm in a US government-sponsored scheme of key escrow, and the cipher was provided for use in the Clipper chip, implemented in tamperproof hardware. Skipjack is used only for encryption; the key escrow is achieved through the use of a separate mechanism known as the Law Enforcement Access Field (LEAF). The algorithm was initially secret, and was regarded with considerable suspicion by many for that reason. It was declassified on 24 June 1998, shortly after its basic design principle had been discovered independently by the public cryptography community. To ensure public confidence in the algorithm, several academic researchers from outside the government were called in to evaluate the algorithm. The researchers found no problems with either the algorithm itself or the evaluation process. Moreover, their report gave some insight into the (classified) history and development of Skipjack. In March 2016, NIST published a draft of its cryptographic standard which no longer certifies Skipjack for US government applications. Description Skipjack uses an 80-bit key to encrypt or decrypt 64-bit data blocks. It is an unbalanced Feistel network with 32 rounds. It was designed to be used in secured phones. Cryptanalysis Eli Biham and Adi Shamir discovered an attack against 16 of the 32 rounds within one day of declassification, and (with Alex Biryukov) extended this to 31 of the 32 rounds (but with an attack only slightly faster than exhaustive search) within months using impossible differential cryptanalysis. A truncated differential attack was also published against 28 rounds of the Skipjack cipher. A claimed attack against the full cipher was published in 2002, but a later paper, with the attack's designer as a co-author, clarified in 2009 that no attack on the full 32-round cipher was then known. In pop culture An algorithm named Skipjack forms part of the back-story to Dan Brown's 1998 novel Digital Fortress. In Brown's novel, Skipjack is proposed as the new public-key encryption standard, along with a back door secretly inserted by the NSA ("a few lines of cunning programming") which would have allowed them to decrypt Skipjack using a secret password and thereby "read the world's email". When details of the cipher are publicly released, programmer Greg Hale discovers and announces details of the backdoor. In real life, there is evidence to suggest that the NSA has added back doors to at least one algorithm; the Dual_EC_DRBG random number algorithm may contain a backdoor accessible only to the NSA. Additionally, in the Half-Life 2 modification Dystopia, the "encryption" program used in cyberspace apparently uses both Skipjack and Bl
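The unbalanced Feistel structure can be sketched as follows (Python). This is a structural illustration only, under stated assumptions: the data flow mimics Skipjack's "Rule A" stepping, in which one 16-bit word of the 64-bit block passes through a keyed permutation G per round, but the G below is an invented rotate-and-xor placeholder, not Skipjack's actual G (a four-round mini-Feistel over a fixed S-box called the F-table), and real Skipjack alternates two stepping rules (A and B):

def G(w, key, rnd):
    """Stand-in keyed 16-bit permutation (for shape only, NOT Skipjack's G)."""
    for i in range(4):
        w = ((w << 3) | (w >> 13)) & 0xFFFF    # rotate left by 3 (bijective)
        w ^= key[(4 * rnd + i) % len(key)]     # mix in one key byte (bijective)
    return w

def rounds(block, key, n=32):
    """Apply n unbalanced Feistel steps to a 64-bit block, Rule-A style."""
    w1, w2, w3, w4 = [(block >> s) & 0xFFFF for s in (48, 32, 16, 0)]
    for counter in range(1, n + 1):
        g = G(w1, key, counter)
        w1, w2, w3, w4 = g ^ w4 ^ counter, g, w2, w3   # one word transformed per round
    return (w1 << 48) | (w2 << 32) | (w3 << 16) | w4

print(hex(rounds(0x0123456789ABCDEF, bytes(range(10)))))   # 80-bit (10-byte) toy key

Because G is a permutation, each step is invertible, so the sketch really is a (toy) cipher: w1 is recovered by inverting G on the second word, after which the remaining words unshift.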
https://en.wikipedia.org/wiki/Open%20Services%20Access
Open Service Access (OSA) is part of the third-generation mobile telecommunications network, UMTS. OSA describes how services are designed in a UMTS network. The standards for OSA are being developed as part of the 3rd Generation Partnership Project (3GPP). The standards for OSA are published by ETSI and 3GPP. The API for OSA is called Parlay (or Parlay/OSA or OSA/Parlay), as the APIs are developed jointly in collaboration by 3GPP, ETSI, and the Parlay Group. These APIs can be freely downloaded from the web. OSA is sometimes incorrectly expanded as Open Services Architecture, or even confused with open systems architecture. See also Parlay Group CAMEL IP Multimedia Subsystem Multitier architecture Service layer External links Download APIs The joint 3GPP/ETSI/Parlay work group UMTS
https://en.wikipedia.org/wiki/Geometric%20hashing
In computer science, geometric hashing is a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation, though extensions exist to other object representations and transformations. In an off-line step, the objects are encoded by treating each pair of points as a geometric basis. The remaining points can be represented in an invariant fashion with respect to this basis using two parameters. For each point, its quantized transformed coordinates are stored in the hash table as a key, and indices of the basis points as a value. Then a new pair of basis points is selected, and the process is repeated. In the on-line (recognition) step, randomly selected pairs of data points are considered as candidate bases. For each candidate basis, the remaining data points are encoded according to the basis and possible correspondences from the object are found in the previously constructed table. The candidate basis is accepted if a sufficiently large number of the data points index a consistent object basis. Geometric hashing was originally suggested in computer vision for object recognition in 2D and 3D, but later was applied to different problems such as structural alignment of proteins. Geometric hashing in computer vision Geometric hashing is a method used for object recognition. Let's say that we want to check if a model image can be seen in an input image. This can be accomplished with geometric hashing. The method can also be used to recognize one of multiple objects in a base; in this case the hash table should store not only the pose information but also the index of the object model in the base. Example For simplicity, this example uses only a few point features and assumes that their descriptors are given by their coordinates only (in practice local descriptors such as SIFT could be used for indexing). Training Phase Find the model's feature points. Assume that five feature points are found in the model image. Introduce a basis to describe the locations of the feature points. For 2D space and a similarity transformation the basis is defined by a pair of points. The point of origin is placed in the middle of the segment connecting the two points (P2, P4 in our example), the x axis is directed towards one of them, and the y axis is orthogonal to it and goes through the origin. The scale is selected such that the absolute value of the x coordinate for both basis points is 1. Describe feature locations with respect to that basis, i.e. compute the projections onto the new coordinate axes. The coordinates should be discretised to make recognition robust to noise; here we take a bin size of 0.25. Store the basis in a hash table indexed by the discretised features (only transformed coordinates in this case). If there were more objects to match with, we should also store the object number along with the basis pair. Repeat the process for a different basis pair (Step 2).
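The scheme above condenses into a short sketch (Python; the point sets, bin size handling, and vote threshold are illustrative assumptions, and a real system would index richer descriptors such as SIFT):

from collections import defaultdict
from itertools import permutations

BIN = 0.25  # quantisation step, as in the worked example above

def to_basis(p, a, b):
    """Coordinates of p in the basis of ordered pair (a, b): origin at the
    midpoint of ab, x axis towards b, half the length of ab as the unit."""
    ox, oy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
    ux, uy = b[0] - ox, b[1] - oy            # basis vector; b maps to (1, 0)
    n = ux * ux + uy * uy
    px, py = p[0] - ox, p[1] - oy
    x = (px * ux + py * uy) / n              # rotate and scale: projection on x
    y = (-px * uy + py * ux) / n             # projection on the orthogonal axis
    return round(x / BIN), round(y / BIN)    # discretise into bins

def train(model):
    """Hash every non-basis point of every ordered basis pair."""
    table = defaultdict(list)
    for a, b in permutations(range(len(model)), 2):
        for i, p in enumerate(model):
            if i not in (a, b):
                table[to_basis(p, model[a], model[b])].append((a, b))
    return table

def recognise(table, scene, a, b, threshold=3):
    """Tally votes for model bases consistent with scene basis (a, b)."""
    votes = defaultdict(int)
    for i, p in enumerate(scene):
        if i not in (a, b):
            for basis in table.get(to_basis(p, scene[a], scene[b]), ()):
                votes[basis] += 1
    return {basis: v for basis, v in votes.items() if v >= threshold}

model = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5), (0.5, -1.0), (2.5, 1.0)]
scene = [(x + 3.0, y - 1.0) for x, y in model]     # a translated copy
print(recognise(train(model), scene, 0, 1))        # the true basis (0, 1) wins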
https://en.wikipedia.org/wiki/Patch%20%28computing%29
A patch is a set of changes to a computer program or its supporting data designed to update, fix, or improve it. This includes fixing security vulnerabilities and other bugs, with such patches usually being called bugfixes or bug fixes. Patches are often written to improve the functionality, usability, or performance of a program. The majority of patches are provided by software vendors for operating system and application updates. Patches may be installed either under programmed control or by a human programmer using an editing tool or a debugger. They may be applied to program files on a storage device, or in computer memory. Patches may be permanent (until patched again) or temporary. Patching makes possible the modification of compiled and machine language object programs when the source code is unavailable. This demands a thorough understanding of the inner workings of the object code by the person creating the patch, which is difficult without close study of the source code. Someone unfamiliar with the program being patched may install a patch using a patch utility created by another person. Even when the source code is available, patching makes possible the installation of small changes to the object program without the need to recompile or reassemble. For minor changes to software, it is often easier and more economical to distribute patches to users rather than redistributing a newly recompiled or reassembled program. Although meant to fix problems, poorly designed patches can sometimes introduce new problems (see software regressions). In some special cases updates may knowingly break the functionality or disable a device, for instance, by removing components for which the update provider is no longer licensed. Patch management is a part of lifecycle management, and is the process of using a strategy and plan of what patches should be applied to which systems at a specified time. Types Binary patches Patches for proprietary software are typically distributed as executable files instead of source code. When executed these files load a program into memory which manages the installation of the patch code into the target program(s) on disk. Patches for other software are typically distributed as data files containing the patch code. These are read by a patch utility program which performs the installation. This utility modifies the target program's executable file—the program's machine code—typically by overwriting its bytes with bytes representing the new patch code. If the new code will fit in the space (number of bytes) occupied by the old code, it may be put in place by overwriting directly over the old code. This is called an inline patch. If the new code is bigger than the old code, the patch utility will append load record(s) containing the new code to the object file of the target program being patched. When the patched program is run, execution is directed to the new code with branch instructions (jumps or
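An inline patch of the kind just described can be illustrated with a few lines of Python (a toy sketch; the offsets and byte values are invented for the example, and the verify-before-write step guards against patching the wrong build):

import tempfile, os

def apply_inline_patch(path, offset, old, new):
    """Overwrite `old` bytes at `offset` with `new`, which must fit in place."""
    assert len(new) <= len(old), "an inline patch must fit in the existing space"
    with open(path, "r+b") as f:
        f.seek(offset)
        if f.read(len(old)) != old:            # verify we patch the expected bytes
            raise ValueError("target bytes not found; wrong file or version?")
        f.seek(offset)
        f.write(new.ljust(len(old), b"\x90"))  # pad with x86 NOPs if shorter

# Demonstration on a throwaway file: change a conditional jump to an
# unconditional one (0x74 'je' -> 0xEB 'jmp' in x86 machine code).
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(bytes(32) + b"\x74\x10" + bytes(8))
tmp.close()
apply_inline_patch(tmp.name, 32, b"\x74\x10", b"\xeb\x10")
os.unlink(tmp.name)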
https://en.wikipedia.org/wiki/Reality%20Check%20%28American%20TV%20series%29
Reality Check was a 1995 television show starring Ryan Seacrest as Jack Craft, a 19-year-old inventor who gets stuck in his computer mainframe project on June 8, 1995. The two Bonner siblings (Samantha and Nicholas) reactivate the computer on September 17, 1995, attempting to get Jack Craft out of the mainframe, while also encountering additional members of the project. The show was broadcast in syndication with each episode running for 15 minutes including commercials. It was produced in association with S & S Productions and ran for fourteen episodes. Characters Abigail Gustafson - Samantha Bonner John Aaron Bennett - Nicholas Bonner Ryan Seacrest - Jack Craft Tom Greer - Will Maria Cabini - Isis Blake Heron - Bud McNeight Yasmine Seyfi - Yasmine Shanna Marsha Crenshaw - DEV the computer, and additional voices Mike Dyche - Glitch and voices Episodes "Note Of A Different Color" - Samantha composes an Earth Day song with the help of animated computer program Mr. Re. "The Great Escape" "The Ole Ballgame" - Nicholas learns about swinging strategies with the help of Jack and guest star Terry Pendleton. ? - This episode travelled through time, visiting the 1960s, the 1970s (featuring Jack Craft), the 1980s (featuring Isis), and the 1990s. External links 1995 American television series debuts 1990s American children's television series Mainframe computers
https://en.wikipedia.org/wiki/Logistics%20automation
Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems. Logistics automation systems can powerfully complement the facilities provided by these higher level computer systems. The focus on an individual node within a wider logistics network allows systems to be highly tailored to the requirements of that node. Components Logistics automation systems comprise a variety of hardware and software components: Fixed machinery Automated storage and retrieval systems, including: Cranes serve a rack of locations, allowing many levels of stock to be stacked vertically, and allowing for higher storage densities and better space utilization than alternatives. In systems produced by Amazon Robotics, automated guided vehicles move items to a human picker. Conveyors: Containers can enter automated conveyors in one area of the warehouse and, either through hard-coded rules or data input, be moved to a selected destination. Vertical carousels based on the paternoster lift system or using space optimization, similar to vending machines, but on a larger scale. Sortation systems: similar to conveyors but typically with higher capacity and able to divert containers more quickly. Typically used to distribute high volumes of small cartons to a large set of locations. Industrial robots: four- to six-axis industrial robots, e.g. palleting robots, are used for palleting, depalleting, packaging, commissioning and order picking. Typically all of these will automatically identify and track containers using barcodes or, increasingly, RFID tags. Motion check weighers may be used to reject cases or individual products that are under or over their specified weight. They are often used in kitting conveyor lines to ensure all pieces belonging in the kit are present. Mobile technology Radio data terminals: these are handheld or truck-mounted terminals which connect by radio to logistics automation software and provide instructions to operators moving throughout the warehouse. Many also have barcode scanners to allow identification of containers more quickly and accurately than manual keyboard entry. Software Integration software: this provides overall control of the automation machinery and allows cranes to be connected to conveyors for seamless stock movements. Operational control software: provides low-level decision-making, such as where to store incoming containers, and where to retrieve them when requested. Business control software: provides higher-level functionality, such as identification of incoming deliveries/stock, scheduling order fulfillment, and assignment of stock to outgoing trailers. Benefits of logistics automation A typical warehouse or distribution center will receive stock of a
https://en.wikipedia.org/wiki/Symbol%20table
In computer science, a symbol table is a data structure used by a language translator such as a compiler or interpreter, where each identifier (or symbol), constant, procedure and function in a program's source code is associated with information relating to its declaration or appearance in the source. In other words, the entries of a symbol table store the information related to the entry's corresponding symbol. Background A symbol table may only exist in memory during the translation process, or it may be embedded in the output of the translation, such as in an ABI object file for later use. For example, it might be used during an interactive debugging session, or as a resource for formatting a diagnostic report during or after execution of a program. Description The minimum information contained in a symbol table used by a translator and intermediate representation (IR) includes the symbol's name and its location or address. For a compiler targeting a platform with a concept of relocatability, it will also contain relocatability attributes (absolute, relocatable, etc.) and needed relocation information for relocatable symbols. Symbol tables for high-level programming languages may store the symbol's type: string, integer, floating-point, etc., its size, its dimensions, and its bounds. Not all of this information is included in the output file, but may be provided for use in debugging. In many cases, the symbol's cross-reference information is stored with or linked to the symbol table. Most compilers print some or all of this information in symbol table and cross-reference listings at the end of translation. Implementation Numerous data structures are available for implementing symbol tables. Trees, linear lists and self-organizing lists can all be used to implement a symbol table. The symbol table is accessed by most phases of a compiler, beginning with lexical analysis, and continuing through optimization. A compiler may use one large symbol table for all symbols or use separated, or hierarchical, symbol tables for different scopes. For example, in a scoped language such as Algol or PL/I a symbol "p" can be declared separately in several procedures, perhaps with different attributes. The scope of each declaration is the section of the program in which references to "p" resolve to that declaration. Each declaration represents a unique identifier "p". The symbol table must have some means of differentiating references to the different "p"s. A common data structure used to implement symbol tables is the hash table. The time for searching in hash tables is independent of the number of elements stored in the table, so it is efficient for a large number of elements. It also simplifies the classification of literals by including the classification in the calculation of the hash key. As the lexical analyser spends a great proportion of its time looking up the symbol table, this activity has a crucial effect on the overall speed of the
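A hierarchical (scoped) symbol table of the kind described above can be sketched with one hash table (here a Python dict) per scope; the attribute fields are placeholders for whatever a particular compiler records:

class SymbolTable:
    def __init__(self, parent=None):
        self.parent = parent        # enclosing scope, None for the global scope
        self.symbols = {}           # name -> attribute record (a hash table)

    def declare(self, name, **attrs):
        if name in self.symbols:
            raise KeyError(f"redeclaration of '{name}' in the same scope")
        self.symbols[name] = attrs

    def lookup(self, name):
        scope = self
        while scope is not None:    # walk outward through enclosing scopes
            if name in scope.symbols:
                return scope.symbols[name]
            scope = scope.parent
        raise KeyError(f"undeclared identifier '{name}'")

globals_ = SymbolTable()
globals_.declare("p", type="float", size=8)
proc = SymbolTable(parent=globals_)       # a procedure's inner scope
proc.declare("p", type="int", size=4)     # a second "p", shadowing the outer one
print(proc.lookup("p")["type"])           # int: the inner declaration wins
print(globals_.lookup("p")["type"])       # float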
https://en.wikipedia.org/wiki/Deutsch%E2%80%93Jozsa%20algorithm
The Deutsch–Jozsa algorithm is a deterministic quantum algorithm proposed by David Deutsch and Richard Jozsa in 1992 with improvements by Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca in 1998. Although of little practical use, it is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm. The Deutsch–Jozsa problem is specifically designed to be easy for a quantum algorithm and hard for any deterministic classical algorithm. It is a black box problem that can be solved efficiently by a quantum computer with no error, whereas a deterministic classical computer would need an exponential number of queries to the black box to solve the problem. More formally, it yields an oracle relative to which EQP, the class of problems that can be solved exactly in polynomial time on a quantum computer, and P are different. Since the problem is easy to solve on a probabilistic classical computer, it does not yield an oracle separation with BPP, the class of problems that can be solved with bounded error in polynomial time on a probabilistic classical computer. Simon's problem is an example of a problem that yields an oracle separation between BQP and BPP. Problem statement In the Deutsch–Jozsa problem, we are given a black box quantum computer known as an oracle that implements some function f : {0,1}^n → {0,1}. The function takes n-bit binary values as input and produces either a 0 or a 1 as output for each such value. We are promised that the function is either constant (0 on all inputs or 1 on all inputs) or balanced (1 for exactly half of the input domain and 0 for the other half). The task then is to determine if f is constant or balanced by using the oracle. Classical solution For a conventional deterministic algorithm, where n is the number of bits, 2^(n−1) + 1 evaluations of f will be required in the worst case. To prove that f is constant, just over half the set of inputs must be evaluated and their outputs found to be identical (because the function is guaranteed to be either balanced or constant, not somewhere in between). The best case occurs where the function is balanced and the first two output values are different. For a conventional randomized algorithm, a constant number k of evaluations of the function suffices to produce the correct answer with a high probability (failing with probability ε ≤ 1/2^(k−1)). However, 2^(n−1) + 1 evaluations are still required if we want an answer that has no possibility of error. The Deutsch–Jozsa quantum algorithm produces an answer that is always correct with a single evaluation of f. History The Deutsch–Jozsa algorithm generalizes earlier (1985) work by David Deutsch, which provided a solution for the simple case where n = 1. Specifically, given a Boolean function whose input is one bit, f : {0,1} → {0,1}, is it constant? The algorithm, as Deutsch had originally proposed it, was not deterministic. The algorithm was successful with a probability of one half. In 1992, Deutsch and Jozsa produce
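The quantum speed-up can be illustrated by a classical state-vector simulation (Python with NumPy). This sketch uses the phase-oracle formulation on n qubits rather than the textbook circuit with an extra ancilla qubit; after the final Hadamards, the amplitude of |0...0> is ±1 for a constant function and 0 for a balanced one, so a single oracle application decides the problem:

import numpy as np

def deutsch_jozsa(f, n):
    """f maps 0..2^n - 1 to 0/1 and is promised to be constant or balanced."""
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))                      # H^n applied to |0...0>
    state *= (-1.0) ** np.array([f(x) for x in range(N)])   # phase oracle U_f
    amp0 = state.sum() / np.sqrt(N)                         # <0...0| H^n |state>
    return "constant" if abs(amp0) ** 2 > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 1, 3))        # constant
print(deutsch_jozsa(lambda x: x & 1, 3))    # balanced (half 0s, half 1s)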
https://en.wikipedia.org/wiki/De%20Boor%27s%20algorithm
In the mathematical subfield of numerical analysis, de Boor's algorithm is a polynomial-time and numerically stable algorithm for evaluating spline curves in B-spline form. It is a generalization of de Casteljau's algorithm for Bézier curves. The algorithm was devised by Carl R. de Boor. Simplified, potentially faster variants of the de Boor algorithm have been created but they suffer from comparatively lower stability. Introduction A general introduction to B-splines is given in the main article. Here we discuss de Boor's algorithm, an efficient and numerically stable scheme to evaluate a spline curve at position x. The curve is built from a sum of B-spline functions multiplied with potentially vector-valued constants c_i, called control points: s(x) = Σ_i c_i · B_{i,p}(x). B-splines of order p + 1 are connected piece-wise polynomial functions of degree p defined over a grid of knots t_0, ..., t_m (we always use zero-based indices in the following). De Boor's algorithm uses O(p²) + O(p) operations to evaluate the spline curve. Note: the main article about B-splines and the classic publications use a different notation: the B-spline is indexed as N_{i,n}(x) with n = p + 1. Local support B-splines have local support, meaning that the polynomials are positive only in a finite domain and zero elsewhere. The Cox-de Boor recursion formula shows this: B_{i,0}(x) = 1 if t_i ≤ x < t_{i+1}, and 0 otherwise; B_{i,p}(x) = ((x − t_i) / (t_{i+p} − t_i)) · B_{i,p−1}(x) + ((t_{i+p+1} − x) / (t_{i+p+1} − t_{i+1})) · B_{i+1,p−1}(x). Let the index k define the knot interval that contains the position, x ∈ [t_k, t_{k+1}). We can see in the recursion formula that only B-splines with i = k − p, ..., k are non-zero for this knot interval. Thus, the sum is reduced to: s(x) = Σ_{i = k−p}^{k} c_i · B_{i,p}(x). It follows from i ≥ 0 that k ≥ p. Similarly, we see in the recursion that the highest queried knot location is at index k + p + 1. This means that any knot interval [t_k, t_{k+1}) which is actually used must have at least p additional knots before and after. In a computer program, this is typically achieved by repeating the first and last used knot location p times. For example, for p = 3 and real knot locations (0, 1, 2), one would pad the knot vector to (0, 0, 0, 0, 1, 2, 2, 2, 2). The algorithm With these definitions, we can now describe de Boor's algorithm. The algorithm does not compute the B-spline functions B_{i,p}(x) directly. Instead it evaluates s(x) through an equivalent recursion formula. Let d_i^[r] be new control points with d_i^[0] := c_i for i = k − p, ..., k. For r = 1, ..., p the following recursion is applied: d_i^[r] = (1 − α_{i,r}) · d_{i−1}^[r−1] + α_{i,r} · d_i^[r−1] for i = k − p + r, ..., k, with α_{i,r} = (x − t_i) / (t_{i+1+p−r} − t_i). Once the iterations are complete, we have s(x) = d_k^[p], meaning that d_k^[p] is the desired result. De Boor's algorithm is more efficient than an explicit calculation of B-splines with the Cox-de Boor recursion formula, because it does not compute terms which are guaranteed to be multiplied by zero. Optimizations The algorithm above is not optimized for the implementation in a computer. It requires memory for (p + 1) + p + ... + 1 = (p + 1)(p + 2)/2 temporary control points d_i^[r]. Each temporary control point is written exactly once and read twice. By reversing the iteration over i (counting down instead of up), we can run the algorithm with memory for only p + 1 temporary control points, by letting d_i^[r] reuse the memory for d_i^[r−1]. Similarly, there is only one value of α used in each step, so we can reuse the memory as well. Furthermore, it is more co
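The recursion translates almost line for line into code. A sketch in Python, assuming the caller has already padded the knots as described and located the interval index k with t[k] <= x < t[k+1]:

def de_boor(k, x, t, c, p):
    """Evaluate the spline at x; t: padded knots, c: control points, p: degree."""
    d = [c[j + k - p] for j in range(p + 1)]      # d_j initialised to c_{k-p+j}
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):             # count down so d can be reused
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Quadratic spline (p = 2) over a knot vector padded as described above.
t = [0, 0, 0, 1, 2, 2, 2]
c = [0.0, 2.0, 1.0, 3.0]
print(de_boor(2, 0.5, t, c, 2))                   # point on the curve at x = 0.5

Note that the inner loop counts j downward, which is exactly the memory-reuse optimization mentioned above: d[j] may overwrite the storage of the previous iteration level because each value is read before it is clobbered.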
https://en.wikipedia.org/wiki/Fedora%20Project
The Fedora Project is an independent project to co-ordinate the development of Fedora Linux, a Linux-based operating system, operating with the vision of "a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities." The project's mission statement is to create "an innovative platform for hardware, clouds, and containers that enables software developers and community members to build tailored solutions for their users". The project also oversees Extra Packages for Enterprise Linux, a special interest group which maintains the eponymous packages. The project was founded in 2003 as a result of a merger between the Red Hat Linux (RHL) and Fedora Linux projects. It is sponsored primarily by Red Hat (an IBM subsidiary), but Red Hat employees make up only 35% of project contributors, and most of the over 2,000 contributors are unaffiliated members of the community. History The Fedora Project was founded in November 2003 when Red Hat decided to split Red Hat Linux into Red Hat Enterprise Linux (RHEL) and a community-based operating system, Fedora. Red Hat Professional Workstation was created at this same time with the intention of filling the niche that RHL had once filled, but it was created without a certain future. This option quickly fell by the wayside for non-enterprise RHL users in favor of Fedora. Fedora operating system The first edition of the Fedora operating system—then known as Fedora Core 1—was released on November 6, 2003. Subsequent versions have been released on a fixed schedule, every four to six months. The Fedora distribution has a reputation as being a FOSS distribution that focuses on innovation and close work with upstream Linux communities. In November 2021, the project announced the release of Fedora Linux 35. Fedora 36 was released the following year in May 2022. Security intrusion In August 2008, several Fedora servers were compromised. Upon investigation it was found that one of the compromised servers was used for signing Fedora update packages. The Fedora Project stated that the attacker(s) did not get the package signing key, which could have been used to introduce malicious software onto Fedora users' systems through the update process. Project administrators performed checks on the software and did not find anything to suggest that a Trojan horse had been introduced into the software. As a precaution the project converted to new package signing keys. Fedora published the full details on March 30, 2009. Governance The Fedora Project is not a separate legal entity or organization; Red Hat retains liability for its actions. The Fedora Council is the top-level community leadership and governance body. The Council is composed of a mix of representatives from different areas of the project, named roles appointed by Red Hat, and a variable number of seats connected to medium-term project goals. The previous governance structure (Fedora Board) comprised five Red Hat appointed me
https://en.wikipedia.org/wiki/Disk%20array%20controller
A disk array controller is a device that manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, thus it is sometimes referred to as a RAID controller. It also often provides additional disk cache. Disk array controller is often improperly shortened to disk controller. The two should not be confused as they provide very different functionality. Front-end and back-end side A disk array controller provides front-end interfaces and back-end interfaces. The back-end interface communicates with the controlled disks. Hence, its protocol is usually ATA (a.k.a. PATA), SATA, SCSI, FC or SAS. The front-end interface communicates with a computer's host adapter (HBA, Host Bus Adapter) and uses: one of ATA, SATA, SCSI, FC; these are popular protocols used by disks, so by using one of them a controller may transparently emulate a disk for a computer. somewhat less popular dedicated protocols for specific solutions: FICON/ESCON, iSCSI, HyperSCSI, ATA over Ethernet or InfiniBand. A single controller may use different protocols for back-end and for front-end communication. Many enterprise controllers use FC on the front-end and SATA on the back-end. Enterprise controllers In a modern enterprise architecture disk array controllers (sometimes also called storage processors, or SPs) are parts of physically independent enclosures, such as disk arrays placed in a storage area network (SAN) or network-attached storage (NAS) servers. Those external disk arrays are usually purchased as an integrated subsystem of RAID controllers, disk drives, power supplies, and management software. It is up to controllers to provide advanced functionality (various vendors name these differently): Automatic failover to another controller (transparent to computers transmitting data) Long-running operations performed without downtime Forming a new RAID set Reconstructing a degraded RAID set (after a disk failure) Adding a disk to an online RAID set Removing a disk from a RAID set (rare functionality) Partitioning a RAID set into separate volumes/LUNs Snapshots Business continuance volumes (BCV) Replication with a remote controller. Simple controllers A simple disk array controller may fit inside a computer, either as a PCI expansion card or just built onto a motherboard. Such a controller usually provides host bus adapter (HBA) functionality itself to save physical space. Hence it is sometimes called a RAID adapter. Intel started integrating its own Matrix RAID controller in its more upmarket motherboards, giving control over 4 devices and an additional 2 SATA connectors, for a total of 6 SATA connections (3 Gbit/s each). For backward compatibility, one IDE connector able to connect 2 ATA devices (100 MB/s) is also present. History While hardware RAID controllers were available for a long time, they always required expensive SCSI hard drives and were aimed at the server and high-end computing market. SCSI
https://en.wikipedia.org/wiki/Ecological%20fallacy
An ecological fallacy (also ecological inference fallacy or population fallacy) is a formal fallacy in the interpretation of statistical data that occurs when inferences about the nature of individuals are deduced from inferences about the group to which those individuals belong. From the conceptual standpoint of mereology, four common ecological fallacies are: Correlation/relation: confusion regarding relations belonging to parts versus relations belonging to wholes, Characteristics: confusion between characteristics of parts and characteristics of a whole, Extrapolation/extension: confusion from false inference of part-whole dynamics: assuming the behavior of partially unknown and/or future wholes from information which is relatively partial, Confusion between qualities not bound to individual parts, for example "atmospheres", "moods" and "vibes", versus properties, hæccities or identities of indivisible units. From a statistical point of view, these ideas can be unified by specifying proper statistical models to make formal inferences, using aggregate data to infer unobserved relationships in individual-level data. Examples Mean and median An example of ecological fallacy is the assumption that a population mean has a simple interpretation when considering likelihoods for an individual. For instance, if the mean score of a group is larger than zero, this does not imply that a random individual of that group is more likely to have a positive score than a negative one (as long as there are more negative scores than positive scores, an individual is more likely to have a negative score). Similarly, if a particular group of people is measured to have a lower mean IQ than the general population, it is an error to conclude that a randomly-selected member of the group is more likely than not to have a lower IQ than the mean IQ of the general population; it is also not necessarily the case that a randomly selected member of the group is more likely than not to have a lower IQ than a randomly-selected member of the general population. Mathematically, this comes from the fact that a distribution can have a positive mean but a negative median. This property is linked to the skewness of the distribution. Consider the following numerical example: Group A: 80% of people got 40 points and 20% of them got 95 points. The mean score is 51 points. Group B: 50% of people got 45 points and 50% got 55 points. The mean score is 50 points. If we pick two people at random from A and B, there are 4 possible outcomes: A – 40, B – 45 (B wins, 40% probability – 0.8 × 0.5) A – 40, B – 55 (B wins, 40% probability – 0.8 × 0.5) A – 95, B – 45 (A wins, 10% probability – 0.2 × 0.5) A – 95, B – 55 (A wins, 10% probability – 0.2 × 0.5) Although Group A has a higher mean score, 80% of the time a random individual of A will score lower than a random individual of B. Individual and aggregate correlations Research dating back to Émile Durkheim suggests tha
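The group example is easy to verify mechanically; the short Python check below enumerates all equally likely pairs:

from itertools import product

a = [40] * 8 + [95] * 2     # Group A: 80% score 40, 20% score 95
b = [45] * 5 + [55] * 5     # Group B: 50% score 45, 50% score 55

print(sum(a) / len(a), sum(b) / len(b))              # means: 51.0 vs 50.0
pairs = list(product(a, b))
print(sum(x < y for x, y in pairs) / len(pairs))     # 0.8: B beats A 80% of the time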
https://en.wikipedia.org/wiki/Gmax
Gmax is an application based on Autodesk's 3ds Max application used by professional computer graphics artists. 3ds Max is a comprehensive modeling, animation and rendering package with some secondary post-production and compositing features. Gmax is much more limited due to its singular intended use—game content creation. Infrequently used tools and features, or the ones completely unrelated to creating 3D game models, were removed (these include most, if not all of the more complex rendering, materials, shaders, physics simulation, some of the more advanced geometry tools, in addition to the rendering engine), leaving the core modeling, texturing, and basic animation rigging and keyframing capabilities. In 2005, the promotional freeware software was discontinued after version 1.2. Features Gmax's utility can be expanded by "game packs" which feature customized tools that allow creation and exporting of customizable content to games and websites. As Gmax did not have its parent software's rendering engine, game packs were typically required to provide those features when needed. (Auran was the first company to write and distribute a dedicated renderer for their Trainz Railroad Simulator series Gmax gamepacks in their initial pre-production Beta Release V0.9 (2000). Thereafter Gmax was bundled with subsequent Trainz releases until Trainz 2009. Maxis was the second company to write a dedicated renderer for their Gmax gamepack, BAT (Building Architect Tool) for SimCity 4 (ca. 2002). In both game environments, a user can enter the virtual 3D world, interact with the environment, ride a car and sightsee, or, in the Trainz series of simulators, operate a carefully modeled realistic locomotive, or ride in an observation car, automobile, boat, airplane, or steamship.) The introduction of Gmax and Autodesk's distribution of the core tools was thought to be aimed at remedying the widespread infringement of 3D modeling packages among amateur 3D modeling and game mod communities up to that point. Until the introduction of Gmax, and a similar "game modeler" version of Maya soon after, amateur modelers had extremely limited access to the tools needed for 3D artwork. Gmax enabled modelers to have legitimate access to content creation tools similar to those used by professionals. Redistribution Auran/N3V distributed Gmax from the outset as a licensed partner, including it with the Trainz series of simulators along with a "Content Creators" manual, up through and including Trainz 2004 Deluxe, after which they provided a download link in the many versions of TRS2006 before the technology became obsolescent. Microsoft distributed Gmax with Microsoft Flight Simulator (MSFS) beginning with the 2002 version. Most of the freeware and payware add-on aircraft and scenery is created with Gmax, and it is considered to be the standard modeller for MSFS, although it does have competition in the form of the more us
https://en.wikipedia.org/wiki/Bleu%20Nuit
Bleu Nuit (English: "Midnight Blue") is a television series that was broadcast late night on the Télévision Quatre Saisons, or TQS, television network (now called Noovo) in Quebec, Canada, from 1986 until 2007. The content of the series was softcore pornography, mostly European films. The series was popular with both francophones and anglophones living in Quebec, as well as in other provinces in Canada that received the network. Bleu Nuit was considered a part of Quebec culture. Films shown on Bleu Nuit Beau-père (1981) Note: very first film shown on Bleu Nuit as it was shown on September 13, 1986, 6 days after the launch of TQS. Black Emanuelle, White Emanuelle (1976) Note: Italian Version of Emmanuelle with one "M". Stars Laura Gemser Black Emanuelle 2 (1976) Les Branchés à Saint-Tropez Coups de Matraque avec Cynthia Dernier Tango a Paris Deux femmes en or Emmanuelle I, Emmanuelle II, Emanuelle III, Emmanuelle IV Emmanuelle & The White Slave Trade (1978); Note: memorable sex scene in Africa with mechanic under car hoist. L'Enchainee (Italy 1985) Equateur (shown 9 June 1990) Fanny Hill (shown 21 July 1990) La Femme flambee Hôtel Exotica Je t'aime moi non plus La Bonne La Chiave (Tinto Brass film with Italian actress Stefania Sandrelli) Loulou La Vie secrète de Roméo et Juliette Le Gigolo (1960, France) Le journal de désirs L'été en pente douce (1987, France) Malizia (by Salvatore Samperi, with Laura Antonelli) Malena (2000) Monica Bellucci seduces a young man. The film is set in Sicily in 1940 during World War II just as Italy enters the war. Malena's husband, who left to join the military is presumed dead and she is seduced by a young man. La Seconda Moglie Italian actress Maria Grazia Cucinotta stars in this film based in early 1960s, a Sicilian single mother marries an older, crass widowed truck driver. When he is arrested trying to smuggle an antique, she ends up falling in love with her handsome stepson. Neuf semaines et demie On se calme et on boit frais à Saint-Tropez Paprika (Tinto Brass erotic film) Raspoutine (softcore version of Rasputin-Orgien am Zarenhof) Rendez-vous Rosa la rose, fille publique (France, 1985) Samanka Ile des Passions Tarzan l'Homme Singe Tendres Cousines (1980) - The most talked about erotic film in Canada that appeared on Bleu Nuit in 1987 with the famous "Touche Les" scene. In the summer of 1939 in Provence, France: the 14-year-old Julien has a crush on his cousin Julia, who lives together with his family in their small hotel. Julien has a famous barn house scene with a much older woman who asks him to touch her breasts: 'Touche les'. Y Tu Mamá También - Abandoned by their girlfriends for the summer, rich teenagers Tenoch and Julio meet older woman Luisa at a wedding. Trying to impress Luisa, the friends tell her they are headed on a road trip to a beautiful, secret beach. Die heissen Nächte der Josefine Mutzenbacher (1981) Wild Orchid Television series shown on Bleu Nuit
https://en.wikipedia.org/wiki/OPS5
OPS5 is a rule-based or production system computer language, notable as the first such language to be used in a successful expert system, the R1/XCON system used to configure VAX computers. The OPS (said to be short for "Official Production System") family was developed in the late 1970s by Charles Forgy while at Carnegie Mellon University. Allen Newell's research group in artificial intelligence had been working on production systems for some time, but Forgy's implementation, based on his Rete algorithm, was especially efficient, sufficiently so that it was possible to scale up to larger problems involving hundreds or thousands of rules. OPS5 uses a forward chaining inference engine; programs execute by scanning "working memory elements" (which are vaguely object-like, with classes and attributes) looking for matches with the rules in "production memory". Rules have actions that may modify or remove the matched element, create new ones, perform side effects such as output, and so forth. Execution continues until no more matches can be found. In this sense, OPS5 is an execution engine for a Petri net extended with inhibitor arcs. The OPS5 forward chaining process makes it extremely parallelizable during the matching phase, and several automatic parallelizing compilers were created. OPS4 was an early version, while OPS83 came later. The first implementation of OPS5 was written in Lisp, and later rewritten in BLISS for speed. DEC OPS5 is an extended implementation of the OPS5 language definition, developed for use with the OpenVMS, RISC ULTRIX, and DEC OSF/1 operating systems. References Charles Forgy, OPS5 User's Manual, Technical Report CMU-CS-81-135 (Carnegie Mellon University, 1981) Lee Brownston, Robert Farrell, Elaine Kant, Nancy Martin, Programming Expert Systems in OPS5 (Addison-Wesley, 1985) Anoop Gupta, Milind Tambe, Dirk Kalp, Charles Forgy, and Allen Newell, Parallel Implementation of OPS5 on the Encore Multiprocessor: Results and Analysis Rob Lewis, OPS5 Revisited (Amazon 2016) External links OPS5 overview OPS5 Reference manual RuleWorks - Open-sourced language based on OPS5, with added modularity constructs. OPS5: RETE-based expert system shell - CMU Artificial Intelligence Repository source code - OPS5 source code on GitHub Free OPS5 implementation in .Net Core Functional languages Common Lisp (programming language) software
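The recognize-act cycle can be sketched in a few lines of Python (a naive illustration that re-scans working memory on every cycle, which is exactly the cost the Rete algorithm avoids; the element classes, attributes, and rule are invented for the example):

working_memory = [
    {"class": "order", "item": "widget", "status": "new"},
    {"class": "stock", "item": "widget", "qty": 3},
]

def rule_fill_order(wm):
    """If a new order matches an in-stock item: ship it and decrement stock."""
    for order in wm:
        if order["class"] == "order" and order["status"] == "new":
            for stock in wm:
                if (stock["class"] == "stock"
                        and stock["item"] == order["item"]
                        and stock["qty"] > 0):
                    order["status"] = "shipped"   # modify matched elements,
                    stock["qty"] -= 1             # as an OPS5 action would
                    return True                   # report that a rule fired
    return False

rules = [rule_fill_order]
while any(rule(working_memory) for rule in rules):   # recognize-act cycle
    pass                                             # stop when nothing matches
print(working_memory)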
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%201998
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. In 1998, there were 16 singles that topped the chart, in 52 issue dates. During the year, 10 acts achieved a first U.S. number-one single, namely: Savage Garden, Usher, Will Smith, Next, Brandy, Monica, Aerosmith, Barenaked Ladies, Lauryn Hill, and Divine. R&B singer Monica and pop singer Céline Dion both had two number-one singles in 1998. Brandy and Monica's "The Boy Is Mine" is the longest-running single of the year, staying at number one for thirteen consecutive weeks. Other singles with multi-week runs include Monica's "The First Night", with five weeks at the top, and Next's "Too Close", Aerosmith's "I Don't Want to Miss a Thing" and R. Kelly's "I'm Your Angel" (featuring Céline Dion), each with four weeks (the latter spending two additional weeks at number one in 1999). Chart history Number-one artists See also 1998 in music List of Billboard number-one singles References Additional sources Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition () Joel Whitburn's Top Pop Singles 1955-2008, 12th Edition () Joel Whitburn Presents the Billboard Hot 100 Charts: The Nineties () Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine. 1998 record charts 1998
https://en.wikipedia.org/wiki/Strength%20reduction
In compiler construction, strength reduction is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations. The classic example of strength reduction converts strong multiplications inside a loop into weaker additions – something that frequently occurs in array addressing. Examples of strength reduction include replacing a multiplication within a loop with an addition and replacing exponentiation within a loop with a multiplication.

Code analysis

Most of a program's execution time is typically spent in a small section of code (called a hot spot), and that code is often inside a loop that is executed over and over. A compiler uses methods to identify loops and recognize the characteristics of register values within those loops. For strength reduction, the compiler is interested in:

Loop invariants: the values which do not change within the body of a loop.
Induction variables: the values which are being iterated each time through the loop.

Loop invariants are essentially constants within a loop, but their value may change outside of the loop. Induction variables are changing by known amounts. The terms are relative to a particular loop. When loops are nested, an induction variable in the outer loop can be a loop invariant in the inner loop.

Strength reduction looks for expressions involving a loop invariant and an induction variable. Some of those expressions can be simplified. For example, the multiplication of loop invariant c and induction variable i

 c = 7;
 for (i = 0; i < N; i++)
 {
     y[i] = c * i;
 }

can be replaced with successive weaker additions

 c = 7;
 k = 0;
 for (i = 0; i < N; i++)
 {
     y[i] = k;
     k = k + c;
 }

Strength reduction example

Below is an example that will strength-reduce all the loop multiplications that arose from array indexing address calculations. Imagine a simple loop that sets an array to the identity matrix.

 for (i = 0; i < n; i++)
 {
     for (j = 0; j < n; j++)
     {
         A[i,j] = 0.0;
     }
     A[i,i] = 1.0;
 }

Intermediate code

The compiler will view this code as

 0010 ; for (i = 0, i < n; i++)
 0020 ; {
 0030   r1 = #0                  ; i = 0
 0040 G0000:
 0050   load r2, n               ; i < n
 0060   cmp r1, r2
 0070   bge G0001
 0080
 0090 ; for (j = 0; j < n; j++)
 0100 ; {
 0110   r3 = #0                  ; j = 0
 0120 G0002:
 0130   load r4, n               ; j < n
 0140   cmp r3, r4
 0150   bge G0003
 0160
 0170 ; A[i,j] = 0.0;
 0180   load r7, n
 0190   r8 = r1 * r7             ; calculate subscript i * n + j
 0200   r9 = r8 + r3
 0210   r10 = r9 * #8            ; calculate byte address
 0220   fr3 = #0.0
 0230   fstore fr3, A[r10]
 0240
 0250   r3 = r3 + #1             ; j++
 0260   br G0002
 0270 ; }
 0280 G0003:
 0290 ; A[i,i] = 1.0;
 0300   load r12, n              ; calculate subscript i * n + i
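The same transformation can be written out by hand. The sketch below (illustrative Python, with invented function names) applies the second kind of reduction mentioned above, replacing an exponentiation in the loop body with a running multiplication, exactly as a multiplication is replaced by a running addition in the C fragments above.

 def powers_naive(c, n):
     # Exponentiation inside the loop: the "strong" form.
     return [c ** i for i in range(n)]

 def powers_reduced(c, n):
     # Strength-reduced: one multiplication per iteration keeps a running power.
     out, p = [], 1
     for _ in range(n):
         out.append(p)
         p *= c
     return out

 assert powers_naive(3, 10) == powers_reduced(3, 10)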
https://en.wikipedia.org/wiki/List%20of%20Billboard%20Hot%20100%20number%20ones%20of%201999
The Billboard Hot 100 is a chart that ranks the best-performing singles of the United States. Published by Billboard magazine, the data are compiled by Nielsen SoundScan based collectively on each single's weekly physical sales and airplay. There were 15 singles that topped the chart this year. The first of these, "I'm Your Angel" by R. Kelly and Celine Dion, spent two weeks at the top, concluding a six-week run that had begun in December 1998. During the year, 11 acts achieved a first U.S. number-one single, namely: Britney Spears, Ricky Martin, Jennifer Lopez, Destiny's Child, Dru Hill, Kool Moe Dee, Christina Aguilera, Enrique Iglesias, Jay-Z, Santana, and Rob Thomas. The longest-running number-one single was "Smooth" by Santana featuring Matchbox Twenty frontman Rob Thomas, which spent 12 weeks at number one: ten of those weeks were logged in 1999 and two additional weeks in 2000. TLC was the only act with more than one number-one song, topping the chart twice.

Chart history

Number-one artists

See also
1999 in music
List of Billboard number-one singles
Billboard Year-End Hot 100 singles of 1999
List of Billboard Hot 100 top 10 singles in 1999

References

Additional sources
Fred Bronson's Billboard Book of Number 1 Hits, 5th Edition ()
Joel Whitburn's Top Pop Singles 1955-2008, 12th Edition ()
Joel Whitburn Presents the Billboard Hot 100 Charts: The Nineties ()
Additional information obtained can be verified within Billboard's online archive services and print editions of the magazine.

1999 record charts 1999
https://en.wikipedia.org/wiki/Phoenix%20%28computer%29
Phoenix (February 1973 – September 30, 1995) was an IBM mainframe computer at Cambridge University's Computer Laboratory. "Phoenix/MVS" was also the name of the computer's operating system, written in-house by Computer Laboratory members. Its DNS hostname was .

Hardware

The Phoenix system was an IBM 370/165. It was made available for test purposes to 20 selected users, via consoles in the public console room, in February 1973. The following month, the Computing Service petitioned the Computer Board for an extra mebibyte of store, to double the amount of storage that the machine had. The petition was accepted and the extra store was delivered in September 1973.

Communications

The IBM-supplied Telecommunications Access Method (TCAM) and communications controller were replaced in 1975 by a system, called Parrot, that was created locally by the staff of the Computer Laboratory, comprising their own software and a PDP-11 complex. Their goal in doing so was to provide a better user interface than was available with a standard IBM system, alongside greater flexibility, reliability, and efficiency. They wanted to support 300 terminals. The initial system, supplied in 1972, comprised the PDP-11 emulating an IBM 2703 transmission control unit, which TCAM communicated with just as though it were a 2703. The PDP-11 was used instead of a bank of 2703s because for a projected 300 terminals a bank of 2703s was not scalable, too expensive, and inadequate for the Computing Service's needs, since it required paper tape readers and card punches as well. Even this solution proved to be unsatisfactory, and in 1975 TCAM was replaced by Parrot, with 200 terminals connected to the PDP-11, of which 80 could be simultaneously active. For full technical details of Parrot, see the technical report by Hazel and Stoneley.

Software

The staff were motivated to write their own system software for the IBM installation as a result of their dissatisfaction with IBM's own interactive command interpreter TSO. The initial product of their efforts was a Phoenix command interpreter which completely replaced the TSO command interpreter and was also available as a language for controlling batch job submissions through the use of a single IBM JCL command to invoke the Phoenix command interpreter. The Phoenix command interpreter was based on that of the Titan Multiple Access System, which had inline input files and was in service from 1967. Steve Bourne, who wrote the Bourne Shell for Unix, was at Cambridge in the 1960s and early 1970s. It seems likely that some of the Bourne Shell's constructs in Unix also derived from the Titan command interpreter. GEC's OS4000 JCL was based on the Phoenix command interpreter.

Upgrades

By 1973 Phoenix had a thousand megabytes of disk space. In 1982 it was upgraded to an IBM 3081D, and in 1989 to an IBM 3084Q.

Decommissioning

The system was decommissioned more than 22 years after its installation, on 30 September 1995 at 09:17 (by its own clock
https://en.wikipedia.org/wiki/Acornsoft
Acornsoft was the software arm of Acorn Computers, and a major publisher of software for the BBC Micro and Acorn Electron. As well as games, it also produced a large number of educational titles, extra computer languages and business and utility packages – these included the word processor VIEW and the spreadsheet ViewSheet, supplied on ROM and cartridge for the BBC Micro/Acorn Electron and included as standard in the BBC Master and Acorn Business Computer.

History

Acornsoft was formed in late 1980 by Acorn Computers directors Hermann Hauser and Chris Curry, and David Johnson-Davies, author of the first game for a UK personal computer and of the official Acorn Atom manual "Atomic Theory and Practice". David Johnson-Davies was managing director and in early 1981 was joined by Tim Dobson, Programmer, and Chris Jordan, Publications Editor. While some of their games were clones or remakes of popular arcade games (e.g. Hopper is a clone of Sega's Frogger, Snapper is Namco's Pac-Man, Arcadians is Namco's Galaxian), they also published a number of original titles such as Aviator, Elite, and Revs. Acornsoft also published text adventures by authors such as Peter Killworth, including Philosopher's Quest (previously titled Brand X) and Countdown to Doom.

As a result of the publication of a method to circumvent copy protection measures employed by Acornsoft titles, a High Court injunction against Computing Publications - publisher of Personal Computer World - was granted to Acorn Computers "requiring all copies of the January 1984 issue of PCW to be withdrawn from sale", with the article concerned being regarded as inciting readers to "duplicate computer programs". This injunction was subsequently lifted as a consequence of an out-of-court settlement between the parties involving a damages payment of £65,000 plus costs to Acorn "to meet Acorn's expenses in developing a new locking device". The article's author, Guy Kewney, and the magazine's editor, Jane Bird, argued that printing a software routine showing how to save Acornsoft cassette software to disk was a service to the magazine's readers. The cost of printing the magazine issue concerned was estimated at £100,000.

Acornsoft became a subsidiary within Acorn Computer Group, distinct from Acorn Computers, who were responsible for the development of Acorn's microcomputer systems, but Acornsoft ceased to operate as a separate company upon the departure of David Johnson-Davies in January 1986. After this date, Acorn Computers used the Acornsoft name on office software it released in the VIEW family for the BBC Master series. In 1986 Superior Software was granted a licence to publish some Acornsoft games and re-released many, individually and as compilations such as the Play It Again Sam and Acornsoft Hits series. By agreement, the Acornsoft name was also used on the packaging of some of the subsequent Superior games. Superior chose not to take on Acornsoft's text adventure games, most of which were released in
https://en.wikipedia.org/wiki/Charles%20Forgy
Charles L. Forgy (born December 12, 1949, in Texas) is an American computer scientist, known for developing the Rete algorithm used in his OPS5 and other production system languages used to build expert systems.

Early life and education

Forgy attended Woodrow Wilson High School in Dallas, Texas, and then advanced to Arlington State College (now University of Texas at Arlington, or UTA), graduating with a degree in mathematics in 1972. From there he went to Carnegie Institute of Technology (later Carnegie Mellon University) in Pittsburgh, a renowned center for the study of artificial intelligence. While studying at Carnegie he met his future wife, Diana, whom he married in 1977.

Career

Rete

As a student of Allen Newell, he received his Ph.D. in 1979 based on the Rete algorithm. Even though Forgy did not work directly on the DEC XCON AI problem of configuring computers for DEC in the late 1970s and early 1980s, the Rete algorithm was later incorporated into the system for more speed. XCON used the early versions of OPS (Official Production System), which migrated to OPS2 and later OPS5. DEC reported that XCON saved at least $1 million per year. XCON, a project headed by John McDermott and later transferred to DEC programmers, was eventually composed of over 10,000 rules. The Rete (Latin for "network") algorithm allowed systems to run as much as 3,000 times faster in those days. The original Rete algorithm was developed under a Department of Defense grant and, as such, is public domain.

Rete II and III

Forgy remained at Carnegie Mellon post-graduation and worked on further improvements to OPS5; in 1983 he formed a company called Production Systems Technologies to develop and sell rule-based software, where he developed "Rete II", a more efficient successor to Rete. Rete II enabled rule-based programs to run between 50 and 100 times faster than the original Rete algorithm, depending on the complexity of the rules and objects. (The more complex, the faster the comparative results.) Rete II is incorporated in CLIPS/R2, OPSJ and FICO's Blaze Advisor. Forgy was a founder and chief scientist for RulesPower, a workflow management company founded in 2002 and based in Boston. During that time, Forgy incorporated Rete II with Relational Logic Technology, which became named "Rete III". The performance of Rete II and Rete III is virtually the same, but Rete III has some extensions that allow it to work more efficiently with Relational Logic Technology while slowing it down on benchmarks.

Rete-NT

Forgy developed a next-generation algorithm, called Rete-NT, that has improved the execution speed by another order of magnitude. To date, Sparkling Logic SMARTS is the only BRMS product that uses this algorithm.

Present times

In 2005, RulesPower was acquired by Fair Isaac Corporation, who obtained a license to integrate Rete III into Blaze Advisor, their own business rules product. Forgy retained the intellectual property rights to Rete II and his personal c
https://en.wikipedia.org/wiki/Susan%20Kare
Susan Kare ( "care"; born February 5, 1954) is an American artist and graphic designer, who contributed interface elements and typefaces for the first Apple Macintosh personal computer from 1983 to 1986. She was employee #10 and Creative Director at NeXT, the company formed by Steve Jobs after he left Apple in 1985. She was a design consultant for Microsoft, IBM, Sony Pictures, Facebook, and Pinterest. Kare was an employee of Niantic Labs. As a pioneer of pixel art and of the graphical computer interface, she has been celebrated as one of the most significant designers of modern technology.

Early life and education

Kare was born in Ithaca, New York. Her father was a professor at the University of Pennsylvania and director of the Monell Chemical Senses Center, a research facility for the senses of taste and smell. Her mother taught her counted-thread embroidery as she immersed herself in drawings, paintings, and crafts. Her brother was aerospace engineer Jordin Kare. She graduated from Harriton High School in 1971. She graduated summa cum laude with a B.A. in art from Mount Holyoke College in 1975, with an undergraduate honors thesis on sculpture. She received an M.A. and a Ph.D. in fine arts from New York University in 1978 with a doctoral dissertation on "the use of caricature in selected sculptures of Honoré Daumier and Claes Oldenburg". Her goal was "to be either a fine artist or teacher".

Career

Early

Susan Kare's career has always focused on fine art. For several summers during high school she interned at the Franklin Institute for designer Harry Loucks, who introduced her to typography and graphic design while she did phototypesetting with "strips of type for labels in a dark room on a PhotoTypositor". Because she did not attend an artist training school, she built her experience and portfolio by taking many pro-bono graphics jobs such as posters and brochure design in college, holiday cards, and invitations. After her Ph.D., she moved to San Francisco to work at the Fine Arts Museums of San Francisco (FAMSF), as sculptor and occasional curator. She later reflected that her "ideal life would be to make art full-time but that sculpture was too solitary".

Apple

In 1982, Kare was welding a life-sized razorback hog sculpture commissioned by an Arkansas museum when she received a phone call from high school friend Andy Hertzfeld. In exchange for an Apple II computer, he solicited her to hand-draw a few icons and font elements to inspire the upcoming Macintosh computer. However, she had no experience in computer graphics and "didn't know the first thing about designing a typeface" or pixel art, so she drew heavily upon her fine art experience in mosaics, needlepoint, and pointillism. He suggested that she get a grid notebook of the smallest graph paper she could find at the University Art store in Palo Alto and mock up several representations of his software commands and applications. These included an icon of scissors for the "cut" command
https://en.wikipedia.org/wiki/Sobel%20operator
The Sobel operator, sometimes called the Sobel–Feldman operator or Sobel filter, is used in image processing and computer vision, particularly within edge detection algorithms where it creates an image emphasising edges. It is named after Irwin Sobel and Gary M. Feldman, colleagues at the Stanford Artificial Intelligence Laboratory (SAIL). Sobel and Feldman presented the idea of an "Isotropic 3 × 3 Image Gradient Operator" at a talk at SAIL in 1968. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel–Feldman operator is either the corresponding gradient vector or the norm of this vector. The Sobel–Feldman operator is based on convolving the image with a small, separable, and integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computations. On the other hand, the gradient approximation that it produces is relatively crude, in particular for high-frequency variations in the image.

Formulation

The operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives – one for horizontal changes, and one for vertical. If we define A as the source image, and Gx and Gy are two images which at each point contain the horizontal and vertical derivative approximations respectively, the computations are as follows:

          [ +1  0  -1 ]                 [ +1  +2  +1 ]
     Gx = [ +2  0  -2 ] * A    and Gy = [  0   0   0 ] * A
          [ +1  0  -1 ]                 [ -1  -2  -1 ]

where * here denotes the 2-dimensional signal processing convolution operation.

Since the Sobel kernels can be decomposed as the products of an averaging and a differentiation kernel, they compute the gradient with smoothing. For example, Gx can be written as

     [ +1  0  -1 ]   [ 1 ]
     [ +2  0  -2 ] = [ 2 ] [ +1  0  -1 ]
     [ +1  0  -1 ]   [ 1 ]

The x-coordinate is defined here as increasing in the "right"-direction, and the y-coordinate is defined as increasing in the "down"-direction. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude, using:

     G = sqrt(Gx² + Gy²)

Using this information, we can also calculate the gradient's direction:

     Θ = atan2(Gy, Gx)

where, for example, Θ is 0 for a vertical edge which is lighter on the right side (for the definition of atan2, see atan2).

More formally

Since the intensity function of a digital image is only known at discrete points, derivatives of this function cannot be defined unless we assume that there is an underlying differentiable intensity function that has been sampled at the image points. With some additional assumptions, the derivative of the continuous intensity function can be computed as a function on the sampled intensity function, i.e. the digital image. It turns out that the derivatives at any particular point are functions of the intensity values at virtually all image points. However, approximations of these derivative functions can be defined at lesser or larger degrees of accuracy. The Sobel–Feldman operator represents a rather inaccurate approximation of the image gradient, but is still of sufficient quality to be of pra
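A direct transcription of the formulation above into NumPy might look as follows. This is an illustrative sketch rather than production code: borders are skipped instead of padded, and the kernels are explicitly flipped so that the operation is a true convolution (for these kernels the flip only changes signs, not the magnitude).

 import numpy as np

 KX = np.array([[+1, 0, -1], [+2, 0, -2], [+1, 0, -1]])
 KY = np.array([[+1, +2, +1], [0, 0, 0], [-1, -2, -1]])

 def sobel(img):
     img = img.astype(float)
     h, w = img.shape
     gx, gy = np.zeros((h, w)), np.zeros((h, w))
     for y in range(1, h - 1):
         for x in range(1, w - 1):
             patch = img[y - 1:y + 2, x - 1:x + 2]
             gx[y, x] = np.sum(KX[::-1, ::-1] * patch)  # convolution = correlation
             gy[y, x] = np.sum(KY[::-1, ::-1] * patch)  # with the flipped kernel
     return np.hypot(gx, gy), np.arctan2(gy, gx)  # magnitude G and direction Θ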
https://en.wikipedia.org/wiki/Roberts%20cross
The Roberts cross operator is used in image processing and computer vision for edge detection. It was one of the first edge detectors and was initially proposed by Lawrence Roberts in 1963. As a differential operator, the idea behind the Roberts cross operator is to approximate the gradient of an image through discrete differentiation, which is achieved by computing the sum of the squares of the differences between diagonally adjacent pixels.

Motivation

According to Roberts, an edge detector should have the following properties: the produced edges should be well-defined, the background should contribute as little noise as possible, and the intensity of edges should correspond as closely as possible to what a human would perceive. With these criteria in mind and based on then prevailing psychophysical theory, Roberts proposed the following equations:

     y(i,j) = sqrt( x(i,j) )

     z(i,j) = sqrt( ( y(i,j) − y(i+1,j+1) )² + ( y(i+1,j) − y(i,j+1) )² )

where x is the initial intensity value in the image, z is the computed derivative and i,j represent the location in the image. The results of this operation will highlight changes in intensity in a diagonal direction. One of the most appealing aspects of this operation is its simplicity; the kernel is small and contains only integers. However, with the speed of computers today this advantage is negligible and the Roberts cross suffers greatly from sensitivity to noise.

Formulation

In order to perform edge detection with the Roberts operator we first convolve the original image with the following two kernels:

     [ +1   0 ]         [  0  +1 ]
     [  0  -1 ]   and   [ -1   0 ]

Let I(x,y) be a point in the original image, Gx(x,y) be a point in an image formed by convolving with the first kernel, and Gy(x,y) be a point in an image formed by convolving with the second kernel. The gradient can then be defined as:

     ∇I(x,y) = G(x,y) = sqrt( Gx² + Gy² )

The direction of the gradient can also be defined as follows:

     Θ(x,y) = atan2( Gy(x,y), Gx(x,y) ) − 3π/4

Note that an angle of 0° corresponds to a vertical orientation such that the direction of maximum contrast from black to white runs from left to right on the image.

Example comparisons

Here, four different gradient operators are used to estimate the magnitude of the gradient of the test image.

See also
Digital image processing
Feature detection (computer vision)
Feature extraction
Sobel operator
Prewitt operator

References

Edge detection
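Because the kernels are 2×2, the whole operator reduces to differences of diagonally adjacent pixels, which makes a vectorized sketch very short. As with the Sobel sketch earlier, this is illustrative NumPy rather than a reference implementation; the output is one row and one column smaller than the input.

 import numpy as np

 def roberts_cross(img):
     img = img.astype(float)
     gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[+1, 0], [0, -1]]
     gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, +1], [-1, 0]]
     return np.hypot(gx, gy)            # gradient magnitude G(x,y)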
https://en.wikipedia.org/wiki/Ohrdruf%20concentration%20camp
Ohrdruf was a German forced labor and concentration camp located near Ohrdruf, south of Gotha, in Thuringia, Germany. It was part of the Buchenwald concentration camp network. Operation Created in November 1944 near the town of Ohrdruf, south of Gotha, in Thuringia, Germany, Ohrdruf was initially a separate forced labour camp directly controlled by the SS Main Economic and Administrative Office (SS-WVHA) but then became a subcamp of the Buchenwald concentration camp near Weimar. It made use of huts originally built in 1940 for Wehrmacht troops using the Truppenübungsplatz nearby as well as other facilities. The camp, code-named Außenlager S III, consisted of a northern and a southern camp; later, a tent camp at Espenfeld and a camp at Crawinkel were added. The camp supplied forced labor in the form of concentration camp prisoners for a planned railway construction project for an immense communications center inside the basement of the Mühlberg castle in Ohrdruf. Inmates had to work to connect the castle to the main railroad line and to dig tunnels in the nearby mountains, which would be used as emergency shelter for the train that contained the "Führerhauptquartier". The proposed communication centre was never completed due to the rapid American advance. By late 1944, around 10,000 prisoners were housed here; through March 1945, the total number sent here was around 20,000, mainly Russians, Poles, Hungarian Jews, some French, Czechs, Italians, Belgians, Greeks, Yugoslavians and Germans. Conditions were atrocious: in the huts there were no beds, "only blood-covered straw and lice". Despite the season, not all prisoners were housed in huts—some were accommodated in stables, tents and old bunkers. Work days were initially 10 to 11 hours long, then later 14 hours, involving strenuous physical labor building roads, railways and tunnels. In addition, inmates had to cope with long marches and musterings, total lack of sanitary equipment and medical facilities, and insufficient food and clothing. In January 1945, the SS guards were reinforced by units from Auschwitz. Towards the end of the war, the prisoners were used to construct a subterranean headquarters for the government (Führerhauptquartier) to be used following a possible evacuation of Berlin. It was never completed. It is still not clear exactly what projects the prisoners of Ohrdruf were working on. Besides the temporary quarters for the Reich leadership, the extensive tunneling and other works at Jonastal point to an armament factory of some kind. There is a theory, advanced by Rainer Karlsch that the facility was intended as (and was in fact used as) a testing site for a German nuclear bomb. Other possibilities are an improved V-2 rocket or long-range jet-powered bombers, but all of this is speculative. Those unable to work were moved by the SS to Sterbelager: 4,300 sick inmates were moved to Bergen-Belsen, or the Kleines Lager at Buchenwald. In late March 1945, the camp had a prisoner
https://en.wikipedia.org/wiki/Apple%20Developer
Apple Developer (formerly Apple Developer Connection) is Apple Inc.'s website for software development tools, application programming interfaces (APIs), and technical resources. It contains resources to help software developers write software for the macOS, iOS, iPadOS, watchOS, and tvOS platforms. The applications are created in Xcode, or sometimes using other supported third-party programs. The apps can then be submitted to App Store Connect (formerly iTunes Connect), another of Apple's websites, for approval by the internal review team. Once approved, they can be distributed publicly via the respective app stores, i.e. App Store (iOS) for iOS and iPadOS apps, the iMessage app store for Messages apps and Sticker pack apps, App Store (tvOS) for Apple TV apps, the watchOS app store for Apple Watch apps with watchOS 6 and later, and via App Store (iOS) for earlier versions of watchOS. macOS apps are a notable exception, as they can be distributed either via Apple's Mac App Store or independently on the World Wide Web.

Programs

Mac

The Mac developer program is a way for developers of Apple's macOS operating system to distribute their apps through the Mac App Store. It costs US$99/year. Unlike iOS, developers are not required to sign up for the program in order to distribute their applications. Mac applications can freely be distributed via the developer's website and/or any other method of distribution excluding the Mac App Store. The Mac Developer Program also provides developers with resources to help them distribute their Mac applications.

Software leaks

There have been several leaks of secret Apple software through the prerelease program, most notably the Mac OS X 10.4 Tiger leaks, in which Apple sued three men who allegedly obtained advance copies of Mac OS X 10.4 prerelease builds from the site and leaked them to BitTorrent.

Attempted hacks

On July 18, 2013, an intruder attempted to access sensitive personal information on Apple's developer servers. The information was encrypted, but Apple could not rule out that some information about developers might have been accessed. The Developer website was taken down for "maintenance" that Thursday, and was said to be undergoing maintenance through Sunday, when Apple posted a notice on the site notifying users of the attempted hack. Apple stated that it would rebuild its servers and the developer system to prevent this from happening in the future.

Developer tutorials and tools

Apple provides free tutorials and guide support for their developer program. In early July 2023, Apple finished construction on its Developer Center in Cupertino, California. During special events, developers are able to visit the center for one-on-ones with Apple employees, demos of upcoming software, and more.

See also
Apple ID

References

External links
Apple Developer website
GeeksforGeeks MacDevelopers

Apple Inc. services Macintosh operating systems development Software developer com
https://en.wikipedia.org/wiki/Texas%20sharpshooter%20fallacy
The Texas sharpshooter fallacy is an informal fallacy which is committed when differences in data are ignored, but similarities are overemphasized. From this reasoning, a false conclusion is inferred. This fallacy is the philosophical or rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which is the tendency in human cognition to interpret patterns where none actually exist. The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a shooting target centered on the tightest cluster of hits and claims to be a sharpshooter. Structure The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal but only focuses on a small subset of that data. Some factor other than the one attributed may give all the elements in that subset some kind of common property (or pair of common properties, when arguing for correlation). If the person attempts to account for the likelihood of finding some subset in the large data with some common property by a factor other than its actual cause, then that person is likely committing a Texas sharpshooter fallacy. The fallacy is characterized by a lack of a specific hypothesis prior to the gathering of data, or the formulation of a hypothesis only after data have already been gathered and examined. Thus, it typically does not apply if one had an ex ante, or prior, expectation of the particular relationship in question before examining the data. For example, one might, prior to examining the information, have in mind a specific physical mechanism implying the particular relationship. One could then use the information to give support or cast doubt on the presence of that mechanism. Alternatively, if additional information can be generated using the same process as the original information, one can use the original information to construct a hypothesis, and then test the hypothesis on the new data. (See hypothesis testing.) What one cannot do is use the same information to construct and test the same hypothesis (see hypotheses suggested by the data)—to do so would be to commit the Texas sharpshooter fallacy. Examples A Swedish study in 1992 tried to determine whether power lines caused some kind of poor health effects. The researchers surveyed people living within 300 meters of high-voltage power lines over 25 years and looked for statistically significant increases in rates of over 800 ailments. The study found that the incidence of childhood leukemia was four times higher among those who lived closest to the power lines, and it spurred calls to action by the Swedish government. The problem with the conclusion, however, was that the number of potential ailments, i.e., over 800, was so large that it created a high probability that at least one ailment would exhibit the appearance of a statistically significant difference by chance a
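The arithmetic behind that conclusion is easy to check. Assuming, purely for illustration, that the roughly 800 tests were independent and each used a 5% significance level (real ailment rates are correlated, so the independence assumption overstates things, but it conveys the scale):

 alpha, m = 0.05, 800
 p_at_least_one = 1 - (1 - alpha) ** m   # probability of one or more chance "hits"
 print(p_at_least_one)                   # ≈ 1.0: a spurious finding is near-certain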
https://en.wikipedia.org/wiki/MirOS%20BSD
MirOS BSD (originally called MirBSD) is a free and open source operating system which started as a fork of OpenBSD 3.1 in August 2002. It was intended to maintain the security of OpenBSD with better support for European localisation. Since then it has also incorporated code from other free BSD descendants, including NetBSD, MicroBSD and FreeBSD. Code from MirOS BSD was also incorporated into ekkoBSD, and when ekkoBSD ceased to exist, artwork, code and developers ended up working on MirOS BSD for a while. Unlike the three major BSD distributions, MirOS BSD supports only the x86 and SPARC architectures. One of the project's goals was to be able to port the MirOS userland to run on the Linux kernel, hence the deprecation of the MirBSD name in favour of MirOS.

History

MirOS BSD originated as OpenBSD-current-mirabilos, an OpenBSD patchkit, but soon grew on its own after some differences in opinion between the OpenBSD project leader Theo de Raadt and Thorsten Glaser. Despite the forking, MirOS BSD was synchronised with the ongoing development of OpenBSD, thus inheriting most of its good security history, as well as that of NetBSD and other BSD flavours. One goal was to provide a faster integration cycle for new features and software than OpenBSD. According to the developers, "controversial decisions are often made differently from OpenBSD; for instance, there won't be any support for SMP in MirOS". There will also be a more tolerant software inclusion policy, and "the end result is, hopefully, a more refined BSD experience". Another goal of MirOS BSD was to create a more "modular" base BSD system, similar to Debian. While MirOS Linux (Linux kernel + BSD userland) was discussed by the developers sometime in 2004, it has not materialised.

Features

Development snapshots are live and installation CDs for the x86 and SPARC architectures on one medium, via the DuaLive technology. The latest snapshots have been extended to also boot grml (a Linux-based rescue system, x86 only) via the Triforce technology.
mksh (MirBSD Korn shell): an actively developed flavour of KornShell and heir of pdksh.
The base system and some MirPorts store "dotfiles" data in the ~/.etc directory in the user's home to avoid cluttering the root of the home directory.
Application packages from the NetBSD-derived pkgsrc repositories were configured for use in MirBSD starting in 2011.

The most important differences from OpenBSD were:

A completely rewritten, GRUB multiboot compatible, boot loader and boot manager without an 8 GiB limit and with Soekris support
A slim base system (without NIS, Kerberos, BIND, i18n, BSD games, etc.), BIND and the BSD games being available as ports
Binary security updates for stable releases
ISDN support
IPv6 support in the web server software
wtf, a database of acronyms
Some of the GNU tools (like gzip and *roff) were replaced by original UNIX code released by Caldera International (SCO) under a BSD licence
64-bit time handling routines (time_t)
Correct handling of
https://en.wikipedia.org/wiki/IRIS%20GL
IRIS GL (Integrated Raster Imaging System Graphics Library) is a proprietary graphics API created by Silicon Graphics (SGI) in the early 1980s for producing 2D and 3D computer graphics on their IRIX-based IRIS graphical workstations. Later SGI removed their proprietary code, reworked various system calls, and released IRIS GL as the industry standard OpenGL. See also Silicon Graphics Image for file extension .iris SGI IRIS IrisVision - first port to PCs References External links Official OpenGL website SGI's OpenGL website 3D graphics software Application programming interfaces Graphics libraries Graphics standards SGI graphics Video game development software
https://en.wikipedia.org/wiki/John%20Galbraith%20Graham
The Reverend John Galbraith Graham MBE (16 February 1921 – 26 November 2013) was a British crossword compiler, best known as Araucaria of The Guardian. He was also, like his father Eric Graham, a Church of England priest. Career Graham was born in Oxford, where his father, Eric Graham, held the post of dean of Oriel College. The family moved to a country rectory in Wiltshire. After attending St Edward's School, Oxford, he obtained a place to read classics at King's College, Cambridge, leaving to join the RAF when the Second World War began. After the war he returned to King's to read theology. In 1949 he joined the staff of St Chad's College, Durham as Chaplain and Tutor where he worked until 1952. On Graham's departure the Principal, Theo Wetherall, paying tribute to his good nature, wrote that "he squandered his sensitive taste and knowledge of Classics on 1B Greek with unfailing patience enlivened by rare expressions of nausea". He later became a vicar in Huntingdonshire. Writing his first puzzle for The Guardian in July 1958, he eventually took to compiling crosswords full-time when his divorce in the late 1970s lost him his living as a clergyman (he was reinstated after the death of his first wife). In December 1970, The Guardian began publishing its crosswords under the pseudonyms of their compilers, at which point Graham selected the name "Araucaria". Besides Araucaria's cryptic crosswords in The Guardian, of which he produced around six per month, he also set around a third of the quick crosswords for The Guardian, cryptic crosswords as Cinephile in the Financial Times and puzzles for other publications. In 1984, he founded 1 Across magazine as a way of providing more of his puzzles to subscribers who wanted them; the magazine still publishes five crosswords monthly: four new puzzles by various setters, and one by Araucaria taken from the extensive 1 Across archive. He took his pseudonym from the monkey-puzzle tree, whose Latin name is Araucaria araucana. Another name for this tree is the "Chile Pine", of which "Cinephile" is an anagram, demonstrating his love for film. Graham lived in Somersham, Cambridgeshire. He was made a Member of the Order of the British Empire in the 2005 New Year's Honours, for services to the newspaper industry. In July 2011 Graham was the subject of the BBC radio programme Desert Island Discs, in which he revealed that he always used Scrabble tiles as an aid when compiling. The December 2012 issue of 1 Across magazine printed an Araucaria puzzle which revealed that Graham had oesophageal cancer. The puzzle was reprinted as Guardian cryptic No. 25,842 on 11 January 2013. The puzzle had a supplementary narrative beginning "Araucaria has 18 down of the 19, which is being treated with 13 15". Those who solved the puzzle found the answer to 18 down was "cancer", to 19 "oesophagus", and to 13 and 15 "palliative" and "care". Other clues had answers such as "Macmillan Nurse", "stent", "chemotherapy", "endoscopy"
https://en.wikipedia.org/wiki/Elsa
Elsa may refer to: ELSA (acronym) ELSA Technology, a manufacturer of computer hardware English Language Skills Assessment English Longitudinal Study of Ageing Ethical, Legal and Social Aspects research European Law Students' Association European League of Stuttering Associations Evangelical Lutheran Synod of Australia, a group in the history of the Lutheran Church of Australia Experimental light-sport aircraft (E-LSA) People Elsa (given name), a female given name Pedro Elsa (1901–unknown), Argentine Olympic athlete Characters Elsa (Frozen), fictional character from the Disney animated franchise, Frozen Elsa von Brabant, a character in the 1850 Richard Wagner opera Lohengrin Places Elsa, California, a place in California, U.S. Elsa, Texas, U.S. Elsa, Yukon, Canada Other 182 Elsa, an asteroid Elsa (album), debut album of Elsa Lunghini Elsa (river), Tuscany, Italy Elsa the lioness, subject of the book and film Born Free Storm Elsa, 13–20 December 2019 The abbreviation for × Elearethusa, an orchid genus Hurricane Elsa, Category 1 Atlantic hurricane in 2021 See also Elsa & Fred (disambiguation) Else (disambiguation)
https://en.wikipedia.org/wiki/Patch%20%28Unix%29
The computer tool patch is a Unix program that updates text files according to instructions contained in a separate file, called a patch file. The patch file (also called a patch for short) is a text file that consists of a list of differences and is produced by running the related diff program with the original and updated file as arguments. Updating files with patch is often referred to as applying the patch or simply patching the files.

History

The original patch program was written by Larry Wall (who went on to create the Perl programming language) and posted to mod.sources (which later became comp.sources.unix) in May 1985. patch was added to XPG4, which later became POSIX. Wall's code remains the basis of the "patch" programs provided in OpenBSD, FreeBSD, and schilytools. The Open Software Foundation, which merged into The Open Group, is said to have maintained a derived version. The GNU project/FSF maintains its own patch, forked from Larry Wall's version. The repository is different from that of GNU diffutils, but the documentation is managed together.

Usage context

Developed by a programmer for other programmers, patch was frequently used for updating source code to a newer version. Because of this, many people came to associate patches with source code, whereas patches can in fact be applied to any text. Patched files do not accumulate any unneeded text, despite what some people assume based on the English meaning of the word; patch is as capable of removing text as it is of adding it. Patches described here should not be confused with binary patches, which, although conceptually similar, are distributed to update binary files comprising the program to a new release.

Patches in software development

The diff files that serve as input to patch are readable text files, which means that they can be easily reviewed or modified by humans before use. In addition to the "diff" program, diffs can also be produced by other programs, such as Subversion, CVS, RCS, Mercurial and Git. Patches have been a crucial component of many source control systems, including CVS.

Advanced diffs

When more advanced diffs are used, patches can be applied even to files that have been modified in the meantime, as long as those modifications do not interfere with the patch. This is achieved by using "context diffs" and "unified diffs" (also known as "unidiffs"), which surround each change with context, which is the text immediately before and after the changed part. Patch can then use this context to locate the region to be patched even if it has been displaced by changes earlier in the file, using the line numbers in the diffs as a starting point. Because of this property, context and unified diffs are the preferred form of patches for submission to many software projects. The above features make diff and patch especially popular for exchanging modifications to open-source software. Outsiders can download the latest publicly available source code,
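A unified diff of the kind patch consumes can be produced with Python's standard difflib module; the file names below are invented for the example. Note the lines of surrounding context in the output, which are what allow patch to relocate a hunk that has drifted.

 import difflib

 old = ["alpha\n", "beta\n", "gamma\n", "delta\n", "epsilon\n"]
 new = ["alpha\n", "beta\n", "GAMMA\n", "delta\n", "epsilon\n"]

 # Emits "---"/"+++" headers and "@@ ... @@" hunks with 3 context lines each.
 for line in difflib.unified_diff(old, new, fromfile="a/example.txt",
                                  tofile="b/example.txt"):
     print(line, end="")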
https://en.wikipedia.org/wiki/Focus%20%28computing%29
In computing, a component of a graphical user interface (GUI) has focus when it is selected to receive input from the user by an event such as a mouse button click or keypress. Moving the focus away from a specific user interface element is known as a blur event in relation to this element. Typically, the focus is withdrawn from an element by giving another element the focus. This means that focus and blur events are virtually simultaneous in relation to different user interface elements, one that becomes focused and one that is "blurred" (in the computing, not visual, sense).

The concept is similar to a cursor in a text-based environment. However, when considering a graphical interface, there is also a mouse pointer involved. Moving the mouse will typically move the mouse pointer without changing the focus. The focus can usually be changed by clicking on a component that can receive focus with the mouse. Many desktops also allow the focus to be changed with the keyboard. By convention, the Tab key is used to move the focus to the next focusable component and Shift+Tab to the previous one. When graphical interfaces were first introduced, many computers did not have mice, so this alternative was necessary. This feature makes it easier for people unable to use a mouse to use the user interface. In certain circumstances the arrow keys can be used to change focus.

Window focus

The behaviour of focus on one's desktop can be governed by policies in window management.

Click to focus

On most mainstream user interfaces, such as ones made by Microsoft and Apple, it is common to find a "focus follows click" policy (or "click to focus"), where one must click the mouse inside of the window for that window to gain focus. This also typically results in the window being raised above all other windows on screen. If a click-to-focus model such as this is being used, the current application window continues to retain focus and collect input, even if the mouse pointer is over another application window.

Focus follows pointer

Another common policy on Unix systems using the X Window System (X11) is the "focus follows mouse" policy (or FFM), where the focus automatically follows the current placement of the pointer. The focused window is not necessarily raised; parts of it may remain below other windows. Window managers with this policy usually offer "autoraise," which raises the window when it is focused, typically after a configurable short delay. A possible consequence of a focus-follows-mouse policy is that no window has focus when the pointer is moved over the background with no window underneath; otherwise the focus simply remains in the last window.

Sloppy focus

The sloppy-focus model is a variant of the focus-follows-mouse model. It allows input to continue to be collected by the last focused window when the mouse pointer is moved away from any window, such as over a menubar or desktop area.

Focus models used by X11 window managers

Intra-window component focus

Individual components of a window
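Focus and blur events can be observed directly in most GUI toolkits. The sketch below uses Tkinter from Python's standard library, chosen here only as a convenient example toolkit: pressing Tab moves the focus between the two fields, firing a blur (<FocusOut>) on one widget and a focus (<FocusIn>) on the other.

 import tkinter as tk

 root = tk.Tk()
 first, second = tk.Entry(root), tk.Entry(root)
 for widget in (first, second):
     widget.pack()
     widget.bind("<FocusIn>", lambda e: print("focus gained:", e.widget))
     widget.bind("<FocusOut>", lambda e: print("focus lost:", e.widget))
 first.focus_set()  # give the first field the initial keyboard focus
 root.mainloop()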
https://en.wikipedia.org/wiki/Cron
The cron command-line utility is a job scheduler on Unix-like operating systems. Users who set up and maintain software environments use cron to schedule jobs (commands or shell scripts), also known as cron jobs, to run periodically at fixed times, dates, or intervals. It typically automates system maintenance or administration—though its general-purpose nature makes it useful for things like downloading files from the Internet and downloading email at regular intervals. Cron is most suitable for scheduling repetitive tasks. Scheduling one-time tasks can be accomplished using the associated at utility. Cron's name originates from chronos, the Greek word for time.

Overview

The actions of cron are driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. The crontab files are stored where the lists of jobs and other instructions to the cron daemon are kept. Users can have their own individual crontab files and often there is a system-wide crontab file (usually in /etc or a subdirectory of /etc e.g. ) that only system administrators can edit.

Each line of a crontab file represents a job, and looks like this:

 # ┌───────────── minute (0–59)
 # │ ┌───────────── hour (0–23)
 # │ │ ┌───────────── day of the month (1–31)
 # │ │ │ ┌───────────── month (1–12)
 # │ │ │ │ ┌───────────── day of the week (0–6) (Sunday to Saturday;
 # │ │ │ │ │                                     7 is also Sunday on some systems)
 # │ │ │ │ │
 # │ │ │ │ │
 # * * * * * <command to execute>

The syntax of each line expects a cron expression made of five fields which represent the time to execute the command, followed by a shell command to execute. While normally the job is executed when the time/date specification fields all match the current time and date, there is one exception: if both "day of month" (field 3) and "day of week" (field 5) are restricted (do not contain "*"), then one or both must match the current day.

For example, the following clears the Apache error log at one minute past midnight (00:01) every day, assuming that the default shell for the cron user is Bourne shell compliant:

 1 0 * * * printf "" > /var/log/apache/error_log

This example runs a shell program called export_dump.sh at 23:45 (11:45 PM) every Saturday:

 45 23 * * 6 /home/oracle/scripts/export_dump.sh

Note: On some systems it is also possible to specify */n to run for every n-th interval of time. Also, specifying multiple specific time intervals can be done with commas (e.g., 1,2,3). The line below would output "hello world" to the command line every 5th minute of every first, second and third hour (i.e., 01:00, 01:05, 01:10, up until 03:55):

 */5 1,2,3 * * * echo hello world

The configuration file for a user can be edited by calling crontab -e regardless of where the actual implementation stores this file. Some cron implementations, such as the popular 4th BSD edition written by Paul Vixie and included in many Linux distributions
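The expansion of one crontab field into the set of minute or hour values it matches can be sketched in a few lines of Python. This is illustrative only: real cron implementations also accept names such as "mon", differ in some dialect details, and apply the day-of-month/day-of-week rule described above.

 def expand(field, lo, hi):
     """Expand one cron field ("*", "1,2,3", "10-20", "*/5") to a set of ints."""
     values = set()
     for part in field.split(","):
         step = 1
         if "/" in part:
             part, step = part.split("/")
             step = int(step)
         if part == "*":
             start, end = lo, hi
         elif "-" in part:
             start, end = (int(x) for x in part.split("-"))
         else:
             start = end = int(part)
         values.update(range(start, end + 1, step))
     return values

 # "*/5 1,2,3 * * *": every 5th minute of hours 1-3, as in the example above.
 assert expand("1,2,3", 0, 23) == {1, 2, 3}
 assert min(expand("*/5", 0, 59)) == 0 and 55 in expand("*/5", 0, 59)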
https://en.wikipedia.org/wiki/Canny%20edge%20detector
The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny also produced a computational theory of edge detection explaining why the technique works.

Development

Canny edge detection is a technique to extract useful structural information from different vision objects and dramatically reduce the amount of data to be processed. It has been widely applied in various computer vision systems. Canny found that the requirements for the application of edge detection on diverse vision systems are relatively similar. Thus, an edge detection solution to address these requirements can be implemented in a wide range of situations. The general criteria for edge detection include:

Detection of edges with a low error rate, which means that the detection should accurately catch as many of the edges shown in the image as possible
The edge point detected from the operator should accurately localize on the center of the edge.
A given edge in the image should only be marked once, and where possible, image noise should not create false edges.

To satisfy these requirements Canny used the calculus of variations – a technique which finds the function which optimizes a given functional. The optimal function in Canny's detector is described by the sum of four exponential terms, but it can be approximated by the first derivative of a Gaussian. Among the edge detection methods developed so far, the Canny edge detection algorithm is one of the most strictly defined methods that provides good and reliable detection. Owing to its optimality with respect to the three criteria for edge detection and the simplicity of the process for implementation, it became one of the most popular algorithms for edge detection.

Process

The Canny edge detection algorithm can be broken down into five different steps:

Apply a Gaussian filter to smooth the image in order to remove the noise
Find the intensity gradients of the image
Apply gradient magnitude thresholding or lower bound cut-off suppression to get rid of spurious responses to edge detection
Apply a double threshold to determine potential edges
Track edges by hysteresis: finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges.

Gaussian filter

Since all edge detection results are easily affected by the noise in the image, it is essential to filter out the noise to prevent false detection caused by it. To smooth the image, a Gaussian filter kernel is convolved with the image. This step will slightly smooth the image to reduce the effects of obvious noise on the edge detector. The equation for a Gaussian filter kernel of size (2k+1)×(2k+1) is given by:

     H(i,j) = (1 / (2πσ²)) exp( −( (i − (k+1))² + (j − (k+1))² ) / (2σ²) ),   1 ≤ i, j ≤ 2k+1

Here is an example of a 5×5 Gaussian filter with σ = 1, where the asterisk denotes a convolution operation:

     B = (1/159) [ 2  4  5  4  2 ]
                 [ 4  9 12  9  4 ]
                 [ 5 12 15 12  5 ] * A
                 [ 4  9 12  9  4 ]
                 [ 2  4  5  4  2 ]

It is important to understand that the selection of the size of the
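The kernel formula translates directly into code. The sketch below builds the (2k+1)×(2k+1) kernel in plain Python and normalizes it so that its entries sum to one, which is common in practice because truncating the Gaussian to a finite window loses a little of its mass; the parameters are illustrative.

 import math

 def gaussian_kernel(k, sigma):
     size = 2 * k + 1
     kern = [[math.exp(-((i - (k + 1)) ** 2 + (j - (k + 1)) ** 2)
                       / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)
              for j in range(1, size + 1)]
             for i in range(1, size + 1)]
     total = sum(sum(row) for row in kern)
     return [[v / total for v in row] for row in kern]  # entries sum to 1

 kernel = gaussian_kernel(2, 1.0)   # a 5x5 smoothing kernel, as in the example above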
https://en.wikipedia.org/wiki/Noise%20reduction
Noise reduction is the process of removing noise from a signal. Noise reduction techniques exist for audio and images. Noise reduction algorithms may distort the signal to some degree. Noise rejection is the ability of a circuit to isolate an undesired signal component from the desired signal component, as with common-mode rejection ratio. All signal processing devices, both analog and digital, have traits that make them susceptible to noise. Noise can be random with an even frequency distribution (white noise), or frequency-dependent noise introduced by a device's mechanism or signal processing algorithms. In electronic systems, a major type of noise is hiss created by random electron motion due to thermal agitation. These agitated electrons rapidly add and subtract from the output signal and thus create detectable noise. In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains in the film determines the film's sensitivity, more sensitive film having larger-sized grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level. In general Noise reduction algorithms tend to alter signals to a greater or lesser degree. The local signal-and-noise orthogonalization algorithm can be used to avoid changes to the signals. In seismic exploration Boosting signals in seismic data is especially crucial for seismic imaging, inversion, and interpretation, thereby greatly improving the success rate in oil & gas exploration. The useful signal that is smeared in the ambient random noise is often neglected and thus may cause fake discontinuity of seismic events and artifacts in the final migrated image. Enhancing the useful signal while preserving edge properties of the seismic profiles by attenuating random noise can help reduce interpretation difficulties and misleading risks for oil and gas detection. In audio Tape hiss is a performance-limiting issue in analog tape recording. This is related to the particle size and texture used in the magnetic emulsion that is sprayed on the recording media, and also to the relative tape velocity across the tape heads. Four types of noise reduction exist: single-ended pre-recording, single-ended hiss reduction, single-ended surface noise reduction, and codec or dual-ended systems. Single-ended pre-recording systems (such as Dolby HX Pro), work to affect the recording medium at the time of recording. Single-ended hiss reduction systems (such as DNL or DNR) work to reduce noise as it occurs, including both before and after the recording process as well as for live broadcast applications. Single-ended surface noise reduction (such as CEDAR and the earlier SAE 5000A, Burwen TNE 7000, and Packb
https://en.wikipedia.org/wiki/MTV%20Video%20Music%20Award%20for%20Video%20of%20the%20Year
The MTV Video Music Award for Video of the Year is the most prestigious competitive award and the final award presented at the annual MTV Video Music Awards. The award was created by the U.S. network MTV to honor artists with the best music videos. At the first MTV Video Music Awards ceremony in 1984, the Video of the Year honor was presented to The Cars for the video "You Might Think". Originally, all winners were determined by a special panel of music video directors, producers, and record company executives. Since the awards, winners of major categories are determined by viewers' votes through MTV's website, while the jury decides in the technical categories.

History

Taylor Swift holds the record for the most wins, with a total of four for "Bad Blood" (), "You Need to Calm Down" (), All Too Well: The Short Film () and "Anti-Hero" (). Eminem holds the record for the most nominations, with seven as lead artist. David Lee Roth (), U2 (), and Lady Gaga () are the only acts to have had two Video of the Year nominations in a single ceremony. Two acts have won both the Video of the Year and the honorary Michael Jackson Video Vanguard Award in the same night—Peter Gabriel in with "Sledgehammer" and Justin Timberlake in with "Mirrors". Swift is the first artist to win Video of the Year for a self-directed video, with All Too Well: The Short Film. Kendrick Lamar, Swift, and Lil Nas X have further won the award for a video they co-directed: Lamar for "Humble" in , Swift for "You Need to Calm Down" in 2019, and Lil Nas X for "Montero (Call Me By Your Name)" in .

Recipients

1980s

1990s

2000s

2010s

2020s

Statistics

Artists with multiple wins

4 wins
Taylor Swift

2 wins
Beyoncé
Eminem
Kendrick Lamar
Missy Elliott
Rihanna

Artists with multiple nominations

8 nominations
Eminem

7 nominations
Beyoncé

6 nominations
Drake
Taylor Swift

5 nominations
Bruno Mars
Justin Timberlake

4 nominations
Jay-Z
Kanye West
Lady Gaga
Madonna
Rihanna
U2

3 nominations
Aerosmith
Ariana Grande
Britney Spears
Doja Cat
Ed Sheeran
Kendrick Lamar
Lil Nas X
Michael Jackson
Missy Elliott
Pharrell Williams
R.E.M.
Red Hot Chili Peppers
The Weeknd

2 nominations
Adele
Beastie Boys
Billie Eilish
Christina Aguilera
David Lee Roth
DJ Khaled
Don Henley
Future
Green Day
Gwen Stefani
Janet Jackson
Jonas Brothers
Justin Bieber
Katy Perry
Lil' Kim
Miley Cyrus
Nirvana
NSYNC
Olivia Rodrigo
Peter Gabriel
Steve Winwood
SZA
Will Smith

See also
Grammy Award for Best Music Video
MTV Europe Music Award for Best Video

Notes

References

External links
MTV Video Music Awards

Awards established in 1984
https://en.wikipedia.org/wiki/Synchronous%20Data%20Link%20Control
Synchronous Data Link Control (SDLC) is a computer communications protocol. It is the layer 2 protocol for IBM's Systems Network Architecture (SNA). SDLC supports multipoint links as well as error correction. It also runs under the assumption that an SNA header is present after the SDLC header. SDLC was mainly used by IBM mainframe and midrange systems; however, implementations exist on many platforms from many vendors. In the United States and Canada, SDLC can be found in traffic control cabinets.

In 1975, IBM developed SDLC, the first bit-oriented protocol, from work done for IBM in the early 1970s. This de facto standard was adopted by ISO as High-Level Data Link Control (HDLC) in 1979 and by ANSI as Advanced Data Communication Control Procedures (ADCCP). The latter standards added features such as the Asynchronous Balanced Mode and frame sizes that did not need to be multiples of bit-octets, but also removed some of the procedures and messages (such as the TEST message).

SDLC operates independently on each communications link, and can operate on point-to-point, multipoint, or loop facilities, on switched or dedicated, two-wire or four-wire circuits, and with full-duplex and half-duplex operation. A unique characteristic of SDLC is its ability to mix half-duplex secondary stations with full-duplex primary stations on four-wire circuits, thus reducing the cost of dedicated facilities.

Intel used SDLC as the base protocol for BITBUS, still popular in Europe as a fieldbus, and included support in several controllers (i8044/i8344, i80152). The 8044 controller is still in production by third-party vendors. Other vendors putting hardware support for SDLC (and the slightly different HDLC) into communication controller chips of the 1980s included Zilog, Motorola, and National Semiconductor. As a result, a wide variety of equipment in the 1980s used it and it was very common in the mainframe-centric corporate networks which were the norm in the 1980s. The most common alternatives to SNA with SDLC were probably DECnet with Digital Data Communications Message Protocol (DDCMP), Burroughs Network Architecture (BNA) with Burroughs Data Link Control (BDLC), and ARPANET with IMPs.

Differences between SDLC and HDLC

HDLC is mostly an extension of SDLC, but some features were deleted or renamed.

HDLC features not in SDLC

Features present in HDLC, but not SDLC, are:

frames not a multiple of 8 bits long are illegal in SDLC, but optionally legal in HDLC.
HDLC optionally allows addresses more than 1 byte long.
HDLC has an option for a 32-bit frame check sequence.
asynchronous response mode, and the associated SARM and SARME U frames,
asynchronous balanced mode, and the associated SABM and SABME U frames, and
several other frame types created for HDLC: the selective reject (SREJ) S frame, the reset (RSET) command, and the nonreserved (NR0 through NR3) U frames.

Also not in SDLC are later HDLC extensions in ISO/IEC 13239 such as:

15- and 31-bit seq
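A defining mechanism of bit-oriented protocols such as SDLC is zero-bit insertion ("bit stuffing"): after five consecutive 1 bits in the frame body, the transmitter inserts a 0 so that the 01111110 flag sequence can never occur inside a frame, and the receiver removes it again. A minimal Python sketch of just this mechanism (frame assembly, addressing, and the frame check sequence are omitted):

 def stuff(bits):
     out, run = [], 0
     for b in bits:
         out.append(b)
         run = run + 1 if b == 1 else 0
         if run == 5:
             out.append(0)  # insert a zero after five consecutive ones
             run = 0
     return out

 def unstuff(bits):
     out, run, i = [], 0, 0
     while i < len(bits):
         out.append(bits[i])
         run = run + 1 if bits[i] == 1 else 0
         if run == 5:
             i += 1  # skip the stuffed zero that must follow
             run = 0
         i += 1
     return out

 payload = [0, 1, 1, 1, 1, 1, 1, 0]  # would mimic a flag if sent unstuffed
 assert unstuff(stuff(payload)) == payload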
https://en.wikipedia.org/wiki/Linear%20trend%20estimation
Linear trend estimation is a statistical technique to aid interpretation of data. When a series of measurements of a process are treated as, for example, a sequence or time series, trend estimation can be used to make and justify statements about tendencies in the data, by relating the measurements to the times at which they occurred. This model can then be used to describe the behavior of the observed data, without explaining it. In particular, it may be useful to determine if measurements exhibit an increasing or decreasing trend which is statistically distinguished from random behavior. Some examples are determining the trend of the daily average temperatures at a given location from winter to summer, and determining the trend in a global temperature series over the last 100 years. In the latter case, issues of homogeneity are important (for example, about whether the series is equally reliable throughout its length). Fitting a trend: least-squares Given a set of data and the desire to produce some kind of model of those data, there are a variety of functions that can be chosen for the fit. If there is no prior understanding of the data, then the simplest function to fit is a straight line with the data values on the y axis, and time (t = 1, 2, 3, ...) on the x axis. Once it has been decided to fit a straight line, there are various ways to do so, but the most usual choice is a least-squares fit. This method minimizes the sum of the squared errors in the data series y. Given a set of points in time t and data values y_t observed for those points in time, values of a (the slope) and b (the intercept) are chosen so that the sum of squared residuals Σ_t (y_t − (at + b))² is minimized. Here at + b is the trend line, so the sum of squared deviations from the trend line is what is being minimized. This can always be done in closed form since this is a case of simple linear regression. For the rest of this article, “trend” will mean the slope of the least-squares line, since this is a common convention. Trends in random data Before considering trends in real data, it is useful to understand trends in random data. If a series which is known to be random is analysed – fair dice falls, or computer-generated pseudo-random numbers – and a trend line is fitted through the data, the chances of an exactly zero estimated trend are negligible. But the trend would be expected to be small. If an individual series of observations is generated from simulations that employ a given variance of noise that equals the observed variance of our data series of interest, and a given length (say, 100 points), a large number of such simulated series (say, 100,000 series) can be generated. These 100,000 series can then be analysed individually to calculate estimated trends in each series, and these results establish a distribution of estimated trends that are to be expected from such random data – see diagram. Such a distribution will be normal according to the central limit theorem except in pathological cases. A level of statistical certainty, S,
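The two procedures just described — the least-squares fit and the null distribution of trends in pure noise — can be sketched in a few lines of Python with NumPy. The series length, noise level, true slope and the 10,000-series simulation size here are arbitrary illustrative choices (smaller than the 100,000 mentioned above).

import numpy as np

rng = np.random.default_rng(42)
t = np.arange(100)

# least-squares straight-line fit: the slope is the "trend"
y = 0.05 * t + rng.normal(0.0, 1.0, t.size)          # synthetic series with a known trend
slope, intercept = np.polyfit(t, y, 1)

# distribution of estimated trends in noise-only series of the same length and variance
null_slopes = np.array([np.polyfit(t, rng.normal(0.0, 1.0, t.size), 1)[0]
                        for _ in range(10_000)])
frac = np.mean(np.abs(null_slopes) >= abs(slope))     # fraction of random series with a trend this large
print(f"estimated slope {slope:.4f}; fraction of noise-only series exceeding it: {frac:.4f}")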
https://en.wikipedia.org/wiki/Cybiko
The Cybiko is a handheld computer introduced in the United States by David Yang's company Cybiko Inc. as a retail test market in New York in April 2000, and rolled out nationwide in May 2000. It is designed for teens, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications. It features a rubber QWERTY keyboard. An MP3 player add-on with a SmartMedia card slot was made for the unit as well. The company stopped manufacturing the units after two product versions and a few years on the market. Cybikos can communicate with each other up to a maximum range of . Several Cybikos can chat with each other in a wireless chatroom. By the end of 2000, the Cybiko Classic had sold over 500,000 units. Models Cybiko Classic There are two models of the Classic Cybiko. Visually, the only difference is that the original version has a power switch on the side, while the updated version uses the "escape" key for power management. Internally, the differences between the two models are in the internal memory and the firmware location. The CPU is a Hitachi H8S/2241 clocked at 11.0592 MHz, and the Cybiko Classic also has an Atmel AT90S2313 co-processor, clocked at 4 MHz, to provide some support for RF communications. It has 512 KB of flash memory-based ROM and 256 KB of RAM installed. An add-on slot is located in the rear. The Cybiko Classics were sold in five colors: blue, purple, neon green, white, and black. The black version has a yellow keypad, instead of the white unit found on other Cybikos. The add-on slot has the same pin arrangement as a PC card, but it is not electrically compatible. Cybiko Xtreme The Cybiko Xtreme is the second-generation Cybiko handheld. It features various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size. The CPU is a Hitachi H8S/2323 at 18 MHz, and like the original version, it also has an Atmel AT90S2313 co-processor at 4 MHz to provide some support for RF communications. 512 KiB of flash ROM and 1.5 MiB of RAM are installed. It features an add-on slot in the rear, which is compatible with the MP3 player. It was released in two variants. The US variant (Model No. CY44801) has a frequency range of 902–928 MHz, and the European variant (Model No. CY44802) a frequency range of 868–870 MHz. No other functional difference exists between these variants. Options MP3 player Classic MP3 Player: The MP3 player for the Classic plugs into the bottom of the Cybiko and uses SmartMedia cards; it can support a maximum size of 64 MB. The player has built-in controls. Xtreme MP3 Player: The MP3 player plugs into the rear of the Cybiko Xtreme. It has a slot for one MMC memory card. The MP3 player can only be controlled from the Cybiko. A memory card from the MP3 player can also be addressed from the Cybiko and used for
https://en.wikipedia.org/wiki/Nick%20Pippenger
Nicholas John Pippenger is a researcher in computer science. He has produced a number of fundamental results, many of which are widely used in the fields of theoretical computer science, database processing and compiler optimization. He has also achieved the rank of IBM Fellow at the Almaden IBM Research Center in San Jose, California. He has taught at the University of British Columbia in Vancouver, British Columbia, Canada and at Princeton University in the US. In the fall of 2006 Pippenger joined the faculty of Harvey Mudd College. Pippenger holds a B.S. in Natural Sciences from Shimer College and a PhD from the Massachusetts Institute of Technology. He is married to Maria Klawe, President of Harvey Mudd College. In 1997 he was inducted as a Fellow of the Association for Computing Machinery. In 2013 he became a fellow of the American Mathematical Society. The complexity class, Nick's Class (NC), of problems quickly solvable on a parallel computer, was named by Stephen Cook after Nick Pippenger for his research on circuits with polylogarithmic depth and polynomial size. Pippenger became one of the most recent mathematicians to write a technical article in Latin, when he published a brief derivation of a new formula for e, whereby the Wallis product for π is modified by taking roots of its terms: References External links Pippenger's web page at HMC Harvey Mudd College faculty IBM Fellows Fellows of the Association for Computing Machinery Fellows of the American Mathematical Society Theoretical computer scientists American computer scientists Living people Shimer College alumni Year of birth missing (living people) Massachusetts Institute of Technology alumni
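The formula itself did not survive this extraction. The infinite product usually attributed to Pippenger is stated below from general knowledge rather than from the source, so the exact grouping should be treated as unverified; it modifies the Wallis product by grouping terms and taking successive roots:

\[
\frac{e}{2} \;=\; \left(\frac{2}{1}\right)^{1/2}\left(\frac{2}{3}\cdot\frac{4}{3}\right)^{1/4}\left(\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdot\frac{8}{7}\right)^{1/8}\cdots
\]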
https://en.wikipedia.org/wiki/Hierarchical%20clustering
In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis that seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two categories: Agglomerative: This is a "bottom-up" approach: Each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. Divisive: This is a "top-down" approach: All observations start in one cluster, and splits are performed recursively as one moves down the hierarchy. In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram. Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances. On the other hand, except for the special case of single-linkage distance, none of the algorithms (except exhaustive search in O(2^n)) can be guaranteed to find the optimum solution. Complexity The standard algorithm for hierarchical agglomerative clustering (HAC) has a time complexity of O(n³) and requires Ω(n²) memory, which makes it too slow for even medium data sets. However, for some special cases, optimal efficient agglomerative methods (of complexity O(n²)) are known: SLINK for single-linkage and CLINK for complete-linkage clustering. With a heap, the runtime of the general case can be reduced to O(n² log n), an improvement on the aforementioned bound of O(n³), at the cost of further increasing the memory requirements. In many cases, the memory overheads of this approach are too large to make it practically usable. Divisive clustering with an exhaustive search is O(2^n), but it is common to use faster heuristics to choose splits, such as k-means. Cluster Linkage In order to decide which clusters should be combined (for agglomerative), or where a cluster should be split (for divisive), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by use of an appropriate distance d, such as the Euclidean distance, between single observations of the data set, and a linkage criterion, which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets. The choice of metric as well as linkage can have a major impact on the result of the clustering, where the lower level metric determines which objects are most similar, whereas the linkage criterion influences the shape of the clusters. For example, complete-linkage tends to produce more spherical clusters than single-linkage. The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations. Some commonly used linkage criteria between two sets of observations A and B and a distance d are: Some of these can only be recomputed recursively (WPGMA, WPGMC), for m
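A minimal agglomerative example in Python using SciPy: two synthetic blobs are merged under complete linkage and the dendrogram is cut into two clusters. The data set, seed and the choice of complete linkage are illustrative, not prescriptive.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)),    # first blob
               rng.normal(3.0, 0.3, (10, 2))])   # second blob

Z = linkage(X, method="complete")                # agglomerative, complete linkage, Euclidean distance
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the hierarchy into two flat clusters
print(labels)                                    # first ten points share one label, last ten the other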
https://en.wikipedia.org/wiki/Triangle%20Fraternity
Triangle is a fraternity for male students majoring in engineering, architecture, and the physical, mathematical, biological, and computer sciences. It is the only member of the North American Interfraternity Conference to limit its membership recruitment to these majors. Triangle Fraternity organized at the University of Illinois at Urbana–Champaign in the fall of 1906 and was incorporated by the state of Illinois on 15 April 1907, which is celebrated each year as Founders' Day. As of February 2020 there are 39 chapters and six colonies of Triangle Fraternity active in the U.S. The headquarters is located in Plainfield, Indiana in a historic building erected as a Carnegie library in 1912. Triangle Fraternity is one of three active national fraternities not to use Greek letters for its name, the others being Acacia and FarmHouse. History Triangle was formed in the fall of 1906 by sixteen civil engineering juniors at the University of Illinois. It was formally incorporated on 15 April 1907. The date of incorporation has been designated as Founders' Day, and Triangle celebrates it every year at each chapter. Triangle's mission statement reads, "The purpose of Triangle shall be to maintain a fraternity of engineers, architects and scientists. It shall carry out its purpose by establishing chapters that develop balanced men who cultivate high moral character, foster lifelong friendships, and live their lives with integrity." Symbols Colors: Old Rose and Gray Coat of Arms: The crest consists of a rising sun beneath a Triangle T. Beneath is an esquire helmet in profile. At the center of the Coat of Arms is the fraternity's shield and a ribbon containing the organization's motto "Veritas Omnia Vincit" (Truth Conquers All). Surrounding the shield is a mantling. Flower: White Chrysanthemum Flag: The Coat of Arms on a Yellow T with Gray field. Code of Ethics Triangle Fraternity was founded on high ethical and moral ideals, and expects the men of the fraternity to follow a set Code of Ethics, which is as follows: As a member of Triangle, I recognize my obligation to: Observe the precepts of the Fraternity as set forth in the Ritual; Accept cheerfully my full share of any task, however menial, involved in maintaining a chapter home; Preserve and promote the chosen ideals of my Fraternity; Pay all personal bills promptly, and always live within my means; Help create in my chapter home an environment in which enduring friendships may be formed; Maintain a creditable scholastic record; Promote the welfare of my profession; Maintain my self-respect by proper conduct at all times; Uphold faithfully the traditions and program of my Alma Mater; Pay the price of success in honest effort. Notable members Chapter list See also List of social fraternities and sororities Professional fraternities and sororities References External links Triangle Education Foundation Student organizations established in 1907 North American Interfratern
https://en.wikipedia.org/wiki/Xplore
Xplore may refer to: Science and technology IEEE Xplore, an online database of IEEE research publications XploRe, a statistical software environment Xplore Technologies, designer, marketer and manufacturer of rugged tablet computers Xplore (space exploration company), a satellite manufacturer and operator Xplore!, a science centre in Wrexham, Wales Xplore, a series of PDAs by Group Sense PDA Other uses Xplore Dundee, a bus operator based in Dundee, Scotland See also Explorer (disambiguation) Exploration
https://en.wikipedia.org/wiki/Patricia%20Crowther%20%28caver%29
Patricia ("Pat") P. Crowther (born 1943), later known as Patricia P. Wilcox, is an American cave explorer and cave surveyor active in the 1960s and early 1970s. She also worked as a computer programmer. Crowther was well-known among Kentucky cavers for her slight frame (she weighed 115 pounds) and her extreme dedication. These two traits led her to pursue promising leads that other cavers were unwilling or unable to attempt. Of particular note is her traversal of a narrow canyon known as "The Tight Spot" in the portion of the Flint Ridge Cave System underlying Houchins Valley. The Tight Spot proved to be the critical juncture leading to the passages connecting Mammoth Cave and the Flint Ridge Cave System. Both Patricia Crowther and her then-husband Will Crowther, also a computer programmer, participated in many expeditions that attempted to connect the caves. She was part of the September 9, 1972 expedition that discovered and surveyed the historic final connection. Crowther earned a B.S. degree in physics at MIT where she met and married William. The couple had two daughters, Sandy and Laura, and divorced in 1976. Later that year, William would go on to create Colossal Cave Adventure, one of the first examples of interactive fiction, based on his caving experiences with Pat in the Mammoth Cave system as a way to connect with his daughters after the divorce. Pat first encountered the game at a Boston meeting of the Cave Research Foundation in 1976 or 1977. Though embellished to include elements like an underground volcano, cavers noted that the game was accurate to Crowther's maps and descriptions. In 1977, Crowther married John Wilcox, who had led the cave connection expeditions. They were married for 33 years until his death on September 1, 2010. Crowther authored The Grand Kentucky Junction, an account of the expeditions undertaken to connect the Mammoth and Flint Ridge cave systems. Crowther participated in the 1997 National Geographic documentary Mysteries Underground, which discussed the connection of the Flint Ridge Cave System with Mammoth Cave. References American cavers American explorers Living people 1943 births
https://en.wikipedia.org/wiki/JDSU
JDS Uniphase Corporation (JDSU) was an American company that designed and manufactured products for optical communications networks, communications test and measurement equipment, lasers, optical solutions for authentication and decorative applications, and other custom optics. It was headquartered in Milpitas, California. In August 2015, JDSU split into two different companies – Viavi Solutions and Lumentum Holdings. History Uniphase was started in 1979 in a San Jose, California garage, and made lasers for chip makers and scanners. In 1981, JDS Optics was founded in Ottawa, Ontario by Philip Garel-Jones, Gary Duck, Jozef Straus, and Bill Sinclair. The "JDS" is short for Jones, Duck and Straus/Sinclair. The company became JDS Fitel when it formed a partnership with Fitel, a fiber optic and optical connector company. In 1999, JDSU was formed by the merger between JDS Fitel and Uniphase, and it became known as JDS Uniphase subsequent to the merger. Three other major fiber companies were acquired by JDS Uniphase during the telecom boom – Optical Coating Laboratory Inc. (OCLI), bought for $6.2 billion and based in Santa Rosa, California, E-TEK Dynamics, bought for $15 billion, and SDL, bought for $45 billion, both based in San Jose, California. In 2003, CEO Straus retired, and the company moved its headquarters to San Jose, California, to consolidate its business. On August 3, 2005, the company acquired test and measurement equipment company Acterna for $760 million, which became part of JDSU's Test and Measurement Group. Acterna had been formed by the May 2000 merger of network test solutions developer Wavetek Wandel Goltermann (WWG) and hand-held test equipment developer TTC. In 2006, the company moved its headquarters to Milpitas, California. In December 2013, the company announced it was acquiring fellow network performance management company Network Instruments, for $200 million. On August 1, 2015, JDSU split into Viavi Solutions and Lumentum Holdings Inc. Stock During the 1990s, JDS Uniphase stock was a high-flying favorite of tech-stock investors. Its stock price doubled three times, and three 2:1 stock splits occurred roughly every 90 days during the last half of 1999 through early 2000, making millionaires of many employees who were stock option holders, and further enabling JDS Uniphase to go on an acquisition and merger binge. After a downturn in the telecom industry as part of the dot com bubble, JDS Uniphase announced in late July 2001 the largest (up to then) write-down of goodwill. Employment soon dropped as part of the Global Realignment Program from nearly 29,000 to approximately 5,300, many of its factories and facilities were closed around the world, and the stock price dropped from $153 per share to less than $2 per share. On September 23, 2005, JDSU announced a one-for-eight reverse stock split. Shareholder litigation After the 2001 crash of the telecommunications industry, the state of Connecticut filed a lawsuit agai
https://en.wikipedia.org/wiki/Password%20cracking
In cryptanalysis and computer security, password cracking is the process of recovering passwords from data that has been stored in or transmitted by a computer system in scrambled form. A common approach (brute-force attack) is to repeatedly try guesses for the password and to check them against an available cryptographic hash of the password. Another type of approach is password spraying, which is often automated and occurs slowly over time in order to remain undetected, using a list of common passwords. The purpose of password cracking might be to help a user recover a forgotten password (due to the fact that installing an entirely new password would involve system administration privileges), to gain unauthorized access to a system, or to act as a preventive measure whereby system administrators check for easily crackable passwords. On a file-by-file basis, password cracking is utilized to gain access to digital evidence to which a judge has allowed access, when a particular file's permissions restrict access. Time needed for password searches The time to crack a password is related to bit strength, which is a measure of the password's entropy, and the details of how the password is stored. Most methods of password cracking require the computer to produce many candidate passwords, each of which is checked. One example is brute-force cracking, in which a computer tries every possible key or password until it succeeds. With multiple processors, this time can be optimized by searching from the last possible group of symbols and from the beginning at the same time, with other processors being placed to search through a designated selection of possible passwords. More common methods of password cracking, such as dictionary attacks, pattern checking, word list substitution, etc., attempt to reduce the number of trials required and will usually be attempted before brute force. Higher password bit strength exponentially increases the number of candidate passwords that must be checked, on average, to recover the password and reduces the likelihood that the password will be found in any cracking dictionary. The ability to crack passwords using computer programs is also a function of the number of possible passwords per second which can be checked. If a hash of the target password is available to the attacker, this number can be in the billions or trillions per second, since an offline attack is possible. If not, the rate depends on whether the authentication software limits how often a password can be tried, either by time delays, CAPTCHAs, or forced lockouts after some number of failed attempts. Another situation where quick guessing is possible is when the password is used to form a cryptographic key. In such cases, an attacker can quickly check to see if a guessed password successfully decodes encrypted data. For some kinds of password hash, ordinary desktop computers can test over a hundred million passwords per second using password cracking tools r
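The brute-force approach described above — generate candidates, hash each one, compare against the available hash — fits in a few lines of Python. This is a deliberately tiny sketch: the lowercase-only charset, four-character limit and unsalted MD5 are illustrative simplifications, while real crackers use optimized GPU kernels, dictionaries and per-password salts.

import hashlib
import itertools
import string

def crack(target_hash: str, charset: str = string.ascii_lowercase, max_len: int = 4):
    """Toy offline attack: hash every candidate and compare against the target hash."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# recover a known weak password from its hash
print(crack(hashlib.md5(b"abc").hexdigest()))   # prints: abc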
https://en.wikipedia.org/wiki/Von%20Neumann%20architecture
The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on a 1945 description by John von Neumann, and by others, in the First Draft of a Report on the EDVAC. The document describes a design architecture for an electronic digital computer with these components: A processing unit with both an arithmetic logic unit and processor registers A control unit that includes an instruction register and a program counter Memory that stores data and instructions External mass storage Input and output mechanisms The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system. The design of a von Neumann architecture machine is simpler than in a Harvard architecture machine—which is also a stored-program system, yet has one dedicated set of address and data buses for reading and writing to memory, and another set of address and data buses to fetch instructions. A stored-program computer uses the same underlying mechanism to encode both program instructions and data as opposed to designs which use a mechanism such as discrete plugboard wiring or fixed control circuitry for instruction implementation. Stored-program computers were an advancement over the manually reconfigured or fixed function computers of the 1940s, such as the Colossus and the ENIAC. These were programmed by setting switches and inserting patch cables to route data and control signals between various functional units. The vast majority of modern computers use the same hardware mechanism to encode and store both data and program instructions, but have caches between the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instruction and data fetches use separate buses (split cache architecture). History The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot run a word processor or games. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as "designed" for a particular task. "Reprogramming"—when possible at all—was a laborious process that started with flowcharts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up and debug a program on ENIAC. With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, an ins
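The defining property — one memory holding both instructions and data, walked by a fetch-decode-execute loop — can be made concrete with a toy machine. This Python sketch is purely illustrative (the four opcodes and the memory layout are invented for the example, not drawn from the EDVAC report); note that the program at cells 0–7 and the data at cells 8–9 live in the same array, so a program could even rewrite itself.

def run(memory):
    """A minimal stored-program machine: fetch, decode, execute from one shared memory."""
    pc, acc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]   # instruction fetch uses the same memory as data
        pc += 2
        if op == 0:      # HALT
            return acc
        elif op == 1:    # LOAD addr
            acc = memory[arg]
        elif op == 2:    # ADD addr
            acc += memory[arg]
        elif op == 3:    # STORE addr
            memory[arg] = acc

# program: LOAD 8, ADD 9, STORE 8, HALT; data lives at cells 8-9
mem = [1, 8, 2, 9, 3, 8, 0, 0, 40, 2]
print(run(mem), mem[8])   # both print 42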
https://en.wikipedia.org/wiki/21%20Jump%20Street
21 Jump Street is an American police procedural television series that aired on the Fox network and in first-run syndication from April 12, 1987 to April 27, 1991, running for 103 episodes. The series focuses on a squad of youthful-looking undercover police officers investigating crimes in schools, gangs, and other teenage venues. It was originally going to be titled Jump Street Chapel, after the deconsecrated church building in which the unit has its headquarters, but was changed at Fox's request so as not to mislead viewers into thinking it was a religious program. Created by Patrick Hasburgh and Stephen J. Cannell, the series was produced by Patrick Hasburgh Productions and Stephen J. Cannell Productions in association with 20th Century Fox Television. Executive Producers included Hasburgh, Cannell, Steve Beers and Bill Nuss. The show was an early hit for the fledgling Fox network, and was created to attract a younger audience. The final season aired in first-run syndication mainly on local Fox affiliates. It was later rerun on the FX cable network from 1996 to 1998. The series provided a spark to Johnny Depp's nascent acting career, garnering him national recognition as a teen idol. Depp found this status irritating, but he continued on the series under his contract and was paid $45,000 per episode. Eventually he was released from his contract after the fourth season. A spin-off series, Booker, was produced for the character of Dennis Booker (Richard Grieco); it ran only one season, from September 24, 1989, to May 6, 1990. A film adaptation directed by Phil Lord and Christopher Miller was released on March 16, 2012. The film is set in the same continuity as the series, with Johnny Depp, Holly Robinson and Peter DeLuise reprising their characters in cameo appearances. Richard Grieco and Dustin Nguyen also have cameos in the 2014 film sequel 22 Jump Street. Premise The series focuses on a group of police officers headquartered at the eponymous address. These young officers all have especially youthful appearances, allowing them to pass for teenagers. Their assignments generally consist of undercover work in high schools or, less commonly, colleges, where they generally investigate drug trafficking and abuse. The show's plots cover issues such as alcoholism, hate crimes, drug abuse, homophobia, AIDS, child abuse, and sexual promiscuity. Similarly, each problem is often solved by the end of the hour-long episode, giving an implicit moral about a particular activity's impact. When the show originally aired, some episodes were followed immediately by public service announcements featuring cast members. Episodes Cast Jeff Yagher was originally cast as Officer Tom Hanson in the pilot. He was replaced after the original pilot episode was filmed, and his scenes were reshot with Johnny Depp. Midway through the first season, Frederic Forrest was replaced by Steven Williams. On the show, Forrest's character Richard Jenko is killed by a drunk drive
https://en.wikipedia.org/wiki/MOP
A mop is an implement for mopping floors. MOP, mop or MoP may refer to: Computer science Maintenance Operations Protocol, in computer networks Metaobject protocol, a technique that allows a computer programmer to extend or alter the semantics of a language Multiple Online Programming – see MINIMOP Government and organizations Macanese pataca, the currency of Macau, by ISO 4217 code Ministry of Public Works (Chile) United Nations Messengers of Peace Messengers of Peace (Scouting) Places Manila Ocean Park, an aquarium in the Philippines Mount Pleasant Municipal Airport (Michigan) (IATA: MOP), in Mount Pleasant, Michigan Science mu opioid peptide (MOP) receptor, also referred to as the mu opioid receptor MOP Flippase, Multidrug/Oligosaccharidyl-lipid/Polysaccharide (MOP) Flippase superfamily of transport proteins Muriate of potash, see potassium chloride Mathematical Olympiad Program, held at Carnegie Mellon University to train team members for the International Mathematical Olympiad Sports Major Opportunity Point, tennis terminology used to describe the point 0-30 Most Outstanding Player, see also Most Valuable Player NCAA basketball tournament Most Outstanding Player in the NCAA basketball tournaments List of NCAA Division I Ice Hockey Tournament Most Outstanding Player in NCAA ice hockey tournaments Other GBU-57A/B MOP, or Massive Ordnance Penetrator, a bomb used by the United States Air Force Manual of Practice, or Project Resource Manual, published by the Construction Specifications Institute M.O.P., or Mash Out Posse, an American rap duo Mother of pearl Museum of Printing, a collection and library dedicated to the history and culture of printing and graphic arts A type of paintbrush See also Mop & the Dropouts, a 1980s Australian band led by Mop Conlon MOPP (disambiguation) Mops (disambiguation)
https://en.wikipedia.org/wiki/Bert%20Sutherland
William Robert Sutherland (May 10, 1936 – February 18, 2020) was an American computer scientist who was the longtime manager of three prominent research laboratories, including Sun Microsystems Laboratories (1992–1998), the Systems Science Laboratory at Xerox PARC (1975–1981), and the Computer Science Division of Bolt, Beranek and Newman, Inc. which helped develop the ARPANET. In these roles, Sutherland participated in the creation of the personal computer, the technology of advanced microprocessors, the Smalltalk programming language, the Java programming language and the Internet. Unlike traditional corporate research managers, Sutherland added individuals from fields like psychology, cognitive science, and anthropology to enhance the work of his technology staff. He also directed his scientists to take their research, like the Xerox Alto "personal" computer, outside of the laboratory to allow people to use it in a corporate setting and to observe their interaction with it. In addition, Sutherland fostered a collaboration between the researchers at California Institute of Technology developing techniques of very large scale integrated circuits (VLSI) — his brother Ivan and Carver Mead — and Lynn Conway of his PARC staff. With PARC resources made available by Sutherland, Mead and Conway developed a textbook and university syllabus that helped expedite the development and distribution of a technology whose effect is now immeasurable. Sutherland said that a research lab is primarily a teaching institution, "teaching whatever is new so that the new can become familiar, old, and used widely." Sutherland was born in Hastings, Nebraska, on May 10, 1936, to a father from New Zealand; his mother was from Scotland. The family moved to Wilmette, Illinois, then Scarsdale, New York, for his father's career. Bert Sutherland graduated from Scarsdale High School, then received his bachelor's degree in electrical engineering from Rensselaer Polytechnic Institute (RPI), and his master's degree and Ph.D. from Massachusetts Institute of Technology (MIT); his thesis advisor was Claude Shannon. During his military service in the United States Navy, he was awarded the Legion of Merit as a Carrier ASW plane commander. He was the older brother of Ivan Sutherland. Bert Sutherland died on February 18, 2020, aged 83. References 1936 births 2020 deaths Sun Microsystems people Recipients of the Legion of Merit Rensselaer Polytechnic Institute alumni United States Navy officers Scientists at PARC (company) People from Hastings, Nebraska Scientists from Nebraska American people of Scottish descent American people of New Zealand descent Scientists from New York (state) People from Scarsdale, New York Scarsdale High School alumni
https://en.wikipedia.org/wiki/ChucK
ChucK is a concurrent, strongly timed audio programming language for real-time synthesis, composition, and performance, which runs on Linux, Mac OS X, Microsoft Windows, and iOS. It is designed to favor readability and flexibility for the programmer over other considerations such as raw performance. It natively supports deterministic concurrency and multiple, simultaneous, dynamic control rates. Another key feature is the ability to live code: adding, removing, and modifying code on the fly, while the program is running, without stopping or restarting. It has a highly precise timing/concurrency model, allowing for arbitrarily fine granularity. It offers composers and researchers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs, and real-time interactive control. ChucK was created and chiefly designed by Ge Wang as a graduate student working with Perry R. Cook. ChucK is distributed freely under the terms of the GNU General Public License on Mac OS X, Linux and Microsoft Windows. On iPhone and iPad, ChiP (ChucK for iPhone) is distributed under a limited, closed source license, and is not currently licensed to the public. However, the core team has stated that it would like to explore "ways to open ChiP by creating a beneficial environment for everyone". Language features The ChucK programming language is a loosely C-like object-oriented language, with strong static typing. ChucK is distinguished by the following characteristics: Direct support for real-time audio synthesis A powerful and simple concurrent programming model A unified timing mechanism for multi-rate event and control processing. A language syntax that encourages left-to-right syntax and semantics within program statements. Precision timing: a strongly timed sample-synchronous timing model. Programs are dynamically compiled to ChucK virtual machine bytecode. A runtime environment that supports on-the-fly programming. The ChucK Operator (=>) that can be used in several ways to "chuck" any ordered flow of data from left to right. ChucK standard libraries provide: MIDI input and output. Open Sound Control support. HID connectivity. Unit generators (UGens) - e.g. oscillators, envelopes, synthesis toolkit ugens, filters, etc. Unit analyzers (UAnae) - blocks that perform analysis functions on audio signals and/or metadata input, and produce metadata analysis results as output - e.g. FFT/IFFT, Spectral Flux/Centroid, RMS, etc. Serial IO capabilities - e.g. Arduino. File IO capabilities. Code example The following is a simple ChucK program that generates sound and music: // our signal graph (patch) SinOsc f => dac; // set gain .3 => f.gain; // an array of pitch classes (in half steps) [ 0, 2, 4, 6, 9, 10 ] @=> int hi[]; // infinite loop while( true ) { // choose a note, shift registers, convert to frequency Std.mtof( 65 + Std.rand2(0,1) * 43 + hi[Std.rand2(0,hi.cap()-1)] ) =>
https://en.wikipedia.org/wiki/The%20Mad%20Dash
The Mad Dash is a television game show created by Sidney M. Cohen (who hosted the pilot episode) which first appeared in 1978 on Canada's CTV network and ran until 1981. The series proved to be a family favourite based on Canada's BBM ratings, and was also popular in parts of the northern United States, where CTV affiliates were available to Americans living near the Canada–United States border, both over the air and via cable. Pierre Lalonde was the MC, and Nick Hollinrake was the announcer for the show, which was taped at the studios of CFCF-TV in Montreal. This classic series is included in the collection of Canadian icons in the 2006 feature film Souvenir of Canada based on the book by Douglas Coupland. The series was later rerun on GameTV in Canada, from 2007 to 2010. Only a handful of episodes still exist, due to the then common practice known as wiping. The pilot episode is up on YouTube. Gameplay Two pairs of contestants, always composed of a man and a woman and one a returning champion, competed in a giant board game, laid out as a winding path across the studio floor. Each team chose one member to be the "Dasher," moving on the board, and one as the "Roller," answering questions at the host's podium. Both Dashers began on the Start space at one end of the board, with the matchups always being man against woman. The host asked a series of multiple-choice toss-up questions to the Rollers, and the first to buzz-in with the correct answer rolled an oversized six-sided die. A miss gave the opponent a chance to answer and steal the roll. Five of the die's faces were marked with pips to indicate numbers from 1 to 5, while the sixth face showed a dollar sign. If a number came up, the team's Dasher moved ahead that many spaces. The dollar sign added $10 to the team's bank and gave a free roll; if three consecutive dollar signs came up, the team was credited with $50. In order to win, a Dasher had to reach the Win space at the end of the path by exact count. If the Roller rolled a number higher than the number of spaces needed to reach Win, the Dasher had to use the excess spaces to back up from Win. (E.g. if the Dasher was one space away from Win, a roll of 3 would leave him/her two spaces away.) When a Dasher reached Win, that team kept all cash and prizes they had banked during the game. If they had banked nothing, the Roller rolled the die once and received either $100 for a dollar sign, or $10 times the number rolled. Teams remained on the show until they lost twice. For every seven games a team won, they were rewarded with the Lucky 7 jackpot, which consisted of $250 cash and an array of merchandise. Spaces Spaces on the board awarded cash or prizes, or affected the movement or gameplay in various ways. Some spaces remained constant, while others changed from one game to the next. Spaces took immediate effect, regardless of whether a Dasher landed on them while moving forward or backward. Colour designations The effects of landing
https://en.wikipedia.org/wiki/Cooking%20with%20the%20Wolfman
Cooking With the Wolfman is a cooking series first produced for the Aboriginal Peoples Television Network, an aboriginal television network in Canada. The series is created, executive produced, and hosted by chef David Wolfman, whose home community is the Xaxli'p First Nation in British Columbia, western Canada, and who serves as professor of culinary arts at George Brown College in Toronto. Also executive produced by Larry Pasemko, this series combines traditional North American Native cuisine with modern dishes. The series is now in its eighth season, produced by David Wolfman and directed for the last 5 seasons by Sidney M. Cohen. References External links Cooking With The Wolfman - Official Website Cooking With The Wolfman on APTN - Wolfman on Aboriginal Peoples television network 2000s Canadian cooking television series Aboriginal Peoples Television Network original programming Indigenous cuisine in Canada 2010s Canadian cooking television series First Nations television series
https://en.wikipedia.org/wiki/Three-address%20code
In computer science, three-address code (often abbreviated to TAC or 3AC) is an intermediate code used by optimizing compilers to aid in the implementation of code-improving transformations. Each TAC instruction has at most three operands and is typically a combination of assignment and a binary operator. For example, t1 := t2 + t3. The name derives from the use of three operands in these statements even though instructions with fewer operands may occur. Since three-address code is used as an intermediate language within compilers, the operands will most likely not be concrete memory addresses or processor registers, but rather symbolic addresses that will be translated into actual addresses during register allocation. It is also not uncommon that operand names are numbered sequentially since three-address code is typically generated by the compiler. A refinement of three-address code is A-normal form (ANF). Examples In three-address code, this would be broken down into several separate instructions. These instructions translate more easily to assembly language. It is also easier to detect common sub-expressions for shortening the code. In the following example, one calculation is composed of several smaller ones: # Calculate one solution to the quadratic equation. x = (-b + sqrt(b^2 - 4*a*c)) / (2*a) t1 := b * b t2 := 4 * a t3 := t2 * c t4 := t1 - t3 t5 := sqrt(t4) t6 := 0 - b t7 := t5 + t6 t8 := 2 * a t9 := t7 / t8 x := t9 Three-address code may have conditional and unconditional jumps and methods of accessing memory. It may also have methods of calling functions, or it may reduce these to jumps. In this way, three-address code may be useful in control-flow analysis. In the following C-like example, a loop stores the squares of the numbers between 0 and 9: ... for (i = 0; i < 10; ++i) { b[i] = i*i; } ... t1 := 0 ; initialize i L1: if t1 >= 10 goto L2 ; conditional jump t2 := t1 * t1 ; square of i t3 := t1 * 4 ; word-align address t4 := b + t3 ; address to store i*i *t4 := t2 ; store through pointer t1 := t1 + 1 ; increase i goto L1 ; repeat loop L2: See also Intermediate language Reduced instruction set computer Static single-assignment form (SSA) References Compiler construction Articles with example C code
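The sequential numbering of temporaries mentioned above is easy to demonstrate with a toy generator. The Python sketch below walks a parsed arithmetic expression and emits one three-address instruction per operator; it is an illustration of the idea, not a real compiler front end, and the t1, t2, ... naming scheme simply mirrors the examples in the article.

import ast
import itertools

def to_tac(expr: str) -> str:
    """Emit naive three-address code for a parenthesized arithmetic expression."""
    counter = itertools.count(1)
    lines = []

    def walk(node):
        if isinstance(node, ast.BinOp):
            left, right = walk(node.left), walk(node.right)
            op = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}[type(node.op)]
            temp = f"t{next(counter)}"              # temporaries are numbered sequentially
            lines.append(f"{temp} := {left} {op} {right}")
            return temp
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return repr(node.value)
        raise ValueError("unsupported expression node")

    walk(ast.parse(expr, mode="eval").body)
    return "\n".join(lines)

print(to_tac("a*b + c*d"))
# t1 := a * b
# t2 := c * d
# t3 := t1 + t2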
https://en.wikipedia.org/wiki/Onga%20District%2C%20Fukuoka
is a district located in Fukuoka Prefecture, Japan. According to 2005 Japanese Census data, the district has a population of 97,537 and a density of 1,045.73 persons per km2. The total area is 93.10 km2. Towns currently in this district Ashiya-machi(芦屋町) Mizumaki-machi(水巻町) Okagaki-machi(岡垣町) Onga-chō(遠賀町) The word roots "machi" and "chō" (both written as 町) mean "town." Onga is the only town in Fukuoka Prefecture for which 町 is pronounced "chō." For all other towns it is pronounced "machi." Towns and Villages formerly in this district Katsuki(香月町)- Merged to Yahata City (八幡市. described below) in 1955, currently Yahata Nishi-ku, Kitakyūshū(北九州市八幡西区) Kōjaku Village(上津役村)- Merged to Yahata City in 1937, currently Yahata Nishi-ku, Kitakyūshū Kurosaki(黒崎町)- Merged to Yahata City in 1926, currently Yahata Nishi-ku, Kitakyūshū Nakama(中間町)- Upgraded to City(中間市)status in 1958 Orio(折尾町)- Merged to Yahata City in 1944, currently Yahata Nishi-ku, Kitakyūshū Tobata(戸畑町)- Upgraded to City status in 1924, currently Tobata-ku, Kitakyūshū(北九州市戸畑区) Wakamatsu(若松町)- Upgraded to City status in 1914, currently Wakamatsu-ku, Kitakyūshū(北九州市若松区) Yahata(八幡町)- Upgraded to City(八幡市)status in 1917, currently Yahata Higashi-ku, Kitakyūshū(北九州市八幡東区) Districts in Fukuoka Prefecture
https://en.wikipedia.org/wiki/Serial%20Storage%20Architecture
Serial Storage Architecture (SSA) was a serial transport protocol used to attach disk drives to server computers. History SSA was invented by Ian Judd of IBM in 1990. IBM produced a number of successful products based upon this standard before it was overtaken by the more widely adopted Fibre Channel protocol. SSA was promoted as an open standard by the SSA Industry Association, unlike its predecessor, the first-generation Serial Disk Subsystem. A number of vendors, including IBM, Pathlight Technology and Vicom Systems, produced products based on SSA. It was also adopted as an American National Standards Institute (ANSI) X3T10.1 standard. SSA devices are logically SCSI devices and conform to all of the SCSI command protocols. SSA provides data protection for critical applications by helping to ensure that a single cable failure will not prevent access to data. All the components in a typical SSA subsystem are connected by bi-directional cabling. Data sent from the adaptor can travel in either direction around the loop to its destination. SSA detects interruptions in the loop and automatically reconfigures the system to help maintain connection while a link is restored. Up to 192 hot-swappable hard disk drives can be supported per system. Drives can be designated for use by an array in the event of hardware failure. Up to 32 separate RAID arrays can be supported per adaptor, and arrays can be mirrored across servers to provide cost-effective protection for critical applications. Furthermore, arrays can be sited up to 25 metres apart - connected by thin, low-cost copper cables - allowing subsystems to be located in secure, convenient locations, far from the server itself. SSA was deployed in server RAID environments, where it was capable of providing up to 80 Mbyte/s of data throughput, with sustained data rates as high as 60 Mbytes/s in non-RAID mode and 35 Mbytes/s in RAID mode. Link characteristics The copper cables used in SSA configurations are round bundles of two or four twisted pairs, up to 25 metres long and terminated with 9-pin micro-D connectors. Impedances are 75 ohm single-ended, and 150 ohm differential. For longer-distance connections, it is possible to use fiber-optic cables up to 10 km (6 mi) in length. Signals are differential TTL. The transmission capacity is 20 megabytes per second in each direction per channel, with up to two channels per cable. The transport layer protocol is non-return-to-zero, with 8B/10B encoding (10 bits per character). Higher protocol layers were based on the SCSI-3 standard. Products IBM 7133 Disk expansion enclosures IBM 2105 Versatile Storage Server (VSS) IBM 2105 Enterprise Storage Server (ESS) IBM 7190 SBUS SSA Adapter Pathlight Technology Streamline PCI Host Bus Adapter, SSA Data Pump, storage area network gateway See also List of device bandwidths References Serial buses SCSI IBM storage devices American National Standards Institute standards Computer storage buses
https://en.wikipedia.org/wiki/All%20Saints%20%28TV%20series%29
All Saints is an Australian medical drama television series that first screened on the Seven Network on 24 February 1998. Set in the fictional All Saints Western General Hospital, it focused on the staff of Ward 17 until its closure in 2004, which is when the focus changed and began following the staff of the Emergency Department. The show was produced by John Holmes alongside Jo Porter, MaryAnne Carroll and Di Drew. The final episode aired on 27 October 2009, completing its record-breaking 12-year run. Plot All Saints follows the lives of the staff at All Saints Western General Hospital. Until its closure in 2004, the show primarily focused on the staff in Ward 17. Known as the "garbage ward" as it took all the overflow from the other wards, Ward 17 was run by a compassionate nun, Sister Terri Sullivan (Georgie Parker). Her staff included her nurses Connor Costello (Jeremy Cumpston), Von Ryan (Judith McGrath), Bronwyn Craig (Libby Tanner), Jared Levine (Ben Tari) and Stephanie Markham (Kirrily White) and her ward clerk Jaz Hillerman (Sam Healy). Luke Forlano (Martin Lynes) and Peter Morrison (Andrew McKaige) were doctors who frequently worked with Terri and her staff. Ben Markham (Brian Vriends) was an ambulance officer who worked closely with Luke, despite their rivalry. Bronwyn left Ward 17 and became an ambulance officer at the end of 1998 but returned to the ward full-time at the end of season 3. Peter and Jaz were written out early on in the second season, which introduced Doctor Mitch Stevens (Erik Thomson), an old boyfriend of Terri's with whom she had unfinished business. More of the original cast left: Stephanie was killed in a car accident in Season 3 and Connor left in Season 4. The beginning of the fourth season gave Ben a new ambulance partner, Scott Zinenko (Conrad Coleby), and the concluding episodes introduced two new nurses, Paula Morgan (Jenni Baird) and Nelson Curtis (Paul Tassone). Long-serving doctor Charlotte Beaumont (Tammy Macintosh) made her debut in the fifth season. Cast Main cast Recurring cast Guest cast Production After the death of Dr. Mitch Stevens (Erik Thomson) and the departure of Bron Craig (Libby Tanner) in 2003, the producers decided to do something in response to a considerable drop in ratings and to prolong the life of the series. In February 2004, John Holmes told The Age journalist Debi Enker that All Saints would be undergoing "major surgery" when the focus shifted from Ward 17 to the Emergency Department. He also stated that while four familiar faces would be leaving, new characters would be introduced to fill the void. Holmes recalled a statement that he made in May 2003 in which he said, "we [myself and Seven script executive Bevan Lee] were seeing the scripts and watching episodes and we were feeling that there was a little bit of a sameness in it. We started to think, 'Don't know about this. Sixth year. Maybe we've had a few too many people through the door of Ward 17 on a trolley and had the 'Hi
https://en.wikipedia.org/wiki/Raw%20device
In computing, specifically in Unix and Unix-like operating systems, a raw device is a special kind of logical device associated with a character device file that allows a storage device such as a hard disk drive to be accessed directly, bypassing the operating system's caches and buffers (although the hardware caches might still be used). Applications like a database management system can use raw devices directly, enabling them to manage how data is cached, rather than deferring this task to the operating system. In FreeBSD, all device files are in fact raw devices. Support for non-raw devices was removed in FreeBSD 4.0 in order to simplify buffer management and increase scalability and performance. In the Linux kernel, raw devices were deprecated and scheduled for removal at one point, because the O_DIRECT flag can be used instead. However, the decision was later made to keep raw device support, since some software cannot use the O_DIRECT flag. Raw devices simply open block devices as if the O_DIRECT flag had been specified. Raw devices are character devices (major number 162). The first minor number (i.e. 0) is reserved as a control interface and is usually found at /dev/rawctl. A command-line utility called raw can be used to bind a raw device to an existing block device. These "existing block devices" may be disks or CD-ROMs/DVDs whose underlying interface can be anything supported by the Linux kernel (for example, IDE/ATA or SCSI). References Unix file system technology Linux kernel features
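For illustration, this Python sketch opens a block device with O_DIRECT, the cache-bypassing behavior that a raw-device binding provides implicitly. It is Linux-specific; /dev/sdb is a placeholder path, reading a device node requires privileges, and O_DIRECT imposes alignment rules that the page-aligned mmap buffer is used to satisfy.

import mmap
import os

# open the device bypassing the page cache (device path is an example; needs root)
fd = os.open("/dev/sdb", os.O_RDONLY | os.O_DIRECT)

buf = mmap.mmap(-1, 4096)   # O_DIRECT needs aligned buffers; anonymous mmap memory is page-aligned
os.readv(fd, [buf])         # read one 4 KiB block straight from the device
os.close(fd)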
https://en.wikipedia.org/wiki/Ion%20%28window%20manager%29
In Unix computing, Ion is a tiling and tabbing window manager for the X Window System. It is designed such that it is possible to manage windows using only a keyboard, without needing a mouse. It is the successor of PWM and is written by the same author, Tuomo Valkonen. Since the first release of Ion in the summer of 2000, similar alternative window management ideas have begun to appear in other new window managers: Larswm, ratpoison, StumpWM, wmii, xmonad and dwm. First versions of Ion were released under the Artistic License; Ion2 and the development versions of Ion3 were released under the GNU Lesser General Public License (LGPL). However, the first release candidate of Ion3 included a license change to a custom license based on the LGPL (specifically, modified versions must not use the name ion). Since version 2, Ion has been scriptable in Lua. As of September 17, 2009, Valkonen states he is unlikely to continue development of Ion by himself. The official home page went offline in early 2010. A fork, Notion (Not-ion), is being maintained. Alternatives The Notion (Not-ion) fork is actively maintained with packages available for the Linux distributions gentoo, Debian, Arch, SUSE and Fedora as well as NetBSD and Solaris (Solaris 10, OpenSolaris and OpenIndiana). Window managers similar to Ion include awesome, dwm, i3, larswm, and xmonad. Controversy Tuomo Valkonen, the author of Ion, has been at the center of several controversies concerning the licensing and distribution of his software, in particular the proclivity of major Linux and BSD distributions for making outdated development versions of Ion3 (the current unstable development branch) available as part of "frozen" software repositories. Often, such versions will include patches, such as for Xinerama or Xft support, both of which Valkonen disapproves of on professional and personal grounds and has had removed from the main source tree. Yet, such distribution would seem to imply that the patched version is the official Ion3 package maintained by Valkonen himself, which he sees as unacceptable. Valkonen has even recently become an outspoken critic of the entire free software and open-source movement (the "FOSS herd", as he refers to it) due to his perceived mistreatment at the hands of several major distributions, including Arch Linux, Debian, pkgsrc (NetBSD, DragonflyBSD), and FreeBSD. On April 28, 2007, Valkonen warned the Arch Linux maintainers of possible legal action because the (unofficial) Arch User Repository contained scripts to install Ion3 with patches he did not approve of. Later on he did the same with the pkgsrc maintainer of the NetBSD project and the ports maintainer of the FreeBSD project. As of December 12, 2007, the development branch of Ion, along with other software by Valkonen, was pulled from the FreeBSD ports tree, after the author filed a complaint about outdated development releases still being available. Any version of Ion may still be installed from source code o
https://en.wikipedia.org/wiki/Md5sum
md5sum is a computer program that calculates and verifies 128-bit MD5 hashes, as described in RFC 1321. The MD5 hash functions as a compact digital fingerprint of a file. As with all such hashing algorithms, there is theoretically an unlimited number of files that will have any given MD5 hash. However, it is very unlikely that any two non-identical files in the real world will have the same MD5 hash, unless they have been specifically created to have the same hash. The underlying MD5 algorithm is no longer deemed secure. Thus, while md5sum is well-suited for identifying known files in situations that are not security related, it should not be relied on if there is a chance that files have been purposefully and maliciously tampered with. In the latter case, the use of a newer hashing tool such as sha256sum is recommended. md5sum is used to verify the integrity of files, as virtually any change to a file will cause its MD5 hash to change. Most commonly, md5sum is used to verify that a file has not changed as a result of a faulty file transfer, a disk error or non-malicious meddling. The program is included in most Unix-like operating systems or compatibility layers such as Cygwin. The original C code was written by Ulrich Drepper and extracted from a 2001 release of . Examples All of the following files are assumed to be in the current directory. Create MD5 hash file hash.md5 $ md5sum filetohashA.txt filetohashB.txt filetohashC.txt > hash.md5 File produced File contains hash and filename pairs: $ cat hash.md5 595f44fec1e92a71d3e9e77456ba80d1 filetohashA.txt 71f920fa275127a7b60fa4d4d41432a3 filetohashB.txt 43c191bf6d6c3f263a8cd0efd4a058ab filetohashC.txt Please note: After the value there must be a space followed by either a second space (for text mode) or an asterisk (for binary mode); otherwise, the following error will result: no properly formatted MD5 checksum lines found. Many programs don't distinguish between the two modes, but some utils do. The file must also be UNIX line ending formatted; otherwise this will be seen: md5sum: WARNING: x listed files could not be read. dos2unix will convert it quickly if it is DOS/Windows formatted. Check MD5 $ md5sum -c hash.md5 filetohashA.txt: OK filetohashB.txt: OK filetohashC.txt: OK Check single MD5 $ echo 'D43F2404CA13E22594E5C8B04D3BBB81  filetohashA.txt' | md5sum -c filetohashA.txt: OK On non-GNU UNIX-like systems md5sum is specific to systems that use GNU coreutils or a clone such as BusyBox. On FreeBSD and OpenBSD the utilities are called md5, sha1, sha256, and sha512. These versions offer slightly different options and features. Additionally, FreeBSD offers the "SKEIN" family of message digests. On Windows systems Print MD5 hash of a file > certutil -hashfile <file> MD5 MD5 hash of <file>: <hash number> CertUtil: -hashfile command completed successfully. See also References External links Unix security-related software Unix file system-related software
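The hash-and-filename output format shown above is simple enough to reproduce in a few lines. Here is a minimal Python sketch of md5sum's default mode — an illustration of the file format, not the GNU implementation; the 64 KiB read size is an arbitrary choice.

import hashlib
import sys

def md5_of(path: str, bufsize: int = 1 << 16) -> str:
    """Hash a file incrementally so large files never need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

for name in sys.argv[1:]:
    print(f"{md5_of(name)}  {name}")   # two spaces = text mode, per the format described above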
https://en.wikipedia.org/wiki/Real-time%20Cmix
Real-Time Cmix (RTcmix) is one of the MUSIC-N family of computer music programming languages. RTcmix is descended from the MIX program developed by Paul Lansky at Princeton University in 1978 to perform algorithmic composition using digital audio soundfiles on an IBM 3031 mainframe computer. After synthesis functions were added, the program was renamed Cmix in the 1980s. Real-time capability was added by Brad Garton and David Topper in the mid-1990s, with support for TCP socket connectivity, interactive control of the scheduler, and object-oriented embedding of the synthesis engine into fully featured applications. Over the years Cmix/RTcmix has run on a variety of computer platforms and operating systems, including NeXT, Sun Microsystems, IRIX, Linux, and Mac OS X. It is, and has always been, an open-source project, differentiating it from commercial synthesizers and music software. It is currently developed by a group of computer music researchers at Princeton, Columbia University, and the University of Virginia. RTcmix has a number of unique (or highly unusual) features when compared with other synthesis and signal-processing languages. For one, it has a built-in MINC parser, which enables the user to write C-style code within the score file, extending its innate capability for algorithmic composition and making it closer in some respects to later music software such as SuperCollider and Max/MSP. It uses a single-script instruction file (the score file), and its synthesis and signal-processing routines (called instruments) exist as compiled shared libraries. This differs from MUSIC-N languages such as Csound, where the instruments exist in a second file written in a specification language that builds the routines out of simple building blocks (organized as opcodes or unit generators). RTcmix has similar functionality to Csound and other computer music languages, however, and their shared lineage means that scripts written for one will be extremely familiar-looking (if not immediately comprehensible) to users of the other. External links RTcmix home page at Columbia University Audio programming languages Columbia University University of Virginia
https://en.wikipedia.org/wiki/Princeton%20Sound%20Lab
The Princeton Sound Lab is a research laboratory in the Department of Computer Science at Princeton University, in collaboration with the Department of Music. The Sound Lab conducts research in a variety of areas in computer music, including physical modeling, audio analysis, audio synthesis, programming languages for audio and multimedia, interactive controller design, psychoacoustics, and real-time systems for composition and performance. External links Princeton University Audio engineering
https://en.wikipedia.org/wiki/MUSIC-N
MUSIC-N refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music (in actuality, sound) on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task. The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. However, CSIRAC produced sound by sending raw pulses to a speaker; it did not produce standard digital audio with PCM samples, as the MUSIC series of programs did. Design All MUSIC-N derivative programs have a (more-or-less) common design, made up of a library of functions built around simple signal-processing and synthesis routines (written as "opcodes" or unit generators). These simple opcodes are then assembled by the user into an instrument (usually through a text-based instruction file, but increasingly through a graphical interface) that defines a sound, which is then "played" by a second file (called the score) which specifies notes, durations, pitches, amplitudes, and other parameters relevant to the musical informatics of the piece. Some variants of the language merge the instrument and score, though most still distinguish between control-level functions (which operate on the music) and functions that run at the sampling rate of the audio being generated (which operate on the sound). A notable exception is ChucK, which unifies audio-rate and control-rate timing into a single framework, allowing arbitrarily fine time granularity and also one mechanism to manage both. This has the advantage of more flexible and readable code, at the cost of reduced system performance. MUSIC-N and derived software are mostly available as complete self-contained programs, which can have different types of user interfaces, from text- to GUI-based ones. In this respect, Csound and RTcmix have since evolved to work effectively as software libraries which can be accessed through a variety of frontends and programming languages, such as C, C++, Java, Python, Tcl, Lua, Lisp, Scheme, etc., as well as other music systems such as Pure Data, Max/MSP and plugin frameworks LADSPA and VST. MUSIC and its descendants embody a number of highly original (and to this day largely unchallenged) assumptions about the best way to create sound on a computer. Many of Mathews' implementations (such as the use of pre-calculated arrays for waveform and envelope storage, and a scheduler that runs in musical time rather than at audio rate) are the norm for most hardware and software synthesis and audio DSP systems today. Family MUSIC included a number of variants, e.g.: MUSIC was developed by
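The opcode/instrument/score split described above can be made concrete with a small sketch. The following Python fragment is purely illustrative — it is not any actual MUSIC-N dialect, and every name in it is invented for the example — but it mirrors the design: two trivial unit generators are combined into an instrument, and a separate score of note parameters drives the instrument to fill an output buffer.

import math

SR = 44100  # sampling rate in Hz

def oscil(freq, n):
    # Unit generator: n samples of a sine wave at freq Hz.
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def envelope(n):
    # Unit generator: a simple linear attack/decay amplitude envelope.
    half = max(n // 2, 1)
    return [i / half if i < half else (n - i) / half for i in range(n)]

def instrument(freq, amp, dur):
    # An "instrument" built by combining unit generators.
    n = int(dur * SR)
    return [amp * e * s for e, s in zip(envelope(n), oscil(freq, n))]

# The "score": (start time, duration, frequency, amplitude) for each note.
score = [(0.0, 0.5, 440.0, 0.3), (0.5, 0.5, 660.0, 0.3)]

out = [0.0] * SR  # one second of output
for start, dur, freq, amp in score:
    offset = int(start * SR)
    for i, sample in enumerate(instrument(freq, amp, dur)):
        out[offset + i] += sample  # mix each note into the output buffer

Real systems differ mainly in scale and efficiency: the unit generators run as compiled code, the scheduler runs in musical time, and the score is a separate text file rather than an in-line list.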
https://en.wikipedia.org/wiki/Asymmetric
Asymmetric may refer to: Asymmetry in geometry, chemistry, and physics Computing Asymmetric cryptography, in public-key cryptography Asymmetric digital subscriber line, Internet connectivity Asymmetric multiprocessing, in computer architecture Other Asymmetric relation, in set theory Asymmetric synthesis, in organic synthesis Asymmetric warfare, in modern war Asymmetric Publications, a video game company Asymmetry (Mallory Knox album), 2014 Asymmetry (Karnivool album) Asymmetry (population ethics) Asymmetry (novel), a 2018 novel by Lisa Halliday See also
https://en.wikipedia.org/wiki/Max%20%28software%29
Max, also known as Max/MSP/Jitter, is a visual programming language for music and multimedia developed and maintained by San Francisco-based software company Cycling '74. Over its more than thirty-year history, it has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The Max program is modular, with most routines existing as shared libraries. An application programming interface (API) allows third-party development of new routines (named external objects). Thus, Max has a large user base of programmers unaffiliated with Cycling '74 who enhance the software with commercial and non-commercial extensions to the program. Because of this extensible design, which simultaneously represents both the program's structure and its graphical user interface (GUI), Max has been described as the lingua franca for developing interactive music performance software. History 1980s: Miller Puckette began work on Max in 1985, at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris. Originally called The Patcher, this first version provided composers with a graphical interface for creating interactive computer music scores on the Macintosh. At this point in its development Max couldn't perform its own real-time sound synthesis in software, but instead sent control messages to external hardware synthesizers and samplers using MIDI or a similar protocol. Its earliest widely recognized use in composition was for Pluton, a 1988 piano and computer piece by Philippe Manoury; the software synchronized a computer to a piano and controlled a Sogitec 4X for audio processing. In 1989, IRCAM developed Max/FTS ("Faster Than Sound"), a version of Max ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT. Also known as "Audio Max", it would prove a forerunner to Max's MSP audio extensions, adding the ability to do real-time synthesis using an internal hardware digital signal processor (DSP) board. The same year, IRCAM licensed the software to Opcode Systems. 1990s: Opcode launched a commercial version named Max in 1990, developed and extended by David Zicarelli. However, by 1997, Opcode was considering cancelling it. Instead, Zicarelli acquired the publishing rights and founded a new company, Cycling '74, to continue commercial development. The timing was fortunate, as Opcode was acquired by Gibson Guitar in 1998 and ended operations in 1999. IRCAM's in-house Max development was also winding down; the last version produced there was jMax, a direct descendant of Max/FTS developed in 1998 for Silicon Graphics (SGI) and later for Linux systems. It used Java for its graphical interface and C for its real-time backend, and was eventually released as open-source software. Meanwhile, Puckette had independently released a fully redesigned open-source composition tool named Pure Data (Pd) in 1996, which, despite some underlying engineering differences from the I
https://en.wikipedia.org/wiki/Max%20Mathews
Max Vernon Mathews (November 13, 1926, in Columbus, Nebraska, US – April 21, 2011, in San Francisco, CA, US) was an American pioneer of computer music. Biography Max Vernon Mathews was born in Columbus, Nebraska, to two science schoolteachers. His father taught physics, chemistry and biology at Peru High School in Nebraska, where he was also the principal, and allowed his son to learn and play in the physics, biology and chemistry laboratories, where Mathews enjoyed building everything from motors to mercury barometers. At the age of 9, when students are usually introduced to algebra, he began studying the subject on his own with a few other students; most of the local population were farmers, and their sons were not interested in learning algebra, which had little use in their everyday work. He studied calculus in the same way, but never graduated from high school. After a period as a radar repairman in the navy, where he fell in love with electronics, Mathews decided to study electrical engineering at the California Institute of Technology and the Massachusetts Institute of Technology, receiving a Sc.D. in 1954. Working at Bell Labs, Mathews wrote MUSIC, the first widely used program for sound generation, in 1957. For the rest of the century, he continued as a leader in digital audio research, synthesis, and human-computer interaction as it pertains to music performance. In 1968, Mathews and L. Rosler developed Graphic 1, an interactive graphical sound system on which one could draw figures using a light pen that would be converted into sound, simplifying the process of composing computer-generated music. In 1970, Mathews and F. R. Moore developed the GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment) system, the first fully developed music synthesis system for interactive composition and real-time performance, using 3C/Honeywell DDP-24 (or DDP-224) minicomputers. It used a CRT display to simplify the management of music synthesis in real time, 12-bit D/A conversion for real-time sound playback, an interface for analog devices, and several controllers, including a musical keyboard, knobs, and rotating joysticks, to capture real-time performance. Although MUSIC was not the first attempt to generate sound with a computer (an Australian CSIRAC computer played tunes as early as 1951), Mathews fathered generations of digital music tools. He described his work in parental terms, in the following excerpt from "Horizons in Computer Music", March 8–9, 1997, Indiana University: In 1961, Mathews arranged the accompaniment of the song "Daisy Bell" for an uncanny performance by computer-synthesized human voice, using technology developed by John Kelly, Carol Lochbaum, Joan Miller and Lou Gerstman of Bell Laboratories. Author Arthur C. Clarke was coincidentally visiting friend and colleague John Pierce at the Bell Labs Murray Hill facility at the time of this remarkable speech synthesis de
https://en.wikipedia.org/wiki/Algorithmic%20composition
Algorithmic composition is the technique of using algorithms to create music. Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries; the procedures used to plot voice-leading in Western counterpoint, for example, can often be reduced to algorithmic determinacy. The term can be used to describe music-generating techniques that run without ongoing human intervention, for example through the introduction of chance procedures. However, through live coding and other interactive interfaces, a fully human-centric approach to algorithmic composition is possible. Some algorithms or data that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have been used as source materials. Models for algorithmic composition Compositional algorithms are usually classified by the specific programming techniques they use. The results of the process can then be divided into 1) music composed by computer and 2) music composed with the aid of computer. Music may be considered composed by computer when the algorithm is able to make choices of its own during the creation process. Another way to sort compositional algorithms is to examine the results of their compositional processes. Algorithms can either 1) provide notational information (sheet music or MIDI) for other instruments or 2) provide an independent way of sound synthesis (playing the composition by itself). There are also algorithms creating both notational data and sound synthesis. One way to categorize compositional algorithms is by their structure and the way they process data, as seen in this model of partly overlapping types:
translational models
mathematical models
knowledge-based systems
grammars
optimization approaches
evolutionary methods
systems which learn
hybrid systems
Translational models This is an approach to music synthesis that involves "translating" information from an existing non-musical medium into a new sound. The translation can be either rule-based or stochastic. For example, when translating a picture into sound, a JPEG image of a horizontal line may be interpreted in sound as a constant pitch, while an upwards-slanted line may be an ascending scale. Oftentimes, the software seeks to extract concepts or metaphors from the medium (such as height or sentiment) and apply the extracted information to generate songs using the ways music theory typically represents those concepts. Another example is the translation of text into music, which can approach composition by extracting sentiment (positive or negative) from the text using machine learning methods like sentiment analysis and representing that sentiment in terms of chord quality, such as minor (sad) or major (happy) chords, in the musical output generated. Mathematical model
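As a concrete illustration of the rule-based translational model discussed above, the sketch below maps a series of non-musical numbers (census-like figures, say) onto pitches in a C major scale, with larger values becoming higher notes. The data, the scale, and the mapping rule are all invented for the example; a real system would add rhythm, dynamics, and the kind of concept extraction described above.

# MIDI note numbers for one octave of C major, C4 up to C5.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def translate(data):
    # Rule-based mapping: rescale each datum into an index on the scale.
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    return [C_MAJOR[round((x - lo) / span * (len(C_MAJOR) - 1))] for x in data]

print(translate([1200, 3400, 2100, 4800, 900, 4100]))
# -> [62, 67, 64, 72, 60, 71]

A stochastic variant could instead sample pitches from a probability distribution weighted by the data — exactly the rule-based/stochastic split the article mentions.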
https://en.wikipedia.org/wiki/MINC
MINC ("MINC is not C") is a data specification language written in the mid-1980s by a Princeton University graduate student named Lars Graf. This kind of naming is known as a "recursive acronym". It contains many (though not all) of the syntactical capabilities of the C programming language, and can be used to implement simple procedural programs that can be executed by a runtime parser (that is to say, MINC does not need to be compiled in any way). MINC continues to be used only in a handful of programs written in the 1980s (e.g. Real-Time Cmix). It has been for all intents and purposes superseded by modern scripting languages such as Perl, Python, and Tcl. A controversial aspect of the language is whether it is pronounced "mink" or "min-see". External links MINC tutorial at CMIX home page Scripting languages
https://en.wikipedia.org/wiki/DEC%20Text%20Processing%20Utility
The DEC Text Processing Utility (or DECTPU) is a dedicated programming language developed by Digital Equipment Corporation (DEC) to easily create multi-functional text editors. TPU is part of OpenVMS. It can be used on a terminal, a console, or on a graphical system like DECwindows. Functionality TPU provides text buffer management APIs in concert with window management APIs targeted at the VT100 line of terminals. This allows split-screen windows with scrolling, and hence multiple views of the same buffer content. Key mapping APIs are also provided, allowing a wide range of functionality for editing text. The keyboard mapping can easily be adapted by the administrator or the user. Users could write their own specialized editors, e.g. to translate text or short (error) messages into multiple natural languages in a synchronised small text window. The text editor is callable, so small text editors can be built into specific applications, e.g. a simple mail client. Output from applications can be redirected into a text window using inter-process communication; one could therefore have a web service return its results into a text buffer. Implementations
EVE (Extensible Versatile Editor), the first TPU-based editor, delivered with VAX/VMS by mid-1985
In 1986, DEC developed a new version of EDT written in TPU
Language-Sensitive Editor, part of VAXset (software development platform)
A version of the vi editor, called TPUVI or VITPU, was created by Gregg Wonderly at Oklahoma State University. VITPU is still available via the DECUS archives online.
References Text-oriented programming languages OpenVMS software
https://en.wikipedia.org/wiki/PicoBSD
PicoBSD is a discontinued single-floppy disk version of FreeBSD, one of the BSD operating system descendants. In its different variations, PicoBSD allows one to have secure dial-up Internet access, a small diskless router, or a dial-in server, all on one standard floppy disk. It runs on a minimum 386SX CPU with of RAM (no hard drive required). PicoBSD is freely available under the BSD license. The main developer was Andrzej Bialecki, and the latest version is 0.42. Dinesh Nair then backported the PicoBSD build scripts to FreeBSD 2.2.5, allowing the addition of a few more binaries in the dial-up flavor thanks to FreeBSD 2.2.5's smaller binary executable format. With the flexibility that FreeBSD provides, along with the full source code being available, one can build a small installation performing various tasks, including:
Diskless workstation
Portable dial-up access solution
Custom demo-disk
Embedded controller (flash or EEPROM)
Firewall
Communication server
Replacement for a commercial router
Diskless home automation system
And many others
PicoBSD is now included in the FreeBSD source files, where it is used by embedded system developers to create their own system images. It can be used with recent versions of FreeBSD and is located in /usr/src/release/picobsd/. In FreeBSD 5, it was superseded by the NanoBSD framework. References See also Comparison of BSD operating systems FreeBSD Lightweight Unix-like systems