id (int64, 39–79M) | url (string, 31–227 chars) | text (string, 6–334k chars) | source (string, 1–150 chars, nullable) | categories (list, 1–6 items) | token_count (int64, 3–71.8k) | subcategories (list, 0–30 items) |
|---|---|---|---|---|---|---|
328,000 | https://en.wikipedia.org/wiki/Advil | Advil is primarily a brand of ibuprofen (a pain reliever in the nonsteroidal anti-inflammatory drug category). Advil has been called a "megabrand" because it offers various "products for a wide range of pain, head cold, and sleep problems."
History
The brand first entered the American market in 1984 through Whitehall (itself a division of Wyeth, which was purchased by Pfizer in 2009), the same year ibuprofen gained Food and Drug Administration (FDA) approval for over-the-counter (OTC) sales in the United States (it had been available via prescription since 1974). Within ten years of entering the market, it outsold Bayer Aspirin and was a fierce competitor to Tylenol (primarily a brand of acetaminophen). In the mid-1990s, for example, it held 13% of the multibillion-dollar over-the-counter American market for analgesics.
Varieties
In 2023, there were 23 varieties of Advil available on the U.S. market, including:
Advil
Advil Liqui-Gels
Advil Migraine Liqui-Gels
Infant's Advil
Pediatric Advil
Junior Strength Advil
Children's Advil
Flavored Children's Advil
Advil Dual Action With Acetaminophen (Ibuprofen/acetaminophen)
Advil PM (with Diphenhydramine)
Advil Cold And Sinus (with Pseudoephedrine)
Advil Congestion Relief (with Phenylephrine)
Advil Allergy Sinus (with Chlorpheniramine and Pseudoephedrine)
Advil Allergy And Congestion Relief (with Chlorpheniramine and Phenylephrine)
Advil Multi-Symptom Cold & Flu (with Chlorpheniramine and Phenylephrine)
Children's Advil Cold (with Pseudoephedrine)
Children's Advil Allergy Sinus (with Chlorpheniramine and Pseudoephedrine)
Marketing
Marketing campaigns for the brand (some including celebrities like Regis Philbin) have pushed slogans such as "Take Action. Take Advil." and have been presented under the premise of "True Advil Stories"; the brand has also been involved in sponsorship deals such as with Major League Pickleball.
See also
Haleon
Wyeth Consumer Healthcare Products
Ibuprofen
References
External links
Drug brand names
Analgesics
Products introduced in 1984 | Advil | [
"Chemistry"
] | 518 | [
"Pharmacology",
"Drug brand names"
] |
328,029 | https://en.wikipedia.org/wiki/Seagate%20Technology | Seagate Technology Holdings plc is an American data storage company. It was incorporated in 1978 as Shugart Technology and commenced business in 1979. Since 2010, the company has been incorporated in Dublin, Ireland, with operational headquarters in Fremont, California, United States.
Seagate developed the first 5.25-inch hard disk drive (HDD), the 5-megabyte ST-506, in 1980. It was a major supplier in the microcomputer market during the 1980s, especially after the introduction of the IBM XT in 1983. Much of its growth has come through the acquisition of competitors. In 1989, Seagate acquired Control Data Corporation's Imprimis division, the makers of CDC's HDD products. Seagate acquired Conner Peripherals in 1996, Maxtor in 2006, and Samsung's HDD business in 2011. Today, Seagate, along with its competitor Western Digital, dominates the HDD market.
History
Founding as Shugart Technology
Seagate Technology (then called Shugart Technology) was incorporated on November 1, 1978, and commenced operations with co-founders Al Shugart, Tom Mitchell, Doug Mahon, Finis Conner, and Syed Iftikar in October 1979. The company came into being when Conner approached Shugart with the idea of starting a new company to develop 5.25-inch HDDs, a segment Conner predicted would drive a coming boom in the disk drive market. The name was changed to Seagate Technology to avoid a lawsuit from Xerox's subsidiary Shugart Associates (also founded by Shugart).
Early history and Tom Mitchell era
The company's first product, the ST-506, with a storage capacity of 5 megabytes (MB), was released in 1980. It was the first hard disk to fit the 5.25-inch form factor of the Shugart mini-floppy drive. It used Modified Frequency Modulation (MFM) encoding and was later released in a 10 MB version, the ST-412. With this, Seagate secured a contract as a major OEM supplier for the IBM XT, IBM's first personal computer to contain a hard disk. The large volumes of units sold to IBM fueled Seagate's early growth. In its first year, Seagate shipped $10 million worth of units to consumers. By 1983, the company shipped over 200,000 units for revenues of $110 million.
In 1983, Al Shugart was replaced as president by then chief operating officer, Tom Mitchell, in order to move forward with corporate restructuring in the face of a changing market. Shugart continued to oversee corporate planning. By this point, the company had a 45% market share of the single-user hard drive market, with IBM purchasing 60% of the total business Seagate was doing at the time.
In 1989, Seagate acquired Imprimis Technology, the disk storage division of Control Data Corporation, resulting in a combined market share of 43%. Seagate benefited from Imprimis' head technology and reputation while Imprimis gained access to Seagate's lower component and manufacturing costs.
Second Al Shugart era (1990s)
In September 1991, Tom Mitchell resigned as president under pressure from the board of directors, with Al Shugart reassuming presidency of the company. Shugart refocused the company on its more lucrative markets, and on mainframe drives instead of external drives. He also pulled away from outsourcing component production overseas. This allowed Seagate to better keep up with demand for PCs, which increased extremely rapidly in 1993 across the market. This included a domestic partnership with Corning Inc., which began using a new glass-ceramic compound to manufacture disk substrates. In 1991, Seagate also introduced the Barracuda HDD, the industry's first hard disk with a 7,200 RPM spindle speed.
In May 1993, Seagate became the first company to cumulatively ship 50 million HDDs over its firm's history. The following year, Seagate Technology Inc. moved from the Nasdaq stock exchange to the New York Stock Exchange, trading under the ticker symbol SEG. Upon leaving, the company was the 17th-largest company in terms of trading volume on the Nasdaq exchange. In 1996, Seagate merged with Conner Peripherals to form the world's largest independent hard-drive manufacturer. Following the merger, the company began consolidating components and production methods across its chain of factories in order to streamline how products were built between plants.
In May 1995, Seagate Technology acquired Frye Computer Systems, a software company based in Boston, Massachusetts. This company developed the LAN monitoring software kit The Frye Utilities for Networks, which won PC Magazine's "Editor's Choice" award in 1995.
In 1996, Seagate introduced the industry's first hard disk with a 10,000 RPM spindle speed, the Cheetah 4LP. By 2000, the line reached 15,000 RPM with the release of the Cheetah X15. In May 1997, the High Court of Justice in England awarded Amstrad PLC $93 million in a lawsuit over reportedly faulty disk drives Seagate sold to Amstrad, a British manufacturer and marketer of personal computers. That year, Seagate also introduced the first Fibre Channel interface hard drive.
In 1997, Seagate experienced a downturn, along with the rest of the industry. In July 1998, Shugart resigned his positions with the company. Stephen J. "Steve" Luczo became the new chief executive officer, also joining the board of directors.
First Steve Luczo era (1998–2004)
Luczo joined Seagate Technology in October 1993 as Senior Vice President of Corporate Development. In March 1995, he was appointed Executive Vice President of Corporate Development and chief operating officer of Seagate Software Holdings. In 1996, Luczo led the Seagate acquisition of Conner Peripherals, creating the world's largest disk drive manufacturer and completing the company's strategy of vertical integration and ownership of key disk drive components. In September 1997, he was promoted to the positions of President and Chief Operating Officer.
In 1998, the board appointed Luczo as the new CEO and Seagate launched a restructuring effort. Historically, Seagate's design centers had been organized around function, with one product line manager in charge of tracking the progress of all programs. In 1998, Luczo and CTO Tom Porter called for an organizational redesign of design centers into core teams focused on individual projects, in order to meet the corporate objective of faster time to market. As the CEO, Luczo decided to increase investment in technology and to diversify into faster-growing, higher-margin businesses. He decided to implement a highly automated platform strategy for manufacturing. Between 1997 and 2004, Seagate reduced its headcount from approximately 111,000 to approximately 50,000, reduced its manufacturing factories from 24 to 11, and reduced design centers from seven to three. During this period, Seagate's output increased from approximately 9 million drives per quarter to approximately 20 million drives per quarter.
In 1998, the company's Seagate Research facility was also established in Pittsburgh, a $30 million investment that focused on future technologies and prototypes. Technology developed by the facility would include devices like the hard disk drive for Microsoft's first Xbox.
In 1999, Seagate shipped its 250 millionth hard drive.
In May 1999, Seagate sold its Network & Storage Management Group (NSMG) to Veritas Software in return for 155 million shares of Veritas' stock. With this deal, Seagate became the largest shareholder in Veritas, with an ownership stake of more than 40%.
Re-privatization (2000)
In 2000, Seagate became a private company again. Luczo led a management buyout of Seagate, believing that Seagate needed to make significant capital investments to achieve its goals. He decided to turn the company private, since disk drive producers had a hard time obtaining capital for long-term projects. The company was incorporated in Grand Cayman and stayed private until it re-entered the public market in 2002.
After two failed attempts to increase Seagate's stock price and unlock its value from Veritas, Seagate's board of directors authorized Luczo to seek advice from Morgan Stanley in October 1999. In early November 1999, Morgan Stanley arranged a meeting between Seagate executives and representatives of Silver Lake Partners to discuss a major restructuring of the company. On November 22, 2000, Seagate management, Veritas Software, and an investor group led by Silver Lake closed a complex deal that privatized Seagate. At the time, this was the largest buyout ever of a technology company. The total deal, worth about $20 billion, included the sale of its disk-drive operations for $2 billion to an investor group led by Silver Lake Partners. The goal of the deal was to unlock the value of the 33% ownership stake Seagate had in Veritas, which was valued at around $33 billion even though Seagate's own market capitalization was only $15 billion.
Following the relocation to the Cayman Islands in 2000, the legal name of the holding company was simplified to Seagate Technology. The de facto operating company, incorporated in Delaware, became a limited liability company (LLC) named Seagate Technology LLC and continues to operate as such.
Both the Stanford Graduate School of Business and the Harvard Business School have written multiple case studies on the Seagate buyout and turnaround. In addition, several leading management books cite the Seagate turnaround.
Re-emerging as a public company (2002–2010)
Luczo became the chairman of the board of directors of Seagate Technology on June 19, 2002. In 2003, he accepted an invitation from the New York Stock Exchange to join its Listed Companies Advisory Committee.
In 2003, Seagate re-entered the HDD market for notebook computers and provided the 1-inch hard drives for the first iPods. This led to a trend of digital devices being created with progressively more storage, especially in cameras and music devices. In September 2004, The New York Times called Seagate "the nation's top maker of hard drives used to store data in computers", after the company forecast quarterly revenue above Wall Street estimates.
In 2004, the company separated the roles of chairman and CEO. Luczo resigned as the Seagate CEO on July 3, but retained his position as chairman of the board of directors. Bill Watkins became CEO.
At the beginning of 2006, Forbes magazine named Seagate its Company of the Year as the best managed company in the United States. Forbes wrote that, "Seagate is riding the world's gadget boom. Its 1-inch drives are the archives for cameras and MP3 players." It also credited Seagate as being the company that "sparked the personal computer revolution 25 years ago with the first 5.25-inch hard drive for the PC".
In April 2006, Seagate announced the first professional Direct-To-Disc digital cinema video camera, aimed at the independent filmmaking market. This technology used Seagate's HDDs.
In 2007, Seagate created the hybrid drive concept.
In 2007, Seagate announced the "phase out" of parallel ATA hard disk drives by early 2008.
In April 2008, Seagate was the first to ship one billion HDDs. According to CNET, it took 17 years to ship the first 100 million and 15 years to ship the next 900 million. In January 2009, Bill Watkins was replaced as CEO.
On August 27, 2008, Seagate's stock listing was transferred to the NASDAQ Global Select Market (NASDAQ-GS large cap), trading under the same ticker symbol, STX.
Second Steve Luczo era (2009–2017)
In January 2009, Luczo was asked by the Seagate Board to return as CEO of the company, replacing Bill Watkins. As of the date of his hiring, Seagate was losing market share, facing rapidly declining revenues, was lagging in product delivery with high manufacturing costs, had an excessive operating expense structure, and had $2 billion of debt that was due within 2 years. The company's market value was less than $1.5 billion.
Luczo revamped the entire management team, and quickly reorganized the company back to a functional structure after a failed attempt to organize by business units in 2007. Led by a new Head of Sales (Dave Mosley), a new head of Operations and Development (Bob Whitmore), and a new CFO (Pat O'Malley), the team worked to address the multitude of challenges that it faced. By the end of 2009, the company had refinanced its debt and had begun to turn around its operations. In 2010, Seagate reinstated its dividend and began a stock buyback plan.
In 2010, Seagate announced that it was moving its headquarters and most of its staff from Scotts Valley to Cupertino, California.
In June 2010, Seagate released the world's first 3 TB hard drive. That September, Seagate released the first portable 1.5 TB hard drive.
In July 2011, the company changed its country of incorporation from the Cayman Islands to Ireland. The holding company then became a public limited company (PLC) named Seagate Technology plc.
In December 2011, Seagate acquired Samsung's HDD business. Seagate also acquired a license to use the Samsung trademark on HDD products for five years; after the license expired, Seagate rebranded all Samsung-branded external HDD products as Maxtor, a company that Seagate had acquired in 2006. After the acquisition, internal HDDs from former Samsung factories were initially branded as both Samsung and Seagate, and later exclusively as Seagate.
In 2012, Seagate continued to raise its dividend and repurchased nearly 30% of the company's outstanding shares. In the fiscal year ending June 2012, Seagate achieved record revenues, record gross margins, and record profits, and regained its position as the largest disk drive manufacturer. Its market value had increased to over $14 billion. In March 2012, Seagate demonstrated the first hard drive with an areal density of 1 terabit per square inch, with the possibility of scaling up to 60 TB drives by 2030.
In 2013, Seagate was the first HDD company to begin shipment of shingled magnetic recording drives, announcing in September that they had already shipped over 1 million such drives.
In February 2016, a class action lawsuit was filed against Seagate concerning defective hard drives.
In August 2016, Seagate demonstrated its 60 TB SSD—claimed to be "the largest SSD ever demonstrated"—at the Flash Memory Summit in Santa Clara.
In January 2017, Seagate announced the shutdown of one of its largest HDD assembly plants, located in Suzhou, China. The plant became part of Seagate after Maxtor's acquisition in 2006; Maxtor started producing hard drives in Suzhou in 2004.
Dave Mosley era (2017–present)
On July 25, 2017, David "Dave" Mosley was appointed CEO, effective October 1, 2017, after longtime CEO Steve Luczo stepped down to become executive chairman.
In June 2018, Seagate was honored at the 14th Annual Manufacturing Leadership Awards Gala in Huntington Beach, California. In 2018, Seagate invested in Series A and B of Ripple, an enterprise blockchain company.
In 2019, Seagate invested £47 million in a research and development project at its factory in Derry, Northern Ireland.
In 2020, Seagate announced that it was moving its headquarters and most of its staff from Cupertino to Fremont, California. From May to June that year, the company laid off 500 employees across 12 countries as part of a push for better operational efficiency. Seagate also planned to consolidate more resources, including combining facilities in Minnesota.
In September 2020, Seagate announced it had entered the object storage business and introduced CORTX, an open-source object storage software, Lyve Rack, a reference architecture based on CORTX, and a corresponding developer community. The community is a group of open-source researchers and developers working to advance mass-capacity object storage. CORTX open-source software is hosted for download and collaboration on GitHub.
As of May 18, 2021, the new Irish public limited company Seagate Technology Holdings plc became the publicly traded parent company of Seagate, replacing "Seagate Technology plc."
In November 2021, at the Open Compute Summit, Seagate demonstrated the industry's first HDD with a non-volatile memory express (NVMe) interface. This was unusual because HDDs operate far below the capabilities of the NVMe interface, which is usually associated with faster storage media like SSDs.
In May 2022, Seagate presented and demonstrated their LiDAR system at the Autosens conference in Detroit. In February 2023, it divested its LiDAR division to Luminar Technologies.
In October 2022, Seagate announced a restructuring plan to reduce headcount by 8%, equivalent to approximately 3,000 jobs.
On January 17, 2024, Seagate announced the release of the first 30 TB HDD with the Exos Mozaic 3+ HDD series. The series utilizes Heat-Assisted Magnetic Recording (HAMR) and Shingled Magnetic Recording (SMR) technology with an areal density of 3 TB per platter. The 30 TB Mozaic 3+ drive uses ten platters, only one more than the 16 TB Exos X16. Seagate plans to ramp capacity of its HAMR drives, eventually reaching an areal density of 5 TB per platter by 2028. Seagate claims the new drives will have a cheaper cost per TB compared to existing drive models. The series is initially only available to enterprise markets, but is expected to be available to end users by mid-2025.
Products
Internal SSD and HDD storage
Seagate offers various internal solid-state drive (SSD) and hard disk drive (HDD) products that are classed by name for their intended usage:
Barracuda – Seagate's most popular and inexpensive general-usage SSDs and HDDs, meant for devices such as computers, laptops, gaming consoles, and set-top boxes. The Barracuda HDD series has spindle speeds of 5,400–7,200 RPM, storage capacities of 500 GB–8 TB, and maximum transfer speeds up to 190 MB/s. The Barracuda SSDs come with either a SATA or NVMe interface, storage sizes from 240 GB–2 TB, and read speeds up to 560 MB/s for SATA and 3,400 MB/s for NVMe.
Firecuda – For gaming usage in computers, laptops, and gaming consoles. Seagate offers internal and external Firecuda SSDs and HDDs with a SATA, NVMe, or USB-C interface and storage capacities between 250 GB and 16 TB.
Ironwolf – NAS device storage drives, with HDD storage capacities of 1–20 TB, regular or helium drive type, SATA interface, and speeds up to 260 MB/s. Ironwolf SSDs have capacities of 240 GB–4 TB, a SATA or NVMe interface, and speeds up to 560 MB/s for SATA and 3,150 MB/s for NVMe.
Skyhawk – Surveillance system recording drives for use in devices like DVRs or NVRs that come in 2 series. The first series, the Skyhawk AI, has capacities of 8–18 TB, regular or helium drive type, CMR recording technology, and speeds up to 260 MB/s. The regular Skyhawk series has capacities of 1–8 TB, regular drive type, CMR or SMR recording technology, and speeds up to 210 MB/s.
Exos – Enterprise drives for usage in datacenters, with three series offered:
Exos E – capacities of 300 GB–8 TB, SAS or SATA interface, and speeds up to 300 MB/s.
Exos X – capacities of 12–20 TB, helium drive type, SAS or SATA interface, and speeds up to 524 MB/s on certain models.
Exos Mozaic 3+ – 30 TB+ capacity, 7,200 RPM spindle speed, and 512 MB cache. It was introduced in 2023 with less storage than the Exos X. The Mozaic 3+ will be sold not only to enterprise customers but also to end users, and no specialised hardware is needed to read the drives.
Nytro – A series of enterprise Serial Attached SCSI (SAS) solid-state drives, with capacities up to 15 TB.
External SSD and HDD storage
Seagate offers various external storage product series for computers and laptops:
Seagate Basic External HDDs
Backup Plus External HDDs
Backup Plus Hub External HDDs
Photo Drive External HDDs
Barracuda Fast External SSDs
Seagate Expansion External SSD and HDDs
One Touch External SSD and HDDs
Ultra Touch External SSD and HDDs
Gaming console storage
Seagate has partnered with both PlayStation and Xbox to offer various storage devices for the PlayStation 4, Xbox One and Xbox Series X/S. For the PlayStation 4 and Xbox One, Seagate offers the "Game Drive", which is a 2–4 TB USB 3.0 external hard drive. Additionally, for the Xbox One series, Seagate now offers a "New Game Drive" in capacities of 2–5 TB and a "Game Drive Hub" which has a capacity of up to 8 TB, both of which also use the USB 3.0 interface. During the development of the new Xbox Series X/S, Seagate partnered with Xbox to make a proprietary SSD expansion card that is inserted into the back of the console, available in a 1 TB capacity, with a 2 TB version planned for later.
Lyve Cloud
Lyve Cloud is a cloud-based storage service first offered by Seagate in February 2021. It was developed in partnership with Equinix and is intended for enterprise usage.
Data storage systems
Seagate offers various data storage systems for enterprises such as "compute & storage convergence platforms" and flash, hybrid, and disk arrays.
In June 2021, Seagate introduced the Exos CORVAULT, a 4U block storage system with dual storage controllers powered by Seagate's own VelosCT chip. The storage array uses Advanced Distributed Autonomic Protection Technology (ADAPT) and Autonomous Drive Regeneration (ADR) to automate maintenance and thus reduce e-waste.
Legacy product lines
Some of Seagate's old product lines that are no longer produced include:
U-Series – Lower performance, cheaper desktop HDDs introduced in the late 1990s.
Medalist – A line of mainstream HDDs for use in desktops and more. Later replaced by the Barracuda series.
Cheetah – High speed & performance HDDs with speeds of 10,000–15,000 RPM. Discontinued in the early 2000s.
Momentus – High performance laptop HDDs.
Decathlon – High performance desktop hard disk drives that were popular but expensive at the time.
Corporate affairs
Seagate initially traded as a public company on the Nasdaq stock exchange under the ticker symbol SGAT. In 1994, it moved to the New York Stock Exchange as SEG. In 2000, Seagate incorporated in the Cayman Islands in order to reduce income taxes, and the company was taken private by an investment group composed of Seagate management, Silver Lake Partners, Texas Pacific Group, and others in a three-way merger-spinoff with Veritas Software: Veritas merged with Seagate, which was bought by the investment group. Veritas was then immediately spun off to shareholders, gaining rights to the Seagate Software Network and Storage Management Group (with products such as Backup Exec), as well as Seagate's shares in SanDisk and Dragon Systems. The Seagate Software Information Management Group was renamed Crystal Decisions in May 2001. In December 2002, Seagate re-entered the public market on the New York Stock Exchange as STX.
By November 2023, 89% of the stock was controlled by institutional investors, with a controlling stake of 52% held by 8 investors. The largest stakes, held by the asset managers Vanguard Group, Inc., Sanders Capital, LLC, and BlackRock, Inc., account for 11%, 7.2%, and 7.2% of shares outstanding, respectively.
Partnerships and acquisitions
Finis Conner left Seagate in early 1985 and founded Conner Peripherals, which originally specialized in small-form-factor drives for portable computers. Conner Peripherals also entered the tape drive business with its purchase of Archive Corporation. After ten years as an independent company, Conner Peripherals was acquired by Seagate in a 1996 merger.
In 2005, Seagate acquired Mirra Inc., a producer of personal servers for data recovery. It also acquired ActionFront Data Recovery Labs, which provides data recovery services.
In 2006, Seagate acquired Maxtor in an all-stock deal worth $1.9 billion, and afterwards continued to market the separate Maxtor brand. The following year, Seagate acquired EVault and MetaLINCS, later rebranded as i365.
In 2014, Seagate acquired Xyratex, a storage systems company, for approximately $375 million. The same year, it acquired LSI's enterprise PCIe flash and SSD controller products, and its engineering capabilities, from Avago for $450 million.
In October 2015, Seagate acquired Dot Hill Systems, a supplier of software and hardware storage systems, for approximately $696 million.
Controversies
In 2015, Seagate's wireless NAS drives were found to have an undocumented hardcoded password.
On January 21, 2014, numerous tech articles around the globe published findings from the cloud storage provider Backblaze indicating that Seagate hard disks were the least reliable among prominent hard disk manufacturers. However, the Backblaze tests have been criticized for a flawed methodology with inconsistent environment variables, such as ambient temperatures, vibration, and disk usage. In addition, Backblaze's statistics show that the vast majority of their installed drives are manufactured by Seagate, and Backblaze editor Andy Klein has noted "that a large number of new Seagate drives being deployed could be statistically responsible" for failure rate data in their specific datacenter population. In the broader landscape, Seagate enterprise drives were named "most reliable" for seven years running in the IT Brand Pulse survey of top IT professionals, and cited as the leader for the previous two years in every measured category: reliability, performance, innovation, price, and service and support. In 2019, Backblaze released updated statistics which reported that Seagate drives had the most failures in Q2 2019, whereas its best-rated drives were made by Toshiba.
In October 2021, a report by U.S. Senate Republicans claimed that Seagate violated Export Administration Regulations by selling parts and components to Huawei following U.S. sanctions against the Chinese telecommunications company. The company received a letter in August 2022 from the U.S. Commerce Department's Bureau of Industry and Security (BIS) for allegedly violating export sanctions by selling Huawei hard drives. Seagate denied any violations, claiming that its foreign-made hard drives were not subject to the restriction since the disks and the equipment to make them were not a direct product of any American semiconductor technology or software. In April 2023, Seagate reached a settlement agreement with the department, agreeing to pay $300 million—the largest civil penalty imposed by the BIS—for selling over 7.4 million hard drives to Huawei without BIS authorization. The resolution also included three stages of audits focusing on its export controls compliance program and a suspended denial order.
References
External links
1979 establishments in California
American brands
American companies established in 1979
Companies based in Fremont, California
Companies listed on the Nasdaq
Computer companies established in 1979
Computer hardware companies
Computer storage companies
Hard disk drives
Information technology companies of the United States
Private equity portfolio companies
Silver Lake (investment firm) companies
Computer companies of the United States
Technology companies based in the San Francisco Bay Area
2000 mergers and acquisitions
2002 initial public offerings
Tax inversions | Seagate Technology | [
"Technology"
] | 5,775 | [
"Computer hardware companies",
"Computers"
] |
328,252 | https://en.wikipedia.org/wiki/Pick%27s%20theorem | In geometry, Pick's theorem provides a formula for the area of a simple polygon with integer vertex coordinates, in terms of the number of integer points within it and on its boundary. The result was first described by Georg Alexander Pick in 1899. It was popularized in English by Hugo Steinhaus in the 1950 edition of his book Mathematical Snapshots. It has multiple proofs, and can be generalized to formulas for certain kinds of non-simple polygons.
Formula
Suppose that a polygon has integer coordinates for all of its vertices. Let $i$ be the number of integer points interior to the polygon, and let $b$ be the number of integer points on its boundary (including both vertices and points along the sides). Then the area $A$ of this polygon is:
\[ A = i + \frac{b}{2} - 1. \]
For example, the axis-aligned square with corners $(0,0)$, $(3,0)$, $(3,3)$, and $(0,3)$ has $i = 4$ interior points and $b = 12$ boundary points, so its area is $4 + \tfrac{12}{2} - 1 = 9$ square units.
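As a further worked example (added here for illustration), consider the right triangle with vertices $(0,0)$, $(4,0)$, and $(0,3)$. An edge between integer endpoints contains $\gcd(|\Delta x|, |\Delta y|)$ lattice points when one endpoint per edge is counted, so the boundary and interior counts give:
\[ b = \underbrace{4}_{\text{bottom}} + \underbrace{3}_{\text{left side}} + \underbrace{1}_{\text{hypotenuse}} = 8, \qquad i = 3, \qquad A = 3 + \frac{8}{2} - 1 = 6 = \frac{4 \cdot 3}{2}, \]
in agreement with the elementary area formula for a right triangle.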
Proofs
Via Euler's formula
One proof of this theorem involves subdividing the polygon into triangles with three integer vertices and no other integer points. One can then prove that each subdivided triangle has area exactly $\tfrac{1}{2}$. Therefore, the area of the whole polygon equals half the number of triangles in the subdivision. After relating area to the number of triangles in this way, the proof concludes by using Euler's polyhedral formula to relate the number of triangles to the number of grid points in the polygon.
The first part of this proof shows that a triangle with three integer vertices and no other integer points has area exactly $\tfrac{1}{2}$, as Pick's formula states. The proof uses the fact that all triangles tile the plane, with adjacent triangles rotated by 180° from each other around their shared edge. For tilings by a triangle with three integer vertices and no other integer points, each point of the integer grid is a vertex of six tiles. Because the number of triangles per grid point (six) is twice the number of grid points per triangle (three), the triangles are twice as dense in the plane as the grid points. Any scaled region of the plane contains twice as many triangles (in the limit as the scale factor goes to infinity) as the number of grid points it contains. Therefore, each triangle has area $\tfrac{1}{2}$, as needed for the proof. A different proof that these triangles have area $\tfrac{1}{2}$ is based on the use of Minkowski's theorem on lattice points in symmetric convex sets.
This already proves Pick's formula for a polygon that is one of these special triangles. Any other polygon can be subdivided into special triangles: add non-crossing line segments within the polygon between pairs of grid points until no more line segments can be added. The only polygons that cannot be subdivided in this way are the special triangles considered above; therefore, only special triangles can appear in the resulting subdivision. Because each special triangle has area $\tfrac{1}{2}$, a polygon of area $A$ will be subdivided into $2A$ special triangles.
The subdivision of the polygon into triangles forms a planar graph, and Euler's formula $V - E + F = 2$ gives an equation that applies to the number of vertices, edges, and faces of any planar graph. The vertices are just the grid points of the polygon; there are $V = i + b$ of them. The faces are the triangles of the subdivision, and the single region of the plane outside of the polygon. The number of triangles is $2A$, so altogether there are $F = 2A + 1$ faces. To count the edges, observe that there are $6A$ sides of triangles in the subdivision. Each edge interior to the polygon is the side of two triangles. However, there are $b$ edges of triangles that lie along the polygon's boundary and form part of only one triangle. Therefore, the number of sides of triangles obeys the equation $6A = 2E - b$, from which one can solve for the number of edges, $E = 3A + \tfrac{b}{2}$. Plugging these values for $V$, $E$, and $F$ into Euler's formula gives
\[ (i + b) - \left(3A + \frac{b}{2}\right) + (2A + 1) = 2. \]
Pick's formula is obtained by solving this linear equation for $A$. An alternative but similar calculation involves proving that the number of edges of the same subdivision is $E = 3i + 2b - 3$, leading to the same result.
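Explicitly, simplifying the left-hand side and solving for $A$ (a step spelled out here for clarity):
\[ i + \frac{b}{2} - A + 1 = 2 \quad\Longrightarrow\quad A = i + \frac{b}{2} - 1. \]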
It is also possible to go the other direction, using Pick's theorem (proved in a different way) as the basis for a proof of Euler's formula.
Other proofs
Alternative proofs of Pick's theorem that do not use Euler's formula include the following.
One can recursively decompose the given polygon into triangles, allowing some triangles of the subdivision to have area larger than 1/2. Both the area and the counts of points used in Pick's formula add together in the same way as each other, so the truth of Pick's formula for general polygons follows from its truth for triangles. Any triangle subdivides its bounding box into the triangle itself and additional right triangles, and the areas of both the bounding box and the right triangles are easy to compute. Combining these area computations gives Pick's formula for triangles, and combining triangles gives Pick's formula for arbitrary polygons.
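The additivity used in this argument can be verified directly (a check added here for clarity). If two polygons with counts $(i_1, b_1)$ and $(i_2, b_2)$ are glued along a common boundary path containing $c$ lattice points, including its two endpoints, then the $c - 2$ intermediate points become interior points of the union, so
\[ i = i_1 + i_2 + (c - 2), \qquad b = b_1 + b_2 - 2(c - 2) - 2, \]
and therefore
\[ \left(i_1 + \frac{b_1}{2} - 1\right) + \left(i_2 + \frac{b_2}{2} - 1\right) = i + \frac{b}{2} - 1, \]
so Pick's expression adds under gluing exactly as area does.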
Alternatively, instead of using grid squares centered on the grid points, it is possible to use grid squares having their vertices at the grid points. These grid squares cut the given polygon into pieces, which can be rearranged (by matching up pairs of squares along each edge of the polygon) into a polyomino with the same area.
Pick's theorem may also be proved based on complex integration of a doubly periodic function related to Weierstrass elliptic functions.
Applying the Poisson summation formula to the characteristic function of the polygon leads to another proof.
Pick's theorem was included in a 1999 web listing of the "top 100 mathematical theorems", which was later used by Freek Wiedijk as a benchmark set to test the power of different proof assistants. Pick's theorem had been formalized and proven in only two of the ten proof assistants recorded by Wiedijk.
Generalizations
Generalizations of Pick's theorem to non-simple polygons are more complicated and require more information than just the number of interior and boundary vertices. For instance, a polygon with $h$ holes bounded by simple integer polygons, disjoint from each other and from the boundary, has area
\[ A = i + \frac{b}{2} + h - 1. \]
It is also possible to generalize Pick's theorem to regions bounded by more complex planar straight-line graphs with integer vertex coordinates, using additional terms defined using the Euler characteristic of the region and its boundary, or to polygons with a single boundary polygon that can cross itself, using a formula involving the winding number of the polygon around each integer point as well as its total winding number.
The Reeve tetrahedra in three dimensions have four integer points as vertices and contain no other integer points, but do not all have the same volume. Therefore, there does not exist an analogue of Pick's theorem in three dimensions that expresses the volume of a polyhedron as a function only of its numbers of interior and boundary points. However, these volumes can instead be expressed using Ehrhart polynomials.
Related topics
Several other mathematical topics relate the areas of regions to the numbers of grid points. Blichfeldt's theorem states that every shape can be translated to contain at least its area in grid points. The Gauss circle problem concerns bounding the error between the areas and numbers of grid points in circles. The problem of counting integer points in convex polyhedra arises in several areas of mathematics and computer science.
In application areas, the dot planimeter is a transparency-based device for estimating the area of a shape by counting the grid points that it contains. The Farey sequence is an ordered sequence of rational numbers with bounded denominators whose analysis involves Pick's theorem.
Another simple method for calculating the area of a polygon is the shoelace formula. It gives the area of any simple polygon as a sum of terms computed from the coordinates of consecutive pairs of its vertices. Unlike Pick's theorem, the shoelace formula does not require the vertices to have integer coordinates.
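Written out, for vertices $(x_1, y_1), \dots, (x_n, y_n)$ listed in order around the polygon (with indices taken modulo $n$), the shoelace formula reads:
\[ A = \frac{1}{2} \left| \sum_{k=1}^{n} \left( x_k y_{k+1} - x_{k+1} y_k \right) \right|. \]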
References
External links
Pick's Theorem by Ed Pegg, Jr., the Wolfram Demonstrations Project.
Pi using Pick's Theorem by Mark Dabbs, GeoGebra
Digital geometry
Lattice points
Euclidean plane geometry
Area
Theorems about polygons
Articles containing proofs
Analytic geometry | Pick's theorem | [
"Physics",
"Mathematics"
] | 1,672 | [
"Scalar physical quantities",
"Planes (geometry)",
"Physical quantities",
"Quantity",
"Lattice points",
"Euclidean plane geometry",
"Size",
"Number theory",
"Articles containing proofs",
"Wikipedia categories named after physical quantities",
"Area"
] |
328,274 | https://en.wikipedia.org/wiki/Palynology | Palynology is the study of microorganisms and microscopic fragments of mega-organisms that are composed of acid-resistant organic material and occur in sediments, sedimentary rocks, and even some metasedimentary rocks. Palynomorphs are the microscopic, acid-resistant organic remains and debris produced by a wide variety of plants, animals, and Protista that have existed since the late Proterozoic.
It is the science that studies contemporary and fossil palynomorphs (paleopalynology), including pollen, spores, orbicules, dinocysts, acritarchs, chitinozoans and scolecodonts, together with particulate organic matter (POM) and kerogen found in sedimentary rocks and sediments. Palynology does not include diatoms, foraminiferans or other organisms with siliceous or calcareous tests. The name of the science is derived from the Greek palunō, "strew, sprinkle", and -logy: the study "of particles that are strewn".
Palynology is an interdisciplinary science that stands at the intersection of earth science (geology or geological science) and biological science (biology), particularly plant science (botany). Biostratigraphy, a branch of paleontology and paleobotany, involves fossil palynomorphs from the Precambrian to the Holocene for their usefulness in the relative dating and correlation of sedimentary strata. Palynology is also used to date and understand the evolution of many kinds of plants and animals. In paleoclimatology, fossil palynomorphs are studied for their usefulness in understanding ancient Earth history in terms of reconstructing paleoenvironments and paleoclimates.
Palynology is quite useful in disciplines such as archaeology, honey production, and criminal and civil law. In archaeology, palynology is widely used to reconstruct ancient paleoenvironments and the environmental shifts that significantly influenced past human societies, and to reconstruct the diet of prehistoric and historic humans. Melissopalynology, the study of pollen and other palynomorphs in honey, identifies the sources of pollen in terms of geographical location(s) and genera of plants. This not only provides important information on the ecology of honey bees, it is also an important tool in discovering and policing the criminal adulteration and mislabeling of honey and its products. Forensic palynology uses palynomorphs as evidence in criminal and civil law to prove or disprove a physical link between objects, people, and places.
Palynomorphs
Palynomorphs are broadly defined as organic remains, including microfossils and microscopic fragments of mega-organisms, that are composed of acid-resistant organic material and range in size between 5 and 500 micrometres. They are extracted from soils, sedimentary rocks and sediment cores, and other materials by a combination of physical (ultrasonic treatment and wet sieving) and chemical (acid digestion) procedures to remove the non-organic fraction. Palynomorphs may be composed of organic material such as chitin, pseudochitin and sporopollenin.
Palynomorphs form a geological record of importance in determining the type of prehistoric life that existed at the time the sedimentary strata were laid down. As a result, these microfossils give important clues to the prevailing climatic conditions of the time. Their paleontological utility derives from an abundance numbering in millions of palynomorphs per gram in organic marine deposits, even when such deposits are generally not fossiliferous. Palynomorphs, however, have generally been destroyed in metamorphic or recrystallized rocks.
Typical palynomorphs include dinoflagellate cysts, acritarchs, spores, pollen, plant tissue, fungi, scolecodonts (scleroprotein teeth, jaws, and associated features of polychaete annelid worms), arthropod organs (such as insect mouthparts), and chitinozoans. Palynomorphs are microscopic structures that are abundant in most sediments and are resistant to the treatments used in routine pollen extraction.
Palynofacies
A palynofacies is the complete assemblage of organic matter and palynomorphs in a fossil deposit. The term was introduced by the French geologist André Combaz in 1964. Palynofacies studies are often linked to investigations of the organic geochemistry of sedimentary rocks. The study of the palynofacies of a sedimentary depositional environment can be used to learn about the depositional palaeoenvironments of sedimentary rocks in exploration geology, often in conjunction with palynological analysis and vitrinite reflectance.
Palynofacies can be used in two ways:
Organic palynofacies considers all the acid insoluble particulate organic matter (POM), including kerogen and palynomorphs in sediments and palynological preparations of sedimentary rocks. The sieved or unsieved preparations may be examined using strew mounts on microscope slides that may be examined using a transmitted light biological microscope or ultraviolet (UV) fluorescence microscope. The abundance, composition and preservation of the various components, together with the thermal alteration of the organic matter is considered.
Palynomorph palynofacies considers the abundance, composition and diversity of palynomorphs in a sieved palynological preparation of sediments or palynological preparation of sedimentary rocks. The ratio of marine fossil phytoplankton (acritarchs and dinoflagellate cysts), together with chitinozoans, to terrestrial palynomorphs (pollen and spores) can be used to derive a terrestrial input index in marine sediments.
History
Early history
The earliest reported observations of pollen under a microscope are likely to have been in the 1670s by the English botanist Nehemiah Grew, who described pollen and the stamen, and concluded that pollen is required for sexual reproduction in flowering plants.
By the late 1870s, as optical microscopes improved and the principles of stratigraphy were worked out, Robert Kidston and P. Reinsch were able to examine the presence of fossil spores in the Devonian and Carboniferous coal seams and make comparisons between the living spores and the ancient fossil spores. Early investigators include Christian Gottfried Ehrenberg (radiolarians, diatoms and dinoflagellate cysts), Gideon Mantell (desmids) and Henry Hopley White (dinoflagellate cysts).
1890s to 1940s
Quantitative analysis of pollen began with Lennart von Post's published work. Although he published in the Swedish language, his methodology gained a wide audience through his lectures. In particular, his Kristiania lecture of 1916 was important in gaining a wider audience. Because the early investigations were published in the Nordic languages (Scandinavian languages), the field of pollen analysis was confined to those countries. The isolation ended with the German publication of Gunnar Erdtman's 1921 thesis. The methodology of pollen analysis became widespread throughout Europe and North America and revolutionized Quaternary vegetation and climate change research.
Earlier pollen researchers include Früh (1885), who enumerated many common tree pollen types, and a considerable number of spores and herb pollen grains. Trybom (1888) studied pollen samples taken from the sediments of Swedish lakes; pine and spruce pollen was found in such profusion that he considered them to be serviceable as "index fossils". Georg F. L. Sarauw studied fossil pollen of middle Pleistocene age (Cromerian) from the harbour of Copenhagen. Lagerheim (in Witte 1905) and C. A. Weber (in H. A. Weber 1918) appear to be among the first to undertake 'percentage frequency' calculations.
1940s to 1989
The term palynology was introduced by Hyde and Williams in 1944, following correspondence with the Swedish geologist Ernst Antevs, in the pages of the Pollen Analysis Circular (one of the first journals devoted to pollen analysis, produced by Paul Sears in North America). Hyde and Williams chose palynology on the basis of the Greek words paluno meaning 'to sprinkle' and pale meaning 'dust' (and thus similar to the Latin word pollen). The archive-based background to the adoption of the term palynology and to alternative names (e.g. paepalology, pollenology) has been exhaustively explored. It has been argued there that the word gained general acceptance once used by the influential Swedish palynologist Gunnar Erdtman.
Pollen analysis in North America stemmed from Phyllis Draper, an MS student under Sears at the University of Oklahoma. During her time as a student, she developed the first pollen diagram from a sample that depicted the percentage of several species at different depths at Curtis Bog. This was the introduction of pollen analysis in North America; pollen diagrams today still often remain in the same format with depth on the y-axis and abundances of species on the x-axis.
1990s to the 21st century
Pollen analysis advanced rapidly in this period due to advances in optics and computers. Much of the science was revised by Johannes Iversen and Knut Fægri in their textbook on the subject.
Methods of studying palynomorphs
Chemical preparation
Chemical digestion follows a number of steps. Initially, the only chemical treatment used by researchers was treatment with potassium hydroxide (KOH) to remove humic substances; deflocculation was accomplished through surface treatment or ultrasonic treatment, although sonication may cause the pollen exine to rupture. In 1924, the use of hydrofluoric acid (HF) to digest silicate minerals was introduced by Assarson and Granlund, greatly reducing the amount of time required to scan slides for palynomorphs.
Palynological studies using peats presented a particular challenge because of the presence of well-preserved organic material, including fine rootlets, moss leaflets and organic litter. This was the last major challenge in the chemical preparation of materials for palynological study. Acetolysis was developed by Gunnar Erdtman and his brother to remove these fine cellulosic materials by dissolving them. In acetolysis the specimen is treated with acetic anhydride and sulfuric acid, dissolving cellulosic materials and thus providing better visibility for palynomorphs.
Some steps of the chemical treatment require special care for safety reasons, in particular the use of HF, which diffuses very quickly through the skin, causes severe chemical burns, and can be fatal.
Another treatment includes kerosene flotation for chitinous materials.
Analysis
Once samples have been prepared chemically, they are mounted on microscope slides using silicone oil, glycerol or glycerol-jelly and examined using light microscopy, or mounted on a stub for scanning electron microscopy.
Researchers will often study either modern samples from a number of unique sites within a given area, or samples from a single site with a record through time, such as samples obtained from peat or lake sediments. More recent studies have used the modern analog technique in which paleo-samples are compared to modern samples for which the parent vegetation is known.
When the slides are observed under a microscope, the researcher counts the number of grains of each pollen taxon. This record is then used to produce a pollen diagram. These data can be used to detect anthropogenic effects, such as logging, traditional patterns of land use, or long-term changes in regional climate.
Applications
Palynology can be applied to problems in many scientific disciplines including geology, botany, paleontology, archaeology, pedology (soil study), and physical geography:
Biostratigraphy and geochronology. Geologists use palynological studies in biostratigraphy to correlate strata and determine the relative age of a given bed, horizon, formation or stratigraphical sequence. Because the distribution of acritarchs, chitinozoans, dinoflagellate cysts, pollen and spores provides evidence of stratigraphical correlation through biostratigraphy and palaeoenvironmental reconstruction, one common and lucrative application of palynology is in oil and gas exploration.
Paleoecology and climate change. Palynology can be used to reconstruct past vegetation (land plants) and marine and freshwater phytoplankton communities, and so infer past environmental (palaeoenvironmental) and palaeoclimatic conditions in an area thousands or millions of years ago, a fundamental part of research into climate change.
Organic palynofacies studies, which examine the preservation of the particulate organic matter and palynomorphs, provide information on the depositional environment of sediments and the depositional palaeoenvironments of sedimentary rocks.
Geothermal alteration studies examine the colour of palynomorphs extracted from rocks to give the thermal alteration and maturation of sedimentary sequences, which provides estimates of maximum palaeotemperatures.
Limnology studies. Freshwater palynomorphs and animal and plant fragments, including the prasinophytes and desmids (green algae) can be used to study past lake levels and long term climate change.
Taxonomy and evolutionary studies. This involves the use of pollen morphological characters as a source of taxonomic data to delimit plant species within the same family or genus. Pollen apertural status is frequently used for differential sorting or for finding similarities between species of the same taxa. This is also called palynotaxonomy.
Forensic palynology: the study of pollen and other palynomorphs for evidence at a crime scene.
Allergy studies and pollen counting. Studies of the geographic distribution and seasonal production of pollen can be used to forecast pollen conditions, helping sufferers of allergies such as hay fever.
Melissopalynology: the study of pollen and spores found in honey.
Archaeological palynology examines human uses of plants in the past. This can help determine seasonality of site occupation, presence or absence of agricultural practices or products, and 'plant-related activity areas' within an archaeological context. Bonfire Shelter is one such example of this application.
See also
References
Sources
Moore, P.D., et al. (1991), Pollen Analysis (Second Edition). Blackwell Scientific Publications.
Traverse, A. (1988), Paleopalynology. Unwin Hyman.
Roberts, N. (1998), The Holocene an environmental history, Blackwell Publishing.
External links
The AASP - The Palynological Society
International Federation of Palynological Societies
Palynology Laboratory, French Institute of Pondicherry, India
The Palynology Unit, Kew Gardens, UK
PalDat, palynological database hosted by the University of Vienna, Austria
The Micropalaeontological Society
Commission Internationale de Microflore du Paléozoique (CIMP), International Commission for Palaeozoic Palynology
Centre for Palynology, University of Sheffield, UK
Linnean Society Palynology Specialist Group (LSPSG)
Canadian Association of Palynologists
Pollen and Spore Identification Literature
Palynologische Kring, The Netherlands and Belgium
Palynofacies, an annotated link directory.
Acosta et al., 2018. Climate change and peopling of the Neotropics during the Pleistocene-Holocene transition. Boletín de la Sociedad Geológica Mexicana. http://boletinsgm.igeolcu.unam.mx/bsgm/index.php/component/content/article/368-sitio/articulos/cuarta-epoca/7001/1857-7001-1-Acosta
Earth sciences
Archaeological science
Subfields of paleontology
Microfossils
Branches of botany
Sedimentology | Palynology | [
"Chemistry",
"Biology"
] | 3,268 | [
"Branches of botany",
"Microfossils",
"Microscopy"
] |
328,305 | https://en.wikipedia.org/wiki/Trampoline%20%28computing%29 | In computer programming, the word trampoline has a number of meanings, and is generally associated with jump instructions (i.e. moving to different code paths).
Low-level programming
Trampolines (sometimes referred to as indirect jump vectors) are memory locations holding addresses pointing to interrupt service routines, I/O routines, etc. Execution jumps into the trampoline and then immediately jumps out, or bounces, hence the term trampoline. They have many uses:
Trampolines can be used to overcome the limitations imposed by a central processing unit (CPU) architecture that expects to always find vectors in fixed locations.
When an operating system is booted on a symmetric multiprocessing (SMP) machine, only one processor, the bootstrap processor, will be active. After the operating system has configured itself, it will instruct the other processors to jump to a piece of trampoline code that will initialize the processors and wait for the operating system to start scheduling threads on them.
High-level programming
As used in some Lisp implementations, a trampoline is a loop that iteratively invokes thunk-returning functions (continuation-passing style). A single trampoline suffices to express all control transfers of a program; a program so expressed is trampolined, or in trampolined style; converting a program to trampolined style is trampolining. Programmers can use trampolined functions to implement tail-recursive function calls in stack-oriented programming languages.
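As a minimal sketch of trampolined style (added here for illustration; the names `Bounce`, `fact_step`, and `trampoline` are invented, and C++ is used rather than Lisp), a tail-recursive factorial can be rewritten so that each step returns a thunk instead of making a recursive call, and a single loop drives the whole computation:

```cpp
#include <cstdint>
#include <functional>
#include <iostream>

// A "bounce" is either a finished result or a thunk that computes
// the next step when invoked. (Illustrative names, not a fixed API.)
struct Bounce {
    bool done;                     // true => result holds the final answer
    std::uint64_t result;          // valid only when done is true
    std::function<Bounce()> next;  // thunk for the next step otherwise
};

// Trampolined factorial: instead of a tail call to itself, each step
// returns a thunk describing the next step, so the call stack stays flat.
Bounce fact_step(std::uint64_t n, std::uint64_t acc) {
    if (n <= 1) return Bounce{true, acc, {}};
    return Bounce{false, 0, [=] { return fact_step(n - 1, acc * n); }};
}

// The trampoline itself: one loop that repeatedly invokes thunks.
// All control transfers of the computation pass through this loop.
std::uint64_t trampoline(Bounce b) {
    while (!b.done) b = b.next();
    return b.result;
}

int main() {
    std::cout << trampoline(fact_step(20, 1)) << '\n';  // prints 2432902008176640000
}
```

Because the loop, rather than the language's call stack, carries the control transfers, the stack depth stays constant regardless of the recursion depth being expressed.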
In Java, trampoline refers to using reflection to avoid using inner classes, for example in event listeners. The time overhead of a reflection call is traded for the space overhead of an inner class. Trampolines in Java usually involve the creation of a GenericListener to pass events to an outer class.
In Mono Runtime, trampolines are small, hand-written pieces of assembly code used to perform various tasks.
When interfacing pieces of code with incompatible calling conventions, a trampoline is used to convert the caller's convention into the callee's convention.
In embedded systems, trampolines are short snippets of code that start up other snippets of code. For example, rather than write interrupt handlers entirely in assembly language, another option is to write interrupt handlers mostly in C, and use a short trampoline to convert the assembly-language interrupt calling convention into the C calling convention.
When passing a callback to a system that expects to call a C function, but one wants it to execute the method of a particular instance of a class written in C++, one uses a short trampoline to convert the C function-calling convention to the C++ method-calling convention. One way of writing such a trampoline is to use a thunk. Another method is to use a generic listener.
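A minimal C++ sketch of this conversion (added here for illustration; `c_register_and_fire` stands in for a real C library's registration function, and the other names are invented):

```cpp
#include <iostream>

// A C-style API that expects a plain function pointer plus an opaque
// context pointer; this stands in for a real C library's callback API.
typedef void (*c_callback)(int event, void* context);

void c_register_and_fire(c_callback cb, void* context, int event) {
    cb(event, context);  // the C library eventually invokes the callback
}

class Widget {
public:
    void on_event(int event) {
        std::cout << "Widget received event " << event << '\n';
    }
};

// The trampoline: a function with a C-compatible signature that recovers
// the object from the context pointer and "bounces" the call into the
// C++ member function, bridging the two calling conventions.
extern "C" void widget_trampoline(int event, void* context) {
    static_cast<Widget*>(context)->on_event(event);
}

int main() {
    Widget w;
    c_register_and_fire(widget_trampoline, &w, 42);  // prints: Widget received event 42
}
```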
In Objective-C, a trampoline is an object returned by a method that captures and reifies all messages sent to it and then "bounces" those messages on to another object, for example in higher order messaging.
In the GCC compiler, trampoline refers to a technique for implementing pointers to nested functions when -ftrampolines option is enabled. The trampoline is a small piece of code which is constructed on the fly on the stack when the address of a nested function is taken. The trampoline sets up the static link pointer, which allows the nested function to access local variables of the enclosing function. The function pointer is then simply the address of the trampoline. This avoids having to use "fat" function pointers for nested functions which carry both the code address and the static link. This, however, conflicts with the desire to make the stack non-executable for security reasons.
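C++ has no nested functions, but its capturing lambdas illustrate the same "fat pointer" problem that GCC's stack trampolines solve. The following sketch (names invented for illustration) contrasts a captureless lambda, which converts to a plain function pointer, with the portable workaround of passing the environment through a separate context pointer:

```cpp
#include <iostream>

// A captureless lambda converts to an ordinary function pointer,
// just like taking the address of a non-nested function.
int (*increment)(int) = [](int x) { return x + 1; };

// A capturing lambda cannot: its environment has to live somewhere.
// GCC's nested-function trampoline solves this by writing a small stub
// on the stack that loads the static link and jumps to the shared code,
// keeping the "pointer" thin. The portable alternative below instead
// passes the environment explicitly as a context pointer.
struct Env { int offset; };

int add_offset(int x, void* context) {  // thin pointer + explicit data
    return x + static_cast<Env*>(context)->offset;
}

int main() {
    Env env{10};
    std::cout << increment(1) << ' '            // prints 2
              << add_offset(5, &env) << '\n';   // prints 15
}
```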
In the esoteric programming language Befunge, a trampoline is an instruction to skip the next cell in the control flow.
No-execute stacks
Some implementations of trampolines cause a loss of no-execute stacks (NX stack). In the GNU Compiler Collection (GCC) in particular, a nested function builds a trampoline on the stack at runtime, and then calls the nested function through the data on stack. The trampoline requires the stack to be executable.
No-execute stacks and nested functions are mutually exclusive under GCC. If a nested function is used in the development of a program, then the NX stack is silently lost. GCC offers the -Wtrampolines warning to alert of the condition.
Software engineered using a secure development lifecycle often does not allow the use of nested functions due to the loss of NX stacks.
See also
DLL trampolining
Retpoline
References
Computing terminology | Trampoline (computing) | [
"Technology"
] | 990 | [
"Computing terminology"
] |
328,352 | https://en.wikipedia.org/wiki/Anastomosis | An anastomosis (, : anastomoses) is a connection or opening between two things (especially cavities or passages) that are normally diverging or branching, such as between blood vessels, leaf veins, or streams. Such a connection may be normal (such as the foramen ovale in a fetus' heart) or abnormal (such as the patent foramen ovale in an adult's heart); it may be acquired (such as an arteriovenous fistula) or innate (such as the arteriovenous shunt of a metarteriole); and it may be natural (such as the aforementioned examples) or artificial (such as a surgical anastomosis). The reestablishment of an anastomosis that had become blocked is called a reanastomosis. Anastomoses that are abnormal, whether congenital or acquired, are often called fistulas.
The term is used in medicine, biology, mycology, geology, and geography.
Etymology
Anastomosis: medical or Modern Latin, from Greek ἀναστόμωσις (anastomosis, "outlet, opening"), from ana- "up, on, upon" and stoma "mouth"; the underlying verb meant "to furnish with a mouth". Thus the -stom- syllable is cognate with that of stoma in botany or stoma in medicine.
Medical anatomy
An anastomosis is the connection of two normally divergent structures. It refers to connections between blood vessels or between other tubular structures such as loops of intestine.
Circulatory
In circulatory anastomoses, many arteries naturally anastomose with each other; for example, the inferior epigastric artery and superior epigastric artery, or the anterior and/or posterior communicating arteries in the Circle of Willis in the brain. The circulatory anastomosis is further divided into arterial and venous anastomosis. Arterial anastomosis includes actual arterial anastomosis (e.g., palmar arch, plantar arch) and potential arterial anastomosis (e.g. coronary arteries and cortical branch of cerebral arteries). Anastomoses also form alternative routes around capillary beds in areas that do not need a large blood supply, thus helping regulate systemic blood flow.
Surgical
Surgical anastomosis occurs when segments of intestine, blood vessel, or any other structure are connected together surgically (anastomosed). Examples include arterial anastomosis in bypass surgery, intestinal anastomosis after a piece of intestine has been resected, Roux-en-Y anastomosis, and ureteroureterostomy. Surgical anastomosis techniques include linear stapled anastomosis, hand-sewn anastomosis, and end-to-end anastomosis (EEA). Anastomosis can be performed by hand or with an anastomosis assist device. Studies have been performed comparing various anastomosis approaches, taking into account surgical "time and cost, postoperative anastomotic bleeding, leakage, and stricture".
Pathological
Pathological anastomosis results from trauma or disease and may involve veins, arteries, or intestines. These are usually referred to as fistulas. In the cases of veins or arteries, traumatic fistulas usually occur between artery and vein. Traumatic intestinal fistulas usually occur between two loops of intestine (entero-enteric fistula) or intestine and skin (enterocutaneous fistula). Portacaval anastomosis, by contrast, is an anastomosis between a vein of the portal circulation and a vein of the systemic circulation, which allows blood to bypass the liver in patients with portal hypertension, often resulting in hemorrhoids, esophageal varices, or caput medusae.
Biology
Evolution
In evolution, anastomosis is a recombination of evolutionary lineage. Conventional accounts of evolutionary lineage present themselves as the branching out of species into novel forms. Under anastomosis, species might recombine after initial branching out, such as in the case of recent research that shows that ancestral populations along human and chimpanzee lineages may have interbred after an initial branching event. The concept of anastomosis also applies to the theory of symbiogenesis, in which new species emerge from the formation of novel symbiotic relationships.
Mycology
In mycology, anastomosis is the fusion between branches of the same or different hyphae. Hence the bifurcating fungal hyphae can form true reticulating networks. By sharing materials in the form of dissolved ions, hormones, and nucleotides, the fungus maintains bidirectional communication with itself. The fungal network might begin from several origins: several spores (i.e. by means of conidial anastomosis tubes), or several points of penetration, each a spreading circumference of absorption and assimilation. Once the tip of one expanding, exploring self encounters another, the tips press against each other in pheromonal recognition, or by an unknown recognition system, and fuse to form a genetically singular clonal colony (a genet) that can cover anything from a microscopic area to hectares.
For fungi, anastomosis is also a component of reproduction. In some fungi, two different haploid mating types – if compatible – merge. Somatically, they form a morphologically similar mycelial wave front that continues to grow and explore. The significant difference is that each septated unit is binucleate, containing two unfused nuclei, i.e. one from each parent that eventually undergoes karyogamy and meiosis to complete the sexual cycle.
The term "anastomosing" is also used for mushroom gills which interlink and separate to form a network.
Botany
The growth of a strangler fig around a host tree, with tendrils fusing together to form a mesh, is called anastomosing.
Geosciences
Geology
In geology, veins of quartz (or other) minerals can display anastomosis.
Ductile shear zones frequently show anastomosing geometries of highly-strained rocks around lozenges of less-deformed material.
Molten lava flows sometimes flow in anastomosed lava channels or lava tubes.
In cave systems, anastomosis is the splitting of cave passages that later reconnect.
Geography and hydrology
Anastomosing rivers and anastomosing streams consist of multiple channels that divide and reconnect and are separated by semi-permanent banks formed of cohesive material, such that they are unlikely to migrate from one channel position to another. They can be confused with braided rivers based on their planforms alone, but braided rivers are much shallower and more dynamic than anastomosing rivers. Some definitions require that an anastomosing river be made up of interconnected channels that enclose floodbasins, again in contrast with braided rivers.
Rivers with anastomosed reaches include the Magdalena River in Colombia, the upper Columbia River in British Columbia, Canada, the Drumheller Channels of the Channeled Scablands of the state of Washington, US, and the upper Narew River in Poland. The term anabranch has been used for segments of anastomosing rivers.
Braided streams show anastomosing channels around channel bars of alluvium.
References
Angiology
Digestive system
Evolutionary biology
Petrology
Surgery | Anastomosis | [
"Biology"
] | 1,590 | [
"Digestive system",
"Organ systems",
"Evolutionary biology"
] |
328,564 | https://en.wikipedia.org/wiki/Star%20party | A star party is a gathering of amateur astronomers for the purpose of observing objects and events in the sky. Local star parties may be one-night affairs, but larger events can last a week or longer and attract hundreds or even thousands of participants. Many astronomy clubs have monthly star parties during the warmer months. Large regional star parties are held annually and are an important part of the hobby of amateur astronomy. A naturally dark site away from light pollution is typical.
Participants bring telescopes and binoculars of all types and sizes and spend the nights observing astronomical objects such as planets, comets, stars, and deep-sky objects together. Astrophotography and CCD imaging are also very popular. At larger star parties, lectures, swap meets, exhibitions of home-built telescopes, contests, tours, raffles, and other similar activities are common. Commercial vendors selling a variety of astronomical equipment may also be present. As with other hobbyist gatherings, much camaraderie and discussion of various aspects of the hobby occurs at any star party.
History
The idea of a star party is not new and allegedly goes back at least as far as George III of the United Kingdom, who was passionately interested in astronomy and mathematics. On nights when poor weather blocked the view of the real stars and planets, attendants are said to have hung paper lanterns marked with drawings in the trees around the royal palace to provide something else for the King and his guests to spot through their telescopes.
Public star parties
Star parties whose focus is on bringing the stars to the people are often staged in urban areas where people congregate in large numbers. This is in contrast to star parties typically held in remote dark-sky areas more conducive to stargazing.
In the US, notable star parties include the annual Winter Star Party, held in the Florida Keys; the Mid Atlantic Star Party, held on the east coast of the United States; the Oregon Star Party; the Stellafane Convention, held in Vermont; the Texas Star Party, held in west Texas; and the Okie-Tex Star Party, held near Kenton, Oklahoma. In Canada, Starfest, held near Ayton, Ontario, is organized by the North York Astronomical Association. In the United Kingdom, notable annual star parties include the Spring and Autumn Equinox star parties held at Kelling Heath Holiday Park and Kielder in Northumbria. In Australia, the South Pacific Star Party is held each year. In Sri Lanka, Star Party Sri Lanka is held annually at the University of Peradeniya premises.
See also
References
External links
Astronomy Event Calendar at Sky and Telescope Website
List of star parties in North America with astronomy-weather forecasts
Astronomy events
Amateur astronomy organizations | Star party | [
"Astronomy"
] | 546 | [
"Star parties",
"Astronomy events",
"History of astronomy",
"Astronomy organizations",
"Amateur astronomy organizations"
] |
328,602 | https://en.wikipedia.org/wiki/Ansari%20X%20Prize | The Ansari X Prize was a space competition in which the X Prize Foundation offered a US$10,000,000 prize for the first non-government organization to launch a reusable crewed spacecraft into space twice within two weeks. It was modeled after early 20th-century aviation prizes, and aimed to spur development of low-cost spaceflight.
Created in May 1996 and initially called just the "X Prize", it was renamed the "Ansari X Prize" on May 6, 2004, following a multimillion-dollar donation from entrepreneurs Anousheh Ansari and Amir Ansari.
The prize was won on October 4, 2004, the 47th anniversary of the Sputnik 1 launch, by the Tier One project designed by Burt Rutan and financed by Microsoft co-founder Paul Allen, using the experimental spaceplane SpaceShipOne. $10 million was awarded to the winner, and more than $100 million was invested in new technologies in pursuit of the prize.
Several other X Prizes have since been announced by the X Prize Foundation, promoting further development in space exploration and other technological fields.
Motivation
The X Prize was inspired by the Orteig Prize—the US$25,000 prize offered in 1919 by New York hotel owner Raymond Orteig that encouraged a number of intrepid aviators in the mid-1920s to fly across the Atlantic Ocean from New York to Paris—which was ultimately won in 1927 by Charles Lindbergh in his aircraft Spirit of St. Louis. Reading the 1953 book The Spirit of St. Louis in 1994, Peter Diamandis realized that "such a prize, updated and offered ... as a space prize, might be just what was needed to bring space travel to the general public, to jump-start a commercial space industry."
Diamandis developed a fully formed idea for a "suborbital space barnstorming prize", and set an initial goal of finding backers to support a prize. He named it the X Prize, in part because "X" could serve as a variable for the name of the person who might later back the prize; any craft built to win the prize would be experimental, and a long line of experimental aircraft built for the US Air Force had been so designated, including the X-15 that was, in 1963, the first government-built craft to carry a human into space; and because "Ten is the Roman numeral X".
The X Prize was first publicly proposed by Diamandis in an address to the NSS International Space Development Conference in 1995. The competition goal was adopted from the SpaceCub project, demonstration of a private vehicle capable of flying a pilot to the edge of space, defined as 100 km altitude. This goal was selected to help encourage the space industry in the private sector, which is why the entries were not allowed to have any government funding. It aimed to demonstrate that spaceflight can be affordable and accessible to corporations and civilians, opening the door to commercial spaceflight and space tourism. It is also hoped that competition will breed innovation, introducing new low-cost methods of reaching Earth orbit, and ultimately pioneering low-cost space travel and unfettered human expansion into the Solar System.
NASA is developing a similar prize program called Centennial Challenges to generate innovative solutions to space technology problems.
Contestants
Twenty-six teams from around the world participated, ranging from volunteer hobbyists to large corporate-backed operations. Some sources mention two other companies, AeroAstro and Cerulean Freight Forwarding Co., but do not mention Whalen Aeronautics Inc.
Winning team
The Tier One project made two successful competitive flights: X1 on September 29, 2004, piloted by Mike Melvill to 102.9 km; and X2 on October 4, 2004, piloted by Brian Binnie to 112 km. They thus won the prize, which was awarded on November 6, 2004. In press coverage, the winning team has been variously referred to as Mojave Aerospace Ventures, the corporation that funded the attempt; Tier One, the project name of Mojave's contest entry; and Scaled Composites, the manufacturer of the craft.
At least two documentaries were created to document the efforts of the winning team to win the prize. They included Black Sky: The Race for Space and Black Sky: Winning the X Prize. The documentaries chronicle the story of Burt Rutan and SpaceShipOne.
As of 2011, the trophy is on display in the Saint Louis Science Center in St. Louis, Missouri.
Unsuccessful attempts
Although only the Tier One team actually launched a spacecraft on a sub-orbital spaceflight, several other teams have conducted low-altitude tests or announced future plans to launch into space:
ARCA launched Demonstrator 2B rocket on September 9, 2004, at Cape Midia Air Force Base in Romania. It was the first flight of a reusable monopropellant rocket.
The da Vinci Project originally announced that their first flight would be on October 2, 2004, but this was postponed indefinitely on September 23, 2004, as they were unable to obtain a few necessary components in time. No flight ever occurred.
The Canadian Arrow team conducted a successful full-power engine test in 2005 and announced on June 2, 2005, that it had received permission from the Canadian government to use Cape Rich as a future launch site.
On August 8, 2004, Space Transport Corporation's Rubicon 1 and Armadillo Aerospace's unnamed test vehicle, in two separate uncrewed test launches, both crashed and were destroyed.
On February 15, 2005, AERA Corporation (formerly American Astronautics) announced its plans to send seven paying passengers into space as early as 2006, a full year before the first announced speculative Virgin Galactic flight.
List of major donors by order of donation
Anousheh Ansari and Amir Ansari, the official sponsors of the competition.
First USA (J.P. Morgan Chase), US$1,000,000
New Spirit of St. Louis Organization
Danforth Foundation, US$500,000
Tom Clancy, US$100,000–$500,000
J.S. McDonnell (McDonnell Douglas)
Andrew Taylor (Enterprise Rent-A-Car)
Andrew Beal (Beal Bank)
St. Louis Science Center
Organization
With the Ansari X Prize, the X Prize Foundation (based in Santa Monica, CA) established a philanthropic model in which offering a prize for achieving a specific goal stimulates entrepreneurial investment that produces a tenfold or greater return on the prize purse and at least one hundredfold in follow-on investment and social benefit. The Foundation has developed into a non-profit prize institute that conceives, designs and manages public competitions for the benefit of humanity.
Funding
The funding for the US $10,000,000 prize was unconventional. It came from a "hole-in-one insurance policy". It was "fully funded through January 1, 2005, through private donations and backed by an insurance policy to guarantee that the $10 million is in place on the day that the prize is won."
Spin-offs
The success of the X Prize competition has spurred spin-offs that are set up in the same way. There have been two major spin-offs at this point. The first is the M Prize (short for Methuselah Mouse Prize), a prize set up by University of Cambridge biogerontologist Aubrey de Grey which will go to the scientific team that successfully extends the life or reverses the aging of mice, in the hope that such interventions would eventually be available to humans. The second is the NASA Centennial Challenges, which consist of (among others) the Tether Challenge, in which teams compete to develop superstrong tethers as a component of space elevators, and the Beam Power Challenge, which encourages ideas for transmitting power wirelessly. An independent spin-off called the N-Prize was started by Cambridge microbiologist Paul H. Dear in 2007, designed to foster research into low-cost orbital launchers.
The X Prize foundation itself is developing additional prizes: the Archon X Prize, to advance research in the field of genomics; the Automotive X Prize, an engineering competition to create a fuel efficient clean car; the Wirefly X Prize Cup, an annually held air & space exposition featuring space-related competitions and rocketry, and the Google Lunar X Prize, a competition for privately funded lunar exploration. Of several awards on offer, the largest—$20 million—will be awarded to the first privately funded team to produce a robot that lands on the Moon and travels 500 m (1,640 ft) across its surface.
There is also a possible "H-Prize", focused on hydrogen vehicle research, although this goal has been addressed by H.R. 5143, an X-Prize-inspired bill passed by the United States House of Representatives, which was later folded into the Energy Independence and Security Act of 2007.
See also
Ansari X Prize:
Tier One: SpaceShipOne + WhiteKnightOne
Black Sky: The Race For Space (2004 telefilm), a Discovery Channel documentary about the Ansari X Prize
How to Make a Spaceship (2016 book) by Julian Guthrie, about the Ansari X Prize
Similar topics:
NASA Centennial Challenges
Orteig Prize
America's Space Prize
Methuselah Mouse Prize, or M Prize (modeled after the Ansari X Prize)
N-Prize, a low-budget orbital satellite insertion challenge
List of space technology awards
List of challenge awards
List of awards named after people
Related technical topics:
Specific impulse
Tsiolkovsky rocket equation
Delta-v
Further reading
"The X Prize", an article by Ian Parker on pages 52–63 of the 4 October 2004 issue of The New Yorker
References
External links
X Prize founder talks about the prize and the future of space travel (MIT Video)
FAI Rules for Astronautic Record Attempts
Challenge awards
Space-related awards
Private spaceflight
X Prizes | Ansari X Prize | [
"Technology"
] | 2,004 | [
"Science and technology awards",
"Space-related awards"
] |
328,684 | https://en.wikipedia.org/wiki/Annihilator%20%28ring%20theory%29 | In mathematics, the annihilator of a subset S of a module M over a ring R is the ideal of R formed by the elements of R that always give zero when multiplied by each element of S.
Over an integral domain, a module that has a nonzero annihilator is a torsion module, and a finitely generated torsion module has a nonzero annihilator.
The above definition applies also in the case of noncommutative rings, where the left annihilator of a left module is a left ideal, and the right-annihilator, of a right module is a right ideal.
Definitions
Let R be a ring, and let M be a left R-module. Choose a non-empty subset S of M. The annihilator of S, denoted AnnR(S), is the set of all elements r in R such that, for all s in S, rs = 0. In set notation,

$$\operatorname{Ann}_R(S) = \{ r \in R \mid rs = 0 \text{ for all } s \in S \}.$$

It is the set of all elements of R that "annihilate" S (the elements for which S is a torsion set). Subsets of right modules may be used as well, after the modification of "sr = 0" in the definition.
The annihilator of a single element x is usually written AnnR(x) instead of AnnR({x}). If the ring R can be understood from the context, the subscript R can be omitted.
Since R is a module over itself, S may be taken to be a subset of R itself, and since R is both a right and a left R-module, the notation must be modified slightly to indicate the left or right side. Usually ℓ.AnnR(S) and r.AnnR(S) or some similar subscript scheme are used to distinguish the left and right annihilators, if necessary.
If M is an R-module and AnnR(M) = 0, then M is called a faithful module.
Properties
If S is a subset of a left R-module M, then Ann(S) is a left ideal of R.
If S is a submodule of M, then AnnR(S) is even a two-sided ideal: (ac)s = a(cs) = 0, since cs is another element of S.
If S is a subset of M and N is the submodule of M generated by S, then in general AnnR(N) is a subset of AnnR(S), but they are not necessarily equal. If R is commutative, then the equality holds.
M may be also viewed as an R/AnnR(M)-module using the action r̄m := rm. Incidentally, it is not always possible to make an R-module into an R/I-module this way, but if the ideal I is a subset of the annihilator of M, then this action is well-defined. Considered as an R/AnnR(M)-module, M is automatically a faithful module.
For commutative rings
Throughout this section, let R be a commutative ring and M a finitely generated R-module.
Relation to support
The support of a module is defined as

$$\operatorname{Supp} M = \{ \mathfrak{p} \in \operatorname{Spec} R \mid M_{\mathfrak{p}} \neq 0 \}.$$

Then, when the module is finitely generated, there is the relation

$$\operatorname{Supp} M = V(\operatorname{Ann}_R(M)),$$

where $V(\cdot)$ denotes the set of prime ideals containing the given subset.
Short exact sequences
Given a short exact sequence of modules

$$0 \to M' \to M \to M'' \to 0,$$

the support property

$$\operatorname{Supp} M = \operatorname{Supp} M' \cup \operatorname{Supp} M'',$$

together with the relation with the annihilator, implies

$$V(\operatorname{Ann}_R(M)) = V(\operatorname{Ann}_R(M')) \cup V(\operatorname{Ann}_R(M'')).$$

More specifically, the relations

$$\operatorname{Ann}_R(M') \cap \operatorname{Ann}_R(M'') \supseteq \operatorname{Ann}_R(M) \supseteq \operatorname{Ann}_R(M') \cdot \operatorname{Ann}_R(M'')$$

hold. If the sequence splits, the inequality on the left is always an equality. This holds for arbitrary direct sums of modules, as

$$\operatorname{Ann}_R\Big( \bigoplus_{i \in I} M_i \Big) = \bigcap_{i \in I} \operatorname{Ann}_R(M_i).$$
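A worked instance, added here for illustration, showing that the left-hand inclusion can be strict when the sequence does not split: for the non-split short exact sequence of $\mathbb{Z}$-modules

$$0 \to \mathbb{Z}/2 \xrightarrow{\times 2} \mathbb{Z}/4 \to \mathbb{Z}/2 \to 0,$$

one has

$$\operatorname{Ann}(\mathbb{Z}/2) \cap \operatorname{Ann}(\mathbb{Z}/2) = (2) \supsetneq (4) = \operatorname{Ann}(\mathbb{Z}/4) = \operatorname{Ann}(\mathbb{Z}/2) \cdot \operatorname{Ann}(\mathbb{Z}/2).$$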
Quotient modules and annihilators
Given an ideal I ⊆ R and a finitely generated module M, there is the relation

$$\operatorname{Supp}(M/IM) = \operatorname{Supp} M \cap V(I)$$

on the support. Using the relation to support, this gives the relation with the annihilator

$$V(\operatorname{Ann}_R(M/IM)) = V(\operatorname{Ann}_R(M) + I).$$
Examples
Over the integers
Over $\mathbb{Z}$, any finitely generated module is completely classified as the direct sum of its free part with its torsion part by the fundamental theorem of finitely generated abelian groups. Then the annihilator of a finitely generated module is non-trivial only if it is entirely torsion. This is because

$$\operatorname{Ann}_{\mathbb{Z}}(\mathbb{Z}^{\oplus k} \oplus T) = 0 \quad \text{for } k \geq 1,$$

since the only element killing each copy of $\mathbb{Z}$ is $0$. For example, the annihilator of $\mathbb{Z}/2 \oplus \mathbb{Z}/3$ is

$$\operatorname{Ann}_{\mathbb{Z}}(\mathbb{Z}/2 \oplus \mathbb{Z}/3) = (6),$$

the ideal generated by $6$. In fact the annihilator of a torsion module

$$M \cong \bigoplus_{i=1}^{n} (\mathbb{Z}/a_i)^{\oplus k_i}$$

is the ideal generated by the least common multiple of the moduli, $(\operatorname{lcm}(a_1, \ldots, a_n))$. This shows that annihilators can easily be classified over the integers.
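As a quick worked check of the lcm formula (added for illustration):

$$\operatorname{Ann}_{\mathbb{Z}}(\mathbb{Z}/4 \oplus \mathbb{Z}/6) = (4) \cap (6) = (\operatorname{lcm}(4, 6)) = (12).$$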
Over a commutative ring R
There is a similar computation that can be done for any finitely presented module over a commutative ring R. The definition of finite presentedness of M implies there exists an exact sequence, called a presentation, given by

$$R^{\oplus l} \xrightarrow{\ \phi\ } R^{\oplus k} \to M \to 0,$$

where $\phi$ is in $\operatorname{Mat}_{k,l}(R)$. Writing $\phi$ explicitly as a matrix gives it as

$$\phi = \begin{bmatrix} \phi_{1,1} & \cdots & \phi_{1,l} \\ \vdots & & \vdots \\ \phi_{k,1} & \cdots & \phi_{k,l} \end{bmatrix};$$

hence $M$ has the direct sum decomposition

$$M = \bigoplus_{i=1}^{k} \frac{R}{(\phi_{i,1}, \ldots, \phi_{i,l})}.$$

If each of these ideals is written as

$$I_i = (\phi_{i,1}, \ldots, \phi_{i,l}),$$

then the ideal $I$ given by

$$I = \bigcap_{i=1}^{k} I_i$$

presents the annihilator.
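A small worked instance of this construction, added for illustration: over $R = \mathbb{Z}$ with presentation matrix $\phi = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$, the cokernel is $M = \mathbb{Z}/2 \oplus \mathbb{Z}/3$, the row ideals are $I_1 = (2)$ and $I_2 = (3)$, and $I = (2) \cap (3) = (6)$, matching the integer example above.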
Over k[x,y]
Over the commutative ring $k[x,y]$ for a field $k$, the annihilator of a module of the form

$$M = \frac{k[x,y]}{(f)} \oplus \frac{k[x,y]}{(g)}$$

is given by the ideal

$$\operatorname{Ann}_{k[x,y]}(M) = (f) \cap (g),$$

which, when $f$ and $g$ have no common factor, is the principal ideal $(fg)$.
Chain conditions on annihilator ideals
The lattice of ideals of the form ℓ.AnnR(S), where S is a subset of R, is a complete lattice when partially ordered by inclusion. There is interest in studying rings for which this lattice (or its right counterpart) satisfies the ascending chain condition or descending chain condition.

Denote the lattice of left annihilator ideals of R as $\mathcal{LA}$ and the lattice of right annihilator ideals of R as $\mathcal{RA}$. It is known that $\mathcal{LA}$ satisfies the ascending chain condition if and only if $\mathcal{RA}$ satisfies the descending chain condition, and symmetrically $\mathcal{RA}$ satisfies the ascending chain condition if and only if $\mathcal{LA}$ satisfies the descending chain condition. If either lattice has either of these chain conditions, then R has no infinite pairwise orthogonal sets of idempotents.

If R is a ring for which $\mathcal{LA}$ satisfies the A.C.C. and R has finite uniform dimension as a left R-module, then R is called a left Goldie ring.
Category-theoretic description for commutative rings
When R is commutative and M is an R-module, we may describe AnnR(M) as the kernel of the action map $R \to \operatorname{End}_R(M)$ determined by the adjunct map of the identity $M \to M$ along the Hom-tensor adjunction.
More generally, given a bilinear map of modules $F \colon M \times N \to P$, the annihilator of a subset $S \subseteq M$ is the set of all elements in $N$ that annihilate $S$:

$$\operatorname{Ann}(S) = \{ n \in N \mid F(s, n) = 0 \text{ for all } s \in S \}.$$

Conversely, given $T \subseteq N$, one can define an annihilator as a subset of $M$.

The annihilator gives a Galois connection between subsets of $M$ and $N$, and the associated closure operator is stronger than the span.
In particular, annihilators are submodules.
An important special case is in the presence of a nondegenerate form on a vector space, particularly an inner product: then the annihilator associated to the map $V \times V \to k$ is called the orthogonal complement.
Relations to other properties of rings
Given a module M over a Noetherian commutative ring R, a prime ideal of R that is an annihilator of a nonzero element of M is called an associated prime of M.
Annihilators are used to define left Rickart rings and Baer rings.
The set of (left) zero divisors $D_S$ of $S$ can be written as

$$D_S = \bigcup_{s \in S,\, s \neq 0} \operatorname{Ann}(s).$$

(Here we allow zero to be a zero divisor.)
In particular DR is the set of (left) zero divisors of R taking S = R and R acting on itself as a left R-module.
When R is commutative and Noetherian, the set $D_R$ is precisely equal to the union of the associated primes of the R-module R.
See also
Faltings' annihilator theorem
Socle
Support of a module
Notes
References
Israel Nathan Herstein (1968) Noncommutative Rings, Carus Mathematical Monographs #15, Mathematical Association of America, page 3.
Richard S. Pierce. Associative algebras. Graduate Texts in Mathematics, Vol. 88, Springer-Verlag, 1982,
Ideals (ring theory)
Module theory
Ring theory | Annihilator (ring theory) | [
"Mathematics"
] | 1,632 | [
"Fields of abstract algebra",
"Ring theory",
"Module theory"
] |
328,721 | https://en.wikipedia.org/wiki/Tierra%20%28computer%20simulation%29 | Tierra is a computer simulation developed by ecologist Thomas S. Ray in the early 1990s in which computer programs compete for time (central processing unit (CPU) time) and space (access to main memory). In this context, the computer programs in Tierra are considered to be evolvable and can mutate, self-replicate and recombine. Tierra's virtual machine is written in C. It operates on a custom instruction set designed to facilitate code changes and reordering, including features such as jump to template (as opposed to the relative or absolute jumps common to most instruction sets).
Simulations
The basic Tierra model has been used to experimentally explore in silico the basic processes of evolutionary and ecological dynamics. Processes such as the dynamics of punctuated equilibrium, host-parasite co-evolution and density-dependent natural selection are amenable to investigation within the Tierra framework. A notable difference between Tierra and more conventional models of evolutionary computation, such as genetic algorithms, is that there is no explicit, or exogenous fitness function built into the model. Often in such models there is the notion of a function being "optimized"; in the case of Tierra, the fitness function is endogenous: there is simply survival and death.
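The contrast with an explicit fitness function can be made concrete with a deliberately toy sketch in Python (this is not Ray's instruction set, scheduler, or reaper queue, just the bare endogenous-fitness idea; all names are invented):

import random

def replicate(genome, mut_rate=0.01):
    # Copy a genome, mutating each position with small probability.
    alphabet = "abcdefghijklmnopqrstuvwxyz-"
    return [random.choice(alphabet) if random.random() < mut_rate else g
            for g in genome]

soup = [list("self-replicator")]   # shared memory holding every "creature"
CAPACITY = 200                     # memory, like CPU time, is finite

for _ in range(10_000):
    parent = random.choice(soup)    # a creature gets a slice of CPU time
    soup.append(replicate(parent))  # it copies itself, possibly with errors
    if len(soup) > CAPACITY:
        soup.pop(0)                 # the "reaper" reclaims space from the oldest

# No fitness function is ever evaluated: a lineage persists only if it keeps
# copying itself into the soup faster than the reaper removes it.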
According to Thomas S. Ray and others, this may allow for more "open-ended" evolution, in which the dynamics of the feedback between evolutionary and ecological processes can itself change over time (see evolvability), although this claim has not been realized – like other digital evolution systems, it eventually reaches a point where novelty ceases to be created, and the system at large begins either looping or ceases to 'evolve'. The issue of how true open-ended evolution can be implemented in an artificial system is still an open question in the field of artificial life.
Mark Bedau and Norman Packard developed a statistical method of classifying evolutionary systems and in 1997, Bedau et al. applied these statistics to Evita, an artificial life model similar to Tierra and Avida, but with limited organism interaction and no parasitism, and concluded that Tierra-like systems do not exhibit the open-ended evolutionary signatures of naturally evolving systems.
Russell K. Standish has measured the informational complexity of Tierran 'organisms', and has similarly not observed complexity growth in Tierran evolution.
Tierra is an abstract model, but any quantitative model is still subject to the same validation and verification techniques applied to more traditional mathematical models, and as such, has no special status. The creation of more detailed models in which more realistic dynamics of biological systems and organisms are incorporated is now an active research field (see systems biology).
See also
Avida
Digital organism
Digital organism simulator
Evolutionary computation
Fitness landscape
References
Further reading
Bentley, Peter J. 2001, "Digital Biology: How Nature is Transforming Our Technology and Our Lives", Simon & Schuster, New York, NY. Previously published in Great Britain in 2001 by Headline Book Publishing.
Ray, T. S. 1991, "Evolution and optimization of digital organisms", in Billingsley K.R. et al. (eds), Scientific Excellence in Supercomputing: The IBM 1990 Contest Prize Papers, Athens, GA, 30602: The Baldwin Press, The University of Georgia. Publication date: December 1991, pp. 489–531.
Casti, John L. (1997). Would-Be-Worlds. John Wiley & Sons, Inc. New York
External links
Tierra home page
Artificial life
Artificial life models
Digital organisms | Tierra (computer simulation) | [
"Biology"
] | 729 | [
"Digital organisms",
"Artificial life models",
"Biological models"
] |
328,763 | https://en.wikipedia.org/wiki/Emblem | An emblem is an abstract or representational pictorial image that represents a concept, like a moral truth, or an allegory, or a person, like a monarch or saint.
Emblems vs. symbols
Although the words emblem and symbol are often used interchangeably, an emblem is a pattern that is used to represent an idea or an individual. An emblem develops in concrete, visual terms some abstraction: a deity, a tribe or nation, or a virtue or vice.
An emblem may be worn or otherwise used as an identifying badge or patch. For example, in America, police officers' badges refer to their personal metal emblem whereas their woven emblems on uniforms identify members of a particular unit. A real or metal cockle shell, the emblem of James the Great, sewn onto the hat or clothes, identified a medieval pilgrim to his shrine at Santiago de Compostela. In the Middle Ages, many saints were given emblems, which served to identify them in paintings and other images: St. Catherine of Alexandria had a wheel, or a sword, St. Anthony the Abbot, a pig and a small bell. These are also called attributes, especially when shown carried by or close to the saint in art. Monarchs and other grand persons increasingly adopted personal devices or emblems that were distinct from their family heraldry. The most famous include Louis XIV of France's sun, the salamander of Francis I of France, the boar of Richard III of England and the armillary sphere of Manuel I of Portugal. In the fifteenth and sixteenth century, there was a fashion, started in Italy, for making large medals with a portrait head on the obverse and the emblem on the reverse; these would be given to friends and as diplomatic gifts. Pisanello produced many of the earliest and finest of these.
A symbol, on the other hand, substitutes one thing for another, in a more concrete fashion:
The Christian cross is a symbol of the crucifixion of Jesus; it is an emblem of sacrifice.
The Red Cross is one of three symbols representing the International Red Cross. A red cross on a white background is the emblem of humanitarian spirit.
The crescent shape is a symbol of the moon; it is an emblem of Islam.
The skull and crossbones is a symbol identifying a poison. The skull is an emblem of the transitory nature of human life.
Other terminology
A totem is specifically an animal emblem that expresses the spirit of a clan. Emblems in heraldry are known as charges. The lion passant serves as the emblem of England, the lion rampant as the emblem of Scotland.
An icon consists of an image (originally a religious image), that has become standardized by convention. A logo is an impersonal, secular icon, usually of a corporate entity.
Emblems in history
Since the 15th century, the terms of emblem (emblema; from Greek ἔμβλημα, meaning "embossed ornament") and emblematura belong to the termini technici of architecture. They mean an iconic painted, drawn, or sculptural representation of a concept affixed to houses and belong—like the inscriptions—to the architectural ornaments (ornamenta). Since the publication of De re aedificatoria (1452) by Leon Battista Alberti (1404–1472), patterned after De architectura by the Roman architect and engineer Vitruvius, emblema are related to Egyptian hieroglyphics and are considered as being the lost universal language. Therefore, the emblems belong to the Renaissance knowledge of antiquity which comprises not only Greek and Roman antiquity but also Egyptian antiquity, as proven by the numerous obelisks built in 16th and 17th century Rome.
Evidence of the use of emblems in pre-Columbian America has also been found, such as those used in Mayan city states, kingdoms, and even empires such as the Aztec or Inca. The use of these in the American context does not differ much from the contexts of other regions of the world, being even the equivalent of the coats of arms of their respective territorial entities.
The 1531 publication in Augsburg of the first emblem book, the Emblemata of the Italian jurist Andrea Alciato launched a fascination with emblems that lasted two centuries and touched most of the countries of western Europe. "Emblem" in this sense refers to a didactic or moralizing combination of picture and text intended to draw the reader into a self-reflective examination of their own life. Complicated associations of emblems could transmit information to the culturally-informed viewer, a characteristic of the 16th-century artistic movement called Mannerism.
A popular collection of emblems, which ran to many editions, was presented by Francis Quarles in 1635. Each of the emblems consisted of a paraphrase from a passage of Scripture, expressed in ornate and metaphorical language, followed by passages from the Christian Fathers, and concluding with an epigram of four lines. These were accompanied by an emblem that presented the symbols displayed in the accompanying passage.
Emblems in speech
Emblems are certain gestures which have a specific meaning attached to them. These meanings are usually associated with the culture they are established in. Using emblems creates a way for humans to communicate with one another in a non-verbal way. An individual waving their hand at a friend, for example, would communicate "hello" without having to verbally say anything.
Emblems vs. sign language
Although sign language uses hand gestures to communicate words in a non-verbal way, it should not be confused with emblems. Sign language contains linguistic properties, similar to those used in verbal languages, and is used to communicate entire conversations. Linguistic properties are verbs, nouns, pronouns, adverbs, adjectives, etc.. In contrast with sign language, emblems are a non-linguistic form of communication. Emblems are single gestures which are meant to get a short non-verbal message to another individual.
Emblems in culture
Emblems are associated with the culture they are established in and are subjective to that culture. For example, the sign made by forming a circle with the thumb and forefinger is used in America to communicate "OK" in a non-verbal way, in Japan to mean "money", and in some southern European countries to mean something sexual. Furthermore, the thumbs up sign in America means "good job", but in some parts of the Middle East the thumbs up sign means something highly offensive.
See also
Coat of arms
Crest
Emblem book
Logo
Meme
Mission patch
National emblem
Saint symbology
Seal (emblem)
Symbol
Badge
Icon
References
Further reading
Emblematica Online. University of Illinois at Urbana Champaign Libraries. 1,388 facsimiles of emblem books.
Moseley, Charles, A Century of Emblems: An Introduction to the Renaissance Emblem (Aldershot: Scolar Press, 1989)
Notes
External links
Camerarius, Joachim (1605) Symbolorum & emblematum - digital facsimile of book of emblems, from the website of the Linda Hall Library | Emblem | [
"Mathematics"
] | 1,417 | [
"Symbols"
] |
328,784 | https://en.wikipedia.org/wiki/Computer%20scientist | A computer scientist is a scientist who specializes in the academic study of computer science.
Computer scientists typically work on the theoretical side of computation. Although computer scientists can also focus their work and research on specific areas (such as algorithm and data structure development and design, software engineering, information theory, database theory, theoretical computer science, numerical analysis, programming language theory, compiler, computer graphics, computer vision, robotics, computer architecture, operating system), their foundation is the theoretical study of computing from which these other fields derive.
A primary goal of computer scientists is to develop or validate models, often mathematical, to describe the properties of computational systems (processors, programs, computers interacting with people, computers interacting with other computers, etc.) with an overall objective of discovering designs that yield useful benefits (faster, smaller, cheaper, more precise, etc.).
Education
Most computer scientists are required to possess a PhD, M.S., or bachelor's degree in computer science or a similar field such as Information and Computer Science (CIS), or a closely related discipline such as mathematics or physics.
Areas of specialization
Theoretical computer science – including data structures and algorithms, theory of computation, information theory and coding theory, programming language theory, and formal methods
Computer systems – including computer architecture and computer engineering, computer performance analysis, concurrency, and distributed computing, computer networks, computer security and cryptography, and databases.
Computer applications – including computer graphics and visualization, human–computer interaction, scientific computing, and artificial intelligence.
Software engineering – the application of engineering to software development in a systematic method
Employment
Computer scientists are often hired by software publishing firms and scientific research and development organizations, where they develop the theories and computer models that allow new technologies to be developed. Computer scientists are also employed by educational institutions such as universities.
Computer scientists can follow more practical applications of their knowledge, doing things such as software engineering. They can also be found in the field of information technology consulting, and may be seen as a type of mathematician, given how much of the field depends on mathematics. Computer scientists employed in industry may eventually advance into managerial or project leadership positions.
Employment prospects for computer scientists are said to be excellent. Such prospects seem to be attributed, in part, to very rapid growth in computer systems design and related services industry, and the software publishing industry, which are projected to be among the fastest growing industries in the U.S. economy.
See also
Computational scientist
Software engineering
List of computer scientists
List of computing people
List of pioneers in computer science
References
Computer occupations
Mathematical science occupations | Computer scientist | [
"Technology"
] | 513 | [
"Computer science",
"Computer occupations",
"Computer scientists"
] |
328,815 | https://en.wikipedia.org/wiki/Theistic%20evolution | Theistic evolution (also known as theistic evolutionism or God-guided evolution), alternatively called evolutionary creationism, is a view that God acts and creates through laws of nature. Here, God is taken as the primary cause while natural causes are secondary, positing that the concept of God and religious beliefs are compatible with the findings of modern science, including evolution. Theistic evolution is not in itself a scientific theory, but includes a range of views about how science relates to religious beliefs and the extent to which God intervenes. It rejects the strict creationist doctrines of special creation, but can include beliefs such as creation of the human soul. Modern theistic evolution accepts the general scientific consensus on the age of the Earth, the age of the universe, the Big Bang, the origin of the Solar System, the origin of life, and evolution.
Supporters of theistic evolution generally attempt to harmonize evolutionary thought with belief in God and reject the conflict between religion and science; they hold that religious beliefs and scientific theories do not need to contradict each other. Diversity exists regarding how the two concepts of faith and science fit together.
Definition
Francis Collins describes theistic evolution as the position that "evolution is real, but that it was set in motion by God", and characterizes it as accepting "that evolution occurred as biologists describe it, but under the direction of God". He lists six general premises on which different versions of theistic evolution typically rest. They include:
The prevailing cosmological model, with the universe coming into being about 13.8 billion years ago;
The fine-tuned universe;
Evolution and natural selection;
No special supernatural intervention is involved once evolution got under way;
Humans are a result of these evolutionary processes; and
Despite all these, humans are unique. The concern for the Moral Law (the knowledge of right and wrong) and the continuous search for God among all human cultures defy evolutionary explanations and point to our spiritual nature.
The executive director of the National Center for Science Education in the United States of America, Eugenie Scott, has used the term to refer to the part of the overall spectrum of beliefs about creation and evolution holding the theological view that God creates through evolution. It covers a wide range of beliefs about the extent of any intervention by God, with some approaching deism in rejecting the concepts of continued intervention or special creation, while others believe that God has directly intervened at crucial points such as the origin of humans.
In the Catholic version of theistic evolution, human evolution may have occurred, but God must create the human soul, and the creation story in the book of Genesis should be read metaphorically.
Some Muslims believe that only humans were exceptions to common ancestry (human exceptionalism), while some give an allegorical reading of Adam's creation (Non-exceptionalism). Some Muslims believe that only Adam and Hawa (Eve) were special creations and they alongside their earliest descendants were exceptions to common ancestry, but the later descendants (including modern humans) share common ancestry with the rest of life on Earth because there were human-like beings on Earth before Adam's arrival who came through evolution. This belief is known as "Adamic exceptionalism".
When evolutionary science developed, so did different types of theistic evolution. Creationists Henry M. Morris and John D. Morris have listed different terms which were used to describe different positions from the 1890s to the 1920s: "Orthogenesis" (goal-directed evolution), "nomogenesis" (evolution according to fixed law), "emergent evolution", "creative evolution", and others.
The Jesuit paleontologist Pierre Teilhard de Chardin (1881–1955) was an influential proponent of God-directed evolution or "orthogenesis", in which man will eventually evolve to the "omega point" of union with the Creator.
Alternative terms
Others see "evolutionary creation" (EC, also referred to by some observers as "evolutionary creationism") as the belief that God, as Creator, uses evolution to bring about his plan. Eugenie Scott states in Evolution Vs. Creationism that it is a type of evolution rather than creationism, despite its name. "From a scientific point of view, evolutionary creationism is hardly distinguishable from Theistic Evolution ... [the differences] lie not in science but in theology." Those who hold to evolutionary creationism argue that God is involved to a greater extent than the theistic evolutionist believes.
Canadian biologist Denis Lamoureux published a 2003 article and a 2008 theological book, both aimed at Christians who do not believe in evolution (including young Earth creationists), and at those looking to reconcile their Christian faith with evolutionary science. His main argument was that Genesis presents the "science and history of the day" as "incidental vessels" to convey spiritual truths. Lamoureux rewrote his article as a 2009 journal paper, incorporating excerpts from his books, in which he noted the similarities of his views to theistic evolution, but objected to that term as making evolution the focus rather than creation. He also distanced his beliefs from the deistic or more liberal beliefs included in theistic evolution. He also argued that although referring to the same view, the word arrangement in the term "theistic evolution" places "the process of evolution as the primary term, and makes the Creator secondary as merely a qualifying adjective".
Divine intervention is seen at critical intervals in history in a way consistent with scientific explanations of speciation, with similarities to the ideas of progressive creationism that God created "kinds" of animals sequentially.
Regarding the embracing of Darwinian evolution, historian Ronald Numbers describes the position of the late 19th-century geologist George Frederick Wright as "Christian Darwinism".
Jacob Klapwijk and Howard J. Van Till have, while accepting both theistic creation and evolution, rejected the term "theistic evolution".
In 2006, American geneticist and Director of the National Institute of Health, Francis Collins, published The Language of God. He stated that faith and science are compatible and suggested the word "BioLogos" (Word of Life) to describe theistic evolution. Collins later laid out the idea that God created all things, but that evolution is the best scientific explanation for the diversity of all life on Earth. The name BioLogos instead became the name of the organization Collins founded years later. This organization now prefers the term "evolutionary creation" to describe their take on theistic evolution.
Historical development
Historians of science (and authors of pre-evolutionary ideas) have pointed out that scientists had considered the concept of biological change well before Darwin.
In the 17th century, the English Nonconformist/Anglican priest and botanist John Ray, in his book The Wisdom of God Manifested in the Works of Creation (1692), had wondered "why such different species should not only mingle together, but also generate an animal, and yet that that hybridous production should not again generate, and so a new race be carried on".
18th-century scientist Carl Linnaeus (1707–1778) published Systema Naturae (1735), a book in which he considered that new varieties of plants could arise through hybridization, but only under certain limits fixed by God. Linnaeus had initially embraced the Aristotelian idea of immutability of species (the idea that species never change), but later in his life he started to challenge it. Yet, as a Christian, he still defended "special creation", the belief that God created "every living creature" at the beginning, as read in Genesis, with the peculiarity of a set of original species from which all the present species have descended.
Linnaeus attributed this active process of biological change to God himself.
Jens Christian Clausen (1967), refers to Linnaeus' theory as a "forgotten evolutionary theory [that] antedates Darwin's by nearly 100 years", and reports that he was a pioneer in doing experiments about hybridization.
Later observations by Protestant botanists Carl Friedrich von Gärtner (1772–1850) and Joseph Gottlieb Kölreuter (1733–1806) denied the immutability of species, which the Bible never teaches. Kölreuter used the term "transmutation of species" to refer to species which have experienced biological changes through hybridization, although they both were inclined to believe that hybrids would revert to the parental forms by a general law of reversion, and therefore, would not be responsible for the introduction of new species. Later, in a number of experiments carried out between 1856 and 1863, the Augustinian friar Gregor Mendel (1822–1884), aligning himself with the "new doctrine of special creation" proposed by Linnaeus, concluded that new species of plants could indeed arise, although limitedly and retaining their own stability.
Georges Cuvier's analysis of fossils and discovery of extinction disrupted static views of nature in the early 19th century, confirming geology as showing a historical sequence of life. British natural theology, which sought examples of adaptation to show design by a benevolent Creator, adopted catastrophism to show earlier organisms being replaced in a series of creations by new organisms better adapted to a changed environment. Charles Lyell (1797–1875) also saw adaptation to changing environments as a sign of a benevolent Creator, but his uniformitarianism envisaged continuing extinctions, leaving unanswered the problem of providing replacements. As seen in correspondence between Lyell and John Herschel, scientists were looking for creation by laws rather than by miraculous interventions. In continental Europe, the idealism of philosophers including Lorenz Oken (1779–1851) developed a Naturphilosophie in which patterns of development from archetypes were a purposeful divine plan aimed at forming humanity. These scientists rejected transmutation of species as materialist radicalism threatening the established hierarchies of society. The idealist Louis Agassiz (1807–1873), a persistent opponent of transmutation, saw mankind as the goal of a sequence of creations, but his concepts were the first to be adapted into a scheme of theistic evolutionism, when in Vestiges of the Natural History of Creation published in 1844, its anonymous author (Robert Chambers) set out goal-centred progressive development as the Creator's divine plan, programmed to unfold without direct intervention or miracles. The book became a best-seller and popularised the idea of transmutation in a designed "law of progression". The scientific establishment strongly attacked Vestiges at the time, but later more sophisticated theistic evolutionists followed the same approach of looking for patterns of development as evidence of design.
The comparative anatomist Richard Owen (1804–1892), a prominent figure in the Victorian era scientific establishment, opposed transmutation throughout his life. When formulating homology he adapted idealist philosophy to reconcile natural theology with development, unifying nature as divergence from an underlying form in a process demonstrating design. His conclusion to his On the Nature of Limbs of 1849 suggested that divine laws could have controlled the development of life, but he did not expand this idea after objections from his conservative patrons. Others supported the idea of development by law, including the botanist Hewett Watson (1804–1881) and the Reverend Baden Powell (1796–1860), who wrote in 1855 that such laws better illustrated the powers of the Creator. In 1858 Owen in his speech as President of the British Association said that in "continuous operation of Creative power" through geological time, new species of animals appeared in a "successive and continuous fashion" through birth from their antecedents by a Creative law rather than through slow transmutation.
On the Origin of Species
When Charles Darwin published On the Origin of Species in 1859, many liberal Christians accepted evolution provided they could reconcile it with divine design. The clergymen Charles Kingsley (1819–1875) and Frederick Temple (1821–1902), both conservative Christians in the Church of England, promoted a theology of creation as an indirect process controlled by divine laws. Some strict Calvinists welcomed the idea of natural selection, as it did not entail inevitable progress and humanity could be seen as a fallen race requiring salvation. The Anglo-Catholic Aubrey Moore (1848–1890) also accepted the theory of natural selection, incorporating it into his Christian beliefs as merely the way God worked. Darwin's friend Asa Gray (1810–1888) defended natural selection as compatible with design.
Darwin himself, in his second edition of the Origin (January 1860), had added to the famous closing sentence of his conclusion the words "by the Creator", so that life's several powers had been "originally breathed by the Creator into a few forms or into one".
Within a decade most scientists had started espousing evolution, but from the outset some expressed opposition to the concept of natural selection and searched for a more purposeful mechanism. In 1860 Richard Owen attacked Darwin's Origin of Species in an anonymous review while praising "Professor Owen" for "the establishment of the axiom of the continuous operation of the ordained becoming of living things". In December 1859 Darwin had been disappointed to hear that Sir John Herschel apparently dismissed the book as "the law of higgledy-piggledy", and in 1861 Herschel wrote of evolution that "[a]n intelligence, guided by a purpose, must be continually in action to bias the direction of the steps of change—to regulate their amount—to limit their divergence—and to continue them in a definite course". He added "On the other hand, we do not mean to deny that such intelligence may act according to law (that is to say, on a preconceived and definite plan)". The scientist Sir David Brewster (1781–1868), a member of the Free Church of Scotland, wrote an article called "The Facts and Fancies of Mr. Darwin" (1862) in which he rejected many Darwinian ideas, such as those concerning vestigial organs or questioning God's perfection in his work. Brewster concluded that Darwin's book contained both "much valuable knowledge and much wild speculation", although accepting that "every part of the human frame had been fashioned by the Divine hand and exhibited the most marvellous and beneficent adaptions for the use of men".
In the 1860s theistic evolutionism became a popular compromise in science and gained widespread support from the general public. Between 1866 and 1868 Owen published a theory of derivation, proposing that species had an innate tendency to change in ways that resulted in variety and beauty showing creative purpose. Both Owen and Mivart (1827–1900) insisted that natural selection could not explain patterns and variation, which they saw as resulting from divine purpose. In 1867 the Duke of Argyll published The Reign of Law, which explained beauty in plumage without any adaptive benefit as design generated by the Creator's laws of nature for the delight of humans. Argyll attempted to reconcile evolution with design by suggesting that the laws of variation prepared rudimentary organs for a future need.
Cardinal John Henry Newman wrote in 1868: "Mr Darwin's theory need not then to be atheistical, be it true or not; it may simply be suggesting a larger idea of Divine Prescience and Skill ... and I do not [see] that 'the accidental evolution of organic beings' is inconsistent with divine design—It is accidental to us, not to God."
In 1871 Darwin published his own research on human ancestry in The Descent of Man, concluding that humans "descended from a hairy quadruped, furnished with a tail and pointed ears", which would be classified amongst the Quadrumana along with monkeys, and in turn descended "through a long line of diversified forms" going back to something like the larvae of sea squirts. Critics promptly complained that this "degrading" image "tears the crown from our heads", but there is little evidence that it led to loss of faith. Among the few who did record the impact of Darwin's writings, the naturalist Joseph LeConte struggled with "distress and doubt" following the death of his daughter in 1861, before enthusiastically saying in the late 1870s there was "not a single philosophical question connected with our highest and dearest religious and spiritual interests that is fundamentally affected, or even put in any new light, by the theory of evolution", and in the late 1880s embracing the view that "evolution is entirely consistent with a rational theism". Similarly, George Frederick Wright (1838–1921) responded to Darwin's Origin of Species and Charles Lyell's 1863 Geological Evidences of the Antiquity of Man by turning to Asa Gray's belief that God had set the rules at the start and only intervened on rare occasions, as a way to harmonise evolution with theology. The idea of evolution did not seriously shake Wright's faith, but he later suffered a crisis when confronted with historical criticism of the Bible.
Acceptance
According to Eugenie Scott, "In one form or another, Theistic Evolutionism is the view of creation taught at the majority of mainline Protestant seminaries"; and while the Catholic Church has no official position, it supports belief in theistic evolution. Studies show that acceptance of evolution is lower in the United States than in Europe or Japan; among 34 countries sampled, only Turkey had a lower rate of acceptance than the United States.
Theistic evolution has been described as arguing for compatibility between science and religion, and as such it is viewed with disdain both by some atheists and many young Earth creationists.
Hominization
Hominization, in both science and religion, involves the process or the purpose of becoming human. The process and means by which hominization occurs is a key problem in theistic evolutionary thought, especially in the Abrahamic religions, which often hold as a core belief that the souls of animals and humans differ in some capacity. Thomas Aquinas taught that animals did not have immortal souls, but that humans did. Many versions of theistic evolution insist on a special creation consisting of at least the addition of a soul just for the human species.
Scientific accounts of the origin of the universe, the origin of life, and subsequent evolution of pre-human life forms may not cause any difficulty but the need to reconcile religious and scientific views of hominization and to account for the addition of a soul to humans remains a problem. Theistic evolution typically postulates a point at which a population of hominids who had (or may have) evolved by a process of natural evolution acquired souls and thus (with their descendants) became fully human in theological terms. This group might be restricted to Adam and Eve, or indeed to Mitochondrial Eve, although versions of the theory allow for larger populations. The point at which such an event occurred should essentially be the same as in paleoanthropology and archeology, but theological discussion of the matter tends to concentrate on the theoretical. The term "special transformism" is sometimes used to refer to theories that there was a divine intervention of some sort, achieving hominization.
Several 19th-century theologians and evolutionists attempted specific solutions, including the Catholics John Augustine Zahm and St. George Jackson Mivart, but tended to come under attack from both the theological and biological camps, and 20th-century thinking tended to avoid proposing precise mechanisms.
Islamic views
Theological views and stances
The Islamic scholar, science lecturer and theologian Shoaib Ahmed Malik divides Muslim positions on evolutionary theory into four views:
Non-evolutionism: The rejection of evolutionary theory and all of its elements, including common ancestry, macro-evolution, etc. Many of its proponents, however, still accept micro-evolution.
Human exceptionalism: The acceptance of the entirety of evolutionary theory except for human evolution. More specifically, it rejects the idea that modern humans share common ancestry with other life-forms on Earth. It may still accept that humans evolved after Adam's creation and that various species of humans existed over time.
Adamic exceptionalism: The acceptance of evolution, only making an exception for Adam and Hawa (Eve). It asserts that Adam was the first theologically accurate human. However, taxonomically accurate humans or human-like beings already existed on Earth before their arrival. Thus, it accepts the belief that modern humans share common ancestry with other life-forms on Earth, and that our lineage can be traced back to the origin of life.
Non-exceptionalism: The acceptance of evolution without any exceptions for miraculous creation.
Adamic exceptionalism is the current leading view, as it is considered to be compatible with both science and Islamic theology. Adamic exceptionalism asserts that Adam and Eve were created by Allah through miracles as the first humans, and that the rest of humanity descends from them. At the same time, this view asserts that modern humans emerged through evolution and have a lineage leading back to the origin of life (FUCA, the first universal common ancestor), and that evolution occurred just as theorized (e.g. Australopithecus afarensis to Homo habilis, H. habilis to H. ergaster, H. ergaster to H. heidelbergensis, H. heidelbergensis to H. sapiens, etc.). Adamic exceptionalists believe that Allah created human-like beings on Earth through evolution before Adam was brought into the world; however, these human-like beings do not fit the theological description of "humans". From a theological perspective, they are not true humans, but they are biologically human, since they fit the taxonomical description. Adam is still considered to be the first human from a theological perspective. Adamic exceptionalism also asserts that the early descendants of Adam mated or hybridized with these "human-like beings", yielding one lineage that leads to Adam and another that leads to FUCA. This belief is considered the most viable because it synthesizes the miraculous creation of Adam and Eve with Muslim theology. At the same time, it is considered compatible with evolutionary science: any questions regarding Adam and his miraculous creation, the lineage that leads to him, or whether this lineage mated with other "human-like" beings are irrelevant to science and are not obstacles to any established scientific theories.
David Solomon Jalajel, an Islamic author, advances an Adamic exceptionalist view of evolution that draws on the theological principle of tawaqquf: making no argument for or against a matter on which scripture makes no declaration. Applying tawaqquf, Jalajel holds that Adam's creation does not necessarily signal the beginning of humanity, as the Quran makes no declaration as to whether or not human beings were on Earth before Adam's descent. It is therefore possible, consistently with the Quran, that humans existed before Adam's appearance on Earth, and that an intermingling of Adam's descendants and other humans may or may not have occurred. Thus the existence of Adam is a miracle, since the Quran directly states it to be, but the Quran does not assert that no humans who could have come about as a result of evolution existed at the time of Adam's appearance on Earth. This viewpoint stands in contrast to creationism and human exceptionalism, ultimately holding that evolution can be viewed without conflict with Islam and that Muslims can either accept or reject "human evolution on its scientific merits without reference to the story of Adam".
"Human exceptionalism" is theologically compatible, but has some issues with science due to the rejection of common ancestry of modern humans. "Non-exceptionalism" is scientifically compatible, but it's theological validity is a matter of debate.
Proponents of human exceptionalism include Yasir Qadhi and Nuh Ha Mim Keller. Proponents of Adamic exceptionalism include David Solomon Jalajel. Proponents of non-exceptionalism include Rana Dajani, Nidhal Guessoum, Israr Ahmed, and Caner Taslaman.
Acceptance
The theory of evolution remains controversial in many contemporary Muslim societies owing to negative social views and misconceptions, such as the notion that the theory is atheistic, and to unfamiliarity with positions such as human exceptionalism and Adamic exceptionalism. Some also attribute this to the lack of well-developed scientific facilities in many (but not all) Muslim countries, particularly where there is considerable conflict and political tension. Regardless, a large majority of Muslims accept evolution in Kazakhstan (79%) and Lebanon (78%), while relatively few in Afghanistan (26%) and Iraq (27%) believe in human evolution; most other Muslim countries fall in between. Belief in theistic evolution is increasing in many Muslim countries and societies: younger generations show higher rates of acceptance, as do countries that are more developed or developing faster. Acceptance among Muslim communities in non-Muslim countries (such as in the West) is inconsistent and can be high or low depending on the specific country.
Relationship to other positions
19th-century 'theistic evolution'
The American botanist Asa Gray used the name "theistic evolution" in a now-obsolete sense for his point of view, presented in his 1876 book Essays and Reviews Pertaining to Darwinism. He argued that the deity supplies beneficial mutations to guide evolution. St George Jackson Mivart argued instead in his 1871 On the Genesis of Species that the deity, equipped with foreknowledge, sets the direction of evolution (orthogenesis) by specifying the laws that govern it, and leaves species to evolve according to the conditions they experience as time goes by. The Duke of Argyll set out similar views in his 1867 book The Reign of Law. The historian Edward J. Larson stated that the theory failed as an explanation in the minds of biologists from the late 19th century onwards as it broke the rules of methodological naturalism which they had grown to expect.
Non-theistic evolution
The major criticism of theistic evolution by non-theistic evolutionists focuses on its essential belief in a supernatural creator. Physicist Lawrence Krauss considers that, by the application of Occam's razor, sufficient explanation of the phenomena of evolution is provided by natural processes (in particular, natural selection), and the intervention or direction of a supernatural entity is not required. Evolutionary biologist Richard Dawkins considers theistic evolution a "superfluous attempt" to "smuggle God in by the back door".
Intelligent design
A number of notable proponents of theistic evolution, including Kenneth R. Miller, John Haught, George Coyne, Simon Conway Morris, Denis Alexander, Ard Louis, Darrel Falk, Alister McGrath, Francisco J. Ayala, and Francis Collins are critics of intelligent design.
Young Earth creationism
Young Earth creationists including Ken Ham prefer to criticize theistic evolution on theological grounds rather than on any scientific data, finding it hard to reconcile the nature of a loving God with the process of evolution, in particular, the existence of death and suffering before the Fall of Man. They consider that it undermines central biblical teachings by regarding the creation account as a myth, a parable, or an allegory, instead of treating it as an accurate record of historical events. They also fear that a capitulation to what they call "atheistic" naturalism will confine God to the gaps in scientific explanations, undermining biblical doctrines, such as God's incarnation through Christ.
See also
American Scientific Affiliation
The BioLogos Foundation
Day-age creationism
Deistic evolution
"Epic of evolution"
Natural theology
Orthogenesis
Old Earth creationism
Religious naturalism
Teleology in biology
Fine-tuned universe
References
Sources
Brundell, Barry, "Catholic Church Politics and Evolution Theory, 1894–1902", The British Journal for the History of Science, Vol. 34, No. 1 (Mar. 2001), pp. 81–95, Cambridge University Press on behalf of The British Society for the History of Science
Küng, Hans, The Beginning of All Things: Science and Religion, trans. John Bowden, Wm. B. Eerdmans Publishing, 2007
Further reading
Contemporary approaches
Collins, Francis; (2006) The Language of God: A Scientist Presents Evidence for Belief
Michael Dowd (2009) Thank God for Evolution: How the Marriage of Science and Religion Will Transform Your Life and Our World
Falk, Darrel; (2004) Coming to Peace with Science: Bridging the Worlds Between Faith and Biology
Miller, Kenneth R.; (1999) Finding Darwin's God: A Scientist's Search for Common Ground Between God and Evolution
Miller, Keith B.; (2003) Perspectives on an Evolving Creation
Ghinamo, Corrado; (2013) The Beautiful Scientist: A Spiritual Approach to Science
Accounts of the history
Appleby, R. Scott. "Between Americanism and Modernism: John Zahm and Theistic Evolution", in Critical Issues in American Religious History: A Reader, ed. Robert R. Mathisen, 2nd revised edn., Baylor University Press, 2006. Google Books
Harrison, Brian W., Early Vatican Responses to Evolutionist Theology, Living Tradition, Organ of the Roman Theological Forum, May 2001.
Morrison, John L., "William Seton: A Catholic Darwinist", The Review of Politics, Vol. 21, No. 3 (Jul. 1959), pp. 566–584, Cambridge University Press for the University of Notre Dame du Lac
O'Leary, John. Roman Catholicism and Modern Science: A History, Continuum International Publishing Group, 2006. Google Books
External links
Evolutionary Creation: A Christian Approach to Evolution by Denis Lamoureux (St. Joseph's College, Edmonton)
About: Agnosticism/Atheism on 'Theistic Evolution & Evolutionary Creationism' by Austin Cline; overview of various viewpoints
Creationism: What's a Catholic to Do? by Michael D. Guinan, O.F.M.; critical assessment of creationism and intelligent design from a Roman Catholic perspective.
What is Creationism? by Mark Isaak, presents various forms of creationism
What is Evolution? by Laurence Moran, presents a standard definition for evolution
Old Earth Ministries Old Earth Creationism, with section on theistic evolution
Evolution & Creation: A Theosophic Synthesis Surveys critical problems in Darwinist explanations and common theistic views; explores ancient and modern "excluded middle" alternatives
The Vatican's View of Evolution: The Story of Two Popes by Doug Linder (2004)
Nobel Prize winner Charles Townes on evolution and "intelligent design"
Spectrum of Creation Beliefs From Flat Earthism to Atheistic Evolutionism, including Theistic Evolution
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Proponents of theistic evolution
Organizations
God and Evolution at the TalkOrigins Archive
BioLogos
Perspectives on Theistic Evolution An examination of both the theological and scientific aspects of theistic evolution.
The "Clergy Letter" Project signed by thousands of clergy supporting evolution and faith
Religious belief and doctrine
Evolution and religion
Catholic theology and doctrine
Philosophy of biology | Theistic evolution | [
"Biology"
] | 6,354 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
328,838 | https://en.wikipedia.org/wiki/Town%20square | A town square (or public square, urban square, or simply square), also called a plaza or piazza, is an open public space commonly found in the heart of a traditional town, and which is used for community gatherings. A square in a city may be called a city square. Related concepts are the civic center, the market square and the village green.
Most squares are hardscapes suitable for open markets, concerts, political rallies, and other events that require firm ground. They are not necessarily a true geometric square.
Being centrally located, town squares are usually surrounded by small shops such as bakeries, meat markets, cheese stores, and clothing stores. At their center is often a well, monument, statue or other feature. Those with fountains are sometimes called fountain squares.
The term "town square" (especially via the term "public square") is synonymous with the politics of many cultures, and the names of a certain town squares, such as the Euromaidan or Red Square, have become symbolic of specific political events throughout history.
Australia
The city centre of Adelaide and the adjacent suburb of North Adelaide, in South Australia, were planned by Colonel William Light in 1837. The city streets were laid out in a grid plan, with the city centre including a central public square, Victoria Square, and four public squares in the centre of each quarter of the city. North Adelaide has two public squares. The city was also designed to be surrounded by park lands, and all of these features still exist today, with the squares maintained as mostly green spaces.
China
In Mainland China, People's Square is a common designation for the central town square of modern Chinese cities, established as part of urban modernization within the last few decades. These squares are the site of government buildings, museums and other public buildings. One such square, Tiananmen Square, is a famous site in Chinese history due to it being the site of the May Fourth Movement, the Proclamation of the People's Republic of China, the 1976 Tiananmen Incident, the 1989 Tiananmen Square Protests, and all Chinese National Day Parades.
Germany
The German word for square is Platz, which also means "place", and is a common term for central squares in German-speaking countries. These have been focal points of public life in towns and cities from the Middle Ages to today. Squares located opposite a palace or castle (Schloss) are commonly named Schlossplatz. Prominent Plätze include the Alexanderplatz, Pariser Platz and Potsdamer Platz in Berlin, Heldenplatz in Vienna, and the Königsplatz in Munich.
Indonesia
A large open square common in villages, towns and cities of Indonesia is known as an alun-alun. It is a Javanese term which in modern-day Indonesia refers to the two large open squares of kraton compounds. It is typically located adjacent to a mosque or a palace, and is a place for public spectacles, court celebrations and general non-court entertainments.
Iran
In traditional Persian architecture, town squares are known as maydan or meydan. A maydan is considered one of the essential features in urban planning, and they are often adjacent to bazaars, large mosques and other public buildings. Naqsh-e Jahan Square in Isfahan and Azadi Square in Tehran are examples of classic and modern squares, respectively. The term "maidan" is also used in several countries across Eastern Europe and Central Asia, including Ukraine, where it became well known globally during the Euromaidan.
Italy
A piazza is a city square in Italy, Malta, along the Dalmatian coast and in surrounding regions. Possibly influenced by the centrality of the Roman forum in ancient Mediterranean culture, the piazze of Italy are central to most towns and cities. Shops, businesses, metro stations, and bus stops are commonly found on piazzas, and many also feature Roman Catholic churches, as in the squares known as the Piazza del Duomo, the most famous perhaps being at the Duomo di Milano, or government buildings, such as the Piazza del Quirinale adjacent to the Quirinal Palace of the Italian president.
The Piazza San Marco in Venice and Piazza del Popolo in Rome are among the world's best known. The Italian piazzas historically played a major role in the political developments of Italy in both the Italian Medieval Era and the Italian Renaissance. For example, the Piazza della Signoria in Florence remains synonymous with the return of the Medici from their exile in 1530 as well as the burning at the stake of Savonarola during the Italian Inquisition.
The Italian term is roughly equivalent to the Spanish plaza, the French place, the Portuguese praça, and the German Platz. It should not be confused with other uses of "piazza" for unrelated features of architectural or urban design, such as the "piazza" at King's Cross station in London, or the usage by some in the United States of "piazza" for a verandah or front porch of a house or apartment, as at George Washington's historic home Mount Vernon.
Several countries, especially around the Mediterranean Sea, feature Italian-style town squares. In Gibraltar, one such town square just off Gibraltar's Main Street, between the Parliament Building and the City Hall, officially named John Mackintosh Square, is referred to as The Piazza.
Netherlands and Belgium
In the Low Countries, squares are often called "markets" because of their usage as marketplaces. Most towns and cities in Belgium and the southern part of the Netherlands have in their historical centre a Grote Markt (literally "Big Market") in Dutch or Grand-Place (literally "Grand Square") in French (for example the Grand-Place in Brussels and the Grote Markt in Antwerp). The Grote Markt or Grand-Place is often the location of the town hall, hence also the political centre of the town. The Dutch word for square is plein, which is another common name for squares in Dutch-speaking regions (for example Het Plein in The Hague).
In the 17th and 18th centuries, another type of square emerged, the so-called royal square (Dutch koningsplein, French place royale). Such squares did not serve as a marketplace but were built in front of large palaces or public buildings to emphasise their grandeur, as well as to accommodate military parades and ceremonies, among others (for example the Place Royale in Brussels and the Koningsplein in Amsterdam). Palace squares are usually more symmetrical than their older market counterparts.
Russia
In Russia, central square (центральная площадь, romanised: tsentráĺnaya plóshchad́) is a common term for an open area in the heart of the town. In a number of cities, the square has no individual name and is officially designated Central Square, for example Central Square (Tolyatti). The most famous central square is the monumentally-proportioned Red Square, which became a synecdoche for the Soviet Union during the 20th century; nevertheless, the association with "red communism" is a back-formation, since krásnaja (the term for "red") also means "beautiful" in archaic and poetic Russian, and many cities and towns throughout the region have locations named "Red Square".
South Korea
Gwanghwamun Plaza (Korean: 광화문광장), also known as Gwanghwamun Square, is a public open space on Sejongno, Jongno-gu, Seoul, South Korea. It is set against the backdrop of Gwanghwamun Gate (Korean: 광화문).
The restoration of Gwanghwamun Gate in 2009 turned the space in front of the gate into a public plaza. The square was renovated in a modern style in August 2022, adding new waterways, rest areas, and an exhibition hall for excavated cultural assets.
Spanish-speaking countries
The Spanish-language term for a public square is plaza (pronounced with /θ/ or /s/ depending on the dialectal variety). It comes from Latin platea, with the meaning of 'broad street' or 'public square'. Ultimately deriving from Greek plateia (hodos), it is a cognate of the Italian piazza and the French place (which has also been borrowed into English).
The term is used across Spanish-speaking territories in Spain and the Americas, as well as in the Philippines. In addition to smaller plazas, the plaza mayor (sometimes called in the Americas the Plaza de Armas, "armament square", where troops could be mustered) of each center of administration held three closely related institutions: the cathedral, the cabildo or administrative center, which might be incorporated in a wing of a governor's palace, and the audiencia or law court. The plaza might be large enough to serve as a military parade ground. At times of crisis or fiestas, it serves as the gathering space for large crowds.
Diminutives of plaza, such as plazuela and plazoleta, can occasionally be used as a particle in a proper noun.
Like the Italian piazza and the Portuguese praça, the plaza remains a center of community life that is only equaled by the market-place. A plaza de toros is a bullring. Shopping centers may incorporate 'plaza' into their names, and in some countries the word is used as a synonym for "shopping center".
United Kingdom
In the United Kingdom, and especially in London and Edinburgh, a "square" has a wider meaning. There are public squares of the type described above but the term is also used for formal open spaces surrounded by houses with private gardens at the centre, sometimes known as garden squares. Most of these were built in the 18th and 19th centuries. In some cases the gardens are now open to the public. Additionally, many public squares were created in towns and cities across the UK as part of urban redevelopment following the Blitz. Squares can also be quite small and resemble courtyards, especially in the City of London.
United States
In some cities, especially in New England, the term "square" (like its Spanish equivalent, plaza) is applied to a commercial area (like Central Square in Cambridge, Massachusetts), usually formed around the intersection of three or more streets, and which originally consisted of some open area (many of which have since been filled in with traffic islands and other traffic-calming features). Many of these intersections are irregular rather than square.
The placita (Spanish for "little plaza"), as it is known in the Southwestern United States, is a common feature within the boundaries of the former provincial kingdom of Santa Fe de Nuevo México. They are a blend of Hispano and Pueblo design styles, several of which continue to be hubs for cities and towns in New Mexico, including Santa Fe Plaza, Old Town Albuquerque, Acoma Pueblo's plaza, Taos Downtown Historic District, Mesilla Plaza, Mora, and Las Vegas Plaza.
In U.S. English, a plaza can mean one of several things:
a town square, as in the Spanish usage
"any open area usually located near urban buildings and often featuring walkways, trees and shrubs, places to sit, and sometimes shops"
a shopping center of any size
a toll plaza, where traffic must temporarily stop to pay tolls
an area adjacent to an expressway that has service facilities (such as restaurants, gas stations, and restrooms)
Today's metropolitan landscapes often incorporate the plaza as a design element, or as an outcome of zoning regulations, building budgetary constraints, and the like. Sociologist William H. Whyte conducted an extensive study of plazas in New York City: his study humanized the way modern urban plazas are conceptualized, and helped usher in significant design changes in the making of plazas. Plazas can open up spaces for low-income neighborhoods and can also improve the overall aesthetic of the surrounding area, boosting economic vitality, pedestrian mobility, and pedestrian safety. Most plazas are created out of a collaboration between local non-profit applicants and city officials, which requires approval from the city.
Throughout North America, words like place, square, or plaza frequently appear in the names of commercial developments such as shopping centers and hotels.
See also
Cathedral Square
List of city squares
List of city squares by size
Urban vitality
References
External links
BBC.com: "The Violent History of Public Squares"
"This research initiative is an attempt to rediscover the lost or neglected urban symbols. The Urban Square is a city's 'heart and soul' and that is the focus of this project."
Parks
Square
Landscape architecture
Protected areas
Road infrastructure
Subnational parks
Urban design
Urban studies and planning terminology | Town square | [
"Engineering"
] | 2,541 | [
"Landscape architecture",
"Architecture"
] |
328,874 | https://en.wikipedia.org/wiki/Korean%20Demilitarized%20Zone | The Korean Demilitarized Zone () is a heavily militarized strip of land running across the Korean Peninsula near the 38th parallel north. The demilitarized zone (DMZ) is a border barrier that divides the peninsula roughly in half. It was established to serve as a buffer zone between the sovereign states of the Democratic People's Republic of Korea (North Korea) and the Republic of Korea (South Korea) under the provisions of the Korean Armistice Agreement in 1953, an agreement between North Korea, China, and the United Nations Command.
The DMZ is 250 kilometres (160 mi) long and about 4 kilometres (2.5 mi) wide. There have been various incidents in and around the DMZ, with military and civilian casualties on both sides. Within the DMZ is a meeting point between the two Korean states, where negotiations take place: the small Joint Security Area (JSA) near the western end of the zone.
Location
The Korean Demilitarized Zone intersects but does not follow the 38th parallel north, which was the border before the Korean War. It crosses the parallel on an angle, with the west end of the DMZ lying south of the parallel and the east end lying north of it.
The DMZ is 250 kilometres (160 mi) long and approximately 4 kilometres (2.5 mi) wide. Though the zone itself is demilitarized, the zone's borders on both sides are some of the most heavily militarized borders in the world. The Northern Limit Line, or NLL, is the disputed maritime demarcation line between North and South Korea in the Yellow Sea, not agreed in the armistice. The coastline and islands on both sides of the NLL are also heavily militarized.
History
The 38th parallel north—which divides the Korean Peninsula roughly in half—was the original boundary between the United States and Soviet Union's brief administration areas of Korea at the end of World War II. Upon the creation of the North Korea (formally the Democratic People's Republic of Korea or DPRK) and South Korea (formally the Republic of Korea or ROK) in 1948, it became a de facto international border and one of the most tense fronts in the Cold War.
Both the North and the South remained dependent on their sponsor states from 1948 to the outbreak of the Korean War. That conflict, which claimed over three million lives and divided the Korean Peninsula along ideological lines, commenced on 25 June 1950, with a full-front DPRK invasion across the 38th parallel, and ended in 1953 after international intervention pushed the front of the war back to near the 38th parallel.
In the Armistice Agreement of 27 July 1953, the DMZ was created as each side agreed to move their troops back from the front line, creating a buffer zone 4 kilometres (2.5 mi) wide. The Military Demarcation Line (MDL) goes through the center of the DMZ and indicates where the front was when the agreement was signed.
Owing to this theoretical stalemate, and genuine hostility between the North and the South, large numbers of troops are stationed along both sides of the line, each side guarding against potential aggression from the other side, even years after its establishment. The armistice agreement explains exactly how many military personnel and what kind of weapons are allowed in the DMZ. Soldiers from both sides may patrol inside the DMZ, but they may not cross the MDL. Sporadic outbreaks of violence in and around the border have killed over 500 South Korean soldiers, 50 American soldiers and 250 North Korean soldiers along the DMZ between 1953 and 1999.
Daeseong-dong (also written Tae Sung Dong and known as "Freedom Village"), in South Korea, and Kijŏng-dong (also known as the "Peace Village"), in North Korea, are the only settlements allowed by the armistice committee to remain within the boundaries of the DMZ. Residents of Tae Sung Dong are governed and protected by the United Nations Command and are generally required to spend at least 240 nights per year in the village to maintain their residency. In 2008, the village had a population of 218 people. The villagers of Tae Sung Dong are direct descendants of people who owned the land before the 1950–53 Korean War.
To continue to deter North Korean incursion, in 2014 the United States government exempted the Korean DMZ from its pledge to eliminate anti-personnel landmines. On 1 October 2018, however, a 20-day process began to remove landmines from both sides of the DMZ.
Joint Security Area
Inside the DMZ, near the western coast of the peninsula, Panmunjeom is the home of the Joint Security Area (JSA). Originally, it was the only connection between North and South Korea but that changed on 17 May 2007, when a Korail train went through the DMZ to the North on the new Donghae Bukbu Line built on the east coast of Korea. However, the resurrection of this line was short-lived, as it closed again in July 2008 following an incident in which a South Korean tourist was shot and killed.
The JSA is the location of the famous Bridge of No Return, over which prisoner exchanges have taken place. There are several buildings on both the north and the south side of the MDL, and there have been some built on top of it. All negotiations since 1953 have been held in the JSA, including statements of Korean solidarity, which have generally amounted to little except a slight decline of tensions.
Within the JSA are a number of buildings for joint meetings called Conference Rooms. The MDL goes through the conference rooms and down the middle of the conference tables where the North Koreans and the United Nations Command (primarily South Koreans and Americans) meet face to face.
Facing the Conference Row buildings are the North Korean Panmungak (판문각) and the South Korean Freedom House. In 1994, North Korea enlarged Panmungak by adding a third floor. In 1998, South Korea built a new Freedom House for its Red Cross staff and to possibly host reunions of families separated by the Korean War. The new building incorporated the old Freedom House Pagoda within its design.
Since 1953 there have been occasional confrontations and skirmishes within the JSA. The axe murder incident in August 1976 involved the attempted trimming of a tree which resulted in two deaths (Captain Arthur Bonifas and First Lieutenant Mark Barrett). Another incident occurred on 23 November 1984, when a Soviet tourist named Vasily Matuzok (sometimes spelled Matusak), who was part of an official trip to the JSA (hosted by the North), ran across the MDL shouting that he wanted to defect to the South. As many as 30 North Korean soldiers pursued him across the border, opening fire.
Border guards on the South Korean side returned fire, eventually surrounding the North Koreans. One South Korean and three North Korean soldiers were killed in the action. Matuzok survived and was eventually resettled in the U.S.
In late 2009, South Korean forces in conjunction with the United Nations Command began renovation of its three guard posts and two checkpoint buildings within the JSA compound. Construction was designed to enlarge and modernize the structures. Work was undertaken a year after North Korea finished replacing four JSA guard posts on its side of the MDL. On 15 October 2018, during the high-level talks in Panmunjeom, military officials of the rank of colonel from the two Koreas and Burke Hamilton, Secretary of the UNC Military Armistice Commission, announced measures to reduce conventional military threats, such as creating buffer zones along their land and sea boundaries and a no-fly zone above the border, removing 11 front-line guard posts by December, and demining sections of the Demilitarized Zone.
Villages
Both North and South Korea maintain peace villages in sight of each other's side of the DMZ. In the South, Daeseong-dong is administered under the terms of the DMZ. Villagers are classed as South Korean citizens, but are exempt from paying tax and other civic requirements such as military service. In the North, Kijŏng-dong features a number of brightly painted, poured-concrete multi-story buildings and apartments with electric lighting. These features represented an unheard-of level of luxury for rural Koreans, North or South, in the 1950s. The town was oriented so that the bright blue roofs and white sides of the buildings would be the most distinguishing features when viewed from the border. However, based on scrutiny with modern telescopic lenses, it has been confirmed that the buildings are mere concrete shells lacking window glass or even interior rooms, with the building lights turned on and off at set times and the empty sidewalks swept by a skeleton crew of caretakers in an effort to preserve the illusion of activity.
Flagpoles
In the 1980s, the South Korean government built a 98 m (322 ft) flagpole in Daeseong-dong, which flies a South Korean flag weighing 130 kg (287 lb). The North Korean government responded by building the taller 160 m (525 ft) Panmunjeom flagpole in Kijŏng-dong, just west of the border with South Korea. It flies a 270 kg (595 lb) flag of North Korea. In 2014, the Panmunjeom flagpole was the fourth tallest in the world, after the Jeddah Flagpole in Jeddah, Saudi Arabia, at 170 m (558 ft), the Dushanbe Flagpole in Dushanbe, Tajikistan, at 165 m (541 ft), and the pole at the National Flag Square in Baku, Azerbaijan, which is 162 m (531 ft). It is currently the world's seventh tallest flagpole.
DMZ-related incidents and incursions
Since demarcation, the DMZ has had numerous cases of incidents and incursions by both sides, although the North Korean government typically never acknowledges direct responsibility for any of these incidents (there are exceptions, such as the axe incident). This was particularly intense during the Korean DMZ Conflict (1966–1969) when a series of skirmishes along the DMZ resulted in the deaths of 81 American, 299 South Korean and 397 North Korean soldiers. This included the Blue House Raid in 1968, an attempt to assassinate South Korea President Park Chung Hee at the Blue House.
In 1976, in now-declassified meeting minutes, U.S. deputy secretary of defense William Clements told U.S. secretary of state Henry Kissinger that there had been 200 raids or incursions into North Korea from the south, though not by the U.S. military. Details of only a few of these incursions have become public, including raids by South Korean forces in 1967 that had sabotaged about 50 North Korean facilities.
Incursion tunnels
Since 15 November 1974, South Korea has discovered four tunnels crossing the DMZ that had been dug by North Korea. The orientation of the blasting lines within each tunnel indicated they were dug by North Korea. North Korea claimed that the tunnels were for coal mining; no coal was found in the tunnels, which were dug through granite. Some of the tunnel walls were painted black to give the appearance of anthracite.
The tunnels are believed to have been planned as a military invasion route by North Korea. They run in a north–south direction and do not have branches. Following each discovery, engineering within the tunnels has become progressively more advanced. For example, the third tunnel sloped slightly upwards as it progressed southward, to prevent water stagnation. Today, visitors from the south may visit the second, third and fourth tunnels through guided tours.
First tunnel
The first of the tunnels was discovered on 15 November 1974, by a South Korean Army patrol, noticing steam rising from the ground. The initial discovery was met with automatic fire from North Korean soldiers. Five days later, during a subsequent exploration of this tunnel, US Navy Commander Robert M. Ballinger and ROK Marine Corps Major Kim Hah-chul were killed in the tunnel by a North Korean explosive device. The blast also wounded five Americans and one South Korean from the United Nations Command.
The tunnel, which was about 0.9 by 1.2 metres (3 by 4 ft), extended more than 1 km (0.6 mi) beyond the MDL into South Korea. The tunnel was reinforced with concrete slabs and had electric power and lighting. There were weapon storage areas and sleeping areas. A narrow-gauge railway with carts had also been installed. Estimates based on the tunnel's size suggest it would have allowed considerable numbers of soldiers to pass through it.
Second tunnel
The second tunnel was discovered on 19 March 1975. It is of similar length to the first tunnel. It is located between 50 and 160 metres (160 and 520 ft) below ground, but is larger than the first, approximately 2 by 2 metres (7 by 7 ft).
Third tunnel
The third tunnel was discovered on 17 October 1978. Unlike the previous two, the third tunnel was discovered following a tip from a North Korean defector. This tunnel is about 1,600 metres (1 mi) long and about 73 metres (240 ft) below ground. Foreign visitors touring the South Korean DMZ may view inside this tunnel using a sloped access shaft.
Fourth tunnel
A fourth tunnel was discovered on 3 March 1990, north of Haean town in the former Punchbowl battlefield. The tunnel's dimensions are 2 by 2 metres (7 by 7 ft), and it is 145 metres (476 ft) deep. The method of construction is almost identical in structure to the second and the third tunnels.
Korean wall
According to North Korea, between 1977 and 1979, the South Korean and United States authorities constructed a concrete wall along the DMZ. North Korea, however, began to propagate information about the wall after the fall of the Berlin Wall in 1989, when the symbolism of a wall unjustly dividing a people became more apparent.
Various organizations, such as the North Korean tour guide company Korea Konsult, have claimed that such a wall divides Korea.
In December 1999, Chu Chang-jun, North Korea's ambassador to China, repeated claims that a "wall" divided Korea. He said the south side of the wall is packed with soil, which permits access to the top of the wall and makes it effectively invisible from the south side. He also claimed that it served as a bridgehead for any northward invasion.
The United States and South Korea deny the wall's existence, although they do claim there are anti-tank barriers along some sections of the DMZ. Dutch journalist and filmmaker Peter Tetteroo also shot footage of a barrier in 2001 which his North Korean guides said was the Korean Wall.
A 2007 Reuters report revealed that there is no coast-to-coast wall located across the DMZ and that the pictures of a "wall" used in North Korean propaganda have merely been pictures of concrete anti-tank barriers. While 800,000 landmines were being removed in 2018, it was shown that the Joint Security Area along the Korean border was guarded by standard barbed wire.
North Korean side of the DMZ
The North Korean side of the DMZ primarily serves to stop an invasion of North Korea from the south. Its other purpose is to ensure that North Korean citizens face significant difficulty in any effort to defect to South Korea.
From the armistice until 1972, approximately 7,700 South Korean soldiers and agents infiltrated into North Korea in order to sabotage military bases and industrial areas. Around 5,300 of them never returned home.
North Korea has thousands of artillery pieces near the DMZ. According to a 2018 article in The Economist, North Korea could bombard Seoul with over 10,000 rounds every minute. Experts believe that 60 percent of its total artillery is positioned within a few kilometers of the DMZ acting as a deterrent against any South Korean invasion.
Propaganda
Loudspeaker installations
From 1953 until 2004, both sides broadcast audio propaganda across the DMZ. Massive loudspeakers mounted on several of the buildings delivered DPRK propaganda broadcasts directed towards the south as well as propaganda radio broadcasts across the border. South Korean broadcasts featured "popular music and lectures on freedom and democracy," while the North Korean broadcast featured "martial music and praises to the country's rulers." In 2004, the North and South agreed to end the broadcasts as part of an agreement to ease diplomatic tensions.
On 4 August 2015, a border incident occurred where two South Korean soldiers were wounded after stepping on landmines that had allegedly been laid on the southern side of the DMZ by North Korean forces near an ROK guard post. Both North Korea and South Korea then resumed broadcasting propaganda by loudspeaker. After four days of negotiations, on 25 August 2015 South Korea agreed to discontinue the broadcasts following a statement from North Korea's government expressing regret for the landmine incident.
On 8 January 2016, in response to North Korea's supposed successful testing of a hydrogen bomb, South Korea resumed broadcasts directed at the North. On 15 April 2016, it was reported that the South Koreans purchased a new audio system to combat the North's broadcasts.
Balloons
Both North and South Korea have held balloon propaganda leaflet campaigns since the Korean War.
In recent years, mainly South Korean non-governmental organizations have been involved in launching balloons targeted at the DMZ and beyond. Due to the winds, the balloons tend to fall near the DMZ where there are mostly North Korean soldiers to see the leaflets. As with the loudspeakers, balloon operations were mutually agreed to be halted between 2004 and 2010. It has been assessed that the activists' balloons may contribute to the decay of remaining cooperation between the Korean governments, and the DMZ has become more militarized in recent years.
Many North Korean leaflets during the Cold War gave instructions and maps to help targeted South Korean soldiers in defecting. One of the leaflets found on the DMZ included a map of Cho Dae-hum's route of defection to North Korea across the DMZ. In addition to using balloons as a means of delivery, North Koreans have also used rockets to send leaflets to the DMZ.
Dismantling
On 23 April 2018, both North and South Korea officially cancelled their border propaganda broadcasts. On 1 May 2018, the loudspeaker systems across the Korean border were dismantled. Both sides also committed to ending the balloon campaigns. On 5 May 2018, an attempt by North Korean defectors to disperse more balloon propaganda across the border from South Korea was halted by the South Korean government. The no-fly zone established on 1 November 2018 applies to all aircraft types above the MDL and also prohibits hot-air balloons from traveling within 25 km of it.
In 2024, as a response to continuing leaflets from South Korean activists, the North flew around 1,000 balloons filled with cigarette butts, manure, waste batteries, scraps of cloth, and dirty diapers over the border. In response, South Korean activists released helium balloons with anti-Pyongyang leaflets and USB sticks with K-dramas and world news into North Korea. The actions of North Korea led South Korea to decide in June 2024 to suspend the 2018 agreement and resume military drills near the border.
Civilian Control Line
The Civilian Control Line (CCL), or the Civilian Control Zone (CCZ), is a line that designates an additional buffer zone to the DMZ within a distance of 5 to 20 kilometres (3 to 12 mi) from the Southern Limit Line of the DMZ. Its purpose is to limit and control the entrance of civilians into the area in order to protect and maintain the security of military facilities and operations near the DMZ. The commander of the 8th US Army ordered the creation of the CCL and it was activated and first became effective in February 1954.
The buffer zone that falls south of the Southern Limit Line is called the Civilian Control Zone. Barbed wire fences and manned military guard posts mark the Civilian Control Line. The Civilian Control Zone is necessary for the military to monitor civilian travel to tourist destinations close to the Southern Limit Line of the DMZ like the discovered infiltration tunnels and tourist observatories. Usually when traveling within the Civilian Control Zone, South Korean soldiers accompany tourist buses and cars as armed guards to monitor the civilians as well as to protect them from North Korean intruders.
Right after the ceasefire, the Civilian Control Zone outside the DMZ encompassed 100 or so empty villages. The government implemented migration measures to attract settlers into the area. As a result, in 1983, when the area delineated by the Civilian Control Line was at its largest, a total of 39,725 residents in 8,799 households were living in the 81 villages located within the Civilian Control Zone.
Most of the tourist and media photos of the "DMZ fence" are actually photos of the CCL fence. The actual DMZ fence on the Southern Limit Line is completely off-limits to everybody except soldiers, and it is illegal to take pictures of the DMZ fence. The CCL fence acts more as a deterrent for South Korean civilians from getting too close to the dangerous DMZ and is also the final barrier for North Korean infiltrators if they get past the Southern Limit Line DMZ fence.
Neutral Zone of the Han River Estuary
The whole estuary of the Han River is deemed a "Neutral Zone" and is off-limits to all civilian vessels and is treated like the rest of the DMZ. Only military vessels are allowed within this neutral zone.
According to the July 1953 Korean Armistice Agreement, civil shipping was supposed to be permissible in the Han River estuary, allowing Seoul to be connected to the Yellow Sea (West Sea) via the Han River. However, both Koreas and the UNC failed to make this happen. The South Korean government ordered the construction of the Ara Canal to finally connect Seoul to the Yellow Sea; it was completed in 2012, and Seoul was effectively landlocked from the ocean until then. The biggest limitation of the Ara Canal is that it is too narrow to handle any vessels except small tourist boats and recreational boats, so Seoul still cannot receive large commercial ships or passenger ships in its port.
In recent years, Chinese fishing vessels have taken advantage of the tense situation in the Han River Estuary Neutral Zone and fished there illegally, since neither the North Korean nor the South Korean navy patrols the area for fear of naval battles breaking out. This has led to confrontations between Chinese fishermen and the South Korean Coast Guard, including firefights and the sinking of boats.
On January 30, 2019, North Korean and South Korean military officials signed a landmark agreement that would open the Han River Estuary to civilian vessels for the first time since the Armistice Agreement in 1953. The agreement was scheduled to take place in April 2019 but the failure of the 2019 Hanoi Summit indefinitely postponed these plans.
Castle of Gung Ye
Within the DMZ itself, in the town of Cheorwon, is the old capital of the kingdom of Taebong (901–918), a regional upstart that became Goryeo, the dynasty that ruled a united Korea from 918 to 1392.
Taebong was founded by the charismatic leader Gung Ye, a brilliant if tyrannical one-eyed ex-Buddhist monk. Rebelling against the kingdom of Silla, Korea's then ruling dynasty, he proclaimed the kingdom of Taebong—also called Later Goguryeo, in reference to the ancient kingdom of Goguryeo (37 BCE – 668 CE)—in 901, with himself as king. The kingdom consisted of much of central Korea, including areas around the DMZ. He placed his capital in Cheorwon, a mountainous region that was easily defensible (in the Korean War, this same region would earn the name "the Iron Triangle").
As a former Buddhist monk, Gung Ye actively promoted the religion of Buddhism and incorporated Buddhist ceremonies into the new kingdom. Even after Gung Ye was dethroned by his own generals and replaced by Wang Geon, the man who would rule over a united Korea as the first king of Goryeo, this Buddhist influence would continue, playing a major role in shaping the culture of medieval Korea.
As the ruins of Gung Ye's capital lie in the DMZ itself, visitors cannot see them. Moreover, excavation work and research have been hampered by political realities. In the future, inter-Korean peace may allow for proper archaeological studies to be conducted on the castle site and other historical sites within and underneath the DMZ.
The ruins of the capital city of Taebong, the ruins of the castle of Gung Ye, and Gung Ye's tomb all lie within the DMZ and are off-limits to everybody except soldiers who patrol the DMZ.
Transportation
Panmunjeom is the site of the negotiations that ended the Korean War and is the main center of human activity in the DMZ. The village is located on the main highway and near a railroad connecting the two Koreas.
The railway, which connects Seoul and Pyongyang, was called the Gyeongui Line before division in the 1940s. Currently the South uses the original name, but the North refers to the route as the P'yŏngbu Line. The railway line has been mainly used to carry materials and South Korean workers to the Kaesong Industrial Region. Its reconnection was seen as part of the general improvement in the relations between North and South in the early part of this century. However, in November 2008 North Korean authorities closed the railway amid growing tensions with the South. Following the death of former South Korean President Kim Dae-jung, conciliatory talks were held between South Korean officials and a North Korean delegation who attended Kim's funeral. In September 2009, the Kaesong rail and road crossing was reopened.
The road at Panmunjeom, which was known historically as Highway One in the South, was originally the only access point between the two countries on the Korean Peninsula. Passage is comparable to the strict movements that occurred at Checkpoint Charlie in Berlin at the height of the Cold War. Both North and South Korea's roads end in the JSA; the highways do not quite join as there is a concrete line that divides the entire site. People given the rare permission to cross this border must do so on foot before continuing their journey by road.
In 2007, on the east coast of Korea, the first train crossed the DMZ on the new Donghae Bukbu (Tonghae Pukpu) Line. The new rail crossing was built adjacent to the road which took South Koreans to Mount Kumgang Tourist Region, a region of significant cultural importance for all Koreans. More than one million civilian visitors crossed the DMZ until the route was closed following the shooting of a 53-year-old South Korean tourist in July 2008. After a joint investigation was rebuffed by North Korea, the South Korean government suspended tours to the resort. Since then, the resort and the Donghae Bukbu Line have effectively been closed by North Korea. Currently, the South Korean Korea Railroad Corporation (Korail) organizes tours to the DMZ with special DMZ-themed trains.
On 14 October 2018, North and South Korea agreed to meet the summit's goal of restoring railway and road transportation, which had been cut since the Korean War, by either late November or early December 2018. Road and railway links along the DMZ were reconnected in November 2018; following the removal of the "frontline" guard posts and the Arrowhead Hill landmines, railroad transportation between North and South Korea resumed. The same day, 30 officials from both North and South Korea started an 18-day survey of a 400-kilometer (248-mile) railroad section in North Korea alongside the DMZ between Kaesong and Sinuiju. Efforts to conduct the survey had previously been obstructed by the presence of the guard posts and the Arrowhead Hill landmines. The survey was to be followed by the groundbreaking of a new railroad along the DMZ. The railway survey, which involved the Gyeongui Line, concluded on 5 December 2018.
On 8 December 2018, a South Korean bus crossed the DMZ into North Korea. The same day, the officials who conducted the inter-Korean survey for the Gyeongui Line began surveying the Donghae Line.
Nature reserve
In the past 70 years, the Korean DMZ has been a deadly place for humans, making habitation impossible. Only around the former village of Panmunjom and more recently the Donghae Bukbu Line on Korea's east coast have there been regular incursions by people.
This natural isolation along the length of the DMZ has created an involuntary park which is now recognized as one of the most well-preserved areas of temperate habitat in the world. In 1966 it was first proposed that the DMZ be turned into a national park.
There are over 6,000 species of animals and plants in the DMZ. Among the heavily fortified fences, landmines and listening posts, the zone shelters more than 100 of Korea's 267 endangered animal species, as well as many endangered plant species. These animals include the endangered red-crowned crane (a staple of Asian art), the white-naped crane, the critically endangered Korean fox and Asiatic black bear, and, potentially, the extremely rare Siberian tiger and Amur leopard, as well as endangered marine species such as the western gray whale. Ecologists have identified some 2,900 plant species, 70 types of mammals and 320 kinds of birds within the narrow buffer zone, and additional surveys are now being conducted throughout the region.
The DMZ owes its varied biodiversity to its geography, which crosses mountains, prairies, swamps, lakes, and tidal marshes. Environmentalists hope that the DMZ will be conserved as a wildlife refuge, with well-developed objectives and management plans vetted and in place. In 2005, CNN founder and media mogul Ted Turner, on a visit to North Korea, said that he would financially support any plan to turn the DMZ into a peace park and a UN-protected World Heritage Site.
In September 2011, South Korea submitted a nomination to UNESCO's Man and the Biosphere Programme (MAB) to designate an area in the southern part of the DMZ below the Military Demarcation Line, together with adjacent privately controlled areas, as a Biosphere Reserve under the Statutory Framework of the World Network of Biosphere Reserves. The MAB National Committee of the Republic of Korea nominated only the southern part of the DMZ because Pyongyang did not respond to its request to pursue the designation jointly. North Korea is a member nation of the international coordinating council of UNESCO's Man and the Biosphere Programme, which designates Biosphere Reserves.
North Korea opposed the application as a violation of the armistice agreement, expressing its opposition by sending letters to the 32 other council member countries and to UNESCO headquarters a month before the MAB council met in Paris on 9 to 13 July 2012. At the meeting, Pyongyang argued that the designation violated the Armistice Agreement, and South Korea's attempt to have the DMZ designated a UNESCO Biosphere Reserve was turned down.
Destruction of guard posts
On 26 October 2018, South Korean major general Kim Do-gyun and North Korean lieutenant general An Ik-san met in Tongilgak (the "Unification Pavilion"), a North Korean building within the JSA. There they began implementing new protocols intended to reduce tension, including a requirement that North and South Korea destroy 22 guard posts across the DMZ, among other steps. Both generals approved a timetable under which the posts would be destroyed by the end of November 2018; the JSA's own guard posts had already been destroyed on 25 October 2018. North and South Korea agreed to dismantle 11 "front-line" guard posts each, to withdraw the equipment and personnel stationed at each post once dismantled, and, in line with the September 2018 Pyongyang and Military Domain Agreements, to gradually remove all remaining guard posts near the DMZ following verification in December 2018.
However, all remaining troops and equipment, including weapons, were withdrawn from all 22 "frontline" guard posts before destruction began, and the two Koreas later agreed to destroy 10 of these posts each instead of 11.
On 4 November 2018, the North and South Korean governments hoisted a yellow flag above each of their 11 DMZ guard posts to indicate publicly that they would be dismantled. By 10 November 2018, the withdrawal of military personnel and weapons from all 22 "front-line" guard posts was complete, and the destruction of 20 of them officially began on 11 November 2018. Under the amended agreement, the two now-demilitarized frontline posts to be preserved lie on opposite sides of the border.
On 15 November 2018, the destruction of two DMZ guard posts, one in South Korea and one in North Korea, was completed, with work on the remaining posts still under way. On 23 November 2018, it was revealed that South Korea was gradually demolishing its guard posts with excavators.
On 20 November 2018, North Korea, seeking to further ease tensions with South Korea, destroyed its 10 remaining "frontline" guard posts. The South Korean Defense Ministry released photos and video confirming the demolitions and stated that North Korea had given advance notice of the plans, in accordance with the earlier agreements.
On 30 November 2018, both Koreas completed the dismantling of 10 "frontline" guard posts each, while preserving one post apiece as agreed. The preserved post on the North Korean side of the DMZ had been visited by Kim Jong-un in 2013, at a time of rising tensions between the two Koreas.
Establishment of buffer zones, no-fly zones and Yellow Sea peace zones
On 1 November 2018, buffer zones were established across the DMZ by the North and South Korean militaries. Under the Comprehensive Military Agreement signed at the September 2018 inter-Korean summit, the buffer zones commit both North and South Korea to ending hostile acts on land, at sea, and in the air. Both Koreas are prohibited from conducting live-fire artillery drills and field maneuvering exercises at regiment level or above within 5 kilometers of the Military Demarcation Line (MDL). The buffer zones stretch from north of Deokjeok Island to south of Cho Island in the West (Yellow) Sea, and from north of Sokcho city to south of Tongchon County in the East Sea.
No-fly zones have also been established along the DMZ, banning the operation of drones, helicopters and other aircraft within set distances of the MDL. The limits differ by aircraft type and by sector: UAVs and hot-air balloons are each excluded out to their own specified distances from the MDL, fixed-wing aircraft are barred within designated distances in the East (between MDL Markers No. 0646 and 1292) and in the West (between MDL Markers No. 0001 and 0646), and rotary-wing aircraft are barred within a shorter designated distance of the MDL.
Both Koreas also created "peace zones" near their disputed Yellow Sea border.
Reconnection of MDL-crossing road
On 22 November 2018, North and South Korea completed construction connecting a road across the DMZ northeast of Seoul. The road, which crosses the Korean MDL land border, has sections on both the South and North Korean sides. It was reconnected for the first time in 14 years to assist with the removal of landmines and the exhumation of Korean War remains at the DMZ's Arrowhead Hill.
Presence of landmines and Korean War remains
On 1 October 2018, North and South Korean military engineers began a scheduled 20-day removal of landmines and other explosives planted across the JSA; the work was completed on 25 October 2018. Demining also began at the DMZ's Arrowhead Hill, where it led to the discovery of Korean War remains, and was completed on 30 November 2018.
Military border crossing
On 12 December 2018, soldiers from both Koreas crossed the DMZ's MDL into the other country for the first time in history, to inspect and verify the removal of the "frontline" guard posts.
Meeting of Trump, Kim, and Moon at the DMZ
On 30 June 2019, U.S. president Donald Trump became the first sitting U.S. president to enter North Korea, doing so at the DMZ line. After crossing into North Korea, Trump and North Korean chairman Kim Jong Un met and shook hands. Kim stated, in Korean, "It's good to see you again", "I never expected to meet you at this place" and "you are the first U.S. president to cross the border." Both men then briefly crossed the border line before crossing back into South Korea.
On the South Korean side of the DMZ, Kim, South Korean president Moon Jae-in, and Trump held a brief chat before holding an hour-long meeting at the DMZ's Inter-Korean House of Freedom.
Pilgrimages
An annual youth pilgrimage, including a six-day peace walk to the DMZ, is organized by the Catholic Church. The first pilgrimage took place in 2012; young people from 15 countries attended the 2019 pilgrimage, and the 2022 pilgrimage included visits to the Ulleungdo and Dokdo islands.
See also
Bamboo Curtain
Iron Curtain
List of border incidents involving North and South Korea
Neutral Nations Supervisory Commission
North Korea–South Korea relations
Peace lines
United Nations Buffer Zone in Cyprus
Vietnamese Demilitarized Zone
Notes
References
Elferink, Alex G. Oude (1994). The Law of Maritime Boundary Delimitation: A Case Study of the Russian Federation. Dordrecht: Martinus Nijhoff. OCLC 123566768
External links
Touring the DMZ in South Korea What it's like to stand at the border, 2017
U.S. Army official Korean Demilitarized Zone image archive
Washington Post Correspondent Amar Bakshi travels to the Korean Demilitarized Zone... And uncovers the world's most dangerous tourist trap, January 2008.
Status and ecological resource value of the Republic of Korea's De-militarized Zone
Tour Of DMZ on YouTube. Dec. 2007
Tour of DMZ from the DPRK on YouTube, 2016
360 degree tour of DMZ from the DPRK on YouTube, 2016
DMZ Forum: Collaborative international NGO focusing on promoting peace and conservation within the Korean DMZ region
ABCNews/Yahoo! report/blog on the DMZ
The World’s Most Dangerous Border – A Tour of North Korea’s DMZ Visiting the DMZ from Pyongyang.
Photo of road linking DPRK to Paju, ROK
International Boundary Study No. 22 – May 24, 1963 Korea “Military Demarcation Line” Boundary
Interactive map with points of interest along the DMZ
1953 introductions
1953 establishments in Korea
Military installations established in 1953
Border barriers
Demilitarized zones
International borders
Biosphere reserves of South Korea
Cold War terminology
Korean reunification
Military of North Korea
Military history of Korea
North Korea–South Korea border
North Korea–South Korea relations | Korean Demilitarized Zone | [
"Engineering"
] | 8,131 | [
"Separation barriers",
"Border barriers"
] |
328,916 | https://en.wikipedia.org/wiki/Edge%20connector | An edge connector is the portion of a printed circuit board (PCB) consisting of traces leading to the edge of the board that are intended to plug into a matching socket. The edge connector is a money-saving device because it only requires a single discrete female connector (the male connector is formed out of the edge of the PCB), and they also tend to be fairly robust and durable. They are commonly used in computers for expansion slots for peripheral cards, such as PCI, PCI Express, and AGP cards.
Socket design
Edge connector sockets consist of a plastic "box" open on one side, with pins on one or both sides of the longer edges, sprung to push into the middle of the open center. Connectors are often keyed to ensure the correct polarity, and may contain bumps or notches both for polarity and to ensure that the wrong type of device is not inserted. The socket's width is chosen to match the thickness of the connecting PCB.
The opposite side of the socket is often an insulation-piercing connector which is clamped onto a ribbon cable. Alternatively, the other side may be soldered to a motherboard or daughtercard.
Uses
Edge connectors are commonly used in personal computers for connecting expansion cards and computer memory to the system bus. Example expansion peripheral technologies which use edge connectors include PCI, PCI Express, and AGP. Slot 1 and Slot A also used edge connectors, with the processor mounted on a card that plugged into the motherboard rather than being attached directly, as before and since.
IBM PCs used edge connector sockets attached to ribbon cables to connect 5.25" floppy disk drives. 3.5" drives use a pin connector instead.
Video game cartridges typically take the form of a PCB with an edge connector: the socket is located within the console itself. The Nintendo Entertainment System was unusual in that it was designed to use a zero insertion force edge connector: instead of the user forcing the cartridge into the socket directly, the cartridge was first placed in a bay and then mechanically lowered into position.
Starting with the Amiga 1000 in 1985, various Amiga models used the 86-pin Zorro I edge connector, which was later reshaped into the internal 100-pin Zorro II slot on the Amiga 2000 and later upmarket models.
See also
Pin header connector
Insulation-displacement connector
References
Electrical connectors
Computer connectors
Printed circuit board manufacturing | Edge connector | [
"Engineering"
] | 491 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
13,327,372 | https://en.wikipedia.org/wiki/Sleep%E2%80%93wake%20activity%20inventory | The sleep–wake activity inventory (SWAI) is a subjective multidimensional questionnaire intended to measure sleepiness.
The instrument
The SWAI consists of 59 items that provide six subscale scores: excessive daytime sleepiness, nocturnal sleep, ability to relax, energy level, social desirability, and psychic distress. Each item is rated on a 1-to-9 semicontinuous Likert-type scale from "always" to "never", based on the previous seven days. The SWAI was normed on 554 subjects in the early 1990s and has been or is being validated in multiple languages, including Spanish, French and Dutch.
For the excessive daytime sleepiness subscale (SWAI-EDS), a score of 40 or below indicates excessive sleepiness, a score of between 40 and 50 indicates possible sleepiness and a score of greater than 50 is normal.
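The SWAI-EDS cut points above amount to a simple three-way classification. A minimal sketch in Python, assuming the subscale score has already been computed from the questionnaire items (the function name and the handling of the exact boundary values are illustrative choices, not specified by the instrument's authors):

```python
def classify_swai_eds(score: float) -> str:
    """Classify an SWAI excessive-daytime-sleepiness subscale score.

    Cut points follow the text: <= 40 excessive sleepiness,
    40-50 possible sleepiness, > 50 normal.
    """
    if score <= 40:
        return "excessive sleepiness"
    elif score <= 50:
        return "possible sleepiness"
    return "normal"

print(classify_swai_eds(38))  # -> excessive sleepiness
```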
A short form of the SWAI exists that contains items for the excessive daytime sleepiness and nocturnal sleep subscales only.
Comparison with other sleepiness assessments
The SWAI has been compared to the multiple sleep latency test (MSLT), which is an objective measure that is considered the gold standard of sleepiness assessment; it measures sleep onset latency during several daytime opportunities. The SWAI-EDS has been found to correlate moderately to highly with average MSLT scores.
Other sleepiness scales, including the Stanford sleepiness scale and the Epworth sleepiness scale (ESS), exist. However, the ESS does not correlate as highly with the MSLT as the SWAI. The ESS is currently the most prevalent measure of excessive sleepiness.
History
The SWAI was developed by Drs. Leon Rosenthal, Timothy Roehrs and Tom Roth at the Sleep Disorders and Research Center at the Henry Ford Hospital in Detroit, Michigan.
References
Sleep
Tools
Anthropometry | Sleep–wake activity inventory | [
"Biology"
] | 387 | [
"Behavior",
"Sleep"
] |
13,327,961 | https://en.wikipedia.org/wiki/Dirucotide | Dirucotide (also known as MBP8298) was developed by two research scientists (Dr. Kenneth G. Warren, MD, FRCP(C) & Ingrid Catz, Senior Scientist) at the University of Alberta for the treatment of Multiple Sclerosis (MS). Dirucotide is a synthetic peptide that consists of 17 amino acids linked in a sequence identical to that of a portion of human myelin basic protein (MBP). The sequence of these 17 amino acids is
H2N-Asp-Glu-Asn-Pro-Val-Val-His-Phe-Phe-Lys-Asn-Ile-Val-Thr-Pro-Arg-Thr-OH
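As a generic bioinformatics illustration (not part of the drug's development), the 17-residue sequence above can be converted to the one-letter amino-acid code with a short script; the dictionary below covers only the residues that occur in dirucotide:

```python
# Three-letter to one-letter amino-acid codes for the residues in dirucotide.
THREE_TO_ONE = {
    "Asp": "D", "Glu": "E", "Asn": "N", "Pro": "P", "Val": "V",
    "His": "H", "Phe": "F", "Lys": "K", "Ile": "I", "Thr": "T",
    "Arg": "R",
}

sequence = ("Asp-Glu-Asn-Pro-Val-Val-His-Phe-Phe-"
            "Lys-Asn-Ile-Val-Thr-Pro-Arg-Thr")

one_letter = "".join(THREE_TO_ONE[res] for res in sequence.split("-"))
print(one_letter)  # DENPVVHFFKNIVTPRT
```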
Research
Results from a phase II and long-term follow-up trial showed that dirucotide safely delayed median time to disease progression for five years in progressive MS patients with HLA-DR2 or HLA-DR4 immune response genes. It does not seem to be effective in patients with other gene variants.
The drug is exclusively licensed by BioMS Medical Corp., a Canadian-based biotechnology company. BioMS Medical received clearance from the Food and Drug Administration (FDA) to initiate a phase III clinical trial, named MAESTRO-03, for secondary progressive MS patients in January 2007. An additional phase III clinical trial, MAESTRO-01, was undertaken in Canada and Europe. In September 2008, the drug was granted FDA fast-track status.
A phase II trial of dirucotide as a potential therapy for relapsing-remitting multiple sclerosis (RRMS), MINDSET-01, failed to achieve its primary endpoint of reduced relapse rate, nor did it reduce new MRI lesions. It did, however, reduce progression on the Expanded Disability Status Scale (EDSS) and the Multiple Sclerosis Functional Composite (MSFC) scale. Phase III trials were subsequently conducted with reduction of EDSS and MSFC progression as primary endpoints.
BioMS Medical has agreed to share development of dirucotide with Eli Lilly and Company, which received exclusive worldwide rights to future research and development, manufacturing, and marketing of the compound.
Mechanism of action
T cells bearing receptors that recognize MBP fragments presented by the MHC molecules of antigen-presenting cells appear to play a role in the pathogenesis of MS. Repeated administration of dirucotide (intravenous, every six months) suppresses the immunological response against MBP.
Status of the development
On 27 July 2009, a statement was released, stating "BioMS Medical Corp. (TSX: MS) today announced that dirucotide did not meet the primary endpoint of delaying disease progression, as measured by the Expanded Disability Status Scale (EDSS), during the two-year MAESTRO-01 Phase III trial in patients with secondary progressive multiple sclerosis (SPMS). In addition, there were no statistically significant differences between dirucotide and placebo on the secondary endpoints of the study", this means that the MAESTRO-02 and MAESTRO-03 trials are discontinued.
References
Drugs developed by Eli Lilly and Company
Multiple sclerosis
Peptides | Dirucotide | [
"Chemistry"
] | 649 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
13,328,061 | https://en.wikipedia.org/wiki/Threitol | Threitol is the chiral four-carbon sugar alcohol with the molecular formula C4H10O4. It is primarily used as an intermediate in the chemical synthesis of other compounds. It exists in the enantiomorphic forms D-threitol and L-threitol, the reduced forms of D- and L-threose. It is the diastereomer of erythritol, which is used as a sugar substitute.
In living organisms, threitol is found in the edible fungus Armillaria mellea.
It serves as a cryoprotectant (antifreeze agent) in the Alaskan beetle Upis ceramboides.
See also
Antifreeze protein
Dithiothreitol, a thiol derivative of threitol
References
External links
Sugar alcohols
Tetroses
Tetrols | Threitol | [
"Chemistry"
] | 181 | [
"Carbohydrates",
"Sugar alcohols",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
13,328,208 | https://en.wikipedia.org/wiki/Chicago%20school%20%28mathematical%20analysis%29 | The Chicago school of mathematical analysis is a school of thought in mathematics that emphasizes the applications of Fourier analysis to the study of partial differential equations. Mathematician Antoni Zygmund co-founded the school with his doctoral student Alberto Calderón at the University of Chicago in the 1950s. Over the years, Zygmund mentored over 40 doctoral students at the University of Chicago.
Key people
Antoni Zygmund
Alberto Calderón
Paul Cohen, Fields Medal winner (1966)
Charles Fefferman, Fields Medal winner (1978)
Eli Stein
Comments
The Chicago school of analysis is considered to be one of the strongest schools of mathematical analysis in the 20th century, which was responsible for some of the most important developments in analysis.
Awards
In 1986, Antoni Zygmund received the National Medal of Science, in part for his "creation and leadership of the strongest school of analytical research in the contemporary mathematical world."
See also
Joseph Fourier
Mathematical analysis
References
University of Chicago
Philosophical schools and traditions | Chicago school (mathematical analysis) | [
"Mathematics"
] | 198 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
13,328,505 | https://en.wikipedia.org/wiki/Combustion%20light-gas%20gun | A combustion light-gas gun (CLGG) is a projectile weapon that utilizes the explosive force of low molecular-weight combustible gases, such as hydrogen mixed with oxygen, as propellant. When the gases are ignited, they burn, expand and propel the projectile out of the barrel with higher efficiency relative to solid propellant and have achieved higher muzzle velocities in experiments. Combustion light-gas gun technology is one of the areas being explored in an attempt to achieve higher velocities from artillery to gain greater range. Conventional guns use solid propellants, usually nitrocellulose-based compounds, to develop the chamber pressures needed to accelerate the projectiles. CLGGs' gaseous propellants are able to increase the propellant's specific impulse. Therefore, hydrogen is typically the first choice; however, other propellants like methane can be used.
While this technology does appear to provide higher velocities, the main drawback of gaseous or liquid propellants for gun systems is the difficulty of achieving uniform and predictable ignition and muzzle velocities. Variance in muzzle velocity affects precision in range, and the further a weapon shoots, the more significant these variances become; if an artillery system cannot maintain uniform and predictable muzzle velocities, it is of little use at longer ranges. Another issue is the survival of projectile payloads at higher accelerations. Fuzes, explosive fill, and guidance systems all must be "hardened" against the significant acceleration loads of conventional artillery to survive and function properly, and higher-velocity weapons like the CLGG push these engineering challenges further as they raise firing accelerations.
The research and development firm UTRON, Inc. is experimenting with a combustion light-gas gun design for field use. The company claims to have a system ready for testing as a potential long-range naval fire support weapon for emerging ships, such as the Zumwalt-class destroyer. The CLGG, like the railgun, is a candidate technology for extending the range of naval gun systems, among others. UTRON has built and tested 45mm and 155mm combustion light-gas guns.
See also
Light-gas gun
Scram cannon
Electrothermal-chemical technology
Potato cannon
References
UTRON 2006 Test Report: https://apps.dtic.mil/dtic/tr/fulltext/u2/a462130.pdf
Artillery by type
Ballistics | Combustion light-gas gun | [
"Physics"
] | 498 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
13,328,566 | https://en.wikipedia.org/wiki/Insulated%20shipping%20container | Insulated shipping containers are a type of packaging used to ship temperature sensitive products such as foods, pharmaceuticals, organs, blood, biologic materials, vaccines and chemicals. They are used as part of a cold chain to help maintain product freshness and efficacy. The term can also refer to insulated intermodal containers or insulated swap bodies.
Construction
A variety of constructions have been developed. An insulated shipping container might be constructed of:
a vacuum flask, similar to a "thermos" bottle
fabricated thermal blankets or liners
molded expanded polystyrene foam (EPS, styrofoam), similar to a cooler
other molded foams such as polyurethane, polyethylene
sheets of foamed plastics
Vacuum Insulated Panels (VIPs)
reflective materials: (metallised film)
bubble wrap or other gas filled panels
other packaging materials and structures
Some are designed for single use while others are returnable for reuse. Some insulated containers are decommissioned refrigeration units. Some empty containers are sent to the shipper disassembled or “knocked down”, assembled and used, then knocked down again for easier return shipment.
Shipping containers are available for maintaining cryogenic temperatures with the use of liquid nitrogen; some carriers offer these as a specialized service.
Use
Insulated shipping containers are part of a comprehensive cold chain which controls and documents the temperature of a product through its entire distribution cycle. The containers may be used with a refrigerant or coolant such as:
block or cube ice, slurry ice
dry ice
Gel or ice packs (often formulated for specific temperature ranges)
Phase change materials (PCMs)
Some products (such as frozen meat) have sufficient thermal mass to contribute to temperature control, so no additional coolant is required.
A digital temperature data logger or a time-temperature indicator is often enclosed to monitor the temperature inside the container throughout the shipment.
Labels and appropriate documentation (internal and external) are usually required.
Personnel throughout the cold chain need to be aware of the special handling and documentation required for some controlled shipments. With some regulated products, complete documentation is required.
Design and evaluation
The use of “off the shelf” insulated shipping containers does not necessarily guarantee proper performance. Several factors need to be considered:
the sensitivity of the product to temperatures (high and low) and to time at temperatures
the specific distribution system being used: the expected (and worst case) time and temperatures
regulatory requirements
the specific combination of packaging components and materials being used
In specifying an insulated shipping container, the two primary characteristics of the material are its thermal conductivity or R-value, and its thickness. These two attributes will help determine the resistance to heat transfer from the ambient environment into the payload space. The coolant material load temperature, quantity, latent heat, and sensible heat will help determine the amount of heat the parcel can absorb while maintaining the desired control temperature. Combining the attributes from the insulator and coolant will allow analysis of expected duration of the insulated shipping container system. Testing of multi-component systems is needed.
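The duration analysis described above can be sketched as a back-of-envelope calculation: the steady-state heat leak through the insulation is Q = A·ΔT/R, and the hold time is the coolant's total latent heat divided by that leak. The numbers below (panel area, R-value, temperature difference, ice mass) are arbitrary assumptions for illustration; as the text notes, real multi-component systems must be validated by testing:

```python
def hold_time_hours(area_m2: float, r_value_si: float, delta_t_k: float,
                    coolant_kg: float, latent_heat_j_per_kg: float) -> float:
    """Estimate how long a coolant charge lasts against a steady heat leak.

    r_value_si is the insulation's thermal resistance in m^2*K/W.
    """
    heat_leak_w = area_m2 * delta_t_k / r_value_si      # Q = A * dT / R
    total_heat_j = coolant_kg * latent_heat_j_per_kg    # heat absorbed at melt
    return total_heat_j / heat_leak_w / 3600

# 1 m^2 of panels, R = 1.5 m^2*K/W, 20 K ambient excess, 2 kg of water ice
# (latent heat of fusion ~334 kJ/kg) -> roughly 14 hours of hold time.
print(f"{hold_time_hours(1.0, 1.5, 20.0, 2.0, 334_000):.1f} h")
```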
It is wise (and sometimes mandatory) to have formal verification of the performance of the insulated shipping container. Laboratory package testing might include ASTM D3103-07, Standard Test Method for Thermal Insulation Performance of Packages, ISTA Guide 5B: Focused Simulation Guide for Thermal Performance Testing of Temperature Controlled Transport Packaging, and others. In addition, validation of field performance (performance qualification) is extremely useful.
Specialists in design and testing of packaging for temperature sensitive products are often needed. These may be consultants, independent laboratories, universities, or reputable vendors. Many laboratories have certifications and accreditations: ISO 9000s, ISO/IEC 17025, etc.
Environmental impact
Parcel- to pallet-sized insulated shipping containers have historically been single-use products because of the low cost of EPS and water-based gel packs. The insulation material typically ends up in landfill streams, as it is not readily recyclable in the United States.
Reusable high-performance shipping containers have been shown to reduce packaging waste by 95% while also significantly reducing other environmental pollutants.
See also
Cold chain
Packaging engineering
Shelf life
Slurry ice
Temperature measurement
Thermal bag
Thermal insulation
Heat transfer
Validation (drug manufacture)
Verification and Validation
References
External links and resources
"Cold Chain Management", 2003, 2006,
Brody, A. L., and Marsh, K, S., "Encyclopedia of Packaging Technology", John Wiley & Sons, 1997,
Lockhart, H., and Paine, F.A., "Packaging of Pharmaceuticals and Healthcare Products", 2006, Blackie,
Food safety
Drug distribution
Shipping containers
Temperature control
Thermal protection
Articles containing video clips | Insulated shipping container | [
"Technology"
] | 966 | [
"Home automation",
"Temperature control"
] |
13,329,119 | https://en.wikipedia.org/wiki/Distributed%20concurrency%20control | Distributed concurrency control is the concurrency control of a system distributed over a computer network (Bernstein et al. 1987, Weikum and Vossen 2001).
In database systems and transaction processing (transaction management), distributed concurrency control refers primarily to the concurrency control of a distributed database. It also refers to concurrency control in multidatabase (and other multi-transactional-object) environments (e.g., federated database, grid computing, and cloud computing environments). A major goal of distributed concurrency control is distributed serializability (or global serializability for multidatabase systems). Distributed concurrency control poses special challenges beyond centralized concurrency control, primarily due to communication and computing latency. It often requires special techniques, such as a distributed lock manager operating over fast computer networks with low latency, like switched fabric (e.g., InfiniBand).
The most common distributed concurrency control technique is strong strict two-phase locking (SS2PL, also named rigorousness), which is also a common centralized concurrency control technique. SS2PL provides both serializability and strictness. Strictness, a special case of recoverability, is utilized for effective recovery from failure. For large-scale distribution and complex transactions, the typically heavy performance penalty of distributed locking (due to delays and latency) can be avoided by using the atomic commitment protocol, which is in any case needed in a distributed database for (distributed) transactions' atomicity.
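As a minimal single-process sketch of the SS2PL discipline (not a distributed implementation: deadlock handling, separate read/write lock modes, and the network coordination that makes distributed locking expensive are all omitted, and the class name is an invention for illustration), the key invariant is that every lock acquired during a transaction is held until commit:

```python
import threading

class SS2PLTransaction:
    """Strong strict two-phase locking: locks are released only at commit."""

    def __init__(self, lock_table: dict):
        self.lock_table = lock_table  # shared map: key -> threading.Lock
        self.held = []                # locks this transaction currently owns

    def write(self, store: dict, key, value):
        lock = self.lock_table.setdefault(key, threading.Lock())
        if lock not in self.held:
            lock.acquire()            # growing phase: block until granted
            self.held.append(lock)
        store[key] = value

    def commit(self):
        for lock in self.held:        # release phase happens all at once,
            lock.release()            # only after the transaction finishes
        self.held.clear()

store, locks = {}, {}
t = SS2PLTransaction(locks)
t.write(store, "x", 1)
t.write(store, "y", 2)
t.commit()  # both locks are released here, never earlier
```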
See also
Global concurrency control
References
Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman (1987): Concurrency Control and Recovery in Database Systems, Addison Wesley Publishing Company, 1987,
Gerhard Weikum, Gottfried Vossen (2001): Transactional Information Systems, Elsevier,
Data management
Distributed computing problems
Databases
Concurrency control
Transaction processing | Distributed concurrency control | [
"Mathematics",
"Technology"
] | 364 | [
"Distributed computing problems",
"Computational problems",
"Data management",
"Data",
"Mathematical problems"
] |
13,330,152 | https://en.wikipedia.org/wiki/IBM%20Lotus%20Symphony | IBM Lotus Symphony is a discontinued suite of applications for creating, editing, and sharing text, spreadsheet, presentations, and other documents and browsing the World Wide Web. It was first distributed as commercial proprietary software, then as freeware, before IBM contributed the suite to the Apache Software Foundation in 2014 for inclusion in the free and open-source Apache OpenOffice software suite.
First released in 2007, the suite has a name similar to the 1980s DOS Lotus Symphony suite, but the two software suites are otherwise unrelated. The previous Lotus application suite, Lotus SmartSuite, is also unrelated.
IBM discontinued development of Lotus Symphony in January 2012 with the final release of version 3.0.1, moving future development effort to Apache OpenOffice, and donating the source code to the Apache Software Foundation.
Features
IBM Lotus Symphony consists of:
IBM Lotus Symphony Documents, a word processor program
IBM Lotus Symphony Spreadsheets, a spreadsheet program
IBM Lotus Symphony Presentations, a presentation program
A Web browser based on Firefox 3
Each application is split into tabs, so multiple documents can be open within a single window.
Symphony supports the OpenDocument formats as well as the binary Microsoft Office formats. It can also export Portable Document Format (PDF) files and import Office Open XML files. Previous support for Lotus SmartSuite formats was disabled in Symphony 3.
Symphony is based on Eclipse Rich Client Platform from IBM Lotus Expeditor (the shell) and OpenOffice.org 3 (the core office-suite code).
In 2009, IBM created development tools for BlackBerry smartphones to link to IBM's business software, which also allowed opening ODF file formats, with a full version of Symphony to follow later.
Lotus Symphony 3.0.1 added enhancements including support for one million spreadsheet rows, bubble charts, and a new design for the home page. On 27 March 2012 a first fixpack update for Lotus Symphony 3.0.1 was released. On 29 November 2012 a second fixpack update for Lotus Symphony 3.0.1 was released.
A web based version of Symphony, called LotusLive Symphony, was launched in 2011.
History
Symphony has its roots in the IBM Workplace Managed Client component of IBM Workplace. In 2006, IBM introduced Workplace Managed Client version 2.6, which included "productivity tools"—a word processor, spreadsheet, and presentation program—that supported ODF. Workplace used code from OpenOffice.org version 1.1.4, the last version released under the Sun Industry Standards Source License, which allowed for release of binaries of modified versions without releasing changes.
Later in 2006, IBM announced that Lotus Notes 8, which already incorporated Workplace technology, would also include the same productivity tools as the Workplace Managed Client. In 2007, IBM released Notes 8, and then released Notes' productivity tools as a standalone application, Symphony, in a beta one month later. The code in Symphony is the same as that for Notes 8's productivity tools. IBM released version 1.0 of Lotus Symphony in May 2008 as a free download, and introduced three minor upgrades through 2008 and 2009.
In 2010, IBM released version 3.0. Symphony 3.0 was based on OpenOffice.org 3.0, though not under the LGPL but under a special arrangement between IBM and Sun (which required copyright assignment of all outside OpenOffice.org contributions). It includes enhancements such as new sidebars in its user interface and support for Visual Basic for Applications macros, OpenDocument Format 1.2, and OLE. Symphony 3.0 was originally planned to include other existing OpenOffice.org modules, including an equation editor, database software, and a drawing program.
The software was developed by IBM China Development Laboratory, located in Beijing, which later for a brief time developed Apache OpenOffice.
On 13 July 2011, IBM announced that it would donate Lotus Symphony to the Apache Foundation. On 23 January 2012, IBM announced version 3.0.1 would be the last version of Lotus Symphony and their efforts would be going into the Apache OpenOffice project, including the Symphony user interface. IBM planned to release an "Apache OpenOffice IBM Edition" after the release of Apache OpenOffice 4, but later decided that it would offer the stock Apache OpenOffice with IBM extensions.
There were complaints that IBM and the Apache Software Foundation did not really provide an open source release of the Lotus Symphony code, although IBM had promised to donate the code to Apache. It was reported that some LibreOffice developers wanted to adopt code and bug fixes that IBM had already made in its OpenOffice fork.
Usage share
During the Lotusphere event in 2009, IBM confirmed its cost-reduction effort using Lotus Symphony, with the company migrating its 400,000 users from Microsoft Office to Lotus Symphony. In June 2008 IBM urged its 20,000 'strong-techies' employees to use Symphony instead of Microsoft Office and later in September 2009 IBM forced all 360,000 employees to use Symphony.
In March 2009, a study showed that Lotus Symphony had a 2% market share in the corporate market.
As of January 2011, IBM stated that Lotus Symphony had 12 million users and 50 million downloads.
Version release dates
Beta 1
Released on 18 September 2007
Beta 2
Released on 5 November 2007
Beta 3
Released on 17 December 2007
Released in 23 languages on 7 January 2008
Beta 4
Released on 1 February 2008. Introduced the Lotus Symphony Developer Toolkit.
Revised edition released on 3 March 2008
Version 1.0
Released on 30 May 2008
Version 1.1
Released on 29 August 2008
Version 1.2
Released on 4 November 2008
Revised edition released on 23 February 2009
Version 1.3
Released on 10 June 2009
Revised edition released on 1 September 2009
Version 3 Beta
Released on 4 February 2010
Version 3 Beta 2
Released on 4 February 2010
Features: Visual Basic macros, OLE Objects and embedded audio/video; support for nested tables, presentation masters and DataPilot tables for pivoting on large datasets.
Version 3 Beta 3
Released on 7 June 2010
Version 3 Beta 4
Released on 26 August 2010
Version 3.0
Released 21 October 2010
Version 3.0 FixPack 1
Released 13 January 2011
Version 3.0 FixPack 2
Released 20 April 2011
Version 3.0 FixPack 3
Released 20 July 2011
Version 3.0.1
Released 23 January 2012
Version 3.0.1 FixPack 1
Released 27 March 2012
Version 3.0.1 FixPack 2
Released 29 November 2012
See also
Office Open XML software
OpenDocument software
References
External links
2007 software
Discontinued software
Lotus Software software
MacOS word processors
Office suites
Office suites for Linux
Office suites for macOS
Office suites for Windows
OpenOffice
Spreadsheet software | IBM Lotus Symphony | [
"Mathematics"
] | 1,357 | [
"Spreadsheet software",
"Mathematical software"
] |
13,330,203 | https://en.wikipedia.org/wiki/Linnik%20interferometer | A Linnik interferometer is a two-beam interferometer used in microscopy and surface contour measurements or topography. The basic configuration is the same as a Michelson interferometer. What distinguishes the Linnik configuration is the use of measurement optics in the reference arm, which essentially duplicate the objective measurement optics in the measurement arm. The advantage of this design is its ability to compensate for chromatic dispersion and other optical aberrations.
In one schematic of a Linnik interferometer (labels from the original figure), 110 is the light source and 164 the detector. The beamsplitter 120 produces the two arms of the interferometer. The measurement arm 140 contains an objective lens 141 for imaging the surface under study 152, while the reference arm 130 contains complementary optics to compensate for aberrations produced in the measurement arm.
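The fringe signal recorded by any two-beam interferometer, including the Linnik configuration, follows the standard relation I = I1 + I2 + 2·sqrt(I1·I2)·cos(2πΔ/λ), where Δ is the optical path difference between the arms. A short illustrative calculation, with arbitrary equal beam intensities and a helium-neon wavelength assumed:

```python
import math

def two_beam_intensity(i1: float, i2: float,
                       opd_nm: float, wavelength_nm: float) -> float:
    """Detected intensity for a two-beam interferometer.

    opd_nm is the optical path difference between the measurement
    and reference arms.
    """
    phase = 2 * math.pi * opd_nm / wavelength_nm
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(phase)

# Scanning the path difference traces out fringes (lambda = 632.8 nm):
# OPD 0 gives a bright fringe, lambda/2 a dark one.
for opd in (0.0, 158.2, 316.4):
    print(f"OPD {opd:6.1f} nm -> I = {two_beam_intensity(1, 1, opd, 632.8):.2f}")
```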
See also
List of types of interferometers
References
Interferometers | Linnik interferometer | [
"Technology",
"Engineering"
] | 184 | [
"Interferometers",
"Measuring instruments"
] |
13,330,828 | https://en.wikipedia.org/wiki/Synthetic%20jet | In fluid dynamics, a synthetic jet flow—is a type of jet flow, which is made up of the surrounding fluid. Synthetic jets are produced by periodic ejection and suction of fluid from an opening. This oscillatory motion may be driven by a piston or diaphragm inside a cavity among other ways.
A synthetic jet flow was so named by Ari Glezer because the flow is "synthesized" from the surrounding or ambient fluid. Producing a conventional jet requires an external source of fluid, such as piped-in compressed air or plumbing for water.
Synjet devices
Synthetic jet flow can be developed in a number of ways, such as with an electromagnetic driver, a piezoelectric driver, or even a mechanical driver such as a piston. Each moves a membrane or diaphragm up and down hundreds of times per second, sucking the surrounding fluid into a chamber and then expelling it. Although the mechanism is fairly simple, extremely fast cycling requires high-level engineering to produce a device that will last in industrial applications.
For hot spot thermal management, the SynJet, commercially offered by the Austin, Texas–based company Nuventix, was patented in 2000 by engineers at Georgia Tech. The tiny SynJet module creates jets that can be directed to precise locations for industrial spot cooling. Traditionally, metallic heat sinks conduct heat away from electronic components and into the air, and then a small fan blows the hot air out. SynJet modules replace or augment cooling fans for such devices as microprocessors, memory chips, graphics chips, batteries, and radio frequency components. SynJet technology has also been used for the thermal management of high-power LEDs.
Synthetic jet modules have also been widely researched for controlling airflow in aircraft to enhance lift, increase maneuverability, control stalls, and reduce noise. Problems in applying the technology include weight, size, response time, force, and complexity of controlling the flows.
A Caltech researcher has even tested synthetic jet modules to provide thrust for small underwater vehicles, modeled on the natural jets that squid and jellyfish produce. More recently, a research team at the School of Engineering, Taylor's University (Malaysia), successfully used synthetic jets as mixing devices; they have proved effective especially for shear-sensitive materials.
References
Fluid dynamics | Synthetic jet | [
"Chemistry",
"Engineering"
] | 469 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
13,330,831 | https://en.wikipedia.org/wiki/Ostwald%27s%20rule | In materials science, Ostwald's rule or Ostwald's step rule, conceived by Wilhelm Ostwald, describes the formation of polymorphs. The rule states that usually the less stable polymorph crystallizes first. Ostwald's rule is not a universal law but a common tendency observed in nature.
This can be explained on the basis of irreversible thermodynamics, structural relationships, or a combined consideration of statistical thermodynamics and structural variation with temperature. Unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged.
For example, out of hot water, metastable fibrous crystals of benzamide appear first, only later converting spontaneously to the more stable rhombic polymorph. A dramatic example is phosphorus, which upon sublimation first forms the less stable white allotrope, which only slowly polymerizes to the red allotrope. Another notable case is the anatase polymorph of titanium dioxide: having a lower surface energy, it is commonly the first phase to form by crystallisation from amorphous precursors or solutions despite being metastable, with rutile being the equilibrium phase at all temperatures and pressures.
References
Mineralogy
Gemology
Crystallography | Ostwald's rule | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 273 | [
"Crystallography",
"Polymorphism (materials science)",
"Condensed matter physics",
"Materials science"
] |
13,332,199 | https://en.wikipedia.org/wiki/Robik | ALU Robik () was a Soviet and Ukrainian ZX Spectrum clone produced between July 1989 and January 1998 by the NPO "Rotor" in Cherkasy. Over 70 000 was produced, while few millions was planned.
Specification
The Robik is a monoblock computer in a keyboard form factor, with an external power supply unit permanently connected by a cable.
Older cases were produced by SELTO and newer ones by the NPO "Rotor"; the manufacturer's logo appears on the back of the keyboard case.
Motherboard
It came in four versions, with only minor changes made for Russian internationalization and localization. The hardware remained largely unchanged, but cheaper parts were used in each successive version. The fourth version added a single new integrated circuit; it did not sell well because by then the Robik's main market was hardware enthusiasts, and the new design did not allow for modifications.
The Robik had two EPROM chips. The M2764AF-1 chip from ST contains two languages, which can be switched with shortcut keys.
Keyboard
The computer came with 55 keys and could switch between Latin and Russian fonts.
A total of 55 keys in the main group:
Two ("Reset" keys), ("Delete"), , ;
Full English/Russian (QWERTY/ЙЦУКЕН) keyboard;
Two , and in some variants;
and stop keys;
("Fire" key), ("Multifunctional" key, in some variants changed to third one ), ;
("CAPS C") and ("CAPS L") keys.
Four keys in a separate group (on the right, next to main group) — cursor keys (together with "Fire" key it also worked as a joystick).
The key-cap legends were applied with laser-beam technology, so the labels appeared as outlined symbols (the last version used stamp printing instead, producing filled symbols).
PKM 1B
The keyboard buttons are based on PKM 1B reed switches rather than copper or iron contact plates. Initially, PKM 1B switches produced by a plant in Ukraine were used, but their original production was discontinued while the Robik was still being made, so the NPO "Rotor" launched its own production line for the PKM 1B switches.
Peripherals
The Robik had four ports on the back side: ВИДЕО ("Video"), RGB, JS-K, ◯_◯ ("Tape"). It had no edge connector and video output was analog RGB on a 5-pin DIN or digital TTL on an 8-pin DIN.
Inside the case there was a male 64-pin connector that could be mapped to the standard edge connector.
Display
The Robik can be connected to either a monochrome MDA/Hercules or color EGA monitor (via the ВИДЕО output), or to a color TV (via the RGB output). For the RGB output there are adjusters for each color channel (R, G and B), as well as an overall color-inversion toggle, all accessible through marked access holes on the bottom of the computer.
There was no composite video and all I/O ports were 5- and 7-pin DINs.
When the screen memory was written out to the TV/monitor, output did not begin at the top left of the border but at the border directly below the paper area, so most multicolor effects and some games did not work correctly. Errors in the ROM were fixed and Cyrillic letters were also inserted.
The keyboard matrix was extended from five keys in eight rows to five keys in nine rows to allow for more buttons. A reset could be performed by pressing two buttons.
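Matrix scanning of this kind can be illustrated generically. The sketch below is not the Robik's actual port map, which the source does not give; it only shows how a 9-row-by-5-column matrix is polled, with each row read returning a 5-bit mask of pressed keys:

```python
# Generic keyboard-matrix scan: 9 rows x 5 columns, one bit per key.
ROWS, COLS = 9, 5

def scan(read_row):
    """Poll every row; read_row(r) returns a 5-bit mask (1 = pressed)."""
    pressed = []
    for row in range(ROWS):
        mask = read_row(row)
        for col in range(COLS):
            if mask & (1 << col):
                pressed.append((row, col))
    return pressed

# Fake hardware for demonstration: only the key at row 8, column 0 is down.
print(scan(lambda row: 0b00001 if row == 8 else 0))  # -> [(8, 0)]
```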
Sound
NPO "Rotor" produced an external music sound device for the Robik.
External storage
The Robik has no internal mass storage and uses cassette tapes as external storage.
A cassette deck connects via the ◯_◯ ("Tape") port for reading and writing data.
There was also an external floppy disk drive produced for the Robik by NPO "Rotor", allowing diskettes to be used instead of tapes.
Joystick
Joysticks connect via the JS-K port using the Kempston interface.
Printer
A printer was developed for the Robik by NPO "Rotor".
Software
The Robik distribution includes a cassette tape with seven programs:
"BASICTEST" (computer testing and debugging application)
"DEVPAC" (compiler/decompiler/debugger)
"RED" (text editor)
"KRACOUT" (video game)
"ART-STUDIO" (raster image editor)
"IS CHESS-48" (chess video game)
"BAT"
Legacy
A number of Robik computers are held in museums and private collections around the world:
, Kyiv and Kharkiv (Ukraine)
Museum of Computer Technology of the University of Lviv (Ukraine)
The It8bit.Club, Mariupol (Ukraine); the museum was destroyed in March 2022 by Russian shelling during the Battle of Mariupol, and the website continues as a virtual museum
Ust'-Kamenogorsk Computer Museum (Kazakhstan)
The Peek & Poke Computer Museum, Rijeka (Croatia)
Le musée de l'informatique Silicium, Corbarieu (France)
Kompjutry.cz, moving computer museum (Czechia)
The Home Computer Museum, Oberhausen (Germany)
The Number Crunchers Homecomputer Museum (Germany)
The Freeman PC Museum, Long Beach (California, USA)
Rhode Island Computer Museum, Warwick (Rhode Island, USA)
Arcade Vintage Museum (), Ibi (Spain)
Reale-Rydell Computer Museum, virtual museum
Hal's friends Computer Museum, virtual museum (Italy)
Osmibitóve muzeum, virtual museum (Chechia)
The Clueless Engineer, vlogger (Australia)
Personal Computer Museum, Ontario (Canada)
In 2017, the "LandauCenter" at the National University of Kharkiv organized an exhibition of 1980s computers. The exposition included a Robik from the Software and Computer Museum collection.
Since the end of production, many Robik computers have remained available on the secondary market.
Facts
On 22 May 1993, a classified advertisement for the Robik appeared on the cover of a printed supplement to a local magazine in Kryvyi Rih (Ukraine).
NPO "Rotor" donated hundreds of Robik computers to local schools for free.
The hardware contained about three to four grams of gold and almost eighteen grams of silver, and other rare metals were present in various electronic components. As a result, many Robik computers have been dismantled to recover the valuable parts.
See also
List of ZX Spectrum clones
Publications
Documents from the NPO «Rotor» archives, published in the «Legends of Bytes», Issue 10 (2024).
АЛП "Робик" [ALP "Robik"] (Circuit diagram) (DjVu)
Video
The Clueless Engineer.
Bazza H.
References
External links
Robik_Basic48.rom (firmware ROM for the Robik) at the SpecciWiki.info
Robik_Tape.zip (TAP files of the original software cassette tape) at the Amiga.nsk.ru
Robik (keyboard layout, RAW data) for the Keyboard Layout Editor:
[{c:"#757575",a:7,f:4},"RES",{c:"#cccccc",a:4,f:5},"!\n1","@\n2","#\n3","$\n4","%\n5\n\n\n<","&\n6\n\n\n⌃","'\n7\n\n\n⌄","(\n8\n\n\n>",")\n9","—\n0",{a:7,f:9},".",{c:"#757575",f:4},"DEL"],
[{w2:1.5},"EDIT",{x:0.5,c:"#cccccc",a:4,f:5},"Q\n\n\nЙ\nPLOT","W\n\n\nЦ\nDRAW","E\n\n\nУ\nREM","R\n\n\nК\nRUN","T\n\n\nЕ\nRAND","Y\n\n\nН\nRET","U\n\n\nГ\nIF","I\n\n\nШ\nINPUT",{fa:[0,0,6]},"O\n\n;\nЩ\nPOKE","P\n\n\"\nЗ\nPRINT",{c:"#757575",a:7,f:4,w2:1.5},"ENTER"],
["RES",{c:"#cccccc",f:9},",",{a:4,f:5},"A\n\n\nФ\nNEW","S\n\n\nЫ\nSAVE","D\n\n\nВ\nDIM","F\n\n\nА\nFOR","G\n\n\nП\nGOTO","H\n\n\nР\nGOSUB",{fa:[0,0,6]},"J\n\n-\nО\nLOAD","K\n\n+\nЛ\nLIST","L\n\n=\nД\nLET","\n\n\nЖ\n\n\n\n\n}",{c:"#757575"},"\n\n|\nЭ"],
[{a:5,f:4,w2:1.5},"CAPS\nSHIFT",{x:0.5,c:"#cccccc",a:4,f:5},"\n\n\nХ\n\n\n\n\n{","Z\n\n\nЯ\nCOPY\n\n\n\n:","X\n\n\nЧ\nCLEAR",{fa:[0,0,6]},"C\n\n?\nС\nCONT","V\n\n/\nМ\nCLS","B\n\n*\nИ\nBORD","N\n\n\nТ\nNEXT","M\n\n\nЬ\nPAUSE","\n\n~\nБ","\n\n\nЮ",{c:"#757575",a:5,f:4,w2:1.5},"SYMB\nSHIFT",{x:2.5,c:"#cccccc",a:7,f:3,h2:2},"⮜","⮝",{h2:2},"⮞"],
[{c:"#757575",f:9,w2:1.5},"⌖",{x:0.5,f:4,w2:1.5},"MF",{x:0.5,c:"#cccccc",f:3,w2:8},"",{x:7,c:"#757575",f:4},"C","L",{x:3,c:"#cccccc",f:3},"⮟"]
Robik at the SpecciWiki.info
Robik at the SinclairCollection.site
Robik at the Amiga.nsk.ru
Robik at the It8bit.club
Robik at the HomeComputer.de
Robik at the Leningrad.su
Robik at the Interface1.net
Robik at the Witchcraft.org.ua ()
Witchcraft Creative Group at the SpecciWiki.info
ZX Spectrum clones
Soviet computer systems | Robik | [
"Technology"
] | 2,713 | [
"Computer systems",
"Soviet computer systems"
] |
13,332,384 | https://en.wikipedia.org/wiki/Ringo%20R470 | Ringo R-470 was a Brazilian clone of the Sinclair ZX81 by Ritas do Brasil Ltda. introduced in 1983. It featured a Z80A processor at 3.25 MHz, 8K ROM and 16 KB RAM. It wasn't 100% compatible with the ZX81, and some BASIC tokens have alternate codings.
It had a connector port for a 1200 bit/s modem and a joystick, and supported data storage using an external cassette recorder at 300 and 2400 bit/s.
There was a keyboard option to display "inverted video" (white background with black characters). The computer also featured separate arrow keys.
The machine was priced at Cr$449,950, higher than competitors such as the TK85 (Cr$369,850), and was not commercially successful.
This computer can be emulated on modern systems under EightyOne Sinclair Emulator or MAME.
Keywords and symbols
BASIC keywords and character mapping are slightly altered on the Ringo R-470 compared to the ZX81. Entry is still accomplished via keyword tokens, obtained using different cursor modes and key combinations, but these differ from those of the ZX81.
References
Computers designed in Brazil
Goods manufactured in Brazil
Z80-based home computers
Sinclair ZX81 clones
Computer-related introductions in 1983 | Ringo R470 | [
"Technology"
] | 300 | [
"Computing stubs",
"Computer hardware stubs"
] |
13,332,454 | https://en.wikipedia.org/wiki/Christopher%20Voigt | Christopher Voigt is an American synthetic biologist, molecular biophysicist, and engineer.
Career
Voigt is the Daniel I.C. Wang Professor of Advanced Biotechnology in the Department of Biological Engineering at Massachusetts Institute of Technology (MIT). He works in the developing field of synthetic biology. He is the co-director of the Synthetic Biology Center at MIT and the co-founder of the MIT-Broad Foundry.
His research interests focus on the programming of cells to perform coordinated and complex tasks for applications in medicine, agriculture, and industry. His works include:
Design of genetic circuits in bacteria, yeast and mammalian cells. Encoded in DNA, these circuits implement computational operations inside of cells.
Software to program living cells (Cello), which applies principles from electronic design automation and uses the Verilog hardware description language.
Genetically encoded sensors that enables cells to respond to chemicals, environmental cues, and colored light.
Computational tools to design precision genetic parts, based on biophysics, bioinformatics, and machine learning.
Therapeutic bacteria to navigate the human body and identify and correct disease states.
Redesign of the nitrogen fixation gene cluster to facilitate its transfer between organisms and control with synthetic sensors and circuits.
Pharmaceutical discovery from large databases of DNA sequences, including the human gut microbiome, through high-throughput pathway recoding and DNA synthesis.
Harnessing cells to produce materials, including spider silk, nylon-6, and DNA nanomaterials.
In addition, he is the:
Founding Member of the National Science Foundation-funded Synthetic Biology Engineering Research Center (SynBERC), renamed the Engineering Biology Research Center (EBRC).
Editor-in-Chief of ACS Synthetic Biology.
Co-founder of the companies Asimov (cellular programming) and Pivot Biotechnologies (agriculture).
Co-founder of the Synthetic Biology: Engineering Evolution and Design (SEED) Conference Series.
Chair of the Scientific Advisory Board (SAB) of the Dutch chemical company DSM.
His former students have founded Asimov (mammalian synthetic biology), De Novo DNA (computational design), Bolt Threads (spider silk-based textiles), Pivot Bio (agriculture), and Industrial Microbes (methane consuming organisms).
External links
Official Group Website
ACS Editor Profile
SB7.0 Talk: Foundational Tools & Engineering
Synthetic Biology: Programming Living Bacteria
Decoding Synthetic Biology
Engineering Biology
References
Synthetic biologists
University of Michigan alumni
Living people
Year of birth missing (living people) | Christopher Voigt | [
"Biology"
] | 497 | [
"Synthetic biology",
"Synthetic biologists"
] |
13,332,537 | https://en.wikipedia.org/wiki/Datong%E2%80%93Qinhuangdao%20railway | Datong–Qinhuangdao railway or Daqin railway (), also known as the Daqin line (), is a 653 km coal-transport railway in north China. Its name is derived from its two terminal cities, Datong, a coal mining center in Shanxi province, and Qinhuangdao in Hebei province, on the Bohai Sea.
The electrified double-track line serves as a major conduit for moving coal produced in Shanxi, Shaanxi, and Inner Mongolia to Qinhuangdao, China's largest coal-exporting seaport; from there, coal is shipped to south China and other countries in Asia.
The railway also passes through the municipalities of Beijing and Tianjin. Unlike most other railways in China, which are run by the state-owned China Railway Corporation, the Daqin railway is operated by Daqin Railway Company Limited, a publicly traded stock company.
The Daqin railway carries over one-fifth of the coal transported by rail in China, more than any other railway line in China or the world.
The line was constructed in two phases between December 1984 and December 1992, with specifications changed from single track to double track during construction. Design capacity was 100 million tonnes a year, which the line reached after ten years, but continuous upgrades (wider subgrade, 75 kg/m rails, wagons with higher capacity and top speed, longer trains and stronger locomotives, radio operation and centralised traffic control, automatic train inspection) quadrupled capacity.
In 2006, the powerful locomotive models HXD1 and HXD2, with power outputs of 9.6 MW and 10 MW respectively, entered service on the Daqin line to replace the older DJ1 models.
Accidents and incidents
24 August 2020 - Four cars of a train derailed near Zhuolu railway station in Zhuolu County, Hebei province. No casualties were reported.
14 April 2022 - 17 cars of a freight train derailed after colliding with a parked locomotive near Cuipingshan railway station in Jizhou District, Tianjin; 11 of them fell from the elevated railway. No casualties or injuries were reported.
See also
Coal energy in China
List of railways in China
Rail transport in the People's Republic of China
References
Railway lines in China
Mining railways
Rail transport in Shanxi
Rail transport in Hebei
Rail transport in Beijing
Rail transport in Tianjin
Coal in China | Datong–Qinhuangdao railway | [
"Engineering"
] | 491 | [
"Mining equipment",
"Mining railways"
] |
13,332,891 | https://en.wikipedia.org/wiki/Day%20Out%20with%20Thomas | Day Out with Thomas is a trade name, licensed by Mattel for tourist events that take place on heritage railways and feature one or more engines decorated to look like characters from the popular long-running classic British children's television series Thomas & Friends. The events are held around the world in Australia, Canada, Japan, the Netherlands, New Zealand, the United Kingdom, and the United States of America. They include a full-day of activities for families in addition to rides on trains pulled by the customised steam locomotives resembling characters such as Thomas the Tank Engine.
Family activities
Day Out with Thomas family events include train rides and activities like live entertainment, scavenger hunts, bounce houses, mazes, lawn games, and stage shows. For example, an event in Ohio had a straw bale maze, bouncy houses, portable mini golf, model train displays, balloon artists, and Thomas Wooden Railway train tables.
Events often include characters like Sir Topham Hatt and Rusty and Dusty. The events usually last all day.
Full-scale Thomas locomotives
The Nene Valley Railway at Peterborough in England was the first railway in the world to possess a full-scale replica of Thomas, constructed from an industrial tank engine built by Hudswell Clarke in 1947. It was nicknamed "Thomas" and in 1971 was officially named by Rev. W. Awdry.
Since then, other tank engines around the world have appeared as Thomas. The Strasburg Rail Road and the Mid Hants Railway have built working replicas from original locomotives.
From 2008 onwards, many heritage railways in the UK have withdrawn their "Day Out with Thomas" events due to HiT's revised licensing conditions (which include the requirement for enhanced criminal records (CRB) checks on all of the railway's staff and volunteers). However, "Day Out with Thomas" events have thrived in the United States and Canada.
Country events
Australia
In Australia, several railways have hosted Day Out with Thomas events: in New South Wales the Zig Zag Railway, Lithgow, and the NSW Rail Museum, Thirlmere; in Queensland, the Workshop Rail Museum; and in Victoria the Puffing Billy Railway and the Bellarine Railway.
Netherlands
In the Netherlands annually, these events are held at Het Spoorwegmuseum in Utrecht.
New Zealand
In New Zealand, Mainline Steam's Bagnall tank locomotive has appeared as Thomas at a number of locations (including the Britomart Transport Centre in Auckland) and at the popular biannual "Day out with Thomas the Tank Engine" weekends at the Glenbrook Vintage Railway, south of Auckland.
United Kingdom
Many heritage railways across the UK have hosted Day Out with Thomas events over the years. Some events feature just Thomas himself, while others (such as those at the Watercress Line, East Lancashire Railway, East Anglian Railway Museum, Whistlestop Valley, Bo'ness and Kinneil Railway and the Caledonian Railway (Brechin)) also feature some of Thomas' friends, such as Percy, James, Toby, Diesel, Mavis and Duck. As of 2022, nine heritage railways in the UK host Day Out with Thomas events. Some railways also host a ‘Festive’ Day Out with Thomas, which features Father Christmas and additional activities. As a result of the licensing costs and other demands imposed by Mattel/HiT, some railways have replaced their Thomas events with similar ones that also feature engines with faces.
U.K. railways that have Day Out with Thomas events
East Lancashire Railway
Mid-Hants Railway
Bo'ness and Kinneil Railway
United States and Canada
Events have been held in Colorado, Minnesota, Pennsylvania, Georgia, Michigan, Maryland, Washington, California, Ohio, and North Carolina. In Canada, Thomas has visited locations including Toronto, Calgary, and Squamish.
Locomotives
The United States has six Thomas replicas: one is a steam locomotive and the others are dummy units. All were decorated or built by the Strasburg Rail Road, with the real steam engine being converted from Brooklyn Eastern District Terminal No. 15. The dummy units are used with a steam or diesel locomotive operating as a pusher, and have a compressed-air whistle powered by the train's compressed-air system. One unit is narrow gauge; the other four are standard gauge. An additional narrow gauge replica operated at the Edaville Railroad until 2022, though it returned to Edaville for a limited time in 2024.
While in transit between events, Thomas' face is covered to prevent it from getting damaged or dirtied. The dummy units are transported from location to location via flatbed truck. Thomas appears in full dress at Day Out with Thomas events hosted by railroads in arrangement with Mattel. Many of the larger railroad museums and tourist railroads across the United States host Day out with Thomas events periodically. The same trains are also used for the three Canadian events (in BC, Alberta and Ontario). The National Railroad Museum in Green Bay, Wisconsin was the first railroad museum in the United States to host a "Day Out with Thomas" event (unveiling a small Thomas replica in December 1996).
In September 2014, a full-scale replica of Percy was built, which is also a dummy unit. Before Percy's introduction, Thomas' original face was replaced in April 2014 with an animatronic CGI-style face that allows the mouth to open and close and to speak pre-recorded dialogue through a speaker. Initially, the voice lines for Thomas were provided by Martin Sherman (Thomas' US voice actor from 2009 to 2015), while the voice lines for Percy were provided by Christopher Ragland (US voice actor for Percy from 2015 to 2021).
In 2019, Mavis the diesel engine (who first appeared in the book Tramway Engines, the last of the original books) was introduced at the Strasburg Rail Road's event "Thomas, Mavis, and the Strasburg Spooktacular". Strasburg's SW8 #8618 switcher was redressed as Mavis for the event.
In 2020, the Strasburg Railroad introduced Rusty. Rusty is represented by the Strasburg Railroad's Plymouth Gas Locomotive #2 with altered lettering in a similar fashion to how Mavis was created. So far, Rusty has not run any trains but is instead on display for photo opportunities.
In 2022, the Thomas dummy units had their voices re-recorded by Meesha Contreras to fit with the 2021 reboot, Thomas & Friends: All Engines Go. The Percy unit also had his lines re-recorded by his then voice actor Charlie Zeltzer. The steam locomotive at Strasburg retains Martin Sherman as its voice actor. Despite this, the engines have not changed, and retain the original designs.
In 2023, Thomas and Percy were given splats of different paint colors on their paintwork to fit with the theme of that year’s tour, "Day Out with Thomas: Let's Get Colorful! Tour".
In 2024, Thomas and Percy were given bubbles on their paintwork to fit with the theme of "Day Out with Thomas: The Bubble Tour".
Japan
Japan's Oigawa Railway started running Thomas events in 2014. Families can travel from Shin-Kanaya Station to Senzu Station. There are activities and treats during the ride and at both terminals.
Locomotives and characters
The Thomas used at the Oigawa Railway is a modified and repainted version of the railway's existing JNR Class C11 227 locomotive. In addition to Thomas, two other locomotives at the railway were rethemed: The JNR Class 9600 49616 became a Hiro replica and was put on display at the station yard of Senzu Station while the Class DB1 No. DB9 became a Rusty replica.
In 2015, the railway introduced James, who was repainted from their JNR Class C56 No. 44 locomotive. The railway also refurbished and redecorated their disused JNR Class C12 No. 208 engine as a non-functioning replica of Percy, which sits alongside the Class 9600 that is decorated to resemble Hiro.
In 2016, a Hino Poncho bus was redecorated into a replica of Bertie. The Troublesome Trucks were also introduced, and are pulled by Rusty.
In 2018, a replica of Winston the track inspection car was introduced; guests can ride and operate him by pedalling. A replica of Flynn the Fire Engine was introduced in 2019, followed by a replica of Bulgy the double-decker bus in 2020, which runs alongside the existing replica of Bertie.
The Thomas locomotive was given a repaint in 2021 under the guise of his green L.B.S.C. livery from the 2015 direct-to-video special Thomas & Friends: The Adventure Begins.
In August 2022, a Toby the Tram Engine replica (modified and repainted from a DD20 diesel locomotive) was added to the lineup.
References
External links
Thomas & Friends
Heritage railways | Day Out with Thomas | [
"Engineering"
] | 1,827 | [
"Heritage railways",
"Engineering preservation societies"
] |
13,333,460 | https://en.wikipedia.org/wiki/Long%20March%20Launch%20Vehicle%20Technology | Long March Launch Vehicle Technology Co. Ltd. is an aerospace company under the China Aerospace Science and Technology Corporation. It is headquartered in Beijing, where most of its operations are based.
External links
http://www.rocketstock.com.cn/EnglishWeb
Companies based in Beijing
Aerospace companies of China
Companies listed on the Shanghai Stock Exchange
Government-owned companies of China | Long March Launch Vehicle Technology | [
"Astronomy"
] | 72 | [
"Rocketry stubs",
"Astronomy stubs"
] |
13,333,998 | https://en.wikipedia.org/wiki/Dielectric%20resonator%20antenna | A dielectric resonator antenna (DRA) is a radio antenna mostly used at microwave frequencies and higher, that consists of a block of ceramic material of various shapes, the dielectric resonator, mounted on a metal surface, a ground plane. Radio waves are introduced into the inside of the resonator material from the transmitter circuit and bounce back and forth between the resonator walls, forming standing waves. The walls of the resonator are partially transparent to radio waves, allowing the radio power to radiate into space.
An advantage of dielectric resonator antennas is that they lack metal parts, which become lossy at high frequencies and dissipate energy. These antennas can therefore have lower losses and be more efficient than metal antennas at high microwave and millimeter-wave frequencies. Dielectric waveguide antennas are used in some compact portable wireless devices and in military millimeter-wave radar equipment. The antenna was first proposed by Robert Richtmyer in 1939. In 1982, Long et al. carried out the first design and test of dielectric resonator antennas, using a leaky-waveguide model that assumed a magnetic-conductor model of the dielectric surface. In that first investigation, Long et al. explored the HEM11d mode in a cylindrical ceramic block to radiate broadside. Three decades later, another mode (HEM12d) bearing an identical broadside pattern was introduced by Guha in 2012.
An antenna-like effect is achieved by the periodic swing of electrons from the capacitive element to the ground plane, which behaves like an inductor. The authors further argued that the operation of a dielectric antenna resembles the antenna conceived by Marconi; the only difference is that the inductive element is replaced by the dielectric material.
Features
Dielectric resonator antennas offer the following attractive features:
The dimension of a DRA is of the order of $\lambda_0/\sqrt{\varepsilon_r}$, where $\lambda_0$ is the free-space wavelength and $\varepsilon_r$ is the dielectric constant of the resonator material. Thus, by choosing a high value of $\varepsilon_r$, the size of the DRA can be significantly reduced (see the numerical sketch after this list).
There is no inherent conductor loss in dielectric resonators. This leads to high radiation efficiency of the antenna. This feature is especially attractive for millimeter (mm)-wave antennas, where the loss in metal-fabricated antennas can be high.
DRAs offer simple coupling schemes to nearly all transmission lines used at microwave and mm-wave frequencies. This makes them suitable for integration into different planar technologies. The coupling between a DRA and the planar transmission line can be easily controlled by varying the position of the DRA with respect to the line. The performance of DRA can therefore be easily optimized experimentally.
The operating bandwidth of a DRA can be varied over a wide range by suitably choosing resonator parameters. For example, the bandwidth of the lower order modes of a DRA can be easily varied from a fraction of a percent to about 20% or more by the suitable choice of the dielectric constant of the material and/or by strategic shaping of the DRA element.
The use of multiple modes radiating identically has also been successfully addressed. One such example is the hybrid combination of a dielectric ring resonator and an electric monopole, which was initially explored by Lapierre. Multiple identical monopole-type modes in an annular dielectric ring resonator were theoretically analyzed by Guha to show their unique combinations with the mode of a traditional electric monopole, resulting in UWB antennas.
Each mode of a DRA has a unique internal and associated external field distribution. Therefore, different radiation characteristics can be obtained by exciting different modes of a DRA.
Differently radiating modes have also been employed to generate identical radiation patterns using composite geometries, with a special feature of wider bandwidth.
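As a numerical illustration of the size-reduction rule of thumb noted in the first feature above (an editorial sketch; the 10 GHz operating frequency and the permittivity value of 38 are assumptions chosen for illustration, not values from this article):

```python
import math

C = 3.0e8  # speed of light in free space, m/s

def dra_dimension_m(freq_hz: float, eps_r: float) -> float:
    """Rule-of-thumb DRA dimension, lambda0 / sqrt(eps_r), in metres."""
    lambda0 = C / freq_hz              # free-space wavelength
    return lambda0 / math.sqrt(eps_r)

# At 10 GHz the free-space wavelength is 30 mm; a ceramic with eps_r = 38
# shrinks the characteristic dimension to roughly 4.9 mm.
print(round(dra_dimension_m(10e9, 38) * 1e3, 2), "mm")  # -> 4.87 mm
```

The same calculation shows why high-permittivity ceramics are attractive for compact devices: since the dimension scales as $1/\sqrt{\varepsilon_r}$, quadrupling $\varepsilon_r$ halves the characteristic dimension.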
See also
Dielectric waveguide
Dielectric wireless receiver
References
External links
Animation of Radiation from a Circularly Tapered Dielectric Waveguide Antenna (on YouTube)
Notes
Radio electronics
Radio frequency antenna types
Antennas (radio) | Dielectric resonator antenna | [
"Engineering"
] | 839 | [
"Radio electronics"
] |
13,334,951 | https://en.wikipedia.org/wiki/Cognitively%20Guided%20Instruction | Cognitively Guided Instruction is "a professional development program based on an integrated program of research on (a) the development of students' mathematical thinking; (b) instruction that influences that development; (c) teachers' knowledge and beliefs that influence their instructional practice; and (d) the way that teachers' knowledge, beliefs, and practices are influenced by their understanding of students' mathematical thinking". CGI is an approach to teaching mathematics rather than a curriculum program. At the core of this approach is the practice of listening to children's mathematical thinking and using it as a basis for instruction. Research-based frameworks of children's thinking in the domains of addition and subtraction, multiplication and division, base-ten concepts, multidigit operations, algebra, geometry and fractions provide guidance to teachers about listening to their students. Case studies of teachers using CGI have shown that the most accomplished teachers use a variety of practices to extend children's mathematical thinking. It is a tenet of CGI that there is no one way to implement the approach and that teachers' professional judgment is central to making decisions about how to use information about children's thinking.
The research base on children's mathematical thinking upon which CGI is based shows that children are able to solve problems without direct instruction by drawing upon informal knowledge of everyday situations. For example, a study of kindergarten children showed that young children can solve problems involving what are normally considered advanced mathematics, such as multiplication, division, and multistep problems, by using direct modeling. Direct modeling is an approach to problem solving in which the child, in the absence of more sophisticated knowledge of mathematics, constructs a solution to a story problem by modeling the action or structure. For example, about half of the children in a study of kindergartners' problem solving were able to solve this multistep problem, which they had never seen before, using direct modeling: 19 children are taking a mini-bus to the zoo. They will have to sit either 2 or 3 to a seat. The bus has 7 seats. How many children will have to sit three to a seat, and how many can sit two to a seat?
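For reference, the arithmetic behind the mini-bus problem can be written out algebraically (an editorial worked solution; the kindergartners in the study solved it by direct modeling, not algebra). Let $x$ be the number of seats holding three children:

\[
3x + 2(7 - x) = 19 \;\Longrightarrow\; x + 14 = 19 \;\Longrightarrow\; x = 5,
\]

so five seats hold three children each (15 children) and the remaining two seats hold two each (4 children), with $15 + 4 = 19$.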
Example: Fred had six marbles at school. On the way home from school his friend Joey gave him some more marbles. Now Fred has eleven marbles. How many marbles did Joey give to Fred?
Students may solve this problem by counting down from eleven or by counting up from six. With the use of manipulatives, students can represent their thinking about this problem in multiple ways. For instance, they might make a row of six counting blocks next to a row of eleven counting blocks and then compare the difference.
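In symbols (an editorial restatement), this is a join-change-unknown situation, $6 + x = 11$, giving $x = 11 - 6 = 5$ marbles; CGI students would typically reach the answer by counting or modeling rather than by writing the equation.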
The CGI philosophy is detailed in Children's Mathematics which is co-authored by Thomas Carpenter, Elizabeth Fennema, Megan Loef Franke, Linda Levi, and Susan Empson.
References
Notes
Carpenter, T. P., Ansell, E., Franke, M. L., Fennema, E. & Weisbeck, L. (1993). Models of problem solving: A study of kindergarten children's problem-solving processes. Journal for Research in Mathematics Education, 24(5), 427–440.
Carpenter, T. P., Fennema, E., Franke, M. L., Levi, L. & Empson, S. B. (2014). Children's Mathematics: Cognitively Guided Instruction, Second Edition. Portsmouth, NH: Heinemann.
Carpenter, T. P., Fennema, E., Franke, M., Levi, L. & Empson, S. B. (2000). Cognitively Guided Instruction: A Research-Based Teacher Professional Development Program for Mathematics. Research Report 03. Madison, WI: Wisconsin Center for Education Research.
Elementary mathematics
Mathematics education | Cognitively Guided Instruction | [
"Mathematics"
] | 786 | [
"Elementary mathematics"
] |
13,335,403 | https://en.wikipedia.org/wiki/Cranmer%20Park | Cranmer Park is a city park in Denver, United States located in the Hilltop neighborhood off Colorado Boulevard between East 1st and East 3rd Avenue. It is notable for its large sundial.
An inscription at the base describes the axis of the gnomon as elevated 39°43' in the direction of polar north. The stone is perpendicular to the gnomon at 50°17', which makes it parallel to the equator. The south side of the stone is similarly marked for wintertime observation.
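The two inscribed angles are complementary: $39^\circ 43' + 50^\circ 17' = 90^\circ$. As an editorial note on the geometry, the gnomon's elevation essentially matches Denver's latitude (about $39^\circ 44'$ N), which keeps it parallel to Earth's rotational axis, and a face perpendicular to such a gnomon necessarily lies parallel to the plane of the equator, as the inscription states.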
A polar chart at the base of the sundial describes the zodiac and degrees of the sun's position, and how to set a clock based on the gnomon's shadow. For winter viewing, the chart continues on the south side of the stone.
History of the sundial
The current sundial is the second to stand at this location in the park. The first was donated in 1941 by longtime Manager of Denver Parks George E. Cranmer, for whom the park is named. It was destroyed by vandals, who detonated dynamite under it in September 1965. The replacement sundial was installed in March 1966 after a successful citywide fundraising effort led by the Denver Junior Chamber of Commerce. It was restored again in 2018 to repair cracking stones.
The park is on the National Register of Historic Places.
References
External links
Save Our Sundial
Sundials
Astronomical instruments
Parks in Denver
National Register of Historic Places in Denver
Parks on the National Register of Historic Places in Colorado | Cranmer Park | [
"Astronomy"
] | 300 | [
"Astronomical instruments"
] |
13,336,054 | https://en.wikipedia.org/wiki/Powerset%20%28company%29 | Powerset was an American company based in San Francisco, California, that in 2006 was developing a natural language search engine for the Internet. On July 1, 2008, Powerset was acquired by Microsoft for an estimated $100 million, an acquisition that formed part of Microsoft's broader strategy to enhance its search capabilities and compete more effectively with other search providers, particularly Google.
Powerset was working on building a natural language search engine that could find targeted answers to user questions (as opposed to keyword-based search). For example, when confronted with a question like "Which U.S. state has the highest income tax?", conventional search engines ignore the question phrasing and instead do a search on the keywords "state", "highest", "income", and "tax". Powerset, on the other hand, attempted to use natural language processing to understand the nature of the question and return pages containing the answer.
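The contrast between keyword matching and question understanding can be sketched in a few lines of code (an editorial illustration only; none of the names below come from Powerset's software, and its actual linguistic pipeline was far more sophisticated):

```python
# Toy contrast between keyword reduction and a question-aware step.
STOPWORDS = {"which", "u.s.", "has", "the", "a", "an", "is", "of", "what"}

def keyword_terms(query: str) -> set[str]:
    """What a conventional engine reduces the question to."""
    tokens = query.lower().rstrip("?").split()
    return {t for t in tokens if t not in STOPWORDS}

def toy_intent(query: str) -> dict:
    """Recognize one superlative question pattern instead of discarding it."""
    q = query.lower().rstrip("?")
    if q.startswith("which") and "highest" in q:
        attribute = q.split("highest", 1)[1].strip()
        return {"type": "superlative", "order": "max", "attribute": attribute}
    return {"type": "keyword", "terms": keyword_terms(query)}

query = "Which U.S. state has the highest income tax?"
print(keyword_terms(query))  # {'state', 'highest', 'income', 'tax'}
print(toy_intent(query))     # {'type': 'superlative', 'order': 'max', 'attribute': 'income tax'}
```

A real system must of course parse far more than one pattern; the sketch only shows the information that keyword reduction throws away.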
The company was in the process of "building a natural language search engine that reads and understands every sentence on the Web". The company had licensed natural language technology from PARC, the former Xerox Palo Alto Research Center.
On May 11, 2008, the company unveiled a tool for searching a fixed subset of English Wikipedia using conversational phrases rather than keywords.
Powerlabs
In a form of beta testing, Powerset opened an online community called Powerlabs on September 17, 2007. Business Week said: "The company hopes the site will marshal thousands of people to help build and improve its search engine before it goes public next year." Said The New York Times: "[Powerset Labs] goes far beyond the 'alpha' or 'beta' testing involved in most software projects, when users put a new product through rigorous testing to find its flaws. Powerset doesn’t have a product yet, but rather a collection of promising natural language technologies, which are the fruit of years of research at Xerox PARC."
Powerlabs' initial search results were taken from Wikipedia.
Notable people
Barney Pell (born March 18, 1968, in Hollywood, California) was co-founder and CEO of Powerset. Pell received his Bachelor of Science degree in symbolic systems from Stanford University in 1989, where he graduated Phi Beta Kappa and was a National Merit Scholar. Pell received a PhD in computer science from Cambridge University in 1993, where he was a Marshall Scholar. He has worked at NASA, as chief strategist and vice president of business development at StockMaster.com (acquired by Red Herring in March 2000), and at Whizbang! Labs. Prior to joining Powerset, Pell was an Entrepreneur-in-Residence at Mayfield Fund, a venture capital firm in Silicon Valley.
Pell is also a founder of Moon Express, Inc., a U.S. company awarded a $10M commercial lunar contract by NASA and a competitor in the Google Lunar X PRIZE.
Steve Newcomb was the COO and co-founder of Powerset. Prior to joining Powerset, he was a co-founder of Loudfire, General Manager at Promptu, and was on the board of directors at Jaxtr. He left Powerset in October 2007 to form Virgance, a social startup incubator.
Lorenzo Thione (born in Como, Italy) was the product architect and co-founder of Powerset. Prior to joining Powerset, he worked at FXPAL in natural language processing and related research fields. Thione earned his master's degree in software engineering from the University of Texas at Austin.
Ronald Kaplan, former manager of research in Natural Language Theory and Technology at PARC, served as the company's CTO and CSO.
Ryan Ferrier is a member of the founding team of Powerset. He managed personnel and internal operations. After 2008 he went on to co-found Serious Business, which made Facebook applications and was later bought by Zynga.
Another Powerset alumnus, Alex Le, became CTO of Serious Business and went on to become an executive producer at Zynga when it bought the company. Siqi Chen founded a stealth startup in mobile computing after leaving Powerset.
Tom Preston-Werner worked at Powerset and left after the acquisition to found GitHub.
Investors
Powerset attracted a wide range of investors, many of whom had considerable experience in the venture capital field. The company received $12.5 million in Series A funding during November 2007, co-led by the venture capital firms Foundation Capital and The Founders Fund.
Among the better-known investors:
Esther Dyson, founding chairman of ICANN, founder of the newsletter Release 1.0 and editor at Cnet
Peter Thiel, founder and former CEO of PayPal
Luke Nosek, founder of PayPal
Todd Parker, Managing Partner, Hidden River Ventures
Reid Hoffman, executive vice president of PayPal and founder of LinkedIn
First Round Capital, seed-stage venture firm
See also
Bing (search engine)
Apache HBase
References
External links
Powerset main web site - redirects to Bing
Powerset acquired by Microsoft
Defunct internet search engines
Companies based in San Francisco
Natural language processing
Microsoft acquisitions
2008 mergers and acquisitions | Powerset (company) | [
"Technology"
] | 1,254 | [
"Natural language processing",
"Natural language and computing"
] |
13,336,525 | https://en.wikipedia.org/wiki/Work%20systems | The term work system has been used loosely in many areas. This article concerns its use in understanding IT-reliant systems in organizations. A notable use of the term occurred in 1977 in the first volume of MIS Quarterly in two articles by Bostrom and Heinen (1977). Later, Sumner and Ryan (1994) used it to explain problems in the adoption of CASE (computer-aided software engineering). A number of socio-technical systems researchers, such as Trist and Mumford, also used the term occasionally but seemed not to define it in detail. In contrast, the work system approach defines the term carefully and uses it as a basic analytical concept.
A work system is a system in which human participants and/or machines perform work (processes and activities) using information, technology, and other resources to produce products/services for internal or external customers. Typical business organizations contain work systems that procure materials from suppliers, produce products, deliver products to customers, find customers, create financial reports, hire employees, coordinate work across departments, and perform many other functions.
The work system concept is like a common denominator for many of the types of systems that operate within or across organizations. Operational information systems, service systems, projects, supply chains, and ecommerce web sites can all be viewed as special cases of work systems.
An information system is a work system whose processes and activities are devoted to processing information.
A service system is a work system that produces services for its customers.
A project is a work system designed to produce a product and then go out of existence.
A supply chain is an interorganizational work system devoted to procuring materials and other inputs required to produce a firm's products.
An ecommerce web site can be viewed as a work system in which a buyer uses a seller's web site to obtain product information and perform purchase transactions.
The relationship between work systems in general and the special cases implies that the same basic concepts apply to all of the special cases, which also have their own specialized vocabulary. In turn, this implies that much of the body of knowledge for the current information systems discipline can be organized around a work system core.
Specific information systems exist to support (other) work systems. Many different degrees of overlap are possible between an information system and a work system that it supports. For example, an information system might provide information for a non-overlapping work system, as happens when a commercial marketing survey provides information to a firm's marketing managers. In other cases, an information system may be an integral part of a work system, as happens in highly automated manufacturing and in ecommerce web sites. In these situations, participants in the work system are also participants in the information system, the work system cannot operate properly without the information system, and the information system has little significance outside of the work system.
Work system framework
The work system approach for understanding systems includes both a static view of a current (or proposed) system in operation and a dynamic view of how a system evolves over time through planned change and unplanned adaptations. The static view is summarized by the work system framework, which identifies the basic elements for understanding and evaluating a work system. An easily recognized triangular representation of the work system framework has appeared in Alter (2002, 2003, 2008, 2013) and elsewhere. The work system itself consists of four elements: the processes and activities, participants, information, and technologies. Five other elements must be included in even a rudimentary understanding of a work system's operation, context, and significance. Those elements are the products/services produced, customers, environment, infrastructure, and strategies. Customers may also be participants in a work system, as happens when a doctor examines a patient. This framework is prescriptive enough to be useful in describing the system being studied, identifying problems and opportunities, describing possible changes, and tracing how those changes might affect other parts of the work system.
The definitions of the 9 elements of the work system framework are as follows (a minimal data-structure sketch follows these definitions):
Processes and activities include everything that happens within the work system. The term processes and activities is used instead of the term business process because many work systems do not contain highly structured business processes involving a prescribed sequence of steps, each of which is triggered in a pre-defined manner. Such processes are sometimes described as “artful processes” whose sequence and content “depend on the skills, experience, and judgment of the primary actors.” (Hill et al., 2006) In effect, business process is but one of a number of different perspectives for analyzing the activities within a work system. Other perspectives with their own valuable concepts and terminology include decision-making, communication, coordination, control, and information processing.
Participants are people who perform the work. Some may use computers and IT extensively, whereas others may use little or no technology. When analyzing a work system the more encompassing role of work system participant is more important than the more limited role of technology user (whether or not particular participants happen to be technology users). In work systems that are viewed as service systems, it is especially important to identify activities in which customers are participants.
Information includes codified and non-codified information used and created as participants perform their work. Information may or may not be computerized. Data not related to the work system is not directly relevant, making the distinction between data and information secondary when describing or analyzing a work system. Knowledge can be viewed as a special case of information.
Technologies include tools (such as cell phones, projectors, spreadsheet software, and automobiles) and techniques (such as management by objectives, optimization, and remote tracking) that work system participants use while doing their work.
Products/services are the combination of physical things, information, and services that the work system produces for its customers' benefit and use. This may include physical products, information products, services, intangibles such as enjoyment and peace of mind, and social products such as arrangements, agreements, and organizations. The term "products/services" is used because the distinction between products and services in marketing and service science (Chesbrough and Spohrer, 2006) is not important for understanding work systems even though product-like vs. service-like is the basis of a series of design dimensions for characterizing and designing the things that a work system produces (Alter, 2012).
Customers are people who receive direct benefit from products/services the work system produces. Since work systems exist to produce products/services for their customers, an analysis of a work system should consider who the customers are, what they want, and how they use whatever the work system produces. Customers may include external customers who receive an enterprise's products/services and internal customers who are employed by the enterprise, such as customers of a payroll work system. Customers of a work system often are participants in the work system (e.g., patients in a medical exam, students in an educational setting, and clients in a consulting engagement).
Environment includes the organizational, cultural, competitive, technical, and regulatory environment within which the work system operates. These factors affect system performance even though the system does not rely on them directly in order to operate. The organization's general norms of behavior are part of its culture, whereas more specific behavioral norms and expectations about specific activities within the work system are considered part of its processes and activities.
Infrastructure includes human, informational, and technical resources that the work system relies on even though these resources exist and are managed outside of it and are shared with other work systems. Technical infrastructure includes computer networks, programming languages, and other technologies shared by other work systems and often hidden or invisible to work system participants. From an organizational viewpoint such as that expressed in Star and Bowker (2002) rather than a purely technical viewpoint, infrastructure includes human infrastructure, informational infrastructure, and technical infrastructure, all of which can be essential to a work system's operation and therefore should be considered in any analysis of a work system.
Strategies include the strategies of the work system and of the department(s) and enterprise(s) within which the work system exists. Strategies at the department and enterprise level may help in explaining why the work system operates as it does and whether it is operating properly.
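The nine elements above can be captured in a simple record type, useful as a note-taking aid when summarizing a work system (an editorial sketch; the class and the hiring example are invented and do not come from the work system literature):

```python
from dataclasses import dataclass, field

@dataclass
class WorkSystemSummary:
    # The four elements of the work system itself
    processes_and_activities: list[str] = field(default_factory=list)
    participants: list[str] = field(default_factory=list)
    information: list[str] = field(default_factory=list)
    technologies: list[str] = field(default_factory=list)
    # The five elements describing context and significance
    products_services: list[str] = field(default_factory=list)
    customers: list[str] = field(default_factory=list)
    environment: list[str] = field(default_factory=list)
    infrastructure: list[str] = field(default_factory=list)
    strategies: list[str] = field(default_factory=list)

# A rudimentary summary of a hiring work system
hiring = WorkSystemSummary(
    processes_and_activities=["screen applications", "interview candidates"],
    participants=["recruiters", "hiring managers"],
    information=["job descriptions", "candidate applications"],
    technologies=["applicant tracking system"],
    products_services=["signed job offers"],
    customers=["hiring departments", "applicants"],
)
```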
Work system life cycle model
The dynamic view of a work system starts with the work system life cycle (WSLC) model, which shows how a work system may evolve through multiple iterations of four phases: operation and maintenance, initiation, development, and implementation. The names of the phases were chosen to describe both computerized and non-computerized systems, and to apply regardless of whether application software is acquired, built from scratch, or not used at all. The terms development and implementation have business-oriented meanings that are consistent with Markus and Mao's (2004) distinction between system development and system implementation.
This model encompasses both planned and unplanned change. Planned change occurs through a full iteration encompassing the four phases, i.e., starting with an operation and maintenance phase, flowing through initiation, development, and implementation, and arriving at a new operation and maintenance phase. Unplanned change occurs through fixes, adaptations, and experimentation that can occur within any phase. The phases include the following activities:
Operation and maintenance
Operation of the work system and monitoring of its performance
Maintenance of the work system (which often includes at least part of information systems that support it) by identifying small flaws and eliminating or minimizing them through fixes, adaptations, or workarounds.
On-going improvement of processes and activities through analysis, experimentation, and adaptation
Initiation
Vision for the new or revised work system
Operational goals
Allocation of resources and clarification of time frames
Economic, organizational, and technical feasibility of planned changes
Development
Detailed requirements for the new or revised work system (including requirements for information systems that support it)
As necessary, creation, acquisition, configuration, and modification of procedures, documentation, training material, software and hardware
Debugging and testing of hardware, software, and documentation
Implementation
Implementation approach and plan (pilot? phased? big bang?)
Change management efforts about rationale and positive or negative impacts of changes
Training on details of the new or revised information system and work system
Conversion to the new or revised work system
Acceptance testing
As an example of the iterative nature of a work system's life cycle, consider the sales system in a software start-up. The first sales system is the CEO selling directly. At some point the CEO can't do it alone, several salespeople are hired and trained, and marketing materials are produced that can be used by someone less expert than the CEO. As the firm grows, the sales system becomes regionalized and an initial version of sales tracking software is developed and used. Later, the firm changes its sales system again to accommodate needs to track and control a larger salesforce and predict sales several quarters in advance. A subsequent iteration might involve the acquisition and configuration of CRM software. The first version of the work system starts with an initiation phase. Each subsequent iteration involves deciding that the current sales system is insufficient; initiating a project that may or may not involve significant changes in software; developing the resources such as procedures, training materials, and software that are needed to support the new version of the work system; and finally, implementing the new work system.
The pictorial representation of the work system life cycle model places the four phases at the vertices of a rectangle. Forward and backward arrows between each successive pair of phases indicate the planned sequence of the phases and allow the possibility of returning to a previous phase if necessary. To encompass both planned and unplanned change, each phase has an inward-facing arrow to denote unanticipated opportunities and unanticipated adaptations, thereby recognizing the importance of diffusion of innovation, experimentation, adaptation, emergent change, and path dependence.
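The phase structure and the two kinds of change can be sketched as a small state machine (an editorial illustration; the class and method names are invented, not taken from Alter's publications):

```python
from enum import Enum

class Phase(Enum):
    OPERATION_AND_MAINTENANCE = "operation and maintenance"
    INITIATION = "initiation"
    DEVELOPMENT = "development"
    IMPLEMENTATION = "implementation"

# Planned flow around the rectangle; reversing an arrow returns the work
# system to the previous phase if necessary.
FORWARD = {
    Phase.OPERATION_AND_MAINTENANCE: Phase.INITIATION,
    Phase.INITIATION: Phase.DEVELOPMENT,
    Phase.DEVELOPMENT: Phase.IMPLEMENTATION,
    Phase.IMPLEMENTATION: Phase.OPERATION_AND_MAINTENANCE,
}
BACKWARD = {v: k for k, v in FORWARD.items()}

class WorkSystemLifeCycle:
    def __init__(self) -> None:
        self.phase = Phase.OPERATION_AND_MAINTENANCE
        self.history: list[str] = []

    def advance(self) -> None:
        """Planned change: move to the next phase of the iteration."""
        self.phase = FORWARD[self.phase]
        self.history.append(f"planned -> {self.phase.value}")

    def fall_back(self) -> None:
        """Return to the previous phase when problems surface."""
        self.phase = BACKWARD[self.phase]
        self.history.append(f"fallback -> {self.phase.value}")

    def adapt(self, note: str) -> None:
        """Unplanned change: a fix, adaptation, or experiment within any phase."""
        self.history.append(f"unplanned ({self.phase.value}): {note}")
```

A full planned iteration is four advance() calls, returning the system to a new operation and maintenance phase; adapt() records the in-phase fixes and experiments that the inward-facing arrows represent.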
The work system life cycle model is iterative and includes both planned and unplanned change. It is fundamentally different from the frequently cited Systems Development Life Cycle (SDLC), which actually describes projects that attempt to produce software or produce changes in a work system. Current versions of the SDLC may contain iterations but they are basically iterations within a project. More important, the system in the SDLC is a basically a technical artifact that is being programmed. In contrast, the system in the WSLC is a work system that evolves over time through multiple iterations. That evolution occurs through a combination of defined projects and incremental changes resulting from small adaptations and experimentation. In contrast with control-oriented versions of the SDLC, the WSLC treats unplanned changes as part of a work system's natural evolution.
Work system method
The work system method (Alter, 2002; 2006; 2013) is a method that business professionals (and/or IT professionals) can use for understanding and analyzing a work system at whatever level of depth is appropriate for their particular concerns. It has evolved iteratively starting in around 1997. At each stage, the then current version was tested by evaluating the areas of success and the difficulties experienced by MBA and EMBA students trying to use it for a practical purpose. A version called “work-centered analysis” that was presented in a textbook has been used by a number of universities as part of the basic explanation of systems in organizations, to help students focus on business issues, and to help student teams communicate. Ramiller (2002) reports on using a version of the work system framework within a method for “animating” the idea of business process within an undergraduate class. In a research setting, Petrie (2004) used the work system framework as a basic analytical tool in a Ph.D. thesis examining 13 ecommerce web sites. Petkov and Petkova (2006) demonstrated the usefulness of the work system framework by comparing grades of students who did and did not learn about the framework before trying to interpret the same ERP case study. More recent evidence of the practical value of a work system approach is from Truex et al. (2010, 2011), which summarized results from 75 and later 300 management briefings produced by employed MBA students based on a work system analysis template. These briefings contained the kind of analysis that would be discussed in the initiation phase of the WSLC, as decisions were being made about which projects to pursue and how to proceed.
Results from analyses of real world systems by typical employed MBA and EMBA students indicate that a systems analysis method for business professionals must be much more prescriptive than soft systems methodology (Checkland, 1999). While not a straitjacket, it must be at least somewhat procedural and must provide vocabulary and analysis concepts while at the same time encouraging the user to perform the analysis at whatever level of detail is appropriate for the task at hand. The latest version of the work system method is organized around a general problem-solving outline that includes:
Identify the problem or opportunity
Identify the work system that has that problem or opportunity (plus relevant constraints and other considerations)
Use the work system framework to summarize the work system
Gather relevant data.
Analyze using design characteristics, measures of performance, and work system principles.
Identify possibilities for improvement.
Decide what to recommend
Justify the recommendation using relevant metrics and work system principles.
In contrast to systems analysis and design methods for IT professionals who need to produce a rigorous, totally consistent definition of a computerized system, the work system method:
encourages the user to decide how deep to go
makes explicit use of the work system framework and work system life cycle model
makes explicit use of work system principles.
makes explicit use of characteristics and metrics for the work system and its elements.
includes work system participants as part of the system (not just users of the software)
includes codified and non-codified information
includes IT and non-IT technologies.
suggests that recommendations specify which work system improvements rely on IS changes, which recommended work system changes don't rely on IS changes, and which recommended IS changes won't affect the work system's operational form.
References
Alter, S. (2002) "The Work System Method for Understanding Information Systems and Information Systems Research," Communications of the Association for Information Systems 9(9), Sept., pp. 90–104,
Alter, S. (2003) "18 Reasons Why IT-Reliant Work Systems Should Replace ‘The IT Artifact’ as the Core Subject Matter of the IS Field," Communications of the Association for Information Systems, 12(23), Oct., pp. 365–394,
Alter, S. (2006) The Work System Method: Connecting People, Processes, and IT for Business Results, Larkspur, CA: Work System Press.
Alter, S. (2012) "Challenges for Service Science," Journal of Information Technology Theory and Application, Vol. 13, Issue 2, No. 3, 2012, pp. 22 –37.
Alter, S. (2013) "Work System Theory: Overview of Core Concepts, Extensions, and Challenges for the Future," Journal of the Association for Information Systems, 14(2), pp. 72–121.
Bostrom, R.P. and J.S. Heinen, (1977) "MIS Problems and Failures: A Socio-Technical Perspective. PART I: The Causes." MIS Quarterly, 1(3), December, pp. 17–32.
Bostrom, R. P. and J. S. Heinen, (1977) "MIS Problems and Failures: A Socio-Technical Perspective. PART II: The Application of Socio-Technical Theory." MIS Quarterly, 1(4), December, pp. 11–28.
Checkland, P. (1999) Systems Thinking, Systems Practice (Includes a 30-year retrospective), Chichester, UK: John Wiley & Sons.
Chesbrough, H., and J. Spohrer (2006) "A Research Manifesto for Services Science," Communications of the ACM (49)7, 35–40.
Hill, C., R. Yates, C. Jones, and S. L. Kogan, (2006) "Beyond predictable workflows: Enhancing productivity in artful business processes," IBM Systems Journal, 45(4), pp. 663–682.
Markus, M.L. and J.Y. Mao (2004) "Participation in Development and Implementation – Updating an Old, Tired Concept for Today’s IS Contexts," Journal of the Association for Information Systems, Dec., pp. 514–544.
Petrie, D.E. (2004) Understanding the Impact of Technological Discontinuities on Information Systems Management: The Case of Business-to-Business Electronic Commerce, Ph.D. Thesis, Claremont Graduate University.
Ramiller, N. (2002) "Animating the Concept of Business Process in the Core Course in Information Systems," Journal of Informatics Education and Research, 3(2), pp. 53–71.
Star, S. L. and Bowker, G. C. (2002) "How to Infrastructure," in L. Lievrouw and S. Livingstone (Eds.), Handbook of the new media. London: SAGE, 151-162.
Sumner, M. and T. Ryan (1994). "The Impact of CASE: Can it achieve critical success factors?" Journal of Systems Management, 45(6), p. 16, 6 pages.
Truex, D., Alter, S., and Long, C. (2010) "Systems Analysis for Everyone Else: Empowering Business Professionals through a Systems Analysis Method that Fits their Needs," Proceedings of 18th European Conference on Information Systems, Pretoria, South Africa.
Truex, D., Lakew, N., Alter, S., and Sarkar, S. (2011) "Extending a Systems Analysis Method for Business Professionals," European Design Science Symposium, Leixlip, Ireland, Oct. 2011.
Information systems
Management systems
Systems analysis
Systems science
Systems theory
Systems thinking | Work systems | [
"Technology"
] | 4,081 | [
"Information systems",
"Information technology"
] |
13,337,042 | https://en.wikipedia.org/wiki/Pioneer%20%28military%29 | A pioneer is a soldier employed to perform engineering and construction tasks. The term is in principle similar to sapper or combat engineer. Pioneers were originally part of the artillery branch of European armies. Subsequently, they formed part of the engineering branch or the logistic branch, were part of the infantry, or even comprised a branch in their own right.
Historically, the primary role of pioneer units was to assist other arms in tasks such as the construction of field fortifications, military camps, bridges and roads. Prior to and during the First World War, pioneers were often engaged in the construction and repair of military railways. During World War II, pioneer units were used extensively by all major forces, both on the front line and in supporting roles.
During the 20th century, British Commonwealth military forces came to distinguish between small units of "assault pioneers" belonging to infantry regiments and separate pioneer units (as in the former Royal Pioneer Corps). The United States Marine Corps has sometimes organized its sappers into "Pioneer Battalions". The arrival of the military engineering vehicle and the deployment of weapons of mass destruction vastly expanded the capabilities and complicated the mission profiles of modern pioneer units.
Etymology
The word pioneer comes originally from French. It was borrowed into English from Old French pionnier, which meant a "foot soldier", from the root 'peon', recorded in 1523. It was used in a military sense as early as 1626–1627. In the late 18th century, Captain George Smith provided a formal military definition of the term.
Pioneer regiments in the Indian Army
Extensive use was made of pioneers in the British Indian Army because of the demands of campaigning in difficult terrain with little or no infrastructure. In 1780, two companies of pioneers were raised in Madras, increasing to 16 companies in 1803, divided into two battalions. Bombay and Bengal pioneers were formed during the same period. In the late nineteenth century, a number of existing Indian infantry regiments took the title and the construction role of pioneers. The twelve Indian Pioneer regiments in existence in 1914 were trained and equipped for road, rail and engineering work, as well as for conventional infantry service. While this dual function did not qualify them to be regarded as elite units, the frequency with which they saw active service made postings to pioneer regiments popular with British officers.
Prior to World War I, each sepoy in a Pioneer regiment carried a pickaxe or a light spade in special leather equipment as well as a rifle and bayonet. NCOs and buglers carried axes, saws and billhooks. Heavier equipment, such as explosives, was carried by mule. The unit was therefore well equipped for simple field engineering tasks, as well as being able to defend itself in hostile territory. During the War, the increased specialisation required of Pioneers made them too valuable to use as regular assault infantry. Accordingly, in 1929, the Pioneer regiments were taken out of the line infantry and grouped into the Corps of Madras Pioneers (four battalions), the Corps of Bombay Pioneers (four battalions), the Corps of Sikhs Pioneers (four battalions), and the Corps of Hazara Pioneers (one battalion).
All four Pioneer Corps were disbanded in 1933 and their personnel mostly transferred into the Corps of Sappers and Miners, whose role they had come to parallel. It was concluded that the Pioneer battalions had become less technically effective than the Sappers and Miners, but too well trained in specialist functions to warrant being used as ordinary infantry. In addition, their major role of frontier road building had now been allocated to civilian workers. An Indian Pioneer Corps was re-established in 1943.
Pioneers in the British Army
Historically, British infantry regiments maintained small units of pioneers for heavy work and engineering, especially for clearing paths through forests and for leading assaults on fortifications. These units evolved into assault pioneers. They also inspired the creation of the Royal Pioneer Corps.
During World War I, on paper at least, each division was allocated a pioneer infantry battalion, who in addition to being trained infantry were able to conduct pioneer duties. These pioneer battalions were raised and numbered within the existing infantry regiments; where possible recruits were men who possessed transferable skills from civilian life.
The Royal Pioneer Corps was a British Army combatant corps used for light engineering tasks. The Royal Pioneer Corps was raised on 17 October 1939 as the Auxiliary Military Pioneer Corps. It was renamed the Pioneer Corps on 22 November 1940. It was renamed the Royal Pioneer Corps on 28 November 1946. On 5 April 1993, the Royal Pioneer Corps united with other units to form the Royal Logistic Corps.
The specialist pioneer units in the Royal Logistic Corps, 23 Pioneer Regiment, based at St David's Barracks at Bicester, and 168 Pioneer Regiment, headquartered in Prince William of Gloucester Barracks at Grantham, were disbanded in 2014, as part of the Army 2020 re-organisation.
The ARRC Support Battalion is based at Imjin Barracks, Innsworth (until June 2010, it was at Rheindahlen Military Complex, Germany)
All British infantry regiments still maintain assault pioneer units. The Pioneer Sergeant is the only rank allowed to wear a beard on parade.
Israeli Army
The Israeli army has an infantry brigade called the Fighting Pioneer Youth, in Hebrew Noar Halutzi Lohem or just "Nahal". The title of Israeli military pioneers is a back-derivation from the civilian term. The Israeli army's pioneers were formed in 1948 from Jewish civilian pioneers, i.e. settlers, who were permitted to combine military service and farming.
Pioneer units
United Kingdom
Maltese Pioneers
British Garrison at Calais Pioneers
Pioneer Corps
4th (Pioneer) Battalion Coldstream Guards with the Guards Division, 1917 alternatively known as Guards Pioneer Battalion
6th East Yorkshire Regiment (Pioneer Battalion) with Division, 1917 (three company establishment)
3rd "Salford Pals" Battalion (19th Battalion, Lancashire Fusiliers) (converted to a pioneer battalion)
9th Battalion, Seaforth Highlanders Regiment (Pioneer Battalion) with 9th Division, 1917
1/6th Argyll and Sutherland Highlanders (Pioneer Battalion) with 5th Division, 1917
9th Battalion, South Staffordshire Regiment (Pioneer Battalion) with 23rd Division, 1917
9th Battalion, North Staffordshire Regiment (Pioneer Battalion) with 37th Division 1915–18
19th Battalion, Middlesex Regiment (Pioneer Battalion) with 41st Division, 1917
1/5th Royal Sussex Regiment (Pioneer Battalion) with 48th Division, 1917
8th (Pioneer) Battalion, Royal Sussex Regiment divisional pioneer battalion
12th (Pioneer) Battalion Sherwood Foresters
Pioneer Battalion, The Royal Scots
19th Battalion (Pioneers), The Welsh Regiment (Glamorgan Pioneers)
15th (Pioneer) Battalion, the Royal Fusiliers (City of London Regiment) recruited at Oxford, Thame, Dover, Elham and Lyminge, Bude, Woolacombe and Truro areas during the Second World War
5th (Pioneer) Battalion, Cheshire Regiment was appointed "in consequence of earning a high reputation as diggers and as constructors of field works"
25th (Pioneer) Battalion, King's Royal Rifle Corps
Pioneer Battalion, 5th Royal Irish Lancers, 1902–1922, was created to construct a new railway in the I Corps area on the Western Front.
1st Battalions Monmouthshire Regiment Territorial Force 11 November 1915: Pioneer Battalion of 46th Division, south west of Avesnes, France.
2nd Battalions Monmouthshire Regiment Territorial Force 1 May 1916: Joined 29th Division as Pioneer Battalion.
3rd Battalions Monmouthshire Regiment Territorial Force 28 September 1915: Became Pioneer Battalion, 28th Division.
16th (Pioneer) Battalion, Royal Irish Rifles
605th Pioneer Battalion, Pioneer Corps – used for light engineering tasks
606th Pioneer Battalion, Pioneer Corps – used for light engineering tasks
23 Pioneer Regiment, Royal Logistic Corps – Disbanded October 2014
168 Pioneer Regiment, Territorial Army- Disbanded April 2014
Australia
During World War I, Australia raised six pioneer battalions within the First Australian Imperial Force (1st AIF) for service on the Western Front, one per division:
1st Pioneer Battalion (New South Wales), 1st Division
2nd Pioneer Battalion (Western Australia), 2nd Division
3rd Pioneer Battalion (Victoria, Queensland, South Australia, Western Australia), 3rd Division
4th Pioneer Battalion (Queensland), 4th Division
5th Pioneer Battalion (South Australia), 5th Division
6th Pioneer Battalion, 6th Division (disbanded without seeing combat)
In World War II, four pioneer battalions were raised as part of the Second Australian Imperial Force (2nd AIF):
2/1 Australian Pioneer Battalion
2/2 Australian Pioneer Battalion
2/3 Australian Pioneer Battalion
2/4 Australian Pioneer Battalion
Other World War II pioneer units:
2/1st Special Pioneer Company (Formed in New South Wales in 1942 from the 9th Pioneer Training Battalion. Absorbed by 2/11th Army Troops Company in September 1943.)
2/2nd Special Pioneer Company (Formed in New South Wales in 1942 from the 9th Pioneer Training Battalion. Absorbed by 2/11th Army Troops Company in September 1943.)
3rd Special Pioneer Company (Formed in Victoria in March 1942. Redesignated 30th Employment Company in September 1942.)
2/4th Special Pioneer Company (Formed in Victoria in March 1942. Redesignated 29th Employment Company in September 1942.)
2/5th Pioneer Company (Formed in Victoria in March 1942. Redesignated 34th Infantry Training Battalion in May 1942.)
7th Special Pioneer Company (Formed in Queensland in April 1942 from the 7th Infantry Training Battalion. Disbanded September 1942.)
8th Special Pioneer Company (Formed in Queensland in April 1942 from the 29th Infantry Training Battalion. Disbanded September 1942.)
20th Pioneer Battalion (Formed by redesignation of the 20th Motor Regiment, February 1945. Disbanded September 1945)
Torres Strait Pioneer Company (Formed from Torres Strait islanders, 1943. Disbanded January 1945)
The Headquarters Companies of Infantry Battalions serving in the South West Pacific included Pioneer Platoons, giving Battalion Commanders the authority over deployment of Pioneer troops as required in combat pioneering, infantry combat or service roles.
Canada
2nd Canadian Pioneer Battalion, Canadian Expeditionary Force with over a thousand men whose training gave them a combination of engineering and infantry skills.
48th Battalion served in the field as the 3rd Canadian Pioneer Battalion (48th Canadians), with the 3rd Canadian Division
67th "Western Scots" (Pioneer Battalion), Canadian Expeditionary Force, 1916
107th Pioneer Battalion
123rd Infantry Battalion repurposed as a Pioneer Battalion in January 1917, and replaced the 3rd Pioneer Battalion in May 1917 as the Pioneer Battalion of the 3rd Canadian Division
124th Infantry Battalion repurposed as a Pioneer Battalion in January 1917, and became the Pioneer Battalion of the 4th Canadian Division
New Zealand
The New Zealand Pioneer Battalion, sometimes referred to as the Pioneer Māori Battalion. The battalion included four companies, each with two Māori and two European (Pākehā) platoons, and included remnants of the Otago Mounted Rifle Regiment.
South Africa
South African Army Pioneer Battalion
India
For Indian Army Pioneer Corps, see also Indian Army Pioneer Corps
British Indian Army Pioneer Battalions enlisted, drilled and trained as any other native infantry battalion of the line, but received additional construction training.
1st Madras Pioneers, Indian Army
2nd Bombay Pioneers, Indian Army
3rd Sikh Pioneers, Indian Army
4th Hazara Pioneers, Indian Army
Other commonwealth countries
African Pioneer Corps
Nepal
1st Jangi Auxiliary Pioneer Battalion (1000 strong), Nepalese Army
Jagannath Auxiliary Pioneer Battalion, Nepalese Army
France
The Foreign Legion Pionniers, members of the Foreign Legion, open all the Legion's parades as a matter of tradition. They grow full beards, wear leather aprons and carry axes during these parades.
Germany
First World War
Imperial German Army pioneers (Pioniere) were regarded as a separate combat arm trained in construction and the demolition of fortifications, but they were often used as specialist infantry, serving the role of combat engineers. One battalion was assigned to each Corps.
The Guard Pioneer Battalion 1. (6 companies, each with 20 large and 18 small flame-throwers)
The Guard Pioneer Battalion 2.
The Guard Pioneer Battalion 3.
The Guard Reserve Pioneer Battalion – created from reservists who had been civilian firemen, the battalion was issued with experimental flame-throwers
1st Bavarian Pioneer Battalion, First Bavarian Division (12 destruction squads)
2nd Bavarian Pioneer Battalion
Prussian Army pioneer battalions:
1 Prussian Pioneer Battalion of the Guards – 3 Field companies, one Reserve company
12 Prussian Pioneer Battalions of the Line (18 officers, 495 men and 6 other people)
2nd Pioneer Battalion at Stettin
4th Pioneer Battalion at Magdeburg
Saxon Pioneer Battalion
World War Two
German Army Pionier battalions:
Panzer-Pionier-Bataillon (armoured pioneer battalion performing engineering tasks during an assault from manoeuvre)
Sturmpionierbataillon (assault pioneer battalion performing engineering tasks during an infantry assault)
Gebirgs-Pionier-Bataillon 95, a pioneer unit trained for the mountain terrain
Pionier-Bataillon 233 (divisional pioneer unit)
Heeres-Pionier-Bataillon 73 (Corps pioneer unit)
Pioneer Battalion, Leibstandarte SS Adolf Hitler, Waffen-SS
Pioneer Battalions, Estonian Auxiliary Police
Russia
1st Pioneer Battalion, Imperial Russian Army
2nd Pioneer Battalion, Imperial Russian Army
3rd Pioneer Battalion (later 5th Pioneer Battalion), Imperial Russian Army
4th Pioneer Battalion, Imperial Russian Army
United States
First Pioneer Battalion of Engineers, Mounted, United States Army (1st Bn. Mtd. Engrs.) (3 companies)
First Pioneer Battalion of Engineers, United States Army (1st Bn. Engrs.) (3 companies)
First Pioneer Infantry, United States Army (Companies A – M)
9th Pioneer Battalion, US Army
18th Reserve Pioneer Battalion, US Army
Jefferson County Pioneer Battalion, Pennsylvania (CO Lieutenant-Colonel, Hance Robinson)
Red Patch
1st Pioneer Battalion, United States Marine Corps
2nd Pioneer Battalion, United States Marine Corps
3rd Pioneer Battalion, United States Marine Corps
4th Pioneer Battalion, United States Marine Corps
5th Pioneer Battalion, United States Marine Corps (deactivated in November 1969)
31st Naval Construction Battalion TAD as USMC Pioneers 5th Shore Party Regiment, 5th Marine Division (decommissioned)
71st Naval Construction Battalion TAD as USMC Pioneers 3rd Marine Division (decommissioned)
133rd Naval Construction Battalion TAD as USMC Pioneers to 23rd Marines, now called "Naval Mobile Construction Battalion 133"
See also
Combat engineer
Citations and notes
References
Combat occupations
Military engineering | Pioneer (military) | [
"Engineering"
] | 2,927 | [
"Construction",
"Military engineering"
] |
13,337,084 | https://en.wikipedia.org/wiki/Nanomesh | The nanomesh is an inorganic nanostructured two-dimensional material, similar to graphene. It was discovered in 2003 at the University of Zurich, Switzerland.
It consists of a single layer of boron (B) and nitrogen (N) atoms, which forms by self-assembly into a highly regular mesh after high-temperature exposure of a clean rhodium or ruthenium surface to borazine under ultra-high vacuum.
The nanomesh looks like an assembly of hexagonal pores (see right image) at the nanometer (nm) scale. The distance between two pore centers is only 3.2 nm, whereas each pore has a diameter of about 2 nm and is 0.05 nm deep. The lowest regions bind strongly to the underlying metal, while the wires (highest regions) are only bound to the surface through strong cohesive forces within the layer itself.
The boron nitride nanomesh is not only stable under vacuum, air and some liquids, but also at temperatures up to 796 °C (1070 K). In addition, it shows the extraordinary ability to trap molecules and metallic clusters of sizes similar to the nanomesh pores, forming a well-ordered array. These characteristics may open up applications of the material in areas such as surface functionalisation, spintronics, quantum computing and data storage media like hard drives.
Structure
h-BN nanomesh is a single sheet of hexagonal boron nitride, which forms on substrates like rhodium Rh(111) or ruthenium Ru(0001) crystals by a self-assembly process.
The unit cell of the h-BN nanomesh consists of 13×13 BN or 12×12 Rh atoms, with a lattice constant of 3.2 nm. In cross-section, this means that 13 boron or nitrogen atoms sit on 12 rhodium atoms. This implies a modification of the relative positions of each BN pair towards the substrate atoms within a unit cell, where some bonds are more attractive or repulsive than others (site-selective bonding), which induces the corrugation of the nanomesh (see right image with pores and wires).
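The 13-on-12 coincidence can be checked with simple arithmetic. The Python sketch below assumes the commonly quoted bulk rhodium lattice constant of 0.3803 nm (a literature value not stated in this article) and recovers the 3.2 nm superstructure period:

import math

a_rh_bulk = 0.3803                 # bulk Rh lattice constant in nm (assumed)
d_rh = a_rh_bulk / math.sqrt(2)    # nearest-neighbour spacing on Rh(111)

period = 12 * d_rh                 # 13 BN units sit on 12 Rh spacings
print(f"Rh(111) nearest-neighbour spacing: {d_rh:.3f} nm")
print(f"13-on-12 superstructure period:    {period:.2f} nm")   # ~3.2 nm
print(f"effective B-N lattice spacing:     {period / 13:.3f} nm")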
The nanomesh corrugation amplitude of 0.05 nm causes a strong effect on the electronic structure, where two distinct BN regions are observed. They are easily recognized in the lower right image, which is a scanning tunneling microscopy (STM) measurement, as well as in the lower left image representing a theoretical calculation of the same area. A strongly bounded region assigned to the pores is visible in blue in the left image below (center of bright rings in the right image) and a weakly bound region assigned to the wires appears yellow-red in the left image below (area in-between rings in the right image).
Properties
The nanomesh is stable under a wide range of environments, including air, water and electrolytes. It is also temperature resistant, since it does not decompose at temperatures up to 1275 K under vacuum. In addition to this exceptional stability, the nanomesh shows the extraordinary ability to act as a scaffold for metallic nanoclusters and to trap molecules, forming a well-ordered array.
In the case of gold (Au), its evaporation on the nanomesh leads to formation of well-defined round Au nanoparticles, which are centered at the nanomesh pores.
The STM figure on the right shows naphthalocyanine (Nc) molecules that were vapor-deposited onto the nanomesh. These planar molecules have a diameter of about 2 nm, comparable to that of the nanomesh pores (see upper inset). The molecules form a strikingly well-ordered array with the periodicity of the nanomesh (3.22 nm). The lower inset shows a region of this substrate at higher resolution, where individual molecules are trapped inside the pores. In addition, the molecules appear to keep their native conformation, meaning that their functionality is preserved, which remains a challenge in nanoscience.
Such systems with wide spacing between individual molecules/clusters and negligible intermolecular interactions might be interesting for applications such as molecular electronics and memory elements, in photochemistry or in optical devices.
Preparation and analysis
Well-ordered nanomeshes are grown by thermal decomposition of borazine (HBNH)3, a colorless substance that is liquid at room temperature. The nanomesh results after exposing the atomically clean Rh(111) or Ru(0001) surface to borazine by chemical vapor deposition (CVD).
The substrate is kept at a temperature of 796 °C (1070 K) while borazine is introduced into the vacuum chamber at a dose of about 40 L (1 langmuir = 10⁻⁶ torr·s). A typical borazine vapor pressure inside the ultra-high vacuum chamber during the exposure is 3×10⁻⁷ mbar.
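As a plausibility check, the dose and pressure quoted above imply an exposure of a few minutes. The Python sketch below assumes only the unit definitions (1 langmuir = 10⁻⁶ torr·s, 1 mbar ≈ 0.75 torr):

MBAR_TO_TORR = 0.750062

dose_torr_s = 40.0 * 1e-6                 # 40 L expressed in torr*s
pressure_torr = 3e-7 * MBAR_TO_TORR       # chamber pressure in torr

exposure_s = dose_torr_s / pressure_torr
print(f"exposure time: {exposure_s:.0f} s (~{exposure_s / 60:.1f} min)")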
After cooling down to room temperature, the regular mesh structure is observed using different experimental techniques. Scanning tunneling microscopy (STM) gives a direct look on the local real space structure of the nanomesh, while low energy electron diffraction (LEED) gives information about the surface structures ordered over the whole sample. Ultraviolet photoelectron spectroscopy (UPS) gives information about the electronic states in the outermost atomic layers of a sample, i.e. electronic information of the top substrate layers and the nanomesh.
See also
Other forms
CVD of borazine on other substrates has not led so far to the formation of a corrugated nanomesh. A flat BN layer is observed on nickel and palladium, whereas stripped structures appear on molybdenum instead.
References and notes
Other links
http://www.nanomesh.ch
http://www.nanomesh.org
Two-dimensional nanomaterials
Self-organization
Thin films
Nitrides
Boron compounds
III-V compounds
Transition metals
NASA spin-off technologies | Nanomesh | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,274 | [
"Self-organization",
"Inorganic compounds",
"Materials science",
"Nanotechnology",
"Planes (geometry)",
"III-V compounds",
"Thin films",
"Dynamical systems"
] |
13,337,091 | https://en.wikipedia.org/wiki/Water%20chiller | A water chiller is a device used to lower the temperature of water. Most chillers use refrigerant in a closed loop system to facilitate heat exchange from water where the refrigerant is then pumped to a location where the waste heat is transferred to the atmosphere. However, there are other methods in performing this action.
In hydroponics, pumps, lights and ambient heat can warm the reservoir water, leading to plant root and health problems. For ideal plant health, a chiller can be used to lower the water temperature below the ambient level to a range suited to most plants. This results in healthy root production and efficient absorption of nutrients.
In air conditioning, chilled water is often used to cool a building's air and equipment, especially in situations where many individual rooms must be controlled separately, such as a hotel. A chiller lowers the water temperature to a set range before the water is pumped to the location to be cooled.
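For a rough sense of the cooling capacity such a chiller must supply, the heat load can be estimated from the water volume, the desired temperature drop, and the pull-down time. The following Python sketch is a generic estimate with illustrative figures, not a sizing method taken from this article:

RHO_WATER = 1.0      # kg per litre
C_WATER = 4186.0     # J/(kg*K), specific heat of water

def chiller_load_watts(volume_l, delta_t_k, pulldown_s, steady_gain_w=0.0):
    # Cooling power needed for the pull-down plus any steady heat input.
    pulldown_w = volume_l * RHO_WATER * C_WATER * delta_t_k / pulldown_s
    return pulldown_w + steady_gain_w

# Example: cool a 100 L hydroponic reservoir by 5 K in 2 hours, with 50 W
# of continuous heat input from the circulation pump.
print(f"{chiller_load_watts(100, 5, 2 * 3600, steady_gain_w=50):.0f} W")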
See also
Chiller
Gardening
Notes
Hydroponics
Cooling technology
Heating, ventilation, and air conditioning
Mechanical engineering | Water chiller | [
"Physics",
"Engineering"
] | 212 | [
"Mechanical engineering stubs",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
13,337,259 | https://en.wikipedia.org/wiki/Retardation%20factor | In chromatography, the retardation factor (R) is the fraction of an analyte in the mobile phase of a chromatographic system. In planar chromatography in particular, the retardation factor RF is defined as the ratio of the distance traveled by the center of a spot to the distance traveled by the solvent front. Ideally, the values for RF are equivalent to the R values used in column chromatography.
Although the term retention factor is sometimes used synonymously with retardation factor in regard to planar chromatography, the term is not defined in this context. However, in column chromatography, the retention factor or capacity factor (k) is defined as the ratio of the time an analyte is retained in the stationary phase to the time it is retained in the mobile phase, which is inversely proportional to the retardation factor.
General definition
In chromatography, the retardation factor, R, is the fraction of the sample in the mobile phase at equilibrium, defined as:
R = (amount of analyte in the mobile phase) / (total amount of analyte)
Planar chromatography
The retardation factor, RF, is commonly used in paper chromatography and thin layer chromatography (TLC) for analyzing and comparing different substances. It can be mathematically described by the following ratio:
RF = (distance traveled by the center of the spot) / (distance traveled by the solvent front)
An RF value will always be in the range 0 to 1; if the substance moves, it can only move in the direction of the solvent flow, and cannot move faster than the solvent. For example, if a particular substance in an unknown mixture travels 2.5 cm and the solvent front travels 5.0 cm, the retardation factor would be 0.50. One can choose a mobile phase with different characteristics (particularly polarity) in order to control how far the substance being investigated migrates.
An RF value is characteristic for any given compound (provided that the same stationary and mobile phases are used). It can provide corroborative evidence as to the identity of a compound. If the identity of a compound is suspected but not yet proven, an authentic sample of the compound, or standard, is spotted and run on a TLC plate side by side (or on top of each other) with the compound in question. Note that this identity check must be performed on a single plate, because it is difficult to duplicate all the factors which influence RF exactly from experiment to experiment.
Relationship with retention factor
In terms of the retention factor (k), the retardation factor (R) is defined as follows:
R = 1 / (1 + k)
based on the definition of k:
k = ts / tm = (1 − R) / R
where ts and tm are the times the analyte spends in the stationary and mobile phases, respectively.
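A minimal sketch of these relationships in Python, reusing the worked example above (spot at 2.5 cm, solvent front at 5.0 cm); the function names are chosen here for illustration and are not standard library calls:

def retardation_factor(spot_cm, solvent_front_cm):
    # R_F = distance traveled by the spot / distance traveled by the solvent front
    return spot_cm / solvent_front_cm

def retention_to_retardation(k):
    # Column chromatography: R = 1 / (1 + k)
    return 1.0 / (1.0 + k)

def retardation_to_retention(r):
    # Inverse relationship: k = (1 - R) / R
    return (1.0 - r) / r

rf = retardation_factor(2.5, 5.0)
print(rf)                            # 0.5, as in the worked example
print(retardation_to_retention(rf))  # k = 1.0: equal time in each phase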
References
Chromatography | Retardation factor | [
"Chemistry"
] | 525 | [
"Chromatography",
"Separation processes"
] |
13,337,318 | https://en.wikipedia.org/wiki/Model-based%20design | Model-based design (MBD) is a mathematical and visual method of addressing problems associated with designing complex control, signal processing and communication systems. It is used in many motion control, industrial equipment, aerospace, and automotive applications. Model-based design is a methodology applied in designing embedded software.
Overview
Model-based design provides an efficient approach for establishing a common framework for communication throughout the design process while supporting the development cycle (V-model). In model-based design of control systems, development is manifested in these four steps:
modeling a plant,
analyzing and synthesizing a controller for the plant,
simulating the plant and controller,
integrating all these phases by deploying the controller.
Model-based design is significantly different from traditional design methodology. Rather than using complex structures and extensive software code, designers can define plant models with advanced functional characteristics using continuous-time and discrete-time building blocks. These models, used with simulation tools, can lead to rapid prototyping, software testing, and verification. Not only is the testing and verification process enhanced; in some cases hardware-in-the-loop simulation can also be used to test dynamic effects on the system more quickly and far more efficiently than with traditional design methodology.
History
As early as the 1920s two aspects of engineering, control theory and control systems, converged to make large-scale integrated systems possible. In those early days controls systems were commonly used in the industrial environment. Large process facilities started using process controllers for regulating continuous variables such as temperature, pressure, and flow rate. Electrical relays built into ladder-like networks were one of the first discrete control devices to automate an entire manufacturing process.
Control systems gained momentum, primarily in the automotive and aerospace sectors. In the 1950s and 1960s, the push to space generated interest in embedded control systems. Engineers constructed control systems, such as engine control units and flight simulators, that could be part of the end product. By the end of the twentieth century, embedded control systems were ubiquitous, as even major household consumer appliances such as washing machines and air conditioners contained complex and advanced control algorithms, making them much more "intelligent".
In 1969, the first computer-based controllers were introduced. These early programmable logic controllers (PLCs) mimicked the operations of already available discrete control technologies that used outdated relay ladders. The advent of PC technology brought a drastic shift in the process and discrete control market. An off-the-shelf desktop loaded with adequate hardware and software can run an entire process unit, and execute complex and established PID algorithms or work as a Distributed Control System (DCS).
Steps
The main steps in the model-based design approach are:
Plant modeling. Plant modeling can be data-driven or based on first principles. Data-driven plant modeling uses techniques such as System identification. With system identification, the plant model is identified by acquiring and processing raw data from a real-world system and choosing a mathematical algorithm with which to identify a mathematical model. Various kinds of analysis and simulations can be performed using the identified model before it is used to design a model-based controller. First-principles based modeling is based on creating a block diagram model that implements known differential-algebraic equations governing plant dynamics. A type of first-principles based modeling is physical modeling, where a model consists in connected blocks that represent the physical elements of the actual plant.
Controller analysis and synthesis. The mathematical model conceived in step 1 is used to identify dynamic characteristics of the plant model. A controller can then be synthesized based on these characteristics.
Offline simulation and real-time simulation. The time response of the dynamic system to complex, time-varying inputs is investigated. This is done by simulating a simple LTI (Linear Time-Invariant) model, or by simulating a non-linear model of the plant with the controller. Simulation allows specification, requirements, and modeling errors to be found immediately, rather than later in the design effort. Real-time simulation can be done by automatically generating code for the controller developed in step 2. This code can be deployed to a special real-time prototyping computer that can run the code and control the operation of the plant. If a plant prototype is not available, or testing on the prototype is dangerous or expensive, code can be automatically generated from the plant model. This code can be deployed to a special real-time computer connected to the target processor on which the controller code runs. Thus a controller can be tested in real time against a real-time plant model.
Deployment. Ideally this is done via code generation from the controller developed in step 2. It is unlikely that the controller will work on the actual system as well as it did in simulation, so an iterative debugging process is carried out by analyzing results on the actual target and updating the controller model. Model-based design tools allow all these iterative steps to be performed in a unified visual environment.
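As an illustration of steps 1 to 3, the following minimal Python sketch simulates a hand-tuned PI controller against a first-order plant model. The plant gain, time constant, controller gains and step size are illustrative assumptions rather than values from any particular tool:

dt = 0.01                 # simulation step (s)
K, tau = 2.0, 0.5         # plant model: tau * dy/dt + y = K * u
kp, ki = 1.5, 4.0         # hand-synthesized PI controller gains

y, integral = 0.0, 0.0
setpoint = 1.0
for step in range(int(2.0 / dt)):      # simulate 2 seconds
    error = setpoint - y
    integral += error * dt
    u = kp * error + ki * integral     # PI control law
    y += dt * (K * u - y) / tau        # forward-Euler plant update
    if step % 50 == 0:
        print(f"t={step * dt:4.2f}s  y={y:.3f}")

With these gains the closed loop is critically damped; in a full model-based design flow, the same model would then feed real-time simulation and code generation.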
Disadvantages
The disadvantages of model-based design are fairly well understood at this point in the methodology's lifecycle.
One major disadvantage is that the approach is a blanket, one-size-fits-all approach to standard embedded and systems development. The time it takes to port between processors and ecosystems can often outweigh the time savings the approach offers in simpler, lab-based implementations.
Much of the compilation toolchain is closed source and prone to fencepost errors and other common compilation errors that are easily corrected in traditional systems engineering.
Design and reuse patterns can lead to model implementations that are not well suited to the task at hand. For example, a controller model for a conveyor-belt production facility that uses a thermal sensor, a speed sensor, and a current sensor is generally not well suited for re-implementation in a motor controller; yet it is very easy to port such a model over, introducing all of its software faults along the way.
Version control issues: Model-based design can encounter significant challenges due to the lack of high-quality tools for managing version control, particularly for handling diff and merge operations. This can lead to difficulties in managing concurrent changes and maintaining robust revision control practices. Although newer tools, such as 3-way merge, have been introduced to address these issues, effectively integrating these solutions into existing workflows remains a complex task.
While model-based design can simulate test scenarios and interpret simulations well, it is often not suitable for real-world production environments. Overreliance on a given toolchain can lead to significant rework and may compromise entire engineering approaches. While it is suitable for bench work, the choice to use it for a production system should be made very carefully.
Advantages
Some of the advantages model-based design offers in comparison to the traditional approach are:
Model-based design provides a common design environment, which facilitates general communication, data analysis, and system verification between various (development) groups.
Engineers can locate and correct errors early in system design, when the time and financial impact of system modification are minimized.
Design reuse, for upgrades and for derivative systems with expanded capabilities, is facilitated.
Because of the limitations of graphical tools, design engineers previously relied heavily on text-based programming and mathematical models. However, developing these models was time-consuming and highly prone to error. In addition, debugging text-based programs is a tedious process, requiring much trial and error before a final fault-free model can be created, especially since mathematical models undergo unseen changes during the translation through the various design stages.
Graphical modeling tools aim to improve these aspects of design. These tools provide a very generic and unified graphical modeling environment, and they reduce the complexity of model designs by breaking them into hierarchies of individual design blocks. Designers can thus achieve multiple levels of model fidelity by simply substituting one block element with another. Graphical models also help engineers to conceptualize the entire system and simplify the process of transporting the model from one stage to another in the design process. Boeing's simulator EASY5 was among the first modeling tools to be provided with a graphical user interface, together with AMESim, a multi-domain, multi-level platform based on the Bond Graph theory. This was soon followed by tools like 20-sim and Dymola, which allowed models to be composed of physical components like masses, springs, resistors, etc. These were later followed by many other modern tools such as Simulink and LabVIEW.
See also
Control theory
Functional specification
Model-driven engineering
Scientific modelling
Specification (technical standard)
Systems engineering
References
Control engineering | Model-based design | [
"Engineering"
] | 1,744 | [
"Control engineering"
] |
13,337,703 | https://en.wikipedia.org/wiki/Whinstone | Whinstone is a term used in the quarrying industry to describe any hard dark-coloured rock. Examples include the igneous rocks, basalt and dolerite, as well as the sedimentary rock chert.
Etymology
The Northern English/Scots term whin is first attested in the fourteenth century, and the compound whinstone from the sixteenth. The Oxford English Dictionary concludes that the etymology of whin is obscure, though it has been claimed, fancifully, that the term 'whin' derives from the sound it makes when struck with a hammer.
Description
Massive outcrops of whinstone occur at the Pentland Hills, Scotland and the Whin Sills, England.
It is used for road chippings and dry stone walls, but its natural angular shapes do not fit together well, it is not easy to build with, and its hardness makes it a difficult material to work. A common use, in its ground by-product state known as whin dust, is in the laying of patios and driveways.
References
Rocks
Quarrying | Whinstone | [
"Physics"
] | 212 | [
"Rocks",
"Physical objects",
"Matter"
] |
13,338,132 | https://en.wikipedia.org/wiki/Medipix | Medipix is a family of photon counting and particle tracking pixel detectors developed by an international collaboration, hosted by CERN.
Design
These are hybrid detectors, in which a semiconductor sensor layer is bonded to a processing electronics layer.
The sensor layer is a semiconductor, such as silicon, GaAs, or CdTe, in which the incident radiation creates a cloud of electron-hole pairs. The charge is then collected at the pixel electrodes and conducted, via bump bonds, to the CMOS electronics layer.
The pixel electronics first amplifies the signal and then compares the signal amplitude with a pre-set discrimination level (an energy threshold). The subsequent signal processing depends on the type of device. A standard Medipix detector increments the counter in the appropriate pixel if the signal is above the discrimination level. The Medipix device also contains an upper discrimination level, so only signals within a range of amplitudes (an energy window) are accepted.
Timepix devices offer two more modes of operation in addition to counting. The first is the so-called "Time-over-Threshold" (ToT) mode (a Wilkinson-type analog-to-digital converter), in which the counter in each pixel records the number of clock cycles for which the pulse remains above the discrimination level. This number is proportional to the energy of the detected radiation. This mode is useful for particle tracking applications or for direct spectral imaging.
The second mode of the Timepix chip is "Time-of-Arrival", in which pixel counters record the time between a trigger and the detection of radiation quanta with energy above the discrimination level. This mode of operation finds use in time-of-flight (ToF) applications, for instance in neutron imaging.
Every individual radiation hit is processed this way by the electronics integrated in each pixel; the device can therefore be considered as 65,536 individual counting detectors, or even spectrometers. The energy discriminators are adjustable, so by scanning their levels it is possible to measure over frequency bands of the incoming radiation, enabling spectroscopic X-ray imaging.
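A minimal sketch of this per-pixel logic in Python: counting mode with a lower and an upper discriminator, plus a toy Time-over-Threshold estimate. The energy values, the linear ToT model and the clock frequency are illustrative assumptions, not Medipix specifications:

import random

LOWER_KEV, UPPER_KEV = 5.0, 30.0   # discriminator window (illustrative)

def count_hits(energies):
    # Counting mode: increment the pixel counter only for hits in the window.
    counter = 0
    for e in energies:
        if LOWER_KEV <= e <= UPPER_KEV:
            counter += 1
    return counter

random.seed(1)
hits = [random.expovariate(1 / 15.0) for _ in range(1000)]  # toy spectrum, keV
print("counts in window:", count_hits(hits))

def time_over_threshold(energy_kev, clock_mhz=48, kev_per_us=10.0):
    # Timepix ToT mode: clock ticks the pulse stays above the lower threshold,
    # modelled here as simply proportional to the deposited energy.
    microseconds = max(0.0, energy_kev - LOWER_KEV) / kev_per_us
    return int(microseconds * clock_mhz)

print("ToT of a 20 keV hit:", time_over_threshold(20.0), "clock ticks")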
Medipix-2, Timepix, and Medipix-3 are all 256×256 pixels, each 0.055 mm (55 μm) square, forming a total area of 14.08 mm × 14.08 mm. Larger-area detectors can be created by bump-bonding many chips to larger monolithic sensors. Detectors of sizes from 2×2 to 2×4 chips are commonly used. Even larger, gapless areas can be created using edgeless sensor technology. Each Medipix/Timepix chip has its own sensor; these assemblies are tiled next to each other to create nearly arbitrarily sized detector arrays (the largest built using this technology has 10×10 chips, hence 14×14 cm and 2560×2560 pixels).
Comparison with existing technologies
Photon counting pixel detectors represent the next generation of radiation imaging detectors. The photon counting technology overcomes limitations of current imaging devices.
Versions
Medipix-1 was the first device of the Medipix family. It had 64×64 pixels of 170 μm pitch. Pixels contained one comparator (threshold) with 3-bit per-pixel offset adjustment. The minimum threshold was ~5.5 keV. The counter depth was 15-bit. The maximum count rate was 2 MHz per pixel.
Medipix-2 is the successor of Medipix-1. The pixel pitch was reduced to 55 μm and the pixel array is 256×256 pixels. Each pixel has two discrimination levels (upper and lower threshold), each adjustable individually per pixel using a 3-bit offset. The maximum count rate is about 100 kHz per pixel (albeit in pixels with roughly nine times smaller area than those of Medipix-1).
Medipix-2 MXR is an improved version of the Medipix-2 device, with better temperature stability, pixel counter overflow protection, increased radiation hardness and many other improvements.
Timepix is a device conceptually derived from Medipix-2. It adds two modes to the pixels in addition to counting of detected signals: Time-over-Threshold (TOT) and Time-of-Arrival (TOA). In TOT mode, the detected pulse height is recorded in the pixel counter. TOA mode measures the time between a trigger and the arrival of radiation in each pixel.
Medipix-3 is the latest generation of photon counting devices for X-ray imaging. The pixel pitch remains the same (55 μm) as well as the pixel array size (256x256). It has better energy resolution through real time correction of charge sharing. It also has multiple counters per pixel that can be used in several different modes. This allows for continuous readout and up to eight energy thresholds.
Timepix-3 is a successor of the Timepix chip. One of the biggest distinguishing changes is the approach to the data readout. All previous chips used the frame-based readout, i.e. the whole pixel matrix was read out at once. Timepix-3 has event-based readout where values recorded in pixels are read out immediately after the hit together with coordinates of the hit pixel. The chip therefore generates a continuous stream of data rather than a sequence of frames. The next major difference compared to the previous Timepix chip is the ability to measure the hit amplitude simultaneously with the time of arrival. Other parameters such as energy and timing resolution were also improved compared to the original Timepix chip.
Timepix-4 is the successor of the Timepix-3 chip. It has generally stronger specifications: its time-of-arrival resolution of 195 ps is 8 times finer than that of Timepix-3, it has a larger pixel matrix of 512×448 pixels, and it can handle 8 times higher data rates.
Readout electronics
The digital data recorded by Medipix/Timepix devices are transferred to a computer via readout electronics. The readout electronics is also responsible for setting up and controlling the detector parameters. Several readout systems were developed within the Medipix collaboration.
Muros
Muros was one of the first readout systems for Medipix detectors, developed at Nikhef in Amsterdam, The Netherlands. It was a relatively compact readout enabling access to all features of the detector, and allowed a maximum frame rate of about 30 frames/s with a single chip.
USB interface
This electronics was developed at IEAP-CTU, Czech Republic. It provides a lower frame rate compared to Muros, but the electronics was integrated into a box no larger than a pack of cigarettes. Moreover, no special PC hardware card was needed, as was the case with Muros. The USB interface therefore quickly became the most widely used readout within the Medipix collaboration and its partners.
Relaxd
Relaxd is a readout electronics developed at Nikhef. Data is transferred to a PC via a 1 Gbit/s Ethernet connection; the maximum frame rate is around 100 frames/s.
Fitpix
Fitpix is the next generation of the USB interface, developed by the group in Prague. The electronics implements parallel Medipix/Timepix readout, and the maximum frame rate therefore reaches 850 frames/s. It also supports serial readout, with a frame rate of 100 frames/s.
Minipix
Minipix is a miniaturized device integrating the chip and readout electronics, developed by ADVACAM s.r.o. in Prague. The whole system has the size of a USB flash drive. Several of these devices were used on the International Space Station as radiation monitoring systems.
Spidr3
Spidr3 is a powerful readout electronics for the Timepix3 and Medipix3 chips. The readout rate is about 12,500 frames per second for the Medipix3, and about 120 million hits per second for the Timepix3. The data are transferred over a fast 10 Gbit/s optical fiber connection. The chip and readout system are developed together with Nikhef and Amsterdam Scientific Instruments.
Excalibur and Merlin systems
Both systems are developed at Diamond Light Source, UK, for Medipix3 readout and applications at synchrotrons. Merlin is available with CdTe sensors from Quantum Detectors who are collaborating on further development with Diamond Light Source.
LAMBDA system
Lambda is a high-speed (2,000 fps) big area (12 chips) readout systems developed at DESY. Lambda is available with high-Z sensor options, such as GaAs (Gallium-Arsenide) and CdTe (Cadmium-Telluride).
MARS
MARS is a gigabit Ethernet readout accommodating up to 6 Medipix 2 or Medipix 3 detectors. The electronics was developed at University of Otago, Christchurch, New Zealand.
Applications
X-ray imaging
X-ray imaging is the primary application field of Medipix detectors. To X-ray imaging, Medipix offers in particular the advantages of higher dynamic range and energy sensitivity.
Space radiation dosimetry
Timepix-based detectors from the Medipix2 Collaboration have been flown on the International Space Station since 2013, and on the first flight test (EFT-1) of NASA's new Orion Multi-Purpose Crew Vehicle in December 2014. Current plans call for similar devices to be flown as the primary radiation area monitors on the future initial crewed Orion missions.
Other
The detectors may also find applications in astronomy, high energy physics, medical imaging, and X-ray spectroscopy.
History
Medipix-1: Early 90s.
Medipix-2: Late 90s.
Medipix-3: Collaboration formed 2006.
Medipix-4: Collaboration formed 2016.
See also
MARS Bioimaging
References
External links
Medipix collaboration home page
Medipix3 collaborators
Medipix3 page on CERN’s Knowledge and Technology Transfer website
Detectors
Ionising radiation detectors
Measuring instruments
Medical imaging
Photons
Radiography
X-ray instrumentation
X-rays
CERN experiments | Medipix | [
"Physics",
"Technology",
"Engineering"
] | 2,085 | [
"Radioactive contamination",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Measuring instruments",
"X-ray instrumentation",
"Ionising radiation detectors"
] |
13,338,259 | https://en.wikipedia.org/wiki/House%20sign | House signs have been used since ancient times to personalise a dwelling, turning a house into a home.
See also
House number sign
Paternoster Row (London)
References
Infographics | House sign | [
"Engineering"
] | 40 | [
"Architecture stubs",
"Architecture"
] |
13,338,370 | https://en.wikipedia.org/wiki/Transocean%20John%20Shaw | Transocean John Shaw was a semi-submersible drilling rig designed by Friede & Goldman as a self-propelled modified & enhanced pacesetter, built and delivered in 1982 by Mitsui Engineering & Shipbuilding Ltd. in Japan.
The vessel, registered in Panama as a flag of convenience, was designed and outfitted to operate in harsh environments. The rig was capable of operating in deep water and drilling deep wells, using a 10,000 PSI blowout preventer (BOP) and a marine riser.
The rig was named after John S. Shaw, former chairman of Birmingham, Alabama-based Sonat Inc. Sonat spun off its offshore division as Sonat Offshore in 1993, and it changed its name to Transocean in 1996. In January 2016, it was decided to scrap the rig, and after a period berthed at Invergordon, Scotland, it departed, under tow, for Aliaga, Turkey on 19 April 2016.
References
External links
Transocean official website
Oil platforms
Semi-submersibles
Ships built by Mitsui Engineering and Shipbuilding
1982 ships
Drilling rigs
Transocean | Transocean John Shaw | [
"Chemistry",
"Engineering"
] | 235 | [
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
988,364 | https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Lionet | François Lionet is a French programmer, best known for having written STOS BASIC on the Atari ST and AMOS BASIC on the Amiga (along with Constantin Sotiropoulos). He has also written several games on these platforms.
In 1994, he founded Clickteam with Yves Lamoureux, producing the Klik series of games-creation tools, including Multimedia Fusion.
Software
2023 AOZ Studio 1
2019 AMOS 2 project (vaporware)
2013 Clickteam Fusion 2.5
2006 The Games Factory 2.0
2006 Multimedia Fusion 2.0
2002 Multimedia Fusion 1.5
1999
1997-98 Multimedia Fusion 1.0
1996-97 The Games Factory 1.0
1995-96 Corel Click & Create
1993-94 Klik & Play
1993 AMOSPro Compiler
1992 AMOS Professional
1992 Easy AMOS
1991 AMOS Compiler
1990 AMOS BASIC
1989 STOS Compiler
1988 STOS BASIC
1987 Captain Blood (PC and C64)
1983-86 Various 8-bit games
References
External links
AWI on Patron (2024)
Youtube
Amiga people
French computer programmers
Living people
Year of birth missing (living people)
Video game programmers | François Lionet | [
"Technology"
] | 223 | [
"Computing stubs",
"Computer specialist stubs"
] |
988,722 | https://en.wikipedia.org/wiki/English%20Electric%20DEUCE | The DEUCE (Digital Electronic Universal Computing Engine) was one of the earliest British commercially available computers, built by English Electric from 1955. It was the production version of the Pilot ACE, itself a cut-down version of Alan Turing's ACE.
Hardware description
The DEUCE had 1450 thermionic valves, and used mercury delay lines for its main memory; each of the 12 delay lines could store 32 instructions or data words of 32 bits each. It adopted the then high 1 megahertz clock rate of the Pilot ACE. Input/output was via Hollerith 80-column punch-card equipment. The reader read cards at the rate of 200 per minute, while the card punch rate was 100 cards per minute. The DEUCE also had an 8192-word magnetic drum for main storage. To access any of the 256 tracks of 32 words, the drum had one group of 16 read and one group of 16 write heads, each group on independent moveable arms, each capable of moving to one of 16 positions. Access time was 15 milliseconds if the heads were already in position; an additional 35 milliseconds was required if the heads had to be moved. There was no rotational delay incurred when reading from and writing to drum. Data was transferred between the drum and one of the 32-word delay lines.
The DEUCE could be fitted with paper tape equipment; the reader speed was 850 characters per second, while the paper tape output speed was 25 characters per second. (The DEUCE at the University of New South Wales (UTECOM) had a Siemens M100 teleprinter attached in 1964, giving 10 characters per second input/output). Decca magnetic tape units could also be attached. The automatic multiplier and divider operated asynchronously (that is, other instructions could be executed while the multiplier/divider unit was in operation). Two arithmetic units were provided for integer operations: one of 32 bits and another capable of performing 32-bit operations and 64-bit operations. Auto-increment and auto-decrement were provided on eight registers from about 1957. Array arithmetic and array data transfers were permitted. Compared with contemporaries such as the Manchester Mark 1, DEUCE was about ten times faster.
The individual words of the quadruple registers were associated with an auto-increment/decrement facility. That facility could be used for counting and for modifying instructions (for indexing, loop control, and for changing the source or destination address of an instruction).
Being a bit-serial machine, access time to a single register was 32 microseconds, a double register 64 microseconds, and a quadruple register 128 microseconds. That for a delay line was 1024 microseconds.
Instruction times were: addition, subtraction, logical operations: 64 microseconds for 32-bit words; double-precision 96 microseconds; multiplication and division 2 milliseconds. For array arithmetic and transfer operations, time per word was 33 microseconds per word for 32 words.
Floating-point operations were provided by software; times: 6 milliseconds for addition and subtraction, 5.5 milliseconds average for multiplication, and 4.5 milliseconds average for division.
In the early machines, all instructions involving the magnetic drum were interlocked while an operation was in progress. Thus, if the read heads were moved, any subsequent magnetics operation, such as reading or writing a track, was prohibited from proceeding until the first had completed. From about 1957, a new unit, called "rationalised magnetics", was made available. This unit eliminated unnecessary interlocks. Thus, it was possible to execute an instruction that moved the read heads; if it was followed by an instruction to move the write heads, or to write a track, such instructions were not interlocked and could proceed in parallel with moving the read heads.
The front panel of the DEUCE featured two CRT displays: one showed the current contents of registers, while the other showed the content of any one of the mercury delay line stores.
From about 1958, seven extra delay lines could be attached, giving 224 more words of high-speed store. An IBM 528 combined reader–punch could be substituted for the Hollerith equipment, giving the same input/output speeds, in which case the machine was called Mark II. Automatic conversion of alphanumeric data to BCD was provided on input, and the reverse operation on output, for all eighty card columns. On this equipment, reading and punching could proceed simultaneously, if required, and thus could be used for reading in a record, updating it, and then punching an updated record simultaneously with reading in the next record. With the seven extra delay lines, the DEUCE was denoted Mark IIA.
Software
The principal high-level programming languages were GEORGE (General Order Generator), ALPHACODE, STEVE, TIP, GIP, and ALGOL. Assembler language translators included ZP43 and STAC.
Invented by Charles Leonard Hamblin in 1957, GEORGE was closest to present-day programming languages. It used Reverse Polish Notation. For example, to evaluate
e = ay² + by + c, one wrote
a y dup × × b y × + c + (e).
where "dup" duplicates the previous entry, being the same as using "y" here.
GEORGE provided a 12-position accumulator as a push-down pop-up stack. Using a variable name in a program (e.g., 'd') brought the value of variable 'd' into the accumulator (i.e., pushed d onto the top of the stack), while enclosing a name in parentheses (e.g., (d)) assigned to variable 'd' the value at the top of the stack (accumulator). To destroy (pop and discard) the value at the top of the stack, the semicolon (;) was used.
The following GEORGE program reads in ten numbers and prints their squares:
1, 10 rep (i)
read
dup ×
punch
;
]
In the above program, the "dup" command duplicated the top of the stack,
so that there were then two copies of the value at the top of the stack.
GIP (General Interpretive Programme) was a control program for manipulating programs called "bricks". Its principal service was in the running of programs from the several hundred in the DEUCE linear algebra library. Preparation of such a program involved selecting the required bricks (on punch cards), copying them and GIP in a reproducing punch, and assembling the copies into a deck of cards. Next, simple codewords would be written to use the bricks to perform such tasks as: matrix multiplication; matrix inversion; term-by-term matrix arithmetic (addition, subtraction, multiplication, and division); solving simultaneous equations; input; and output. The dimensions of matrices were never specified in the codewords. Dimensions were taken from the matrices themselves, either from a card preceding the data cards or from the matrices as stored on drum. Thus, programs were entirely general. Once written, such a program handled any size of matrices (up to the capacity of the drum, of course). A short program to read in a matrix from cards, to transpose the matrix, and to punch the results on cards requires the following codewords:
0, 0, 5, 1
5, 0, 120, 2
120, 0, 0, 3
In each of the codewords, the fourth number is the brick number. The first codeword specifies that the matrix is read from cards and stored at drum address 5; the second codeword specifies that the matrix at drum address 5 is transposed, and the result is stored at drum address 120; and the third punches that result on cards.
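In the same spirit, a toy Python dispatcher shows how codewords select bricks and why matrix dimensions never appear in the codewords. The dict-as-drum representation, the NumPy matrices and the function bodies are illustrative; only the codeword layout and brick numbers follow the example above:

import numpy as np

drum = {}   # stand-in for the magnetic drum, keyed by drum address

def brick_read(src, dest):        # brick 1: read a matrix "from cards"
    drum[dest] = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

def brick_transpose(src, dest):   # brick 2: transpose
    drum[dest] = drum[src].T

def brick_punch(src, dest):       # brick 3: punch the result "on cards"
    print(drum[src])

bricks = {1: brick_read, 2: brick_transpose, 3: brick_punch}

program = [(0, 0, 5, 1), (5, 0, 120, 2), (120, 0, 0, 3)]
for src, _, dest, brick in program:
    bricks[brick](src, dest)      # dimensions travel with the stored matrix

Because each brick takes its dimensions from the data on the "drum", the same three codewords would transpose a matrix of any size.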
STAC was a macro-assembler. Most instructions were written in the form of a transfer, in decimal, such as 13-16, meaning to copy the word in register 13 to register 16. The location of the instruction was not specified. STAC allocated an instruction to a word in a delay line, and computed the six components of the binary instruction. It allocated the next instruction to a location that was optimum, to be executed as soon as the previous instruction was complete, if possible.
The following program reads in a value, n, and then reads in n binary integers. It punches out the integer and its square. Comments in lower case explain the instruction.
1.0 12-24 start the card reader. The location of the program is specified as 1.0.
0-13X read one number (n) from the card reader. The letter X causes the computer to wait
until the first row of the card has arrived at the reading station.
R2 12-24 start or re-start the card reader.
0-16X read one number to be squared, store it in the multiplier register.
9-24 stop the card reader.
16-21.3 copy the number to the multiplicand register.
30-21.2 clear the low-order bits of the multiplicand register.
MULT
10-24 start the card punch.
21.2-29X send the square to the card punch.
9-24 stop the card punch.
27-26 decrement n.
13-28 R1 test for zero. Branch on zero to R1; branch on not zero to R2.
R1 1-1X halt; the program is complete.
STAC would produce the following instructions (in addition to the binary program). The memory location of each instruction is shown at the left.
1.0 12-24
1.2 0-13X
1.4 12-24
1.6 0-16X
1.8 9-24
1.10 16-21.3
1.13 30-21.2
1.16 0-24 wait 1
1.18 1-1 wait 1
1.20 10-24
1.22 21.2-29X
1.24 9-24
1.26 27-26
1.28 13-28 1.3
1.3 1-1X
Wait and timing numbers are not shown, except for the multiplication.
Programming
Programming the DEUCE was different from other computers. The serial nature of the delay lines required that instructions be ordered such that when one instruction completed execution, the next one was ready to emerge from a Delay Line. For operations on the single registers, the earliest time that the next instruction could be obeyed was 64 microseconds after the present one. Thus, instructions were not executed from sequential locations. In general, instructions could transfer one or more words. Consequently, each instruction specified the location of the next instruction. Optimum programming meant that as each instruction was executed, the next one was just emerging from a Delay Line. The position of instructions in the store could greatly affect performance if the location of an instruction was not optimum.
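A back-of-envelope model of this optimum-coding problem fits in a few lines of Python. It assumes only the nominal figures from the text (32-word delay lines, one 32-bit word per 32 microseconds) and ignores instruction-decode overlap, so it illustrates the idea rather than the machine's exact timing rules:

MINOR_CYCLE_US = 32   # one 32-bit word time at a 1 MHz clock
LINE_WORDS = 32       # words per mercury delay line

def wait_us(ready_word, target_word):
    # Microseconds spent waiting for the target word to emerge from the line.
    return ((target_word - ready_word) % LINE_WORDS) * MINOR_CYCLE_US

print(wait_us(7, 8))   # optimum placement: the very next word, 32 us
print(wait_us(8, 7))   # just missed it: 31 minor cycles, 992 us

The gap between 32 and 992 microseconds is exactly why non-optimum placement of instructions could degrade performance so badly.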
Reading data from the card reader was done in real-time – each row had to be read as it passed the read brushes, without stopping. Similarly for the card punch; the word for a particular row was prepared in advance and had to be ready when a given row of the card was in position under the punch knives. The normal mode of reading and punching was binary. Decimal input and output was performed via software.
The high-speed store consisted of four single-word registers of 32 bits each, three double-word registers, and two quadruple-word registers. Each 32-bit word of the double and quadruple-word registers could be addressed separately. They could also be accessed as a pair, and—in the case of the quadruple registers—as a group of three or four. The instruction store consisted of twelve mercury delay lines, each of 32 words, and numbered 1 to 12. Delay line 11 (DL11) served as the buffer between the magnetic drum and the high-speed store. Being a "transfer machine", data could be transferred a word at a time, a pair of words at a time, and any number of words up to 33 at a time. Thus, for example, 32 words read from the drum could be transferred as a block to any of the other delay lines; 4 words could be transferred as a block from one quadruple register to the other, or between a quadruple register and a delay line—all with one instruction. The 32 words of a delay line could be summed by passing them to the single-length adder (by means of a single instruction).
By a special link between DL10 and one register, namely, register 16, DL10 could be used as a push-down stack.
Production
The first three machines were delivered in the spring of 1955; in late 1958 a DEUCE Mark II improved model appeared. This version employed a combined card reader and punch. The combined IBM 528 reader and punch behaved like the separate Hollerith units on the earlier DEUCE Mark I machines; however, it was also provided with hardware conversion of alphanumeric data to BCD on input, and vice versa on output. Data could also be read in and punched simultaneously at 100 cards per minute. The DEUCE Mark IIA provided seven extra mercury delay lines, each of 32 words.
A total of 33 DEUCE machines were sold between 1955 and 1964, two being purchased by the engine manufacturer Bristol Siddeley.
The success of DEUCE was due to its program library of over 1000 programs and subroutines.
Hardware characteristics
DEUCE Mark 0 and I
Clock rate 1 MHz
Word size 32 bits
High speed store 384 words
Arithmetic
one 32-bit accumulator;
one 64-bit accumulator that could be used also as two 32-bit accumulators.
addition/subtraction
64 microseconds single length,
96 microseconds double precision
Addition of a single-length number to a double-length number, with automatic sign extension, 64 microseconds.
multiplication 2080 microseconds
division 2112 microseconds
magnetic drum 8192 words
separate read heads and write heads
Track read time 15 ms
Head shift time 35 ms
card reader speed 200 cards per minute
card punch speed 100 cards per minute
paper tape reader speed 850 character/second
tape: 5, 7, 8-row tape.
stopping time: ½ millisecond (m.s.)
start time: 20 milliseconds
paper tape punch speed 25 characters/second
tape: 5 or 7 rows
Software floating-point (average times)
addition/subtraction 6 m.s.
multiplication 5½ m.s.
division 4½ m.s.
DEUCE Mark II
As for DEUCE Mark I.
A combined IBM 528 card reader and punch could read cards at 200 per minute, and punch at 100 cards per minute. When simultaneously started, the reader and punch ran at 100 cards per minute. Automatic conversion to and from 6-bit characters was provided. This mode was in addition to the programmed conversion provided by the Mark I DEUCE.
DEUCE Mark IA and IIA
As above, with 7 extra delay lines providing 224 words of high-speed store.
Notes:
The multiplier and divider were asynchronous.
Several integers could be multiplied in a single execution of the multiply instruction, by inserting integers in the multiplier or multiplicand registers during multiplication, and by extracting results during multiplication.
Other special effects included counting the bits in a word, and converting Binary Coded Decimal (BCD) to binary.
Similarly for division, which could be used for
converting integers to Binary Coded Decimal (BCD), and for
converting pounds, shillings, and pence to pence.
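For illustration, the pounds-shillings-pence conversion mentioned above can be written out directly in Python; on the DEUCE the divider's quotient and remainder were exploited for such radix conversions, which the divmod calls below mirror (1 pound = 20 shillings, 1 shilling = 12 pence):

def lsd_to_pence(pounds, shillings, pence):
    return (pounds * 20 + shillings) * 12 + pence

def pence_to_lsd(total_pence):
    shillings, pence = divmod(total_pence, 12)   # division recovers pence
    pounds, shillings = divmod(shillings, 20)    # then shillings and pounds
    return pounds, shillings, pence

print(lsd_to_pence(2, 3, 4))   # 2 pounds 3s 4d -> 520 pence
print(pence_to_lsd(520))       # (2, 3, 4)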
See also
List of vacuum-tube computers
References
External links
This site has an extensive collection of original documents, including programs, subroutines, DEUCE News, and bulletins.
Oral history interview with Donald W. Davies, Charles Babbage Institute, University of Minnesota. Davies describes computer projects at the U.K. National Physical Laboratory, from the 1947 design work of Alan Turing to the development of the two ACE computers. Davies discusses a much larger, second ACE, and the decision to contract with English Electric Company to build the DEUCE—which he calls the first commercially produced computer in Great Britain.
"The Deuce" a 1955 Flight article on the Deuce
1950s computers
Early British computers
English Electric
Vacuum tube computers
32-bit computers
Computer-related introductions in 1955
Serial computers | English Electric DEUCE | [
"Technology"
] | 3,427 | [
"Serial computers",
"Computers"
] |
988,735 | https://en.wikipedia.org/wiki/Automatic%20Computing%20Engine | The Automatic Computing Engine (ACE) was a British early electronic serial stored-program computer design by Alan Turing. Turing completed the ambitious design in late 1945, having had experience in the years prior with the secret Colossus computer at Bletchley Park.
The ACE was not built, but a smaller version, the Pilot ACE, was constructed at the National Physical Laboratory and became operational in 1950. A larger implementation of the ACE design was the MOSAIC computer which became operational in 1955. ACE also led to the Bendix G-15 and other computers.
Background
The project was managed by John R. Womersley, superintendent of the Mathematics Division of the National Physical Laboratory (NPL). The use of the word Engine was in homage to Charles Babbage and his Difference Engine and Analytical Engine. Turing's technical design Proposed Electronic Calculator was the product of his theoretical work in 1936 "On Computable Numbers" and his wartime experience at Bletchley Park where the Colossus computers had been successful in breaking German military codes. In his 1936 paper, Turing described his idea as a "universal computing machine", but it is now known as the Universal Turing machine.
Turing was sought by Womersley to work in the NPL on the ACE project; he accepted and began work on 1 October 1945 and by the end of the year he completed his outline of his 'Proposed electronic calculator', which was the first reasonably complete design of a stored-program computer and, apart from being on a much larger scale than the final working machine, anticipated the final realisation in most important respects. However, because of the strict and long-lasting secrecy around the Bletchley Park work, he was prohibited (because of the Official Secrets Act) from explaining that he knew that his ideas could be implemented in an electronic device. The better-known EDVAC design presented in the First Draft of a Report on the EDVAC (dated 30 June 1945), by John von Neumann, who knew of Turing's theoretical work, received much publicity, despite its incomplete nature and questionable lack of attribution of the sources of some of the ideas.
Turing's report on the ACE was written in late 1945 and included detailed logical circuit diagrams and a cost estimate of £11,200. He felt that speed and size of memory were crucial and he proposed a high-speed memory of what would today be called 25 kilobytes, accessed at a speed of 1 MHz; he remarked that for the purposes required "the memory needs to be very large indeed by comparison with standards which prevail in most valve and relay work, and [so] it is necessary to look for some more economical form of storage", and that memory "appears to be the main limitation in the design of a calculator, i.e. if the storage problem can be solved all the rest is comparatively straightforward". The ACE implemented subroutine calls, whereas the EDVAC did not, and what also set the ACE apart from the EDVAC was the use of Abbreviated Computer Instructions, an early form of programming language. Initially, it was planned that Tommy Flowers, the engineer at the Post Office Research Station at Dollis Hill in north London, who had been responsible for building the Colossus computers, should build the ACE, but because of the secrecy around his wartime achievements and the pressure of post-war work, this was not possible.
Pilot ACE
Turing's colleagues at the NPL, not knowing about Colossus, thought that the engineering work to build a complete ACE was too ambitious, so the first version of the ACE that was built was the Pilot Model ACE, a smaller version of Turing's original design. Turing's assistant, Jim Wilkinson, worked on the logical design of the ACE and after Turing left for Cambridge in 1947, Wilkinson was appointed to lead the ACE group. The Pilot ACE had fewer than 1000 thermionic valves (vacuum tubes) compared to about 18,000 in the ENIAC. It used mercury delay lines for its main memory. Each of the 12 delay lines was 5 feet (1.5 m) long and propagated 32 instructions or data words of 32 bits each. The Pilot ACE ran its first program on 10 May 1950, at which time it was the fastest computer in the world; each of its delay lines had a throughput of 1 Mbit/s.
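A back-of-the-envelope calculation from the figures just quoted gives the scale and latency of this store; a minimal Python sketch, assuming only the numbers above:

num_lines, words_per_line, bits_per_word = 12, 32, 32
throughput_bps = 1_000_000  # 1 Mbit/s per delay line

total_words = num_lines * words_per_line           # 384 words
total_bits = total_words * bits_per_word           # 12,288 bits (1,536 bytes)
circulation_ms = words_per_line * bits_per_word / throughput_bps * 1000

print(total_words, total_bits // 8, circulation_ms)   # 384 1536 1.024

So the whole main store held 384 words, and a word at the far end of a line took about a millisecond to come round again.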
The production version of the Pilot ACE, the English Electric DEUCE, of which 31 were sold, was first delivered in 1955.
MOSAIC
A second implementation of the ACE design was the MOSAIC (Ministry of Supply Automatic Integrator and Computer). This was built by Allen Coombs and William Chandler of Dollis Hill who had worked with Tommy Flowers on building the ten Colossus computers. It was installed at the Radar Research and Development Establishment (RRDE) at Malvern, which later merged with the Telecommunications Research Establishment (TRE) to become the Royal Radar Establishment (RRE). It ran its first trial program in late 1952 or early 1953 and became operational in early 1955. MOSAIC contained 6,480 electronic valves and had an availability of about 75%. It occupied four rooms and was the largest of the early British computers. It was used to calculate aircraft trajectories from radar data. It continued operating until the early 1960s.
Derivatives
The principles of the ACE design were used in the Bendix Corporation's G-15 computer. The engineering designer was Harry Huskey, who had spent 1947 in the ACE section at the NPL. He also contributed to the hardware designs for the EDVAC. The first G-15 ran in 1954; as a relatively small single-user machine, it is considered by some to be the first personal computer.
Other derivatives of the ACE include the EMI Electronic Business Machine and the Packard Bell Corporation PB 250.
Footnotes
Bibliography
External links
Oral history interview with Donald W. Davies, Charles Babbage Institute, University of Minnesota. Davies describes computer projects at the U.K. National Physical Laboratory, from the 1947 design work of Alan Turing to the development of the two ACE computers. Davies discusses a much larger, second ACE, and the decision to contract with English Electric Company to build the DEUCE—possibly the first commercially produced computer in Great Britain.
Events in the history of NPL — ACE computer
1940s computers
Alan Turing
Early British computers
One-of-a-kind computers
English inventions
1940s in computing
Computer-related introductions in 1950
Serial computers | Automatic Computing Engine | [
"Technology"
] | 1,314 | [
"Serial computers",
"Computers"
] |
988,751 | https://en.wikipedia.org/wiki/Personal%20knowledge%20management | Personal knowledge management (PKM) is a process of collecting information that a person uses to gather, classify, store, search, retrieve and share knowledge in their daily activities, and the way in which these processes support work activities. It is a response to the idea that knowledge workers need to be responsible for their own growth and learning. It is a bottom-up approach to knowledge management (KM).
History and background
Although as early as 1998 Davenport wrote on the importance to worker productivity of understanding individual knowledge processes, the term personal knowledge management appears to be relatively new; its origin has been traced to a working paper.
PKM integrates personal information management (PIM), focused on individual skills, with knowledge management (KM), in addition to input from a variety of disciplines such as cognitive psychology, management and philosophy. From an organizational perspective, understanding of the field has developed in light of expanding knowledge about human cognitive capabilities and the permeability of organizational boundaries. From a metacognitive perspective, it compares various modalities within human cognition as to their competence and efficacy. It is an under-researched area. More recently, research has been conducted to help understand "the potential role of Web 2.0 technologies for harnessing and managing personal knowledge". The Great Resignation has expanded the category of knowledge workers and is predicted to increase demand for personal knowledge management in the future.
Models
One early account identified information retrieval, assessment and evaluation, organization, analysis, presentation, security, and collaboration as essential to PKM.
Wright's model involves four interrelated domains: analytical, information, social, and learning. The analytical domain involves competencies such as interpretation, envisioning, application, creation, and contextualization. The information dimension comprises the sourcing, assessment, organization, aggregation, and communication of information. The social dimension involves finding and collaborating with people, the development of both close networks and extended networks, and dialogue. The learning dimension entails expanding pattern recognition and sensemaking capabilities, reflection, development of new knowledge, improvement of skills, and extension to others. This model stresses the importance of both bonding and bridging networks.
In Nonaka and Takeuchi's SECI model of knowledge dimensions (see under knowledge management), knowledge can be tacit or explicit, with the interaction of the two resulting in new knowledge. Smedley has developed a PKM model based on Nonaka and colleagues' model in which an expert provides direction while a community of practice provides support for personal knowledge creation. Trust is central to knowledge sharing in this model. Nonaka has returned to his earlier work in an attempt to further develop his ideas about knowledge creation.
Personal knowledge management can also be viewed along two main dimensions, personal knowledge and personal management. Zhang has developed a model of PKM in relation to organizational knowledge management (OKM) that considers two axes of knowledge properties and management perspectives, either organizational or personal. These aspects of organizational and personal knowledge are interconnected through the OAPI process (organizationalize, aggregate, personalize, and individualize), whereby organizational knowledge is personalized and individualized, and personal knowledge is aggregated and operationalized as organizational knowledge.
Criticism
It is not clear whether PKM is anything more than a new wrapper around personal information management (PIM). William Jones argued that only personal information as a tangible resource can be managed, whereas personal knowledge cannot. Dave Snowden has asserted that most individuals cannot manage their knowledge in the traditional sense of "managing" and has advocated thinking in terms of sensemaking rather than PKM. Knowledge is not solely an individual product—it emerges through connections, dialog, and social interaction (see Sociology of knowledge). However, in Wright's model, PKM involves the application to problem-solving of analytical, information, social, and learning dimensions, which are interrelated, and so is inherently social.
An aim of PKM is "helping individuals to be more effective in personal, organizational and social environments", often through the use of technology such as networking software. It has been argued, however, that equating PKM with technology has limited the value and utility of the concept.
In 2012, Mohamed Chatti introduced the personal knowledge network (PKN) model to KM as an alternative perspective on PKM, based on the concepts of a personal knowledge network and knowledge ecology.
Skills
Skills associated with personal knowledge management include:
Collaboration skills. Coordination, synchronization, experimentation, cooperation and design.
Communication skills. Perception, intuition, expression, visualization and interpretation.
Creative skills. Imagination, pattern recognition, appreciation, innovation, inference. Understanding of complex adaptive systems.
Information literacy. Understanding what information is important and how to find unknown information.
Manage learning. Manage how and when the individual learns.
Networking with others. Knowing what your network of people knows. Knowing who might have additional knowledge and resources to help you.
Organizational skills. Personal librarianship. Personal categorization and taxonomies.
Reflection. Continuous improvement on how the individual operates.
Researching, canvassing, paying attention, interviewing and observational "cultural anthropology" skills.
Tools
Some organizations are introducing PKM "systems" with some or all of the following four components:
Content management: taxonomy processes and desktop search tools that enable employees to subscribe to, find, organize and publish information that resides on their desktops
Just-in-time canvassing: templates and e-mail canvassing lists that enable people to identify and connect with the appropriate experts and expertise quickly and effectively
Knowledge harvesting: software tools that automatically collect appropriate knowledge residing on subject matter experts' hard drives
Personal productivity improvement: knowledge fairs and 101 training sessions to help each employee make more effective personal use of the knowledge, learning, and technology resources available in the context of their work
PKM has also been linked to these tools:
Email, calendars, task managers
Knowledge logs (k-logs)
Social bookmarking and enterprise bookmarking
Virtual assistants
Wikis, including personal wikis and semantic wikis
Web annotations
Other useful tools include stories and narrative inquiry, decision support systems, various kinds of node–link diagram (such as argument maps, mind maps, concept maps), and similar information visualization techniques. Individuals use these tools to capture ideas, expertise, experience, opinions or thoughts, and this "voicing" will encourage cognitive diversity and promote free exchanges away from a centralized policed knowledge repository. The goal is to facilitate knowledge sharing and personal content management.
The most widely used software with PKM functions are:
Logseq, which is FLOSS
Notion (productivity software)
Obsidian (software)
Roam Research
TiddlyWiki
See also
Adaptive hypermedia
Card file
Commonplace book
Drakon-chart
Memex
Semantic desktop
User modeling
References
Knowledge management
Information systems | Personal knowledge management | [
"Technology"
] | 1,387 | [
"Information systems",
"Information technology"
] |
988,753 | https://en.wikipedia.org/wiki/Yoga%20nidra | Yoga nidra, or yogic sleep, in modern usage is a state of consciousness between waking and sleeping, typically induced by a guided meditation.
A state called yoga nidra is mentioned in the Upanishads and the Mahabharata, while a goddess named Yoganidrā appears in the Devīmāhātmya. Yoga nidra is linked to meditation in Shaiva and Buddhist tantras, while some medieval hatha yoga texts use "yoganidra" as a synonym for the deep meditative state of samadhi. These texts however offer no precedent for the modern technique of guided meditation. That derives from 19th and 20th century Western "proprioceptive relaxation" as described by practitioners such as Annie Payson Call and Edmund Jacobson.
The modern form of the technique, pioneered by Dennis Boyes in 1973, made widely known by Satyananda Saraswati in 1976, and then by Swami Rama, Richard Miller, and others, has spread worldwide. It is applied by the U.S. Army to assist soldier recovery from post-traumatic stress disorder. There is limited scientific evidence that the technique helps relieve stress.
Historical usage
Ancient times
The Hindu epic Mahabharata, completed by the 3rd century CE, mentions a state called "yoganidra", and associates it with Lord Vishnu.
The Devīmāhātmya, written around the 6th century CE, mentions a goddess whose name is Yoganidrā. The God Brahma asks Yoganidrā to wake up Vishnu to go and fight the Asuras or demons named Madhu and Kaitabha. These early mentions do not define any yoga technique or practice, but describe the God Vishnu's transcendental sleep in between the Yugas, the cycles of the universe, and the manifestation of the goddess as sleep itself.
Medieval practices
Yoganidra is first linked to meditation in Shaiva and Buddhist tantras. In the Shaiva text Ciñcinīmatasārasamuccaya (7.164), yoganidra is called "peace beyond words"; in the Mahāmāyātantra (2.19ab) it is named as a state in which perfected Buddhas may access secret knowledge. In the 11th or 12th century, yoganidra is first used in Hatha yoga and Raja yoga texts as a synonym for samadhi, a deep state of meditative consciousness where the yogi no longer thinks, moves, or breathes. The Amanaska (2.64) asserts that "Just as someone who has suddenly arisen from sleep becomes aware of sense objects, so the yogi wakes up from that [world of sense objects] at the end of his yogic sleep."
By the 14th century, the Yogatārāvalī (24–26) gives a more detailed description, stating that yoganidra "removes all thought of the world of multiplicity" in the advanced yogi who has completely uprooted his "network of Karma". He then enters the "fourth state", namely turiya or samadhi, beyond the usual states of waking, dreaming, and deep sleep, "that special thoughtless sleep, which consists of [just] consciousness." The 15th century Haṭha Yoga Pradīpikā goes further, stating (4.49) that "One should practice Khecarī Mudrā until one is asleep in yoga. For one who has achieved Yoganidrā, death never occurs." Khecarī Mudrā is the Hatha yoga practice of folding the tongue back so that it reaches inside the nasal cavity, where it can enable the yogi to reach samadhi. In the 17th century Haṭha Ratnāvalī (3.70), Yoganidrasana is first described. It is an asana or yoga pose where the legs are wrapped around the back of the neck. The text says that the yogi should sleep in this position, which "bestows bliss". These texts view yoganidra as a state, not a practice in itself.
Modern usage
Western "relaxationism"
The yoga scholar Mark Singleton states that while relaxation is a primary feature of modern Western yoga, its relaxation techniques "have no precedent in the pre-modern yoga tradition", but derive mostly from 19th and 20th century Western "proprioceptive relaxation". This prescriptive approach was described by authors such as the "relaxationist" Annie Payson Call in her 1891 book Power through Repose, and the Chicago psychiatrist Edmund Jacobson, the creator of progressive muscle relaxation and biofeedback, in his 1934 book You Must Relax!.
Dennis Boyes
In 1973, French yoga advocate Dennis Boyes published his book Le Yoga du sommeil éveillé; méthode de relaxation, yoga nidra ("The Yoga of Waking Sleep: method of relaxation, yoga nidra"). This is the first known usage of "yoga nidra" in a modern sense. In the book, Boyes makes use of relaxation techniques including the direction of attention to each part of the body.
The French journal Revue 3e Millénaire, reviewing Boyes's approach in 1984, wrote that Boyes proposes relaxation in order to "reach the state of emptiness". The person thus imperceptibly moves to a stage where relaxation becomes meditation and can remain there once the mind's obsession with external objects or thoughts is removed.
Satyananda
In modern times, Satyananda Saraswati claimed to have experienced yoga nidra when he was living with his guru Sivananda Saraswati in Rishikesh. In 1976, he constructed a system of relaxation through guided meditation, which he went on to popularize widely. He explained yoga nidra as a state of mind between wakefulness and sleep that opened deep phases of the mind, suggesting a connection with the ancient tantric practice called nyasa, whereby Sanskrit mantras are mentally placed within specific body parts while meditating on each part (of the bodymind). The form of practice taught by Satyananda includes eight stages (internalisation, resolve (sankalpa), rotation of consciousness, breath awareness, manifestation of opposites, creative visualization, repeated resolve (sankalpa), and externalisation). Satyananda used this technique, along with suggestion, on the child who was to become his successor, Niranjanananda Saraswati, from age four. He claimed to have been taught several languages by this method.
Satyananda's multi-stage yoga nidra technique is not found in ancient or medieval texts. However, the yoga scholars Jason Birch and Jacqueline Hargreaves note that there are analogues for several of his yoga nidra activities.
Yoga nidra in this modern sense is a state in which the body is completely relaxed, and the practitioner becomes systematically and increasingly aware of the inner world by following a set of verbal instructions. This state of consciousness is different from meditation, in which concentration on a single focus is required. In yoga nidra the practitioner remains in a state of light withdrawal of the 5 senses (pratyahara) with four senses internalised, that is, withdrawn, and only hearing still connects to any instructions given.
Swami Rama
Swami Rama taught a form of yoga nidra (in a broad sense), which involves an exercise called shavayatra, "inner pilgrimage [through the body]", which directs the attention around "61 sacred points of the body" during relaxation in shavasana, corpse pose. A second exercise, shithali karana, is said to induce "a very deep state of relaxation", and is described as a preliminary for yoga nidra (in a narrow sense). It, too, is performed in Shavasana, involving exhalations imagined as directed from the crown of the head to different points around the body, each repeated 5 or 10 times. The yoga nidra exercise involves directed breathing on the left side, then the right side, then in Shavasana. In Shavasana, the attention is directed to the eyebrow, throat, and heart centers or chakras.
Richard Miller
The Western pioneer of yoga as therapy, Richard Miller, has developed the use of yoga nidra for rehabilitating soldiers in pain, using the Integrative Restoration (iRest) methodology. Miller worked with Walter Reed Army Medical Center and the United States Department of Defense studying the efficacy of the approach. According to Yoga Journal, "Miller is responsible for bringing the practice to a remarkable variety of nontraditional settings," which includes "military bases and in veterans' clinics, homeless shelters, Montessori schools, Head Start programs, hospitals, hospices, chemical dependency centers, and jails." The iRest protocol was used with soldiers returning from Iraq and Afghanistan suffering from post-traumatic stress disorder (PTSD). The Surgeon General of the United States Army endorsed Yoga Nidra as a complementary alternative medicine (CAM) for chronic pain in 2010.
Post-lineage yoga nidra
In 2021, the yoga teachers Uma Dinsmore-Tuli and Nirlipta Tuli jointly published a "declaration of independence for Yoga Nidrā Shakti". In it, they stated that yoga nidra had become commodified and promoted by commercial organisations for profit; that abuse had taken place within those organisations; and that the organisations had propagated origin stories for yoga nidra "that privilege their own founders" and exclude or neglect older roots of the practice. They state their shock at abuses by Satyananda, Swami Rama, Amrit Desai, and Richard Miller. They invite practitioners and teachers to learn about the history of yoga nidra outside organisational boundaries and to work without "trademarked versions" of the practice.
Reception
The Mindful Yoga teacher Anne Cushman states that "This body-sensing journey [that I teach in Mindful Yoga] ... is one variation of the ancient practice of Yoga nidra ... and of the body-scan technique commonly used in the Buddhist Vipassana tradition."
The cultural historian Alistair Shearer writes that the name yoga nidra is an umbrella term for different systems of "progressive relaxation or 'guided meditation'." He comments that Satyananda promoted his version of yoga nidra, claiming it was ancient, when its connections to ancient texts "seem vague at best". Shearer writes that other teachers have defined yoga nidra as "the state of conscious sleep" in which inner awareness is maintained, without reference to Satyananda's method of progressive relaxation by directing attention to different parts of the body. Shearer attributes this "inner lucidity" to the buddhi (intellect, literally "wakefulness") of Sankhya philosophy. He compares buddhi to the "intellect" discussed by Saint Augustine and the Apostolic Fathers at about the same time as Patanjali's Yoga Sutra.
Scientific evidence
Scientific evidence for the action of yoga nidra is patchy. Parker (2019) conducted a single-observation study of a famous yogi; in it, Swami Rama demonstrated conscious entry into NREM delta wave sleep through yoga nidra, while a disciple produced delta and theta waves even with eyes open and talking. Datta and colleagues (2017) developed a therapeutic model that appeared to be useful for insomnia patients. Datta and colleagues (2022) report a beneficial effect of yoga nidra on the sleep of forty-five male athletes, noting that sportsmen often have sleep problems. Their small randomised controlled trial found improvements in subjective sleep latency and sleep efficiency with four weeks of yoga nidra compared to progressive muscular relaxation (used as the control).
Primary research, sometimes informal, on a small scale, and without strictly controlled trials, has been conducted on various aspects of yoga nidra. These have made tentative findings of benefits to mind and body such as increased dopamine release in the brain, improved heart rate variability, reduced blood pressure, reduced anxiety, and improved self-esteem.
See also
Dream yoga
Notes
References
External links
Systematic review articles on Yoga Nidra indexed by Google Scholar
Sleep
Yoga as therapy | Yoga nidra | [
"Biology"
] | 2,470 | [
"Behavior",
"Sleep"
] |
988,765 | https://en.wikipedia.org/wiki/Sator%20Square | The Sator Square (or Rotas-Sator Square or Templar Magic Square) is a two-dimensional acrostic class of word square containing a five-word Latin palindrome. The earliest squares were found at Roman-era sites, all in ROTAS-form (where the top line is "ROTAS", not "SATOR"), with the earliest discovery at Pompeii (and also likely pre-AD 62). The earliest square with Christian-associated imagery dates from the sixth century. By the Middle Ages, Sator squares had been found across Europe, Asia Minor, and North Africa. In 2022, the Encyclopedia Britannica called it "the most familiar lettered square in the Western world".
A significant volume of academic research has been published on the square, but after more than a century, there is no consensus on its origin and meaning. The discovery of the "Paternoster theory" in 1926 led to a brief consensus among academics that the square was created by early Christians, but the subsequent discoveries at Pompeii led many academics to believe that the square was more likely created as a Roman word puzzle (as per the Roma-Amor puzzle), which was later adopted by Christians. This origin theory, however, fails to explain how a Roman word puzzle then became such a powerful religious and magical medieval symbol. It has instead been argued that the square was created in its ROTAS-form as a Jewish symbol, embedded with cryptic religious symbolism, which was later adopted in its SATOR-form by Christians. There are many other less-supported academic origin theories, such as a Pythagorean or Stoic puzzle, a Gnostic or Orphic or Italian pagan amulet, a cryptic Mithraic or Semitic numerology charm, or that it was simply a device for working out wind directions.
The square has long associations with magical powers throughout its history (and even up to the 19th century in North and South America), including a perceived ability to extinguish fires, particularly in Germany. The square appears in early and late medieval medical textbooks such as the Trotula, and was employed as a medieval cure for many ailments, particularly for dog bites and rabies, as well as for insanity, and relief during childbirth.
It has featured in a diverse range of contemporary artworks including fiction books, paintings, musical scores, and films, and most notably in Christopher Nolan's 2020 film Tenet. In 2020, The Daily Telegraph called it "one of the closest things the classical world had to a meme".
Description and naming
The Sator square is arranged as a 5 × 5 grid consisting of five 5-letter words, thus totaling 25 characters. It uses 8 different Latin letters: 5 consonants (S, T, R, P, N) and 3 vowels (A, E, O). In some versions, the vertical and horizontal lines of the grid are also drawn, but in many cases, there are no such lines. The square is described as a two-dimensional palindrome, or word square, which is a particular class of double acrostic.
The square comes in two forms, ROTAS (left, below) and SATOR (right, below):

ROTAS    SATOR
OPERA    AREPO
TENET    TENET
AREPO    OPERA
SATOR    ROTAS
The earliest Roman-era versions of the square have the word ROTAS as the top line (called a ROTAS-form square, left above), but the inverted version with SATOR in the top line became more dominant from early medieval times (called a SATOR-form square, right above). Some academics call it a Rotas-Sator Square, and some of them refer to the object as a rebus, or a magic square. Since medieval times, it has also been known as a Templar Magic Square.
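These symmetry claims can be checked mechanically. A short illustrative Python snippet (ours) confirms that the SATOR-form grid reads identically along rows and columns, that its 25 letters form a palindrome, and that it is unchanged by a 180-degree rotation:

square = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

columns = ["".join(row[i] for row in square) for i in range(5)]
flat = "".join(square)

assert columns == square                               # rows read as columns
assert flat == flat[::-1]                              # 25-letter palindrome
assert square == [w[::-1] for w in reversed(square)]   # 180-degree rotation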
Discovery and dating
The existence of the square was long recognized from early medieval times, and various examples have been found in Europe, Asia Minor, North Africa (in mainly Coptic settlements), and the Americas. Medieval examples of the square in SATOR-form abound, including the earliest French example in a Carolingian Bible from AD 822 at the monastery of Saint-Germain-des-Prés. Many medieval European churches and castles have Sator square inscriptions.
The first recognized serious academic study of the square was an 1881 historical survey titled "Sator-Arepo-Formel", and a considerable body of academic research has subsequently been published on the meaning of the square.
Up until the 1930s, a Coptic papyrus with the square in the ROTAS-form dating from the fourth or fifth century AD was considered the earliest version. In 1889, British ancient historian Francis Haverfield identified the 1868 discovery of a Sator square found in ROTAS-form scratched on a plaster wall in the Roman settlement of Corinium at Cirencester to be of Roman origin; however, his assertion was discounted at the time by most academics who considered the square to be an "early medieval charm".
Haverfield was ultimately proved right by the 1931-32 excavations at Dura-Europos in Syria that uncovered three separate Sator square inscriptions, all in ROTAS-form, on the interior walls of a Roman military office (and a fourth a year later) that were dated from circa AD 200.
Five years later in 1936, the Italian archaeologist Della Corte discovered a Sator square, also in ROTAS-form, inscribed on a column in the Palestra Grande (the gymnasium) near the Amphitheatre of Pompeii (CIL IV 8623). This discovery led Della Corte to reexamine a fragment of a square, again also in ROTAS-form, that he had found in 1925 at the house of Publius Paquius Proculus, also at Pompeii (CIL IV 8123). The square at the house of Publius Paquius Proculus was dated between AD 50 and AD 79 (based on the decorative style of the interior), and the palestra square find was dated pre-AD 62 (and therefore before the earthquake of AD 62), making it the oldest known Sator square discovery to date.
Translation
Individual words
The words are in Latin, and the following translations are known by scholars:
sator (nominative noun; from serere, "to sow") sower, planter, founder, progenitor (usually divine); originator; literally 'seeder';
arepo, an unknown word, perhaps a proper name, either invented to complete the palindrome or of a non-Latin origin (see § Arepo interpretations);
tenet (verb; from tenere, "to hold") he/she/it holds, keeps, comprehends, possesses, masters, preserves, sustains;
opera (ablative singular noun) service, pains, labor; care, effort, attention;
rotas (accusative plural of rota) wheels.
Sentence construction
The most direct sentence translation is: "The sower (or, farmer) Arepo holds the wheels with care (or, with care the wheels)". Similar translations include: "The farmer Arepo works his wheels", or "Arepo the sower (sator) guides (tenet) the wheel (rotas) with skill (opera)".
Some academics, such as French historian Jules Quicherat, believe the square should be read in a boustrophedon style (i.e. in alternating directions). The boustrophedon style, which in Greek means "as the ox plows", emphasizes the agricultural aspect of the text of the square. Such a reading when applied to the SATOR-form square, and repeating the central word TENET, gives SATOR OPERA TENET – TENET OPERA SATOR, which has been very loosely interpreted as: "as ye sow, so shall ye reap", while some believe the square should be read as just three words – SATOR OPERA TENET, which they loosely translate as: "The Creator (the author of all things) maintains his works"; both of which could imply Graeco-Roman Stoic and/or Pythagorean origins.
British academic Duncan Fishwick observes that the translation from the boustrophedon approach fails when applied to a ROTAS-form square; however, Belgian scholar Paul Grosjean reversed the boustrophedon rule on the ROTAS-form (i.e. starting on the right-hand side instead of the left) to get SAT ORARE POTEN, which loosely translates into the Jewish call to prayer, "are you able to pray enough?".
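The mechanics of such readings rest on the fact that, in the SATOR-form grid, each row read backwards is itself another row (SATOR/ROTAS and AREPO/OPERA are mutual reversals), so an alternating-direction pass re-spells the square's own words. A minimal illustrative Python sketch:

square = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

# Read "as the ox plows": alternate left-to-right and right-to-left rows.
reading = [row if i % 2 == 0 else row[::-1] for i, row in enumerate(square)]
print(" ".join(reading))   # SATOR OPERA TENET AREPO ROTAS

The first three words of this pass give the SATOR OPERA TENET sequence on which the readings discussed above are built.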
Arepo interpretations
The word AREPO is a hapax legomenon, appearing nowhere else in attested Latin literature. Some academics believe it is likely a proper name, or possibly a theophoric name, that was adapted from a non-Latin word or was invented specifically for the Sator square. French historian Jerome Carcopino believed that it came from the Gaulish word for a 'plough'; however, this has been discounted by other academics. American ancient legal historian David Daube believed that AREPO represented a Hebrew or Aramaic rendition of the ancient Greek for alpha (α) and omega (ω), bespeaking the "Alpha-Omega" concept (cf. Isaiah 44.6, and Revelation 1:8) from early Judeo-Christianity. J. Gwyn Griffiths contended that the term AREPO came, via Alexandria, from the attested Egyptian name "Hr-Hp" (ḥr ḥp), which he took to mean "the face of Apis". In 1983, Serbian-American scholar Miroslav Marcovich proposed the term AREPO as a Latinized abbreviation of Harpocrates (or "Horus-the-child"), god of the rising sun, which Marcovich suggests corresponds to SATOR AREPO. This would translate the square as: "The sower Horus/Harpocrates keeps in check toils and tortures".
Duncan Fishwick, among other academics, believed that AREPO was simply a residual word that was required to complete what is a complex and sophisticated palindrome (which Fishwick believed was embedded with hidden Jewish symbolism, per the "Jewish Symbol" origin theory below), and to expect more from the word was unreasonable from its likely Jewish creators.
Further anagrams
Attempts have been made to discover "hidden meanings" by the anagrammatic method of rearranging the letters of which the square is composed.
In 1883, German historian Gustav Fritsch rearranged the letters to discover an invocation to Satan:
SATAN, ORO TE, PRO ARTE A TE SPERO
SATAN, TER ORO TE, OPERA PRAESTO
SATAN, TER ORO TE, REPARATO OPES
French historian Guillaume de Jerphanion catalogued examples that were known formulas for an exorcism such as:
RETRO SATANA, TOTO OPERE ASPER, and the prayers
ORO TE PATER, ORO TE PATER, SANAS
O PATER, ORES PRO AETATE NOSTRA
ORA, OPERARE, OSTENTA TE PASTOR
In 1887, Polish ethnographer Oskar Kolberg amended the strict anagrammatic approach by using abbreviations and thus deduced from the 25 letters of the Sator Square the 36 letters of the monastic rule: SAT ORARE POTEN (TER) ET OPERA(RE) R(ATI)O T(U)A S(IT), which he considered an ancient rule of the Benedictines; French historian Gaston Letonnelier made a similar approach in 1952 to get the Christian prayer: SAT ORARE POTEN(TIA) ET OPER(A) A ROTA S(ERVANT), which translates as: "Prayer is our strength and will save us from the wheel (of fate?)".
In 1935, German art historian believed he discovered the relief the Rose of Sharon gave to Saint Peter for the sin of his denial of Christ, with the anagram PETRO ET REO PATET ROSA SARONA, which translates as "For Peter even guilty the rose of Sharon is open"; academics refuted his interpretation.
In 2003, American historian Rose Mary Sheldon listed some of the many diverse sentences that can be produced from anagrams of the square including her favorite: APATOR NERO EST, which would translate as saying that the Roman emperor Nero was the result of a virgin birth.
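Proposals of this kind are easy to test mechanically: a strict anagram must consume exactly the square's 25 letters. A minimal Python helper (illustrative; the function name is ours):

from collections import Counter

SQUARE_LETTERS = Counter("SATORAREPOTENETOPERAROTAS")

def is_square_anagram(phrase):
    # Keep only letters, ignore case and punctuation.
    letters = [c for c in phrase.upper() if c.isalpha()]
    return Counter(letters) == SQUARE_LETTERS

assert is_square_anagram("SATAN, ORO TE, PRO ARTE A TE SPERO")
assert is_square_anagram("RETRO SATANA, TOTO OPERE ASPER")
assert not is_square_anagram("SATOR OPERA TENET")   # uses only 15 of the 25 letters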
Origin and meaning
The origin and meaning of the square have eluded a definitive academic consensus even after more than a century of study. In 1938, British classical historian Donald Atkinson said the square occupied the "mysterious region where religion, superstition, and magic meet, where words, numbers, and letters are believed, if properly combined, to exert power over the processes of nature ...". Even by 2003, American academic Rose Mary Sheldon called it "one of the oldest unsolved word puzzles in the world". In 2018, American ancient classical historian Megan O'Donald still noted that "most interpretations of the ROTAS square have failed to gain consensus due to failings", in particular in reconciling the archeological evidence with the square's later adoption as a religious and magical object.
Christian symbol
Adoption by Christians
Irrespective of the theory of its origin, the evidence that the Sator square, particularly in its SATOR-form, became adopted into Christian imagery is not disputed by academics. Academics note the repeated association of Christ with the "sower" (or SATOR), and the words of the Sator square have been discovered in Christian settings even in very early medieval times, including:
Jesuit historian Jean Daniélou claimed that the Bishop Irenaeus of Lyons (c. AD 200) knew of the square and had written of "Him who joined the beginning with the end, and is the Lord of both, and has shown forth the plough at the end". Some academics link Irenaeus with creating the association of the five words in the square to the five wounds of Christ.
The Berlin State Museum houses a sixth-century bronze amulet from Asia Minor that has two fish turned toward one another on one side, and a Sator square in Greek characters in a checkerboard pattern on the other side. Written above the square is the word "ICHTHUS", which directly translates as a term for Christ; it is the earliest known Christian annotated Sator Square.
An illustration in an early Byzantine bible gives the baptismal names of the three Magi as being: ATOR, SATOR, and PERATORAS.
In Cappadocia, in the time of Constantine VII Porphyrogenitus (913–959), the shepherds of the Nativity of Jesus are named: SATOR, AREPON, and TENETON.
The Sator square appears in diverse Christian communities, such as in Abyssinia where in the Ethiopian Book of the Dead, the individual nails in Christ's cross were called: Sador, Alador, Danet, Adera, Rodas. These are likely derived from even earlier Coptic Christian works that also ascribe the wounds of Christ and the nails of the cross with names that resemble the five words from the square.
While there is little doubt among academics that Christians adopted the square, it was not clear that they had originated the symbol.
Paternoster theory
During 1924 to 1926, three people separately discovered, or rediscovered, that the square could be used to write the name of the Lord's Prayer, the "Paternoster", twice and intersecting in a cross-form. The remaining residual letters (two As and two Os) could be placed in the four quadrants of the cross and would represent the Alpha and Omega that are established in Christian symbolism. The positioning of the As and Os was further supported by the fact that the position of the Ts in the Sator square formed the points of a cross – there are obscure references in the Epistle of Barnabas to T being a symbol of the cross – and that the As and Os also lay in the four quadrants of this cross. At the time of this discovery, the earliest known Sator square was from the fourth century, further supporting the dating of the Christian symbolism inherent in the Paternoster theory. Academics considered the Christian origins of the square to be largely resolved.
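The letter accounting behind the theory is straightforward to verify: two PATERNOSTERs sharing their central N consume 21 of the square's 25 letters, leaving exactly two As and two Os. A short illustrative Python check (ours):

from collections import Counter

square = Counter("SATORAREPOTENETOPERAROTAS")

cross = Counter("PATERNOSTER" * 2)
cross["N"] -= 1                     # the two prayers share the central N

assert not (cross - square)                # every cross letter is in the square
assert square - cross == Counter("AAOO")   # the Alpha-Omega residue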
With the subsequent discovery of Sator squares at Pompeii, dating pre-79 AD, the Paternoster theory began to lose support, even among notable supporters such as French historian Guillaume de Jerphanion. Jerphanion noted: that (1) it was improbable that many Christians were present at Pompeii, that (2) first-century Christians would have written the square in Greek and not Latin, that (3) the Christian concepts of Alpha and Omega only appear after the first century, that (4) the symbol of the cross only appears from about AD 130–131, and that (5) cryptic Christian symbols only appeared during the persecutions of the third century.
Jérôme Carcopino claimed the Pompeii squares were added at a later date by looters. The lack of any disturbance to the volcanic deposits at the palestra, however, meant that this was unlikely, and the Paternoster theory as a proof of Christian origination lost much of its academic support.
Regardless of its Christian origins, many academics considered it mathematically improbable that the Paternoster arrangement could have arisen by chance. Several scholars, among them a German historian and the British historian Hugh Last, examined this probability, but without reaching a conclusion. A 1987 computer analysis by William Baines derived a number of "pseudo-Christian formulae" from the square, but Baines concluded it proved nothing.
Roman word puzzle
There is considerable contemporary academic support for the theory that the square originated as a Roman-era word puzzle. Italian historian Arsenio Frugoni found it written in the margin of the Carme delle scolte modenesi beside the Roma-Amor palindrome, and Italian classicist Margherita Guarducci noted it was similar to the ROMA OLIM MILO AMOR two-dimensional acrostic word puzzle that was also found at Pompeii (see Wiktionary for details on the Pompeiian graffito), and at Ostia and Bolonia. Similarly, another ROTAS-form square scratched into a Roman-era wall in the basement of the Basilica di Santa Maria Maggiore was found alongside the Roma-Amor and the Roma-Summus-Amor palindromes. Duncan Fishwick noted the "composition of palindromes was, in fact, a pastime of Roman landed gentry". American classical epigraphist Rebecca Benefiel noted that by 2012, Pompeii had yielded more than 13,000 separate inscriptions and that the house of Publius Paquius Proculus (where a square was found) had more than 70 pieces of graffiti alone.
A 1969 computer study by Charles Douglas Gunn started with a Roma-Amor square and found 2,264 better versions, of which he considered the Sator square to be the best. The square's origin as a word puzzle would solve the problem of AREPO (a word that appears nowhere else in classical writing), which could be explained as a necessary component to complete the palindrome.
Fishwick still considered this interpretation unproven, and clarified that the apparent discovery of the Roma-Amor palindrome written beside the 1954 discovery of a square on a tile at Aquincum was incorrectly translated (if anything it supported the square as a charm). Fishwick, and others, consider the key failing of the Roman puzzle theory of origin to be the lack of any explanation as to why the square would later become so strongly associated with Christianity, and with being a medieval charm. Some argue that this can be bridged if considered as a Pythagorean-Stoic puzzle creation.
In 2018, Megan O'Donald argued that the square is less a pure word puzzle than a piece of Roman Latin graffito that should be read figuratively as a wheel (i.e. the ROTAS), and that its textual-visual interplay had parallels with other forms of graffito found in Pompeii, some of which later became adopted as charms.
Jewish symbol
Some prominent academics, including British-Canadian ancient Roman scholar Duncan Fishwick, American ancient legal historian David Daube, and British ancient historian Mary Beard, consider the square as being likely of Jewish origin.
Fishwick notes that the failings of the Paternoster theory (above) are resolved when looked at from a Jewish perspective. Large numbers of Latin-speaking Jews had been settled in Pompeii, and their affinity for cryptic and mystical word symbols was well known. The Alpha and Omega concept appears much earlier in Judaism (Ex. 3.14; Is. 41.4, and 44.6), and the letters "aleph" and "tau" are used in the Talmud as symbols of totality. The Ts of TENET may be explained not as Christian crosses, but as a Latin form of the Jewish "tau" salvation symbol (from Ezekiel), and its archaic form (+ or X) appears regularly on ossuaries of both Hellenistic and early Roman times. Fishwick highlights the central position of the letter N, as Jews attached significance to the utterance of the "Name" (or nomen).
In addition, Fishwick believes a Jewish origin provides a satisfactory explanation for the Paternoster cross (or X) as the configuration is an archaic Jewish "tau" (+ or X). Fishwick draws attention to some liturgical prayers in Judaism, where several prayers refer to "Our Father". None of these liturgical prayers, however, can be dated to before Jesus. Fishwick concludes that the translations of the words ROTAS OPERA TENET AREPO SATOR are irrelevant, except to the extent that they make some sense and thereby hide a Jewish cryptic charm, and to require them to mean more is "to expect the impossible". The motivation for the creation of the square might have been the Jewish pogroms of AD 19 or AD 49; however, it fell into disuse only to be revived later by Christians facing their own persecution, who appreciated its hidden Paternoster and Alpha and Omega symbolism, but who focused on the SATOR-form (which gave an emphasis on the "sower", which was associated with Christ).
Research in 2006 by French classical scholar Nicolas Vinel drew on recent discoveries on the mathematics of ancient magic squares to propose that the square was a "Jewish cryptogram using Pythagorean arithmetic". Vinel decoded several Jewish concepts in the square, including the reason for AREPO, and was able to explain the word SAUTRAN that appears beside the square that was discovered on the palestra column in Pompeii. Vinel addressed a criticism of the Jewish origin theory – why would the Jews have then abandoned the symbol? – by noting that they had similarly abandoned Greek texts (e.g. the Septuagint) in favor of Hebrew versions.
Other theories
The amount of academic research published on the Rotas-Sator square is regarded as being considerable (and even described by one source as "immense"); American academic Rose Mary Sheldon attempted to catalog and review the most prominent works in a 2003 paper published in Cryptologia. Among the more diverse but less supported theories Sheldon recorded were:
Several German academics have written on the links of the square to Pythagoreanism and Stoicism, including the philologist Schneider, the historian Hommel, and Heinz Hoffman, among others. Schneider believed the square was an important link between Etruscan religion and Stoic academic philosophy. Hommel believed that in the Stoic tradition, the Ephesian word AREPO would be discarded, and the square would be read in the boustrophedon style as SATOR OPERA TENET, TENET OPERA SATOR, translating as "The Creator preserves his works". The German scholar writing the Sator square's entry in The Encyclopedia of Christianity found this theory persuasive, but Miroslav Marcovich refuted the translation.
Several academics link the square to Gnostic origins, including Jean Doignon, Gustav Maresch, and Adolfo Omodeo. The Egyptologist J. Gwyn Griffiths explains AREPO as a personal name derived from the Egyptian name "Hr-Hp", and sources the square to an Alexandrine origin where a gnostic tradition employed acrostics.
Some academics link the square to Orphic cults, including Serbian historian Milan Budimir who linked the Greek form of AREPO to the name Orpheus.
Italian academic Adolfo Omodeo linked the square to Mithraic origins as the Roman-era discoveries were in military locations with whom it was popular, while academic historian Walter O. Moeller attempted to derive a Mithraic relationship using perceived mathematical patterns in the square, but his arguments were not considered convincing by other academics.
Norwegian philologist Samson Eitrem took the last half of the square starting at N to get: "net opera rotans", which translates as "She spins her works", interpreting it to be a feminine being (i.e. Hecate), a demon, or even the square itself rotating on its TENET spokes, thus giving a peasant Italian pagan origin with the square as a wind indicator.
Some academics, including a Swiss archeologist, have proposed that it is a number square, which would also imply a Semitic origin. A significant issue is that the square is in Latin, and Romans did not have the ciphered number system of the Greeks or the Semites. However, if the letters are transliterated to Greek, and then assigned ciphered numbers, the word TENET can be rendered as 666, the number of the beast. Walter O. Moeller analyzed the resultant numerical combinations to assert that the square was made by Mithraic numerologists.
In 1925, Zatzman interpreted the square as a Hebraic or Aramaic apotropaic formula against the devil, and translated the square to read: "Satan Adama Tabat Amada Natas".
In 1958, French historian Paul-Louis Couchoud proposed a novel interpretation as the square being a device for working out wind directions.
Magical and medical associations
In 2003, Rose Mary Sheldon noted: "Long after the fall of Rome, and long after the general public had forgotten about classical word games, the square survived among people who might not even read Latin. They continued to use it as a charm against illness, evil and bad luck. By the end of the Middle Ages, the "prophylactic magic" of the square was firmly established in the superstition of Italy, Serbia, Germany, and Iceland, and eventually even crossed to North America". The square appears in versions of several popular magical manuscripts from the early and late Middle Ages, such as the Tabula Smaragdina and the Clavicula Salomonis.
In Germany in the Middle Ages, the square was inscribed on disks that were then thrown into fires to extinguish them. An edict in 1743 by Duke Ernest Auguste of Saxe-Weimar-Eisenach required all settlements to make Sator square disks to combat fires. By the fifteenth century the square was being used as a touchstone against fire at the Château de Chinon and elsewhere in France.
The square appears as a remedy during labour in the twelfth-century Latin medical text, the Trotula, and was widely cited as a cure for dog bites and rabies in medieval Europe; in both cases, the remedy/cure is administered by eating bread inscribed with the words of the square. By the sixteenth century, the use of the square to cure insanity and fever was being documented in books such as De Varia Quercus Historia (1555) by Jean du Choul, and De Rerum Varietate (1557) by Gerolamo Cardano. Jean du Choul describes a case where a person from Lyon recovered from insanity after eating three crusts of bread inscribed with the square. After the meal, the person then recited five paternosters for the five wounds of Christ, linking to the Christian imagery believed encoded into the square.
Scholars have found medieval Sator-based charms, remedies, and cures for a diverse range of applications, from childbirth, to toothaches, to love potions, to ways of warding off evil spells, and even to determine whether someone was a witch. Richard Cavendish notes a medieval manuscript in the Bodleian says: "Write these [five sator] words on parchment with the blood of a Culver [pigeon] and bear it in thy left hand and ask what thou wilt and thou shalt have it. fiat." Other examples include Bosnia, where the square was used as a remedy for aquaphobia, and Iceland, where it was etched into the fingernails to cure jaundice.
There are examples from the nineteenth century in South America, where the Sator square was used as a cure for dog bites and snake-bites in Brazil, and in enclaves of German settlers (or mountain whites) in the Allegheny Mountains who used the square to prevent fire, stop fits, and prevent miscarriages. The Sator square features in eighteenth-century books on Pow-wow folk medicine of the Pennsylvania Dutch, such as The Long Lost Friend.
Notable examples
Roman
The oldest Sator square was found in November 1936, in ROTAS-form, etched into column number LXI at the Palestra Grande near the amphitheatre of Pompeii (CIL IV 8623). Graffiti associated with the particular columns pre-dates the AD 62 Pompeii earthquake, making it the oldest known square. It also has additional graffiti just below it, with the words SAUTRAN and VALE (CIL IV 8622a-b).
Another Sator square was also found in October 1925, in ROTAS-form, etched onto the wall in a bathroom of the house of Publius Paquius Proculus (Reg I, Ins 7, 1), also at Pompeii (CIL IV 8123). The style of the house, which is associated with Nero's reign, dated the square to between AD 50 and AD 79 (the destruction of the city).
A Sator square was found in 1954, in ROTAS-form, etched onto a roof tile of the second-century Roman Imperial governor's house for Pannonia Inferior at Aquincum, near Budapest, Hungary. There has been debate over whether a second partial inscription found beside the square is part of the Roma-Amor palindrome (thus affirming the Roman puzzle origin theory), but it seems unlikely.
A Sator square was found in 1978, in ROTAS-form, etched on a fragment of Roman pottery at a Roman site at Manchester that was dated circa. AD 185.
Four Sator squares were found in 1931–32, all in ROTAS-form, etched on the walls of military buildings, at Dura-Europos in Syria, dated circa AD 200.
A Sator square was found in 1868, in ROTAS-form, scratched onto a plaster wall in the Roman Britain settlement of Corinium Dobunnorum at Cirencester.
A Sator square was found in 1971, in ROTAS-form, etched onto an unfired brick at the Roman city of Conímbriga in Portugal that was dated from the second century.
A Sator square was found in 1966–71, in ROTAS-form, scratched into a Roman-era wall during excavations of the Basilica di Santa Maria Maggiore in Rome (along with the Roma-Amor, and the Rome Summus Amor palindromes).
Early medieval
The earliest Sator square from post-Roman times was a ROTAS-form square on a Coptic papyrus, identified in 1899 by the German historians Adolph Erman and Fritz Krebs in the Egyptian papyrus collections of the Berlin State Museums (then the Königlichen Museen); it has no other explicit Christian imagery.
The earliest Sator square with explicit additional Christian imagery is a sixth-century bronze amulet from Asia Minor that has two fish turned toward one another on one side, and a Sator square in Greek characters in a checkerboard pattern on the other side. Written above the square is the word "ICHTHUS", which directly translates as a term for Christ. It is also in the Berlin State Museums.
One of the earliest examples of a Sator square in a Christian church is the SATOR-form marble square on the facade of the circa AD 752 Benedictine Abbey of St Peter ad Oratorium, near Capestrano, in Italy.
The earliest example from France is a SATOR-form square found in a Carolingian Bible from AD 822 at the monastery of Saint-Germain-des-Prés. There are ninth- to tenth-century examples in Codex 384 from Monte Cassino, and a square was found written into the margin of a work titled Versus de cavenda Venere et vino found, which is part of Codex 1.4 of the Capitolare di Modena.
One of the earliest examples of the square being applied to medical beliefs is from the twelfth-century Latin medical textbooks, the Trotula, where the translated text advises: "[98] Or let these names be written on cheese and butter: + sa. e. op. ab. z. po. c. zy. e pe. pa. pu c. ac. sator arepo tenet os pera rotas and let them be given to eat". In a similar vein, there is a thirteenth-century parchment from Aurillac that offers a Sator square chant for women in childbirth.
Later medieval
Twelfth-century French examples are found on the wall of the Eglise Saint Laurent, and in the keep of the Château de Loches.
A Sator square in SATOR-form was found on a block set into the doorway facade of a fortified wall in the largely abandoned medieval fortress town of Oppède-le-Vieux, in France's Luberon; the old town itself dates from the twelfth or thirteenth-century and was abandoned by the seventeenth-century.
Many medieval Italian towns and churches have squares. The twelfth-century church of San Giovanni Decolatto in Pieve Terzagni in Cremona has fragments of a floor mosaic that included a square. Valvisciolo Abbey has letters forming five concentric rings, each one divided into five sectors. One appears on the exterior wall of the Duomo in Siena. Inside the church of Acquaviva Collecroce is a stone with the square in a ROTAS-form. Others include the church of the Pieve of San Giovanni, the Collegiate church of Saint Ursus, the Cathedral of Ascoli Satriano, and the Church of San Lorenzo in Paggese in Marche.
The square is also found in diverse locations all over later medieval France, including fifteenth-century examples at the Château de Chinon, as well as in the courthouse in Valbonnais.
There is a Sator square in SATOR-form in the medieval Rivington Church in Lancashire, England.
The phrase appears on the rune stone Nä Fv1979;234 from Närke, Sweden, dated to the fourteenth century. It reads "sator arepo tenet" (untranscribed: "sator ¶ ar(æ)po ¶ tænæt"). It also occurs in two inscriptions from Gotland (G 145 M and G 149 M), which contain the whole palindrome.
Other
Lady Jane Francesca Wilde's anthology of Irish folklore, Ancient Legends Mystic Charms & Superstitions of Ireland (1888), includes the tale of a young girl who is enchanted by a poet using the spell of a Sator square written on a piece of paper in blood.
The Sator square, with some letters changed, features in eighteenth-century books on Pow-wow folk medicine of the Pennsylvania Dutch, such as The Long Lost Friend.
In popular culture
The Sator square has inspired many works in the arts, including pieces by classical and contemporary composers such as the Austrian composer Anton Webern and the Italian composer Fabio Mengozzi, writers such as the Brazilian writer Osman Lins (whose novel Avalovara (1973) follows the structure of the square), and painters such as the American artist Dick Higgins with La Melancolia (1983) and the American artist Gary Stephan with Sator Arepo Tenet Opera Rotas (1982).
British-American director Christopher Nolan's 2020 film Tenet has a story structure that mimics the square's concept of interlinked multiple directions of meaning, and incorporates all five of the names from the Sator square:
The main antagonist is named Sator.
The artist who created the forged Goya drawings was named "Arepo".
Tenet is the title of the film as well as the secret organization that works to save the world.
The opening scene is set at an opera house.
Sator owns a construction company called "Rotas".
American author Lawrence Watt-Evans notes that Sir Terry Pratchett named the main square in the fictional city of Ankh-Morpork in his Discworld book series, "Sator Square", in a deliberate reference to the symbol. Watt-Evans notes that the Discworld series is full of other incidental references to unusual symbols and concepts.
The song Tenet by the Nordic neo-folk band Heilung is based on the Sator square. All of its individual musical parts, melodies and instruments (and even, at times, the lyrics) play the same both forwards and backwards.
See also
Abracadabra, a second-century Roman magic word
Abraxas, a mystical word in Gnosticism
Nipson anomemata me monan opsin, a fourth-century Byzantine palindrome
Paser crossword stele
The Book of the Sacred Magic of Abramelin the Mage, a medieval book that contains word squares
Notes
References
Further reading
External links
An Early Christian Cryptogram? Duncan Fishwick, University of St. Michael's College (1959)
The "Magic Square" in Conimbriga (Portugal) Robert Étienne, University of Coimbra (1978)
Square found in 1936 in the Palestra Grande on column (II 7), Parco Archeologico di Pompei, inv. 20565 (2023)
1930s archaeological discoveries
1st-century artifacts
1st-century inscriptions
Amulets
Ancient Roman art
Culture of ancient Rome
Archaeological discoveries in Italy
Archaeological discoveries in Portugal
Archaeological discoveries in Syria
Archaeological discoveries in the United Kingdom
Christian symbols
Coptic history
Dura-Europos
Early Christian inscriptions
Graffiti (archaeology)
Incantation
Knights Templar in popular culture
Language and mysticism
Latin inscriptions
Latin words and phrases
Lord's Prayer
Magic symbols
Magic words
Medieval Christian inscriptions
Medieval inscriptions in Latin
Objects believed to protect from evil
Palindromes
Papyri in the Staatliche Museen zu Berlin
Pennsylvania Dutch culture
Pompeii (ancient city)
Religious symbols
Roman archaeology
Superstitions of Europe
Superstitions of the Americas
Theophoric names
Undeciphered historical codes and ciphers
Word puzzles | Sator Square | [
"Physics"
] | 8,113 | [
"Symmetry",
"Palindromes"
] |
988,796 | https://en.wikipedia.org/wiki/Jevons%20paradox | In economics, the Jevons paradox (sometimes Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces enough of an increase in demand that resource use rises rather than falls. Governments, both historical and modern, have typically expected energy efficiency gains to lower energy consumption, rather than anticipating the Jevons paradox.
In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption.
The issue has been re-examined by modern economists studying consumption rebound effects from improved energy efficiency. In addition to reducing the amount needed for a given use, improved efficiency also lowers the relative cost of using a resource, which increases the quantity demanded. This may counteract (to some extent) the reduction in use from improved efficiency. Additionally, improved efficiency increases real incomes and accelerates economic growth, further increasing the demand for resources. The Jevons paradox occurs when the effect from increased demand predominates, and the improved efficiency results in a faster rate of resource utilization.
Considerable debate exists about the size of the rebound in energy efficiency and the relevance of the Jevons paradox to energy conservation. Some dismiss the effect, while others worry that it may be self-defeating to pursue sustainability by increasing energy efficiency. Some environmental economists have proposed that efficiency gains be coupled with conservation policies that keep the cost of use the same (or higher) to avoid the Jevons paradox. Conservation policies that increase cost of use (such as cap and trade or green taxes) can be used to control the rebound effect.
History
The Jevons paradox was first described by the English economist William Stanley Jevons in his 1865 book The Coal Question. Jevons observed that England's consumption of coal soared after James Watt introduced the Watt steam engine, which greatly improved the efficiency of the coal-fired steam engine from Thomas Newcomen's earlier design. Watt's innovations made coal a more cost-effective power source, leading to the increased use of the steam engine in a wide range of industries. This in turn increased total coal consumption, even as the amount of coal required for any particular application fell. Jevons argued that improvements in fuel efficiency tend to increase (rather than decrease) fuel use, writing: "It is a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption. The very contrary is the truth."
At that time, many in Britain worried that coal reserves were rapidly dwindling, but some experts opined that improving technology would reduce coal consumption. Jevons argued that this view was incorrect, as further increases in efficiency would tend to increase the use of coal. Hence, improving technology would tend to increase the rate at which England's coal deposits were being depleted, and could not be relied upon to solve the problem.
Although Jevons originally focused on coal, the concept has since been extended to other resources, e.g., water usage. The Jevons paradox is also found in socio-hydrology, in the safe development paradox called the reservoir effect, where construction of a reservoir to reduce the risk of water shortage can instead exacerbate that risk, as increased water availability leads to more development and hence more water consumption.
Cause
Economists have observed that consumers tend to travel more when their cars are more fuel efficient, causing a 'rebound' in the demand for fuel. An increase in the efficiency with which a resource (e.g. fuel) is used causes a decrease in the cost of using that resource when measured in terms of what it can achieve (e.g. travel). Generally speaking, a decrease in the cost (or price) of a good or service will increase the quantity demanded (the law of demand). With a lower cost for travel, consumers will travel more, increasing the demand for fuel. This increase in demand is known as the rebound effect, and it may or may not be large enough to offset the original drop in fuel use from the increased efficiency. The Jevons paradox occurs when the rebound effect is greater than 100%, exceeding the original efficiency gains.
The size of the direct rebound effect is dependent on the price elasticity of demand for the good. In a perfectly competitive market where fuel is the sole input used, if the price of fuel remains constant but efficiency is doubled, the effective price of travel would be halved (twice as much travel can be purchased). If in response, the amount of travel purchased more than doubles (i.e. demand is price elastic), then fuel consumption would increase, and the Jevons paradox would occur. If demand is price inelastic, the amount of travel purchased would less than double, and fuel consumption would decrease. However, goods and services generally use more than one type of input (e.g. fuel, labour, machinery), and other factors besides input cost may also affect price. These factors tend to reduce the rebound effect, making the Jevons paradox less likely to occur.
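To make this arithmetic concrete, the rebound logic can be sketched with a constant-elasticity demand model. The sketch below is a minimal illustration rather than a model from the literature discussed here: the function, the example numbers, and the assumption that travel demand follows Q = A·p^(-e) with fuel as the only input are all invented for demonstration.

```python
def fuel_use_after_gain(base_travel, base_efficiency, gain, elasticity):
    """Direct rebound effect under constant-elasticity demand (illustrative).

    Travel demand is assumed to follow Q = A * p**(-elasticity), where p is
    the effective price of travel. Multiplying efficiency by `gain` divides
    that effective price by `gain`.
    """
    new_travel = base_travel * gain ** elasticity  # (1/gain)**(-e) = gain**e
    new_efficiency = base_efficiency * gain
    return new_travel / new_efficiency             # fuel consumed

base_fuel = 100.0 / 1.0  # 100 km of travel at 1 km per litre = 100 litres

for e in (0.5, 1.0, 1.5):  # inelastic, unit-elastic, elastic demand
    ratio = fuel_use_after_gain(100.0, 1.0, 2.0, e) / base_fuel
    print(f"elasticity {e}: fuel use changes by a factor of {ratio:.2f}")
# elasticity 0.5 -> 0.71 (fuel use falls despite some rebound)
# elasticity 1.0 -> 1.00 (rebound exactly offsets the efficiency gain)
# elasticity 1.5 -> 1.41 (rebound exceeds 100%: the Jevons paradox)
```

In this toy model fuel use scales as gain^(elasticity - 1), so the paradox occurs exactly when demand is price elastic (elasticity greater than one), matching the condition described above.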
Khazzoom–Brookes postulate
In the 1980s, economists Daniel Khazzoom and Leonard Brookes revisited the Jevons paradox for the case of society's energy use. Brookes, then chief economist at the UK Atomic Energy Authority, argued that attempts to reduce energy consumption by increasing energy efficiency would simply raise demand for energy in the economy as a whole. Khazzoom focused on the narrower point that the potential for rebound was ignored in mandatory performance standards for domestic appliances being set by the California Energy Commission.
In 1992, the economist Harry Saunders dubbed the hypothesis that improvements in energy efficiency work to increase (rather than decrease) energy consumption the Khazzoom–Brookes postulate, and argued that the hypothesis is broadly supported by neoclassical growth theory (the mainstream economic theory of capital accumulation, technological progress and long-run economic growth). Saunders showed that the Khazzoom–Brookes postulate occurs in the neoclassical growth model under a wide range of assumptions.
According to Saunders, increased energy efficiency tends to increase energy consumption by two means. First, increased energy efficiency makes the use of energy relatively cheaper, thus encouraging increased use (the direct rebound effect). Second, increased energy efficiency increases real incomes and leads to increased economic growth, which pulls up energy use for the whole economy. At the microeconomic level (looking at an individual market), even with the rebound effect, improvements in energy efficiency usually result in reduced energy consumption. That is, the rebound effect is usually less than 100%. However, at the macroeconomic level, more efficient (and hence comparatively cheaper) energy leads to faster economic growth, which increases energy use throughout the economy. Saunders argued that taking into account both microeconomic and macroeconomic effects, the technological progress that improves energy efficiency will tend to increase overall energy use.
Energy conservation policy
Jevons warned that fuel efficiency gains tend to increase fuel use. However, this does not imply that improved fuel efficiency is worthless if the Jevons paradox occurs; higher fuel efficiency enables greater production and a higher material quality of life. For example, a more efficient steam engine allowed the cheaper transport of goods and people that contributed to the Industrial Revolution. Nonetheless, if the Khazzoom–Brookes postulate is correct, increased fuel efficiency, by itself, will not reduce the rate of depletion of fossil fuels.
There is considerable debate about whether the Khazzoom–Brookes postulate is correct, and about the relevance of the Jevons paradox to energy conservation policy. Most governments, environmentalists and NGOs pursue policies that improve efficiency, holding that these policies will lower resource consumption and reduce environmental problems. Others, including many environmental economists, doubt this 'efficiency strategy' towards sustainability, and worry that efficiency gains may in fact lead to higher production and consumption. They hold that for resource use to fall, efficiency gains should be coupled with other policies that limit resource use. However, other environmental economists argue that, while the Jevons paradox may occur in some situations, the empirical evidence for its widespread applicability is limited.
The Jevons paradox is sometimes used to argue that energy conservation efforts are futile, for example, that more efficient use of oil will lead to increased demand, and will not slow the arrival or the effects of peak oil. This argument is usually presented as a reason not to enact environmental policies or pursue fuel efficiency (e.g. if cars are more efficient, it will simply lead to more driving). Several points have been raised against this argument. First, in the context of a mature market such as for oil in developed countries, the direct rebound effect is usually small, and so increased fuel efficiency usually reduces resource use, other conditions remaining constant. Second, even if increased efficiency does not reduce the total amount of fuel used, there remain other benefits associated with improved efficiency. For example, increased fuel efficiency may mitigate the price increases, shortages and disruptions in the global economy associated with crude oil depletion. Third, environmental economists have pointed out that fuel use will unambiguously decrease if increased efficiency is coupled with an intervention (e.g. a fuel tax) that keeps the cost of fuel use the same or higher.
The Jevons paradox indicates that increased efficiency by itself may not reduce fuel use, and that sustainable energy policy must rely on other types of government interventions as well. As the imposition of conservation standards or other government interventions that increase cost-of-use do not display the Jevons paradox, they can be used to control the rebound effect. To ensure that efficiency-enhancing technological improvements reduce fuel use, efficiency gains can be paired with government intervention that reduces demand (e.g. green taxes, cap and trade, or higher emissions standards). The ecological economists Mathis Wackernagel and William Rees have suggested that any cost savings from efficiency gains be "taxed away or otherwise removed from further economic circulation. Preferably they should be captured for reinvestment in natural capital rehabilitation." By mitigating the economic effects of government interventions designed to promote ecologically sustainable activities, efficiency-improving technological progress may make the imposition of these interventions more palatable, and more likely to be implemented.
Other examples
Agriculture
Increasing the yield of a crop, such as wheat, for a given area will reduce the area required to achieve the same total yield. However, increasing efficiency may make it more profitable to grow wheat and lead farmers to convert land to the production of wheat, thereby increasing land use instead.
AI, Large Language Models, and Semiconductors
Improvements in AI model efficiency have demonstrated the Jevons paradox in the computing sector. When OpenAI introduced its advanced ChatGPT Pro model in 2024 at $200 per month, featuring 86% accuracy on competition math problems (compared to 78% for its standard model), the higher performance led to increased rather than decreased compute consumption. Despite the higher price point and improved efficiency, organizations began implementing AI automation more extensively, particularly in data science, programming, and case law analysis. This trend was evidenced by OpenRouter's data, which showed weekly token consumption rising from 8 billion to over 300 billion tokens within a year. The improved efficiency of these models, rather than reducing overall compute usage, enabled new use cases like continuous-operation AI agents and automated workflows, leading to higher total semiconductor demand for companies like TSMC, Intel, and Samsung.
See also
Andy and Bill's law, new software will always consume any increase in computing power that new hardware can provide
Diminishing returns
Downs–Thomson paradox, increasing road capacity can make traffic congestion worse
Tragedy of the commons, a phenomenon in which common resources to which access is not regulated tend to become depleted
Wirth's law, faster hardware can trigger the development of less-efficient software
Dutch Disease, strong revenue from a dominant sector renders other sectors uncompetitive and starves them
AI boom, periods of increased investment and rapid advancement in artificial intelligence technology
References
Further reading
Eponymous paradoxes
Paradoxes in economics
Industrial ecology
Energy policy
Energy conservation
Environmental social science concepts | Jevons paradox | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,508 | [
"Industrial engineering",
"Energy policy",
"Environmental social science concepts",
"Environmental engineering",
"Industrial ecology",
"Environmental social science"
] |
988,803 | https://en.wikipedia.org/wiki/Out%20of%20print |
An out-of-print (OOP) or out-of-commerce item or work is something that is no longer being published. The term applies to all types of printed matter, visual media, sound recordings, and video recordings. An out-of-print book is a book that is no longer being published. The term can apply to specific editions of more popular works, which may then go in and out of print repeatedly, or to the sole printed edition of a work, which is not picked up again by any future publishers for reprint.
Most works that have ever been published are out of print at any given time, while certain highly popular books, such as the Bible, are always "in print". Less popular out-of-print books are often rare and may be difficult to acquire unless scanned or electronic copies of the books are available. With the advent of book scanning, and print-on-demand technology, fewer and fewer works are now considered truly out of print.
A publisher creates a print run of a fixed number of copies of a new book. Print runs for most modern books number in the thousands. These books can be ordered in bulk by booksellers, and when all the bookseller's copies are sold, the bookseller has the option to order additional copies. If the initial print run sells out and demand still exists, the publisher will have more copies printed, if possible. When the book is no longer selling either at a rate fast enough to pay for the inventory or stock costs, or to justify another print run, the publisher will cease to print additional copies, and may remainder or pulp the remaining unsold copies. When all of the books in a print run have been sold to booksellers, the book is said to be "out of print", meaning that a bookseller cannot get any further copies from the publisher. If a book sells out unexpectedly quickly, it may be considered out of print briefly when its initial print run is exhausted, but is usually soon reprinted.
Publishers will often let a book go out of stock for long periods, then reprint the book, usually with a new cover and formatting, to catch the presumably built up demand for the book. The author or their estate may have copyright reverted to them once the publisher has declared it out of print.
Most publishing contracts contain reversion clauses allowing authors to regain the copyright to their books. One of the triggering conditions for this is an out-of-print clause, which makes a book eligible for reversion to the author when a publisher no longer keeps it in print. Often, rights do not revert to the author automatically; instead, the author is responsible for requesting that the book be put back in print or, if the publisher declines, for demanding their rights back.
In recent years, with the development of print-on-demand services and electronic book formats, there has been much contention between publishers and authors as to when a book is deemed out of print. Publishers have begun to explicitly state which book formats qualify as in print, typically including print-on-demand and electronic copies.
At least one publishing company has adjusted its contracts to account for this change in publishing options by removing the lower limit for book sales, meaning that no matter how few copies a book sells, if it is available through a print-on-demand vendor or electronically, it is still considered in print.
The longer a book has been out of print, the more difficult it may be to obtain a copy. If there is enough demand for an out-of-print book, and all copyright issues can be resolved, another publisher may republish the book in the same manner as the original publisher might have reprinted it. In some cases, an out-of-print book, even one that sold very poorly, may be republished if the author becomes popular again.
A reader who wishes to purchase an out-of-print book must either find a bookseller who still has a copy, wait for another print run, or find someone who will sell their own copy as a used book. The advent of the Internet has made this process much easier, as many websites sell used books offered by bookstores and individuals.
Some publishers intentionally limit the print run of some or all titles to fewer copies than the anticipated demand, creating limited editions marketed to collectors. In these cases, there is an implicit or explicit promise to collectors that the book will not be reprinted, at least in the same form as originally published. For instance, Madonna's book Sex, with a limited edition print run, was the most requested out-of-print book from 2011 to 2015 on BookFinder.com and remains one of the most in-demand out-of-print publications of all time, according to Barry Walters of Rolling Stone.
See also
Abandonware
Cut-out (recording industry)
Deletion (music industry) of records
List of publishers
Orphan work
Old Earth Books
Self-publishing
References
Further reading
Book Finder
External links
Criterion DVDs (Out of Print titles indicated in red), The Criterion Collection website
Out of print Criterion Laserdiscs, The Criterion Collection website (archived 27 May 2007)
Publishing
Past | Out of print | [
"Physics"
] | 1,055 | [
"Spacetime",
"Past",
"Physical quantities",
"Time"
] |
988,975 | https://en.wikipedia.org/wiki/U.S.%20Space%20%26%20Rocket%20Center | The U.S. Space & Rocket Center in Huntsville, Alabama is a museum operated by the government of Alabama, showcasing rockets, achievements, and artifacts of the U.S. space program. Sometimes billed as "Earth's largest space museum", the center was described by astronaut Owen Garriott as "a great way to learn about space in a town that has embraced the space program from the very beginning."
The center opened in 1970, just after the Apollo 12 Moon landing, the second crewed mission to the lunar surface. It showcases Apollo Program hardware, including the Apollo 16 capsule, and also houses interactive science exhibits, Space Shuttle exhibits, and Army rocketry and aircraft. With more than 1,500 permanent rocketry and space exploration artifacts, as well as many rotating rocketry and space-related exhibits, the center occupies land carved out of Redstone Arsenal adjacent to Huntsville Botanical Garden at exit 15 on Interstate 565. The center offers bus tours of nearby NASA's Marshall Space Flight Center.
Two camp programs offer visitors the opportunity to stay on the grounds to learn more about spaceflight and aviation. U.S. Space Camp gives an in-depth exposure to the space program through participant use of simulators, lectures, and training exercises. Aviation Challenge offers a taste of military fighter pilot training, including simulations, lectures, and survival exercises. Both camps provide residential and day camp educational programs for children and adults.
Exhibits
The U.S. Space & Rocket Center has one of the most extensive collections of space artifacts and displays more than 1500 pieces. Displays include rockets, engines, spacecraft, simulators, and hands-on exhibits.
The Space & Rocket Center introduces visitors to U.S. rocketry efforts via both indoor and outdoor displays, from its predecessor at Peenemünde with the German V-1 flying bomb and V-2 rocket, through a progression of U.S. military rockets, such as the Redstone and Jupiter IRBM vehicles, and civilian derivatives such as the Mercury-Redstone and the Juno II, up to the civilian Saturn rocket family, including the vertically displayed Saturn I Block 2 Dynamic Test Vehicle, SA-D5, which has become a famous local landmark, and on to the Space Shuttle. The Saturn V Dynamic Test Vehicle, SA-500D, the only one of the three Saturn Vs on display to have been brought together outside a museum, is displayed overhead in a building designed specifically for the rocket, the Davidson Center for Space Exploration. Pathfinder, sometimes described as the first manufactured Space Shuttle Orbiter, was a mockup made of steel and wood to test facilities for later handling the actual vehicle. Until it was removed for refurbishment in February 2021 it sat atop an external tank with solid rocket boosters attached. Pathfinder was lifted back into place on the external tank and boosters in September 2024. A homecoming rededication took place on October 24, 2024.
The center showcases significant military rockets, including representatives of the Project Nike series, which formed the first ballistic missile defense, MIM-23 Hawk surface-to-air missile, Hermes, an early surface-to-surface missile, MGR-1 Honest John and Corporal nuclear missiles and Patriot, first used in the Gulf War of 1991.
The rocketry collection includes numerous engines as well. In addition to the authentic engines mounted on rockets on display, the museum has unmounted engines on display, including two F-1s, the gigantic engines that pushed Saturn Vs off the launch pad, the J-2 that powered the second and third stages of the Saturn V, and both Descent and Ascent Propulsion System (DPS/APS) engines for the Lunar Module. Engines ranging from the V-2's to NERVA to the Space Shuttle Main Engine are on display as well. The rocket park area renovation was completed in November 2024.
The Apollo program gets full coverage in the Davidson Center for Space Exploration, with artifacts outlining the Apollo missions. Astronauts crossed the service structure's red walkway to the White Room, both on display, and climbed into the Command Module atop a Saturn V, which was their cabin for the trip to the Moon and back. The Apollo 16 command module, which carried astronauts John Young, Charles Duke and Ken Mattingly and orbited the Moon 64 times in 1972, is on display. The Saturn V Instrument Unit controlled the five F-1 engines in the first stage of the rocket as it lifted off the pad, and several exhibits relate the complexity and magnitude of that phase of the journey. Astronauts took a Lunar Module (a mockup is on display) to the lunar surface, where they collected Moon rocks such as Apollo 12 Lunar Sample Number 12065,15, now at the museum. Later Moon trips carried a Lunar Roving Vehicle (displayed beside the LM). The first few Moon trips ended at a Mobile Quarantine Facility (Apollo 12's is on display), where astronauts stayed to ensure containment of any Moon contamination after the mission.
A restored engineering mock-up of Skylab is also on display, showing the Apollo project's post-lunar efforts. Various simulators help visitors understand the spaceflight experience. Space Shot lets the rider experience launch-like 4 gs and 2–3 seconds of weightlessness. G-Force Accelerator offers 3 gs of acceleration for an extended period by means of a centrifuge. Several other simulators entertain and educate visitors.
Other exhibits offer a hands-on understanding of concepts related to rocketry or space travel. A bell jar demonstrates the reason for using a rocket instead of a propeller in the vacuum of space. A wind tunnel offers visitors the opportunity to manipulate a model to see how forces change with its orientation, and The Mind of Saturn exhibit demonstrates gyroscopic force (necessary for rocket navigation). An Apollo trainer offers visitors the opportunity to climb in.
Some simulators on exhibit were used for astronaut training. A Project Mercury simulator shows the cramped conditions endured by the first Americans in space. A Gemini simulator shows visitors the accommodations when two people flew together to space for the first U.S. missions involving extra-vehicular activities and space rendezvous.
Exhibits also cover the future of space flight. Two Orion spacecraft exhibits show the next NASA spacecraft, and a Bigelow Aerospace commercial habitat model details a space tourism effort.
Bus tours
The Space & Rocket Center offers bus tours of Marshall Space Flight Center. The tour offers views of all four National Historic Landmarks at the center including a stop at the landmark Redstone Test Stand, where Alan Shepard's Redstone Rocket was tested prior to launch. Another scheduled stop is the Payload Operations and Integration Center, which serves as mission control for a number of experiments. Bus tours originally started July 4, 1972, but were suspended following the September 11 attacks in 2001.
Tours resumed July 20, 2012, the 43rd anniversary of the Apollo 11 Moon landing, limited to U.S. citizens because of security protocol at the Army installation, Redstone Arsenal, which contains Marshall Space Flight Center. As of 2023, bus tours of MSFC are no longer offered. Bus tours of Space Camp's Aviation Challenge are available.
Traveling exhibits
In the summer of 2010, the Space and Rocket Center began hosting traveling exhibits. The first was Star Wars: Where Science Meets Imagination with other exhibits planned. The United States Space Camp hosted at the facility has provided themed camps in conjunction with the exhibits, including a Jedi Experience camp.
Other traveling exhibits include:
The Chronicles of Narnia: The Exhibition Traveling Exhibit
CSI: The Experience Traveling Exhibit
A T-Rex Named Sue and Be the Dinosaur
100 Years of Von Braun: His American Journey
Mammoths and Mastodons: Titans of the Ice Age
Miss Baker gravesite
The U.S. Space & Rocket Center is the resting place of Miss Baker, a squirrel monkey who flew on a suborbital test flight of the PGM-19 Jupiter rocket on May 28, 1959. Baker lived in a facility at the center from 1971 until she died of kidney failure in November 1984.
History
The idea for the museum was first proposed by Dr. Wernher von Braun, who led the efforts of the United States to land the first man on the Moon. Plans for the museum were underway in 1960 with an economic feasibility study for the Huntsville-Madison County Chamber of Commerce.
Von Braun, understanding the dominance of football in the Alabama culture, persuaded rival Alabama and Auburn coaches Bear Bryant and Shug Jordan to appear in a television commercial supporting a $1.9 million statewide bond referendum to finance museum construction. The referendum passed on November 30, 1965, and a donation of land from the Army's Redstone Arsenal provided a location on which to build.
To help draw tourists from far afield, the center needed a crown jewel. The Huntsville Times reported, Center director "Edward O. Buckbee is the type of guy with the tenacity to 'arrange' for this planet's largest, most complex mechanical beast to become a part of the Alabama Space and Rocket Center at Huntsville. / Pulling off the coup – getting a Saturn 5 moon rocket here which cost 90 times the center itself – was 'a little difficult,' admits Buckbee in a galloping understatement." Buckbee worked with von Braun to see that the Saturn V Dynamic Test Vehicle would be delivered to the site as it was on June 28, 1969. The Saturn I Block 2 Dynamic Test Vehicle which stands erect at the museum was delivered the same day. Initial plans called for visitors to walk through the Saturn V. The center opened on March 17, 1970.
The Space & Rocket Center was a "major sponsor" of the United States pavilion at the 1982 World's Fair, providing exhibits on space and energy as well as equipment and operations for the IMAX theater at the fair. At the time, the Space & Rocket Center also served as the Alabama Energy Information Center. The Spacedome IMAX theater at the museum opened December 19, 1982. The theater closed October 7, 2018 and was converted into the Intuitive Planetarium, featuring high-definition digital projectors, which opened February 28, 2019.
Mike Wing plunged the Center into debt as its executive director from 1998 to 1999. Wing oversaw construction of a full-scale vertical Saturn V replica to be finished by the 30th anniversary of the Apollo 11 moon landing, July 1999. It serves as a towering landmark in Huntsville, and cost the center $8.6 million of borrowed money. The Huntsville Times estimated interest costs at $10 million. Wing also sought to create a program for fifth grade students in Alabama and elsewhere to attend Space Camp at no cost to them. Anonymous corporate pledges that Wing promised would fund the $800 per student never arrived. Wing prolonged the Alabama Space Science Exhibit Commission's investigation into the pledges by writing bogus personal checks and having the center record them as received. The program ultimately cost the center $7.5 million. Wing was pressured to resign, and several members of the governing Alabama Space Science Exhibit Commission were ousted from that board as a result of the debacle. At the end of Wing's term as director, the center was $26 million in debt. The state sued Wing for $7.5 million over the Space Camp fraud. They settled for $500,000.
The expenditures would shape more than the next decade for the center. Bill Stender took over from ousted Wing as acting chief executive officer on October 14, 1999.
The board of directors was largely changed out in the shakeup removing Wing. New directors included Larry Capps who was selected to head the museum on February 9, 2000, after Stender's interim appointment. He reduced the debt to $16 million while also building the Davidson Center for Space Exploration and moving the Saturn V Dynamic Test Vehicle into its custom-built facility. Capps was director through his retirement in 2010.
Dr. Deborah Barnhart, who headed Space Camp from 1986 to 1990, was selected to run the museum in 2010. During her tenure she brought Orion and other post-Shuttle training apparatus to Space Camp and retired the center's line of credit, reducing interest expenditures. The center had about $13 million in debt in May 2014. Barnhart retired in December 2019.
In July 2020, the center put out a plea for donations to help make ends meet since two–thirds of revenue had been lost due to shutdowns and cancellations from the COVID-19 pandemic, and because of the center's unique governance, it was not eligible for any state or federal bailout programs. After a week, the center's fundraiser met its $1.5 million goal to continue operations through April 2021.
On December 15, 2020, the Alabama Space Science Exhibit Commission announced that Dr. Kimberly Robinson would be the next director, starting February 15, 2021.
Buildings
Huntsville architect David Crowe designed the initial exhibit building. Since 1969, Huntsville residents could point to the vertical Saturn I rocket at the U.S. Space & Rocket Center as a distant landmark (located a few miles from the city center). In 1999, a full-scale model of the Saturn V rocket was erected, standing nearly twice as tall as the Saturn I.
From 1979 to 2023 an unflown Saturn IB rocket owned by MSFC and leased to the museum stood at the Alabama Welcome Center in Ardmore "as a reminder to visitors of Alabama's role in the space program." It was removed and salvaged due to lack of maintenance in September 2023.
The dome theater addition opened December 19, 1982, and was updated in early 2019 to be the INTUITIVE Planetarium.
The 1986 film SpaceCamp promoted the camp and inspired more than a doubling of camp attendees (from 5,000 in 1986 to 11,000 in 1987), and the facilities had to be expanded again.
A $3 million NASA Educator Resource Center was built during Larry Capps's tenure, opening mid-2005.
The newest addition to the U.S. Space & Rocket Center is the Davidson Center for Space Exploration, named after Dr. Julian Davidson, founder of Davidson Technologies. The building opened January 31, 2008. The Davidson Center was designed to house the Saturn V Dynamic Test Vehicle (listed on the National Register of Historic Places) and many other space exploration exhibits. The vehicle is elevated above the floor surface with separated stages and engines exposed, so visitors have the opportunity to walk underneath the rocket. The Davidson Center also features a 3D movie theater in addition to the planetarium in the original museum.
Governance
The U.S. Space & Rocket Center is owned by the State of Alabama and operated by the Alabama Space Science Exhibit Commission (ASSEC), whose 18 members are appointed by the Governor for terms of four or eight years. The composition and authority of the board are set forth in Title 41, Article 15 of the Code of Alabama. ASSEC meetings are open to the public.
The U.S. Space & Rocket Center Foundation is a nonprofit organization that raises funds for the ASSEC.
Visitors
The Space & Rocket Center saw 540,153 visitors in 2010 and 553,137 in 2011, and over 584,000 in 2013, the latter earning the museum recognition as the top paid-tourist attraction in Alabama. In 2017, more than 786,820 people visited the center, ranking it first among state attractions that charge admission, according to the Alabama Department of Tourism.
The NASA Human Exploration Rover Challenge, previously known as the Great Moonbuggy Race, has run every year since 1994, and all but the first two have been held at the Space & Rocket Center. The race challenges high school and college students to design and build a small moonbuggy that they can assemble on-site and ride across a simulated lunar terrain.
In popular culture
The U.S. Space & Rocket Center was the setting for feature films SpaceCamp (1986), Beyond the Stars (1989), and Space Warriors (2013), along with the 2012 made-for-TV movie A Smile as Big as the Moon.
The U.S. Space & Rocket Center was the site of a Roadblock and Pit Stop at the end of Leg 3 of The Amazing Race: Family Edition aired in October 2005.
Good Morning America has featured the Space & Rocket Center multiple times. In their 2006 proclamation the "Seven wonders of America", GMA selected the Saturn V and particularly featured the Saturn V Dynamic Test Vehicle at the U.S. Space & Rocket Center.
References
External links
U.S. Space & Rocket Center website
Aerospace museums in Alabama
Museums in Huntsville, Alabama
Space and Rocket Center
Space and Rocket Center
Open-air museums in Alabama
Buildings and structures in Huntsville, Alabama
Culture of Huntsville, Alabama
History museums in Alabama
Huntsville-Decatur, AL Combined Statistical Area
Landmarks in Alabama
Rocketry
Smithsonian Institution affiliates
Space and Rocket Center
Mountain biking venues in Alabama
Museums established in 1965
1965 establishments in Alabama
Wernher von Braun | U.S. Space & Rocket Center | [
"Engineering"
] | 3,415 | [
"Rocketry",
"Aerospace engineering"
] |
989,011 | https://en.wikipedia.org/wiki/Population%20pyramid | A population pyramid (age structure diagram) or "age-sex pyramid" is a graphical illustration of the distribution of a population (typically that of a country or region of the world) by age groups and sex; it typically takes the shape of a pyramid when the population is growing. Males are usually shown on the left and females on the right, and they may be measured in absolute numbers or as a percentage of the total population. The pyramid can be used to visualize the age of a particular population. It is also used in ecology to determine the overall age distribution of a population, an indication of the reproductive capabilities and likelihood of the continuation of a species. The number of people per unit area of land is called population density.
Structure
A population pyramid often contains continuous stacked-histogram bars, making it a horizontal bar diagram. The population size is shown on the x-axis (horizontal) while the age groups are represented on the y-axis (vertical). The size of each bar can be displayed either as a percentage of the total population or as a raw number. Males are conventionally shown on the left and females on the right. Population pyramids are often viewed as the most effective way to graphically depict the age and sex distribution of a population, partly because of the very clear image these pyramids provide. A great deal of information about the population broken down by age and sex can be read from a population pyramid, and this can shed light on the extent of development and other aspects of the population.
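As a minimal illustration of this layout, the sketch below draws a back-to-back horizontal bar chart in the conventional orientation; the age bands and cohort counts are invented for demonstration, and males are plotted as negative values so their bars extend to the left.

```python
import matplotlib.pyplot as plt

# Invented example data: population (thousands) per five-year age band.
age_bands = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34"]
males = [420, 400, 380, 350, 330, 300, 270]
females = [400, 390, 375, 355, 340, 315, 290]

fig, ax = plt.subplots()
# Males are conventionally drawn to the left, so negate their counts.
ax.barh(age_bands, [-m for m in males], label="Male")
ax.barh(age_bands, females, label="Female")
ax.set_xlabel("Population (thousands)")
ax.set_ylabel("Age group")
# Relabel the x-axis so both sides show positive magnitudes.
ticks = ax.get_xticks()
ax.set_xticks(ticks)
ax.set_xticklabels([str(abs(int(t))) for t in ticks])
ax.legend()
plt.show()
```

The same data could equally be expressed as percentages of the total population by dividing each count by the population sum before plotting.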
The measures of central tendency (mean, median, and mode) should be considered when assessing a population pyramid. For example, the average age could be used to determine the type of population in a particular region. A population with an average age of 15 would be very young compared to one with an average age of 55. Population statistics are often mid-year numbers.
A series of population pyramids can give a clear picture of how a country transitions from high to low fertility rates. If the pyramid has a broad base, a relatively high proportion of the population lies in the youngest age band, such as ages 0–14, which suggests that the country's fertility rate is high and above the replacement level; a narrow base conversely suggests fertility below the replacement level. Moving up the pyramid, the size of each age band generally declines, due to a combination of mortality and, in growing populations, the increase in the number of births over time. There are usually more females than males in the older age ranges since, for a variety of reasons, women have a greater life expectancy.
The shape of the pyramid can also reveal the age-dependency ratio of a population. Populations with a high proportion of children and/or of elderly people have a higher dependency ratio. This ratio refers to how many old and young people are dependent on the working-age groups (often defined as ages 15–64). According to Weeks' Population: an Introduction to Concepts and Issues, population pyramids can be used to predict the future, known as a population forecast. Population momentum, in which a population continues to grow even after its fertility rate has declined to replacement level, can also be anticipated when a population has a low mortalityity rate, since the population will continue to grow for some time. This brings up the term doubling time, which is used to predict when the population will double in size. Lastly, a population pyramid can even give insight into the economic status of a country from its age stratification, since resources are not evenly distributed through a population.
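As a small worked example of two of these summary measures (not drawn from Weeks' text), the sketch below computes the age-dependency ratio and the doubling time of an exponentially growing population; the cohort counts and growth rate are invented for illustration.

```python
import math

def dependency_ratio(young, working_age, old):
    """Dependents (under 15 plus 65 and over) per 100 people of working age."""
    return 100.0 * (young + old) / working_age

def doubling_time(annual_growth_rate):
    """Years for a population growing at a constant exponential rate to double."""
    return math.log(2) / annual_growth_rate

# Invented example: cohort sizes in millions.
print(f"{dependency_ratio(young=30.0, working_age=60.0, old=10.0):.1f}")  # 66.7
print(f"{doubling_time(0.02):.1f} years")  # about 34.7 years at 2% growth
```

The doubling-time formula ln(2)/r is the continuous-growth version of the rule of 70 often quoted in demography.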
Demographic transition
In the demographic transition model, the size and shape of population pyramids vary. In stage one of the model, the pyramid has its most defined shape, with a broad base and a narrow top. In stage two, the pyramid looks similar but starts to widen in the middle age groups. In stage three, the pyramid starts to round out and resembles a tombstone in shape. In stage four, there is a decrease in the younger age groups, causing the base of the widened pyramid to narrow. Lastly, in stage five, the pyramid starts to take on the shape of a kite as the base continues to shrink. The shape of the pyramid depends on the state of the country's economy: more developed countries are found in stages three, four, and five, while the least developed countries have populations represented by the pyramids of stages one and two.
Types
Each country will have a different population pyramid. However, population pyramids can be categorised into three types: stationary, expansive, or constrictive. These types have been identified by the fertility and mortality rates of a country.
"Stationary" pyramid or constant population pyramid
A pyramid can be described as stationary if the percentages of population (age and sex) remain approximately constant over time. In a stationary population, the numbers of births and deaths roughly balance one another.
"Expansive" pyramid or Expanding population pyramid
A population pyramid that is very wide at the younger ages, characteristic of countries with a high birth rate and often a low life expectancy, which leads to a high death rate. The population is said to be fast-growing, and the size of each birth cohort increases each year.
"Constrictive" pyramid or Declining population
A population pyramid that is narrowed at the bottom. The population is generally older on average, as the country has a long life expectancy, a low death rate, but also a low birth rate. This may suggest that in the future there will be a high dependency ratio due to reduced numbers at working ages. This is a typical pattern for a very developed country, with a high level of education, easy access to and incentive to use birth control, good health care, and few negative environmental factors.
Youth bulge
Gary Fuller (1995) described a youth bulge as a type of expansive pyramid. Gunnar Heinsohn (2003) argues that an excess in especially young adult male population predictably leads to social unrest, war, and terrorism, as the "third and fourth sons" that find no prestigious positions in their existing societies rationalize their impetus to compete by religion or political ideology.
Heinsohn claims that most historical periods of social unrest lacking external triggers (such as rapid climatic changes or other catastrophic changes of the environment), and most genocides, can be readily explained as a result of a built-up youth bulge. This factor has also been used to account for the Arab Spring events and the rise of extremist populism in the 2010s. Economic recessions, such as the Great Depression of the 1930s and the Great Recession of the late 2000s, have also been attributed in part to a large youth population that cannot find jobs. The youth bulge can be seen as one factor among many in explaining social unrest and uprisings in society. A 2016 study finds that youth bulges increase the chances of non-ethnic civil wars, but not ethnic civil wars.
A large population of adolescents entering the labor force and electorate strains at the seams of the economy and polity, which were designed for smaller populations. This creates unemployment and alienation unless new opportunities are created quickly enough – in which case a 'demographic dividend' accrues because productive workers outweigh young and elderly dependents. Yet the 16–29 age range is associated with risk-taking, especially among males. In general, youth bulges in developing countries are associated with higher unemployment and, as a result, a heightened risk of violence and political instability. For Cincotta and Doces (2011), the transition to more mature age structures is almost a sine qua non for democratization.
To reverse the effects of youth bulges, specific policies such as creating more jobs, improving family planning programs, and reducing overall infant mortality rates should be a priority.
Middle East and North Africa
The Middle East and North Africa are currently experiencing a prominent youth bulge. "Across the Middle East, countries have experienced a pronounced increase in the size of their youth populations over recent decades, both in total numbers and as a percentage of the total population. Today, the nearly 111 million individuals aging between 15 to 29 living across the region make up nearly 27 percent of the region's population." Structural changes in service provision, especially health care, beginning in the 1960s created the conditions for a demographic explosion, which has resulted in a population consisting primarily of younger people. It is estimated that around 65% of the regional population is under the age of 25.
The youth bulge in the Middle East and North Africa has been favorably compared to that of East Asia, which harnessed this human capital and saw huge economic growth in recent decades. The youth bulge has been referred to by the Middle East Youth Initiative as a demographic gift, which, if engaged, could fuel regional economic growth and development. "While the growth of the youth population imposes supply pressures on education systems and labor markets, it also means that a growing share of the overall population is made up of those considered to be of working age; and thus not dependent on the economic activity of others. In turn, this declining dependency ratio can have a positive impact on overall economic growth, creating a demographic dividend. The ability of a particular economy to harness this dividend, however, is dependent on its ability to ensure the deployment of this growing working-age population towards productive economic activity, and to create the jobs necessary for the growing labor force."
See also
Age class structure
Demographic analysis
Demographic transition
Middle East Youth Initiative
Overpopulation
Political demography
Population growth
Sex ratio
Waithood
References
Citations
Additional references
U.S. Census Bureau, Demographic Internet Staff (June 27, 2011). "International Programs, International Data Base". Information Gateway. U.S. Census Bureau.
"Population Reference Bureau – Inform, Empower, Advance". Population Reference Bureau.
"Databases". United Nations.
Zarulli, Virginia, et al. "Women Live Longer than Men Even During Severe Famines and Epidemics". Proceedings of the National Academy of Sciences, National Academy of Sciences, Jan 3 2018.
External links
World Population Prospects, the 2010 Revision, Website of the United Nations Population Division with population pyramids for all countries
U.S. Census Bureau, International Statistical Agencies
U.S. Census Bureau, International Database (IDB)
Australian animated population pyramids, Australian Bureau of Statistics
Interactive population pyramids of metropolitan France 1901-2060 (INSEE)
Demographic economics
Ageing
Demographics
Demography
Population geography
Statistical charts and diagrams | Population pyramid | [
"Environmental_science"
] | 2,117 | [
"Demography",
"Environmental social science"
] |
989,039 | https://en.wikipedia.org/wiki/Ecological%20psychology | Ecological psychology is the scientific study of the relationship between perception and action, grounded in a direct realist approach. This school of thought is heavily influenced by the writings of Roger Barker and James J. Gibson and stands in contrast to the mainstream explanations of perception offered by cognitive psychology. Ecological psychology is primarily concerned with the interconnectedness of perception, action and dynamical systems. A key principle in this field is the rejection of the traditional separation between perception and action, emphasizing instead that they are inseparable and interdependent.
In this context, perceptions are shaped by an individual's ability to engage with their emotional experiences in relation to the environment. This emotional engagement influences action, fostering collective processing, building social capital, and promoting pro-environmental behavior.
Barker
Roger Barker's work was based on his empirical work at the Midwest Field Station. He wrote later: "The Midwest Psychological Field Station was established to facilitate the study of human behavior and its environment in situ by bringing to psychological science the kind of opportunity long available to biologists: easy access to phenomena of the science unaltered by the selection and preparation that occur in laboratories." The study of environmental units (behavior settings) grew out of this research. In his classic work "Ecological Psychology" (1968) he argued that human behaviour was radically situated: in other words, you couldn't make predictions about human behaviour unless you know what situation or context or environment the human in question was in. For example, there are certain behaviours appropriate to being in church, attending a lecture, working in a factory etc., and the behaviour of people in these environments is more similar than the behaviour of an individual person in different environments. Barker later developed these theories in a number of books and articles.
Gibson
James J. Gibson, too, stressed the importance of the environment, in particular, the (direct) perception of how the environment of an organism affords various actions to the organism. Thus, an appropriate analysis of the environment was crucial for an explanation of perceptually guided behaviour. He argued that animals and humans stand in a 'systems' or 'ecological' relation to the environment, such that to adequately explain some behaviour it was necessary to study the environment or niche in which the behaviour took place and, especially, the information that 'epistemically connects' the organism to the environment.
It is Gibson's emphasis that the foundation for perception is ambient, ecologically available information – as opposed to peripheral or internal sensations – that makes Gibson's perspective unique in perceptual science in particular and cognitive science in general. The aphorism: "Ask not what's inside your head, but what your head's inside of" captures that idea. Gibson's theory of perception is information-based rather than sensation-based and to that extent, an analysis of the environment (in terms of affordances), and the concomitant specificational information that the organism detects about such affordances, is central to the ecological approach to perception. Throughout the 1970s and up until his death in 1979, Gibson increased his focus on the environment through development of the theory of affordances - the real, perceivable opportunities for action in the environment, that are specified by ecological information.
Gibson rejected outright indirect perception, in favour of ecological realism, his new form of direct perception that involves the new concept of ecological affordances. He also rejected the emerging constructivist, information processing and cognitivist views that assume and emphasize internal representation and the processing of meaningless, physical sensations ('inputs') in order to create meaningful, mental perceptions ('output'), all supported and implemented by a neurological basis (inside the head).
His approach to perception has often been criticised and dismissed when compared to widely publicised advances made in the fields of neuroscience and visual perception by the computational and cognitive approaches.
However, developments in cognition studies which consider the role of embodied cognition and action in psychology can be seen to support his basic position.
Given that Gibson's tenet was that "perception is based on information, not on sensations", his work and that of his contemporaries today can be seen as crucial for keeping prominent the primary question of what is perceived (i.e., affordances, via information) – before questions of mechanism and material implementation are considered. Together with a contemporary emphasis on dynamical systems theory and complexity theory as a necessary methodology for investigating the structure of ecological information, the Gibsonian approach has maintained its relevance and applicability to the larger field of cognitive science.
See also
Action-specific perception
Ambient optic array
Community psychology
Conservation psychology
Embodied cognition
Environmental Design Research Association
Evolutionary psychology
Information ecology
Situated cognition
Urie Bronfenbrenner
References
External links
Viridis Graduate Institute
Ecological Psychology Information
International Society for Ecological Psychology
Centre for the Ecological Study of Perception and Action
Teaching Psychology for Sustainability
Direct Perception; An early classic and a good introduction to the theoretical background and available research (circa 1980) on ecological psychology and direct perception; by Claire Michaels and Claudia Carello.
Ecological Psychology in Context
YouTube channel PERCEIVINGACTING A collection of video and audio resources in ecological psychology, direct perception, coordination, and related topics.
Environmental Psychologist and Wellbeing Consultant
Environmental psychology
Psychological schools
Enactive cognition | Ecological psychology | [
"Environmental_science"
] | 1,069 | [
"Environmental social science",
"Environmental psychology"
] |
989,207 | https://en.wikipedia.org/wiki/Anti-tank%20mine | An anti-tank or AT mine is a type of land mine designed to damage or destroy vehicles including tanks and armored fighting vehicles.
Compared to anti-personnel mines, anti-tank mines typically have a much larger explosive charge, and a fuze designed to be triggered by vehicles or, in some cases, remotely or by tampering with the mine.
History
First World War
The first anti-tank mines were improvised during the First World War as a countermeasure against the first tanks introduced by the British towards the end of the war. Initially they were nothing more than a buried high-explosive shell or mortar bomb with its fuze upright. Later, purpose-built mines were developed, including the Flachmine 17, which was simply a wooden box packed with explosives and triggered either remotely or by a pressure fuze. By the end of the war, the Germans had developed row mining techniques, and mines accounted for 15% of U.S. tank casualties during the Battle of Saint-Mihiel, the Third Battle of the Aisne, the Battle of the Selle and the Meuse-Argonne Offensive.
Inter-War Period
The Soviet Union began developing mines in the early 1920s, and in 1924 produced its first anti-tank mine, the EZ mine. The mine, which was developed by Yegorov and Zelinskiy, had a 1 kg charge, which was enough to break the tracks of contemporary tanks. Meanwhile, in Germany, defeat spurred the development of anti-tank mines, with the first truly modern mine, the Tellermine 29, entering service in 1929. It was a disc-shaped device approximately 30 cm across filled with about 5 kg of high explosives. A second mine, the Tellermine 35 was developed in 1935. Anti-tank mines were used by both sides during the Spanish Civil War. Notably, Republican forces lifted mines placed by Nationalist forces and used them against the Nationalists. This spurred the development of anti-handling devices for anti-tank mines.
The Winter War between the Soviet Union and Finland also saw widespread use of anti-tank mines. Finnish forces, facing a general shortage of anti-tank weapons, could exploit the predictable movements of motorized units imposed by difficult terrain and weather conditions.
Second World War
The German Tellermine was a purpose-built anti-tank mine first introduced in 1929. Some variants were of a rectangular shape, but in all cases the outer casing served only as a container for the explosives and fuze, without being used to destructive effect (e.g. as shrapnel). The Tellermine was the prototypical anti-tank mine, with many elements of its design emulated by later mines such as the Pignone P-1, NR 25, and M6. Because of the Tellermine's high operating pressure, a vehicle would need to pass directly overhead to detonate it. But since the tracks represent only about 20% of a tank's width, the pressure fuze had a limited area of effect.
As one source has it: "Since they were pressure-detonated, these early anti-tank mines typically did most of their damage to a tank's treads, leaving its crew unharmed and its guns still operational but immobilised and vulnerable to aircraft and enemy anti-tank weapons ... During World War II they (the Wehrmacht) began using a mine with a tilt-rod fuze, a thin rod standing approximately two feet up from the center of the charge and nearly impossible to see after the mine had been buried. As a tank passed over the mine, the rod was pushed forward, causing the charge to detonate directly beneath it. The blast often killed the crew and sometimes exploded onboard ammunition. Now that tank crews were directly at risk, they were less likely to plow through a minefield."
Although other measures such as satchel charges, sticky bombs and bombs designed to magnetically adhere to tanks were developed, they do not fall within the category of land mines as they are not buried and detonated remotely or by pressure. The Hawkins mine was a British anti-tank device that could be employed as a mine laid on the road surface for a tank to run over setting off a crush fuze or thrown at the tank in which case a timer fuze was used.
Shaped charge devices like the Hohl-Sprung mine 4672 were also developed by Germany later in the war, although these did not see widespread use. The most advanced German anti-tank mine of the war was their minimal metal Topfmine.
In contrast to the dinner plate mines such as the German Tellermine were bar mines such as the German Riegel mine 43 and Italian B-2 mine. These were long mines designed to increase the probability of a vehicle triggering it, the B-2 consisted of multiple small shaped charge explosive charges along its length designed to ensure a mobility kill against enemy vehicles by destroying their tracks. This form of mine was the inspiration for the British L9 bar mine.
Modern
Several advances have been made in the development of modern anti-tank mines, including:
more effective explosive payloads (different explosive compounds and shaped charge effects)
use of non-ferrous materials making them harder to detect
new methods of deployment (from aircraft or with artillery)
more sophisticated fuzes (e.g. triggered by magnetic and seismic effects, which make a mine blast resistant, or which ignore the first target vehicle to drive over it and therefore can be used against convoys or mine rollers)
sophisticated "anti-handling" devices to prevent or discourage tampering or removal.
Design
Modern anti-tank mines are usually more advanced than simple containers of explosives detonated remotely or by a vehicle's pressure. The biggest advances were made in the following areas:
Power of the explosives (explosives such as RDX).
Shaped charges to increase the armour piercing effect.
Advanced dispersal systems.
More advanced or specific detonation triggers.
Most modern mine bodies or casings are made of plastic material to avoid easy detection. They feature combinations of pressure or magnetically activated detonators to ensure that they are only triggered by vehicles.
Dispersal systems
There are several systems for dispersing mines to quickly cover wide areas, as opposed to a soldier laying each one individually. These systems can take the form of cluster bombs or artillery-delivered munitions. Cluster bombs contain several mines each, sometimes mixed with anti-personnel mines. When the cluster bomb reaches a preset altitude it disperses the mines over a wide area. Some anti-tank mines are designed to be fired by artillery, and arm themselves once they impact the target area.
Off-route mines
Off-route mines are designed to be effective when detonated next to a vehicle instead of underneath the vehicle. They are useful in cases where the ground or surface is not suitable for burying or concealing a mine. They normally employ a Misnay–Schardin shaped charge to fire a penetrating slug through the target armour. This self-forging projectile principle has been used for some French and Soviet off route mines and has earned infamy as an improvised explosive device (IED) technique in Israel and especially Iraq.
Due to the critical standoff necessary for penetration and the development of standoff neutralization technologies, shaped charge off-route mines using the Munroe effect are more rarely encountered, though the British/French/German ARGES mine with a tandem warhead is an example of one of the more successful.
The term "off-route mine" refers to purpose-designed and manufactured anti-tank mines. Explosively Formed Projectiles (EFPs) are one type of IED that was used in Iraq, but most "home-made" IEDs are not employed in this manner.
Countermeasures
The most effective countermeasure deployed against mine fields is mine clearing, using either explosive methods or mechanical methods. Explosive methods, such as the Giant Viper and the SADF Plofadder 160 AT, involve laying explosives across a minefield, either by propelling the charges across the field with rockets, or by dropping them from aircraft, and then detonating the explosive, clearing a path. Mechanical methods include plowing and pressure-forced detonation. In plowing, a specially designed plow attached to the front end of a heavily armored tank is used to push aside the earth and any mines embedded in it, clearing a path as wide as the pushing tank. In pressure-forced detonation, a heavily armored tank pushes a heavy spherical or cylindrical solid metal roller ahead of it, causing mines to detonate.
There are also several ways of making vehicles resistant to the effects of a mine detonation to reduce the chance of crew injury. In case of a mine's blast effect, this can be done by absorbing the blast energy, deflecting it away from the vehicle hull or increasing the distance between the crew and the points where wheels touch the ground–where any detonations are likely to centre.
Another way to protect a vehicle from mines was to attach wooden planks to the sides of armored vehicles to prevent enemy soldiers from attaching magnetic mines. In the close combat on Iwo Jima, for example, some tanks were protected in this manner. A Japanese soldier running up from a concealed foxhole would not be able to stick a magnetic mine on the side of a tank encased in wood. A simple, and highly effective, technique to protect the occupants of a wheeled vehicle is to fill the tires with water. This will have the effect of absorbing and deflecting the mine's blast energy. Steel plates between the cabin and the wheels can absorb the energy and their effectiveness is enhanced if they can be angled to deflect it away from the cabin. Increasing the distance between the wheels and passenger cabin, as is done on the South African Casspir personnel carrier, is an effective technique, although there are mobility and ease of driving problems with such a vehicle. A V-hull vehicle uses a wedge-shaped passenger cabin, with the thin edge of the wedge downwards, to divert blast energy away from occupants. Improvised measures such as sandbags in the vehicle floor or bulletproof vests placed on the floor may offer a small measure of protection against tiny mines.
Steel plates on the floor and sides and armoured glass will protect the occupants from fragments. Mounting seats from the sides or roof of the vehicle, rather than the floor, will help protect occupants from shocks transmitted through the structure of the vehicle and a four-point seat harness will minimise the chance of injury if the vehicle is flung onto its side or its roof–a mine may throw a vehicle 5 – 10 m from the detonation point. Police and military can use a robot to remove mines from an area.
Combat use
Anti-tank mines have played an important role in most wars fought since they were first used.
Second World War
Anti-tank mines played a major role on the Eastern Front, where they were used in huge quantities by Soviet troops. The most common included the TM-41, TM-44, TMSB, YAM-5, and AKS. In the Battle of Kursk, combat engineers laid 503,663 AT mines, achieving a density of 1500 mines per kilometer. This was four times greater than what was seen in the Battle of Moscow.
Furthermore, mobile detachments were tasked with laying more mines directly in the path of advancing enemy tanks. A January 1943 report on Russian anti-tank tactics by the American Intelligence Bulletin attributes the following to an unnamed Soviet intelligence officer: "Each artillery battalion and, in some cases, each artillery battery, had a mobile reserve of 5 to 8 combat engineers equipped with 4 to 5 mines each. Their function was to mine unguarded tank approaches after the direction of the enemy attack had been definitely ascertained. These mines proved highly effective in stopping and even in destroying many enemy tanks."
The Wehrmacht also relied heavily on anti-tank mines to defend the Atlantic Wall, having planted six million mines of all types in Northern France alone. Mines were usually laid in staggered rows about 500 yards (460 meters) deep. Along with the anti-personnel types, there were various models of Tellermines, Topfmines, and Riegel mines. On the Western front, anti-tank mines were responsible for 20–22% of Allied tank losses. Since the majority of these mines were equipped with pressure fuzes (rather than tilt-rods), tanks were more often crippled than destroyed outright.
Vietnam War
During the Vietnam War, both 'regular' NVA and Viet Cong forces used AT mines. These were of Soviet, Chinese or local manufacture. Anti-tank mines were also used extensively in Cambodia and along the Thai border, planted by Pol Pot's Maoist guerrillas and the Vietnamese army, which invaded Cambodia in 1979 to topple the Khmer Rouge. Millions of these mines remain in the area, despite clearing efforts. It is estimated that they cause hundreds of deaths annually.
Southern Africa
Conflicts in southern Africa since the 1960s have often involved Soviet-, United States- or South African-supported irregular armies or fighters engaged in guerrilla warfare. Anti-tank mines were widely used in unconventional roles and spurred the development of effective mine-resistant vehicles. As a result, both Angola and Mozambique are littered with such devices to this day (as is Cambodia).
In the Angolan Civil War or South African Border War, which covered vast, sparsely populated areas of southern Angola and northern Namibia, it was easy for small groups to infiltrate and mine roads, often escaping without ever being detected. The anti-tank mines were most often placed on public roads used by civilian and military vehicles and had a great psychological effect.
Mines were often laid in complex arrangements. One tactic was to lay multiple mines on top of each other to increase the blast effect. Another common tactic was to link together several mines placed within a few metres of each other, so that all would detonate when any one was triggered.
It was because of this threat that some of the first successful mine protected vehicles were developed by South African military and police forces. Chief amongst these were the Buffel and Casspir armoured personnel carriers and Ratel armoured fighting vehicle. They employed V-shaped hulls that deflected the blast force away from occupants. In most cases occupants survived anti-tank mine detonations with only minor injuries. The vehicles themselves could often be repaired by replacing the wheels or some drive train components that were designed to be modular and replaceable for exactly this reason.
Most countries involved in Middle Eastern peacekeeping missions deploy modern developments of these vehicles, such as the RG-31 (Canada, United Arab Emirates, United States) and RG-32 (Sweden).
See also
Mines Advisory Group
Swiss Foundation for Mine Action
List of landmines (provides extensive details of different types)
Blast resistant mine
Anti-handling device
Examples of Anti-tank mines
Tellermine (World War II)
An off-route mine using the Misnay–Schardin effect
Mine dispersal systems
References
External links
Mines Advisory Group
German mines of World War 2.
How Stuff Works
German anti-tank mines (Archived 2009-10-25)
Tank casualties during WWII
Explosive weapons
Area denial weapons
Armoured warfare | Anti-tank mine | [
"Engineering"
] | 3,076 | [
"Area denial weapons",
"Military engineering"
] |
989,287 | https://en.wikipedia.org/wiki/Residue%20number%20system | A residue numeral system (RNS) is a numeral system representing integers by their values modulo several pairwise coprime integers called the moduli. This representation is allowed by the Chinese remainder theorem, which asserts that, if M is the product of the moduli, there is, in an interval of length M, exactly one integer having any given set of modular values.
Using a residue numeral system for arithmetic operations is also called multi-modular arithmetic.
Multi-modular arithmetic is widely used for computation with large integers, typically in linear algebra, because it provides faster computation than with the usual numeral systems, even when the time for converting between numeral systems is taken into account. Other applications of multi-modular arithmetic include polynomial greatest common divisor, Gröbner basis computation and cryptography.
Definition
A residue numeral system is defined by a set of k integers m_1, m_2, ..., m_k, called the moduli, which are generally supposed to be pairwise coprime (that is, any two of them have a greatest common divisor equal to one). Residue number systems have been defined for non-coprime moduli, but are not commonly used because of worse properties. Therefore, they will not be considered in the remainder of this article.
An integer x is represented in the residue numeral system by the set of its remainders x_1, x_2, ..., x_k under Euclidean division by the moduli. That is, x_i = x mod m_i and 0 ≤ x_i < m_i for every i = 1, ..., k.
Let M be the product of all the m_i. Two integers whose difference is a multiple of M have the same representation in the residue numeral system defined by the m_i. More precisely, the Chinese remainder theorem asserts that each of the M different sets of possible residues represents exactly one residue class modulo M. That is, each set of residues represents exactly one integer in the interval [0, M). For signed numbers, the dynamic range is −⌊M/2⌋ ≤ x ≤ ⌊(M − 1)/2⌋ (when M is even, generally an extra negative value is represented).
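As a concrete illustration, here is a minimal Python sketch of encoding into and decoding out of an RNS; the moduli (3, 5, 7) are an illustrative assumption, and `pow(a, -1, m)` (Python 3.8+) supplies the modular inverses used in the usual Chinese-remainder reconstruction:

```python
from math import prod

MODULI = (3, 5, 7)          # pairwise coprime moduli (illustrative choice)
M = prod(MODULI)            # M = 105; the system represents 0..104

def to_rns(x):
    """Represent x by its remainders modulo each modulus."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Recover the unique integer in [0, M) via the Chinese remainder theorem."""
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m                   # product of the other moduli
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): inverse of Mi modulo m
    return x % M

assert to_rns(42) == (0, 2, 0)
assert from_rns((0, 2, 0)) == 42
```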
Arithmetic operations
For adding, subtracting and multiplying numbers represented in a residue number system, it suffices to perform the same modular operation on each pair of residues. More precisely, if m_1, ..., m_k is the list of moduli, the sum of the integers x and y, respectively represented by the residues x_1, ..., x_k and y_1, ..., y_k, is the integer z represented by z_1, ..., z_k such that z_i = (x_i + y_i) mod m_i for i = 1, ..., k (as usual, mod denotes the modulo operation consisting of taking the remainder of the Euclidean division by the right operand). Subtraction and multiplication are defined similarly.
For a succession of operations, it is not necessary to apply the modulo operation at each step. It may be applied at the end of the computation, or, during the computation, for avoiding overflow of hardware operations.
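Continuing the sketch above, the residue-wise operations are independent of one another: no carries propagate between components, which is what makes them parallelizable.

```python
def rns_add(a, b):
    # Component-wise addition; no carry crosses between residues.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    # Component-wise multiplication, likewise carry-free.
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

x, y = to_rns(17), to_rns(4)
assert from_rns(rns_add(x, y)) == 21   # 17 + 4
assert from_rns(rns_mul(x, y)) == 68   # 17 * 4, valid since 68 < M = 105
```

The results are only meaningful as long as the true value stays within [0, M), which is why the moduli must be sized for the computation at hand.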
However, operations such as magnitude comparison, sign computation, overflow detection, scaling, and division are difficult to perform in a residue number system.
Comparison
If two integers are equal, then all their residues are equal. Conversely, if all residues are equal, then the two integers are equal, or their difference is a multiple of M. It follows that testing equality is easy.
In contrast, testing inequalities (such as x < y) is difficult and usually requires converting the integers to the standard representation. As a consequence, this representation of numbers is not suitable for algorithms using inequality tests, such as Euclidean division and the Euclidean algorithm.
Division
Division in residue numeral systems is problematic. On the other hand, if b is coprime with M (that is, gcd(b, M) = 1), then c = a · b^(−1) mod M can be easily calculated by c_i = a_i · b_i^(−1) mod m_i, where b^(−1) is the multiplicative inverse of b modulo M, and b_i^(−1) is the multiplicative inverse of b_i modulo m_i.
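A sketch of this modular-inverse division with the illustrative moduli above; it assumes gcd(b, M) = 1, and it equals exact integer division only when the divisor actually divides the dividend:

```python
def rns_div(a, b):
    """Compute a * b^(-1) residue-wise; this equals exact division a / b
    whenever b divides the integer that a represents."""
    return tuple((ai * pow(bi, -1, m)) % m
                 for ai, bi, m in zip(a, b, MODULI))

q = rns_div(to_rns(84), to_rns(4))
assert from_rns(q) == 21    # 84 / 4; note 4 is coprime to each of 3, 5, 7
```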
Applications
RNS have applications in the field of digital computer arithmetic. By decomposing a large integer in this way into a set of smaller integers, a large calculation can be performed as a series of smaller calculations that can be carried out independently and in parallel.
See also
Covering system
Reduced residue system
References
Further reading
Chervyakov, N. I.; Molahosseini, A. S.; Lyakhov, P. A. (2017). Residue-to-binary conversion for general moduli sets based on approximate Chinese remainder theorem. International Journal of Computer Mathematics, 94:9, 1833-1849, doi: 10.1080/00207160.2016.1247439.
Chervyakov, N. I.; Lyakhov, P. A.; Deryabin, M. A. (2020). Residue Number System-Based Solution for Reducing the Hardware Cost of a Convolutional Neural Network. Neurocomputing, 407, 439-453, doi: 10.1016/j.neucom.2020.04.018.
Isupov, Konstantin (2021). "High-Performance Computation in Residue Number System Using Floating-Point Arithmetic". Computation. 9 (2): 9. doi:10.3390/computation9020009. ISSN 2079-3197.
Modular arithmetic
Computer arithmetic | Residue number system | [
"Mathematics"
] | 1,016 | [
"Computer arithmetic",
"Arithmetic",
"Modular arithmetic",
"Number theory"
] |
989,686 | https://en.wikipedia.org/wiki/Fenfluramine | Fenfluramine, sold under the brand name Fintepla, is a serotonergic medication used for the treatment of seizures associated with Dravet syndrome and Lennox–Gastaut syndrome. It was formerly used as an appetite suppressant in the treatment of obesity, but was discontinued for this use due to cardiovascular toxicity before being repurposed for new indications. Fenfluramine was used for weight loss both alone under the brand name Pondimin and in combination with phentermine commonly known as fen-phen.
Side effects of fenfluramine in people treated for seizures include decreased appetite, somnolence, sedation, lethargy, diarrhea, constipation, abnormal echocardiogram, fatigue, malaise, asthenia, ataxia, balance disorder, gait disturbance, increased blood pressure, drooling, excessive salivation, fever, upper respiratory tract infection, vomiting, appetite loss, weight loss, falls, and status epilepticus. Fenfluramine acts as a serotonin and norepinephrine releasing agent, agonist of the serotonin 5-HT2 receptors, and sigma σ1 receptor positive modulator. Its mechanism of action in the treatment of seizures is unknown, but may involve increased activation of certain serotonin receptors and the sigma σ1 receptor. Chemically, fenfluramine is a phenethylamine and amphetamine.
Fenfluramine was developed in the early 1960s and was first introduced for medical use as an appetite suppressant in France in 1963, followed by approval in the United States in 1973. In the 1990s, fenfluramine came to be associated with cardiovascular toxicity, and because of this, was withdrawn from the United States market in 1997. Subsequently, it was repurposed for the treatment of seizures and was reintroduced in the United States and the European Union in 2020. Fenfluramine was previously a schedule IV controlled substance in the United States; however, it was removed from control pursuant to rulemaking issued on 23 December 2022.
Medical uses
Seizures
Fenfluramine is indicated for the treatment of seizures associated with Dravet syndrome and Lennox–Gastaut syndrome in people age two and older.
Dravet syndrome is a life-threatening, rare and chronic form of epilepsy. It is often characterized by severe and unrelenting seizures despite medical treatment.
Obesity
Fenfluramine was formerly used as an appetite suppressant in the treatment of obesity, but was withdrawn for this use due to cardiovascular toxicity.
Adverse effects
The most common adverse reactions in people with seizures include decreased appetite; drowsiness, sedation and lethargy; diarrhea; constipation; abnormal echocardiogram; fatigue or lack of energy; ataxia (lack of coordination), balance disorder, gait disturbance (trouble with walking); increased blood pressure; drooling, salivary hypersecretion (saliva overproduction); pyrexia (fever); upper respiratory tract infection; vomiting; decreased weight; risk of falls; and status epilepticus.
The U.S. Food and Drug Administration (FDA) fenfluramine labeling includes a boxed warning stating the drug is associated with valvular heart disease (VHD) and pulmonary arterial hypertension (PAH). Because of the risks of VHD and PAH, fenfluramine is available only through a restricted drug distribution program, under a risk evaluation and mitigation strategy (REMS). The fenfluramine REMS requires health care professionals who prescribe fenfluramine and pharmacies that dispense fenfluramine to be specially certified in the fenfluramine REMS and that patients be enrolled in the REMS. As part of the REMS requirements, prescribers and patients must adhere to the required cardiac monitoring with echocardiograms to receive fenfluramine.
At higher therapeutic doses, headache, diarrhea, dizziness, dry mouth, erectile dysfunction, anxiety, insomnia, irritability, lethargy, and stimulation have been reported with fenfluramine.
There have been reports associating chronic fenfluramine treatment with emotional instability, cognitive deficits, depression, psychosis, exacerbation of pre-existing psychosis (schizophrenia), and sleep disturbances. It has been suggested that some of these effects may be mediated by serotonergic neurotoxicity/depletion of serotonin with chronic administration and/or activation of serotonin 5-HT2A receptors.
Heart valve disease
The distinctive valvular abnormality seen with fenfluramine is a thickening of the leaflet and chordae tendineae. One mechanism used to explain this phenomenon involves heart valve serotonin receptors, which are thought to help regulate growth. Since fenfluramine and its active metabolite norfenfluramine stimulate serotonin receptors, this may have led to the valvular abnormalities found in patients using fenfluramine. In particular norfenfluramine is a potent inhibitor of the re-uptake of 5-HT into nerve terminals. Fenfluramine and its active metabolite norfenfluramine affect the 5-HT2B receptors, which are plentiful in human cardiac valves. The suggested mechanism by which fenfluramine causes damage is through over or inappropriate stimulation of these receptors leading to inappropriate valve cell division. Supporting this idea is the fact that this valve abnormality has also occurred in patients using other drugs that act on 5-HT2B receptors.
According to a study of 5,743 former users conducted by a plaintiff's expert cardiologist, damage to the heart valve continued long after stopping the medication. Of the users tested, 20% of women, and 12% of men were affected. For all ex-users, there was a 7-fold increase of chances of needing surgery for faulty heart valves caused by the drug.
Overdose
In overdose, fenfluramine can cause serotonin syndrome and rapidly result in death.
Pharmacology
Pharmacodynamics
Fenfluramine acts primarily as a serotonin releasing agent (SRA). It increases the level of serotonin, a neurotransmitter that regulates mood, appetite and other functions. Fenfluramine causes the release of serotonin by disrupting vesicular storage of the neurotransmitter, and reversing serotonin transporter function. The drug also acts as a norepinephrine releasing agent (NRA) to a lesser extent, particularly via its active metabolite norfenfluramine. At high concentrations, norfenfluramine, though not fenfluramine, also acts as a dopamine releasing agent (DRA), and so fenfluramine may do this at very high doses as well. In addition to monoamine release, while fenfluramine binds only very weakly to the serotonin 5-HT2 receptors, norfenfluramine binds to and activates the serotonin 5-HT2B and 5-HT2C receptors with high affinity and the serotonin 5-HT2A receptor with moderate affinity. The result of the increased serotonergic and noradrenergic neurotransmission is a feeling of fullness and reduced appetite.
In spite of acting as a serotonin 5-HT2A receptor agonist, fenfluramine has been described as non-hallucinogenic. However, psychedelic effects and hallucinations have occasionally been reported when large doses of fenfluramine are taken.
Fenfluramine was identified as a potent positive modulator of the σ1 receptor in 2020 and this action may be involved in its therapeutic benefits in the treatment of seizures.
Fenfluramine is inactive as an agonist of the rodent trace amine-associated receptor 1 (TAAR1). Norfenfluramine is an agonist of the human TAAR1, with dexnorfenfluramine acting as a very weak agonist of the receptor (43% of maximum at a concentration of 10,000nM) and levonorfenfluramine being inactive.
The combination of fenfluramine with phentermine, a norepinephrine–dopamine releasing agent (NDRA) acting primarily on norepinephrine, results in a well-balanced serotonin–norepinephrine releasing agent (SNRA) with weaker effects of dopamine release.
Pharmacokinetics
The elimination half-life of fenfluramine has been reported as ranging from 13 to 30 hours. The mean elimination half-lives of its enantiomers have been found to be 19 hours for dexfenfluramine and 25 hours for levfenfluramine. Norfenfluramine, the major active metabolite of fenfluramine, has an elimination half-life that is about 1.5 to 2 times as long as that of fenfluramine, with mean values of 34 hours for dexnorfenfluramine and 50 hours for levnorfenfluramine.
Chemistry
Fenfluramine is a substituted amphetamine and is also known as 3-trifluoromethyl-N-ethylamphetamine. It is a racemic mixture of two enantiomers, dexfenfluramine and levofenfluramine. Some analogues of fenfluramine include norfenfluramine, benfluorex, flucetorex, and fludorex.
History
Fenfluramine was developed in the early 1960s and was introduced in France in 1963. Approximately 50 million Europeans were treated with fenfluramine for appetite suppression between 1963 and 1996. Fenfluramine was approved in the United States in 1973. The combination of fenfluramine and phentermine was proposed in 1984. Approximately 5 million people in the United States were given fenfluramine or dexfenfluramine with or without phentermine between 1996 and 1998.
In the early 1990s, French researchers reported an association of fenfluramine with primary pulmonary hypertension and dyspnea in a small sample of patients. Fenfluramine was withdrawn from the U.S. market in 1997 after reports of heart valve disease and continued findings of pulmonary hypertension, including a condition known as cardiac fibrosis. It was subsequently withdrawn from other markets around the world. It was banned in India in 1998.
Fenfluramine was an appetite suppressant which was used to treat obesity. It was used both on its own and, in combination with phentermine, as part of the anti-obesity medication Fen-Phen.
In June 2020, fenfluramine was approved for medical use in the United States with an indication to treat Dravet syndrome.
The effectiveness of fenfluramine for the treatment of seizures associated with Dravet syndrome was demonstrated in two clinical studies in 202 subjects between ages two and eighteen. The studies measured the change from baseline in the frequency of convulsive seizures. In both studies, subjects treated with fenfluramine had significantly greater reductions in the frequency of convulsive seizures during the trials than subjects who received placebo (inactive treatment). These reductions were seen within 3–4 weeks, and remained generally consistent over the 14- to 15-week treatment periods.
The U.S. Food and Drug Administration (FDA) granted the application for fenfluramine priority review and orphan drug designations. The FDA granted approval of Fintepla to Zogenix, Inc.
On 15 October 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Fintepla, intended for the treatment of seizures associated with Dravet syndrome. Fenfluramine was approved for medical use in the European Union in December 2020.
Society and culture
Legal status
Fenfluramine is a prescription medication in the US. Fenfluramine was removed from Schedule IV of the Controlled Substances Act in December 2022.
Recreational use and effects
Unlike various other amphetamine derivatives, fenfluramine is reported to be dysphoric, "unpleasantly lethargic", and non-addictive at therapeutic doses. However, it has been reported to be used recreationally at high doses ranging between 80 and 400 mg, which have been described as producing euphoria, amphetamine-like effects, sedation, and hallucinogenic effects, along with anxiety, nausea, diarrhea, and sometimes panic attacks, as well as depressive symptoms once the drug had worn off. At very high doses (e.g., 240 mg, or between 200 and 600 mg), fenfluramine induces a psychedelic state resembling that produced by lysergic acid diethylamide (LSD).
Fenfluramine has been found to produce acute effects in humans including decreased arousal, elation, and positive mood, decreased anxiety at lower doses and increased anxiety at higher doses, drug disliking, confusion, reduced psychomotor performance, reduced impulsivity, and decreased aggression. Whereas fenfluramine alone decreases positive mood and phentermine alone increases positive mood similarly to amphetamine, the combination of fenfluramine and phentermine results in a neutral impact on mood. Similarly fenfluramine diminishes the subjective effects of phentermine and amphetamine. In contrast to other serotonin releasers like MDMA and mephedrone, fenfluramine does not produce euphoria. The differing effects with fenfluramine may be attributable to its lack of concomitant dopamine release and its potent serotonin 5-HT2C receptor agonism via its metabolite norfenfluramine.
Research
Social deficits
Fenfluramine has been reported to improve social deficits in children with autism. In addition, it has been found to produce prosocial behavior similarly to the entactogen MDMA in animals. However, fenfluramine has shown limited effectiveness in treating the symptoms of autism generally. Moreover, the cardiovascular toxicity and neurotoxicity of fenfluramine make it unsuitable for clinical use in the treatment of social deficits.
References
Further reading
External links
Inchem.org - Fenfluramine hydrochloride
5-HT2A agonists
5-HT2B agonists
5-HT2C agonists
Anorectics
Cardiotoxins
Monoaminergic neurotoxins
Non-hallucinogenic 5-HT2A receptor agonists
Orphan drugs
Psychedelic phenethylamines
Respiratory toxins
Secondary amines
Serotonin-norepinephrine releasing agents
Substituted amphetamines
Trifluoromethyl compounds
Withdrawn anti-obesity drugs
Withdrawn drugs | Fenfluramine | [
"Chemistry"
] | 3,126 | [
"Respiratory toxins",
"Cellular respiration",
"Drug safety",
"Withdrawn drugs"
] |
989,858 | https://en.wikipedia.org/wiki/Streaming%20television | Streaming television is the digital distribution of television content, such as films and television series, streamed over the Internet. Standing in contrast to dedicated terrestrial television delivered by over-the-air aerial systems, cable television, and satellite television systems, streaming television is provided as over-the-top media (OTT) or as Internet Protocol television (IPTV). In the United States, streaming television has become "the dominant form of TV viewing."
History
Up until the 1990s, it was not thought possible that a television show could be squeezed into the limited telecommunication bandwidth of a copper telephone cable to provide a streaming service of acceptable quality: the required bandwidth of a digital television signal was perceived in the mid-1990s to be around 200 Mbit/s, which was 2,000 times greater than the bandwidth of a speech signal over a copper telephone wire. By the year 2000, a television broadcast could be compressed to 2 Mbit/s, but most consumers still had little opportunity to obtain connection speeds greater than 1 Mbit/s.
Streaming services started as a result of two major technological developments: MPEG (motion-compensated DCT) video compression and asymmetric digital subscriber line (ADSL) data communication.
The first worldwide live-streaming event was a live radio broadcast of a baseball game between the Seattle Mariners and the New York Yankees, streamed by ESPN SportsZone on September 5, 1995. During the mid-2000s, streaming media was based on UDP, whereas the majority of the Internet was based on HTTP and content delivery networks (CDNs). In 2007, HTTP-based adaptive streaming was introduced by Move Networks; this new technology would be a significant change for the industry. One year after the introduction of HTTP-based adaptive streaming, many companies such as Microsoft and Netflix developed their own streaming technology. In 2009, Apple launched HTTP Live Streaming (HLS), and in 2010, Adobe launched HTTP Dynamic Streaming (HDS). In addition, HTTP-based adaptive streaming was chosen for important streaming events such as Roland Garros, Wimbledon, the Vancouver and London Olympic Games, and many others, and for premium on-demand services (Netflix, Amazon Instant Video, etc.). The increase in streaming services required new standardization; therefore in 2012, with contributions from Apple, Netflix, Microsoft, and other companies, Dynamic Adaptive Streaming over HTTP, known as MPEG-DASH, was published as the new HTTP-based adaptive streaming standard.
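HTTP-based adaptive streaming works by publishing the same content at several bitrates and letting the client switch between them. As an illustration, a minimal HLS master playlist advertising three variants might look like the following (the rendition paths and bitrates are illustrative assumptions, not from any particular service):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
```

The player downloads this playlist over plain HTTP and then requests segments from whichever variant its current throughput supports.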
The mid-2000s were the beginning of television programs becoming available via the Internet. In 2003, TVonline Station was founded in Greece, making it the world's first television station to produce and broadcast content exclusively over the internet. The online video platform site YouTube was launched in early 2005, allowing users to share illegally posted television programs. YouTube co-founder Jawed Karim said the inspiration for YouTube first came from Janet Jackson's role in the 2004 Super Bowl incident, when her breast was exposed during her performance, and later from the 2004 Indian Ocean tsunami. Karim could not easily find video clips of either event online, which led to the idea of a video sharing site.
Apple's iTunes service also began offering select television programs and series in 2005, available for download after direct payment. A few years later, television networks and other independent services began creating sites where shows and programs could be streamed online. Amazon Prime Video began in the United States as Amazon Unbox in 2006, but did not launch worldwide until 2016. Netflix, a website originally created for DVD rentals and sales, began providing streaming content in 2007. In 2008 Hulu, owned by NBC and Fox, was launched, followed by tv.com in 2009, owned by CBS. The first generation Apple TV was released in 2007 and in 2008 the first generation Roku streaming device was announced. Digital media players also began to become available to the public during this time. These digital media players have continued to be updated and new generations released.
Smart TVs took over the television market after 2010 and continue to partner with new providers to bring streaming video to even more users. As of 2015, smart TVs are the only type of middle to high-end television being produced. Amazon's version of a digital media player, Amazon Fire TV, was not offered to the public until 2014.
Access to television programming has evolved from computer and television access to include mobile devices such as smartphones and tablet computers. Corresponding apps for mobile devices started to become available via app stores in 2008, but they grew in popularity in the 2010s with the rapid deployment of LTE cellular networks. These apps enable users to stream television content on mobile devices that support them.
In 2008, the International Academy of Web Television, headquartered in Los Angeles, formed in order to organize and support television actors, authors, executives, and producers in web series and streaming television. The organization also administers the selection of winners for the Streamy Awards. In 2009, the Los Angeles Web Series Festival was founded. Several other festivals and award shows have been dedicated solely to web content, including the Indie Series Awards and the Vancouver Web Series Festival. In 2013, in response to the shifting of the soap opera All My Children from broadcast to streaming television, a new category for "Fantastic web-only series" in the Daytime Emmy Awards was created. Later that year, Netflix made history by earning the first Primetime Emmy Award nominations for a streaming television series, for Arrested Development, Hemlock Grove, and House of Cards, at the 65th Primetime Emmy Awards. Hulu earned the first Emmy win for Outstanding Drama Series, for The Handmaid's Tale at the 69th Primetime Emmy Awards.
Traditional cable and satellite television providers began to offer services such as Sling TV, owned by Dish Network, which was unveiled in January 2015. DirecTV, another satellite television provider launched their own streaming service, DirecTV Stream, in 2016. Sky launched a similar streaming service in the UK called Now.
On July 13, 2015, cable company Comcast announced an HBO plus broadcast TV package at a price discounted from basic broadband plus basic cable.
In 2017, YouTube launched YouTube TV, a streaming service that allows users to watch live television programs from popular cable or network channels, and record shows to stream anywhere, anytime. More recently, 28% of US adults have cited streaming services as their main means of watching television, with 61% of those ages 18 to 29 citing it as their main method. Netflix is now the world's largest streaming TV network, as well as the world's largest Internet media and entertainment company by revenue and market cap, with 269 million paid subscribers. In 2020, the COVID-19 pandemic had a strong impact on the television streaming business, given lifestyle changes such as staying at home and lockdowns.
Technology
The Hybrid Broadcast Broadband TV (HbbTV) consortium of industry companies (such as SES, Humax, Philips, and ANT Software) is currently promoting and establishing an open European standard for hybrid set-top boxes for the reception of broadcast and broadband digital television and multimedia applications with a single-user interface.
BBC iPlayer, which originally incorporated peer-to-peer streaming, moved towards centralized distribution for its video streaming services. BBC executive Anthony Rose cited network performance as an important factor in the decision, as well as consumers being unhappy with their own network bandwidth being used to transmit content to other viewers. Samsung TV has also announced plans to provide streaming options, including 3D video on demand through its Explore 3D service.
Access control
Some streaming services incorporate digital rights management. The W3C made the controversial decision to adopt Encrypted Media Extensions due in large part to motivations to provide copy protection for streaming content. Sky Go has software that is provided by Microsoft to prevent content being copied.
Additionally, BBC iPlayer makes use of a parental control system giving users the option to "lock" content, requiring a password to access it. The goal of these systems is to enable parents to keep children from viewing sexually themed, violent, or otherwise age-inappropriate material. Flagging systems can be used to warn a user that content may be certified or that it is intended for viewing post-watershed. Honour systems are also used where users are asked for their dates of birth or age to verify if they are able to view certain content.
IPTV
IPTV delivers television content using signals based on the Internet Protocol (IP), through managed private network infrastructure entirely owned by a single telecom or Internet service provider (ISP). This stands in contrast to delivering content over unmanaged public networks - a practice known as over-the-top content delivery. Both IPTV and OTT use the Internet protocol over a packet-switched network to transmit data, but IPTV operates in a closed system—a dedicated, managed network controlled by the local cable, satellite, telephone, or fiber-optic company. In its simplest form, IPTV simply replaces traditional circuit switched analog or digital television channels with digital channels which happen to use packet-switched transmission. In both the old and new systems, subscribers have set-top boxes or other customer-premises equipment that communicates directly over company-owned or dedicated leased lines with central-office servers. Packets never travel over the public Internet, so the television provider can guarantee enough local bandwidth for each customer's needs.
The Internet protocol is a cheap, standardized way to enable two-way communication and simultaneously provide different data (e.g., TV-show files, email, Web browsing) to different customers. This supports DVR-like features for time shifting television: for example, to catch up on a TV show that was broadcast hours or days ago, or to replay the current TV show from its beginning. It also supports video on demand—browsing a catalog of videos (such as movies or television shows) which might be unrelated to the company's scheduled broadcasts.
IPTV has an ongoing standardization process (for example, at the European Telecommunications Standards Institute).
Streaming quality
Streaming quality is the quality of image and audio transmission from the servers of the distributor to the user's screen; streaming resolution measures the pixel dimensions of the delivered video. High-definition video (720p+) and later standards require higher bandwidth and faster connection speeds than previous standards, because they carry higher spatial resolution image content. In addition, transmission packet loss and latency caused by network impairments and insufficient bandwidth degrade replay quality. Decoding errors may manifest themselves as video breakup and macroblocking. The generally accepted download rate for streaming high-definition (1080p) video encoded in AVC is 6,000 kbit/s, whereas UHD requires upwards of 16,000 kbit/s.
For users who do not have the bandwidth to stream HD/4K video or even SD video, most streaming platforms make use of an adaptive bitrate stream so that if the user's bandwidth suddenly drops, the platform will lower its streaming bitrate to compensate. Most modern television streaming platforms offer a wide range of both manual and automatic bitrate settings which are based on initial connection tests during the first few seconds of a video loading, and can be changed on the fly. This is valid for both Live and Catch-up content. Additionally, platforms can also offer content in standards such as HDR or Dolby Vision or at higher framerates which can require additional costs or subscription tiers to access.
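A minimal Python sketch of the rendition-selection logic such adaptive players use; the bitrate ladder and the 0.8 safety factor are illustrative assumptions rather than any specific player's algorithm:

```python
# Available renditions as (bitrate in kbit/s, label); an illustrative ladder
# using the 1080p and UHD figures quoted above.
LADDER = [(400, "360p"), (1500, "480p"), (3000, "720p"),
          (6000, "1080p"), (16000, "2160p")]

def pick_rendition(measured_kbps, safety=0.8):
    """Return the highest rendition fitting within a safety margin of the
    measured throughput, falling back to the lowest rendition."""
    budget = measured_kbps * safety
    best = LADDER[0]
    for rate, label in LADDER:
        if rate <= budget:
            best = (rate, label)
    return best

print(pick_rendition(8000))   # -> (6000, '1080p')
print(pick_rendition(500))    # -> (400, '360p')
```

A real player re-evaluates this choice continuously as segment download times reveal changes in available throughput.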
Usage
Internet television was common in most US households as of the mid-2010s. In a 2013 study by eMarketer, about one in four new televisions being sold was a smart TV. Within the same decade, the rapid deployment of LTE cellular networks and the general availability of smartphones increased the popularity of streaming services and the corresponding apps on mobile devices. On August 18, 2022, Nielsen reported that for the first time, streaming viewership had surpassed cable.
Considering the popularity of smart TVs, smartphones, and devices such as the Roku and Chromecast, much of the US public can watch television via the Internet. Internet-only channels are now established enough to feature some Emmy-nominated shows, such as Netflix's House of Cards. Many networks also distribute their shows the day after broadcast to streaming providers such as Hulu, while some networks use a proprietary system; the BBC, for example, uses its BBC iPlayer. This has resulted in bandwidth demands increasing to the point of causing issues for some networks: it was reported in February 2014 that Verizon Fios was having issues coping with the demand placed on its network infrastructure. Until long-term bandwidth issues are worked out and regulation such as net neutrality is settled, Internet television's push to HDTV may be hindered.
Aereo was launched in March 2012 in New York City (and subsequently stopped from broadcasting in June 2014). It streamed network TV only to New York customers over the Internet. Broadcasters filed lawsuits against Aereo, because Aereo captured broadcast signals and streamed the content to Aereo's customers without paying broadcasters. In mid-July 2012, a federal judge sided with the Aereo start-up. Aereo planned to expand to every major metropolitan area by the end of 2013. The Supreme Court ruled against Aereo June 24, 2014.
Some have noted that, as opposed to broadcast television, whose demographics are mostly "unspokenly straight" white viewers, subscription dollars on cable and streaming services can "level the playing field," giving viewers from marginalized communities, and representation of their communities, "equal power."
Market competitors
Many providers of Internet television services exist—including conventional television stations that have taken advantage of the Internet as a way to continue showing television shows after they have been broadcast, often advertised as "on-demand" and "catch-up" services. Today, almost every major broadcaster around the world is operating an Internet television platform. Examples include the BBC, which introduced the BBC iPlayer on 25 June 2008 as an extension to its "RadioPlayer" and already existing streamed video-clip content, and Channel 4 that launched 4oD ("4 on Demand") (now All 4) in November 2006 allowing users to watch recently shown content. Most Internet television services allow users to view content free of charge; however, some content is for a fee. In the UK, the term catch up TV was most commonly used to refer to these sort of services at the time.
Since 2012, around 200 over-the-top (OTT) platforms providing streamed and downloadable content have emerged. Investment by Netflix in new original content for its OTT platform reached $13bn in 2018.
Streaming platforms
Amazon Prime Video
Amazon Prime Video was originally launched in 2006. Upon its initial release, the popular streaming service was known as Amazon Unbox. Amazon Prime Video grew out of the development of Amazon Prime, a paid service that includes free shipping of various goods. Amazon Prime Video is available in approximately 200 countries around the world. Each year, Amazon invests in the production of films and TV series that are streamed as Amazon originals.
Apple TV+
Apple TV+ is a subscription streaming service owned by Apple Inc. that launched on November 1, 2019. The service exclusively offers original content made by Apple, branded as Apple Originals; unlike several other streaming services, the platform carries no third-party content. The Apple TV+ name derives from the Apple TV media player released in 2007.
Disney+
Disney+ is an American subscription streaming service owned and operated by the Disney Entertainment division of The Walt Disney Company. Released on November 12, 2019, the service primarily distributes films and television series produced by The Walt Disney Studios and Walt Disney Television, with dedicated content hubs for the Disney, Pixar, Marvel, Star Wars, and National Geographic brands, as well as Star in some regions. Original films and television series are also distributed on Disney+.
Hulu
Launched in 2007, Hulu is only available to viewers in the United States because of licensing restrictions. Hulu is one of the only streaming services that provides streaming for current on-air television shows a few days after their original broadcast on cable television, but with limited availability. Hulu originally had both a free and paid plan. The free plan was accessible only via computer and there was a limited amount of content for users, whereas the paid plan could be accessed via computers, mobile devices, and connected televisions. In 2019, The Walt Disney Company became the major owner of Hulu. The platform has bundle deals where customers can subscribe to both Hulu and Disney+.
Max
Max is a streaming service released by Warner Bros. Discovery. The platform launched on May 27, 2020, in the United States, and within the first five months had amassed 8 million subscribers across the country. It offers classic Warner Bros. films and self-produced programs, and has won the right to exclusively stream Studio Ghibli films in the United States. Since 2022, theatrical releases have arrived on the platform 45 days after their theatrical debut, and the service reached 70 million subscribers in December 2021. In September 2022, 92 million households were counted as subscribers, but since that figure included subscribers to the HBO channel, the actual subscriber base of Max alone is expected to be much smaller.
Netflix
Netflix was founded by Reed Hastings and Marc Randolph in 1997 as a video rental business. Two years later, Netflix began offering an online subscription service: subscribers could select movies and TV shows on Netflix's website and receive the chosen titles via DVDs in prepaid return envelopes. From 2007, Netflix's subscribers could watch some movies and TV shows online, directly from their homes. In 2010, Netflix launched a streaming-only plan with unlimited streaming and no DVDs. Starting from the United States, the streaming-only plan reached several countries; by 2016 more than 190 countries could use this service. In 2011, Netflix began to negotiate the production of original programming, starting with the series House of Cards.
Paramount+
Paramount+ is a streaming service owned by the media company Paramount Global. The streaming service launched on October 28, 2014, originally under the name CBS All Access. At the time of release, the platform focused primarily on streaming programs from local CBS stations as well as complete access to all CBS network content. In 2016 the streaming service began creating original content that could only be found on the platform. As the network continued to expand its content, the service rebranded itself as Paramount+, taking its name from the Paramount Pictures film studio. The network has since expanded to Latin America, Europe and Australia.
Peacock
Peacock is a streaming service owned and operated by Peacock TV, a subsidiary of NBCUniversal Television and Streaming. The streaming service takes its name from the NBC peacock logo. The platform launched on July 15, 2020. The streaming service primarily features content from NBC network channels as well as other third-party sources. Additionally, Peacock now offers original content that cannot be found on any other streaming platform. In December 2022, Peacock reached 20 million paid subscribers; by March 2023, the platform had 22 million.
YouTube
The domain name of YouTube was bought and activated by Chad Hurley, Steve Chen, and Jawed Karim at the beginning of 2005, and YouTube launched later that year as an online video sharing and social media platform. The video platform became popular thanks to a short video, called Lazy Sunday, uploaded by Saturday Night Live in December 2005. The SNL video was not rebroadcast on TV, so people searched for it on Google by typing "SNL rap video," "Lazy Sunday SNL," or "Chronicles of Narnia SNL"; the first search result was a video link on YouTube, which was the beginning of sharing videos on YouTube. Because of its popularity, YouTube faced issues caused by its bandwidth expenses. In 2006, Google bought YouTube, and within a few months the video platform was the second-largest search engine in the world.
Binge-watching
In the 1990s, the practice of watching entire seasons in a short amount of time emerged with the introduction of the DVD box set. Media-marathoning consists of watching at least one season of a TV show in a week or less, watching three or more films from the same series in a week or less, or reading three or more books from the same series in a month or less. The term "binge-watching" arrived with streaming TV: when Netflix launched its first original production, House of Cards, in 2013, it began marketing this process of watching a TV series episode after episode. COVID-19 gave another connotation to binge-watching, which came to be considered a negative activity.
Broadcasting rights
Broadcasting rights (also called Streaming rights in this case) vary from country to country and even within provinces of countries. These rights govern the distribution of copyrighted content and media and allow the sole distribution of that content at any one time. An example of content only being aired in certain countries is BBC iPlayer. The BBC checks a user's IP address to make sure that only users located in the UK can stream content from the BBC. The BBC only allows free use of their product for users within the UK as those users have paid for a television license that funds part of the BBC. This IP address check is not foolproof as the user may be accessing the BBC website through a VPN or proxy server. Broadcasting rights can also be restricted to allowing a broadcaster rights to distribute that content for a limited time. Channel 4's online service All 4 can only stream shows created in the US by companies such as HBO for thirty days after they are aired on one of the Channel 4 group channels. This is to boost DVD sales for the companies who produce that media.
Some companies pay very large amounts for broadcasting rights with sports and US sitcoms usually fetching the highest price from UK-based broadcasters. A trend among major content producers in North America is the use of the "TV Everywhere" system. Especially for live content, the TV Everywhere system restricts viewership of a video feed to select Internet service providers, usually cable television companies that pay a retransmission consent or subscription fee to the content producer. This often has the negative effect of making the availability of content dependent upon the provider, with the consumer having little or no choice on whether they receive the product.
Profits and costs
With the advent of broadband Internet connections, multiple streaming providers have come onto the market in recent years. The main providers are Netflix, Hulu and Amazon. Some of these providers, such as Hulu, carry advertising and charge a monthly fee. Others, such as Netflix and Amazon, charge users a monthly fee and have no commercials. Netflix is the largest provider, with more than 217 million subscribers. The rise of internet TV has resulted in cable companies losing customers to a new kind of customer called "cord cutters": consumers who cancel their cable TV or satellite TV subscriptions and choose instead to stream TV series, films and other content via the Internet. Cord cutters, who tend to be younger people, are forming communities; with the increasing availability of online video platforms (e.g., YouTube) and streaming services, there is now an alternative to cable and satellite television subscriptions.
Overview of platforms and availability
See also
The Business of Television
Comparison of streaming media software
Comparison of video hosting services
Content delivery network
Digital television
Interactive television
Internet radio
Internet Protocol television
Home theater PC
List of free television software
List of streaming media systems
List of streaming media services
Livestreamed news
Media psychology
Multicast
P2PTV
Protection of Broadcasts and Broadcasting Organizations Treaty
Push technology
Smart TV
Software as a service
Television broadcasting
Video advertising
Web series
Web-to-TV
Webcast
WPIX, Inc. v. ivi, Inc.
References
External links
Digital television
Internet broadcasting
Internet television channels
Streaming media systems
Television technology
Video hosting
Video on demand
New media | Streaming television | [
"Technology"
] | 4,958 | [
"Information and communications technology",
"New media",
"Television technology",
"Computer systems",
"Streaming media systems",
"Telecommunications systems",
"Streaming television",
"Multimedia"
] |
989,870 | https://en.wikipedia.org/wiki/Elizabeth%20Blackburn | Elizabeth Helen Blackburn (born 26 November 1948) is an Australian-American Nobel laureate who is the former president of the Salk Institute for Biological Studies. In 1984, Blackburn co-discovered telomerase, the enzyme that replenishes the telomere, with Carol W. Greider. For this work, she was awarded the 2009 Nobel Prize in Physiology or Medicine, sharing it with Carol W. Greider and Jack W. Szostak, becoming the first Australian woman Nobel laureate.
She also worked in medical ethics, and was controversially dismissed from the Bush administration's President's Council on Bioethics. 170 scientists signed an open letter to the president in her support, maintaining that she was fired because of political opposition to her advice.
Early life and education
Elizabeth Helen Blackburn, the second of seven children, was born in Hobart, Tasmania, on 26 November 1948; both of her parents were family physicians. Her family moved to the city of Launceston when she was four, where she attended the Broadland House Church of England Girls' Grammar School (later amalgamated with Launceston Church Grammar School) until the age of sixteen.
Upon her family's relocation to Melbourne, she attended University High School, and ultimately gained very high marks in the end-of-year final statewide matriculation exams. She went on to earn a Bachelor of Science in 1970 and a Master of Science in 1972, both from the University of Melbourne in the field of biochemistry. Blackburn then received her PhD in 1975 from Darwin College at the University of Cambridge, for work she did with Frederick Sanger at the MRC Laboratory of Molecular Biology developing methods to sequence DNA using RNA, as well as studying the bacteriophage Phi X 174.
Career and research
During her postdoctoral work at Yale, Blackburn was doing research on the protozoan Tetrahymena thermophila and noticed a repeated sequence at the ends of the organism's linear rDNA which varied in size. Blackburn then found that the ends of the chromosomes consisted of a tandemly repeated hexanucleotide sequence (TTGGGG in Tetrahymena) and that the terminal ends of the chromosomes were palindromic. These characteristics allowed Blackburn and colleagues to conduct further research on the protozoan. Using the telomeric repeated ends of Tetrahymena, Blackburn and colleague Jack Szostak showed that otherwise unstable replicating plasmids in yeast were protected from degradation, proving that these sequences carried the defining characteristics of telomeres. This research also showed that the telomeric repeats of Tetrahymena were conserved evolutionarily between species. Through this research, Blackburn and collaborators saw that the ordinary chromosome-replication machinery was unlikely to account for the lengthening of the telomere, and that the addition of these hexanucleotides to the chromosomes was likely due to the activity of an enzyme able to transfer specific functional groups. The proposition of a possible transferase-like enzyme led Blackburn and PhD student Carol W. Greider to the discovery of an enzyme with reverse transcriptase activity that was able to fill in the terminal ends of telomeres, leaving the chromosome complete and able to divide without loss of its end. This 1985 discovery led to the purification of the enzyme in the lab, showing that it contained both an RNA and a protein component: the RNA portion serves as a template for adding the telomeric repeats to the incomplete telomere, and the protein provides the enzymatic function for their addition. Through this breakthrough, the enzyme was named "telomerase", solving the end-replication problem that had troubled scientists at the time.
Telomerase
In 1984, Blackburn was a biological researcher and professor at the University of California, Berkeley, studying the telomere, a structure at the end of chromosomes that protects the chromosome.
Telomerase works by adding base pairs to the single-stranded 3' overhang of DNA, extending the strand until DNA polymerase, with the help of an RNA primer, can complete the complementary strand and successfully synthesize double-stranded DNA. Because DNA polymerase synthesizes DNA in only one direction and cannot complete the extreme end of the lagging strand, telomeres shorten with each round of replication. Through their research, Blackburn and collaborators were able to show that the telomere is effectively replenished by the enzyme telomerase, which sustains cellular division by preventing the rapid loss of the genetic information internal to the telomere, a loss that would otherwise lead to cellular aging.
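The bookkeeping described here can be illustrated with a toy simulation (a deliberately simplified sketch: the starting length, the critical length, and the 50-100 base pairs lost per division are ballpark assumptions, and real telomerase regulation is far more complex):

```python
import random

def divisions_until_senescence(start_bp=10_000, critical_bp=4_000,
                               loss_per_division=(50, 100),
                               telomerase_bp_added=0):
    """Count cell divisions before the telomere hits a critical length.

    Each division loses some terminal sequence because DNA polymerase
    cannot finish the lagging-strand end; telomerase, when active,
    adds repeats back."""
    length, divisions = start_bp, 0
    while length > critical_bp:
        length -= random.randint(*loss_per_division)
        length += telomerase_bp_added
        length = min(length, start_bp)  # cap regrowth at the initial length
        divisions += 1
        if divisions > 10_000:          # telomerase outpaces loss: "immortal"
            return None
    return divisions

random.seed(1)
print("without telomerase:", divisions_until_senescence())
print("with telomerase:   ", divisions_until_senescence(telomerase_bp_added=120))
```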
On 1 January 2016, Blackburn was interviewed about her studies, discovering telomerase, and her current research. When she was asked to recall the moment of telomerase discovery she stated:

Carol had done this experiment, and we stood, just in the lab, and I remember sort of standing there, and she had this – we call it a gel. It's an autoradiogram because there were trace amounts of radioactivity that were used to develop an image of the separated DNA products of what turned out to be the telomerase enzyme reaction. I remember looking at it and just thinking, 'Ah! This could be very big. This looks just right.' It had a pattern to it. There was a regularity to it. There was something that was not just sort of garbage there, and that was really kind of coming through, even though we look back at it now, we'd say, technically, there was this, that, and the other, but it was a pattern shining through, and it just had this sort of sense, 'Ah! There's something real here.' But then, of course, the good scientist has to be very skeptical and immediately say, 'Okay, we're going to test this every way around here, and really nail this one way or the other.' If it's going to be true, you have to make sure that it's true, because you can get a lot of false leads, especially if you're wanting something to work.

In 1978, Blackburn joined the faculty of the University of California, Berkeley, in the Department of Molecular Biology. In 1990, she moved across the San Francisco Bay to the Department of Microbiology and Immunology at the University of California, San Francisco (UCSF), where she served as the Department Chair from 1993 to 1999 and was the Morris Herzstein Professor of Biology and Physiology at UCSF. Blackburn became a Professor Emeritus at UCSF at the end of 2015.
Blackburn co-founded the company Telomere Health which offers telomere length testing to the public, but later severed ties with the company.
In 2015, Blackburn was announced as the new President of the Salk Institute for Biological Studies in La Jolla, California. "Few scientists garner the kind of admiration and respect that Dr. Blackburn receives from her peers for her scientific accomplishments and her leadership, service and integrity", says Irwin M. Jacobs, chairman of Salk's Board of Trustees, on Blackburn's appointment as President of the institute. "Her deep insight as a scientist, her vision as a leader, and her warm personality will prove invaluable as she guides the Salk Institute on its continuing journey of discovery". In 2017, she announced her plans to retire from the Salk Institute the following year.
Nobel Prize
For their research and contributions to the understanding of telomeres and the enzyme telomerase, Elizabeth Blackburn, Carol Greider, and Jack Szostak were awarded the 2009 Nobel Prize in Physiology or Medicine. The substantial research on the chromosomal protection conferred by telomerase, and its impact on cellular division, has been a revolutionary catalyst in the field of molecular biology. For example, the addition of telomerase to cells that do not possess the enzyme has been shown to bypass the limit on cellular ageing in those cells, thereby linking the enzyme to reduced cellular aging. The presence of the enzyme in cancer cells has been shown to give those cells a mechanism for continued proliferation, linking the transferase activity to increased cellular growth and reduced sensitivity to cellular signaling. Telomeres are also believed to play an important role in certain types of cancers, including pancreatic, bone, prostate, bladder, lung, kidney, and head and neck cancer. The importance of discovering this enzyme has since led to her continued research at the University of California, San Francisco, where she studies the effect of telomeres and telomerase activity on cellular aging.
Bioethics
Blackburn was appointed a member of the President's Council on Bioethics in 2002. She supported human embryonic stem cell research, in opposition to the Bush administration. Her Council term was terminated by White House directive on 27 February 2004. Blackburn believes that she was dismissed from the Council because of her disapproval of the Bush administration's position against stem cell research. Her removal prompted expressions of outrage from many scientists, 170 of whom signed an open letter to the president maintaining that she was fired because of political opposition to her advice.
Scientists and ethicists at the time went as far as to say that Blackburn's removal was in violation of the Federal Advisory Committee Act of 1972, which "requires balance on such advisory bodies".
"There is a growing sense that scientific research—which, after all, is defined by the quest for truth—is being manipulated for political ends", wrote Blackburn. "There is evidence that such manipulation is being achieved through the stacking of the membership of advisory bodies and through the delay and misrepresentation of their reports."
Blackburn serves on the Science Advisory Board of the Regenerative Medicine Foundation formerly known as the Genetics Policy Institute.
Current research
In recent years Blackburn and her colleagues have been investigating the effect of stress on telomerase and telomeres with particular emphasis on mindfulness meditation. She is also one of several biologists (and one of two Nobel Prize laureates) in the 1995 science documentary Death by Design/The Life and Times of Life and Times. She also featured in the 2012 Emmy award-winning science documentary, 'Decoding Immortality' (also known as 'Immortal') by Genepool Productions. Studies suggest that chronic psychological stress may accelerate ageing at the cellular level. Intimate partner violence was found to shorten telomere length in formerly abused women versus never abused women, possibly causing poorer overall health and greater morbidity in abused women.
At the University of California San Francisco, Blackburn currently researches telomeres and telomerase in many organisms, from yeast to human cells. The lab is focused on telomere maintenance, and how this has an impact on cellular aging. Many chronic diseases have been associated with the improper maintenance of these telomeres, thereby affecting cellular division, cycling, and impaired growth. At the cutting edge of telomere research, the Blackburn lab currently investigates the impact of limited maintenance of telomeres in cells through altering the enzyme telomerase.
Publications
Blackburn's first book, The Telomere Effect: A Revolutionary Approach to Living Younger, Healthier, Longer (2017), was co-authored with health psychologist Dr. Elissa S. Epel of the Aging, Metabolism, and Emotions (AME) Center at the UCSF Center for Health and Community. Blackburn comments on ageing reversal and care for one's telomeres through lifestyle: managing chronic stress, exercising, eating better and getting enough sleep; she also covers telomere testing, plus cautions and advice. While studying telomeres and the replenishing enzyme telomerase, Blackburn discovered a vital role played by these protective caps that revolves around one central idea: the ageing of cells. The book homes in on many of the effects that poor health can have on telomeres and telomerase activity. Since telomeres shorten with every division of a cell, replenishing these caps is essential to long-term cell growth. Through research and data, Blackburn explained that people who lead stressful lives exhibit less telomerase functioning in the body, which leads to a decrease in the dividing capabilities of their cells. Once telomeres shorten drastically, cells can no longer divide, meaning the tissues they replenish with every division would die out, highlighting the ageing mechanism in humans. To increase telomerase activity in people with stress-filled lives, Blackburn suggests moderate exercise; even 15 minutes a day has been shown to stimulate telomerase activity and replenish the telomere.
Blackburn states that unhappiness in life also has an effect on the shortening of telomeres. In a study of divorced couples, their telomeres were "significantly shorter" compared with those of couples in healthy relationships, and Blackburn notes, "There's an obvious stressor ... we are intensely social beings." She suggests that positivity in daily life improves health. By increasing exercise, decreasing stress and tobacco use, and maintaining a balanced sleep schedule, Blackburn explains, telomere length can be maintained, leading to a decrease in cell aging. Blackburn also tells readers to be wary of commercial pills that claim to lengthen telomeres and protect the body from aging. She says that these pills and creams have no scientific proof of being anti-aging supplements and that the key to preserving our telomeres and stimulating telomerase activity is leading a healthy life.
Personal life
While working at the MRC Laboratory of Molecular Biology in Cambridge, Blackburn met her husband John Sedat. Sedat had taken a position at Yale, where she then decided to complete her postdoctoral work. "Thus it was that love brought me to a most fortunate and influential choice: Joe Gall’s lab at Yale." They moved to New Haven and were married soon after.
Blackburn splits her time living between La Jolla and San Francisco with her husband, and has a son, born in 1986. She serves as a mentor and advocate for scientific research and policy.
Awards and honours
Blackburn's awards and honors include:
Eli Lilly Research Award for Microbiology and Immunology (1988)
United States National Academy of Sciences Award in Molecular Biology (1990)
Harvey Society Lecturer at the Harvey Society in New York (1990)
Honorary Doctorate of Science from Yale University (1991)
Fellow of American Academy of Arts and Sciences (1991)
Elected a Fellow of the Royal Society (FRS) in 1992
Fellow of American Academy of Microbiology (1993)
Foreign Associate of National Academy of Sciences (1993)
Australia Prize (1998)
Gairdner Foundation International Award (1998)
Harvey Prize (1999)
Keio Medical Science Prize (1999)
Passano Award (1999)
California Scientist of the Year in 1999
American Academy of Achievement's Golden Plate Award (2000)
American Association for Cancer Research – G.H.A. Clowes Memorial Award (2000)
American Cancer Society Medal of Honor (2000)
Fellow of American Association for the Advancement of Science (2000)
AACR-Pezcoller Foundation International Award for Cancer Research (2001)
General Motors Cancer Research Foundation Alfred P. Sloan Award (2001)
E.B.Wilson Award of the American Society for Cell Biology (2001)
Bristol-Myers Squibb Award (2003)
Robert J. and Claire Pasarow Foundation Medical Research Award (2003)
Dr A.H. Heineken Prize for Medicine (2004)
Benjamin Franklin Medal in Life Science of The Franklin Institute (2005)
Albert Lasker Award for Basic Medical Research (2006) (shared with Carol W. Greider and Jack Szostak)
Genetics Prize from the Peter Gruber Foundation (2006)
Honorary Doctorate of Science from Harvard University (2006)
Wiley Prize in Biomedical Sciences from the Wiley Foundation (shared with Carol W. Greider) (2006)
Fellow of Australian Academy of Science (2007)
Corresponding fellow of the Australian Academy of Science (2007)
Recipient of the UCSF Women's Faculty Association Award
Honorary Doctorate of Science from Princeton University (2007)
Louisa Gross Horwitz Prize of Columbia University (2007) (shared with Carol W. Greider and Joseph G. Gall)
L'Oréal-UNESCO Award for Women in Science (2008)
Albany Medical Center Prize (2008)
Pearl Meister Greengard Prize (2008)
Tasmanian Honour Roll of Women (2008)
Victorian Honour Roll of Women (2010)
Mike Hogg Award (2009)
Paul Ehrlich and Ludwig Darmstaedter Prize (2009) (shared with Carol W. Greider)
The Nobel Prize in Physiology or Medicine 2009, shared with Carol W. Greider and Jack W. Szostak "for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase"
Companion of the Order of Australia (Australia Day Honours, 2010), for "eminent service to science as a leader in the field of biomedical research, particularly through the discovery of telomerase and its role in the development of cancer and ageing of cells and through contributions as an international adviser in Bioethics."
Fellow of the Royal Society of New South Wales (FRSN) (2010)
California Hall of Fame (2011)
AIC Gold Medal (2012)
The Royal Medal of the Royal Society (2015).
Honorary Fellow at Jesus College, Oxford
Blackburn was elected:
President of the Salk Institute for Biological Studies (2016–2017)
President of the American Association for Cancer Research for 2010
President of the American Society for Cell Biology for 1998
Foreign associate of the National Academy of Sciences (1993)
Member of the Institute of Medicine (2000)
Board member of the Genetics Society of America (2000–2002)
Member of the American Philosophical Society (2006)
In 2007, Blackburn was listed among Time magazine's 100 people who shape our world.
References
External links
Video Lecture on Telomeres and Telomerase
1948 births
Living people
Alumni of Darwin College, Cambridge
American Nobel laureates
Australia Prize recipients
Australian Nobel laureates
Companions of the Order of Australia
Fellows of the American Academy of Arts and Sciences
Fellows of the Australian Academy of Science
Female fellows of the Royal Society
Fellows of the Royal Society of New South Wales
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Members of the European Molecular Biology Organization
Nobel laureates in Physiology or Medicine
Women Nobel laureates
Australian women biologists
People educated at University High School, Melbourne
Recipients of the Albert Lasker Award for Basic Medical Research
University of California, San Francisco faculty
University of Melbourne alumni
Winners of the Heineken Prize
L'Oréal-UNESCO Awards for Women in Science laureates
Members of the National Academy of Medicine
20th-century American biologists
20th-century American women scientists
21st-century American women scientists
21st-century American biologists
Fellows of the Academy of Medical Sciences (United Kingdom)
Salk Institute for Biological Studies people
Fellows of the American Academy of Microbiology
Longevity researchers
Members of the American Philosophical Society
Members of the Royal Swedish Academy of Sciences
People from Launceston, Tasmania
People from Hobart
Benjamin Franklin Medal (Franklin Institute) laureates | Elizabeth Blackburn | [
"Technology"
] | 3,885 | [
"Women Nobel laureates",
"Women in science and technology"
] |
989,935 | https://en.wikipedia.org/wiki/Turtle%20Geometry | Turtle Geometry is a college-level math text written by Hal Abelson and Andrea diSessa which aims to engage students in exploring mathematical properties visually via a simple programming language to maneuver the icon of a turtle trailing lines across a personal computer display.
See also
Turtle graphics
Turtle Geometry at MIT Press
Computer science books
1981 non-fiction books
MIT Press books | Turtle Geometry | [
"Technology"
] | 71 | [
"Computing stubs",
"Computer book stubs"
] |
989,996 | https://en.wikipedia.org/wiki/Sharp%20PC-1251 | The Sharp PC-1251 was a small pocket computer that was also marketed as the Tandy Pocket Computer.
It was created by Sharp Corporation in 1982.
Technical specifications
CPU: Hitachi SC61860 (8-bit CMOS), 576 kHz clock frequency
24-character (5×7 pixel) LCD
Integrated speaker
Same connector for printer and tape drive as PC-1401
2 built-in batteries
4 KB RAM
24 KB ROM
See also
Sharp pocket computer character sets
References
External links
Sharp PC-1251 pictures on MyCalcDB (database of 70s and 80s pocket calculators)
PC-1251
"Technology"
] | 130 | [
"Computing stubs",
"Computer hardware stubs"
] |
990,036 | https://en.wikipedia.org/wiki/Software%20requirements%20specification | A software requirements specification (SRS) is a description of a software system to be developed. It is modeled after the business requirements specification (CONOPS). The software requirements specification lays out functional and non-functional requirements, and it may include a set of use cases that describe user interactions that the software must provide to the user for perfect interaction.
Software requirements specifications establish the basis for an agreement between customers and contractors or suppliers on how the software product should function (in a market-driven project, these roles may be played by the marketing and development divisions). Software requirements specification is a rigorous assessment of requirements before the more specific system design stages, and its goal is to reduce later redesign. It should also provide a realistic basis for estimating product costs, risks, and schedules. Used appropriately, software requirements specifications can help prevent software project failure.
The software requirements specification document lists sufficient and necessary requirements for the project development. To derive the requirements, the developer needs to have a clear and thorough understanding of the products under development. This is achieved through detailed and continuous communications with the project team and customer throughout the software development process.
The SRS may be one of a contract's deliverable data item descriptions or have other forms of organizationally-mandated content.
Typically an SRS is written by a technical writer, a systems architect, or a software programmer.
History
Software requirements specifications were used in software development processes as early as 1975.
The purpose and content of software requirements specifications was formalised in 1983 by the IEEE. The standard was published in 1984 as IEEE-830-1984 and approved by ANSI. It was revised in 1993 and 1998, before being superseded by an international standard. This standard aimed at providing criteria for a good SRS, along with recommendations about its content. It recognised the benefits of prototyping for requirements engineering, and it proposed an example structure and several variants.
The ISO/IEC/IEEE 29148 standard "Systems and software engineering — Life cycle processes — Requirements engineering" superseded IEEE 830 in 2011. The current revision is from 2018. This standard is broader, as it also covers requirement quality criteria, requirement management processes, and the business requirements specification (BRS), as well as the stakeholder requirements specification (StRS). It proposes a slightly changed example structure.
Structure
An example organization of an SRS is as follows:
Purpose
Definitions
Background
System overview
References
Overall description
Product perspective
System Interfaces
User interfaces
Hardware interfaces
Software interfaces
Communication Interfaces
Memory constraints
Design constraints
Operations
Site adaptation requirements
Product functions
User characteristics
Constraints, assumptions and dependencies
Specific requirements
External interface requirements
Performance requirements
Logical database requirement
Software system attributes
Reliability
Availability
Security
Maintainability
Portability
Functional requirements
Functional partitioning
Functional description
Control description
Environment characteristics
Hardware
Peripherals
Users
Other
It is also recommended to address the verification approaches planned to qualify the software against the requirements, for example in a dedicated section whose structure mirrors the section on specific requirements.
Requirement quality
Requirements should strictly be about what is needed, independently of the system design, and not how the software should do it. Individual requirements shall hence be necessary, appropriate, and unambiguous. A set of requirements shall moreover be complete, consistent, feasible, and comprehensible.
Following the idea of code smells, the notion of requirements smell has been proposed to describe issues in a requirements specification where the requirement is not necessarily wrong but could be problematic. Examples of requirements smells are subjective language, ambiguous adverbs and adjectives, superlatives and negative statements. Comparative phrases, non-verifiable terms and terms implying totality should also be avoided.
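A crude automated pass over requirements text can flag such smells by keyword matching (a toy sketch; the word lists are our own illustrative choices, and research tools use linguistic analysis rather than plain keyword lookup):

```python
import re

# Example smell lexicons for the categories named above (subjective
# language, ambiguous adverbs/adjectives, superlatives, non-verifiable
# terms); the specific trigger words are illustrative choices.
SMELLS = {
    "subjective language":  ["user-friendly", "easy", "flexible"],
    "ambiguous adverbs":    ["quickly", "usually", "approximately"],
    "superlatives":         ["best", "fastest", "most"],
    "non-verifiable terms": ["adequate", "sufficient", "as much as possible"],
}

def smell_report(requirement: str) -> dict:
    """Return, per smell category, the trigger words found in the text."""
    text = " " + " ".join(re.findall(r"[a-z-]+", requirement.lower())) + " "
    hits = {}
    for category, terms in SMELLS.items():
        found = [t for t in terms if f" {t} " in text]
        if found:
            hits[category] = found
    return hits

req = "The system shall respond quickly and provide a user-friendly interface."
print(smell_report(req))
# {'subjective language': ['user-friendly'], 'ambiguous adverbs': ['quickly']}
```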
See also
System requirements specification
Concept of operations
Requirements engineering
Software Engineering Body of Knowledge (SWEBOK)
Design specification
Specification (technical standard)
Formal specification
Abstract type
References
External links
("This standard replaces IEEE 830-1998, IEEE 1233-1998, IEEE 1362-1998 - ")
How to Write a Software Requirement Specification to Save Costs?
Software requirements
Software documentation
IEEE standards | Software requirements specification | [
"Technology",
"Engineering"
] | 799 | [
"Software engineering",
"Computer standards",
"IEEE standards",
"Software requirements"
] |
990,090 | https://en.wikipedia.org/wiki/Redshift-space%20distortions | Redshift-space distortions are an effect in observational cosmology where the spatial distribution of galaxies appears squashed and distorted when their positions are plotted as a function of their redshift rather than as a function of their distance. The effect is due to the peculiar velocities of the galaxies causing a Doppler shift in addition to the redshift caused by the cosmological expansion.
Redshift-space distortions (RSDs) manifest in two particular ways. The Fingers of God effect is where the galaxy distribution is elongated in redshift space, with an axis of elongation pointed toward the observer. It is caused by a Doppler shift associated with the random peculiar velocities of galaxies bound in structures such as clusters. The large velocities that lead to this effect are associated with the gravity of the cluster by means of the virial theorem; they change the observed redshifts of the galaxies in the cluster, so that the redshifts deviate from the Hubble-law relationship between distance and redshift, and this leads to inaccurate distance measurements.
A closely related effect is the Kaiser effect, in which the distortion is caused by the coherent motions of galaxies as they fall inwards towards the cluster center as the cluster assembles. Depending on the particular dynamics of the situation, the Kaiser effect usually leads not to an elongation, but an apparent flattening ("pancakes of God"), of the structure. It is a much smaller effect than the fingers of God, and can be distinguished by the fact that it occurs on larger scales.
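In the plane-parallel approximation the underlying mapping is simple: a galaxy's redshift-space position s equals its real-space line-of-sight position r plus its line-of-sight peculiar velocity divided by the Hubble parameter. A minimal numerical sketch (the Hubble constant and velocities are assumed example values):

```python
H0 = 70.0  # Hubble constant in km/s/Mpc (assumed value)

def redshift_space_position(r_los_mpc, v_pec_los_kms, h0=H0):
    """Plane-parallel mapping s = r + v_parallel / H0.

    r_los_mpc:     real-space line-of-sight distance in Mpc
    v_pec_los_kms: peculiar velocity along the line of sight in km/s
                   (positive = moving away from the observer)."""
    return r_los_mpc + v_pec_los_kms / h0

# A cluster galaxy with a 1000 km/s random virial velocity is displaced
# by ~14 Mpc along the line of sight -- the Fingers of God elongation.
print(redshift_space_position(100.0, +1000.0))  # ~114.3 Mpc
print(redshift_space_position(100.0, -1000.0))  # ~85.7 Mpc
```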
The previous effects are a consequence of special relativity, and have been observed in real data. There are additional effects that arise from general relativity. One is gravitational redshift distortion, which arises from the net gravitational redshift, or blueshift, that is acquired when the photon climbs out of the gravitational potential well of the distant galaxy and then falls into the potential well of the Milky Way galaxy. This effect will make galaxies at a higher gravitational potential than Earth appear slightly closer, and galaxies at lower potential will appear farther away.
The other effects of general relativity on clustering statistics are observed when the light from a background galaxy passes near, or through, a closer galaxy or cluster. These two effects are the integrated Sachs-Wolfe effect (ISW) and gravitational lensing. ISW arises because large-scale gravitational potentials are decaying in time (due to dark energy), so that a photon passing through a low area of gravitational potential gains more energy on entry than it loses on exit, making the background galaxy appear closer. Gravitational lensing, unlike all of the previous effects, distorts the apparent position, and number, of background galaxies.
The RSDs measured in galaxy redshift surveys can be used as a cosmological probe in their own right, providing information on how structure formed in the Universe, and how gravity behaves on large scales.
See also
Cosmic microwave background spectral distortions
References
Specific citations:
General references:
External links
NED/IPAC - Large Scale Structure (Alison L. Coil)
NYU CCPP reference Wiki page
Observational cosmology
Physical cosmology
Doppler effects | Redshift-space distortions | [
"Physics",
"Astronomy"
] | 658 | [
"Physical phenomena",
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"Doppler effects",
"Physical cosmology"
] |
990,197 | https://en.wikipedia.org/wiki/Concerted%20reaction | In chemistry, a concerted reaction is a chemical reaction in which all bond breaking and bond making occurs in a single step. Reactive intermediates or other unstable high energy intermediates are not involved. Concerted reaction rates tend not to depend on solvent polarity ruling out large buildup of charge in the transition state. The reaction is said to progress through a concerted mechanism as all bonds are formed and broken in concert. Pericyclic reactions, the S2 reaction, and some rearrangements - such as the Claisen rearrangement - are concerted reactions.
The rate of the SN2 reaction is second order overall because the reaction is bimolecular (i.e., two molecular species are involved in the rate-determining step). The reaction has no intermediate steps, only a transition state. This means that all the bond making and bond breaking takes place in a single step. In order for the reaction to occur, both molecules must be oriented correctly.
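The second-order kinetics can be made concrete with a small numerical sketch (the rate constant and concentrations below are assumed example values, not measured data):

```python
def sn2_rate(k, conc_nucleophile, conc_substrate):
    """Bimolecular rate law: rate = k [Nu][R-X], first order in each
    reactant and second order overall."""
    return k * conc_nucleophile * conc_substrate

k = 0.05          # L mol^-1 s^-1, assumed rate constant
rate1 = sn2_rate(k, 0.10, 0.10)
rate2 = sn2_rate(k, 0.20, 0.10)  # doubling [Nu] doubles the rate
print(rate1, rate2, rate2 / rate1)  # 0.0005 0.001 2.0
```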
References
Organic reactions | Concerted reaction | [
"Chemistry"
] | 208 | [
"Organic reactions"
] |
990,224 | https://en.wikipedia.org/wiki/Guestbook | A guestbook (also guest book, visitor log, visitors' book, visitors' album) is a paper or electronic means for a visitor to acknowledge a visit to a site, physical or web-based, and leave details such as their name, postal or electronic address and any comments. Such paper-based ledgers or books are traditional in churches, at weddings, funerals, B&Bs, museums, schools, institutions and other private facilities open to the public. Some private homes keep visitors' books. Specialised forms of guestbooks include hotel registers, wherein guests are required to provide their contact information, and Books of Condolence, which are used at funeral homes and more generally after notable public deaths, such as the death of a monarch or president, or after a public disaster, such as an airplane crash.
On the web, a guestbook is a logging system that allows visitors of a website to leave a public comment. It is possible in some guestbooks for visitors to express their thoughts about the website or its subject. Generally, they do not require the poster to create a user account, as it is an informal method of dropping off a quick message. The purpose of a website guestbook is to display the kind of visitors the site gets, including the part of the world they reside in, and gain feedback from them. This allows the webmaster to assess and improve their site. A guestbook is generally a script, which is usually remotely hosted and written in a language such as Perl, PHP, Python or ASP. Many free guestbook hosts and scripts exist.
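A minimal guestbook script along these lines might look as follows (a hypothetical sketch rather than any particular guestbook package; the file and function names are our own):

```python
import html
import json
from datetime import datetime, timezone
from pathlib import Path

BOOK = Path("guestbook.json")  # hypothetical storage file

def sign(name: str, message: str) -> None:
    """Append one visitor entry to the guestbook file."""
    entries = json.loads(BOOK.read_text()) if BOOK.exists() else []
    entries.append({
        "name": name,
        "message": message,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    BOOK.write_text(json.dumps(entries, indent=2))

def render() -> str:
    """Render all entries as an HTML list, escaping user input
    so the public comment page cannot be abused for script injection."""
    entries = json.loads(BOOK.read_text()) if BOOK.exists() else []
    items = "".join(
        f"<li><b>{html.escape(e['name'])}</b>: {html.escape(e['message'])}"
        f" <i>({e['time']})</i></li>"
        for e in entries
    )
    return f"<ul>{items}</ul>"

sign("Ada", "Lovely site!")
print(render())
```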
Names and addresses provided in guestbooks, paper-based or electronic, are frequently recorded and collated for use in providing statistics about visitors to the site, and to contact visitors to the site in the future. Because guestbooks are considered ephemeral objects, historians, literary scholars and other academic researchers have been increasingly eager to identify and help conserve them.
See also
Guestbook spam
References
Web applications
Books by type
Memorabilia
Archives | Guestbook | [
"Technology"
] | 406 | [
"Computing stubs",
"World Wide Web stubs"
] |
990,247 | https://en.wikipedia.org/wiki/KT66 | KT66 is the designator for a beam power tube introduced by Marconi-Osram Valve Co. Ltd. (M-OV) of Britain in 1937 and marketed for application as a power amplifier for audio frequencies and driver for radio frequencies.
The KT66 is a beam tetrode that utilizes partially collimated electron beams to form a low-potential space-charge region between the anode and screen grid, returning anode secondary-emission electrons to the anode; it offers significant performance improvements over comparable power pentodes. In the 21st century, the KT66 is manufactured and used in some high fidelity audio amplifiers and musical instrument amplifiers.
Overview
Although the RCA 6L6 of 1936 (the result of a license agreement between RCA and EMI) was the first successful beam power tube on the market, the KT66 of 1937 became almost equally famous, at least in Europe.
Because the beam tetrode design eliminated the tetrode kink in the lower parts of the tetrode's voltage-current characteristic curves, M-OV marketed this tube family as the "KT" series, standing for kinkless tetrode.
The KT66 was one of the "International series" introduced in 1937. This series utilized the "American Octal" base and had characteristics equivalent to tubes by U.S. manufacturers. A number of different KT tubes were later marketed by M-OV. Some, but not all, were versions of existing American beam tetrode tubes or European power pentodes, such as the KT66 (6L6GC similar), KT77 (EL34 and 6CA7 similar), KT88 (6550), and KT63 (6F6, pentode but almost identical characteristics).
The KT66 was very popular in British radios and audio amplifiers. It was the standard output tube in the classic Quad II (1952, a version of which is still being manufactured today) and in the LEAK Type 15 (1945) and TL/12 (1948), both among the earliest British hi-fi amplifiers. Because of their excellent electrical characteristics and overload tolerance, KT66s are preferred by some guitar players for use in guitar amps in place of 6L6GC. However, the plate dissipation of the 6L6GC, at 30W, exceeds the KT66's 25W, and adjustment of the amplifier's bias is necessary.
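The re-biasing mentioned above amounts to keeping the idle plate dissipation within the tube's rating; a common rule of thumb among amplifier technicians is to idle at roughly 70% of maximum dissipation. A sketch of the arithmetic (the rule-of-thumb fraction and the sample operating point are assumptions, not manufacturer data):

```python
def idle_dissipation_watts(plate_volts, cathode_ma, screen_ma=0.0):
    """Approximate plate dissipation: plate voltage times plate current,
    where plate current is cathode current minus screen current."""
    return plate_volts * (cathode_ma - screen_ma) / 1000.0

KT66_MAX_W = 25.0   # KT66 plate dissipation rating
TARGET = 0.70       # common rule-of-thumb idle fraction (an assumption)

p = idle_dissipation_watts(plate_volts=450.0, cathode_ma=42.0, screen_ma=4.0)
print(f"idle dissipation: {p:.1f} W "
      f"({p / KT66_MAX_W:.0%} of the KT66 rating; target ~{TARGET:.0%})")
# A bias point safe for a 30 W 6L6GC can exceed the KT66's 25 W rating,
# hence the need to re-bias when substituting tubes.
```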
M-OV ceased glass vacuum tube manufacturing in 1988; their old audio tube types became valuable collectibles. In 2004 original M-OV KT66 tubes (bearing the official "Genalex" marketing brand that M-OV used outside the UK), unused in original carton, sold for US$250. KT66 tubes continued to be manufactured at EkspoPUL in Saratov, Russia (Genalex Gold Lion brand), JJ Electronic in Slovakia, and by Hengyang Electronics at former Guiguang factory in Foshan city, southern China.
Some modern Russian-made Sovtek KT66 tubes are actually 6L6GC tubes in a KT66-style bottle. While these tubes have the same pinout and meet the minimum tolerances required of a KT66, they do not have the performance characteristics of a true kinkless-tetrode KT66.
By contrast, the latest Russian-made tubes (2012) not only carry the same internal electrode structure as the original KT66 (they now look the same), but also have the same rugged electrical characteristics and can withstand a voltage on grid 2 comparable to the anode voltage rating, allowing greater power output when run in ultralinear connection.
See also
KT88
6L6
6CA7 / EL34
6V6
807
References
Barbour, Eric. "History of the 6L6" in Vacuum Tube Valley, issue 4 (1996), p. 3.
Schade, O. H. "Beam Power Tubes" in Proceedings of the IRE, February 1938.
Stokes, John. 70 Years of Radio Tubes and Valves. Vestal Press, NY, 1982.
Thrower, Keith. History of the British Radio Valve to 1940. MMA International, 1982, p. 59.
External links
Several tube datasheets
TDSL tube data: KT66
Vacuum tubes
Guitar amplification tubes
Telecommunications-related introductions in 1937 | KT66 | [
"Physics"
] | 918 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
990,315 | https://en.wikipedia.org/wiki/List%20of%20former%20IA-32%20compatible%20processor%20manufacturers | As the 32-bit Intel Architecture became the dominant computing platform during the 1980s and 1990s, multiple companies have tried to build microprocessors that are compatible with that Intel instruction set architecture. Most of these companies were not successful in the mainstream computing market. So far, only AMD has had any market presence in the computing market for more than a couple of product generations. Cyrix was successful during the 386 and 486 generations of products but did not do well after the Pentium was introduced.
List of former IA-32 compatible microprocessor vendors:
Progressed into surviving companies
Centaur Technology – originally subsidiary of IDT, later acquired by VIA Technologies, still producing compatible low-end devices for VIA
Cyrix – acquired by National Semiconductor, later acquired by VIA Technologies, eventually shut down
NexGen – bought by AMD to help develop the successful K6 device
National Semiconductor – low-end 486 (designed in-house) never widely sold; first acquirer of Cyrix, later keeping only low-end IA-32 devices targeted for consumer System-on-a-chips, finally selling them to AMD
Product discontinued/transformed
Harris Corporation – sold radiation-hardened versions of the 8086 and 80286; product line discontinued. Produced 20 MHz and 25 MHz 80286s (some motherboards were equipped with cache memory, which was unusual for 80286 processors).
NEC – sold processors, such as NEC V20 and NEC V30, that were compatible with early Intel 16-bit architectures; product line transitioned to NEC-designed architectures.
Siemens – sold versions of the 8086 and 80286; product line discontinued.
V.M. Technology – developed VM860 (8086-compatible processor), VM8600SP (80286 compatibility with proprietary 32-bit extensions), and VM386SX+ (Intel 386SX pin compatible processor) for the Japanese market.
Left the market or closed
Chips and Technologies – left the market after its 386-compatible chip failed to boot the Windows operating system
IBM – Cyrix licensee and developer of Blue Lightning 486 line of processors, eventually left compatible chip market
Rise Technology – after five years of working on the slow mP6 chip (released in 1998), the company closed a year later
Texas Instruments and SGS-Thomson – licensees of Cyrix designs, eventually left compatible chip market
Transmeta – transitioned to an intellectual property company in 2005
United Microelectronics Corporation and Meridian Semiconductor – got out of market after a suit from Intel questioning the legality of copying Intel origin x86 microcode
Incomplete/unsuccessful projects
Chromatic Research – media processor with x86 instruction set compatibility never completed
Exponential Technology – x86-compatible microprocessor never completed
S-MOS - 486-compatible project was canceled
IIT Corp – 486-compatible project never completed
International Meta Systems – Pentium/PPro-class processors "Meta 6000", "Meta 6500", "Meta 7000/BiFrost" never completed
Texas Instruments - internally developed Pentium class processor was canceled in 1996
MemoryLogix – multi-threaded CPU core "MLX1" and SOC for PCs never completed
Metaflow Technologies – 486-class processor "CP100" never released
Montalvo Systems – asymmetric multiprocessor never completed
ULSI System Technology – never completed its x86 SOC; the company shut down after one of its employees was convicted of stealing Intel floating-point x87 design documents
VLSI Technology - developed 386SX-based "Polar" SoC in collaboration with Intel - canceled due to low performance and lack of software support
KAIST - developed but did not commercialize Intel-compatible processors HK386 and K486.
Henry Wong - developed a 2-way superscalar, out-of-order execution, 32-bit x86 processor soft core running at over 200 MHz on Altera Stratix IV FPGA.
See also
List of x86 manufacturers
References
List of former IA32 compatible processor manufacturers
Computing-related lists
Computing by company
Former IA-32 compatible processor | List of former IA-32 compatible processor manufacturers | [
"Technology"
] | 837 | [
"Computer industry",
"Computing-related lists",
"Computing by company"
] |
990,343 | https://en.wikipedia.org/wiki/Lindenbaum%E2%80%93Tarski%20algebra | In mathematical logic, the Lindenbaum–Tarski algebra (or Lindenbaum algebra) of a logical theory T consists of the equivalence classes of sentences of the theory (i.e., the quotient, under the equivalence relation ~ defined such that p ~ q exactly when p and q are provably equivalent in T). That is, two sentences are equivalent if the theory T proves that each implies the other. The Lindenbaum–Tarski algebra is thus the quotient algebra obtained by factoring the algebra of formulas by this congruence relation.
The algebra is named for logicians Adolf Lindenbaum and Alfred Tarski.
Starting in the academic year 1926-1927, Lindenbaum pioneered his method in Jan Łukasiewicz's mathematical logic seminar, and the method was popularized and generalized in subsequent decades through work by Tarski.
The Lindenbaum–Tarski algebra is considered the origin of modern algebraic logic.
Operations
The operations in a Lindenbaum–Tarski algebra A are inherited from those in the underlying theory T. These typically include conjunction and disjunction, which are well-defined on the equivalence classes. When negation is also present in T, then A is a Boolean algebra, provided the logic is classical. If the theory T consists of the propositional tautologies, the Lindenbaum–Tarski algebra is the free Boolean algebra generated by the propositional variables.
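Because two propositional formulas are provably equivalent in classical logic exactly when they have the same truth table, the Lindenbaum–Tarski algebra of classical propositional logic over finitely many variables can be computed concretely by using truth tables as class representatives. The following sketch illustrates this (the helper names are our own; it is an illustration of the construction, not code from any source):

```python
from itertools import product

def truth_table(formula, num_vars):
    """Tabulate `formula` on every valuation of `num_vars` variables.
    In classical propositional logic, two formulas are provably
    equivalent iff their truth tables coincide, so the table is a
    canonical representative of a Lindenbaum-Tarski equivalence class."""
    return tuple(formula(*vals) for vals in product([False, True], repeat=num_vars))

formulas = {
    "p and q":              lambda p, q: p and q,
    "not (not p or not q)": lambda p, q: not (not p or not q),
    "p or q":               lambda p, q: p or q,
    "p implies q":          lambda p, q: (not p) or q,
}

classes = {}
for name, f in formulas.items():
    classes.setdefault(truth_table(f, 2), []).append(name)

for table, members in classes.items():
    print(table, "->", members)
# "p and q" and "not (not p or not q)" share a truth table, hence
# denote the same element of the algebra; the free Boolean algebra on
# two generators has 2**(2**2) = 16 elements in all.
```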
Related algebras
Heyting algebras and interior algebras are the Lindenbaum–Tarski algebras for intuitionistic logic and the modal logic S4, respectively.
A logic for which Tarski's method is applicable is called algebraizable. There are, however, a number of logics where this is not the case, for instance the modal logics S1, S2, or S3, which lack the rule of necessitation (⊢φ implying ⊢□φ), so ~ (defined above) is not a congruence (because ⊢φ→ψ does not imply ⊢□φ→□ψ). Another type of logic where Tarski's method is inapplicable is relevance logics, because given two theorems, an implication from one to the other may not itself be a theorem in a relevance logic. The study of the algebraization process (and notion) as a topic of interest in itself, not necessarily by Tarski's method, has led to the development of abstract algebraic logic.
See also
Algebraic semantics (mathematical logic)
Leibniz operator
List of Boolean algebra topics
References
Algebraic logic
Algebraic structures | Lindenbaum–Tarski algebra | [
"Mathematics"
] | 510 | [
"Mathematical structures",
"Mathematical logic",
"Mathematical objects",
"Fields of abstract algebra",
"Algebraic logic",
"Algebraic structures"
] |
990,397 | https://en.wikipedia.org/wiki/Maui%20Nui | Maui Nui is a modern geologists' name given to a prehistoric Hawaiian island and the corresponding modern biogeographic region. Maui Nui is composed of four modern islands: Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe. Administratively, the four modern islands comprise Maui County (and a tiny part of Molokaʻi called Kalawao County). Long after the breakup of Maui Nui, the four modern islands retained plant and animal life similar to each other. Thus, Maui Nui is not only a prehistoric island but also a modern biogeographic region.
Geology
Maui Nui formed and broke up during the Pleistocene Epoch, which lasted from about 2.58 million to 11,700 years ago.
Maui Nui is built from seven shield volcanoes. The three oldest are Penguin Bank, West Molokaʻi, and East Molokaʻi, which probably range from slightly over to slightly less than 2 million years old. The four younger volcanoes are Lāna‘i, West Maui, Kaho‘olawe, and Haleakalā, which probably formed between 1.5 and 2 million years ago.
At its prime 1.2 million years ago, Maui Nui was about 50% larger than today's Hawaiʻi Island. The island of Maui Nui included the four modern islands (Maui, Molokaʻi, Lānaʻi, and Kahoʻolawe) and a landmass west of Molokaʻi called Penguin Bank, which is now completely submerged.
Maui Nui broke up as rising sea levels flooded the connections between the volcanoes. The breakup was complex because global sea levels rose and fell intermittently during the Quaternary glaciation. About 600,000 years ago, the connection between Molokaʻi and the island of Lāna‘i/Maui/Kahoʻolawe became intermittent. About 400,000 years ago, the connection between Lāna‘i and Maui/Kahoʻolawe also became intermittent. The connection between Maui and Kahoʻolawe was permanently broken between 200,000 and 150,000 years ago. Maui, Lāna‘i, and Molokaʻi were connected intermittently thereafter, most recently about 18,000 years ago during the Last Glacial Maximum.
Today, the sea floor between these four islands is relatively shallow. At the outer edges of former Maui Nui, as with the edges of all Hawaiian Islands, the sea floor plummets to the abyssal plain of the Pacific Ocean.
Biogeography
The term Maui Nui is also used as a modern biogeographic region of Hawaii. Long after the breakup of Maui Nui, the four modern islands retained similar plant and animal life. Many plant and animal species occur across multiple islands of former Maui Nui but are found nowhere else in Hawaii.
Many of Hawaii's native species declined or became extinct after Polynesian arrival or in the modern era, making the study of Hawaiian biogeography more complicated. Among Hawaii's native birds, the ʻākohekohe (Palmeria dolei) only survives on Maui, but it also occurred on Molokaʻi until 1907. The black mamo (Drepanis funerea) was historically documented only on Molokaʻi until its extinction in 1907, but fossils are also known from Maui. The Maui Nui icterid-like gaper (Aidemedia lutetiae) was never documented historically, but fossils are known from Maui and Molokaʻi. Among Hawaii's native plants, the maui hala pepe (Dracaena rockii) is known from Maui and Molokaʻi, and survives on both islands. Pua ʻala (Brighamia rockii) survives only on Molokaʻi, but was historically documented on Maui and Lāna‘i. Additional examples of plants and animals endemic to the Maui Nui region appear in List of Hawaiian animals extinct in the Holocene and Endemism in the Hawaiian Islands.
Conversely, the ʻelepaio (genus Chasiempis) have a disjunct distribution. These birds occur on Hawaiʻi Island, Oʻahu, and Kauaʻi, but are curiously absent from the islands of former Maui Nui (both currently and in the fossil record).
Some bird species use the term "Maui Nui" in their common names, such as the Maui Nui large-billed moa-nalo (Thambetochen chauliodous), Maui Nui icterid-like gaper (Aidemedia lutetiae), Maui Nui ʻakialoa (Akialoa lanaiensis), Maui Nui ʻalauahio (Paroreomyza montana), and Maui Nui finch (Telespiza ypsilon). All of these species survived for thousands of years after the breakup of Maui Nui, and the Maui population of the Maui Nui ʻalauahio survives to the present. Thus, Maui Nui is not just a prehistoric island but also a modern biogeographic region.
See also
Santa Rosae
List of Hawaiian animals extinct in the Holocene
Endemism in the Hawaiian Islands
References
Geology of Hawaii
Islands of Hawaii
Former islands of the United States
Physical oceanography
Volcanism of Hawaii
Geography of Hawaii
Geography of Maui County, Hawaii
Cenozoic Hawaii | Maui Nui | [
"Physics"
] | 1,091 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
990,454 | https://en.wikipedia.org/wiki/Interior%20algebra | In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and ordinary propositional logic. Interior algebras form a variety of modal algebras.
Definition
An interior algebra is an algebraic structure with the signature
⟨S, ·, +, ′, 0, 1, I⟩
where
⟨S, ·, +, ′, 0, 1⟩
is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities:
x^I ≤ x
x^II = x^I
(xy)^I = x^I y^I
1^I = 1
x^I is called the interior of x.
The dual of the interior operator is the closure operator C defined by x^C = ((x′)^I)′. x^C is called the closure of x. By the principle of duality, the closure operator satisfies the identities:
x^C ≥ x
x^CC = x^C
(x + y)^C = x^C + y^C
0^C = 0
If the closure operator is taken as primitive, the interior operator can be defined as x^I = ((x′)^C)′. Thus the theory of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic instances of "Boolean algebras with operators." The early literature on this subject (mainly Polish topology) invoked closure operators, but the interior operator formulation eventually became the norm following the work of Wim Blok.
Open and closed elements
Elements of an interior algebra satisfying the condition x^I = x are called open. The complements of open elements are called closed and are characterized by the condition x^C = x. An interior of an element is always open and the closure of an element is always closed. Interiors of closed elements are called regular open and closures of open elements are called regular closed. Elements that are both open and closed are called clopen. 0 and 1 are clopen.
An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful additional structure. A special case is the class of trivial interior algebras, which are the single element interior algebras characterized by the identity 0 = 1.
Morphisms of interior algebras
Homomorphisms
Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B, a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying Boolean algebras of A and B, that also preserves interiors and closures. Hence:
f(x^I) = f(x)^I;
f(x^C) = f(x)^C.
Topomorphisms
Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f : A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that also preserves the open and closed elements of A. Hence:
If x is open in A, then f(x) is open in B;
If x is closed in A, then f(x) is closed in B.
(Such morphisms have also been called stable homomorphisms and closure algebra semi-homomorphisms.) Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homomorphism.
Boolean homomorphisms
Early research often considered mappings between interior algebras that were homomorphisms of the underlying Boolean algebras but that did not necessarily preserve the interior or closure operator. Such mappings were called Boolean homomorphisms. (The terms closure homomorphism or topological homomorphism were used in the case where these were preserved, but this terminology is now redundant as the standard definition of a homomorphism in universal algebra requires that it preserves all operations.) Applications involving countably complete interior algebras (in which countable meets and joins always exist, also called σ-complete) typically made use of countably complete Boolean homomorphisms also called Boolean σ-homomorphisms—these preserve countable meets and joins.
Continuous morphisms
The earliest generalization of continuity to interior algebras was Sikorski's, based on the inverse image map of a continuous map. This is a Boolean homomorphism, preserves unions of sequences and includes the closure of an inverse image in the inverse image of the closure. Sikorski thus defined a continuous homomorphism as a Boolean σ-homomorphism f between two σ-complete interior algebras such that f(x)^C ≤ f(x^C). This definition had several difficulties: the construction acts contravariantly, producing a dual of a continuous map rather than a generalization. On the one hand σ-completeness is too weak to characterize inverse image maps (completeness is required), on the other hand it is too restrictive for a generalization. (Sikorski remarked on using non-σ-complete homomorphisms but included σ-completeness in his axioms for closure algebras.) Later J. Schmid defined a continuous homomorphism or continuous morphism for interior algebras as a Boolean homomorphism f between two interior algebras satisfying f(x^C) ≤ f(x)^C. This generalizes the forward image map of a continuous map—the image of a closure is contained in the closure of the image. This construction is covariant but not suitable for category theoretic applications as it only allows construction of continuous morphisms from continuous maps in the case of bijections. (C. Naturman returned to Sikorski's approach while dropping σ-completeness to produce topomorphisms as defined above. In this terminology, Sikorski's original "continuous homomorphisms" are σ-complete topomorphisms between σ-complete interior algebras.)
Relationships to other areas of mathematics
Topology
Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X:
⟨P(X), ∩, ∪, ′, ∅, X⟩
and extend it to an interior algebra
A(X) = ⟨P(X), ∩, ∪, ′, ∅, X, I⟩,
where I is the usual topological interior operator. For all S ⊆ X it is defined by
S^I = ∪ {O ∈ T : O ⊆ S}
For all S ⊆ X the corresponding closure operator is given by
S^C = ∩ {C ⊆ X : S ⊆ C and X − C ∈ T}
S^I is the largest open subset of S and S^C is the smallest closed superset of S in X. The open, closed, regular open, regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed and clopen subsets of X respectively in the usual topological sense.
Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological space X. Moreover, every interior algebra can be embedded in such an interior algebra giving a representation of an interior algebra as a topological field of sets. The properties of the structure A(X) are the very motivation for the definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been called topo-Boolean algebras or topological Boolean algebras.
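As a concrete finite check of this correspondence, the sketch below (illustrative only; the topology chosen and the helper names are our own) constructs the interior and closure operators of A(X) for a three-point space and verifies the interior-operator identities from the definition above:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
# A topology on X: contains the empty set and X, and is closed under
# unions and intersections (easily checked for this small example).
T = [frozenset(), frozenset({0}), frozenset({0, 1}), X]

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def interior(S):
    """S^I: the union of all open sets contained in S."""
    return frozenset().union(*[O for O in T if O <= S])

def closure(S):
    """S^C, obtained by duality: S^C = ((S')^I)'."""
    return X - interior(X - S)

for S in subsets(X):
    assert interior(S) <= S                                    # x^I <= x
    assert interior(interior(S)) == interior(S)                # x^II = x^I
    for S2 in subsets(X):
        assert interior(S & S2) == interior(S) & interior(S2)  # (xy)^I = x^I y^I
assert interior(X) == X                                        # 1^I = 1

print(sorted(closure(frozenset({1}))))  # [1, 2]: smallest closed superset of {1}
```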
Given a continuous map between two topological spaces
f : X → Y
we can define a complete topomorphism
A(f) : A(Y) → A(X)
by
A(f)(S) = f−1[S]
for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete atomic interior algebras and complete topomorphisms then Top and Cit are dually isomorphic and A : Top → Cit is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a continuous open map.
Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in particular connectedness properties correspond to irreducibility properties:
X is empty if and only if A(X) is trivial
X is indiscrete if and only if A(X) is simple
X is discrete if and only if A(X) is Boolean
X is almost discrete if and only if A(X) is semisimple
X is finitely generated (Alexandrov) if and only if A(X) is operator complete i.e. its interior and closure operators distribute over arbitrary meets and joins respectively
X is connected if and only if A(X) is directly indecomposable
X is ultraconnected if and only if A(X) is finitely subdirectly irreducible
X is compact ultra-connected if and only if A(X) is subdirectly irreducible
Generalized topology
The modern formulation of topological spaces in terms of topologies of open subsets, motivates an alternative formulation of interior algebras: A generalized topological space is an algebraic structure of the form
⟨B, ·, +, ′, 0, 1, T⟩
where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B) such that:
T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T)
T is closed under finite meets
For every element b of B, the join ∑{a ∈ T : a ≤ b} exists
T is said to be a generalized topology in the Boolean algebra.
Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological space
⟨B, ·, +, ′, 0, 1, T⟩
we can define an interior operator on B by bI = ∑{a ∈ T : a ≤ b}, thereby producing an interior algebra whose open elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras.
Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomorphisms of Boolean algebras with added relations, so that standard results from universal algebra apply.
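As a sanity check of this equivalence, the following Python sketch (illustrative; the power set Boolean algebra and the particular generalized topology T are assumptions of the example) constructs the interior operator bI = ∑{a ∈ T : a ≤ b} and verifies the interior algebra axioms:

```python
from itertools import chain, combinations, product

universe = frozenset({1, 2, 3})
# A generalized topology in the power set Boolean algebra: closed under
# joins (unions) and finite meets (intersections); chosen arbitrarily.
T = [frozenset(), frozenset({2}), frozenset({2, 3}), universe]

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def I(b):
    # bI = join of all elements of T below b
    return frozenset().union(*(a for a in T if a <= b))

elements = powerset(universe)
assert all(I(b) <= b for b in elements)                                       # xI ≤ x
assert all(I(I(b)) == I(b) for b in elements)                                 # xII = xI
assert all(I(a & b) == I(a) & I(b) for a, b in product(elements, repeat=2))   # (xy)I = xIyI
assert I(universe) == universe                                                # 1I = 1
assert {b for b in elements if I(b) == b} == set(T)                           # open elements are exactly T
print("interior algebra axioms verified")
```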
Neighbourhood functions and neighbourhood lattices
The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra is said to be a neighbourhood of an element x if x ≤ yI. The set of neighbourhoods of x is denoted by N(x) and forms a filter. This leads to another formulation of interior algebras:
A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such that:
For all x ∈ B, max{y ∈ B : x ∈ N(y)} exists.
For all x, y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z).
The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean algebra with underlying set B, we can define an interior operator by xI = max{y ∈ B : x ∈ N(y)}, thereby obtaining an interior algebra. N(x) will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior algebras are equivalent to Boolean algebras with specified neighbourhood functions.
In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms of open elements, x ∈ N(y) if and only if there is an open element z such that y ≤ z ≤ x.
Neighbourhood functions may be defined more generally on (meet)-semilattices producing the structures known as neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra.
Modal logic
Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum–Tarski algebra:
L(M) = ⟨M / ~, ∧, ∨, ¬, F, T, □⟩
where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent in M, and M / ~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to ◊ (possibly). This construction is a special case of a more general result for modal algebras and modal logic.
The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed elements correspond to those that are only false if they are necessarily false.
Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the logician C. I. Lewis, who first proposed the modal logics S4 and S5.
Preorders
Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets on a set with a single binary relation, called a Kripke frame. The Kripke frames corresponding to interior algebras are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal logic.
Given a preordered set X = ⟨X, «⟩ we can construct an interior algebra
B(X) = ⟨P(X), ∩, ∪, ′, ∅, X, I⟩
from the power set Boolean algebra of X where the interior operator I is given by
SI = {x ∈ X : for all y ∈ X, y « x implies y ∈ S}
for all S ⊆ X.
The corresponding closure operator is given by
SC = {x ∈ X : there exists a y ∈ S with y « x}
for all S ⊆ X.
SI is the set of all worlds inaccessible from worlds outside S, and SC is the set of all worlds accessible from some world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set X giving the above-mentioned representation as a field of sets (a preorder field).
This construction and representation theorem is a special case of the more general result for modal algebras and Kripke frames. In this regard, interior algebras are particularly interesting because of their connection to topology. The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological space T(X) whose open sets are:
{S ⊆ X : SI = S}.
The corresponding closed sets are:
{S ⊆ X : SC = S}.
In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)).
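The following Python sketch (illustrative; the three-world preorder is an arbitrary choice, read under the conventions above) computes SI and SC from an accessibility relation and lists the open sets of the resulting Alexandrov topology:

```python
from itertools import chain, combinations

# Worlds and a reflexive, transitive accessibility relation:
# (y, x) in acc is read "x is accessible from y" (y « x).
X = frozenset({"a", "b", "c"})
acc = {(w, w) for w in X} | {("a", "b"), ("b", "c"), ("a", "c")}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def interior(S):
    # SI: worlds not accessible from any world outside S
    return frozenset(x for x in X if all(y in S for (y, z) in acc if z == x))

def closure(S):
    # SC: worlds accessible from some world in S
    return frozenset(x for x in X if any(y in S for (y, z) in acc if z == x))

opens = [sorted(S) for S in powerset(X) if interior(S) == S]
print(opens)   # the open sets of the Alexandrov topology T(X)
```

For this chain-like preorder the open sets come out nested, as expected of an Alexandrov topology on a chain: ∅, {a}, {a, b} and {a, b, c}.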
Monadic Boolean algebras
Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the variety of interior algebras satisfying the identity xIC = xI. In other words, they are precisely the interior algebras in which every open element is closed or equivalently, in which every closed element is open. Moreover, such interior algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal logic S5, and so have also been called S5 algebras.
In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide an algebraic description) and S5 where the modal operators □ (necessarily) and ◊ (possibly) can be interpreted in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an accessibility relation.
Heyting algebras
The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra. The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra and the latter may be chosen to be an interior algebra generated by its open elements—such interior algebras correspond one-to-one with Heyting algebras (up to isomorphism) being the free Boolean extensions of the latter.
Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4 theories closed under necessity. The one-to-one correspondence between Heyting algebras and interior algebras generated by their open elements reflects the correspondence between extensions of intuitionistic logic and normal extensions of the modal logic S4.Grz.
Derivative algebras
Given an interior algebra A, the closure operator obeys the axioms of the derivative operator, D. Hence we can form a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative operator.
Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative algebras satisfying the identity xD ≥ x. Derivative algebras provide the appropriate algebraic semantics for the modal logic wK4. Hence derivative algebras stand to topological derived sets and wK4 as interior/closure algebras stand to topological interiors/closures and S4.
Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying Boolean algebra as V, with interior and closure operators defined by xI = x·(x′D)′ and xC = x + xD, respectively. Thus every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V.
Stone duality and representation for interior algebras
Stone duality provides a category theoretic duality between Boolean algebras and a class of topological spaces known as Boolean spaces. Building on nascent ideas of relational semantics (later formalized by Kripke) and a result of R. S. Pierce, Jónsson, Tarski and G. Hansoul extended Stone duality to Boolean algebras with operators by equipping Boolean spaces with relations that correspond to the operators via a power set construction. In the case of interior algebras the interior (or closure) operator corresponds to a pre-order on the Boolean space. Homomorphisms between interior algebras correspond to a class of continuous maps between the Boolean spaces known as pseudo-epimorphisms or p-morphisms for short. This generalization of Stone duality to interior algebras based on the Jónsson–Tarski representation was investigated by Leo Esakia and is also known as the Esakia duality for S4-algebras (interior algebras) and is closely related to the Esakia duality for Heyting algebras.
Whereas the Jónsson–Tarski generalization of Stone duality applies to Boolean algebras with operators in general, the connection between interior algebras and topology allows for another method of generalizing Stone duality that is unique to interior algebras. An intermediate step in the development of Stone duality is Stone's representation theorem, which represents a Boolean algebra as a field of sets. The Stone topology of the corresponding Boolean space is then generated using the field of sets as a topological basis. Building on the topological semantics introduced by Tang Tsao-Chen for Lewis's modal logic, McKinsey and Tarski showed that by generating a topology equivalent to using only the complexes that correspond to open elements as a basis, a representation of an interior algebra is obtained as a topological field of sets—a field of sets on a topological space that is closed with respect to taking interiors or closures. By equipping topological fields of sets with appropriate morphisms known as field maps, C. Naturman showed that this approach can be formalized as a category theoretic Stone duality in which the usual Stone duality for Boolean algebras corresponds to the case of interior algebras having redundant interior operator (Boolean interior algebras).
The pre-order obtained in the Jónsson–Tarski approach corresponds to the accessibility relation in the Kripke semantics for an S4 theory, while the intermediate field of sets corresponds to a representation of the Lindenbaum–Tarski algebra for the theory using the sets of possible worlds in the Kripke semantics in which sentences of the theory hold. Moving from the field of sets to a Boolean space somewhat obfuscates this connection. By treating fields of sets on pre-orders as a category in its own right this deep connection can be formulated as a category theoretic duality that generalizes Stone representation without topology. R. Goldblatt had shown that with restrictions to appropriate homomorphisms such a duality can be formulated for arbitrary modal algebras and Kripke frames. Naturman showed that in the case of interior algebras this duality applies to more general topomorphisms and can be factored via a category theoretic functor through the duality with topological fields of sets. The latter represent the Lindenbaum–Tarski algebra using sets of points satisfying sentences of the S4 theory in the topological semantics. The pre-order can be obtained as the specialization pre-order of the McKinsey–Tarski topology. The Esakia duality can be recovered via a functor that replaces the field of sets with the Boolean space it generates. Via a functor that instead replaces the pre-order with its corresponding Alexandrov topology, an alternative representation of the interior algebra as a field of sets is obtained where the topology is the Alexandrov bico-reflection of the McKinsey–Tarski topology. The approach of formulating a topological duality for interior algebras using both the Stone topology of the Jónsson–Tarski approach and the Alexandrov topology of the pre-order to form a bi-topological space has been investigated by G. Bezhanishvili, R.Mines, and P.J. Morandi. The McKinsey–Tarski topology of an interior algebra is the intersection of the former two topologies.
Metamathematics
Grzegorczyk proved the first-order theory of closure algebras undecidable. Naturman demonstrated that the theory is hereditarily undecidable (all its subtheories are undecidable) and demonstrated an infinite chain of elementary classes of interior algebras with hereditarily undecidable theories.
Notes
References
Blok, W.A., 1976, Varieties of interior algebras, Ph.D. thesis, University of Amsterdam.
Esakia, L., 2004, "Intuitionistic logic and modality via topology," Annals of Pure and Applied Logic 127: 155-70.
McKinsey, J.C.C. and Alfred Tarski, 1944, "The Algebra of Topology," Annals of Mathematics 45: 141-91.
Naturman, C.A., 1991, Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics.
Bezhanishvili, G., Mines, R. and Morandi, P.J., 2008, "Topo-canonical completions of closure algebras and Heyting algebras," Algebra Universalis 58: 1-34.
Schmid, J., 1973, "On the compactification of closure algebras," Fundamenta Mathematicae 79: 33-48.
Sikorski, R., 1955, "Closure homomorphisms and interior mappings," Fundamenta Mathematicae 41: 12-20.
Algebraic structures
Mathematical logic
Boolean algebra
Closure operators
Modal logic | Interior algebra | [
"Mathematics"
] | 5,079 | [
"Boolean algebra",
"Mathematical structures",
"Closure operators",
"Mathematical logic",
"Mathematical objects",
"Modal logic",
"Fields of abstract algebra",
"Algebraic structures",
"Order theory"
] |
990,491 | https://en.wikipedia.org/wiki/Anti-fouling%20paint | Anti-fouling paint is a specialized category of coatings applied as the outer (outboard) layer to the hull of a ship or boat, to slow the growth of and facilitate detachment of subaquatic organisms that attach to the hull and can affect a vessel's performance and durability. It falls into a category of commercially available underwater hull paints, also known as bottom paints.
Anti-fouling paints are often applied as one component of multi-layer coating systems which may have other functions in addition to their antifouling properties, such as acting as a barrier against the corrosion that would otherwise degrade and weaken the metal of metal hulls, or improving the flow of water past the hull of a fishing vessel or high-performance racing yacht. Although commonly discussed as being applied to ships, antifouling paints are also of benefit in many other sectors such as off-shore structures and fish farms.
History
In the Age of Sail, sailing vessels suffered severely from the growth of barnacles and weeds on the hull, called "fouling". Starting in the mid-1700s, thin sheets of copper (and, approximately 100 years later, Muntz metal) were nailed onto the hull in an attempt to prevent marine growth. One famous example of the traditional use of metal sheathing is the clipper Cutty Sark, which is preserved as a museum ship in dry-dock at Greenwich in England. Marine growth affected performance (and profitability) in many ways:
The maximum speed of a ship decreases as its hull becomes fouled with marine growth, and its displacement increases.
Fouling hampers a ship's ability to sail upwind.
Some marine growth, such as shipworms, would bore into the hull causing severe damage over time.
The ship may transport harmful marine organisms to other areas.
While anti-fouling coatings began to be developed from 1840 onwards, the first practical commercial anti-fouling coatings were established around 1860. One of the first successful commercial patents was for 'McIness', a metallic soap compound with copper sulphate that was applied heated over a quick-drying rosin varnish primer with an iron oxide pigment. The Bonnington Chemical Works began marketing copper sulphide anti-fouling paint around 1850. Other widely used anti-fouling paints were developed in the late 19th century, with some 213 anti-fouling patents being recorded by 1872. Among the most widely used in the 1880s and 1890s was a hot plastic composition known as Italian Morovian.
In an official 1900 Letter from the U.S. Navy to the U.S. Senate Committee on Naval Affairs, it was noted that the (British) Admiralty had considered a proposal in 1847 to limit the number of iron ships (only recently introduced into naval service) and even to consider the sale of all iron ships in its possession, due to significant problems with biofouling. However, once an antifouling paint "with very fair results" was found, the iron ships were instead retained and continued to be built.
During World War II, which included a substantial naval component, the U.S. Navy provided significant funding to the Woods Hole Oceanographic Institution to gather information and conduct research on marine biofouling and technologies for its prevention. This work was published as a book in 1952, the contents of which are available online as individual chapters. The third and final part of this book includes a number of chapters that go into the state of the art at that time for the formulation of anti-fouling paints. Lunn (1974) provides further history.
Modern antifouling paints
In modern times, antifouling paints are formulated with cuprous oxide (or other copper compounds) and/or other biocides—special chemicals which impede growth of barnacles, algae, and marine organisms. Historically, copper paints were red, leading to ship bottoms still being painted red today.
"Soft", or ablative bottom paints slowly slough off in the water, releasing a copper or zinc based biocide into the water column. The movement of water increases the rate of this action. Ablative paints are widely used on the hulls of recreational vessels and typically are reapplied every 1–3 years.
"Contact leaching" paints "create a porous film on the surface. Biocides are held in the pores, and released slowly." Another type of hard bottom paint includes Teflon and silicone coatings which are too slippery for growth to stick. SealCoat systems, which must be professionally applied, dry with small fibers sticking out from the coating surface. These small fibers move in the water, preventing bottom growth from adhering.
Environmental concerns
In the 1960s and 1970s, commercial vessels commonly used bottom paints containing tributyltin, which has been banned in the International Convention on the Control of Harmful Anti-fouling Systems on Ships of the International Maritime Organization due to its serious toxic effects on marine life (such as the collapse of a French shellfish fishery). Now that tributyltin has been banned, the most commonly used anti-fouling bottom paints are copper-based. Copper-based antifouling paints can also have adverse effects on marine organisms. Copper occurs naturally in aquatic systems but can build up in ports or marinas where there are lots of boats. Copper can leach out of anti-fouling paint from the hulls of the boats or fall off the hulls in different sized paint particles. This can lead to higher-than-normal concentrations of copper in the ports or bays.
This excess of copper in the marine ecosystem can have adverse effects on the marine environment and its organisms. In marinas, the river nerite, a brackish water snail, was found to have higher mortality, negative growth, and a large decrease in reproduction compared to areas with no boating. The snails in marinas had more tissue (histopathological) damage and alterations in areas like their gills and gonads as well. Increased exposure to copper from antifouling paint has also been found to decrease enzyme activity in brine shrimp.
Antifouling paint particles can be eaten by zooplankton or other marine species and move up the food chain, bioaccumulating in fish. This accumulation of copper through the food web can cause damage to not only the species eating the particle, but those that are accumulating it in their tissues from their diet. Antifouling paint particles can also end up in the sediment of harbors or bays and damage the benthic environment or the organisms that live in them. These are the known effects of copper based antifouling paint; however, it has not been a large focus of study so the extent of the effects is not fully known. More research is needed to fully understand how these paints and the metals in them affect their environments.
The Port of San Diego is investigating how to reduce copper input from copper-based antifouling coatings, and Washington State has passed a law which may phase in a ban on copper antifouling coatings on recreational vessels beginning in January 2018. However, despite the toxic chemistry of bottom paint and its accumulation in water ways across the globe, a similar ban was rescinded in the Netherlands after the European Union's Scientific Committee on Health and Environmental Risks concluded The Hague had insufficiently justified the law. In an expert opinion, the committee concluded the Netherlands government's explanation "does not provide sufficient sound scientific evidence to show that the use of copper-based antifouling paints in leisure boats presents significant environmental risk."
"Sloughing bottom paints", or "ablative" paints, are an older type of paint designed to create a hull coating which ablates (wears off) slowly, exposing a fresh layer of biocides. Scrubbing a hull with sloughing bottom paint while it is in the water releases its biocides into the environment. One way to reduce the environmental impact from hulls with sloughing bottom paint is to have them hauled out and cleaned at boatyards with a "closed loop" system.
Some innovative bottom paints that do not rely on copper or tin have been developed in response to the increasing scrutiny that copper-based ablative bottom paints have received as environmental pollutants.
A possible future replacement for antifouling paint may be slime. A mesh would cover a ship's hull beneath which a series of pores would supply the slime compound. The compound would turn into a viscous slime on contact with water and coat the mesh. The slime would constantly slough off, carrying away micro-organisms and barnacle larvae.
See also
Biofouling
Biomimetic antifouling coating
Environmental impact of paint
References
External links
Selecting an anti-fouling paint, West Marine
Clean Boating Tip Sheet, Selecting a Bottom Paint, .pdf chart, Maryland Dept. of Natural Resources
Bottom Paint for Racing Boats, Sailing World, 2007
Are foul-release paints for you? Coating calculator, National Fisherman
Using Antifouling paint against the Gribble Menace, Teamac Marine Coatings
Paints
Shipbuilding
Fouling | Anti-fouling paint | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,873 | [
"Paints",
"Coatings",
"Shipbuilding",
"Marine engineering",
"Materials degradation",
"Fouling"
] |
990,534 | https://en.wikipedia.org/wiki/Norm%20%28mathematics%29 | In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself.
A seminorm satisfies the first two properties of a norm, but may be zero for vectors other than the origin. A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space.
The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm". It can also refer to a norm that can take infinite values, or to certain functions parametrised by a directed set.
Definition
Given a vector space X over a subfield F of the complex numbers ℂ, a norm on X is a real-valued function p : X → ℝ with the following properties, where |s| denotes the usual absolute value of a scalar s:
Subadditivity/Triangle inequality: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X
Absolute homogeneity: p(sx) = |s| p(x) for all x ∈ X and all scalars s
Positive definiteness/positiveness/point-separating: for all x ∈ X, if p(x) = 0 then x = 0
Because property (2.) implies p(0) = 0, some authors replace property (3.) with the equivalent condition: for every x ∈ X, p(x) = 0 if and only if x = 0
A seminorm on X is a function p : X → ℝ that has properties (1.) and (2.) so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if p is a norm (or more generally, a seminorm) then p(0) = 0 and that p also has the following property:
Non-negativity: p(x) ≥ 0 for all x ∈ X
Some authors include non-negativity as part of the definition of "norm", although this is not necessary.
Although this article defined "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative"; these definitions are not equivalent.
Equivalent norms
Suppose that p and q are two norms (or seminorms) on a vector space X. Then p and q are called equivalent, if there exist two positive real constants c and C such that for every vector x ∈ X,
c q(x) ≤ p(x) ≤ C q(x).
The relation "p is equivalent to q" is reflexive, symmetric (c q ≤ p ≤ C q implies (1/C) p ≤ q ≤ (1/c) p), and transitive and thus defines an equivalence relation on the set of all norms on X.
The norms p and q are equivalent if and only if they induce the same topology on X. Any two norms on a finite-dimensional space are equivalent but this does not extend to infinite-dimensional spaces.
Notation
If a norm p : X → ℝ is given on a vector space X, then the norm of a vector x ∈ X is usually denoted by enclosing it within double vertical lines: ‖x‖ = p(x). Such notation is also sometimes used if p is only a seminorm. For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation |x| with single vertical lines is also widespread.
Examples
Every (real or complex) vector space admits a norm: If x• = (xi)i∈I is a Hamel basis for a vector space X then the real-valued map that sends x = ∑i si xi ∈ X (where all but finitely many of the scalars si are 0) to ∑i |si| is a norm on X. There are also a large number of norms that exhibit additional properties that make them useful for specific problems.
Absolute-value norm
The absolute value
‖x‖ = |x|
is a norm on the vector space formed by the real or complex numbers. The complex numbers form a one-dimensional vector space over themselves and a two-dimensional vector space over the reals; the absolute value is a norm for these two structures.
Any norm p on a one-dimensional vector space X is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces f : F → X, where F is either ℝ or ℂ, and norm-preserving means that |x| = p(f(x)).
This isomorphism is given by sending 1 ∈ F to a vector of norm 1, which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm.
Euclidean norm
On the n-dimensional Euclidean space ℝn, the intuitive notion of length of the vector x = (x1, x2, ..., xn) is captured by the formula
‖x‖2 := √(x1² + ⋯ + xn²).
This is the Euclidean norm, which gives the ordinary distance from the origin to the point x, a consequence of the Pythagorean theorem.
This operation may also be referred to as "SRSS", which is an acronym for the square root of the sum of squares.
The Euclidean norm is by far the most commonly used norm on but there are other norms on this vector space as will be shown below.
However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces.
The inner product of two vectors of a Euclidean vector space is the dot product of their coordinate vectors over an orthonormal basis.
Hence, the Euclidean norm can be written in a coordinate-free way as
‖x‖ := √(x ⋅ x).
The Euclidean norm is also called the quadratic norm, L2 norm, ℓ2 norm, 2-norm, or square norm; see Lp space.
It defines a distance function called the Euclidean length, L2 distance, or ℓ2 distance.
The set of vectors in ℝn+1 whose Euclidean norm is a given positive constant forms an n-sphere.
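As a concrete illustration, the following Python snippet (using NumPy; the vector is an arbitrary example) computes the Euclidean norm in three equivalent ways:

```python
import numpy as np

x = np.array([3.0, 4.0, 12.0])

explicit = np.sqrt(np.sum(x**2))   # sqrt(x1^2 + ... + xn^2)
via_dot = np.sqrt(x @ x)           # square root of the inner product of x with itself
library = np.linalg.norm(x)        # NumPy's default norm is the 2-norm

print(explicit, via_dot, library)  # all three print 13.0
```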
Euclidean norm of complex numbers
The Euclidean norm of a complex number is the absolute value (also called the modulus) of it, if the complex plane is identified with the Euclidean plane ℝ². This identification of the complex number x + iy as a vector in the Euclidean plane makes the quantity √(x² + y²) (as first suggested by Euler) the Euclidean norm associated with the complex number. For z = x + iy, the norm can also be written as √(z̄z) where z̄ is the complex conjugate of z.
Quaternions and octonions
There are exactly four Euclidean Hurwitz algebras over the real numbers. These are the real numbers ℝ, the complex numbers ℂ, the quaternions ℍ, and lastly the octonions 𝕆, where the dimensions of these spaces over the real numbers are 1, 2, 4, and 8, respectively.
The canonical norms on ℝ and ℂ are their absolute value functions, as discussed previously.
The canonical norm on ℍ of quaternions is defined by
‖q‖ = √(q q*) = √(a² + b² + c² + d²)
for every quaternion q = a + bi + cj + dk in ℍ. This is the same as the Euclidean norm on ℍ considered as the vector space ℝ4. Similarly, the canonical norm on the octonions is just the Euclidean norm on ℝ8.
Finite-dimensional complex normed spaces
On an n-dimensional complex space ℂn, the most common norm is
‖z‖ := √(|z1|² + ⋯ + |zn|²) = √(z1 z̄1 + ⋯ + zn z̄n).
In this case, the norm can be expressed as the square root of the inner product of the vector and itself:
‖x‖ := √(xH x),
where x is represented as a column vector (x1, x2, ..., xn)T and xH denotes its conjugate transpose.
This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product. Hence the formula in this case can also be written using the following notation:
‖x‖ := √(x ⋅ x).
Taxicab norm or Manhattan norm
‖x‖1 := |x1| + |x2| + ⋯ + |xn|.
The name relates to the distance a taxi has to drive in a rectangular street grid (like that of the New York borough of Manhattan) to get from the origin to the point x.
The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope, which has dimension equal to the dimension of the vector space minus 1.
The Taxicab norm is also called the ℓ1 norm. The distance derived from this norm is called the Manhattan distance or ℓ1 distance.
The 1-norm is simply the sum of the absolute values of the components.
In contrast,
x1 + x2 + ⋯ + xn
is not a norm because it may yield negative results.
p-norm
Let p ≥ 1 be a real number.
The p-norm (also called ℓp-norm) of vector x = (x1, ..., xn) is
‖x‖p := (|x1|^p + |x2|^p + ⋯ + |xn|^p)^(1/p).
For p = 1 we get the taxicab norm, for p = 2 we get the Euclidean norm, and as p approaches ∞ the p-norm approaches the infinity norm or maximum norm:
‖x‖∞ := maxi |xi|.
The p-norm is related to the generalized mean or power mean.
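The following NumPy sketch (the vector and the exponents are arbitrary example values) computes several p-norms and shows the convergence to the maximum norm as p grows:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])

def p_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

print(p_norm(x, 1))                       # taxicab norm: 6.0
print(p_norm(x, 2), np.linalg.norm(x))    # Euclidean norm: sqrt(14)
for p in (4, 16, 64, 256):
    print(p, p_norm(x, p))                # decreases toward max|x_i| = 3.0
print(np.max(np.abs(x)))                  # the infinity (maximum) norm
```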
For p = 2, the 2-norm is even induced by a canonical inner product ⟨⋅, ⋅⟩, meaning that ‖x‖2 = √⟨x, x⟩ for all vectors x. This inner product can be expressed in terms of the norm by using the polarization identity.
On ℓ2, this inner product is the Euclidean inner product defined by
⟨(xn)n, (yn)n⟩ := ∑n x̄n yn,
while for the space L2(X, μ) associated with a measure space (X, Σ, μ), which consists of all square-integrable functions, this inner product is
⟨f, g⟩ := ∫X f̄(x) g(x) dx.
This definition is still of some interest for 0 < p < 1, but the resulting function does not define a norm, because it violates the triangle inequality.
What is true for this case of 0 < p < 1, even in the measurable analog, is that the corresponding Lp class is a vector space, and it is also true that the function
∫X |f(x) − g(x)|^p dμ
(without pth root) defines a distance that makes Lp(X) into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory and harmonic analysis.
However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional.
The partial derivative of the p-norm is given by
∂‖x‖p/∂xk = xk |xk|^(p−2) / ‖x‖p^(p−1).
The derivative with respect to x, therefore, is
∂‖x‖p/∂x = x ∘ |x|^(p−2) / ‖x‖p^(p−1),
where ∘ denotes Hadamard product and |⋅| is used for absolute value of each component of the vector.
For the special case of p = 2, this becomes
∂‖x‖2/∂xk = xk / ‖x‖2,
or
∂‖x‖2/∂x = x / ‖x‖2.
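The gradient formula can be checked numerically. In the sketch below (illustrative; the vector and the choice p = 3 are assumptions of the example), the analytic expression is compared against central finite differences:

```python
import numpy as np

def p_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def p_norm_grad(x, p):
    # componentwise: x_k |x_k|^(p-2) / ||x||_p^(p-1)  (a Hadamard product)
    return x * np.abs(x) ** (p - 2) / p_norm(x, p) ** (p - 1)

x = np.array([1.0, -2.0, 3.0])
p = 3.0
eps = 1e-6
fd = np.array([(p_norm(x + eps * e, p) - p_norm(x - eps * e, p)) / (2 * eps)
               for e in np.eye(len(x))])

print(p_norm_grad(x, p))   # analytic gradient
print(fd)                  # finite-difference approximation; the two agree
```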
Maximum norm (special case of: infinity norm, uniform norm, or supremum norm)
If x is some vector such that x = (x1, x2, ..., xn), then:
‖x‖∞ := max(|x1|, ..., |xn|).
The set of vectors whose infinity norm is a given constant, c, forms the surface of a hypercube with edge length 2c.
Energy norm
The energy norm of a vector x ∈ ℝn is defined in terms of a symmetric positive definite matrix A ∈ ℝn×n as
‖x‖A := √(xT A x).
It is clear that if A is the identity matrix, this norm corresponds to the Euclidean norm. If A is diagonal, this norm is also called a weighted norm. The energy norm is induced by the inner product given by ⟨x, y⟩A := xT A y for x, y ∈ ℝn.
In general, the value of the norm is dependent on the spectrum of A: For a vector x with a Euclidean norm of one, the value of ‖x‖A is bounded from below and above by the square roots of the smallest and largest eigenvalues of A respectively, where the bounds are achieved if x coincides with the corresponding (normalized) eigenvectors. Based on the symmetric matrix square root A1/2, the energy norm of a vector can be written in terms of the standard Euclidean norm as
‖x‖A = ‖A1/2 x‖2.
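A small NumPy illustration (the matrix A here is an arbitrary symmetric positive definite example) verifies both the eigenvalue bounds for unit vectors and the identity ‖x‖A = ‖A1/2 x‖2:

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # an arbitrary symmetric positive definite matrix

def energy_norm(x, A):
    return np.sqrt(x @ A @ x)

x = np.array([1.0, 1.0])
x /= np.linalg.norm(x)                  # unit Euclidean norm

w, V = np.linalg.eigh(A)                # eigenvalues and eigenvectors of A
sqrtA = V @ np.diag(np.sqrt(w)) @ V.T   # the symmetric matrix square root A^(1/2)

print(np.sqrt(w.min()), energy_norm(x, A), np.sqrt(w.max()))  # bounds hold
print(energy_norm(x, A), np.linalg.norm(sqrtA @ x))           # the two values agree
```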
Zero norm
In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F-norm (xn) ↦ ∑n 2^(−n) xn/(1 + xn).
Here we mean by F-norm some real-valued function ‖⋅‖ on an F-space with distance d, such that ‖x‖ = d(x, 0). The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.
Hamming distance of a vector from zero
In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory.
In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero.
However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness.
When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.
In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks.
Following Donoho's notation, the zero "norm" of x is simply the number of non-zero coordinates of x, or the Hamming distance of the vector from zero.
When this "norm" is localized to a bounded set, it is the limit of p-norms as p approaches 0.
Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous.
Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument.
Abusing terminology, some engineers omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the L0 norm, echoing the notation for the Lebesgue space of measurable functions.
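The counting behaviour and the limit-of-p-norms remark can be illustrated as follows (an informal sketch; the vector is an arbitrary example, and the quantity computed for small p is ∑|xi|^p, without the 1/p root):

```python
import numpy as np

x = np.array([0.0, -1.5, 0.0, 2.0, 0.25])

print(np.count_nonzero(x))   # the zero "norm": 3 non-zero coordinates

# On a bounded set, sum_i |x_i|^p tends to the number of non-zero
# components as p -> 0:
for p in (1.0, 0.5, 0.1, 0.01):
    print(p, np.sum(np.abs(x) ** p))
```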
Infinite dimensions
The generalization of the above norms to an infinite number of components leads to the ℓp and Lp spaces for p ≥ 1, with norms
‖x‖p = (∑n |xn|^p)^(1/p) and ‖f‖p,X = (∫X |f(x)|^p dx)^(1/p)
for complex-valued sequences and functions on X respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as p → +∞, giving a supremum norm, and are called ℓ∞ and L∞.
Any inner product induces in a natural way the norm ‖x‖ := √⟨x, x⟩.
Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article.
Generally, these norms do not give the same topologies. For example, an infinite-dimensional ℓp space gives a strictly finer topology than an infinite-dimensional ℓq space when p < q.
Composite norms
Other norms on ℝn can be constructed by combining the above; for example,
‖x‖ := 2|x1| + √(3|x2|² + max(|x3|, 2|x4|)²)
is a norm on ℝ4.
For any norm p and any injective linear transformation A we can define a new norm of x, equal to p(Ax).
In 2D, with A a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each A applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation.
In 3D, this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base).
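The 2D claim can be verified numerically. In the sketch below (illustrative; the random test vectors are arbitrary), A is a 45° rotation scaled by 1/√2, and the taxicab norm of Ax agrees with the maximum norm of x:

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = R / np.sqrt(2.0)   # rotation by 45 degrees combined with scaling by 1/sqrt(2)

rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.standard_normal(2)
    # taxicab norm of A x versus maximum norm of x: they coincide
    print(np.linalg.norm(A @ x, 1), np.linalg.norm(x, np.inf))
```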
There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally-symmetric convex body in ℝn (centered at zero) defines a norm on ℝn (see below).
All the above formulas also yield norms on ℂn without modification.
There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.
In abstract algebra
Let E be a finite extension of a field k of inseparable degree pμ, and let k have algebraic closure K. If the distinct embeddings of E are {σj}j, then the Galois-theoretic norm of an element α ∈ E is the value (∏j σj(α))^(pμ). As that function is homogeneous of degree [E : k], the Galois-theoretic norm is not a norm in the sense of this article. However, the [E : k]-th root of the norm (assuming that concept makes sense) is a norm.
Composition algebras
The concept of norm in composition algebras does not share the usual properties of a norm since null vectors are allowed. A composition algebra (A, *, N) consists of an algebra over a field A, an involution *, and a quadratic form N(z) = z z* called the "norm".
The characteristic feature of composition algebras is the homomorphism property of N: for the product wz of two elements w and z of the composition algebra, its norm satisfies N(wz) = N(w) N(z). In the case of division algebras ℝ, ℂ, ℍ, and 𝕆, the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form.
Properties
For any norm p on a vector space X, the reverse triangle inequality holds:
p(x ± y) ≥ |p(x) − p(y)| for all x, y ∈ X.
If u : X → Y is a continuous linear map between normed spaces, then the norm of u and the norm of the transpose of u are equal.
For the ℓp norms, we have Hölder's inequality: |⟨x, y⟩| ≤ ‖x‖p ‖y‖q where 1/p + 1/q = 1.
A special case of this is the Cauchy–Schwarz inequality: |⟨x, y⟩| ≤ ‖x‖2 ‖y‖2.
Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function.
Equivalence
The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any p-norm, it is a superellipse with congruent axes. Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and p ≥ 1 is required for a p-norm).
In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors {vn} is said to converge in norm to v if ‖vn − v‖ → 0 as n → ∞. Equivalently, the topology consists of all sets that can be represented as a union of open balls. If (X, ‖⋅‖) is a normed space then ‖x − y‖ = ‖x − z‖ + ‖z − y‖ for all x, y ∈ X and z ∈ [x, y].
Two norms ‖⋅‖α and ‖⋅‖β on a vector space X are called equivalent if they induce the same topology, which happens if and only if there exist positive real numbers C and D such that
C ‖x‖α ≤ ‖x‖β ≤ D ‖x‖α for all x ∈ X.
For instance, if p > r ≥ 1 on ℂn, then
‖x‖p ≤ ‖x‖r ≤ n^(1/r − 1/p) ‖x‖p.
In particular,
‖x‖2 ≤ ‖x‖1 ≤ √n ‖x‖2,
‖x‖∞ ≤ ‖x‖2 ≤ √n ‖x‖∞,
‖x‖∞ ≤ ‖x‖1 ≤ n ‖x‖∞.
That is,
‖x‖∞ ≤ ‖x‖2 ≤ ‖x‖1 ≤ √n ‖x‖2 ≤ n ‖x‖∞.
If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent.
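The chain of inequalities above is easy to spot-check numerically (an illustrative sketch; the dimension and test vectors are arbitrary):

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(n)
    n1 = np.linalg.norm(x, 1)
    n2 = np.linalg.norm(x, 2)
    ninf = np.linalg.norm(x, np.inf)
    # the equivalence chain for the 1-, 2-, and infinity norms on R^n
    assert ninf <= n2 <= n1 <= np.sqrt(n) * n2 <= n * ninf
print("all inequalities hold")
```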
Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise the uniform structure defined by equivalent norms on the vector space is uniformly isomorphic.
Classification of seminorms: absolutely convex absorbing sets
All seminorms on a vector space X can be classified in terms of absolutely convex absorbing subsets A of X. To each such subset corresponds a seminorm pA called the gauge of A, defined as
pA(x) := inf{r ∈ ℝ : r > 0 and x ∈ rA},
where inf is the infimum, with the property that
{x ∈ X : pA(x) < 1} ⊆ A ⊆ {x ∈ X : pA(x) ≤ 1}.
Conversely:
Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family (p) of seminorms p that separates points: the collection of all finite intersections of sets {p < 1/n} turns the space into a locally convex topological vector space so that every p is continuous.
Such a method is used to design weak and weak* topologies.
Norm case:
Suppose now that (p) contains a single p: since (p) is separating, p is a norm, and A = {p < 1} is its open unit ball. Then A is an absolutely convex bounded neighbourhood of 0, and p = pA is continuous.
The converse is due to Andrey Kolmogorov: any locally convex and locally bounded topological vector space is normable. Precisely:
If X is an absolutely convex bounded neighbourhood of 0, the gauge gX (so that X = {gX < 1}) is a norm.
See also
References
Bibliography
Functional analysis
Linear algebra | Norm (mathematics) | [
"Mathematics"
] | 3,821 | [
"Functions and mappings",
"Mathematical analysis",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Norms (mathematics)",
"Linear algebra",
"Algebra"
] |
990,632 | https://en.wikipedia.org/wiki/Dynamical%20systems%20theory | Dynamical systems theory is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing differential equations or difference equations. When differential equations are employed, the theory is called continuous dynamical systems. From a physical point of view, continuous dynamical systems is a generalization of classical mechanics, a generalization where the equations of motion are postulated directly and are not constrained to be Euler–Lagrange equations of a least action principle. When difference equations are employed, the theory is called discrete dynamical systems. When the time variable runs over a set that is discrete over some intervals and continuous over other intervals or is any arbitrary time-set such as a Cantor set, one gets dynamic equations on time scales. Some situations may also be modeled by mixed operators, such as differential-difference equations.
This theory deals with the long-term qualitative behavior of dynamical systems, and studies the nature of, and when possible the solutions of, the equations of motion of systems that are often primarily mechanical or otherwise physical in nature, such as planetary orbits and the behaviour of electronic circuits, as well as systems that arise in biology, economics, and elsewhere. Much of modern research is focused on the study of chaotic systems and bizarre systems.
This field of study is also called just dynamical systems, mathematical dynamical systems theory or the mathematical theory of dynamical systems.
Overview
Dynamical systems theory and chaos theory deal with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible steady states?", or "Does the long-term behavior of the system depend on its initial condition?"
An important goal is to describe the fixed points, or steady states of a given dynamical system; these are values of the variable that do not change over time. Some of these fixed points are attractive, meaning that if the system starts out in a nearby state, it converges towards the fixed point.
Similarly, one is interested in periodic points, states of the system that repeat after several timesteps. Periodic points can also be attractive. Sharkovskii's theorem is an interesting statement about the number of periodic points of a one-dimensional discrete dynamical system.
Even simple nonlinear dynamical systems often exhibit seemingly random behavior that has been called chaos. The branch of dynamical systems that deals with the clean definition and investigation of chaos is called chaos theory.
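A standard toy example is the logistic map x ↦ rx(1 − x). The sketch below (illustrative parameter values) shows an attractive fixed point, an attractive period-2 orbit, and the sensitive dependence on initial conditions characteristic of chaos:

```python
def iterate(r, x0, n):
    """Apply the logistic map x -> r*x*(1 - x) n times, starting from x0."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# r = 2.8: trajectories settle onto the attractive fixed point 1 - 1/r
print(iterate(2.8, 0.2, 1000))                    # ~0.642857...

# r = 3.2: trajectories settle onto an attractive period-2 orbit
print(iterate(3.2, 0.2, 1000), iterate(3.2, 0.2, 1001))

# r = 4.0: chaotic regime; nearby initial conditions separate rapidly
print(iterate(4.0, 0.2, 50), iterate(4.0, 0.2 + 1e-6, 50))
```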
History
The concept of dynamical systems theory has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is given implicitly by a relation that gives the state of the system only a short time into the future.
Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques and could only be accomplished for a small class of dynamical systems.
Concepts
Dynamical systems
The dynamical system concept is a mathematical formalization for any fixed "rule" that describes the time dependence of a point's position in its ambient space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each spring in a lake.
A dynamical system has a state determined by a collection of real numbers, or more generally by a set of points in an appropriate state space. Small changes in the state of the system correspond to small changes in the numbers. The numbers are also the coordinates of a geometrical space—a manifold. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule may be deterministic (for a given time interval one future state can be precisely predicted given the current state) or stochastic (the evolution of the state can only be predicted with a certain probability).
Dynamicism
Dynamicism, also termed the dynamic hypothesis, the dynamic hypothesis in cognitive science, or dynamic cognition, is a new approach in cognitive science exemplified by the work of philosopher Tim van Gelder. It argues that differential equations are more suited to modelling cognition than more traditional computer models.
Nonlinear system
In mathematics, a nonlinear system is a system that is not linear—i.e., a system that does not satisfy the superposition principle. Less technically, a nonlinear system is any problem where the variable(s) to solve for cannot be written as a linear sum of independent components. A nonhomogeneous system, which is linear apart from the presence of a function of the independent variables, is nonlinear according to a strict definition, but such systems are usually studied alongside linear systems, because they can be transformed to a linear system as long as a particular solution is known.
Related fields
Arithmetic dynamics
Arithmetic dynamics is a field that emerged in the 1990s that amalgamates two areas of mathematics, dynamical systems and number theory. Classically, discrete dynamics refers to the study of the iteration of self-maps of the complex plane or real line. Arithmetic dynamics is the study of the number-theoretic properties of integer, rational, p-adic, and/or algebraic points under repeated application of a polynomial or rational function.
Chaos theory
Chaos theory describes the behavior of certain dynamical systems – that is, systems whose state evolves with time – that may exhibit dynamics that are highly sensitive to initial conditions (popularly referred to as the butterfly effect). As a result of this sensitivity, which manifests itself as an exponential growth of perturbations in the initial conditions, the behavior of chaotic systems appears random. This happens even though these systems are deterministic, meaning that their future dynamics are fully defined by their initial conditions, with no random elements involved. This behavior is known as deterministic chaos, or simply chaos.
Complex systems
Complex systems is a scientific field that studies the common properties of systems considered complex in nature, society, and science. It is also called complex systems theory, complexity science, study of complex systems and/or sciences of complexity. The key problems of such systems are difficulties with their formal modeling and simulation. From such perspective, in different research contexts complex systems are defined on the base of their different attributes.
The study of complex systems is bringing new vitality to many areas of science where a more typical reductionist strategy has fallen short. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines including neurosciences, social sciences, meteorology, chemistry, physics, computer science, psychology, artificial life, evolutionary computation, economics, earthquake prediction, molecular biology and inquiries into the nature of living cells themselves.
Control theory
Control theory is an interdisciplinary branch of engineering and mathematics, in part it deals with influencing the behavior of dynamical systems.
Ergodic theory
Ergodic theory is a branch of mathematics that studies dynamical systems with an invariant measure and related problems. Its initial development was motivated by problems of statistical physics.
Functional analysis
Functional analysis is the branch of mathematics, and specifically of analysis, concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of functional spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. This usage of the word functional goes back to the calculus of variations, implying a function whose argument is a function. Its use in general has been attributed to mathematician and physicist Vito Volterra and its founding is largely attributed to mathematician Stefan Banach.
Graph dynamical systems
The concept of graph dynamical systems (GDS) can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of graph dynamical systems is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result.
Projected dynamical systems
Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow to the projected differential equation.
Symbolic dynamics
Symbolic dynamics is the practice of modelling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator.
System dynamics
System dynamics is an approach to understanding the behaviour of systems over time. It deals with internal feedback loops and time delays that affect the behaviour and state of the entire system. What makes using system dynamics different from other approaches to studying systems is the language used to describe feedback loops with stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity.
Topological dynamics
Topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology.
Applications
In biomechanics
In sports biomechanics, dynamical systems theory has emerged in the movement sciences as a viable framework for modeling athletic performance and efficiency. This comes as no surprise, since dynamical systems theory has its roots in analytical mechanics. From a psychophysiological perspective, the human movement system is a highly intricate network of co-dependent sub-systems (e.g. respiratory, circulatory, nervous, skeletomuscular, perceptual) that are composed of a large number of interacting components (e.g. blood cells, oxygen molecules, muscle tissue, metabolic enzymes, connective tissue and bone). In dynamical systems theory, movement patterns emerge through generic processes of self-organization found in physical and biological systems. There is no research validation of any of the claims associated with the conceptual application of this framework.
In cognitive science
Dynamical system theory has been applied in the field of neuroscience and cognitive development, especially in the neo-Piagetian theories of cognitive development. Proponents hold that cognitive development is best represented by physical theories rather than theories based on syntax and AI, and that differential equations are the most appropriate tool for modeling human behavior. These equations are interpreted to represent an agent's cognitive trajectory through state space. In other words, dynamicists argue that psychology should be (or is) the description (via differential equations) of the cognitions and behaviors of an agent under certain environmental and internal pressures. The language of chaos theory is also frequently adopted.
In it, the learner's mind reaches a state of disequilibrium where old patterns have broken down. This is the phase transition of cognitive development. Self-organization (the spontaneous creation of coherent forms) sets in as activity levels link to each other. Newly formed macroscopic and microscopic structures support each other, speeding up the process. These links form the structure of a new state of order in the mind through a process called scalloping (the repeated building up and collapsing of complex performance.) This new, novel state is progressive, discrete, idiosyncratic and unpredictable.
Dynamic systems theory has recently been used to explain a long-unanswered problem in child development referred to as the A-not-B error.
Further, since the middle of the 1990s cognitive science, oriented towards a system theoretical connectionism, has increasingly adopted the methods from (nonlinear) “Dynamic Systems Theory (DST)“. A variety of neurosymbolic cognitive neuroarchitectures in modern connectionism, considering their mathematical structural core, can be categorized as (nonlinear) dynamical systems. These attempts in neurocognition to merge connectionist cognitive neuroarchitectures with DST come from not only neuroinformatics and connectionism, but also recently from developmental psychology (“Dynamic Field Theory (DFT)”) and from “evolutionary robotics” and “developmental robotics” in connection with the mathematical method of “evolutionary computation (EC)”. For an overview see Maurer.
In second language development
The application of Dynamic Systems Theory to the study of second language acquisition is attributed to Diane Larsen-Freeman, who published an article in 1997 in which she claimed that second language acquisition should be viewed as a developmental process which includes language attrition as well as language acquisition. In her article she claimed that language should be viewed as a dynamic system: complex, nonlinear, chaotic, unpredictable, sensitive to initial conditions, open, self-organizing, feedback sensitive, and adaptive.
See also
Related subjects
List of dynamical system topics
Baker's map
Biological applications of bifurcation theory
Dynamical system (definition)
Embodied Embedded Cognition
Fibonacci numbers
Fractals
Gingerbreadman map
Halo orbit
List of types of systems theory
Oscillation
Postcognitivism
Recurrent neural network
Combinatorics and dynamical systems
Synergetics
Systemography
Related scientists
People in systems and control
Dmitri Anosov
Vladimir Arnold
Nikolay Bogolyubov
Andrey Kolmogorov
Nikolay Krylov
Jürgen Moser
Yakov G. Sinai
Stephen Smale
Hillel Furstenberg
Grigory Margulis
Elon Lindenstrauss
Notes
Further reading
External links
Dynamic Systems Encyclopedia of Cognitive Science entry.
Definition of dynamical system in MathWorld.
DSWeb Dynamical Systems Magazine
Dynamical systems
Complex systems theory
Computational fields of study | Dynamical systems theory | [
"Physics",
"Mathematics",
"Technology"
] | 2,763 | [
"Computational fields of study",
"Mechanics",
"Computing and society",
"Dynamical systems"
] |
990,657 | https://en.wikipedia.org/wiki/Service%20data%20unit | In Open Systems Interconnection (OSI) terminology, a service data unit (SDU) is a unit of data that has been passed down from an OSI layer or sublayer to a lower layer. This unit of data (SDU) has not yet been encapsulated into a protocol data unit (PDU) by the lower layer. That SDU is then encapsulated into the lower layer's PDU, and the process continues until reaching the PHY (physical) layer, the lowest layer of the OSI stack.
The SDU can also be thought of as a set of data that is sent by a user of the services of a given layer, and is transmitted semantically unchanged to a peer service user.
SDU and PDU
It differs from a PDU in that the PDU specifies the data that will be sent to the peer protocol layer at the receiving end, as opposed to being sent to a lower layer.
The SDU accepted by any given layer (n) from layer (n+1) above is a PDU of layer (n+1). In effect, the SDU is the 'payload' of a given PDU. The layer (n) may add headers or trailers, or both, to the SDU and may do other kinds of reformatting, recoding, splitting or transformations on the data, forming one or more layer (n) PDUs. The added headers or trailers and other possible changes are part of the process that makes it possible to get data from a source to a destination. Layer (n) may also generate additional layer (n) PDUs. Each unit of data that layer (n) gives to layer (n-1) below is in turn handed down as a layer (n-1) SDU.
When the PDU of layer (n+1), plus any headers or trailers layer (n) would add, exceeds the maximum size a layer (n) PDU can be (layer (n)'s maximum transmission unit), the SDU must be split into multiple payloads for layer (n), a process known as fragmentation.
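As a rough added illustration of fragmentation (the header layout, sizes and names below are hypothetical, not taken from any OSI specification):

```python
# Added sketch of layer-(n) fragmentation. An SDU handed down from layer
# (n+1) is split into payloads that, together with a fixed-size header,
# fit within layer (n)'s maximum transmission unit (MTU).

HEADER_SIZE = 8  # assumed per-PDU header size in bytes (hypothetical)

def fragment(sdu: bytes, mtu: int) -> list[bytes]:
    max_payload = mtu - HEADER_SIZE
    if max_payload <= 0:
        raise ValueError("MTU too small to carry any payload")
    pdus = []
    for offset in range(0, len(sdu), max_payload):
        payload = sdu[offset:offset + max_payload]
        header = offset.to_bytes(HEADER_SIZE, "big")  # toy header: offset only
        pdus.append(header + payload)  # each element is one layer-(n) PDU
    return pdus

pdus = fragment(b"x" * 3000, mtu=1500)
print(len(pdus), [len(p) for p in pdus])  # 3 PDUs, none exceeding 1500 bytes
```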
MAC SDU
MAC SDUs (MSDUs) are the data units handed to the medium access control (MAC) sublayer from the layer above; the MAC PDU (MPDU) is the counterpart protocol data unit exchanged between peer MAC entities at the same OSI layer. When MAC PDUs are larger than MAC SDUs, one MAC PDU can carry several MAC SDUs, because of packet aggregation. When MAC PDUs are smaller than MAC SDUs, one MAC SDU is spread across several MAC PDUs, because of packet segmentation.
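A similarly hedged sketch of packet aggregation; the 2-byte length-prefix framing below is an assumption for illustration, not the actual 802.11 format:

```python
# Added sketch of MAC-layer packet aggregation: several MAC SDUs are
# packed into one larger MAC PDU, each prefixed with a 2-byte length
# field so the receiver can split them apart again.

def aggregate(msdus: list[bytes]) -> bytes:
    mpdu = b""
    for msdu in msdus:
        mpdu += len(msdu).to_bytes(2, "big") + msdu
    return mpdu

def deaggregate(mpdu: bytes) -> list[bytes]:
    msdus, i = [], 0
    while i < len(mpdu):
        n = int.from_bytes(mpdu[i:i + 2], "big")
        msdus.append(mpdu[i + 2:i + 2 + n])
        i += 2 + n
    return msdus

frames = [b"alpha", b"beta", b"gamma"]
print(deaggregate(aggregate(frames)) == frames)  # True: round-trip works
```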
See also
Federal Standard 1037C
References
Telecommunications standards | Service data unit | [
"Technology"
] | 566 | [
"Computing stubs",
"Computer network stubs"
] |
990,677 | https://en.wikipedia.org/wiki/SAS%20%28software%29 | SAS (previously "Statistical Analysis System") is a statistical software suite developed by SAS Institute for data management, advanced analytics, multivariate analysis, business intelligence, criminal investigation, and predictive analytics. SAS' analytical software is built upon artificial intelligence and utilizes machine learning, deep learning and generative AI to manage and model data. The software is widely used in industries such as finance, insurance, health care and education.
SAS was developed at North Carolina State University from 1966 until 1976, when SAS Institute was incorporated. SAS was further developed in the 1980s and 1990s with the addition of new statistical procedures, additional components and the introduction of JMP. A point-and-click interface was added in version 9 in 2004. A social media analytics product was added in 2010.
Technical overview and terminology
SAS is a software suite that can mine, alter, manage and retrieve data from a variety of sources and perform statistical analysis on it. SAS provides a graphical point-and-click user interface for non-technical users, and more advanced capabilities through the SAS language.
SAS programs have DATA steps, which retrieve and manipulate data, PROC (procedures) which analyze the data, and may also have functions. Each step consists of a series of statements.
The DATA step has executable statements that result in the software taking an action, and declarative statements that provide instructions to read a data set or alter the data's appearance. The DATA step has two phases: compilation and execution. In the compilation phase, declarative statements are processed and syntax errors are identified. Afterwards, the execution phase processes each executable statement sequentially. Data sets are organized into tables with rows called "observations" and columns called "variables". Additionally, each piece of data has a descriptor and a value.
PROC statements call upon named procedures. Procedures perform analysis and reporting on data sets to produce statistics, analyses, and graphics. There are more than 300 named procedures and each one performs a substantial body of statistical work. PROC statements can also display results, sort data or perform other operations.
SAS macros are pieces of code or variables that are coded once and referenced to perform repetitive tasks.
SAS data can be published in HTML, PDF, Excel, RTF and other formats using the Output Delivery System, which was first introduced in version 7. SAS Enterprise Guide is SAS's point-and-click interface. It generates code to manipulate data or perform analysis without the use of the SAS programming language.
The SAS software suite has more than 200 add-on packages, sometimes called components. These add-on packages to Base SAS extend the suite with capabilities such as graphics (SAS/GRAPH), econometrics and time series analysis (SAS/ETS), and data mining (SAS Enterprise Miner).
History
Origins
The development of SAS started in 1966 after North Carolina State University re-hired Anthony Barr to program his analysis of variance and regression software so that it would run on IBM System/360 computers. The project was funded by the National Institutes of Health and was originally intended to analyze agricultural data to improve crop yields. Barr was joined by student James Goodnight, who developed the software's statistical routines, and the two became project leaders. In 1968, Barr and Goodnight integrated new multiple regression and analysis of variance routines. In 1972, after issuing the first release of SAS, the project lost its funding. According to Goodnight, this was because NIH only wanted to fund projects with medical applications. Goodnight continued teaching at the university for a salary of $1 and access to mainframe computers for use with the project, until it was funded by the University Statisticians of the Southern Experiment Stations the following year. John Sall joined the project in 1973 and contributed to the software's econometrics, time series, and matrix algebra. Another early participant, Caroll G. Perkins, contributed to SAS' early programming. Jolayne W. Service and Jane T. Helwig created SAS's first documentation.
The first versions of SAS, from SAS 71 to SAS 82, were named after the year in which they were released. In 1971, SAS 71 was published as a limited release. It was used only on IBM mainframes and had the main elements of SAS programming, such as the DATA step and the most common procedures, i.e. PROCs. The following year a full version was released as SAS 72, which introduced the MERGE statement and added features for handling missing data or combining data sets. The development of SAS has been described as an "inflection point" in the history of artificial intelligence. In 1976, Barr, Goodnight, Sall, and Helwig removed the project from North Carolina State and incorporated it as the SAS Institute, Inc.
Development
SAS was redesigned in SAS 76. The INPUT and INFILE statements were improved so they could read most data formats used by IBM mainframes. Generating reports was also added through the PUT and FILE statements. The ability to analyze general linear models was also added as was the FORMAT procedure, which allowed developers to customize the appearance of data. In 1979, SAS 79 added support for the IBM VM/CMS operating system and introduced the DATASETS procedure. Three years later, SAS 82 introduced an early macro language and the APPEND procedure.
Beginning with SAS 4, released in 1984, SAS releases have followed a sequential naming convention not based on year of release. SAS version 4 had limited features, but made SAS more accessible. Version 5 introduced a complete macro language, array subscripts, and a full-screen interactive user interface called Display Manager. In 1985, SAS was rewritten in the C programming language. This enabled SAS's MultiVendor Architecture, which allows the software to run on UNIX, MS-DOS, and Windows. It was previously written in PL/I, Fortran, and assembly language.
In the 1980s and 1990s, SAS released a number of components to complement Base SAS. SAS/GRAPH, which produces graphics, was released in 1980, as well as the SAS/ETS component, which supports econometric and time series analysis. A component intended for pharmaceutical users, SAS/PH-Clinical, was released in the 1990s. The Food and Drug Administration standardized on using SAS/PH-Clinical for new drug applications in 2002. Vertical products like SAS Financial Management and SAS Human Capital Management (then called CFO Vision and HR Vision respectively) were also introduced.
JMP was developed by SAS co-founder John Sall and a team of developers, in order to take advantage of the graphical user interface introduced in the 1984 Apple Macintosh. JMP's name originally stood for "John's Macintosh Project". JMP was shipped for the first time in 1989. Updated versions of JMP were released continuously after 2002 with the most recent release in 2016. In January 2022, JMP became a wholly owned subsidiary of SAS Institute, having previously been a business unit of the company.
SAS 6 was used throughout the 1990s and was available on a wider range of operating systems, including Macintosh, OS/2, Silicon Graphics, and PRIMOS. SAS introduced new features through dot-releases. From 6.06 to 6.09, a user interface based on the Windows paradigm was introduced and support for SQL was added. Version 7 introduced the Output Delivery System (ODS) and an improved text editor. Subsequent releases improved upon the ODS. For example, more output options were added in version 8. The number of operating systems that were supported was reduced to UNIX, Windows and z/OS, and Linux was added. SAS 8 and SAS Enterprise Miner were released in 1999.
Recent history
In 2002, SAS Text Miner software was introduced. Text Miner analyzes text data like emails for patterns in business intelligence applications. In 2004, SAS Version 9.0 was released, referred to as "Project Mercury" internally, and was designed to make SAS accessible to a broader range of business users. SAS 9.0 added custom user interfaces based on the user's role and established the point-and-click user interface of SAS Enterprise Guide as the software's primary graphical user interface (GUI). The Customer Relationship Management (CRM) features were improved in 2004 with SAS Interaction Management. In 2008, SAS announced Project Unity, designed to integrate data quality, data integration, and master data management.
SAS Institute Inc v World Programming Ltd was a lawsuit with developers of a competing implementation, World Programming System, alleging that they had infringed SAS's copyright in part by implementing the same functionality. The case was referred by the United Kingdom's High Court of Justice to the European Court of Justice on 11 August 2010. In May 2012, the European Court of Justice ruled in favor of World Programming, finding that "the functionality of a computer program and the programming language cannot be protected by copyright."
A free version of SAS was introduced for students in 2010. SAS Social Media Analytics, a tool for social media monitoring, engagement and sentiment analysis, was also released that year. SAS Rapid Predictive Modeler (RPM), which creates basic analytical models using Microsoft Excel, was introduced the same year. In 2010, JMP 9 included a new interface for using the R programming language and an add-in for MS Excel. The following year, a High Performance Computing platform was made available in a partnership with Teradata and EMC Greenplum. In 2011, the company released SAS Enterprise Miner 7.1. The company introduced 27 data management products from October 2013 to October 2014 and updates to 160 others. At the SAS Global Forum 2015, SAS announced several new products that were specialized for different industries, as well as new training software.
The company has invested in the development of artificial general intelligence, or "strong AI", with the goal of advancing deep learning and natural language processing to the point of achieving cognitive computing.
In 2019, SAS announced that it would be investing $1 billion into the development of advanced artificial intelligence, deep learning, natural language processing and machine learning. It announced an additional $1 billion investment into these areas in 2023, particularly for industries such as finance, insurance, government, health care and energy. In September 2023, the company announced its expansion of research into the applications of generative AI in analytics, data management and modeling.
Software products
As of 2011, SAS's largest set of products was its line for customer intelligence. Numerous SAS modules for web, social media and marketing analytics may be used to profile customers and prospects, predict their behaviors and manage and optimize communications.
SAS also provides the SAS Fraud Framework. The framework's primary functionality is to monitor transactions across different applications, networks and partners and use analytics to identify anomalies that are indicative of fraud. This software uses artificial intelligence to monitor income and assets. The SAS Asset and Liability Management platform utilizes generative AI and machine learning to monitor risk and model risk management strategies.
SAS Governance, Risk and Compliance Manager provides risk modeling, scenario analysis, and other functions in order to manage and visualize risk, compliance and corporate policies. There is also a SAS Enterprise Risk Management product-set designed primarily for banks and financial services organizations.
SAS products for monitoring and managing the operations of IT systems are collectively referred to as SAS IT Management Solutions. SAS collects data from various IT assets on performance and utilization, then creates reports and analyses. SAS's Performance Management products consolidate and provide graphical displays for key performance indicators (KPIs) at the employee, department and organizational level.
The SAS Supply Chain Intelligence product suite is offered for supply chain needs, such as forecasting product demand, managing distribution and inventory and optimizing pricing. There is also a "SAS for Sustainability Management" set of software to forecast environmental, social and economic effects and identify causal relationships between operations and their impact on the environment or ecosystem.
SAS has products for specific industries, such as government, retail, telecommunications, aerospace, marketing optimization, and high-performance computing. The company has a suite of analytical products related to health care and life sciences.
SAS University Edition
In May 2014, SAS announced the launch of SAS University Edition. This offering could be downloaded free for non-commercial use. In 2022, the SAS University Edition was replaced by two entirely web-based versions: SAS OnDemand for Academics and SAS Viya for Learners.
Comparison to other products
In a 2005 article for the Journal of Marriage and Family comparing statistical packages from SAS and its competitors Stata and SPSS, Alan C. Acock wrote that SAS programs provide "extraordinary range of data analysis and data management tasks," but were difficult to learn and use. SPSS and Stata, meanwhile, were both easier to learn but had less capable analytic abilities, though these could be expanded with paid (in SPSS) or free (in Stata) add-ons. Acock concluded that SAS was best for power users, while occasional users would benefit most from SPSS and Stata. A 2014 comparison by the University of California, Los Angeles, gave similar results.
Competitors such as Revolution Analytics and Alpine Data Labs advertise their products as considerably cheaper than SAS's. In a 2011 comparison, Doug Henschen of InformationWeek found that start-up fees for the three are similar, though he admitted that the starting fees were not necessarily the best basis for comparison. SAS's business model is not weighted as heavily on initial fees for its programs, instead focusing on revenue from annual subscription fees.
SAS Viya
In 2016, SAS Viya, an artificial intelligence, machine learning, analytics and data management platform, was introduced with a new architecture optimized for running SAS software in public clouds. Viya also increased interoperability with open source software, allowing models to be built in tools such as R, Python and Jupyter, and then executed on SAS's Cloud Analytics Services (CAS) engine. In 2020, a further architectural revamp in Viya 4 containerized the software. SAS sells Viya alongside SAS 9.4, and has not positioned it as a replacement for SAS 9.4.
In 2023, two new software as a service (SaaS) modules for SAS Viya were released as a private preview: Workbench, for use in creating AI models, and App Factory, for use in creating AI applications. Both modules support multiple programming languages and are expected to become generally available in 2024. SAS Viya also became available on Microsoft Azure Marketplace under a pay-as-you-use model in 2023.
In 2023, the company introduced SAS Health, a common health data model built on the SAS Viya platform.
Adoption
According to IDC, SAS is the largest market-share holder in "advanced analytics" with 35.4 percent of the market as of 2013. It is the fifth largest market-share holder for business intelligence (BI) software with a 6.9% share and the largest independent vendor. It competes in the BI market against SAP BusinessObjects, IBM Cognos, SPSS Modeler, Oracle Hyperion, and Microsoft Power BI. SAS has been named in the Gartner Leader's Quadrant for Data Integration Tools and for Business Intelligence and Analytical Platforms.
A study published in 2011 in BMC Health Services Research found that SAS was used in 42.6 percent of data analyses in health service research, based on a sample of 1,139 articles drawn from three journals.
Uses and applications
Education
SAS' analytical software is used in education to measure and visualize student outcomes and growth trends. Several states, including Virginia, North Carolina, Mississippi, Missouri, and North Dakota use its software to measure and analyze learning loss and learning recovery in students.
Energy and manufacturing
SAS' analytical software is widely used in the petroleum and natural gas industry as well as global manufacturing.
Environmental science
SAS and the International Institute for Applied Systems Analysis launched an app that crowdsources image data related to deforestation to train AI algorithms that can identify human impact on the environment. The University of Florida's Center for Coastal Solutions partners with SAS to develop research, training programs and analytical tools related to environmental issues affecting coastal communities.
The UNC Center for Galapagos Studies partnered with SAS in 2023 to create a model that can track the health and migratory patterns of species such as sea turtles and hammerhead sharks, as well as the health of the phytoplankton population.
Finance and insurance
SAS' risk management and fraud prevention software are widely used by governmental organizations and private enterprises. The company's fraud detection and prevention software is used by the tax agencies of various countries, including the United States, United Kingdom, Ireland, New Zealand, the Netherlands, and Canada. In 2023, the Finance Minister of Malta announced that Malta would begin using SAS' software to detect tax evasion.
Healthcare and life sciences
SAS develops data analysis and machine learning techniques that are widely applied in healthcare, medical research and life sciences. SAS has partnered on public health initiatives with the Centers for Disease Control and Prevention and Black Dog Institute.
SAS has been a partner of the Cleveland Clinic since 1982. During the COVID-19 pandemic, the clinic used predictive models developed by SAS to forecast factors such as patient volume, availability of medical equipment and bed capacity in various scenarios. SAS joined UNC Chapel Hill's Rapidly Emerging Antiviral Drug Development Initiative (READDI) in 2021. Duke Health partnered with SAS in 2023 to develop cloud-based artificial intelligence that can analyze patterns in health equity and patient outcomes.
In 2023, the Texas state government contracted SAS to build a centralized visualization platform for predicting and tracking future outbreaks of influenza.
See also
Comparison of numerical-analysis software
Comparison of OLAP servers
JMP (statistical software), a subsidiary of SAS Institute Inc.
SAS language
R (programming language)
References
Further reading
Wikiversity:Data Analysis using the SAS Language
External links
SAS OnDemand for Academics No-cost access for learners (free SAS Profile required)
A Glossary of SAS terminology
SAS for Developers
SAS community forums
SAS Institute
Fourth-generation programming languages
Business intelligence software
Proprietary software programmed in C
Data mining and machine learning software
Data warehousing
Extract, transform, load tools
Mathematical optimization software
Numerical software
Proprietary commercial software for Linux
Proprietary cross-platform software
Science software for Linux | SAS (software) | [
"Mathematics"
] | 3,718 | [
"Numerical software",
"Mathematical software"
] |
990,696 | https://en.wikipedia.org/wiki/Dedicated%20short-range%20communications | Dedicated short-range communications (DSRC) is a technology for direct wireless exchange of vehicle-to-everything (V2X) and other intelligent transportation systems (ITS) data between vehicles, other road users (pedestrians, cyclists, etc.), and roadside infrastructure (traffic signals, electronic message signs, etc.). DSRC, which can be used for both one- and two-way data exchanges, uses channels in the licensed 5.9 GHz band. DSRC is based on IEEE 802.11p.
History
In October 1999, the United States Federal Communications Commission (FCC) allocated 75 MHz of spectrum in the 5.9 GHz band for DSRC-based ITS uses. By 2003, DSRC was used in Europe and Japan for electronic toll collection. In August 2008, the European Telecommunications Standards Institute (ETSI) allocated 30 MHz of spectrum in the 5.9 GHz band for ITS.
In November 2020, the FCC reallocated the lower 45 MHz of the 75 MHz spectrum to the neighboring 5.8 GHz ISM band for unlicensed non-ITS uses, citing DSRC's lack of adoption. Of the 30 MHz that remained for licensed ITS uses, 10 MHz was kept for DSRC (Channel 180, 5.895–5.905 GHz) and 20 MHz was reserved for a successor to DSRC, LTE-CV2X (Channel 183, 5.905–5.925 GHz).
Applications
Singapore's Electronic Road Pricing scheme plans to use DSRC technology for road use measurement (ERP2) to replace its ERP1 overhead gantry method.
In June 2017, the Utah Department of Transportation and the Utah Transit Authority (UTA) demonstrated the use of DSRC for transit signal priority on SR-68 (Redwood Road) in Salt Lake City, whereby several UTA transit buses equipped with DSRC equipment could request changes to signal timing if they were running behind schedule.
Other applications include:
Emergency warning system for vehicles
Cooperative Adaptive Cruise Control
Cooperative Forward Collision Warning
Intersection collision avoidance
Approaching emergency vehicle warning (Blue Waves)
Vehicle safety inspection
Emergency vehicle signal preemption
Electronic parking payments
Commercial vehicle clearance and safety inspections
In-vehicle signing
Rollover warning
Probe data collection
Highway-rail intersection warning
Electronic toll collection
Standardization
DSRC systems in Europe, Japan and the U.S. are incompatible and have significant differences, including spectrum and channels (5.8 GHz RF, 5.9 GHz RF, infrared), data transmission rates, and protocols.
The European standardization organisation European Committee for Standardization (CEN), sometimes in co-operation with the International Organization for Standardization (ISO) developed some DSRC standards:
EN 12253:2004 Dedicated Short-Range Communication – Physical layer using microwave at 5.8 GHz (review)
EN 12795:2002 Dedicated Short-Range Communication (DSRC) – DSRC Data link layer: Medium Access and Logical Link Control (review)
EN 12834:2002 Dedicated Short-Range Communication – Application layer (review)
EN 13372:2004 Dedicated Short-Range Communication (DSRC) – DSRC profiles for RTTT applications (review)
EN ISO 14906:2004 Electronic Fee Collection – Application interface
Each standard addresses different layers in the OSI model communication stack.
See also
V2V
Vehicular communication systems
Telematics
CALM
References
External links
Performance Evaluation of Short-Range Communication Links for Road Transport & Traffic Telematics
A comparison of different technologies for EFC and other ITS applications
Connectsafe Wireless Vehicle Communication System - University of South Australia
Dedicated Short-Range Communications (DSRC) Fact Sheet – U.S. Department of Transportation ITS JPO
Wireless networking
Electronic toll collection
Automotive technologies | Dedicated short-range communications | [
"Technology",
"Engineering"
] | 748 | [
"Wireless networking",
"Computer networks engineering"
] |
990,816 | https://en.wikipedia.org/wiki/Hama%20yumi | The hama yumi is a sacred bow (yumi) used in 1103 A.D. in Japan. This bow is said to be one of the oldest and most sacred Japanese weapons; the first Emperor, Jimmu, is always depicted carrying a bow.
According to legend, at that time the Imperial Palace was taken over by an evil demon, which caused the Emperor to fall ill with great anxiety and suffering. When the Imperial High Priests tried and failed in their efforts to destroy the demon and dispel the Imperial household of its influence, they were at a loss. Finally, an archer, Minamoto no Yorimasa, was summoned to the Imperial Palace in the hopes of slaying the demon with his bow and arrow, ridding the palace of this plague. With a steady hand and a virtuous heart, Yorimasa vanquished the demon with the first arrow, and his bow was declared to be a hama yumi, an "Evil-Destroying Bow" (and the first arrow a hama ya, an "Evil-Destroying Arrow").
Since then, hama yumi have been used in Buddhist and Shinto rituals of purification (i.e., Shihōbarai, 四方払い, the Purification/Sweep of the Four Directions). In Japan, it is believed that merely the twanging of its bowstring will frighten away ghosts, evil spirits and negative influences from the house. A miko will carry a hama yumi and a set of hama ya as part of their religious regalia, while back in Feudal Japan, they were used quite literally in defence of the shrine or temple.
As a result, hama ya, decorative arrows, are sold even today at shrines as engimono (good-luck charms); smaller replicas have been placed in shrines and people's homes. It is believed that even a single hama ya which has been blessed by a Shinto priest carries great spiritual power, bringing protection against the forces of evil, purification, and the ability to attract vast good fortune. Hama ya and hama yumi are often given as gifts to celebrate the first New Year of a male baby's life.
Hama-yumi replicas are scale versions of the sacred Japanese bow, coated with urushi, wrapped in fine rattan and accented in gold leaf. They are displayed in a stand, along with two arrows tipped with yanone (traditional warrior tips); one representing male and the other female, yin and yang (vermilion signifying male energy (yang), and black representing female energy (yin)).
See also
Apotropaic magic
Omamori
Ofuda (御札/お札) - a paper charm
Azusa yumi (梓弓) - a bow made from the wood of the Japanese cherry birch tree (Betula grossa)
Saigū-yumi (祭宮弓) - a ceremonial bow
References
Evil-Destroying Bow
1100s works
12th century in Japan
Bows (archery)
Weapons in Buddhist mythology
Ritual weapons
Ceremonial weapons
Buddhist symbols
Shinto religious objects
Buddhist ritual implements
Talismans
Exorcism in Shinto
Exorcism in Buddhism
Sacred musical instruments
Religious objects | Hama yumi | [
"Physics"
] | 663 | [
"Religious objects",
"Physical objects",
"Matter"
] |
990,894 | https://en.wikipedia.org/wiki/Miombo | Miombo woodland is a tropical and subtropical grasslands, savannas, and shrublands biome (in the World Wide Fund for Nature scheme) located in central and southern tropical Africa. It includes three woodland savanna ecoregions (listed below) characterized by the dominant presence of Brachystegia and Julbernardia species of trees, and has a range of climates ranging from humid to semi-arid, and tropical to subtropical or even temperate. The trees characteristically shed their leaves for a short period in the dry season to reduce water loss and produce a flush of new leaves just before the onset of the wet season with rich gold and red colours masking the underlying chlorophyll, reminiscent of autumn colours in the temperate zone.
Miombo woodlands extend across south-central Africa, running from Angola in the west to Tanzania in the east, including parts of Democratic Republic of the Congo, Malawi, Mozambique, Zambia, and Zimbabwe. They are bounded on the north by the humid Congolian forests, on the northeast by Acacia–Commiphora bushland, and on the south by semi-arid woodlands, grasslands, and savannas.
The woodland gets its name from miombo (the plural form; the singular is muombo), the Bemba word for Brachystegia species. Other Bantu languages of the region, such as Swahili and Shona, have related if not identical words, such as Swahili miyombo (singular myombo). These woodlands are dominated by trees of subfamily Detarioideae, particularly miombo (Brachystegia), Julbernardia and Isoberlinia, which are rarely found outside miombo woodlands.
Miombo woodlands can be classified as dry or wet based on the per annum amount and distribution of rainfall. Dry woodlands occur in those areas receiving less than 1000 mm annual rainfall, mostly in Zimbabwe, central Tanzania, eastern and southern Mozambique, Malawi, and southern Zambia. Wet woodlands are those receiving more than 1000 mm annual rainfall, mainly located in northern Zambia, eastern Angola, central Malawi, and western Tanzania. Wet miombo generally has a taller canopy (15 metres or more), more tree cover (60% or more ground cover), and greater species diversity than dry miombo.
Ecoregions
Three ecoregions are currently recognized.
Angolan wet miombo woodlands – Angola
Central Zambezian wet miombo woodlands – Angola, Burundi, Democratic Republic of the Congo, Malawi, Tanzania, and Zambia
Dry miombo woodlands – southeastern Angola, Malawi, Mozambique, central and southern Tanzania, Zambia and Zimbabwe. The dry miombo woodlands ecoregion includes the Eastern miombo woodlands and Southern miombo woodlands ecoregions previously delineated by the World Wide Fund for Nature.
Flora and fauna
Despite the relatively nutrient-poor soil, long dry season, and low rainfall in some areas, the woodland is home to many species, including several endemic bird species. The predominant tree is miombo (Brachystegia spp.). It also provides food and cover for mammals such as the African elephant (Loxodonta africana), African wild dog (Lycaon pictus), sable antelope (Hippotragus niger) and Lichtenstein's hartebeest (Sigmoceros lichtensteinii).
People
The miombo woodlands are important to the livelihoods of many rural people who depend on the resources available from the woodland. The wide variety of species provides non-timber products such as fruits, honey, fodder for livestock and fuelwood to various different largely Bantu peoples such as the Bemba people, Lozi people, Yao people, Luvale people, Shona people, and Luba people.
Notes
References
Campbell, Bruce M., ed. 1996. The Miombo Transition: Woodlands & Welfare in Africa, CIFOR,
External links
Earthtrends.wri.org: Map of Miombo forests-grasslands-drylands
Ecoregions of Africa
Tropical and subtropical grasslands, savannas, and shrublands
Grasslands of Africa
Ecoregions of Angola
Ecoregions of Burundi
Ecoregions of the Democratic Republic of the Congo
Ecoregions of Malawi
Ecoregions of Mozambique
Ecoregions of Tanzania
Ecoregions of Zambia
Ecoregions of Zimbabwe
Swahili words and phrases
Biota of the Afrotropical realm
Afrotropical ecoregions
Zambezian region | Miombo | [
"Biology"
] | 909 | [
"Biota of the Afrotropical realm",
"Biota by biogeographic realm"
] |
991,053 | https://en.wikipedia.org/wiki/Programmer%20%28hardware%29 | In the context of installing firmware onto a device, a programmer, device programmer, chip programmer, device burner, or PROM writer is a device that writes, a.k.a. burns, firmware to a target device's non-volatile memory.
Typically, the target device memory is one of the following types: PROM, EPROM, EEPROM, Flash memory, eMMC, MRAM, FeRAM, NVRAM, PLD, PLA, PAL, GAL, CPLD, FPGA.
Connection
Generally, a programmer connects to a device in one of two ways.
Insertion
In some cases, the target device is inserted into a socket (usually ZIF) on the programmer. If the device does not come in a standard DIP package, a plug-in adapter board, which adapts the footprint to another socket, is used.
Cable & port
In some cases, a programmer connects to a device via a cable to a connection port on the device. This is sometimes called on-board programming, in-circuit programming, or in-system programming.
Transfer
Data is transferred from the programmer to the device as signals via connecting pins.
Some devices have a serial interface for receiving data (including the JTAG interface).
Other devices communicate on parallel pins, followed by a programming pulse with a higher voltage for programming the data into the device.
Usually, a programmer is controlled via a connected personal computer through a parallel port, USB port, or LAN interface.
A program on the controlling computer interacts with the programmer to perform operations such as configuring installation parameters and programming the device.
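Host-side tools often handle firmware images in the Intel HEX format listed under "See also". As an added illustration, the following sketch verifies one Intel HEX record's checksum; the actual command protocol between host software and a given programmer is vendor-specific and is not shown:

```python
# Verify the checksum of a single Intel HEX record, e.g. from a firmware
# image about to be sent to a device programmer. In Intel HEX, the sum of
# all decoded bytes in a record (including the checksum byte) is 0 mod 256.

def intel_hex_record_ok(record: str) -> bool:
    if not record.startswith(":"):
        return False
    data = bytes.fromhex(record[1:])   # decode hex digits after the colon
    return sum(data) % 256 == 0

# A well-known sample record from the Intel HEX specification examples:
print(intel_hex_record_ok(":10010000214601360121470136007EFE09D2190140"))  # True
```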
Types
There are four general types of programmers:
Automated programmers often have multiple programming sites/sockets for mass production. Sometimes used with robotic pick and place handlers with on-board sites to support high volume and complex output such as laser marking, 3D inspection, tape input/output, etc.
Development programmers usually have a single programming site; used for first article development and small-series production.
Pocket programmers for development and field service.
Specialized programmers for certain circuit types only, such as FPGA, microcontroller, and EEPROM programmers.
History
Old PROM programmers had to support many programmable devices with different voltage requirements, so every pin driver had to be able to apply different voltages in a range of 0–25 volts.
With the progress of memory device technology, however, recent flash memory programmers do not need such high voltages.
In the early days of computing, the booting mechanism was a mechanical device, usually consisting of switches and LEDs. This meant the programmer was not a piece of equipment but a human, who entered machine code word by word by setting the switches in a series of "on" and "off" positions. These switch positions corresponded to machine codes, similar to today's assembly language.
Nowadays, EEPROMs are used for the bootstrapping mechanism, such as the BIOS, and there is no need to operate mechanical switches for programming.
Manufacturers
For each vendor's web site, refer to "External links" section.
Batronix GmbH & Co. KG
BPM Microsystems
Conitec Datasystems
Data I/O Corporation
DediProg Technology Co., Ltd
Elnec s.r.o
Elprosys Sp. z o.o.
halec
Hi-Lo System Research
MCUmall Electronics Inc.
Phyton, Inc.
Xeltek Inc.
See also
Off-line programming
In-system programming
Debug port
JTAG interface
Common Flash Memory Interface
Open NAND Flash Interface Working Group
Atmel AVR#Programming interfaces
PIC microcontroller#Device programmers
Intel HEX – ASCII file format
SREC – ASCII file format
ELF – Binary file format
COFF – Binary file format
Hardware description language
References
External links
Technical information
JEDEC - Memory Configurations: JESD21-C
JEDEC - Common Flash Interface (CFI) Specification, JESD68.01, September 2003.
Intel - Common Flash Interface (CFI) and Command Sets
IEEE Std 1532-2002 (Revision of IEEE Std 1532-2001) - IEEE Standard for In-System Configuration of Programmable Devices
What is the IEEE 1532 Standard? Keysight Technologies
JEDEC - STANDARD DATA TRANSFER FORMAT BETWEEN DATA PREPARATION SYSTEM AND PROGRAMMABLE LOGIC DEVICE PROGRAMMER: JESD3-C, Jun 1994
JEDEC - JC-42 Solid State Memories
Manufacturers
Batronix GmbH & Co. KG
BPM Microsystems
Conitec Datasystems Inc.
Data I/O Corporation
Elnec s.r.o.
Elprosys Sp. z o.o.
Dediprog
halec
Hi-Lo System Research Co. Ltd.
MCUmall Electronics Inc.
Minato Holdings Inc.
Phyton, Inc.
Xeltek Inc.
Computer engineering
Integrated circuits
Non-volatile memory
Gate arrays | Programmer (hardware) | [
"Technology",
"Engineering"
] | 982 | [
"Gate arrays",
"Electrical engineering",
"Computer engineering",
"Integrated circuits"
] |
991,054 | https://en.wikipedia.org/wiki/Sol%20%28colloid%29 | A sol is a colloidal suspension made of very small solid particles in a continuous liquid medium. Sols are stable, so the particles do not settle when left undisturbed, and they exhibit the Tyndall effect, the scattering of light by the particles in the colloid. The size of the particles can vary from 1 nm to 100 nm. Examples include blood, pigmented ink, cell fluids, paint, antacids and mud.
Artificial sols can be prepared by two main methods: dispersion and condensation. In the dispersion method, solid particles are reduced to colloidal dimensions through techniques such as ball milling and Bredig's arc method. In the condensation method, small particles are formed from larger molecules through a chemical reaction.
The stability of sols can be maintained through the use of dispersing agents, which prevent the particles from clumping together or settling out of the suspension. Sols are often used in the sol-gel process, in which a sol is converted into a gel through the addition of a crosslinking agent.
In a sol, solid particles are dispersed in a liquid continuous phase, while in an emulsion, liquid droplets are dispersed in a liquid or semi-solid continuous phase.
References
Colloids
Colloidal chemistry | Sol (colloid) | [
"Physics",
"Chemistry",
"Materials_science"
] | 271 | [
"Colloidal chemistry",
"Surface science",
"Colloids",
"Chemical mixtures",
"Condensed matter physics"
] |
991,105 | https://en.wikipedia.org/wiki/Line%20driver | A line driver is an electronic amplifier circuit designed for driving a load such as a transmission line. The amplifier's output impedance may be matched to the characteristic impedance of the transmission line.
Line drivers are commonly used within digital systems, e.g. to communicate digital signals across circuit-board traces and cables.
In analog audio, a line driver is typically used to drive line-level analog signal outputs, for example to connect a CD player to an amplified speaker system.
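As an added note on the impedance matching mentioned above, the degree of mismatch at the load is commonly quantified by the reflection coefficient; a small illustrative calculation (example impedances are arbitrary):

```python
# Reflection coefficient at a load on a transmission line:
# gamma = (Z_load - Z0) / (Z_load + Z0). A value of 0 means a matched
# line with no reflected signal, which is why a line driver's output
# impedance may be matched to the line's characteristic impedance Z0.

def reflection_coefficient(z_load: float, z0: float) -> float:
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0, 50.0))   # 0.0  -> matched, no reflection
print(reflection_coefficient(100.0, 50.0))  # ~0.33 -> partial reflection
```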
References
Electronic amplifiers | Line driver | [
"Technology"
] | 101 | [
"Electronic amplifiers",
"Amplifiers"
] |
991,169 | https://en.wikipedia.org/wiki/Rodenticide | Rodenticides are chemicals made and sold for the purpose of killing rodents. While commonly referred to as "rat poison", rodenticides are also used to kill mice, woodchucks, chipmunks, porcupines, nutria, beavers, and voles.
Some rodenticides are lethal after one exposure while others require more than one. Rodents are disinclined to gorge on an unknown food (perhaps reflecting an adaptation to their inability to vomit), preferring to sample, wait and observe whether it makes them or other rats sick. This phenomenon of poison shyness is the rationale for poisons that kill only after multiple doses.
Besides being directly toxic to the mammals that ingest them, including dogs, cats, and humans, many rodenticides present a secondary poisoning risk to animals that hunt or scavenge the dead corpses of rats.
Classes of rodenticides
Anticoagulants
Anticoagulants are defined as chronic (death occurs one to two weeks after ingestion of the lethal dose, rarely sooner), single-dose (second generation) or multiple-dose (first generation) rodenticides, acting by effective blocking of the vitamin-K cycle, resulting in inability to produce essential blood-clotting factors—mainly coagulation factors II (prothrombin) and VII (proconvertin).
In addition to this specific metabolic disruption, massive toxic doses of 4-hydroxycoumarin, 4-thiochromenone and 1,3-indandione anticoagulants cause damage to tiny blood vessels (capillaries), increasing their permeability, causing internal bleeding. These effects are gradual, developing over several days. In the final phase of the intoxication, the exhausted rodent collapses due to hemorrhagic shock or severe anemia and dies. The question of whether the use of these rodenticides can be considered humane has been raised.
The main benefit of anticoagulants over other poisons is that the time taken for the poison to induce death means that the rats do not associate the damage with their feeding habits.
First-generation rodenticidal anticoagulants generally have shorter elimination half-lives, require higher concentrations (usually between 0.005% and 0.1%) and consecutive intake over days in order to accumulate the lethal dose, and are less toxic than second-generation agents.
Second-generation anticoagulant rodenticides (or SGARs) are far more toxic than those of the first generation. They are generally applied in lower concentrations in baits—usually on the order of 0.001% to 0.005%—are lethal after a single ingestion of bait and are also effective against strains of rodents that became resistant to first-generation anticoagulants; thus, the second-generation anticoagulants are sometimes referred to as "superwarfarins".
Phylloquinone (vitamin K1) has been suggested, and successfully used, as an antidote for pets or humans accidentally or intentionally exposed to anticoagulant poisons. Some of these poisons act by inhibiting liver functions; in advanced stages of poisoning, several blood-clotting factors are absent and the volume of circulating blood is diminished, so that a blood transfusion (optionally with the clotting factors present) can save a person who has been poisoned, an advantage over some older poisons. A unique enzyme produced by the liver enables the body to recycle vitamin K, which the body needs to produce the blood-clotting factors that prevent excessive bleeding; anticoagulants hinder this enzyme's ability to function. Internal bleeding can start once the body's reserve of vitamin K is exhausted by exposure to enough anticoagulant. Single-dose (second-generation) anticoagulants are more hazardous because they bind more tightly to the vitamin K-recycling enzyme and may also obstruct several stages of the recycling of vitamin K. They can be stored in the liver because they are not quickly eliminated from the body.
Metal phosphides
Metal phosphides have been used as a means of killing rodents and are considered single-dose fast-acting rodenticides (death occurs commonly within 1–3 days after single bait ingestion). A bait consisting of food and a phosphide (usually zinc phosphide) is left where the rodents can eat it. The acid in the digestive system of the rodent reacts with the phosphide to generate toxic phosphine gas. This method of vermin control has possible use in places where rodents are resistant to some of the anticoagulants, particularly for control of house and field mice; zinc phosphide baits are also cheaper than most second-generation anticoagulants, so that sometimes, in the case of large infestation by rodents, their population is initially reduced by copious amounts of zinc phosphide bait, and the rest of the population that survived the initial fast-acting poison is then eradicated by prolonged feeding on anticoagulant bait. Conversely, the individual rodents that survived anticoagulant bait poisoning (the rest population) can be eradicated by pre-baiting them with nontoxic bait for a week or two (this is important to overcome bait shyness, and to get rodents used to feeding in specific areas by specific food, especially in eradicating rats) and subsequently applying poisoned bait of the same sort as used for pre-baiting until all consumption of the bait ceases (usually within 2–4 days). Such alternation of rodenticides with different modes of action yields complete or nearly complete eradication of the rodent population in the area, provided the acceptance/palatability of the baits is good (i.e., rodents feed on them readily).
Zinc phosphide is typically added to rodent baits in a concentration of 0.75% to 2.0%. The baits have strong, pungent garlic-like odor due to the phosphine liberated by hydrolysis. The odor attracts (or, at least, does not repel) rodents, but has a repulsive effect on other mammals. Birds, notably wild turkeys, are not sensitive to the smell, and might feed on the bait, and thus fall victim to the poison.
The tablets or pellets (usually aluminium, calcium or magnesium phosphide for fumigation/gassing) may also contain other chemicals which evolve ammonia, which helps reduce the potential for spontaneous combustion or explosion of the phosphine gas.
Metal phosphides do not accumulate in the tissues of poisoned animals, so the risk of secondary poisoning is low.
Before the advent of anticoagulants, phosphides were the favored kind of rat poison. During World War II, they came into use in United States because of shortage of strychnine due to the Japanese occupation of the territories where the strychnine tree is grown. Phosphides are rather fast-acting rat poisons, resulting in the rats dying usually in open areas, instead of in the affected buildings.
Phosphides used as rodenticides include:
aluminium phosphide (fumigant and bait)
calcium phosphide (fumigant only)
magnesium phosphide (fumigant only)
zinc phosphide (bait only)
Hypercalcemia (vitamin D overdose)
Cholecalciferol (vitamin D3) and ergocalciferol (vitamin D2) are used as rodenticides. They are toxic to rodents for the same reason they are important to humans: they affect calcium and phosphate homeostasis in the body. Vitamins D are essential in minute quantities (few IUs per kilogram body weight daily, only a fraction of a milligram), and like most fat soluble vitamins, they are toxic in larger doses, causing hypervitaminosis D. If the poisoning is severe enough (that is, if the dose of the toxin is high enough), it leads to death. In rodents that consume the rodenticidal bait, it causes hypercalcemia, raising the calcium level, mainly by increasing calcium absorption from food, mobilising bone-matrix-fixed calcium into ionised form (mainly monohydrogencarbonate calcium cation, partially bound to plasma proteins, [CaHCO3]+), which circulates dissolved in the blood plasma. After ingestion of a lethal dose, the free calcium levels are raised sufficiently that blood vessels, kidneys, the stomach wall and lungs are mineralised/calcificated (formation of calcificates, crystals of calcium salts/complexes in the tissues, damaging them), leading further to heart problems (myocardial tissue is sensitive to variations of free calcium levels, affecting both myocardial contractibility and action potential propagation between the atria and ventricles), bleeding (due to capillary damage) and possibly kidney failure. It is considered to be single-dose, cumulative (depending on concentration used; the common 0.075% bait concentration is lethal to most rodents after a single intake of larger portions of the bait) or sub-chronic (death occurring usually within days to one week after ingestion of the bait). Applied concentrations are 0.075% cholecalciferol (30,000 IU/g) and 0.1% ergocalciferol (40,000 IU/g) when used alone, which can kill a rodent such as a rat.
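These IU figures follow from the standard conversion 1 IU of vitamin D = 0.025 µg; a quick check of the arithmetic (added for clarity):

```latex
0.075\% = 750\,\mu\mathrm{g/g}, \quad
\frac{750\,\mu\mathrm{g/g}}{0.025\,\mu\mathrm{g/IU}} = 30\,000\ \mathrm{IU/g};
\qquad
0.1\% = 1000\,\mu\mathrm{g/g} \;\Rightarrow\; 40\,000\ \mathrm{IU/g}.
```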
An important feature of calciferol toxicology is that calciferols are synergistic with anticoagulant toxicants. In other words, mixtures of anticoagulants and calciferols in the same bait are more toxic than the sum of the toxicities of the anticoagulant and the calciferol alone, so that a massive hypercalcemic effect can be achieved with a substantially lower calciferol content in the bait and, vice versa, more pronounced anticoagulant/hemorrhagic effects are observed if the calciferol is present. This synergism is mostly used in baits with low calciferol concentrations, because effective concentrations of calciferols are more expensive than effective concentrations of most anticoagulants.
The first application of a calciferol in rodenticidal bait was in the Sorex product Sorexa D (with a different formula than today's Sorexa D), back in the early 1970s, which contained 0.025% warfarin and 0.1% ergocalciferol. Today, Sorexa CD contains a 0.0025% difenacoum and 0.075% cholecalciferol combination. Numerous other brand products containing either 0.075-0.1% calciferols (e.g. Quintox) alone or alongside an anticoagulant are marketed.
The Merck Veterinary Manual states the following:
Although this rodenticide [cholecalciferol] was introduced with claims that it was less toxic to nontarget species than to rodents, clinical experience has shown that rodenticides containing cholecalciferol are a significant health threat to dogs and cats. Cholecalciferol produces hypercalcemia, which results in systemic calcification of soft tissue, leading to kidney failure, cardiac abnormalities, hypertension, CNS depression and GI upset. Signs generally develop within 18-36 hours of ingestion and can include depression, anorexia, polyuria and polydipsia. As serum calcium concentrations increase, clinical signs become more severe. ... GI smooth muscle excitability decreases and is manifest by anorexia, vomiting and constipation. ... Loss of renal concentrating ability is a direct result of hypercalcemia. As hypercalcemia persists, mineralization of the kidneys results in progressive renal insufficiency."
Additional anticoagulant renders the bait more toxic to pets as well as humans. Upon single ingestion, solely calciferol-based baits are considered generally safer to birds than second generation anticoagulants or acute toxicants. Treatment in pets is mostly supportive, with intravenous fluids and pamidronate disodium. The hormone calcitonin is no longer commonly used.
Other
Other chemical poisons include:
ANTU (α-naphthylthiourea; specific against Brown rat, Rattus norvegicus'')
Arsenic trioxide
Barium carbonate (sometimes called Witherite)
Chloralose (a narcotic prodrug)
Crimidine (inhibits metabolism of vitamin B6)
1,3-Difluoro-2-propanol ("Gliftor")
Endrin (organochlorine insecticide, used in the past for extermination of voles in fields)
Fluoroacetamide ("1081")
Phosacetim (a delayed-action acetylcholinesterase inhibitor)
Phosphorus allotropes
Pyrinuron (a urea derivative)
Scilliroside and other cardiac glycosides like oleandrin or digoxin
Sodium fluoroacetate ("1080")
Strychnine (A naturally occurring convulsant and stimulant)
Tetramethylenedisulfotetramine ("tetramine") - Deadly toxic to humans so use should be avoided
Thallium sulfate
Mitochondrial toxins like bromethalin and 2,4-dinitrophenol (cause high fever and brain swelling)
Zyklon B/Uragan D2 (hydrogen cyanide gas absorbed in an inert carrier)
Combinations
In some countries, fixed three-component rodenticides, i.e., anticoagulant + antibiotic + vitamin D, are used. Associations of a second-generation anticoagulant with an antibiotic and/or vitamin D are considered to be effective even against most resistant strains of rodents, though some second generation anticoagulants (namely brodifacoum and difethialone), in bait concentrations of 0.0025% to 0.005% are so toxic that resistance is unknown, and even rodents resistant to other rodenticides are reliably exterminated by application of these most toxic anticoagulants.
Low-toxicity/Eco-friendly rodenticides
Powdered corn cob and corn meal gluten have been developed as rodenticides. They were approved in the EU and patented in the US in 2013. These preparations rely on dehydration and electrolyte imbalance to cause death.
Inert gas killing of burrowing pest animals is another method with no impact on scavenging wildlife. One such method has been commercialized and sold under the brand name Rat Ice.
Non-target issues
Secondary poisoning and risks to wildlife
One of the potential problems when using rodenticides is that dead or weakened rodents may be eaten by other wildlife, either predators or scavengers. Members of the public deploying rodenticides may not be aware of this or may not follow the product's instructions closely enough. There is evidence of secondary poisoning of predators caused by exposure to poisoned prey.
The faster a rodenticide acts, the more critical this problem may be. For the fast-acting rodenticide bromethalin, for example, there is no diagnostic test or antidote.
This has led environmental researchers to conclude that low strength, long duration rodenticides (generally first generation anticoagulants) are the best balance between maximum effect and minimum risk.
Proposed US legislation change
In 2008, after assessing human health and ecological effects, as well as benefits, the US Environmental Protection Agency (EPA) announced measures to reduce risks associated with ten rodenticides. New sale and distribution restrictions, minimum package size requirements, use site restrictions, and tamper-resistant product requirements would have taken effect in 2011. The regulations were delayed pending a legal challenge by manufacturer Reckitt-Benkiser.
Notable rat eradications
The entire rat populations of several islands have been eradicated, most notably New Zealand's Campbell Island, Hawadax Island, Alaska (formerly known as Rat Island), Macquarie Island and Canna, Scotland (declared rat-free in 2008). According to the Friends of South Georgia Island, all of the rats have been eliminated from South Georgia.
Alberta, Canada, through a combination of climate and control, is also believed to be rat-free.
See also
Poison shyness
Pesticide
Thallium poisoning
Substances poisonous to dogs
References
Further reading
External links
National Pesticide Information Center
Fact Sheet on EPA's Proposed Risk Mitigation Decision for Nine Rodenticides
EPA Rodenticide Cluster Reregistration Eligibility Decision Fact Sheet
Biocides | Rodenticide | [
"Biology",
"Environmental_science"
] | 3,844 | [
"Biocides",
"Rodenticides",
"Toxicology"
] |
991,210 | https://en.wikipedia.org/wiki/Divisibility%20rule | A divisibility rule is a shorthand and useful way of determining whether a given integer is divisible by a fixed divisor without performing the division, usually by examining its digits. Although there are divisibility tests for numbers in any radix, or base, and they are all different, this article presents rules and examples only for decimal, or base 10, numbers. Martin Gardner explained and popularized these rules in his September 1962 "Mathematical Games" column in Scientific American.
Divisibility rules for numbers 1−30
The rules given below transform a given number into a generally smaller number, while preserving divisibility by the divisor of interest. Therefore, unless otherwise noted, the resulting number should be evaluated for divisibility by the same divisor. In some cases the process can be iterated until the divisibility is obvious; for others (such as examining the last n digits) the result must be examined by other means.
For divisors with multiple rules, the rules are generally ordered first for those appropriate for numbers with many digits, then those useful for numbers with fewer digits.
To test the divisibility of a number by a power of 2 or a power of 5 (2^n or 5^n, in which n is a positive integer), one only needs to look at the last n digits of that number.
To test divisibility by any number expressed as a product of prime powers, we can separately test for divisibility by each prime to its appropriate power. For example, testing divisibility by 24 is equivalent to testing divisibility by 8 (2^3) and 3 simultaneously, thus we need only show divisibility by 8 and by 3 to prove divisibility by 24.
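For illustration, here is a minimal Python sketch of this combined test (not part of the original rules; the helper names are ours):

    def digit_sum(n):
        # Sum of decimal digits, used below for the divisibility-by-3 test.
        return sum(int(d) for d in str(n))

    def divisible_by_24(n):
        by_8 = n % 1000 % 8 == 0       # 8 = 2^3: only the last three digits matter
        by_3 = digit_sum(n) % 3 == 0   # 3: digit-sum test
        return by_8 and by_3

    assert divisible_by_24(1944) == (1944 % 24 == 0)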
Step-by-step examples
Divisibility by 2
First, take any number (for this example it will be 376) and note the last digit in the number, discarding the other digits. Then take that digit (6) while ignoring the rest of the number and determine if it is divisible by 2. If it is divisible by 2, then the original number is divisible by 2.
Example
376 (The original number)
37 6 (Take the last digit)
6 ÷ 2 = 3 (Check to see if the last digit is divisible by 2)
376 ÷ 2 = 188 (If the last digit is divisible by 2, then the whole number is divisible by 2)
Divisibility by 3 or 9
First, take any number (for this example it will be 492) and add together each digit in the number (4 + 9 + 2 = 15). Then take that sum (15) and determine if it is divisible by 3. The original number is divisible by 3 (or 9) if and only if the sum of its digits is divisible by 3 (or 9).
Adding the digits of a number up, and then repeating the process with the result until only one digit remains, will give the remainder of the original number if it were divided by nine (unless that single digit is nine itself, in which case the number is divisible by nine and the remainder is zero).
This can be generalized to any standard positional system, in which the divisor in question then becomes one less than the radix; thus, in base-twelve, the digits will add up to the remainder of the original number if divided by eleven, and numbers are divisible by eleven only if the digit sum is divisible by eleven.
Example.
492 (The original number)
4 + 9 + 2 = 15 (Add each individual digit together)
15 is divisible by 3, at which point we can stop. Alternatively we can continue using the same method if the number is still too large:
1 + 5 = 6 (Add each individual digit together)
6 ÷ 3 = 2 (Check to see if the number received is divisible by 3)
492 ÷ 3 = 164 (If the number obtained by using the rule is divisible by 3, then the whole number is divisible by 3)
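The digit-sum test translates directly into code; here is a minimal Python sketch (the function name is ours, and the same loop works for 9 by changing the final modulus):

    def divisible_by_3(n):
        # Repeatedly replace n by the sum of its digits, then check once.
        while n > 9:
            n = sum(int(d) for d in str(n))
        return n % 3 == 0

    assert divisible_by_3(492)        # 4 + 9 + 2 = 15, then 1 + 5 = 6
    assert not divisible_by_3(493)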
Divisibility by 4
The basic rule for divisibility by 4 is that if the number formed by the last two digits in a number is divisible by 4, the original number is divisible by 4; this is because 100 is divisible by 4 and so adding hundreds, thousands, etc. is simply adding another number that is divisible by 4. If any number ends in a two digit number that you know is divisible by 4 (e.g. 24, 04, 08, etc.), then the whole number will be divisible by 4 regardless of what is before the last two digits.
Alternatively, one can just add half of the last digit to the penultimate digit (or the remaining number). If that number is an even natural number, the original number is divisible by 4.
Also, one can simply divide the number by 2, and then check the result to find if it is divisible by 2. If it is, the original number is divisible by 4. In addition, the result of this test is the same as the original number divided by 4.
Example.
General rule
2092 (The original number)
20 92 (Take the last two digits of the number, discarding any other digits)
92 ÷ 4 = 23 (Check to see if the number is divisible by 4)
2092 ÷ 4 = 523 (If the number that is obtained is divisible by 4, then the original number is divisible by 4)
Second method
6174 (the original number)
Check that the last digit is even; otherwise, 6174 can't be divisible by 4.
61 7 4 (Separate the last 2 digits from the rest of the number)
4 ÷ 2 = 2 (last digit divided by 2)
7 + 2 = 9 (Add half of last digit to the penultimate digit)
Since 9 isn't even, 6174 is not divisible by 4
Third method
1720 (The original number)
1720 ÷ 2 = 860 (Divide the original number by 2)
860 ÷ 2 = 430 (Check to see if the result is divisible by 2)
1720 ÷ 4 = 430 (If the result is divisible by 2, then the original number is divisible by 4)
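These tests are easy to express in code; the following minimal Python sketch (function names are illustrative) shows the general rule and the halving method:

    def divisible_by_4(n):
        return (n % 100) % 4 == 0             # general rule: only the last two digits matter

    def divisible_by_4_by_halving(n):
        half, rem = divmod(n, 2)              # third method: halve, then check even
        return rem == 0 and half % 2 == 0

    assert divisible_by_4(2092) and divisible_by_4_by_halving(1720)
    assert not divisible_by_4(6174)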
Divisibility by 5
Divisibility by 5 is easily determined by checking the last digit in the number (475), and seeing if it is either 0 or 5. If the last number is either 0 or 5, the entire number is divisible by 5.
If the last digit in the number is 0, then the result will be the remaining digits multiplied by 2. For example, the number 40 ends in a zero, so take the remaining digits (4) and multiply that by two (4 × 2 = 8). The result is the same as the result of 40 divided by 5 (40/5 = 8).
If the last digit in the number is 5, then the result will be the remaining digits multiplied by two, plus one. For example, the number 125 ends in a 5, so take the remaining digits (12), multiply them by two (12 × 2 = 24), then add one (24 + 1 = 25). The result is the same as the result of 125 divided by 5 (125/5=25).
Example.
If the last digit is 0
110 (The original number)
11 0 (Take the last digit of the number, and check if it is 0 or 5)
11 0 (If it is 0, take the remaining digits, discarding the last)
11 × 2 = 22 (Multiply the result by 2)
110 ÷ 5 = 22 (The result is the same as the original number divided by 5)
If the last digit is 5
85 (The original number)
8 5 (Take the last digit of the number, and check if it is 0 or 5)
8 5 (If it is 5, take the remaining digits, discarding the last)
8 × 2 = 16 (Multiply the result by 2)
16 + 1 = 17 (Add 1 to the result)
85 ÷ 5 = 17 (The result is the same as the original number divided by 5)
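The 0/5 trick also yields the quotient directly; here is a short Python sketch of the steps above (the function name is ours):

    def quotient_by_5(n):
        # Assumes n ends in 0 or 5; returns n // 5 without dividing by 5.
        rest, last = divmod(n, 10)
        if last == 0:
            return 2 * rest                   # e.g. 110 -> 11 * 2 = 22
        if last == 5:
            return 2 * rest + 1               # e.g. 85 -> 8 * 2 + 1 = 17
        raise ValueError("number is not divisible by 5")

    assert quotient_by_5(110) == 22 and quotient_by_5(85) == 17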
Divisibility by 6
Divisibility by 6 is determined by checking the original number to see if it is both an even number (divisible by 2) and divisible by 3.
If the final digit is even the number is divisible by two, and thus may be divisible by 6. If it is divisible by 2 continue by adding the digits of the original number and checking if that sum is a multiple of 3. Any number which is both a multiple of 2 and of 3 is a multiple of 6.
Example.
324 (The original number)
Final digit 4 is even, so 324 is divisible by 2, and may be divisible by 6.
3 + 2 + 4 = 9 which is a multiple of 3. Therefore the original number is divisible by both 2 and 3 and is divisible by 6.
Divisibility by 7
Divisibility by 7 can be tested by a recursive method. A number of the form 10x + y is divisible by 7 if and only if x − 2y is divisible by 7. In other words, subtract twice the last digit from the number formed by the remaining digits. Continue to do this until a number is obtained for which it is known whether it is divisible by 7. The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7. For example, the number 371: 37 − (2×1) = 37 − 2 = 35; 3 − (2 × 5) = 3 − 10 = −7; thus, since −7 is divisible by 7, 371 is divisible by 7.
Similarly a number of the form 10x + y is divisible by 7 if and only if x + 5y is divisible by 7. So add five times the last digit to the number formed by the remaining digits, and continue to do this until a number is obtained for which it is known whether it is divisible by 7.
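A minimal Python sketch of the subtract-twice-the-last-digit rule (the add-five-times variant is analogous; the function name is ours):

    def divisible_by_7(n):
        # 10x + y is divisible by 7 iff x - 2y is; iterate until the number is small.
        n = abs(n)
        while n > 49:
            x, y = divmod(n, 10)
            n = abs(x - 2 * y)
        return n % 7 == 0

    assert divisible_by_7(371)        # 37 - 2 = 35, and 35 = 5 * 7
    assert not divisible_by_7(372)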
Another method is multiplication by 3. A number of the form 10x + y has the same remainder when divided by 7 as 3x + y. One must multiply the leftmost digit of the original number by 3, add the next digit, take the remainder when divided by 7, and continue from the beginning: multiply by 3, add the next digit, etc. For example, the number 371: 3×3 + 7 = 16 remainder 2, and 2×3 + 1 = 7. This method can be used to find the remainder of division by 7.
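This is just Horner's scheme modulo 7 (since 10 ≡ 3 mod 7); a small sketch:

    def remainder_mod_7(n):
        # Process digits left to right: r -> 3r + digit (mod 7).
        r = 0
        for d in str(n):
            r = (3 * r + int(d)) % 7
        return r

    assert remainder_mod_7(371) == 0
    assert remainder_mod_7(186) == 4          # 186 = 7 * 26 + 4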
A more complicated algorithm for testing divisibility by 7 uses the fact that 10^0 ≡ 1, 10^1 ≡ 3, 10^2 ≡ 2, 10^3 ≡ 6, 10^4 ≡ 4, 10^5 ≡ 5, 10^6 ≡ 1, ... (mod 7). Take each digit of the number (371) in reverse order (173), multiplying them successively by the digits 1, 3, 2, 6, 4, 5, repeating with this sequence of multipliers as long as necessary (1, 3, 2, 6, 4, 5, 1, 3, 2, 6, 4, 5, ...), and adding the products (1×1 + 7×3 + 3×2 = 1 + 21 + 6 = 28). The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7 (hence 371 is divisible by 7 since 28 is).
This method can be simplified by removing the need to multiply. All it would take with this simplification is to memorize the sequence above (132645...), and to add and subtract, but always working with one-digit numbers.
The simplification goes as follows:
Take for instance the number 371
Change all occurrences of 7, 8 or 9 into 0, 1 and 2, respectively. In this example, we get: 301. This second step may be skipped, except for the left most digit, but following it may facilitate calculations later on.
Now convert the first digit (3) into the following digit in the sequence 13264513... In our example, 3 becomes 2.
Add the result in the previous step (2) to the second digit of the number, and substitute the result for both digits, leaving all remaining digits unmodified: 2 + 0 = 2. So 301 becomes 21.
Repeat the procedure until you have a recognizable multiple of 7, or to make sure, a number between 0 and 6. So, starting from 21 (which is a recognizable multiple of 7), take the first digit (2) and convert it into the following in the sequence above: 2 becomes 6. Then add this to the second digit: 6 + 1 = 7.
If at any point the first digit is 8 or 9, these become 1 or 2, respectively. But if it is a 7, it becomes 0 only if no other digits follow; otherwise, it is simply dropped. This is because that 7 would have become 0, and a number with at least two digits cannot begin with 0, which carries no information. According to this, our 7 becomes 0.
If through this procedure you obtain a 0 or any recognizable multiple of 7, then the original number is a multiple of 7. If you obtain any number from 1 to 6, that will indicate how much you should subtract from the original number to get a multiple of 7. In other words, you will find the remainder of dividing the number by 7. For example, take the number 186:
First, change the 8 into a 1: 116.
Now, change 1 into the following digit in the sequence (3), add it to the second digit, and write the result instead of both: 3 + 1 = 4. So 116 becomes now 46.
Repeat the procedure, since the number is greater than 7. Now, 4 becomes 5, which must be added to 6. That is 11.
Repeat the procedure one more time: 1 becomes 3, which is added to the second digit (1): 3 + 1 = 4.
Now we have a number smaller than 7, and this number (4) is the remainder of dividing 186/7. So 186 minus 4, which is 182, must be a multiple of 7.
Note: The reason why this works is that if we have: a+b=c and b is a multiple of any given number n, then a and c will necessarily produce the same remainder when divided by n. In other words, in 2 + 7 = 9, 7 is divisible by 7. So 2 and 9 must have the same remainder when divided by 7. The remainder is 2.
Therefore, if a number n is a multiple of 7 (i.e.: the remainder of n/7 is 0), then adding (or subtracting) multiples of 7 cannot change that property.
What this procedure does, as explained above for most divisibility rules, is simply subtract little by little multiples of 7 from the original number until reaching a number that is small enough for us to remember whether it is a multiple of 7. If 1 becomes a 3 in the following decimal position, that is just the same as converting 10×10^n into 3×10^n. And that is actually the same as subtracting 7×10^n (clearly a multiple of 7) from 10×10^n.
Similarly, when you turn a 3 into a 2 in the following decimal position, you are turning 30×10^n into 2×10^n, which is the same as subtracting 28×10^n (again a multiple of 7) from 30×10^n. The same reasoning applies for all the remaining conversions:
20×10^n − 6×10^n = 14×10^n
60×10^n − 4×10^n = 56×10^n
40×10^n − 5×10^n = 35×10^n
50×10^n − 1×10^n = 49×10^n
First method example
1050 → 105 − 0 = 105 → 10 − 10 = 0. ANSWER: 1050 is divisible by 7.
Second method example
1050 → 0501 (reverse) → 0×1 + 5×3 + 0×2 + 1×6 = 0 + 15 + 0 + 6 = 21 (multiply and add). ANSWER: 1050 is divisible by 7.
Vedic method of divisibility by osculation
Divisibility by seven can be tested by multiplication by the Ekhādika. Convert the divisor seven to the nines family by multiplying by seven: 7 × 7 = 49. Add one, drop the units digit and take the 5, the Ekhādika, as the multiplier. Start on the right. Multiply by 5, add the product to the next digit to the left. Set down that result on a line below that digit. Repeat that method of multiplying the units digit by five and adding that product to the number of tens. Add the result to the next digit to the left. Write down that result below the digit. Continue to the end. If the final result is zero or a multiple of seven, then yes, the number is divisible by seven. Otherwise, it is not. This follows the Vedic ideal, one-line notation.
Vedic method example:
Is 438,722,025 divisible by seven? Multiplier = 5.
4 3 8 7 2 2 0 2 5
42 37 46 37 6 40 37 27
YES
Pohlman–Mass method of divisibility by 7
The Pohlman–Mass method provides a quick solution that can determine if most integers are divisible by seven in three steps or less. This method could be useful in a mathematics competition such as MATHCOUNTS, where time is a factor to determine the solution without a calculator in the Sprint Round.
Step A:
If the integer is 1000 or less, subtract twice the last digit from the number formed by the remaining digits. If the result is a multiple of seven, then so is the original number (and vice versa). For example:
112 -> 11 − (2×2) = 11 − 4 = 7 YES
98 -> 9 − (8×2) = 9 − 16 = −7 YES
634 -> 63 − (4×2) = 63 − 8 = 55 NO
Because 1001 is divisible by seven, an interesting pattern develops for repeating sets of 1, 2, or 3 digits that form 6-digit numbers (leading zeros are allowed) in that all such numbers are divisible by seven. For example:
001 001 = 1,001 / 7 = 143
010 010 = 10,010 / 7 = 1,430
011 011 = 11,011 / 7 = 1,573
100 100 = 100,100 / 7 = 14,300
101 101 = 101,101 / 7 = 14,443
110 110 = 110,110 / 7 = 15,730
01 01 01 = 10,101 / 7 = 1,443
10 10 10 = 101,010 / 7 = 14,430
111,111 / 7 = 15,873
222,222 / 7 = 31,746
999,999 / 7 = 142,857
576,576 / 7 = 82,368
For all of the above examples, subtracting the first three digits from the last three results in a multiple of seven. Notice that leading zeros are permitted to form a 6-digit pattern.
This phenomenon forms the basis for Steps B and C.
Step B:
If the integer is between 1001 and one million, find a repeating pattern of 1, 2, or 3 digits that forms a 6-digit number that is close to the integer (leading zeros are allowed and can help you visualize the pattern). If the positive difference is less than 1000, apply Step A. This can be done by subtracting the first three digits from the last three digits. For example:
341,355 − 341,341 = 14 -> 1 − (4×2) = 1 − 8 = −7 YES
67,326 − 067,067 = 259 -> 25 − (9×2) = 25 − 18 = 7 YES
The fact that 999,999 is a multiple of 7 can be used for determining divisibility of integers larger than one million by reducing the integer to a 6-digit number that can be determined using Step B. This can be done easily by adding the digits left of the first six to the last six digits and following with Step A.
Step C:
If the integer is larger than one million, subtract the nearest multiple of 999,999 and then apply Step B. For even larger numbers, use larger sets such as 12-digits (999,999,999,999) and so on. Then, break the integer into a smaller number that can be solved using Step B. For example:
22,862,420 − (999,999 × 22) = 22,862,420 − 21,999,978 -> 862,420 + 22 = 862,442
862,442 -> 862 − 442 (Step B) = 420 -> 42 − (0×2) (Step A) = 42 YES
This allows adding and subtracting alternating sets of three digits to determine divisibility by seven. Understanding these patterns allows you to quickly calculate divisibility of seven as seen in the following examples:
Pohlman–Mass method of divisibility by 7, examples:
Is 98 divisible by seven?
98 -> 9 − (8×2) = 9 − 16 = −7 YES (Step A)
Is 634 divisible by seven?
634 -> 63 − (4×2) = 63 − 8 = 55 NO (Step A)
Is 355,341 divisible by seven?
355,341 − 341,341 = 14,000 (Step B) -> 014 − 000 (Step B) -> 14 = 1 − (4×2) (Step A) = 1 − 8 = −7 YES
Is 42,341,530 divisible by seven?
42,341,530 -> 341,530 + 42 = 341,572 (Step C)
341,572 − 341,341 = 231 (Step B)
231 -> 23 − (1×2) = 23 − 2 = 21 YES (Step A)
Using quick alternating additions and subtractions:
42,341,530 -> 530 − 341 + 42 = 189 + 42 = 231 -> 23 − (1×2) = 21 YES
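The alternating three-digit grouping behind Step C is equally direct in code; a minimal Python sketch relying only on 1000 ≡ −1 (mod 7):

    def mod7_alternating_groups(n):
        # Alternately add and subtract three-digit groups, starting from the right.
        sign, total = 1, 0
        while n:
            n, group = divmod(n, 1000)
            total += sign * group
            sign = -sign
        return total % 7

    assert mod7_alternating_groups(42341530) == 0    # 530 - 341 + 42 = 231 = 7 * 33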
Multiplication by 3 method of divisibility by 7, examples:
Is 98 divisible by seven?
98 -> 9 remainder 2 -> 2×3 + 8 = 14 YES
Is 634 divisible by seven?
634 -> 6×3 + 3 = 21 -> remainder 0 -> 0×3 + 4 = 4 NO
Is 355,341 divisible by seven?
3 × 3 + 5 = 14 -> remainder 0 -> 0×3 + 5 = 5 -> 5×3 + 3 = 18 -> remainder 4 -> 4×3 + 4 = 16 -> remainder 2 -> 2×3 + 1 = 7 YES
Find remainder of 1036125837 divided by 7
1×3 + 0 = 3
3×3 + 3 = 12 remainder 5
5×3 + 6 = 21 remainder 0
0×3 + 1 = 1
1×3 + 2 = 5
5×3 + 5 = 20 remainder 6
6×3 + 8 = 26 remainder 5
5×3 + 3 = 18 remainder 4
4×3 + 7 = 19 remainder 5
Answer is 5
Finding remainder of a number when divided by 7
7 − (1, 3, 2, −1, −3, −2, cycle repeats for the next six digits) Period: 6 digits.
Recurring numbers: 1, 3, 2, −1, −3, −2
Minimum magnitude sequence
(1, 3, 2, 6, 4, 5, cycle repeats for the next six digits) Period: 6 digits.
Recurring numbers: 1, 3, 2, 6, 4, 5
Positive sequence
Multiply the rightmost digit by the leftmost number in the sequence and the second rightmost digit by the second number in the sequence, and so on and so forth. Next, compute the sum of all the values and reduce the sum modulo 7.
Example: What is the remainder when 1036125837 is divided by 7?
Multiplication of the rightmost digit = 1 × 7 = 7
Multiplication of the second rightmost digit = 3 × 3 = 9
Third rightmost digit = 8 × 2 = 16
Fourth rightmost digit = 5 × −1 = −5
Fifth rightmost digit = 2 × −3 = −6
Sixth rightmost digit = 1 × −2 = −2
Seventh rightmost digit = 6 × 1 = 6
Eighth rightmost digit = 3 × 3 = 9
Ninth rightmost digit = 0
Tenth rightmost digit = 1 × −1 = −1
Sum = 33
33 modulus 7 = 5
Remainder = 5
Digit pair method of divisibility by 7
This method uses the 1, −3, 2 pattern on the digit pairs. That is, the divisibility of any number by seven can be tested by first separating the number into digit pairs, and then applying the algorithm on three digit pairs (six digits). When the number is smaller than six digits, fill zeros to the right side until there are six digits. When the number is larger than six digits, repeat the cycle on the next six-digit group and then add the results. Repeat the algorithm until the result is a small number. The original number is divisible by seven if and only if the number obtained using this algorithm is divisible by seven. This method is especially suitable for large numbers.
Example 1:
The number to be tested is 157514.
First we separate the number into three digit pairs: 15, 75 and 14.
Then we apply the algorithm: 1 × 15 − 3 × 75 + 2 × 14 = −182. Since a sign change does not affect divisibility, we can continue with 182.
Because the resulting 182 is less than six digits, we add zeros to the right side until it is six digits.
Then we apply our algorithm again: 1 × 18 − 3 × 20 + 2 × 0 = −42
The result −42 is divisible by seven, thus the original number 157514 is divisible by seven.
Example 2:
The number to be tested is 15751537186.
(1 × 15 − 3 × 75 + 2 × 15) + (1 × 37 − 3 × 18 + 2 × 60) = −180 + 103 = −77
The result −77 is divisible by seven, thus the original number 15751537186 is divisible by seven.
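A hedged Python sketch of this digit-pair method, pairing from the left and right-padding with zeros exactly as in the examples (the function name is ours):

    def reduce_digit_pairs(n):
        # Apply weights 1, -3, 2 to successive digit pairs taken from the left.
        s = str(n)
        s += "0" * (-len(s) % 6)              # pad on the right to six-digit groups
        pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
        weights = [1, -3, 2] * (len(pairs) // 3)
        return sum(w * p for w, p in zip(weights, pairs))

    assert reduce_digit_pairs(157514) == -182        # -182 = -26 * 7
    assert reduce_digit_pairs(15751537186) == -77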
Another digit pair method of divisibility by 7
Method
This is a non-recursive method to find the remainder left by a number on dividing by 7:
Separate the number into digit pairs starting from the ones place. Prepend the number with 0 to complete the final pair if required.
Calculate the remainders left by each digit pair on dividing by 7.
Multiply the remainders with the appropriate multiplier from the sequence 1, 2, 4, 1, 2, 4, ... : the remainder from the digit pair consisting of ones place and tens place should be multiplied by 1, hundreds and thousands by 2, ten thousands and hundred thousands by 4, million and ten million again by 1 and so on.
Calculate the remainders left by each product on dividing by 7.
Add these remainders.
The remainder of the sum when divided by 7 is the remainder of the given number when divided by 7.
For example:
The number 194,536 leaves a remainder of 6 on dividing by 7.
The number 510,517,813 leaves a remainder of 1 on dividing by 7.
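The same steps in code — a minimal Python sketch cycling the multipliers 1, 2, 4 over digit pairs taken from the ones place (the function name is ours):

    def mod7_digit_pairs(n):
        multipliers = (1, 2, 4)               # 100^i mod 7 repeats 1, 2, 4
        total, i = 0, 0
        while n:
            n, pair = divmod(n, 100)
            total += (pair % 7) * multipliers[i % 3]
            i += 1
        return total % 7

    assert mod7_digit_pairs(194536) == 6
    assert mod7_digit_pairs(510517813) == 1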
Proof of correctness of the method
The method is based on the observation that 100 leaves a remainder of 2 when divided by 7. And since we are breaking the number into digit pairs we essentially have powers of 100.
1 mod 7 = 1
100 mod 7 = 2
10,000 mod 7 = 2^2 = 4
1,000,000 mod 7 = 2^3 = 8; 8 mod 7 = 1
100,000,000 mod 7 = 2^4 = 16; 16 mod 7 = 2
10,000,000,000 mod 7 = 2^5 = 32; 32 mod 7 = 4
And so on.
The correctness of the method is then established by the following chain of equalities:
Let N be the given number, separated into digit pairs from the ones place: N = a0 + a1·100 + a2·100^2 + ... Then N mod 7 = (a0·1 + a1·2 + a2·4 + a3·1 + ...) mod 7, because the powers of 100 leave the repeating remainders 1, 2, 4 when divided by 7; this is exactly the sum computed by the method.
Divisibility by 11
Method
In order to check divisibility by 11, consider the alternating sum of the digits. For example with 907,071:
9 − 0 + 7 − 0 + 7 − 1 = 22 = 2 × 11,
so 907,071 is divisible by 11.
We can either start with +1 or −1 as the sign of the leading digit, since multiplying the whole sum by −1 does not change anything.
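A one-line Python rendering of the alternating-sum test (the sign convention is immaterial, as noted):

    def divisible_by_11(n):
        alt = sum((-1) ** i * int(d) for i, d in enumerate(str(n)))
        return alt % 11 == 0

    assert divisible_by_11(907071)            # 9 - 0 + 7 - 0 + 7 - 1 = 22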
Proof of correctness of the method
Considering that 10 ≡ −1 (mod 11), we can write for any integer N = a0 + a1·10 + a2·10^2 + ...:
N ≡ a0 − a1 + a2 − ... (mod 11),
the alternating sum of its digits.
Divisibility by 13
Remainder Test
13 (1, −3, −4, −1, 3, 4, cycle goes on.)
If you are not comfortable with negative numbers, then use this sequence. (1, 10, 9, 12, 3, 4)
Multiply the rightmost digit of the number by the leftmost number in the sequence shown above, the second rightmost digit by the second number in the sequence, and so on. The cycle goes on.
Example: What is the remainder when 321 is divided by 13?
Using the first sequence,
Ans: 1 × 1 + 2 × −3 + 3 × −4 = −17
Remainder = −17 mod 13 = 9
Example: What is the remainder when 1234567 is divided by 13?
Using the second sequence,
Answer: 7 × 1 + 6 × 10 + 5 × 9 + 4 × 12 + 3 × 3 + 2 × 4 + 1 × 1 = 178 mod 13 = 9
Remainder = 9
A recursive method can be derived using the facts that 10 ≡ −3 (mod 13) and that 40 = 4 × 10 ≡ 1 (mod 13). The first implies that a number is divisible by 13 iff removing the first digit and subtracting 3 times that digit from the new first digit yields a number divisible by 13. The second gives the rule that 10x + y is divisible by 13 iff x + 4y is divisible by 13. For example, to test the divisibility of 1761 by 13 we can reduce this to the divisibility of 461 by the first rule. Using the second rule, this reduces to the divisibility of 50, and doing that again yields 5. So, 1761 is not divisible by 13.
Testing 871 this way reduces it to the divisibility of 91 using the second rule, and then 13 using that rule again, so we see that 871 is divisible by 13.
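A short Python sketch of the second reduction rule (the cutoff at two digits avoids the two-digit fixed points 13, 26, and 39 of the map):

    def divisible_by_13(n):
        # 10x + y is divisible by 13 iff x + 4y is (since 40 = 1 mod 13).
        while n > 99:
            x, y = divmod(n, 10)
            n = x + 4 * y
        return n % 13 == 0

    assert divisible_by_13(871)               # 871 -> 91 = 7 * 13
    assert not divisible_by_13(1761)          # 1761 -> 180 -> 18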
Beyond 30
Divisibility properties of numbers can be determined in two ways, depending on the type of the divisor.
Composite divisors
A number is divisible by a given divisor if it is divisible by the highest power of each of its prime factors. For example, to determine divisibility by 36, check divisibility by 4 and by 9. Note that checking 3 and 12, or 2 and 18, would not be sufficient. A table of prime factors may be useful.
A composite divisor may also have a rule formed using the same procedure as for a prime divisor, given below, with the caveat that the manipulations involved must not introduce any factor which is present in the divisor. For instance, one cannot make a rule for 14 that involves multiplying the equation by 7. This is not an issue for prime divisors because they have no smaller factors.
Prime divisors
The goal is to find an inverse to 10 modulo the prime under consideration (does not work for 2 or 5) and use that as a multiplier to make the divisibility of the original number by that prime depend on the divisibility of the new (usually smaller) number by the same prime.
Using 31 as an example, since 10 × (−3) = −30 = 1 mod 31, we get the rule for using y − 3x in the table below. Likewise, since 10 × (28) = 280 = 1 mod 31 also, we obtain a complementary rule y + 28x of the same kind - our choice of addition or subtraction being dictated by arithmetic convenience of the smaller value. In fact, this rule for prime divisors besides 2 and 5 is really a rule for divisibility by any integer relatively prime to 10 (including 33 and 39; see the table below). This is why the last divisibility condition in the tables above and below for any number relatively prime to 10 has the same kind of form (add or subtract some multiple of the last digit from the rest of the number).
Generalized divisibility rule
To test for divisibility by D, where D ends in 1, 3, 7, or 9, the following method can be used. Find any multiple of D ending in 9. (If D ends respectively in 1, 3, 7, or 9, then multiply by 9, 3, 7, or 1.) Then add 1 and divide by 10, denoting the result as m. Then a number N = 10t + q is divisible by D if and only if mq + t is divisible by D. If the number is too large, you can also break it down into several strings with e digits each, satisfying either 10^e ≡ 1 or 10^e ≡ −1 (mod D). The sum (or alternating sum) of the numbers has the same divisibility as the original one.
For example, to determine whether 913 = 10 × 91 + 3 is divisible by 11, find that m = (11 × 9 + 1) ÷ 10 = 10. Then mq + t = 10 × 3 + 91 = 121; this is divisible by 11 (with quotient 11), so 913 is also divisible by 11. As another example, to determine whether 689 = 10 × 68 + 9 is divisible by 53, find that m = (53 × 3 + 1) ÷ 10 = 16. Then mq + t = 16 × 9 + 68 = 212, which is divisible by 53 (with quotient 4); so 689 is also divisible by 53.
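The generalized rule is easy to implement; a hedged Python sketch (the function names are ours, and the loop bound is chosen so the reduction strictly decreases):

    def multiplier(d):
        # Find m: take a multiple of d ending in 9, add 1, divide by 10.
        k = {1: 9, 3: 3, 7: 7, 9: 1}[d % 10]
        return (d * k + 1) // 10

    def divisible_by(n, d):
        # Valid for divisors d ending in 1, 3, 7, or 9.
        m = multiplier(d)
        while n >= 10 * m:                    # m*q + t < n holds in this range
            t, q = divmod(n, 10)
            n = m * q + t
        return n % d == 0

    assert divisible_by(913, 11)              # m = 10: 10*3 + 91 = 121
    assert divisible_by(689, 53)              # m = 16: 16*9 + 68 = 212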
Alternatively, any number Q = 10c + d is divisible by n = 10a + b, such that gcd(n, 10) = 1, if c + D(n)d = An for some integer A, where D(n) is a piecewise-defined multiplier.
The first few terms of the sequence, generated by D(n), are 1, 1, 5, 1, 10, 4, 12, 2, ... .
The piecewise form of D(n) and the sequence generated by it were first published by Bulgarian mathematician Ivan Stoykov in March 2020.
Proofs
Proof using basic algebra
Many of the simpler rules can be produced using only algebraic manipulation, creating binomials and rearranging them. By writing a number as the sum of each digit times a power of 10 each digit's power can be manipulated individually.
Case where all digits are summed
This method works for divisors that are factors of 10 − 1 = 9.
Using 3 as an example, 3 divides 9 = 10 − 1. That means 10 ≡ 1 (mod 3) (see modular arithmetic). The same for all the higher powers of 10: they are all congruent to 1 modulo 3. Since two things that are congruent modulo 3 are either both divisible by 3 or both not, we can interchange values that are congruent modulo 3. So, in a number such as the following, we can replace all the powers of 10 by 1:
100a + 10b + c ≡ a + b + c (mod 3),
which is exactly the sum of the digits.
Case where the alternating sum of digits is used
This method works for divisors that are factors of 10 + 1 = 11.
Using 11 as an example, 11 divides 11 = 10 + 1. That means 10 ≡ −1 (mod 11). For the higher powers of 10, they are congruent to 1 for even powers and congruent to −1 for odd powers: 10^n ≡ (−1)^n (mod 11).
Like the previous case, we can substitute powers of 10 with congruent values:
100a + 10b + c ≡ a − b + c (mod 11),
which is also the difference between the sum of digits at odd positions and the sum of digits at even positions.
Case where only the last digit(s) matter
This applies to divisors that are a factor of a power of 10. This is because sufficiently high powers of the base are multiples of the divisor, and can be eliminated.
For example, in base 10, the factors of 10^1 include 2, 5, and 10. Therefore, divisibility by 2, 5, and 10 only depends on whether the last 1 digit is divisible by those divisors. The factors of 10^2 include 4 and 25, and divisibility by those only depends on the last 2 digits.
Case where only the last digit(s) are removed
Most numbers do not divide 9 or 10 evenly, but do divide a higher power 10^n or 10^n − 1. In this case the number is still written in powers of 10, but not fully expanded.
For example, 7 does not divide 9 or 10, but does divide 98, which is close to 100. Thus, proceed from
N = 100a + b,
where in this case a is any integer, and b can range from 0 to 99. Next,
N = (98 + 2)a + b,
and again expanding,
N = 98a + 2a + b,
and after eliminating the known multiple of 7, 98a, the result is
2a + b,
which is the rule "double the number formed by all but the last two digits, then add the last two digits".
Case where the last digit(s) is multiplied by a factor
The representation of the number may also be multiplied by any number relatively prime to the divisor without changing its divisibility. After observing that 7 divides 21, we can perform the following with a number of the form 10x + y:
after multiplying by 2, this becomes
20x + 2y,
and then
21x − x + 2y.
Eliminating the multiple 21x gives
−x + 2y,
and multiplying by −1 gives
x − 2y.
Either of the last two rules may be used, depending on which is easier to perform. They correspond to the rule "subtract twice the last digit from the rest".
Proof using modular arithmetic
This section will illustrate the basic method; all the rules can be derived following the same procedure. The following requires a basic grounding in modular arithmetic; for divisibility other than by 2's and 5's the proofs rest on the basic fact that 10 mod m is invertible if 10 and m are relatively prime.
For 2n or 5n
Only the last n digits need to be checked.
Representing x as
x = y·10^n + z,
where z consists of the last n digits, the term y·10^n is divisible by 2^n (and by 5^n), and the divisibility of x is the same as that of z.
For 7
Since 10 × 5 ≡ 10 × (−2) ≡ 1 (mod 7), we can do the following:
Representing x as
x = 10y + z,
we get −2x = −20y − 2z ≡ y − 2z (mod 7), so x is divisible by 7 if and only if y − 2z is divisible by 7.
See also
Division by zero
Parity (mathematics)
References
Sources
External links
Divisibility Criteria at cut-the-knot
Stupid Divisibility Tricks: divisibility rules for 2–100.
Elementary number theory
Division (mathematics)
Articles containing proofs
Mathematics-related lists | Divisibility rule | [
"Mathematics"
] | 8,280 | [
"Elementary number theory",
"Articles containing proofs",
"Elementary mathematics",
"Number theory"
] |
991,217 | https://en.wikipedia.org/wiki/Perspectivism | Perspectivism (also called perspectivalism) is the epistemological principle that perception of and knowledge of something are always bound to the interpretive perspectives of those observing it. While perspectivism does not regard all perspectives and interpretations as being of equal truth or value, it holds that no one has access to an absolute view of the world cut off from perspective. Instead, all such viewing occurs from some point of view which in turn affects how things are perceived. Rather than attempt to determine truth by correspondence to things outside any perspective, perspectivism thus generally seeks to determine truth by comparing and evaluating perspectives among themselves. Perspectivism may be regarded as an early form of epistemological pluralism, though in some accounts it also includes treatment of value theory, moral psychology, and realist metaphysics.
Early forms of perspectivism have been identified in the philosophies of Protagoras, Michel de Montaigne, and Gottfried Leibniz. However, its first major statement is considered to be Friedrich Nietzsche's development of the concept in the 19th century, influenced by Gustav Teichmüller's use of the term some years prior. For Nietzsche, perspectivism takes the form of a realist antimetaphysics while rejecting both the correspondence theory of truth and the notion that the truth-value of a belief always constitutes its ultimate worth-value. The perspectival conception of objectivity used by Nietzsche sees the deficiencies of each perspective as remediable by an asymptotic study of the differences between them. This stands in contrast to Platonic notions in which objective truth is seen to reside in a wholly non-perspectival domain.
According to Alexander Nehamas, perspectivism is often misinterpreted as a form of relativism, whereby we acknowledge the true virtue of fully rejecting the 'Law of excluded middle' regarding a particular proposition. Michael Lacewing adds that although perspectivism does not accede to an objective view of the world that is detached from our subjectivity, our assessment of reality can still approach "objectivity" subjectively and asymptotically. Nehamas also describes how perspectivism does not prohibit someone from holding some interpretations to be definitively true. It only alerts us that we cannot objectively determine the truth from outside our perspective. The idea that perspectivism is an absolutely true thesis is called weak perspectivism by Brian Lightbody.
The basic principle that things are perceived differently from different perspectives (or that perspective determines one's limited and unprivileged access to knowledge) has sometimes been accounted as a rudimentary, uncontentious form of perspectivism. The basic practice of comparing contradictory perspectives to one another may also be considered one such form of perspectivism, as may the entire philosophical problem of how true knowledge is to penetrate one's perspectival limitations.
Precursors and early developments
In Western languages, scholars have found perspectivism in the philosophies of Heraclitus (c. 535 – c. 475 BCE), Protagoras (c. 490 – c. 420 BCE), Michel de Montaigne (1533 – 1592 CE), and Gottfried Leibniz (1646 – 1716 CE). The origins of perspectivism have also been found to lie within Renaissance developments in the philosophy of art and its artistic notion of perspective. In Asian languages, scholars have found perspectivism in Buddhist, Jain, and Daoist texts. Anthropologists have found a kind of perspectivism in the thinking of some indigenous peoples. Some theologians believe John Calvin interpreted various scriptures in a perspectivist manner.
Ancient Greek philosophy
The Western origins of perspectivism can be found in the pre-Socratic philosophies of Heraclitus and Protagoras. In fact, a major cornerstone of Plato's philosophy is his rejection of and opposition to perspectivism, this forming a principal element of his aesthetics, ethics, epistemology, and theology. The antiperspectivism of Plato made him a central target of critique for later perspectival philosophers such as Nietzsche.
Montaigne
Montaigne's philosophy presents perspectivism less as a doctrinaire position than as a core philosophical approach put into practice. Inasmuch as no one can occupy a God's-eye view, Montaigne holds that no one has access to a view which is totally unbiased, which does not interpret according to its own perspective. It is instead only the underlying psychological biases which view one's own perspective as unbiased. In a passage from his "Of Cannibals", he writes:
Nietzsche
In his works, Nietzsche makes a number of statements on perspective which at times contrast each other throughout the development of his philosophy. Nietzsche begins by challenging the underlying notions of 'viewing from nowhere', 'viewing from everywhere', and 'viewing without interpreting' as being absurdities. Instead, all viewing is attached to some perspective, and all viewers are limited in some sense to the perspectives at their command. In The Genealogy of Morals he writes:
In this, Nietzsche takes a contextualist approach which rejects any God's-eye view of the world. This has been further linked to his notion of the death of God and the dangers of a resulting relativism. However, Nietzsche's perspectivism itself stands in sharp contrast to any such relativism. In outlining his perspectivism, Nietzsche rejects those who claim everything to be subjective, by disassembling the notion of the subject as itself a mere invention and interpretation. He further states that, since the two are mutually dependent on each other, the collapse of the God's-eye view causes the notion of the thing-in-itself to fall apart with it. Nietzsche takes this collapse to reveal, through his genealogical project, that all that has been considered non-perspectival knowledge, the entire tradition of Western metaphysics, has itself been only a perspective. His perspectivism and genealogical project are further integrated into each other in addressing the psychological drives that underlie various philosophical programs and perspectives, as a form of critique. Here, contemporary scholar Ken Gemes views Nietzsche's perspectivism above all as a principle of moral psychology, rejecting outright any interpretation of it as an epistemological thesis. It is through this method of critique that the deficiencies of various perspectives can be alleviated—through a critical mediation of the differences between them rather than any appeals to the non-perspectival. In a posthumously published aphorism from The Will to Power, Nietzsche writes:
While Nietzsche does not plainly reject truth and objectivity, he does reject the notions of absolute truth, external facts, and non-perspectival objectivity.
Truth theory and the value of truth
Despite receiving much attention within contemporary philosophy, there is no academic consensus on Nietzsche's conception of truth. While his perspectivism presents a number of challenges regarding the nature of truth, its more controversial element lies in its questioning of the value of truth. Contemporary scholars Steven D. Hales and Robert C. Welshon write that:
Later developments
20th century
In the 20th century, perspectivism was discussed separately by José Ortega y Gasset and Karl Jaspers. Ortega's perspectivism replaced his previous position that "man is completely social". His reversal is prominent in his work Verdad y perspectiva ("Truth and perspective"), where he explained that "each man has a mission of truth" and that what he sees of reality no other eye sees. He explained:
From different positions two people see the same surroundings. However, they do not see the same thing. Their different positions mean that the surroundings are organized in a different way: what is in the foreground for one may be in the background for another. Furthermore, as things are hidden one behind another, each person will see something that the other may not.
Ortega also maintained that perspective is perfected by the multiplication of its viewpoints. He noted that war transpires due to the lack of perspective and failure to see the larger contexts of the actions among nations. Ortega also cited the importance of phenomenology in perspectivism as he argued against speculation and for the importance of concrete evidence in understanding truth and reality. In this discourse, he highlighted the role of "circumstance" in finding out the truth, since it allows us to understand realities beyond ourselves.
21st century
During the 21st century, perspectivism has led to a number of developments within analytic philosophy and philosophy of science, particularly under the early influence of Ronald Giere, Jay Rosenberg, Ernest Sosa, and others. This contemporary form of perspectivism, also known as scientific perspectivism, is more narrowly focused than prior forms—centering on the perspectival limitations of scientific models, theories, observations, and focused interest, while remaining more compatible for example with Kantian philosophy and correspondence theories of truth. Furthermore, scientific perspectivism has come to address a number of scientific fields such as physics, biology, cognitive neuroscience, and medicine, as well as interdisciplinarity and philosophy of time. Studies of perspectivism have also been introduced into contemporary anthropology, initially through the influence of Eduardo Viveiros de Castro and his research into indigenous cultures of South America.
Types of perspectivism
Contemporary types of perspectivism include:
Individualist perspectivism
Collectivist perspectivism
Transcendental perspectivism
Theological perspectivism
See also
Anekantavada, a fundamental doctrine of Jainism setting forth a pluralistic metaphysics, traceable to Mahavira (599–527 BCE)
Blind men and an elephant
Conceptual framework
Consilience, the unity of knowledge
Constructivist epistemology
Eclecticism
Fallibilism
Fusion of horizons
Integral theory (disambiguation)
Intersubjectivity
Metaphilosophy
Model-dependent realism
Moral nihilism
Moral skepticism
Multiperspectivalism, a current in Calvinist epistemology
Philosophy of Friedrich Nietzsche
Point of view (philosophy)
Rhizome (philosophy)
Standpoint theory
Value pluralism
References
Consensus reality
Epistemological theories
Philosophy of Friedrich Nietzsche
Hermeneutics
Philosophical analogies
Philosophical theories
Criticism of rationalism
Social epistemology | Perspectivism | [
"Technology"
] | 2,146 | [
"Social epistemology",
"Science and technology studies"
] |
991,335 | https://en.wikipedia.org/wiki/Pica%20%28typography%29 | The pica is a typographic unit of measure corresponding to approximately 1/6 of an inch, or from 1/68 to 1/73 of a foot. One pica is further divided into 12 points.
In printing, three pica measures are used:
The French pica of 12 Didot points (also called cicero) generally is: 12 × 0.376 mm = 4.512 mm.
The American pica of 0.16604 in (4.2175 mm). It was established by the United States Type Founders' Association in 1886. In TeX one pica is 400/2409 of an inch.
The contemporary computer PostScript pica is exactly 1/6 of an inch or 1/72 of a foot, i.e. 4.233 mm or 0.1667 in.
Publishing applications such as Adobe InDesign and QuarkXPress represent pica measurements with whole-number picas left of a lower-case p, followed by the points number, for example: 5p6 represents 5 picas and 6 points, or 5½ picas.
Cascading Style Sheets (CSS) defined by the World Wide Web Consortium use pc as the abbreviation for pica (1/6 of an inch), and pt for point (1/72 of an inch).
The pica is also used in measuring the font capacity and is applied in the process of copyfitting. The font length is measured there by the number of characters per pica (cpp). As books are most often printed with proportional fonts, cpp of a given font is usually a fractional number. For example, an 11-point font (like Helvetica) may have 2.4 cpp, thus a 5-inch (30-pica) line of a usual octavo-sized (6×8 in) book page would contain around 72 characters (including spaces).
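As a quick Python illustration of this copyfitting arithmetic (the figures are the example values above, not universal constants):

    line_length_picas = 30     # a 5-inch line on a 6 x 8 in octavo page
    chars_per_pica = 2.4       # example cpp for an 11-point face such as Helvetica
    print(line_length_picas * chars_per_pica)   # 72.0 characters, spaces included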
There have existed copyfitting tables for a number of typefaces, and typefoundries often provided the number of characters per pica for each type in their specimen catalogs. Similar tables exist as well with which one can estimate the number of characters per pica knowing the lower-case alphabet length.
The typographic pica should not be confused with the Pica font of the typewriters, which means a font where 10 typed characters make up a line one inch long.
See also
Point (typography)
Pitch (typewriter)
Traditional point-size names
References
Typography
Units of length
Customary units of measurement in the United States | Pica (typography) | [
"Mathematics"
] | 488 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
991,349 | https://en.wikipedia.org/wiki/Point%20%28typography%29 | In typography, the point is the smallest unit of measure. It is used for measuring font size, leading, and other items on a printed page. The size of the point has varied throughout printing's history. Since the 18th century, the size of a point has been between 0.18 and 0.4 millimeters. Following the advent of desktop publishing in the 1980s and 1990s, digital printing has largely supplanted letterpress printing and has established the desktop publishing (DTP) point as the de facto standard. The DTP point is defined as 1/72 of an inch (0.3528 mm) and, as with earlier American point sizes, is considered to be 1/12 of a pica.
In metal type, the point size of the font describes the height of the metal body on which the typeface's characters were cast. In digital type, letters of a font are designed around an imaginary space called an em square. When a point size of a font is specified, the font is scaled so that its em square has a side length of that particular length in points. Although the letters of a font usually fit within the font's em square, there is not necessarily any size relationship between the two, so the point size does not necessarily correspond to any measurement of the size of the letters on the printed page.
History
The point was first established by the Milanese typographer Francesco Torniella da Novara ( – 1589) in his 1517 alphabet, L'Alfabeto. The construction of the alphabet is the first based on a logical measurement called the "punto", which corresponds to the ninth part of the height of the letters, or the thickness of the principal stroke.
Notations
A measurement in points can be represented in three different ways. For example, 14 points (1 pica plus 2 points) can be written:
(12 points would be just "")—traditional style
1p2 (12 points would be just "1p")—format for desktop publishing
14pt (12 points would be "12pt" or "1pc" since it is the same as 1 pica)—format used by Cascading Style Sheets defined by the World Wide Web Consortium.
Varying standards
There have been many definitions of a "point" since the advent of typography. Traditional continental European points at about 0.376 mm are usually a bit larger than English points at around 0.351 mm.
French points
The Truchet point, the first modern typographic point, was 1/144 of a French inch or 1/1728 of the royal foot. It was invented by the French clergyman Sébastien Truchet.
During the metrication of France amid its revolution, a 1799 law declared the meter to be exactly 443.296 French lines long. This established the length of the royal foot as 144/443.296 m, or about 325 mm.
The Truchet point therefore became equal to 1/5319.552 m, or about 0.18799 mm.
It has also been cited as exactly 0.188 mm.
The Fournier point was established by Pierre Simon Fournier in 1737. The system of Fournier was based on a different French foot of c. 298 mm. With the usual convention that 1 foot equals 12 inches, 1 inch (pouce) was divided into 12 lines (lignes) and 1 line was further divided into 6 typographic points (points typographiques). One Fournier point is about 0.0135 English inches.
Fournier printed a reference scale of 144 points over two inches; however, it was too rough to accurately measure a single point.
The Fournier point did not achieve lasting popularity despite being revived by the Monotype Corporation in 1927. It was still a standard in Belgium, in parts of Austria, and in Northern France at the beginning of the 20th century. In Belgium, the Fournier system was used until the 1970s and later. It was called the "mediaan"-system.
The Didot point, established by François-Ambroise Didot in 1783, was an attempt to improve the Fournier system. He did not change the subdivisions (1 inch = 12 subdivisions = 72 points), but defined it strictly in terms of the royal foot, a legal length measure in France: the Didot point is exactly 1/864 of a French foot or 1/72 of a French inch, that is (by 1799) about 0.3760 mm or about 0.0148 in.
However, 12 Fournier points turned out to be 11 Didot points, giving a Fournier point of about 0.345 mm; later sources state slightly differing values. To avoid confusion between the new and the old sizes, Didot also rejected the traditional names, thus parisienne became corps 5, nonpareille became corps 6, and so on. The Didot system prevailed because the French government demanded printing in Didot measurements.
Approximations were subsequently employed, largely owing to the Didot point's unwieldy conversion to metric units (the divisor of its exact conversion ratio has a large prime factorization).
In 1878, Hermann Berthold defined 798 points as being equal to 30 cm, or 2660 points equalling 1 meter: that gives around 0.376 mm to the point. A more precise number, 0.376065 mm, sometimes is given; this is used by TeX as the dd unit. This has become the standard in Germany and Central and Eastern Europe. This size is still mentioned in the technical regulations of the Eurasian Economic Union.
Metric points
pdfTeX, but not plain TeX or LaTeX, also supports a new Didot point (nd) at exactly 0.375 mm and refers to a not further specified 1978 redefinition for it.
The French National Print Office adopted a point of exactly 0.4 mm in about 1810 and continues to use this measurement today (though "recalibrated" to 0.39877 mm).
Japanese and German standardization bodies instead opted for a metric typographic base measure of exactly 0.25 mm, which is sometimes referred to as the quart in Japan. The symbol Q is used in Japanese after the initial letter of quarter millimeter. Due to demand by Japanese typesetters, CSS adopted Q in 2015.
ISO 128 specifies preferred line thicknesses for technical drawings and ISO 9175 specifies respective pens. The steps between nominal sizes are based on a factor of √2 ≈ 1.414 in order to match ISO 216 paper sizes. Since the set of sizes includes thicknesses of 0.1 mm, 0.5 mm, 1 mm and 2 mm, there is also one of 0.35 mm which is almost exactly 1 pica point. In other words, 2^−1.5 mm ≈ 0.354 mm approximates an English typographic point rather well.
American points
The basic unit of measurements in American typography was the pica, usually approximated as one sixth of an inch, but the exact size was not standardized, and various type foundries had been using their own.
During and after the American Revolutionary War, Benjamin Franklin was sent as commissioner (Ambassador) for the United States to France from December 1776 to 1785. While living there he had close contact with the Fournier family, including the father and Pierre Simon Fournier. Franklin wanted to teach his grandson Benjamin Franklin Bache about printing and typefounding, and arranged for him to be trained by François-Ambroise Didot. Franklin then imported French typefounding equipment to Philadelphia to help Bache set up a type-foundry. Around 1790, Bache published a specimen sheet with some Fournier types. After the death of Franklin, the matrices and the Fournier mould were acquired by Binny and Ronaldson, the first permanent type-foundry in America. Successive mergers and acquisitions in 1833, 1860 and 1897 saw the company eventually become known as MacKellar, Smith & Jordan. The Fournier cicero mould was used by them to cast pica-sized type.
Nelson Hawks proposed, like Fournier, to divide one American inch exactly into six picas, and one pica into 12 points. However, this saw opposition because the majority of foundries had been using picas less than one sixth of an inch. So in 1886, after some examination of various picas, the Type Founders Association of the United States approved the pica of the L. Johnson & Co. foundry of Philadelphia (the "Johnson pica") as the most established. The Johnson foundry was influential, being America's first and oldest foundry; established as Binny & Ronaldson in 1796, it would go through several names before being the largest of the 23 foundries that would merge in 1892 to form the American Type Founders Co. The official definition of one pica is 0.166 inch, and one point is 0.013837 inch. That means 6 picas or 72 points constitute 0.996 standard inches. A less precise definition is one pica equals 1/6 inch, and one point 1/72 inch. It was also noticed that 83 picas is nearly equal to 35 cm, so the Type Founders Association also suggested using a 35 cm metal rod for measurements, but this was not accepted by every foundry.
This has become known as the American point system. The British foundries accepted this in 1898.
In modern times this size of the point has been approximated as exactly 1/72.27 (≈ 0.013837) of the inch by Donald Knuth for the default unit of his TeX computer typesetting system and is thus sometimes known as the TeX point, which is 0.3514598 mm.
Old English points
Although the English Monotype manuals used 1 pica = .1660 inch, the manuals used on the European continent use another definition: there 1 pica = .1667 inch, the Old English pica.
As a consequence, all the tables of measurements in the German, Dutch, French, Polish and all other manuals elsewhere on the European continent for the composition caster and the super-caster differ in quite a few details.
The Monotype wedges used at the European continent are marked with an extra E behind the set-size: for instance: 5-12E, 1331-15E etc. When working with the E-wedges in the larger sizes the differences will increase even more.
Desktop publishing point
The desktop publishing point (DTP point) or PostScript point is defined as 1/72 or approximately 0.0139 of the international inch, making it equivalent to 25.4/72 mm ≈ 0.3528 mm. Twelve points make up a pica, and six picas make an inch.
This specification was found in the Xerox Interpress language used for its early digital printers and further developed by John Warnock and Charles Geschke when they created Adobe PostScript. It was adopted by Apple Computer as the standard for the display resolution of the original Macintosh desktop computer and the print resolution for the LaserWriter printer.
In 1996, it was adopted by W3C for Cascading Style Sheets (CSS), where it was later related at a fixed 3:4 ratio to the pixel due to a general (but wrong) assumption of 96-pixel-per-inch screens.
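As a minimal Python illustration of these definitions (the values follow directly from the ratios stated above):

    INCH_IN_MM = 25.4
    dtp_point_mm = INCH_IN_MM / 72       # 0.3527... mm per DTP point
    pica_mm = 12 * dtp_point_mm          # 4.2333... mm per pica
    css_px_per_pt = 4 / 3                # CSS's fixed 3:4 pt:px ratio
    print(round(dtp_point_mm, 4), round(pica_mm, 4), css_px_per_pt)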
Apple point
Since the advent of high-density "Retina" screens with a much higher resolution than the original 72 dots per inch, Apple's programming environment Xcode sizes GUI elements in points that are scaled automatically to a whole number of physical pixels in order to accommodate for screen size, pixel density and typical viewing distance. This Cocoa point is equivalent to the pixel px unit in CSS, the density-independent pixel dp on Android and the effective pixel epx or ep in Windows UWP.
Font sizes
In lead typecasting, most font sizes commonly used in printing have conventional names that differ by country, language and the type of points used.
Desktop publishing software and word processors intended for office and personal use often have a list of suggested font sizes in their user interface, but they are not named and usually an arbitrary value can be entered manually. Microsoft Word, for instance, suggests every even size between 8 and 28 points and, additionally, 9, 11, 36, 48 and 72 points, i.e. the larger sizes equal 3, 4 and 6 picas. While most software nowadays defaults to DTP points, many allow specifying font size in other units of measure (e.g., inches, millimeters, pixels), especially code-based systems such as TeX and CSS.
See also
Dots per inch (DPI)
Pica (typography)
Body height (typography)
Traditional point-size names
References
Further reading
Typography
Units of length
Customary units of measurement in the United States | Point (typography) | [
"Mathematics"
] | 2,519 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
991,459 | https://en.wikipedia.org/wiki/Hopper%20crystal | A hopper crystal is a form of crystal, the shape of which resembles that of a pyramidal hopper container.
The edges of hopper crystals are fully developed, but the interior spaces are not filled in. This results in what appears to be a hollowed out step lattice formation, as if someone had removed interior sections of the individual crystals. In fact, the "removed" sections never filled in, because the crystal was growing so rapidly that there was not enough time (or material) to fill in the gaps. The interior edges of a hopper crystal still show the crystal form characteristic to the specific mineral, and so appear to be a series of smaller and smaller stepped down miniature versions of the original crystal.
Hoppering occurs when electrical attraction is higher along the edges of the crystal; this causes faster growth at the edges than near the face centers. This attraction draws the mineral molecules more strongly than the interior sections of the crystal, thus the edges develop more quickly. However, the basic physics of this type of growth is the same as that of dendrites but, because the anisotropy in the solid–liquid inter-facial energy is so large, the dendrite so produced exhibits a faceted morphology.
Hoppering is common in many minerals, including lab-grown bismuth, galena, quartz (called skeletal or fenster crystals), gold, calcite, halite (salt), and water (ice).
In 2017, Frito-Lay filed for (and later received) a patent for a salt cube hopper crystal. Because the shape increases surface area to volume, it allows people to taste more salt compared to the amount actually consumed.
References
"Hopper crystals" in A New Kind of Science by Stephen Wolfram, p. 993.
External links
Images of hopper crystals, Glendale Community College Earth Science Image Archive
Crystals | Hopper crystal | [
"Chemistry",
"Materials_science"
] | 373 | [
"Crystallography",
"Crystals"
] |
991,484 | https://en.wikipedia.org/wiki/Louis%20J.%20Mordell | Louis Joel Mordell (28 January 1888 – 12 March 1972) was an American-born British mathematician, known for his research in number theory. He was born in Philadelphia, United States, in a Jewish family of Lithuanian extraction.
Education
Mordell was educated at the University of Cambridge where he completed the Cambridge Mathematical Tripos as a student of St John's College, Cambridge, starting in 1906 after successfully passing the scholarship examination. He graduated as third wrangler in 1909.
Research
After graduating, Mordell began independent research into particular Diophantine equations: the question of integer points on the cubic curve, and a special case of what is now called a Thue equation, the Mordell equation
y² = x³ + k.
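For small k, integer points on the Mordell equation can be found by a naive search. The sketch below is illustrative only (the search bound is arbitrary, and serious work on these equations uses far subtler methods):

```python
from math import isqrt

def mordell_integer_points(k: int, x_max: int = 1000) -> list[tuple[int, int]]:
    """Naive search for integer solutions of y^2 = x^3 + k with |x| <= x_max."""
    points = []
    for x in range(-x_max, x_max + 1):
        rhs = x**3 + k
        if rhs < 0:
            continue  # y^2 cannot be negative
        y = isqrt(rhs)
        if y * y == rhs:
            points.extend({(x, y), (x, -y)})  # both square roots (equal when y == 0)
    return points

print(mordell_integer_points(-2))  # [(3, 5), (3, -5)] up to ordering
```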
He took an appointment at Birkbeck College, London in 1913. During World War I he was involved in war work, but also produced one of his major results, proving in 1917 the multiplicative property of Srinivasa Ramanujan's tau-function. The proof was by means, in effect, of the Hecke operators, which had not yet been named after Erich Hecke; it was, in retrospect, one of the major advances in modular form theory, beyond its status as an odd corner of the theory of special functions.
In 1920, he took a teaching position in UMIST, becoming the Fielden Chair of Pure Mathematics at the University of Manchester in 1922 and Professor in 1923. There he developed a third area of interest within number theory, the geometry of numbers. His basic work on Mordell's theorem is from 1921 to 1922, as is the formulation of the Mordell conjecture. He was an Invited Speaker of the International Congress of Mathematicians (ICM) in 1928 in Bologna and in 1932 in Zürich and a Plenary Speaker of the ICM in 1936 in Oslo.
He took British citizenship in 1929. In Manchester he also built up the department, offering posts to a number of outstanding mathematicians who had been forced from posts on the continent of Europe. He brought in Reinhold Baer, G. Billing, Paul Erdős, Chao Ko, Kurt Mahler, and Beniamino Segre. He also recruited J. A. Todd, Patrick du Val, Harold Davenport and Laurence Chisholm Young, and invited distinguished visitors.
In 1945, he returned to Cambridge as a Fellow of St. John's, when elected to the Sadleirian Chair, and became Head of Department. He officially retired in 1953. It was at this time that he had his only formal research students, of whom J. W. S. Cassels was one. His idea of supervising research was said to involve the suggestion that a proof of the transcendence of the Euler–Mascheroni constant was probably worth a doctorate. His book Diophantine Equations (1969) is based on lectures, and gives an idea of his discursive style. Mordell is said to have hated administrative duties.
Anecdote
While visiting the University of Calgary, the elderly Mordell attended the Number Theory seminars and would frequently fall asleep during them. According to a story by number theorist Richard K. Guy, the department head at the time, after Mordell had fallen asleep, someone in the audience asked "Isn't that Stickelberger's theorem?" The speaker said "No it isn't." A few minutes later the person interrupted again and said "I'm positive that's Stickelberger's theorem!" The speaker again said no it wasn't. The lecture ended, and the applause woke up Mordell, and he looked up and pointed at the board, saying "There's old Stickelberger's result!"
See also
Mordell–Weil group
References
1888 births
1972 deaths
20th-century British mathematicians
Academics of Birkbeck, University of London
Academics of the University of Manchester Institute of Science and Technology
Academics of the Victoria University of Manchester
Alumni of St John's College, Cambridge
American people of Lithuanian-Jewish descent
British Jews
De Morgan Medallists
Fellows of St John's College, Cambridge
Fellows of the Royal Society
Number theorists
Mathematicians from Philadelphia
Sadleirian Professors of Pure Mathematics
Central High School (Philadelphia) alumni
American emigrants to the United Kingdom | Louis J. Mordell | [
"Mathematics"
] | 864 | [
"Number theorists",
"Number theory"
] |
991,666 | https://en.wikipedia.org/wiki/Multiple%20instruction%2C%20single%20data | In computing, multiple instruction, single data (MISD) is a type of parallel computing architecture where many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might say that the data is different after processing by each stage in the pipeline. Fault tolerance executing the same instructions redundantly in order to detect and mask errors, in a manner known as task replication, may be considered to belong to this type. Applications for this architecture are much less common than MIMD and SIMD, as the latter two are often more appropriate for common data parallel techniques. Specifically, they allow better scaling and use of computational resources. However, one prominent example of MISD in computing are the Space Shuttle flight control computers.
Systolic arrays
Systolic arrays (a subclass of wavefront processors), first described by H. T. Kung and Charles E. Leiserson, are an example of MISD architecture. In a typical systolic array, parallel input data flows through a network of hard-wired processor nodes which, somewhat like neurons in a brain, combine, process, merge or sort the input data into a derived result.
Systolic arrays are often hard-wired for a specific operation, such as "multiply and accumulate", to perform massively parallel integration, convolution, correlation, matrix multiplication or data sorting tasks. A systolic array typically consists of a large monolithic network of primitive computing nodes, which can be hardwired or software-configured for a specific application. The nodes are usually fixed and identical, while the interconnect is programmable. More general wavefront processors, by contrast, employ sophisticated and individually programmable nodes which may or may not be monolithic, depending on the array size and design parameters. Because the wave-like propagation of data through a systolic array resembles the pulse of the human circulatory system, the name systolic was coined from medical terminology.
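As an illustration of this dataflow, here is a toy software simulation of a square systolic array computing a matrix product (a sketch for exposition, not a model of any particular hardware): operands stream in from the left and top, skewed by one step per row and column, and every node performs one multiply-accumulate per cycle while partial sums stay in place.

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Simulate an n x n grid of multiply-accumulate nodes computing A @ B."""
    n = A.shape[0]
    acc = np.zeros((n, n))    # one stationary accumulator per node
    a_reg = np.zeros((n, n))  # A operand currently held at each node
    b_reg = np.zeros((n, n))  # B operand currently held at each node
    for t in range(3 * n - 2):                 # enough cycles to drain the array
        a_reg = np.roll(a_reg, 1, axis=1)      # A values hop one node to the right
        b_reg = np.roll(b_reg, 1, axis=0)      # B values hop one node down
        for i in range(n):                     # inject skewed boundary inputs
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < n else 0.0
        acc += a_reg * b_reg                   # every node: one multiply-accumulate
    return acc

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0).reshape(3, 3) + 1
print(np.allclose(systolic_matmul(A, B), A @ B))  # True
```

The skewed injection ensures that A[i, k] and B[k, j] meet at node (i, j) on the same cycle, so the correct products accumulate without any partial result ever leaving the grid.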
A significant benefit of systolic arrays is that all operand data and partial results are contained within (passing through) the processor array. There is no need to access external buses, main memory, or internal caches during each operation, as with standard sequential machines. The sequential limits on parallel performance dictated by Amdahl's law also do not apply in the same way because data dependencies are implicitly handled by the programmable node interconnect.
Therefore, systolic arrays are extremely good at artificial intelligence, image processing, pattern recognition, computer vision, and other tasks that animal brains do exceptionally well. Wavefront processors, in general, can also be very good at machine learning by implementing self-configuring neural nets in hardware.
While systolic arrays are officially classified as MISD, their classification is somewhat problematic. Because the input is typically a vector of independent values, the systolic array is not SISD. Since these input values are merged and combined into the result(s) and do not maintain their independence as they would in a SIMD vector processing unit, the array cannot be classified as such. Consequently, the array cannot be classified as a MIMD either, since MIMD can be viewed as a mere collection of smaller SISD and SIMD machines.
Finally, because the data swarm is transformed as it passes through the array from node to node, the multiple nodes are not operating on the same data, which makes the MISD classification a misnomer. The other reason why a systolic array should not qualify as a MISD is the same as the one which disqualifies it from the SISD category: The input data is typically a vector, not a single data value, although one could argue that any given input vector is a single dataset.
The above notwithstanding, systolic arrays are often offered as a classic example of MISD architecture in textbooks on parallel computing and in engineering classes. If the array is viewed from the outside as atomic, it should perhaps be classified as SFMuDMeR = single function, multiple data, merged result(s).
Footnotes
Flynn's taxonomy
Parallel computing
Misd
de:Flynnsche Klassifikation#MISD (Multiple Instruction, Single Data) | Multiple instruction, single data | [
"Technology"
] | 859 | [
"Classes of computers",
"Computers",
"Computer systems"
] |
991,671 | https://en.wikipedia.org/wiki/Grenville%20Turner | Grenville Turner (1 November 1936 – 22 August 2024) was a British geochemist who was a research professor at the University of Manchester. He was one of the pioneers of cosmochemistry.
Education
Todmorden Grammar School
St. John's College, Cambridge (MA)
Balliol College, Oxford
In 1962, he was awarded his D.Phil. (Oxford University's equivalent of a PhD) in nuclear physics.
Career
University of California, Berkeley: assistant professor, 1962–64
University of Sheffield: lecturer in physics, 1964–74, senior lecturer 1974–79, reader 1979–80, professor 1980–88
Caltech: research associate, 1970–71
University of Manchester: professor of isotope geochemistry, Department of Earth Sciences, 1988–
Member of committees for SERC, the British National Space Centre and PPARC
Scientific work
Turner was a leading figure in cosmochemistry from the 1960s onwards. His pioneering work on rare gases in meteorites led him to develop the argon–argon dating technique, which demonstrated the great age of meteorites and provided a precise chronology of rocks brought back by the Apollo missions. He was one of the few UK scientists to be a Principal Investigator of these Apollo samples.
His argon-dating technique involved stepped pyrolysis of the rocks to force out the argon, then determining the isotopic ratios in the gas by mass spectrometry. This was later refined by the use of lasers. These techniques have been invaluable to cosmochemists and geochemists, and have been applied (by Turner and others) to determine the geochronology of diamonds and inclusions in them, and the precise ages of mantle and crustal rocks from the Earth.
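The age equation behind the ⁴⁰Ar/³⁹Ar method is compact enough to sketch. In the snippet below, J is the irradiation parameter calibrated against a standard of known age; the decay constant is the commonly cited total value for ⁴⁰K, and the sample numbers are illustrative only:

```python
import math

LAMBDA_K40 = 5.543e-10  # total 40K decay constant per year (Steiger & Jäger, 1977)

def ar_ar_age(ar40_ar39_ratio: float, J: float) -> float:
    """40Ar/39Ar age in years: t = (1/lambda) * ln(1 + J * 40Ar*/39Ar_K)."""
    return math.log(1.0 + J * ar40_ar39_ratio) / LAMBDA_K40

# A hypothetical heating step with 40Ar*/39Ar = 250 and J = 0.03 dates to
# roughly 3.9 billion years, the scale of ages obtained for Apollo samples.
print(f"{ar_ar_age(250.0, 0.03):.2e} years")
```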
He went on to develop even better techniques, such as iodine-xenon chronology. He used laser resonance ionisation of xenon to measure samples with only a few thousand atoms of xenon; this enabled him to get accurate data from tiny samples, including individual chondrules. He could even trace secondary processes, such as alteration by heat, fluids or shock.
Turner set up the first ion microprobe in the United Kingdom intended for use primarily for examining extraterrestrial material. He used it to measure oxygen-isotope variations in the Martian meteorite ALH 84001. His results cast light on the environment in which the carbonate grains and so-called microfossils in that meteorite formed.
He was a founder member of the UK Cosmochemical Analysis Network, a network of laboratories in research institutions that analyse extraterrestrial material.
He continued to be an active researcher during retirement. In 2004, he announced a plutonium-xenon technique for dating terrestrial materials.
Death
Turner died in Sheffield on 22 August 2024, at the age of 87, after being diagnosed with grade IV astrocytoma in 2022.
Honours and awards
Fellow of the Royal Society, 1980 (member of Council 1990–92)
Fellow, Meteoritical Society, 1980
Rumford Medal of the Royal Society, 1996
Fellow, Geochemical Society and European Association of Geochemistry 1996
Fellow, American Geophysical Union, 1998
Leonard Medal of the Meteoritical Society, 1999
Urey Medal of the European Association of Geochemistry, 2002
Gold Medal of the Royal Astronomical Society for geophysics, 2004
References
Debrett's People of Today, 2006
Who's Who, 2006
The Observatory, October 2005, p285-6
1936 births
2024 deaths
Academics of the University of Sheffield
British physicists
Fellows of the Royal Society
People from Todmorden
Recipients of the Gold Medal of the Royal Astronomical Society
Fellows of the American Geophysical Union
British geochemists
Alumni of St John's College, Cambridge
Alumni of the University of Oxford | Grenville Turner | [
"Chemistry"
] | 771 | [
"Geochemists",
"British geochemists"
] |
991,712 | https://en.wikipedia.org/wiki/Eddington%20Medal | The Eddington Medal is awarded by the Royal Astronomical Society for investigations of outstanding merit in theoretical astrophysics. It is named after Sir Arthur Eddington. First awarded in 1953, the frequency of the prize has varied over the years, at times being every one, two or three years. Since 2013 it has been awarded annually.
Recipients
The source is as cited in the references unless otherwise noted.
See also
List of astronomy awards
List of physics awards
List of prizes named after people
References
External links
Winners
Physics awards
Awards established in 1953
Awards of the Royal Astronomical Society
1953 establishments in the United Kingdom
Astrophysics | Eddington Medal | [
"Physics",
"Astronomy",
"Technology"
] | 114 | [
"Astronomy prizes",
"Astrophysics",
"Awards of the Royal Astronomical Society",
"Science and technology awards",
"Astronomical sub-disciplines",
"Physics awards"
] |
991,784 | https://en.wikipedia.org/wiki/Circumscribed%20sphere | In geometry, a circumscribed sphere of a polyhedron is a sphere that contains the polyhedron and touches each of the polyhedron's vertices. The word circumsphere is sometimes used to mean the same thing, by analogy with the term circumcircle. As in the case of two-dimensional circumscribed circles (circumcircles), the radius of a sphere circumscribed around a polyhedron is called the circumradius of , and the center point of this sphere is called the circumcenter of .
Existence and optimality
When it exists, a circumscribed sphere need not be the smallest sphere containing the polyhedron; for instance, the tetrahedron formed by a vertex of a cube and its three neighbors has the same circumsphere as the cube itself, but can be contained within a smaller sphere having the three neighboring vertices on its equator. However, the smallest sphere containing a given polyhedron is always the circumsphere of the convex hull of a subset of the vertices of the polyhedron.
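For four non-coplanar points — such as the cube-corner tetrahedron just mentioned — the circumsphere can be computed by solving a small linear system: subtracting |c − v₀|² = |c − vᵢ|² pairwise leaves three equations linear in the center c. A minimal sketch (degenerate inputs are not handled):

```python
import numpy as np

def circumsphere(points: np.ndarray) -> tuple[np.ndarray, float]:
    """Circumcenter and circumradius of a tetrahedron given as a 4x3 array."""
    v0, rest = points[0], points[1:]
    A = 2.0 * (rest - v0)                       # from |c-v0|^2 = |c-vi|^2
    b = (rest**2).sum(axis=1) - (v0**2).sum()
    center = np.linalg.solve(A, b)
    return center, float(np.linalg.norm(center - v0))

# A cube vertex and its three neighbours (the tetrahedron discussed above):
tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
center, radius = circumsphere(tet)
print(center, radius)  # [0.5 0.5 0.5] and sqrt(3)/2: the unit cube's circumsphere
```

The computed sphere is the cube's own circumsphere, illustrating that a circumscribed sphere need not be the smallest sphere containing the shape.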
In De solidorum elementis (circa 1630), René Descartes observed that, for a polyhedron with a circumscribed sphere, all faces have circumscribed circles, the circles where the plane of the face meets the circumscribed sphere. Descartes suggested that this necessary condition for the existence of a circumscribed sphere is sufficient, but it is not true: some bipyramids, for instance, can have circumscribed circles for their faces (all of which are triangles) but still have no circumscribed sphere for the whole polyhedron. However, whenever a simple polyhedron has a circumscribed circle for each of its faces, it also has a circumscribed sphere.
Related concepts
The circumscribed sphere is the three-dimensional analogue of the circumscribed circle.
All regular polyhedra have circumscribed spheres, but most irregular polyhedra do not have one, since in general not all vertices lie on a common sphere. The circumscribed sphere (when it exists) is an example of a bounding sphere, a sphere that contains a given shape. It is possible to define the smallest bounding sphere for any polyhedron, and compute it in linear time.
Other spheres defined for some but not all polyhedra include a midsphere, a sphere tangent to all edges of a polyhedron, and an inscribed sphere, a sphere tangent to all faces of a polyhedron. In the regular polyhedra, the inscribed sphere, midsphere, and circumscribed sphere all exist and are concentric.
When the circumscribed sphere is the set of infinite limiting points of hyperbolic space, a polyhedron that it circumscribes is known as an ideal polyhedron.
Point on the circumscribed sphere
There are five convex regular polyhedra, known as the Platonic solids. All Platonic solids have circumscribed spheres. For an arbitrary point M on the circumscribed sphere of a Platonic solid with number of vertices n, if MAᵢ are the distances to the vertices Aᵢ, then

4(MA₁² + MA₂² + ⋯ + MAₙ²)² = 3n(MA₁⁴ + MA₂⁴ + ⋯ + MAₙ⁴).
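The identity is easy to verify numerically; the sketch below checks it for a random point on the circumsphere of the regular octahedron (n = 6, circumradius 1):

```python
import numpy as np

octahedron = np.array([[1, 0, 0], [-1, 0, 0],
                       [0, 1, 0], [0, -1, 0],
                       [0, 0, 1], [0, 0, -1]], dtype=float)
n = len(octahedron)

rng = np.random.default_rng(0)
M = rng.normal(size=3)
M /= np.linalg.norm(M)                      # random point on the unit circumsphere

d2 = ((octahedron - M) ** 2).sum(axis=1)    # squared distances MA_i^2
print(np.isclose(4 * d2.sum() ** 2, 3 * n * (d2 ** 2).sum()))  # True
```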
References
External links
Elementary geometry
Spheres | Circumscribed sphere | [
"Mathematics"
] | 711 | [
"Elementary mathematics",
"Elementary geometry"
] |
991,786 | https://en.wikipedia.org/wiki/Inscribed%20sphere | In geometry, the inscribed sphere or insphere of a convex polyhedron is a sphere that is contained within the polyhedron and tangent to each of the polyhedron's faces. It is the largest sphere that is contained wholly within the polyhedron, and is dual to the dual polyhedron's circumsphere.
The radius of the sphere inscribed in a polyhedron P is called the inradius of P.
Interpretations
All regular polyhedra have inscribed spheres, but most irregular polyhedra do not have all facets tangent to a common sphere, although it is still possible to define the largest contained sphere for such shapes. For such cases, the notion of an insphere does not seem to have been properly defined and various interpretations of an insphere are to be found:
The sphere tangent to all faces (if one exists).
The sphere tangent to all face planes (if one exists).
The sphere tangent to a given set of faces (if one exists).
The largest sphere that can fit inside the polyhedron (see the sketch at the end of this section).
Often these spheres coincide, leading to confusion as to exactly what properties define the insphere for polyhedra where they do not coincide.
For example, the regular small stellated dodecahedron has a sphere tangent to all faces, while a larger sphere can still be fitted inside the polyhedron. Which is the insphere? Important authorities such as Coxeter or Cundy & Rollett are clear enough that the face-tangent sphere is the insphere. Again, such authorities agree that the Archimedean polyhedra (having regular faces and equivalent vertices) have no inspheres while the Archimedean dual or Catalan polyhedra do have inspheres. But many authors fail to respect such distinctions and assume other definitions for the 'inspheres' of their polyhedra.
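The last of the four interpretations — the largest sphere that fits inside — is the Chebyshev center of a convex polyhedron, and it can be found by linear programming when the polyhedron is given as half-space inequalities Ax ≤ b. A minimal sketch using SciPy (convexity is assumed, and the face data must be supplied in this inequality form):

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_ball(A: np.ndarray, b: np.ndarray) -> tuple[np.ndarray, float]:
    """Center and radius of the largest ball inside {x : A x <= b}.

    A ball of radius r at center x fits iff a_i . x + r*||a_i|| <= b_i for
    every face i; these constraints are linear in (x, r), so maximize r.
    """
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                                  # maximize r = minimize -r
    bounds = [(None, None)] * A.shape[1] + [(0, None)]
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b, bounds=bounds)
    return res.x[:-1], res.x[-1]

# The unit cube [0, 1]^3 as six half-spaces:
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(chebyshev_ball(A, b))  # center [0.5 0.5 0.5], radius 0.5: the cube's insphere
```

For a regular polyhedron such as the cube this coincides with the face-tangent insphere; for shapes like the small stellated dodecahedron the two notions diverge, as described above.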
See also
Circumscribed sphere
Inscribed circle
Midsphere
Sphere packing
References
Coxeter, H.S.M. Regular Polytopes 3rd Edn. Dover (1973).
Cundy, H.M. and Rollett, A.P. Mathematical Models, 2nd Edn. OUP (1961).
External links
Elementary geometry
Polyhedra
Spheres | Inscribed sphere | [
"Mathematics"
] | 455 | [
"Elementary mathematics",
"Elementary geometry"
] |
991,849 | https://en.wikipedia.org/wiki/Hose | A hose is a flexible hollow tube or pipe designed to carry fluids from one location to another, often from a faucet or hydrant.
Early hoses were made of leather, although modern hoses are typically made of rubber, canvas, and helically wound wire. Hoses may also be made from plastics such as polyvinyl chloride, polytetrafluoroethylene, and polyethylene terephthalate, or from metals such as stainless steel.
See also
Heated hose
References
Further reading
Hydraulics | Hose | [
"Physics",
"Chemistry"
] | 110 | [
"Physical systems",
"Hydraulics",
"Fluid dynamics"
] |
991,868 | https://en.wikipedia.org/wiki/Torture%20murder | A torture murder is a murder where death was preceded by the torture of the victim. In many legal jurisdictions a murder involving "exceptional brutality or cruelty" will attract a harsher sentence.
Frequency
Lynching in the United States—extrajudicial killing by a mob, which often served as a means of racial terrorism—frequently involved public torture of the victim or victims, and was in many instances followed by human trophy collecting.
In the 21st century, many of the murders of foreigners in and citizens of Iraq and Syria committed by members of the terrorist organization Daesh have been preceded by torture. Film footage of the persecution of Muslims in Myanmar documents the aftermath and testimony of torture murder by government forces, and evidence has linked torture murder with many other massacres, war crimes, and genocides, both contemporary and historical.
Punishment
Murder laws worldwide vary a great deal, but a murder involving torture will generally attract a harsher penalty than a murder alone. Legal mechanisms of penalty enhancement vary between jurisdictions. In the laws of Italy, Germany, Norway, and many parts of the United States, there are two or more "degrees" of murder, with wording such as: "...inflicting torture upon the victim prior to the victim's death" typically used to rule that the highest degree should apply. In other jurisdictions, it may be that even if there was just one crime of murder, the sentencing practices and guidelines are such that the aggravating circumstance of any torture will nevertheless allow for a harsher than normal penalty, up to and including life imprisonment.
See also
Antisocial personality disorder
Sadistic personality disorder
Psychopathy
Robert Berdella
Gabriel Fernandez
References
murder
Murder
Killings by type
Harassment and bullying | Torture murder | [
"Biology"
] | 345 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
992,021 | https://en.wikipedia.org/wiki/James%20B.%20Pollack | James Barney Pollack (July 9, 1938 – June 13, 1994) was an American astrophysicist who worked for NASA's Ames Research Center.
Pollack was born on July 9, 1938, in New York City, and was brought up in Woodmere, Long Island, by a Jewish family that was in the women's garment business. He was valedictorian of his class at Lawrence High School and graduated from Princeton University in 1960. He then received his master's in nuclear physics at the University of California, Berkeley in 1962 and his Ph.D. from Harvard in 1965, where he was a student of Carl Sagan. He was openly gay. Dorion Sagan recounted how his father came to the defense of Pollack's partner over a problem with obtaining treatment at the university health service emergency department.
Pollack specialized in atmospheric science, especially the atmospheres of Mars and Venus. He investigated the possibility of terraforming Mars and the extinction of the dinosaurs, and from the 1980s studied the possibility of nuclear winter with Christopher McKay and Sagan. The work of Pollack et al. (1996) on the formation of giant planets (the "core accretion paradigm") is seen today as the standard model.
He explored the weather on Mars using data from the Mariner 9 spacecraft and the Viking mission. On this he based ground-breaking computer simulations of winds, storms, and the general climate on that planet. An overview of Pollack's scientific vita is given in the memorial talk "James B. Pollack: A Pioneer in Stardust to Planetesimals Research" held at an Astronomical Society of the Pacific 1996 symposium.
He was a recipient of the Gerard P. Kuiper Prize in 1989 for outstanding lifetime achievement in the field of planetary science. Pollack died at his home in California in 1994 from a rare form of spinal cancer, at age 55.
A crater on Mars was named in his honor.
References
External links
Short biography
1938 births
1994 deaths
American planetary scientists
University of California, Berkeley alumni
Harvard University alumni
Deaths from spinal cancer
Deaths from cancer in California
Neurological disease deaths in California
American LGBTQ scientists
LGBTQ people from New York (state)
People from Woodmere, New York
Lawrence High School (Cedarhurst, New York) alumni
Scientists from New York (state)
20th-century American LGBTQ people
LGBTQ physicists
LGBTQ astronomers | James B. Pollack | [
"Astronomy"
] | 482 | [
"Astronomers",
"LGBTQ astronomers"
] |